Generate all the benchmarks to run.

Instead of requiring a hand-maintained list of all the benchmarks,
add a programmatic way to generate them.

The generated benchmarks run in alphabetical order.

Add a new macro, BIONIC_BENCHMARK_WITH_ARG, that specifies the default
argument to pass to a benchmark. Change the benchmarks that require
default arguments to use it.
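
To illustrate, here is a minimal sketch of how such a macro and registry
might look. The map-based registry, the RegisterWithArg helper, and the
BM_string_memcpy example are assumptions for illustration only, not the
actual definitions in the benchmarks source; iterating a std::map is one
way to get the alphabetical run order described above.

  #include <cstddef>
  #include <cstring>
  #include <map>
  #include <string>
  #include <utility>

  #include <benchmark/benchmark.h>

  // Hypothetical registry keyed by benchmark name; iterating a
  // std::map yields the benchmarks in alphabetical order.
  using BenchmarkFn = void (*)(benchmark::State&);
  static std::map<std::string, std::pair<BenchmarkFn, std::string>>& Registry() {
    static std::map<std::string, std::pair<BenchmarkFn, std::string>> registry;
    return registry;
  }

  static int RegisterWithArg(const std::string& name, BenchmarkFn fn,
                             const std::string& arg) {
    Registry()[name] = std::make_pair(fn, arg);
    return 0;
  }

  // The macro records the benchmark along with its default argument
  // at static-initialization time.
  #define BIONIC_BENCHMARK_WITH_ARG(fn, arg) \
    [[maybe_unused]] static int _bionic_benchmark_##fn = \
        RegisterWithArg(#fn, fn, arg)

  // Example benchmark that sizes its buffers from the range argument.
  static void BM_string_memcpy(benchmark::State& state) {
    const size_t nbytes = state.range(0);
    std::string src(nbytes, 'x');
    std::string dst(nbytes, 'y');
    for (auto _ : state) {
      std::memcpy(&dst[0], src.data(), nbytes);
    }
  }
  BIONIC_BENCHMARK_WITH_ARG(BM_string_memcpy, "AT_COMMON_SIZES");
  // Translating the argument string into concrete sizes and handing the
  // registered functions to the benchmark runner is omitted here.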

Add a small example XML file, and remove the full.xml/host.xml files.

Update the README.

Test: Ran new unit tests, verified all tests are added.
Change-Id: I8036daeae7635393222a7a92d18f34119adba745
diff --git a/benchmarks/test_suites/test_from_each.xml b/benchmarks/test_suites/test_from_each.xml
index 0118365..bad18e7 100644
--- a/benchmarks/test_suites/test_from_each.xml
+++ b/benchmarks/test_suites/test_from_each.xml
@@ -1,5 +1,5 @@
 <fn>
-  <name>BM_empty</name>
+  <name>BM_atomic_empty</name>
 </fn>
 <fn>
   <name>BM_math_sqrt</name>