Generate all the benchmarks to run.

Instead of requiring a manually maintained list of all the
benchmarks, add a programmatic way to generate them.

Benchmarks generated this way run in alphabetical order.
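
A minimal sketch of how such a registry could look (illustrative
names only; benchmark_func_t, BenchmarkEntry, Registry,
RegisterBenchmark, and RunAllBenchmarks are not taken from this
change). Registration happens from static initializers, and because
std::map iterates its keys in sorted order, walking the registry runs
the benchmarks alphabetically for free:

  #include <map>
  #include <string>

  // Hypothetical shape of a benchmark: a function, optionally paired
  // with a default argument string such as "AT_COMMON_SIZES".
  typedef void (*benchmark_func_t)(void);

  struct BenchmarkEntry {
    benchmark_func_t fn;
    std::string default_args;
  };

  // std::map keeps its keys sorted, so iterating the registry visits
  // the benchmarks in alphabetical order.
  static std::map<std::string, BenchmarkEntry>& Registry() {
    static std::map<std::string, BenchmarkEntry> registry;
    return registry;
  }

  // Returns int so registration can run from a global initializer.
  static int RegisterBenchmark(const std::string& name,
                               benchmark_func_t fn,
                               const std::string& default_args = "") {
    Registry()[name] = BenchmarkEntry{fn, default_args};
    return 0;
  }

  // Runs every registered benchmark; no hand-maintained list needed.
  static void RunAllBenchmarks() {
    for (const auto& entry : Registry()) {
      entry.second.fn();
    }
  }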

Add a new macro BIONIC_BENCHMARK_WITH_ARG that specifies the default
argument to pass to the benchmark. Change the benchmarks that require
default arguments to use it.
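
Building on the hypothetical registry sketched above, the macro pair
might expand roughly as follows; the macro names come from this
change, but the expansion and the helper signature are assumptions:

  // Registers a benchmark with no default argument.
  #define BIONIC_BENCHMARK(name) \
    static int _benchmark_##name __attribute__((unused)) = \
        RegisterBenchmark(#name, name)

  // Registers a benchmark plus the argument string to use when
  // neither the command line nor a suite file supplies one.
  #define BIONIC_BENCHMARK_WITH_ARG(name, arg) \
    static int _benchmark_##name __attribute__((unused)) = \
        RegisterBenchmark(#name, name, arg)

  // Example: give BM_stdio_fread its common sizes by default.
  static void BM_stdio_fread(void) { /* benchmark body elided */ }
  BIONIC_BENCHMARK_WITH_ARG(BM_stdio_fread, "AT_COMMON_SIZES");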

Add a small example XML file, and remove the full.xml/host.xml files.

Update the README.

Test: Ran new unit tests, verified all tests are added.
Change-Id: I8036daeae7635393222a7a92d18f34119adba745
diff --git a/benchmarks/suites/example.xml b/benchmarks/suites/example.xml
new file mode 100644
index 0000000..51dd2ab
--- /dev/null
+++ b/benchmarks/suites/example.xml
@@ -0,0 +1,8 @@
+<!--
+ Small example file.
+-->
+<fn>
+  <name>BM_stdio_fread</name>
+  <iterations>20</iterations>
+  <args>AT_COMMON_SIZES</args>
+</fn>