Pagetable-friendly shared library address randomization.

Add inaccessible gaps between shared libraries to make it harder for
attackers to defeat ASLR by random probing.

To avoid excessive page table bloat, only insert a gap when a library
is about to cross a huge page boundary, effectively allowing several
smaller libraries to be lumped together.
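
As a rough sketch of the boundary check (illustrative only: the names
kHugePageSize and should_insert_gap are not part of this change, the
2 MiB huge page size is an assumption, and the actual reservation logic
lives in the linker's address-space layout code):

  #include <cstddef>
  #include <cstdint>

  // Assumed 2 MiB huge pages; the real value is platform-dependent.
  constexpr size_t kHugePageSize = 2 * 1024 * 1024;

  // Insert an inaccessible gap (e.g. a PROT_NONE mapping) before the next
  // library only when its mapping would cross a huge page boundary anyway;
  // otherwise keep small libraries packed so the gap does not force extra
  // page tables.
  static bool should_insert_gap(uintptr_t next_load_addr, size_t load_size) {
    uintptr_t first = next_load_addr / kHugePageSize;
    uintptr_t last = (next_load_addr + load_size - 1) / kHugePageSize;
    return first != last;
  }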

Bug: 158113540
Test: look at /proc/$$/maps
Change-Id: I39c0100b81f72447e8b3c6faafa561111492bf8c
diff --git a/linker/linker_soinfo.cpp b/linker/linker_soinfo.cpp
index 4f67003..60fd242 100644
--- a/linker/linker_soinfo.cpp
+++ b/linker/linker_soinfo.cpp
@@ -900,6 +900,24 @@
   g_soinfo_handles_map[handle_] = this;
 }
 
+void soinfo::set_gap_start(ElfW(Addr) gap_start) {
+  CHECK(has_min_version(6));
+  gap_start_ = gap_start;
+}
+ElfW(Addr) soinfo::get_gap_start() const {
+  CHECK(has_min_version(6));
+  return gap_start_;
+}
+
+void soinfo::set_gap_size(size_t gap_size) {
+  CHECK(has_min_version(6));
+  gap_size_ = gap_size;
+}
+size_t soinfo::get_gap_size() const {
+  CHECK(has_min_version(6));
+  return gap_size_;
+}
+
 // TODO(dimitry): Move SymbolName methods to a separate file.
 
 uint32_t calculate_elf_hash(const char* name) {