Re-land^2 linker support for MTE globals

The original reland was fine, but there was an app compat problem
because it incremented SOINFO_VERSION. That is really a problem with
the app, but it turns out we don't actually need to increment the
version anyway: the soinfo version is unused on aarch64, and
memtag-globals are aarch64-only.

The original patch (aosp/1845896) had a major bug: for out-of-bounds
RELR relocations, we'd unconditionally attempt to materialize the
address tag of the result. This caused crashes on non-MTE devices
(particularly the webview) when loading DSOs containing an OOB RELR
reloc.

This patch includes a fix: we only materialize the tag during RELR and
other relocations if the *result* is in a binary that uses MTE globals.
Note that this is not the same as the binary *containing* the
relocation using MTE globals (for RELR/RELATIVE relocations the binary
containing the relocation and the binary containing the result are the
same, but that's not the case for GLOB_DAT).

Other than that fix, this carries over the original patch: the bionic
code necessary for the linker to protect global data using MTE.

The implementation is described in the MemtagABI addendum to the
AArch64 ELF ABI:
https://github.com/ARM-software/abi-aa/blob/main/memtagabielf64/memtagabielf64.rst

In summary, this patch includes:

1. When MTE globals is requested, the linker maps writable SHF_ALLOC
   sections as anonymous pages with PROT_MTE (copying the file contents
   into the anonymous mapping), rather than using a file-backed private
   mapping. This is required as file-based mappings are not necessarily
   backed by the kernel with tag-capable memory. For sections already
   mapped by the kernel when the linker is invoked via PT_INTERP, we
   unmap the contents, remap a PROT_MTE+anonymous mapping in its place,
   and re-load the file contents from disk.

2. When MTE globals is requested, the linker tags areas of global memory
   (as defined in SHT_AARCH64_MEMTAG_GLOBALS_DYNAMIC) with random tags,
   ensuring that adjacent globals are never tagged using the same
   memory tag (to provide deterministic overflow detection).

3. Changes to RELATIVE, ABS64, and GLOB_DAT relocations to load and
   store tags in the right places. This ensures that the address tags are
   materialized into the GOT entries as well. These changes are a
   functional no-op to existing binaries and/or non-MTE capable hardware.

Bug: 315182011
Test: On both an MTE-enabled and non-MTE-enabled device:
Test: atest libprocinfo_test bionic-unit-tests bionic-unit-tests-static CtsGwpAsanTestCases gwp_asan_unittest debuggerd_test memtag_stack_dlopen_test
Change-Id: I75f2ca4a5a08550d80b6b0978fc1bcefff284505
diff --git a/linker/linker_soinfo.h b/linker/linker_soinfo.h
index 9bec2aa..c7407b5 100644
--- a/linker/linker_soinfo.h
+++ b/linker/linker_soinfo.h
@@ -30,6 +30,7 @@
 
 #include <link.h>
 
+#include <list>
 #include <memory>
 #include <string>
 #include <vector>
@@ -66,6 +67,7 @@
                                          // soinfo is executed and this flag is
                                          // unset.
 #define FLAG_PRELINKED        0x00000400 // prelink_image has successfully processed this soinfo
+#define FLAG_GLOBALS_TAGGED   0x00000800 // globals have been tagged by MTE.
 #define FLAG_NEW_SOINFO       0x40000000 // new soinfo format
 
 #define SOINFO_VERSION 6
@@ -257,6 +259,9 @@
                   const android_dlextinfo* extinfo, size_t* relro_fd_offset);
   bool protect_relro();
 
+  void tag_globals();
+  ElfW(Addr) apply_memtag_if_mte_globals(ElfW(Addr) sym_addr) const;
+
   void add_child(soinfo* child);
   void remove_all_links();
 
@@ -295,7 +300,7 @@
 #else
     // If you make this return non-true in the case where
     // __work_around_b_24465209__ is not defined, you will have to change
-    // memtag_dynamic_entries.
+    // memtag_dynamic_entries() and vma_names().
     return true;
 #endif
   }
@@ -394,6 +399,18 @@
    should_pad_segments_ = should_pad_segments;
   }
   bool should_pad_segments() const { return should_pad_segments_; }
+  bool should_tag_memtag_globals() const {
+    return !is_linker() && memtag_globals() && memtag_globalssz() > 0 && __libc_mte_enabled();
+  }
+  std::list<std::string>* vma_names() {
+#ifdef __aarch64__
+#ifdef __work_around_b_24465209__
+#error "Assuming aarch64 does not use versioned soinfo."
+#endif
+    return &vma_names_;
+#endif
+    return nullptr;
+};
 
   void set_should_use_16kib_app_compat(bool should_use_16kib_app_compat) {
     should_use_16kib_app_compat_ = should_use_16kib_app_compat;
@@ -489,6 +506,7 @@
 
   // __aarch64__ only, which does not use versioning.
   memtag_dynamic_entries_t memtag_dynamic_entries_;
+  std::list<std::string> vma_names_;
 
   // Pad gaps between segments when memory mapping?
   bool should_pad_segments_ = false;