vmbase: Only lazy-map MMIO if hyp has MMIO_GUARD

Instead of MMIO_GUARD_MAP-ing all MMIO pages described in the DT (e.g.
for PCI), we lazy-map them in the MMU and only MMIO_GUARD_MAP them in
the fault handler. This greatly reduces the number of pages that get
mapped and unmapped through the hypervisor.
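
To make the mechanism concrete, here is a minimal, self-contained
sketch of the lazy-MMIO flow using a toy page table. All names
(ToyPageTable, MmioState, GUARD_GRANULE) are hypothetical stand-ins,
not the vmbase API; the comment marks where the real MMIO_GUARD_MAP
hypercall would be issued.

    use std::collections::HashMap;

    const GUARD_GRANULE: usize = 0x1000; // assume a 4 KiB guard granule

    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    enum MmioState {
        Lazy,   // described in the DT, not yet guard-mapped
        Mapped, // guard-mapped after the first access faulted
    }

    #[derive(Default)]
    struct ToyPageTable {
        pages: HashMap<usize, MmioState>,
    }

    impl ToyPageTable {
        /// Record the MMIO range without calling MMIO_GUARD_MAP yet.
        fn map_device_lazy(&mut self, base: usize, size: usize) {
            for page in (base..base + size).step_by(GUARD_GRANULE) {
                self.pages.insert(page, MmioState::Lazy);
            }
        }

        /// Fault handler: guard-map only the granule that was touched.
        fn handle_mmio_fault(&mut self, addr: usize) -> Result<(), &'static str> {
            let page = addr & !(GUARD_GRANULE - 1);
            match self.pages.get_mut(&page) {
                Some(state) => {
                    if *state == MmioState::Lazy {
                        // The real code would issue the MMIO_GUARD_MAP
                        // hypercall here before making the PTE valid.
                        *state = MmioState::Mapped;
                    }
                    Ok(())
                }
                None => Err("fault outside any registered MMIO region"),
            }
        }
    }

    fn main() {
        let mut pt = ToyPageTable::default();
        pt.map_device_lazy(0x9000_0000, 4 * GUARD_GRANULE); // e.g. a PCI BAR
        pt.handle_mmio_fault(0x9000_1234).unwrap();
        // Only the granule that was actually accessed got guard-mapped.
        assert_eq!(pt.pages[&0x9000_1000], MmioState::Mapped);
        assert_eq!(pt.pages[&0x9000_0000], MmioState::Lazy);
    }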

However, there is no point in doing so on hypervisors that don't
support MMIO_GUARD, and the current architecture forces their backends
to provide stub mmio_guard_{,un}map() functions. Instead, create valid
mappings for those MMIO pages from the start: this removes the need
for the stubs (addressed in a later commit) and keeps the fault
handler out of the picture on those platforms, in turn reducing the
number of page table updates and potentially improving performance.
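
Continuing the toy model above, the new decision point can be sketched
as follows; has_mmio_guard stands in for
get_hypervisor().has_cap(HypervisorCap::MMIO_GUARD), and map_device is
a hypothetical eager counterpart of map_device_lazy:

    impl ToyPageTable {
        /// Eager path for hypervisors without MMIO_GUARD: the mapping is
        /// valid from the start, so handle_mmio_fault() is never reached.
        fn map_device(&mut self, base: usize, size: usize) {
            for page in (base..base + size).step_by(GUARD_GRANULE) {
                self.pages.insert(page, MmioState::Mapped);
            }
        }
    }

    fn map_mmio_region(pt: &mut ToyPageTable, base: usize, size: usize,
                       has_mmio_guard: bool) {
        if has_mmio_guard {
            // Defer MMIO_GUARD_MAP to the fault handler, one granule at a time.
            pt.map_device_lazy(base, size);
        } else {
            // No MMIO_GUARD: map the device pages up front.
            pt.map_device(base, size);
        }
    }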

No functional change intended on hypervisors that support MMIO_GUARD.

Test: atest DebugPolicyHostTests#testNoAdbInDebugPolicy_withDebugLevelNone_boots
Test: atest rialto_test vmbase_example.integration_test
Change-Id: Idde59d24c8bf848eac0ae5c38b3f7dede47863e3
diff --git a/vmbase/src/memory/shared.rs b/vmbase/src/memory/shared.rs
index f07178e..8c80618 100644
--- a/vmbase/src/memory/shared.rs
+++ b/vmbase/src/memory/shared.rs
@@ -179,10 +179,17 @@
             return Err(MemoryTrackerError::Full);
         }
 
-        self.page_table.map_device_lazy(&get_va_range(&range)).map_err(|e| {
-            error!("Error during MMIO device mapping: {e}");
-            MemoryTrackerError::FailedToMap
-        })?;
+        if get_hypervisor().has_cap(HypervisorCap::MMIO_GUARD) {
+            self.page_table.map_device_lazy(&get_va_range(&range)).map_err(|e| {
+                error!("Error during lazy MMIO device mapping: {e}");
+                MemoryTrackerError::FailedToMap
+            })?;
+        } else {
+            self.page_table.map_device(&get_va_range(&range)).map_err(|e| {
+                error!("Error during MMIO device mapping: {e}");
+                MemoryTrackerError::FailedToMap
+            })?;
+        }
 
         if self.mmio_regions.try_push(range).is_some() {
             return Err(MemoryTrackerError::Full);
@@ -219,10 +226,12 @@
     ///
     /// Note that they are not unmapped from the page table.
     pub fn mmio_unmap_all(&mut self) -> Result<()> {
-        for range in &self.mmio_regions {
-            self.page_table
-                .modify_range(&get_va_range(range), &mmio_guard_unmap_page)
-                .map_err(|_| MemoryTrackerError::FailedToUnmap)?;
+        if get_hypervisor().has_cap(HypervisorCap::MMIO_GUARD) {
+            for range in &self.mmio_regions {
+                self.page_table
+                    .modify_range(&get_va_range(range), &mmio_guard_unmap_page)
+                    .map_err(|_| MemoryTrackerError::FailedToUnmap)?;
+            }
         }
         Ok(())
     }
@@ -287,6 +296,7 @@
     pub fn handle_mmio_fault(&mut self, addr: VirtualAddress) -> Result<()> {
         let page_start = VirtualAddress(page_4kb_of(addr.0));
         let page_range: VaRange = (page_start..page_start + MMIO_GUARD_GRANULE_SIZE).into();
+        assert!(get_hypervisor().has_cap(HypervisorCap::MMIO_GUARD));
         self.page_table
             .modify_range(&page_range, &verify_lazy_mapped_block)
             .map_err(|_| MemoryTrackerError::InvalidPte)?;
@@ -475,6 +485,7 @@
         // Since mmio_guard_map takes IPAs, if pvmfw moves non-ID address mapping, page_base
         // should be converted to IPA. However, since 0x0 is a valid MMIO address, we don't use
         // virt_to_phys here, and just pass page_base instead.
+        assert!(get_hypervisor().has_cap(HypervisorCap::MMIO_GUARD));
         get_hypervisor().mmio_guard_unmap(page_base).map_err(|e| {
             error!("Error MMIO guard unmapping: {e}");
         })?;