Map DBM data region down to pages

The purpose of DBM data regions is to keep track of updates, so that
only those parts that were actually modified by the program need to be
written back. This is implemented either using hardware DBM (where the
page table walker updates the descriptor directly on the first store)
or via the exception handler, which clears the READ_ONLY attribute when
taking a write permission fault on a read-only region that has the DBM
flag set.
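
For reference, "writable-clean" in this scheme is simply a leaf
descriptor that has both READ_ONLY and DBM set. A minimal sketch of
that attribute combination using the aarch64_paging flags (the set of
base attributes shown is illustrative, not the actual vmbase DATA_DBM
constant):

    use aarch64_paging::paging::Attributes;

    /// Writable-clean normal data mapping: the MMU treats the page as
    /// read-only until either hardware DBM rewrites the descriptor on
    /// the first store, or the permission fault handler clears
    /// READ_ONLY in software.
    fn data_dbm_attributes() -> Attributes {
        Attributes::VALID
            | Attributes::NON_GLOBAL
            | Attributes::READ_ONLY
            | Attributes::DBM
    }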

Currently, the DBM mappings of data regions may consist of block
mappings, if the size and placement of the region permit it. This means
that a single store to such a block marks the entire 2 MiB window as
dirty and subject to writeback, which defeats the purpose of dirty
state tracking.

It also creates potential problems with the updated behavior of
modify_range() in the aarch64_paging crate, which splits ranges rather
than handing out mutable references to live block descriptors to the
users of the API. Since such a split replaces a live block descriptor,
calling modify_range() on a live translation could result in
break-before-make (BBM) violations.
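
As context for the BBM concern, the software dirty-tracking path calls
modify_range() from the write permission fault handler to clear
READ_ONLY on the faulting page. A rough sketch of such an updater,
assuming the current modify_range() callback signature (function names
are simplified, not the actual vmbase handler):

    use aarch64_paging::idmap::IdMap;
    use aarch64_paging::paging::{Attributes, Descriptor, MemoryRegion};
    use aarch64_paging::MapError;

    /// Marks the faulting page dirty by dropping READ_ONLY from its
    /// descriptor. Only descriptors carrying the DBM flag qualify;
    /// with the region mapped down to pages, this never touches a
    /// live block descriptor.
    fn mark_dirty(
        _range: &MemoryRegion,
        desc: &mut Descriptor,
        _level: usize,
    ) -> Result<(), ()> {
        let flags = desc.flags().ok_or(())?;
        if flags.contains(Attributes::DBM) {
            // Writable-clean -> writable-dirty, updated in place.
            desc.modify_flags(Attributes::empty(), Attributes::READ_ONLY);
            Ok(())
        } else {
            Err(())
        }
    }

    /// Invoked from the write permission fault handler for `page`.
    fn handle_write_fault(
        idmap: &mut IdMap,
        page: &MemoryRegion,
    ) -> Result<(), MapError> {
        idmap.modify_range(page, &mark_dirty)
    }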

These problems all go away if we simply map the DBM regions down to
pages from the start. This way, modify_range() will never result in a
split, and updating the read-only attribute is always BBM-safe. It also
ensures that the amount of data requiring writeback is minimized.

Test: build tested only
Change-Id: I212e39d4fbbd6a65fb1544df9590ca7d9afb8a14
diff --git a/vmbase/src/memory/page_table.rs b/vmbase/src/memory/page_table.rs
index dab801a..ad164b4 100644
--- a/vmbase/src/memory/page_table.rs
+++ b/vmbase/src/memory/page_table.rs
@@ -16,7 +16,7 @@
 
 use crate::read_sysreg;
 use aarch64_paging::idmap::IdMap;
-use aarch64_paging::paging::{Attributes, Descriptor, MemoryRegion};
+use aarch64_paging::paging::{Attributes, Constraints, Descriptor, MemoryRegion};
 use aarch64_paging::MapError;
 use core::result;
 
@@ -107,7 +107,15 @@
     /// Maps the given range of virtual addresses to the physical addresses as non-executable,
     /// read-only and writable-clean normal memory.
     pub fn map_data_dbm(&mut self, range: &MemoryRegion) -> Result<()> {
-        self.idmap.map_range(range, DATA_DBM)
+        // Map the region down to pages to minimize the size of the regions that will be marked
+        // dirty once a store hits them, but also to ensure that we can clear the read-only
+        // attribute while the mapping is live without causing break-before-make (BBM) violations.
+        // The latter implies that we must avoid the use of the contiguous hint as well.
+        self.idmap.map_range_with_constraints(
+            range,
+            DATA_DBM,
+            Constraints::NO_BLOCK_MAPPINGS | Constraints::NO_CONTIGUOUS_HINT,
+        )
     }
 
     /// Maps the given range of virtual addresses to the physical addresses as read-only