Merge changes I9d684029,Iaedc0925,I8d4c8b4f,I93b17a87,I822116f5 into main
* changes:
pvmfw: Pre-populate pviommu nodes
libfdt: Add Fdt::node_mut_with_phandle()
libfdt: Fix bugs in comparing two lists in tests
pvmfw: Call patch() in patch test
libfdt: Add FdtNode::get_phandle()
diff --git a/javalib/README.md b/javalib/README.md
index 4eb64df..cf7a6cb 100644
--- a/javalib/README.md
+++ b/javalib/README.md
@@ -15,6 +15,9 @@
`android.permission.MANAGE_VIRTUAL_MACHINE` permission, so they are not
available to third party apps.
+All of these APIs were introduced in API level 34 (Android 14). The classes may
+not exist on devices running an earlier version.
+
## Detecting AVF Support
The simplest way to detect whether a device has support for AVF is to retrieve
@@ -22,7 +25,7 @@
[`VirtualMachineManager`](src/android/system/virtualmachine/VirtualMachineManager.java)
class; if the result is not `null` then the device has support. You can then
find out whether protected, non-protected VMs, or both are supported using the
-`getCapabilities()` method:
+`getCapabilities()` method. Note that this code requires API level 34 or higher:
```Java
VirtualMachineManager vmm = context.getSystemService(VirtualMachineManager.class);
@@ -41,7 +44,8 @@
```
An alternative for detecting AVF support is to query support for the
-`android.software.virtualization_framework` system feature:
+`android.software.virtualization_framework` system feature. This method works
+on any API level, and returns `false` if it is below 34:
```Java
if (getPackageManager().hasSystemFeature(PackageManager.FEATURE_VIRTUALIZATION_FRAMEWORK)) {
@@ -116,7 +120,9 @@
reached - but there is some overhead proportional to the maximum size.)
- How many virtual CPUs the VM has.
- How much encrypted storage the VM has.
-- The path to the installed APK containing the code to run as the VM payload.
+- The path to the installed APK containing the code to run as the VM
+ payload. (Normally you don't need this; the APK path is determined from the
+ context passed to the config builder.)
## VM Life-cycle
@@ -244,7 +250,7 @@
### Binder
-The use of AIDL interfaces between the VM and app is support via Binder RPC,
+The use of AIDL interfaces between the VM and app is supported via Binder RPC,
which transmits messages over an underlying vsock socket.
Note that Binder RPC has some limitations compared to the kernel Binder used in
@@ -304,10 +310,12 @@
which includes the payload's exit code.
Use of `stop()` should be reserved as a recovery mechanism - for example if the
-VM has not stopped within a reasonable time after being requested to.
+VM has not stopped within a reasonable time (a few seconds, say) after being
+requested to.
-The status of a VM will be `STATUS_STOPPED` after a successful call to `stop()`,
-or if your `onPayloadStopped()` callback is invoked.
+The status of a VM will be `STATUS_STOPPED` if your `onStopped()` callback is
+invoked, or after a successful call to `stop()`. Note that your `onStopped()`
+callback will be invoked even if the VM ended as a result of a call to `stop()`.
# Encrypted Storage
@@ -333,9 +341,8 @@
# Transferring a VM
-It is possible to make a copy of a VM instance with a new name. This can be used
-to transfer a VM from one app to another, which can be useful in some
-circumstances.
+It is possible to make a copy of a VM instance. This can be used to transfer a
+VM from one app to another, which can be useful in some circumstances.
This should only be done while the VM is stopped. The first step is to call
`toDescriptor()` on the
diff --git a/microdroid/init.rc b/microdroid/init.rc
index f5f3f15..4cc0475 100644
--- a/microdroid/init.rc
+++ b/microdroid/init.rc
@@ -30,6 +30,11 @@
# We don't directly exec the binary to specify stdio_to_kmsg.
exec_start init_debug_policy
+ # Wait for ueventd to have finished cold boot.
+ # This is needed by prng-seeder (at least).
+ # (In Android this happens inside apexd-bootstrap.)
+ wait_for_prop ro.cold_boot_done true
+
on init
mkdir /mnt/apk 0755 root root
mkdir /mnt/extra-apk 0755 root root
diff --git a/microdroid_manager/src/main.rs b/microdroid_manager/src/main.rs
index 7ba54f8..1b41e58 100644
--- a/microdroid_manager/src/main.rs
+++ b/microdroid_manager/src/main.rs
@@ -34,7 +34,7 @@
use crate::dice::dice_derivation;
use crate::dice_driver::DiceDriver;
-use crate::instance::{ApexData, InstanceDisk, MicrodroidData};
+use crate::instance::{InstanceDisk, MicrodroidData};
use crate::verify::verify_payload;
use crate::vm_payload_service::register_vm_payload_service;
use anyhow::{anyhow, bail, ensure, Context, Error, Result};
@@ -42,10 +42,10 @@
use keystore2_crypto::ZVec;
use libc::VMADDR_CID_HOST;
use log::{error, info};
-use microdroid_metadata::{write_metadata, PayloadMetadata};
+use microdroid_metadata::PayloadMetadata;
use microdroid_payload_config::{OsConfig, Task, TaskType, VmPayloadConfig};
use nix::sys::signal::Signal;
-use payload::{load_metadata, to_metadata};
+use payload::load_metadata;
use rpcbinder::RpcSession;
use rustutils::sockets::android_get_control_socket;
use rustutils::system_properties;
@@ -143,15 +143,6 @@
Ok(())
}
-fn get_vms_rpc_binder() -> Result<Strong<dyn IVirtualMachineService>> {
- // The host is running a VirtualMachineService for this VM on a port equal
- // to the CID of this VM.
- let port = vsock::get_local_cid().context("Could not determine local CID")?;
- RpcSession::new()
- .setup_vsock_client(VMADDR_CID_HOST, port)
- .context("Could not connect to IVirtualMachineService")
-}
-
fn main() -> Result<()> {
// If debuggable, print full backtrace to console log with stdio_to_kmsg
if is_debuggable()? {
@@ -174,25 +165,6 @@
})
}
-/// Prepares a socket file descriptor for the vm payload service.
-///
-/// # Safety
-///
-/// The caller must ensure that this function is the only place that claims ownership
-/// of the file descriptor and it is called only once.
-unsafe fn prepare_vm_payload_service_socket() -> Result<OwnedFd> {
- let raw_fd = android_get_control_socket(VM_PAYLOAD_SERVICE_SOCKET_NAME)?;
-
- // Creating OwnedFd for stdio FDs is not safe.
- if [libc::STDIN_FILENO, libc::STDOUT_FILENO, libc::STDERR_FILENO].contains(&raw_fd) {
- bail!("File descriptor {raw_fd} is standard I/O descriptor");
- }
- // SAFETY: Initializing OwnedFd for a RawFd created by the init.
- // We checked that the integer value corresponds to a valid FD and that the caller
- // ensures that this is the only place to claim its ownership.
- Ok(unsafe { OwnedFd::from_raw_fd(raw_fd) })
-}
-
fn try_main() -> Result<()> {
android_logger::init_once(
android_logger::Config::default()
@@ -245,71 +217,6 @@
}
}
-fn post_payload_work() -> Result<()> {
- // Sync the encrypted storage filesystem (flushes the filesystem caches).
- if Path::new(ENCRYPTEDSTORE_BACKING_DEVICE).exists() {
- let mountpoint = CString::new(ENCRYPTEDSTORE_MOUNTPOINT).unwrap();
-
- // SAFETY: `mountpoint` is a valid C string. `syncfs` and `close` are safe for any parameter
- // values.
- let ret = unsafe {
- let dirfd = libc::open(
- mountpoint.as_ptr(),
- libc::O_DIRECTORY | libc::O_RDONLY | libc::O_CLOEXEC,
- );
- ensure!(dirfd >= 0, "Unable to open {:?}", mountpoint);
- let ret = libc::syncfs(dirfd);
- libc::close(dirfd);
- ret
- };
- if ret != 0 {
- error!("failed to sync encrypted storage.");
- return Err(anyhow!(std::io::Error::last_os_error()));
- }
- }
- Ok(())
-}
-
-fn is_strict_boot() -> bool {
- Path::new(AVF_STRICT_BOOT).exists()
-}
-
-fn is_new_instance() -> bool {
- Path::new(AVF_NEW_INSTANCE).exists()
-}
-
-fn is_verified_boot() -> bool {
- !Path::new(DEBUG_MICRODROID_NO_VERIFIED_BOOT).exists()
-}
-
-fn is_debuggable() -> Result<bool> {
- Ok(system_properties::read_bool(DEBUGGABLE_PROP, true)?)
-}
-
-fn should_export_tombstones(config: &VmPayloadConfig) -> bool {
- match config.export_tombstones {
- Some(b) => b,
- None => is_debuggable().unwrap_or(false),
- }
-}
-
-/// Get debug policy value in bool. It's true iff the value is explicitly set to <1>.
-fn get_debug_policy_bool(path: &'static str) -> Result<Option<bool>> {
- let mut file = match File::open(path) {
- Ok(dp) => dp,
- Err(e) => {
- info!(
- "Assumes that debug policy is disabled because failed to read debug policy ({e:?})"
- );
- return Ok(Some(false));
- }
- };
- let mut log: [u8; 4] = Default::default();
- file.read_exact(&mut log).context("Malformed data in {path}")?;
- // DT spec uses big endian although Android is always little endian.
- Ok(Some(u32::from_be_bytes(log) == 1))
-}
-
fn try_run_payload(
service: &Strong<dyn IVirtualMachineService>,
vm_payload_service_fd: OwnedFd,
@@ -377,6 +284,13 @@
let dice_artifacts = dice_derivation(dice, &verified_data, &payload_metadata)?;
let vm_secret = VmSecret::new(dice_artifacts).context("Failed to create VM secrets")?;
+ if cfg!(dice_changes) {
+ // Now that the DICE derivation is done, it's ok to allow payload code to run.
+
+ // Start apexd to activate APEXes. This may allow code within them to run.
+ system_properties::write("ctl.start", "apexd-vm")?;
+ }
+
// Run encryptedstore binary to prepare the storage
let encryptedstore_child = if Path::new(ENCRYPTEDSTORE_BACKING_DEVICE).exists() {
info!("Preparing encryptedstore ...");
@@ -419,10 +333,12 @@
);
mount_extra_apks(&config, &mut zipfuse)?;
- // Wait until apex config is done. (e.g. linker configuration for apexes)
- wait_for_apex_config_done()?;
-
- setup_config_sysprops(&config)?;
+ register_vm_payload_service(
+ allow_restricted_apis,
+ service.clone(),
+ vm_secret,
+ vm_payload_service_fd,
+ )?;
// Set export_tombstones if enabled
if should_export_tombstones(&config) {
@@ -431,16 +347,20 @@
.context("set microdroid_manager.export_tombstones.enabled")?;
}
+ // Wait until apex config is done. (e.g. linker configuration for apexes)
+ wait_for_property_true(APEX_CONFIG_DONE_PROP).context("Failed waiting for apex config done")?;
+
+ // Trigger init post-fs-data. This will start authfs if we ask it to.
+ if config.enable_authfs {
+ system_properties::write("microdroid_manager.authfs.enabled", "1")
+ .context("failed to write microdroid_manager.authfs.enabled")?;
+ }
+ system_properties::write("microdroid_manager.config_done", "1")
+ .context("failed to write microdroid_manager.config_done")?;
+
// Wait until zipfuse has mounted the APKs so we can access the payload
zipfuse.wait_until_done()?;
- register_vm_payload_service(
- allow_restricted_apis,
- service.clone(),
- vm_secret,
- vm_payload_service_fd,
- )?;
-
// Wait for encryptedstore to finish mounting the storage (if enabled) before setting
// microdroid_manager.init_done. Reason is init stops uneventd after that.
// Encryptedstore, however requires ueventd
@@ -449,7 +369,10 @@
ensure!(exitcode.success(), "Unable to prepare encrypted storage. Exitcode={}", exitcode);
}
+ // Wait for init to have finished booting.
wait_for_property_true("dev.bootcomplete").context("failed waiting for dev.bootcomplete")?;
+
+ // And then tell it we're done so unnecessary services can be shut down.
system_properties::write("microdroid_manager.init_done", "1")
.context("set microdroid_manager.init_done")?;
@@ -457,6 +380,120 @@
exec_task(task, service).context("Failed to run payload")
}
+fn post_payload_work() -> Result<()> {
+ // Sync the encrypted storage filesystem (flushes the filesystem caches).
+ if Path::new(ENCRYPTEDSTORE_BACKING_DEVICE).exists() {
+ let mountpoint = CString::new(ENCRYPTEDSTORE_MOUNTPOINT).unwrap();
+
+ // SAFETY: `mountpoint` is a valid C string. `syncfs` and `close` are safe for any parameter
+ // values.
+ let ret = unsafe {
+ let dirfd = libc::open(
+ mountpoint.as_ptr(),
+ libc::O_DIRECTORY | libc::O_RDONLY | libc::O_CLOEXEC,
+ );
+ ensure!(dirfd >= 0, "Unable to open {:?}", mountpoint);
+ let ret = libc::syncfs(dirfd);
+ libc::close(dirfd);
+ ret
+ };
+ if ret != 0 {
+ error!("failed to sync encrypted storage.");
+ return Err(anyhow!(std::io::Error::last_os_error()));
+ }
+ }
+ Ok(())
+}
+
+fn mount_extra_apks(config: &VmPayloadConfig, zipfuse: &mut Zipfuse) -> Result<()> {
+ // For now, only the number of apks is important, as the mount point and dm-verity name is fixed
+ for i in 0..config.extra_apks.len() {
+ let mount_dir = format!("/mnt/extra-apk/{i}");
+ create_dir(Path::new(&mount_dir)).context("Failed to create mount dir for extra apks")?;
+
+ let mount_for_exec =
+ if cfg!(multi_tenant) { MountForExec::Allowed } else { MountForExec::Disallowed };
+ // These run asynchronously in parallel - we wait later for them to complete.
+ zipfuse.mount(
+ mount_for_exec,
+ "fscontext=u:object_r:zipfusefs:s0,context=u:object_r:extra_apk_file:s0",
+ Path::new(&format!("/dev/block/mapper/extra-apk-{i}")),
+ Path::new(&mount_dir),
+ format!("microdroid_manager.extra_apk.mounted.{i}"),
+ )?;
+ }
+
+ Ok(())
+}
+
+fn get_vms_rpc_binder() -> Result<Strong<dyn IVirtualMachineService>> {
+ // The host is running a VirtualMachineService for this VM on a port equal
+ // to the CID of this VM.
+ let port = vsock::get_local_cid().context("Could not determine local CID")?;
+ RpcSession::new()
+ .setup_vsock_client(VMADDR_CID_HOST, port)
+ .context("Could not connect to IVirtualMachineService")
+}
+
+/// Prepares a socket file descriptor for the vm payload service.
+///
+/// # Safety
+///
+/// The caller must ensure that this function is the only place that claims ownership
+/// of the file descriptor and it is called only once.
+unsafe fn prepare_vm_payload_service_socket() -> Result<OwnedFd> {
+ let raw_fd = android_get_control_socket(VM_PAYLOAD_SERVICE_SOCKET_NAME)?;
+
+ // Creating OwnedFd for stdio FDs is not safe.
+ if [libc::STDIN_FILENO, libc::STDOUT_FILENO, libc::STDERR_FILENO].contains(&raw_fd) {
+ bail!("File descriptor {raw_fd} is standard I/O descriptor");
+ }
+ // SAFETY: Initializing OwnedFd for a RawFd created by the init.
+ // We checked that the integer value corresponds to a valid FD and that the caller
+ // ensures that this is the only place to claim its ownership.
+ Ok(unsafe { OwnedFd::from_raw_fd(raw_fd) })
+}
+
+fn is_strict_boot() -> bool {
+ Path::new(AVF_STRICT_BOOT).exists()
+}
+
+fn is_new_instance() -> bool {
+ Path::new(AVF_NEW_INSTANCE).exists()
+}
+
+fn is_verified_boot() -> bool {
+ !Path::new(DEBUG_MICRODROID_NO_VERIFIED_BOOT).exists()
+}
+
+fn is_debuggable() -> Result<bool> {
+ Ok(system_properties::read_bool(DEBUGGABLE_PROP, true)?)
+}
+
+fn should_export_tombstones(config: &VmPayloadConfig) -> bool {
+ match config.export_tombstones {
+ Some(b) => b,
+ None => is_debuggable().unwrap_or(false),
+ }
+}
+
+/// Get debug policy value in bool. It's true iff the value is explicitly set to <1>.
+fn get_debug_policy_bool(path: &'static str) -> Result<Option<bool>> {
+ let mut file = match File::open(path) {
+ Ok(dp) => dp,
+ Err(e) => {
+ info!(
+ "Assumes that debug policy is disabled because failed to read debug policy ({e:?})"
+ );
+ return Ok(Some(false));
+ }
+ };
+ let mut log: [u8; 4] = Default::default();
+ file.read_exact(&mut log).context("Malformed data in {path}")?;
+ // DT spec uses big endian although Android is always little endian.
+ Ok(Some(u32::from_be_bytes(log) == 1))
+}
+
enum MountForExec {
Allowed,
Disallowed,
@@ -504,65 +541,6 @@
}
}
-fn write_apex_payload_data(
- saved_data: Option<&MicrodroidData>,
- apex_data_from_payload: &[ApexData],
-) -> Result<()> {
- if let Some(saved_apex_data) = saved_data.map(|d| &d.apex_data) {
- // We don't support APEX updates. (assuming that update will change root digest)
- ensure!(
- saved_apex_data == apex_data_from_payload,
- MicrodroidError::PayloadChanged(String::from("APEXes have changed."))
- );
- let apex_metadata = to_metadata(apex_data_from_payload);
- // Pass metadata(with public keys and root digests) to apexd so that it uses the passed
- // metadata instead of the default one (/dev/block/by-name/payload-metadata)
- OpenOptions::new()
- .create_new(true)
- .write(true)
- .open("/apex/vm-payload-metadata")
- .context("Failed to open /apex/vm-payload-metadata")
- .and_then(|f| write_metadata(&apex_metadata, f))?;
- }
- Ok(())
-}
-
-fn mount_extra_apks(config: &VmPayloadConfig, zipfuse: &mut Zipfuse) -> Result<()> {
- // For now, only the number of apks is important, as the mount point and dm-verity name is fixed
- for i in 0..config.extra_apks.len() {
- let mount_dir = format!("/mnt/extra-apk/{i}");
- create_dir(Path::new(&mount_dir)).context("Failed to create mount dir for extra apks")?;
-
- let mount_for_exec =
- if cfg!(multi_tenant) { MountForExec::Allowed } else { MountForExec::Disallowed };
- // These run asynchronously in parallel - we wait later for them to complete.
- zipfuse.mount(
- mount_for_exec,
- "fscontext=u:object_r:zipfusefs:s0,context=u:object_r:extra_apk_file:s0",
- Path::new(&format!("/dev/block/mapper/extra-apk-{i}")),
- Path::new(&mount_dir),
- format!("microdroid_manager.extra_apk.mounted.{i}"),
- )?;
- }
-
- Ok(())
-}
-
-fn setup_config_sysprops(config: &VmPayloadConfig) -> Result<()> {
- if config.enable_authfs {
- system_properties::write("microdroid_manager.authfs.enabled", "1")
- .context("failed to write microdroid_manager.authfs.enabled")?;
- }
- system_properties::write("microdroid_manager.config_done", "1")
- .context("failed to write microdroid_manager.config_done")?;
- Ok(())
-}
-
-// Waits until linker config is generated
-fn wait_for_apex_config_done() -> Result<()> {
- wait_for_property_true(APEX_CONFIG_DONE_PROP).context("Failed waiting for apex config done")
-}
-
fn wait_for_property_true(property_name: &str) -> Result<()> {
let mut prop = PropertyWatcher::new(property_name)?;
loop {
diff --git a/microdroid_manager/src/verify.rs b/microdroid_manager/src/verify.rs
index 22f3414..e63530b 100644
--- a/microdroid_manager/src/verify.rs
+++ b/microdroid_manager/src/verify.rs
@@ -12,18 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use crate::instance::{ApkData, MicrodroidData, RootHash};
-use crate::payload::get_apex_data_from_payload;
-use crate::{is_strict_boot, is_verified_boot, write_apex_payload_data, MicrodroidError};
+use crate::instance::{ApexData, ApkData, MicrodroidData, RootHash};
+use crate::payload::{get_apex_data_from_payload, to_metadata};
+use crate::{is_strict_boot, is_verified_boot, MicrodroidError};
use anyhow::{anyhow, ensure, Context, Result};
use apkmanifest::get_manifest_info;
use apkverify::{get_public_key_der, verify, V4Signature};
use glob::glob;
use itertools::sorted;
use log::{info, warn};
-use microdroid_metadata::Metadata;
+use microdroid_metadata::{write_metadata, Metadata};
use rand::Fill;
use rustutils::system_properties;
+use std::fs::OpenOptions;
use std::path::Path;
use std::process::{Child, Command};
use std::str;
@@ -134,8 +135,10 @@
write_apex_payload_data(saved_data, &apex_data_from_payload)?;
}
- // Start apexd to activate APEXes
- system_properties::write("ctl.start", "apexd-vm")?;
+ if cfg!(not(dice_changes)) {
+ // Start apexd to activate APEXes
+ system_properties::write("ctl.start", "apexd-vm")?;
+ }
// TODO(inseob): add timeout
apkdmverity_child.wait()?;
@@ -207,6 +210,29 @@
})
}
+fn write_apex_payload_data(
+ saved_data: Option<&MicrodroidData>,
+ apex_data_from_payload: &[ApexData],
+) -> Result<()> {
+ if let Some(saved_apex_data) = saved_data.map(|d| &d.apex_data) {
+ // We don't support APEX updates. (assuming that update will change root digest)
+ ensure!(
+ saved_apex_data == apex_data_from_payload,
+ MicrodroidError::PayloadChanged(String::from("APEXes have changed."))
+ );
+ let apex_metadata = to_metadata(apex_data_from_payload);
+ // Pass metadata(with public keys and root digests) to apexd so that it uses the passed
+ // metadata instead of the default one (/dev/block/by-name/payload-metadata)
+ OpenOptions::new()
+ .create_new(true)
+ .write(true)
+ .open("/apex/vm-payload-metadata")
+ .context("Failed to open /apex/vm-payload-metadata")
+ .and_then(|f| write_metadata(&apex_metadata, f))?;
+ }
+ Ok(())
+}
+
fn get_apk_root_hash_from_idsig<P: AsRef<Path>>(idsig_path: P) -> Result<Box<RootHash>> {
Ok(V4Signature::from_idsig_path(idsig_path)?.hashing_info.raw_root_hash)
}
diff --git a/service_vm/comm/src/client_vm_csr.cddl b/service_vm/comm/src/client_vm_csr.cddl
new file mode 100644
index 0000000..bbc709a
--- /dev/null
+++ b/service_vm/comm/src/client_vm_csr.cddl
@@ -0,0 +1,62 @@
+; CDDL for the CSR sent from the client VM to the RKP VM for pVM remote attestation.
+
+Csr = [
+ DiceCertChain, ; The DICE chain containing measurement of the client VM. See
+ ; keymint/generateCertificateRequestV2.cddl for the DiceCertChain
+ ; definition.
+ SignedData,
+]
+
+; COSE_Sign [RFC9052 s4.1]
+SignedData = [
+ protected: {}, ; The signing algorithms are specified in each signature
+ ; separately.
+ unprotected: {},
+ payload: bstr .cbor CsrPayload,
+ Signatures,
+]
+
+CsrPayload = [ ; CBOR Array defining the payload for CSR
+ challenge: bstr .size (0..64), ; The challenge is provided by the client server.
+ ; It will be included in the certificate chain in the
+ ; attestation result, serving as proof of the freshness
+ ; of the result.
+ PublicKey, ; COSE_Key encoded EC P-256 public key [ RFC9053 s7.1.1 ]
+ ; to be attested. See keymint/PublicKey.cddl for the
+ ; definition, the test flag `-70000` is never used.
+]
+
+Signatures = [
+ dice_cdi_leaf_signature: COSE_Signature_Dice_Cdi_Leaf,
+ attestation_key_signature: COSE_Signature_Attestation_Key,
+]
+
+; COSE_Signature [RFC9052 s4.1]
+COSE_Signature_Dice_Cdi_Leaf = [
+ protected: bstr .cbor { 1: AlgorithmEdDSA },
+ unprotected: {},
+ signature: bstr, ; Ed25519(CDI_Leaf_Priv, SigStruct)
+]
+
+; COSE_Signature [RFC9052 s4.1]
+COSE_Signature_Attestation_Key = [
+ protected: bstr .cbor { 1: AlgorithmES256 },
+ unprotected: {},
+ signature: bstr, ; ECDSA(PrivateKey, SigStruct)
+]
+
+; Sig_structure for SignedData [ RFC9052 s4.4 ]
+SigStruct = {
+ context: "Signature",
+ external_aad: bstr .size 0,
+ payload: bstr .cbor CsrPayload,
+}
+
+; ASN.1 DER-encoded EC P-256 ECPrivateKey [ RFC 5915 s3 ]:
+; ECPrivateKey ::= SEQUENCE {
+; version INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1),
+; privateKey OCTET STRING,
+; parameters [0] ECParameters {{ NamedCurve }} OPTIONAL,
+; publicKey [1] BIT STRING OPTIONAL
+;}
+PrivateKey = bstr
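
A minimal, hypothetical decoding sketch for the two-element `Csr` array defined by this CDDL (the in-tree implementation lives in `service_vm/comm/src/csr.rs` below; the helper name and error handling here are illustrative only). It only checks the outer shape and leaves the two elements uninterpreted:

```rust
use ciborium::Value;

/// Splits the top-level `Csr = [ DiceCertChain, SignedData ]` array into its two
/// elements without interpreting them further. Illustrative sketch only.
fn split_csr(encoded: &[u8]) -> Result<(Value, Value), String> {
    let value: Value = ciborium::de::from_reader(encoded).map_err(|e| e.to_string())?;
    match value {
        Value::Array(mut items) if items.len() == 2 => {
            let signed_data = items.remove(1);
            let dice_cert_chain = items.remove(0);
            Ok((dice_cert_chain, signed_data))
        }
        _ => Err("Csr must be a CBOR array with 2 items".to_string()),
    }
}
```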
diff --git a/service_vm/comm/src/csr.rs b/service_vm/comm/src/csr.rs
index 5e1cbad..757d080 100644
--- a/service_vm/comm/src/csr.rs
+++ b/service_vm/comm/src/csr.rs
@@ -19,8 +19,11 @@
use alloc::vec::Vec;
use ciborium::Value;
use coset::{self, CborSerializable, CoseError};
+use log::error;
/// Represents a CSR sent from the client VM to the service VM for attestation.
+///
+/// See client_vm_csr.cddl for the definition of the CSR.
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct Csr {
/// The DICE certificate chain of the client VM.
@@ -52,8 +55,8 @@
return Err(CoseError::UnexpectedItem("array", "array with 2 items"));
}
Ok(Self {
- signed_csr_payload: try_as_bytes(arr.remove(1))?,
- dice_cert_chain: try_as_bytes(arr.remove(0))?,
+ signed_csr_payload: try_as_bytes(arr.remove(1), "signed_csr_payload")?,
+ dice_cert_chain: try_as_bytes(arr.remove(0), "dice_cert_chain")?,
})
}
}
@@ -91,17 +94,19 @@
return Err(CoseError::UnexpectedItem("array", "array with 2 items"));
}
Ok(Self {
- challenge: try_as_bytes(arr.remove(1))?,
- public_key: try_as_bytes(arr.remove(0))?,
+ challenge: try_as_bytes(arr.remove(1), "challenge")?,
+ public_key: try_as_bytes(arr.remove(0), "public_key")?,
})
}
}
-fn try_as_bytes(v: Value) -> coset::Result<Vec<u8>> {
+fn try_as_bytes(v: Value, context: &str) -> coset::Result<Vec<u8>> {
if let Value::Bytes(data) = v {
Ok(data)
} else {
- Err(CoseError::UnexpectedItem(cbor_value_type(&v), "bytes"))
+ let v_type = cbor_value_type(&v);
+ error!("The provided value type '{v_type}' is not of type 'bytes': {context}");
+ Err(CoseError::UnexpectedItem(v_type, "bytes"))
}
}
diff --git a/service_vm/comm/src/lib.rs b/service_vm/comm/src/lib.rs
index 0818f24..bb85a26 100644
--- a/service_vm/comm/src/lib.rs
+++ b/service_vm/comm/src/lib.rs
@@ -25,7 +25,7 @@
pub use csr::{Csr, CsrPayload};
pub use message::{
- EcdsaP256KeyPair, GenerateCertificateRequestParams, Request, RequestProcessingError, Response,
- ServiceVmRequest,
+ ClientVmAttestationParams, EcdsaP256KeyPair, GenerateCertificateRequestParams, Request,
+ RequestProcessingError, Response, ServiceVmRequest,
};
pub use vsock::VmType;
diff --git a/service_vm/comm/src/message.rs b/service_vm/comm/src/message.rs
index f8d7420..6dd0ccd 100644
--- a/service_vm/comm/src/message.rs
+++ b/service_vm/comm/src/message.rs
@@ -50,6 +50,22 @@
/// Creates a certificate signing request to be sent to the
/// provisioning server.
GenerateCertificateRequest(GenerateCertificateRequestParams),
+
+ /// Requests the service VM to attest the client VM and issue a certificate
+ /// if the attestation succeeds.
+ RequestClientVmAttestation(ClientVmAttestationParams),
+}
+
+/// Represents the params passed to `Request::RequestClientVmAttestation`.
+#[derive(Clone, Debug, Serialize, Deserialize)]
+pub struct ClientVmAttestationParams {
+ /// The CBOR-encoded CSR signed by the CDI_Leaf_Priv of the client VM's DICE chain
+ /// and the private key to be attested.
+ /// See client_vm_csr.cddl for the definition of the CSR.
+ pub csr: Vec<u8>,
+
+ /// The key blob retrieved from RKPD by virtualizationservice.
+ pub remotely_provisioned_key_blob: Vec<u8>,
}
/// Represents a response to a request sent to the service VM.
@@ -66,6 +82,11 @@
/// Returns a CBOR Certificate Signing Request (Csr) serialized into a byte array.
GenerateCertificateRequest(Vec<u8>),
+ /// Returns a certificate covering the public key to be attested in the provided CSR.
+ /// The certificate is signed by the remotely provisioned private key and also
+ /// includes an extension that describes the attested client VM.
+ RequestClientVmAttestation(Vec<u8>),
+
/// Encountered an error during the request processing.
Err(RequestProcessingError),
}
@@ -93,6 +114,12 @@
/// The DICE chain of the service VM is missing.
MissingDiceChain,
+
+ /// Failed to decrypt the remotely provisioned key blob.
+ FailedToDecryptKeyBlob,
+
+ /// The requested operation has not been implemented.
+ OperationUnimplemented,
}
impl fmt::Display for RequestProcessingError {
@@ -109,6 +136,12 @@
write!(f, "An error happened when serializing to/from a CBOR Value.")
}
Self::MissingDiceChain => write!(f, "The DICE chain of the service VM is missing"),
+ Self::FailedToDecryptKeyBlob => {
+ write!(f, "Failed to decrypt the remotely provisioned key blob")
+ }
+ Self::OperationUnimplemented => {
+ write!(f, "The requested operation has not been implemented")
+ }
}
}
}
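
As a rough sketch of how the new variants fit together on the calling side (the function names here are hypothetical; the real dispatch is in `service_vm/requests/src/api.rs` below):

```rust
use service_vm_comm::{ClientVmAttestationParams, Request, Response};

/// Wraps a CBOR-encoded CSR and an RKPD key blob into an attestation request.
fn attestation_request(csr: Vec<u8>, remotely_provisioned_key_blob: Vec<u8>) -> Request {
    Request::RequestClientVmAttestation(ClientVmAttestationParams {
        csr,
        remotely_provisioned_key_blob,
    })
}

/// Extracts the issued certificate from the response, if any.
fn certificate_from(response: Response) -> Result<Vec<u8>, String> {
    match response {
        Response::RequestClientVmAttestation(certificate) => Ok(certificate),
        Response::Err(e) => Err(format!("attestation request failed: {e}")),
        _ => Err("unexpected response variant".to_string()),
    }
}
```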
diff --git a/service_vm/requests/src/api.rs b/service_vm/requests/src/api.rs
index eae0370..315d2af 100644
--- a/service_vm/requests/src/api.rs
+++ b/service_vm/requests/src/api.rs
@@ -14,6 +14,7 @@
//! This module contains the main API for the request processing module.
+use crate::client_vm;
use crate::rkp;
use alloc::vec::Vec;
use diced_open_dice::DiceArtifacts;
@@ -31,6 +32,8 @@
rkp::generate_certificate_request(p, dice_artifacts)
.map_or_else(Response::Err, Response::GenerateCertificateRequest)
}
+ Request::RequestClientVmAttestation(p) => client_vm::request_attestation(p, dice_artifacts)
+ .map_or_else(Response::Err, Response::RequestClientVmAttestation),
}
}
diff --git a/service_vm/requests/src/client_vm.rs b/service_vm/requests/src/client_vm.rs
new file mode 100644
index 0000000..1081f3a
--- /dev/null
+++ b/service_vm/requests/src/client_vm.rs
@@ -0,0 +1,46 @@
+// Copyright 2023, The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//! This module contains functions related to the attestation of the
+//! client VM.
+
+use crate::keyblob::decrypt_private_key;
+use alloc::vec::Vec;
+use core::result;
+use diced_open_dice::DiceArtifacts;
+use log::error;
+use service_vm_comm::{ClientVmAttestationParams, RequestProcessingError};
+
+type Result<T> = result::Result<T, RequestProcessingError>;
+
+pub(super) fn request_attestation(
+ params: ClientVmAttestationParams,
+ dice_artifacts: &dyn DiceArtifacts,
+) -> Result<Vec<u8>> {
+ // TODO(b/309440321): Verify the signatures in the csr.
+
+ // TODO(b/278717513): Compare client VM's DICE chain up to pvmfw cert with
+ // RKP VM's DICE chain.
+
+ let _private_key =
+ decrypt_private_key(¶ms.remotely_provisioned_key_blob, dice_artifacts.cdi_seal())
+ .map_err(|e| {
+ error!("Failed to decrypt the remotely provisioned key blob: {e}");
+ RequestProcessingError::FailedToDecryptKeyBlob
+ })?;
+
+ // TODO(b/309441500): Build a new certificate signed with the remotely provisioned
+ // private key.
+ Err(RequestProcessingError::OperationUnimplemented)
+}
diff --git a/service_vm/requests/src/keyblob.rs b/service_vm/requests/src/keyblob.rs
index 456c879..1fb7a67 100644
--- a/service_vm/requests/src/keyblob.rs
+++ b/service_vm/requests/src/keyblob.rs
@@ -20,8 +20,6 @@
use core::result;
use serde::{Deserialize, Serialize};
use service_vm_comm::RequestProcessingError;
-// TODO(b/241428146): This will be used once the retrieval mechanism is available.
-#[cfg(test)]
use zeroize::Zeroizing;
type Result<T> = result::Result<T, RequestProcessingError>;
@@ -61,9 +59,6 @@
EncryptedKeyBlobV1::new(private_key, kek_secret).map(Self::V1)
}
- // TODO(b/241428146): Use this function to decrypt the retrieved keyblob once the retrieval
- // mechanism is available.
- #[cfg(test)]
pub(crate) fn decrypt_private_key(&self, kek_secret: &[u8]) -> Result<Zeroizing<Vec<u8>>> {
match self {
Self::V1(blob) => blob.decrypt_private_key(kek_secret),
@@ -85,7 +80,6 @@
Ok(Self { kek_salt, encrypted_private_key: ciphertext.to_vec() })
}
- #[cfg(test)]
fn decrypt_private_key(&self, kek_secret: &[u8]) -> Result<Zeroizing<Vec<u8>>> {
let kek = hkdf::<32>(kek_secret, &self.kek_salt, KEK_INFO, Digester::sha512())?;
let mut out = Zeroizing::new(vec![0u8; self.encrypted_private_key.len()]);
@@ -101,6 +95,15 @@
}
}
+pub(crate) fn decrypt_private_key(
+ encrypted_key_blob: &[u8],
+ kek_secret: &[u8],
+) -> Result<Zeroizing<Vec<u8>>> {
+ let key_blob: EncryptedKeyBlob = cbor_util::deserialize(encrypted_key_blob)?;
+ let private_key = key_blob.decrypt_private_key(kek_secret)?;
+ Ok(private_key)
+}
+
#[cfg(test)]
mod tests {
use super::*;
@@ -127,8 +130,7 @@
fn decrypting_keyblob_succeeds_with_the_same_kek() -> Result<()> {
let encrypted_key_blob =
cbor_util::serialize(&EncryptedKeyBlob::new(&TEST_KEY, &TEST_SECRET1)?)?;
- let encrypted_key_blob: EncryptedKeyBlob = cbor_util::deserialize(&encrypted_key_blob)?;
- let decrypted_key = encrypted_key_blob.decrypt_private_key(&TEST_SECRET1)?;
+ let decrypted_key = decrypt_private_key(&encrypted_key_blob, &TEST_SECRET1)?;
assert_eq!(TEST_KEY, decrypted_key.as_slice());
Ok(())
@@ -138,8 +140,7 @@
fn decrypting_keyblob_fails_with_a_different_kek() -> Result<()> {
let encrypted_key_blob =
cbor_util::serialize(&EncryptedKeyBlob::new(&TEST_KEY, &TEST_SECRET1)?)?;
- let encrypted_key_blob: EncryptedKeyBlob = cbor_util::deserialize(&encrypted_key_blob)?;
- let err = encrypted_key_blob.decrypt_private_key(&TEST_SECRET2).unwrap_err();
+ let err = decrypt_private_key(&encrypted_key_blob, &TEST_SECRET2).unwrap_err();
let expected_err: RequestProcessingError =
Error::CallFailed(ApiName::EVP_AEAD_CTX_open, CipherError::BadDecrypt.into()).into();
diff --git a/service_vm/requests/src/lib.rs b/service_vm/requests/src/lib.rs
index e3c5794..b2db298 100644
--- a/service_vm/requests/src/lib.rs
+++ b/service_vm/requests/src/lib.rs
@@ -19,6 +19,7 @@
extern crate alloc;
mod api;
+mod client_vm;
mod keyblob;
mod pub_key;
mod rkp;
diff --git a/virtualizationmanager/src/aidl.rs b/virtualizationmanager/src/aidl.rs
index 4024b04..bf00852 100644
--- a/virtualizationmanager/src/aidl.rs
+++ b/virtualizationmanager/src/aidl.rs
@@ -1030,6 +1030,7 @@
.try_clone()
.context("Failed to clone File from ParcelFileDescriptor")
.or_binder_exception(ExceptionCode::BAD_PARCELABLE)
+ .map(File::from)
}
/// Converts an `&Option<ParcelFileDescriptor>` to an `Option<File>` by cloning the file.
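
The `.map(File::from)` additions in this file (and the similar changes below) presumably reflect that `try_clone()` on the cloned descriptor now yields an `OwnedFd` rather than a `File`, so the call sites convert explicitly. A minimal sketch of the conversion being relied on (names are illustrative):

```rust
use std::fs::File;
use std::os::fd::OwnedFd;

/// `File` implements `From<OwnedFd>`, so the conversion takes ownership of the
/// descriptor without duplicating it.
fn into_file(fd: OwnedFd) -> File {
    File::from(fd)
}
```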
diff --git a/virtualizationmanager/src/composite.rs b/virtualizationmanager/src/composite.rs
index fe17ff4..a4b7eae 100644
--- a/virtualizationmanager/src/composite.rs
+++ b/virtualizationmanager/src/composite.rs
@@ -93,7 +93,8 @@
.context("Invalid partition image file descriptor")?
.as_ref()
.try_clone()
- .context("Failed to clone partition image file descriptor")?;
+ .context("Failed to clone partition image file descriptor")?
+ .into();
let path = fd_path_for_file(&file);
let size = get_partition_size(&file, &path)?;
files.push(file);
diff --git a/virtualizationmanager/src/selinux.rs b/virtualizationmanager/src/selinux.rs
index 0485943..ba62b7f 100644
--- a/virtualizationmanager/src/selinux.rs
+++ b/virtualizationmanager/src/selinux.rs
@@ -17,11 +17,10 @@
use anyhow::{anyhow, bail, Context, Result};
use std::ffi::{CStr, CString};
use std::fmt;
-use std::fs::File;
use std::io;
use std::ops::Deref;
+use std::os::fd::AsRawFd;
use std::os::raw::c_char;
-use std::os::unix::io::AsRawFd;
use std::ptr;
// Partially copied from system/security/keystore2/selinux/src/lib.rs
@@ -102,7 +101,7 @@
}
}
-pub fn getfilecon(file: &File) -> Result<SeContext> {
+pub fn getfilecon<F: AsRawFd>(file: &F) -> Result<SeContext> {
let fd = file.as_raw_fd();
let mut con: *mut c_char = ptr::null_mut();
// SAFETY: the returned pointer `con` is wrapped in SeContext::Raw which is freed with
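
With the relaxed `AsRawFd` bound, `getfilecon` can be called with any descriptor-bearing type, not just `File`. A hypothetical in-crate caller:

```rust
use std::fs::File;
use std::os::fd::OwnedFd;

/// Both calls compile because `File` and `OwnedFd` implement `AsRawFd`.
fn check_contexts(file: &File, fd: &OwnedFd) -> anyhow::Result<()> {
    let _file_con = getfilecon(file)?;
    let _fd_con = getfilecon(fd)?;
    Ok(())
}
```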
diff --git a/virtualizationservice/vfio_handler/src/aidl.rs b/virtualizationservice/vfio_handler/src/aidl.rs
index 2968ff9..63f19c6 100644
--- a/virtualizationservice/vfio_handler/src/aidl.rs
+++ b/virtualizationservice/vfio_handler/src/aidl.rs
@@ -282,11 +282,13 @@
.or_binder_exception(ExceptionCode::ILLEGAL_STATE)?;
let buffer = read_values(dtbo_img_file, dt_size, entry.dt_offset.get().into())?;
- let mut dtbo_fd = dtbo_fd
- .as_ref()
- .try_clone()
- .context("Failed to clone File from ParcelFileDescriptor")
- .or_binder_exception(ExceptionCode::BAD_PARCELABLE)?;
+ let mut dtbo_fd = File::from(
+ dtbo_fd
+ .as_ref()
+ .try_clone()
+ .context("Failed to create File from ParcelFileDescriptor")
+ .or_binder_exception(ExceptionCode::BAD_PARCELABLE)?,
+ );
dtbo_fd
.write_all(&buffer)
diff --git a/vm_payload/Android.bp b/vm_payload/Android.bp
index 7f2b9df..286612c 100644
--- a/vm_payload/Android.bp
+++ b/vm_payload/Android.bp
@@ -8,7 +8,7 @@
crate_name: "vm_payload",
defaults: ["avf_build_flags_rust"],
visibility: ["//visibility:private"],
- srcs: ["src/*.rs"],
+ srcs: ["src/lib.rs"],
include_dirs: ["include"],
prefer_rlib: true,
rustlibs: [
diff --git a/vm_payload/include/vm_payload.h b/vm_payload/include/vm_payload.h
index 2dfa2cb..951b57f 100644
--- a/vm_payload/include/vm_payload.h
+++ b/vm_payload/include/vm_payload.h
@@ -43,11 +43,14 @@
/** The remote attestation completes successfully. */
ATTESTATION_OK = 0,
- /** The remote attestation has failed due to an unspecified cause. */
- ATTESTATION_UNKNOWN_ERROR = -10000,
-
/** The challenge size is not between 0 and 64. */
ATTESTATION_ERROR_INVALID_CHALLENGE = -10001,
+
+ /** Failed to attest the VM. Please retry at a later time. */
+ ATTESTATION_ERROR_ATTESTATION_FAILED = -10002,
+
+ /** Remote attestation is not supported in the current environment. */
+ ATTESTATION_ERROR_UNSUPPORTED = -10003,
} attestation_status_t;
/**
diff --git a/vm_payload/src/api.rs b/vm_payload/src/api.rs
index 64f8d6a..c76f2d3 100644
--- a/vm_payload/src/api.rs
+++ b/vm_payload/src/api.rs
@@ -21,7 +21,7 @@
use anyhow::{bail, ensure, Context, Result};
use binder::{
unstable_api::{new_spibinder, AIBinder},
- Strong,
+ Strong, ExceptionCode,
};
use lazy_static::lazy_static;
use log::{error, info, Level};
@@ -296,15 +296,24 @@
// `challenge_size` bytes and `challenge_size` is not zero.
unsafe { std::slice::from_raw_parts(challenge, challenge_size) }
};
- let attestation_res = unwrap_or_abort(try_request_attestation(challenge));
- *res = Box::into_raw(Box::new(attestation_res));
- attestation_status_t::ATTESTATION_OK
+ let service = unwrap_or_abort(get_vm_payload_service());
+ match service.requestAttestation(challenge) {
+ Ok(attestation_res) => {
+ *res = Box::into_raw(Box::new(attestation_res));
+ attestation_status_t::ATTESTATION_OK
+ }
+ Err(e) => {
+ error!("Remote attestation failed: {e:?}");
+ binder_status_to_attestation_status(e)
+ }
+ }
}
-fn try_request_attestation(public_key: &[u8]) -> Result<AttestationResult> {
- get_vm_payload_service()?
- .requestAttestation(public_key)
- .context("Failed to request attestation")
+fn binder_status_to_attestation_status(status: binder::Status) -> attestation_status_t {
+ match status.exception_code() {
+ ExceptionCode::UNSUPPORTED_OPERATION => attestation_status_t::ATTESTATION_ERROR_UNSUPPORTED,
+ _ => attestation_status_t::ATTESTATION_ERROR_ATTESTATION_FAILED,
+ }
}
/// Converts the return value from `AVmPayload_requestAttestation` to a text string
@@ -320,8 +329,12 @@
attestation_status_t::ATTESTATION_ERROR_INVALID_CHALLENGE => {
CStr::from_bytes_with_nul(b"The challenge size is not between 0 and 64.\0").unwrap()
}
- _ => CStr::from_bytes_with_nul(
- b"The remote attestation has failed due to an unspecified cause.\0",
+ attestation_status_t::ATTESTATION_ERROR_ATTESTATION_FAILED => {
+ CStr::from_bytes_with_nul(b"Failed to attest the VM. Please retry at a later time.\0")
+ .unwrap()
+ }
+ attestation_status_t::ATTESTATION_ERROR_UNSUPPORTED => CStr::from_bytes_with_nul(
+ b"Remote attestation is not supported in the current environment.\0",
)
.unwrap(),
};
diff --git a/vm_payload/src/lib.rs b/vm_payload/src/lib.rs
index e305769..9e10895 100644
--- a/vm_payload/src/lib.rs
+++ b/vm_payload/src/lib.rs
@@ -18,7 +18,7 @@
pub use api::{
AVmAttestationResult_free, AVmAttestationResult_getCertificateAt,
- AVmAttestationResult_getCertificatesCount, AVmAttestationResult_getPrivateKey,
+ AVmAttestationResult_getCertificateCount, AVmAttestationResult_getPrivateKey,
AVmAttestationResult_resultToString, AVmAttestationResult_sign,
AVmPayload_getDiceAttestationCdi, AVmPayload_getDiceAttestationChain,
AVmPayload_getVmInstanceSecret, AVmPayload_notifyPayloadReady, AVmPayload_requestAttestation,
diff --git a/vmbase/src/memory/dbm.rs b/vmbase/src/memory/dbm.rs
index 401022e..108cd5d 100644
--- a/vmbase/src/memory/dbm.rs
+++ b/vmbase/src/memory/dbm.rs
@@ -14,7 +14,7 @@
//! Hardware management of the access flag and dirty state.
-use super::page_table::{is_leaf_pte, PageTable};
+use super::page_table::PageTable;
use super::util::flush_region;
use crate::{dsb, isb, read_sysreg, tlbi, write_sysreg};
use aarch64_paging::paging::{Attributes, Descriptor, MemoryRegion};
@@ -52,14 +52,10 @@
/// Flushes a memory range the descriptor refers to, if the descriptor is in writable-dirty state.
pub(super) fn flush_dirty_range(
va_range: &MemoryRegion,
- desc: &mut Descriptor,
- level: usize,
+ desc: &Descriptor,
+ _level: usize,
) -> Result<(), ()> {
- // Only flush ranges corresponding to dirty leaf PTEs.
let flags = desc.flags().ok_or(())?;
- if !is_leaf_pte(&flags, level) {
- return Ok(());
- }
if !flags.contains(Attributes::READ_ONLY) {
flush_region(va_range.start().0, va_range.len());
}
@@ -71,12 +67,9 @@
pub(super) fn mark_dirty_block(
va_range: &MemoryRegion,
desc: &mut Descriptor,
- level: usize,
+ _level: usize,
) -> Result<(), ()> {
let flags = desc.flags().ok_or(())?;
- if !is_leaf_pte(&flags, level) {
- return Ok(());
- }
if flags.contains(Attributes::DBM) {
assert!(flags.contains(Attributes::READ_ONLY), "unexpected PTE writable state");
desc.modify_flags(Attributes::empty(), Attributes::READ_ONLY);
diff --git a/vmbase/src/memory/page_table.rs b/vmbase/src/memory/page_table.rs
index e067e96..dc346e7 100644
--- a/vmbase/src/memory/page_table.rs
+++ b/vmbase/src/memory/page_table.rs
@@ -16,7 +16,7 @@
use crate::read_sysreg;
use aarch64_paging::idmap::IdMap;
-use aarch64_paging::paging::{Attributes, MemoryRegion, PteUpdater};
+use aarch64_paging::paging::{Attributes, Constraints, Descriptor, MemoryRegion};
use aarch64_paging::MapError;
use core::result;
@@ -83,7 +83,9 @@
/// code being currently executed. Otherwise, the Rust execution model (on which the borrow
/// checker relies) would be violated.
pub unsafe fn activate(&mut self) {
- self.idmap.activate()
+ // SAFETY: the caller of this unsafe function asserts that switching to a different
+ // translation is safe
+ unsafe { self.idmap.activate() }
}
/// Maps the given range of virtual addresses to the physical addresses as lazily mapped
@@ -107,7 +109,15 @@
/// Maps the given range of virtual addresses to the physical addresses as non-executable,
/// read-only and writable-clean normal memory.
pub fn map_data_dbm(&mut self, range: &MemoryRegion) -> Result<()> {
- self.idmap.map_range(range, DATA_DBM)
+ // Map the region down to pages to minimize the size of the regions that will be marked
+ // dirty once a store hits them, but also to ensure that we can clear the read-only
+ // attribute while the mapping is live without causing break-before-make (BBM) violations.
+ // The latter implies that we must avoid the use of the contiguous hint as well.
+ self.idmap.map_range_with_constraints(
+ range,
+ DATA_DBM,
+ Constraints::NO_BLOCK_MAPPINGS | Constraints::NO_CONTIGUOUS_HINT,
+ )
}
/// Maps the given range of virtual addresses to the physical addresses as read-only
@@ -124,18 +134,20 @@
/// Applies the provided updater function to a number of PTEs corresponding to a given memory
/// range.
- pub fn modify_range(&mut self, range: &MemoryRegion, f: &PteUpdater) -> Result<()> {
+ pub fn modify_range<F>(&mut self, range: &MemoryRegion, f: &F) -> Result<()>
+ where
+ F: Fn(&MemoryRegion, &mut Descriptor, usize) -> result::Result<(), ()>,
+ {
self.idmap.modify_range(range, f)
}
-}
-/// Checks whether a PTE at given level is a page or block descriptor.
-#[inline]
-pub(super) fn is_leaf_pte(flags: &Attributes, level: usize) -> bool {
- const LEAF_PTE_LEVEL: usize = 3;
- if flags.contains(Attributes::TABLE_OR_PAGE) {
- level == LEAF_PTE_LEVEL
- } else {
- level < LEAF_PTE_LEVEL
+ /// Applies the provided callback function to a number of PTEs corresponding to a given memory
+ /// range.
+ pub fn walk_range<F>(&self, range: &MemoryRegion, f: &F) -> Result<()>
+ where
+ F: Fn(&MemoryRegion, &Descriptor, usize) -> result::Result<(), ()>,
+ {
+ let mut callback = |mr: &MemoryRegion, d: &Descriptor, l: usize| f(mr, d, l);
+ self.idmap.walk_range(range, &mut callback)
}
}
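
For illustration, a read-only walk with the new `walk_range` might look like the sketch below (hypothetical caller; `PageTable` is the wrapper defined in this file, and the closure matches the `Fn(&MemoryRegion, &Descriptor, usize)` bound above). Because the callback only needs `Fn`, any state it accumulates has to live in something like a `Cell`:

```rust
use aarch64_paging::paging::{Attributes, Descriptor, MemoryRegion};
use aarch64_paging::MapError;
use core::cell::Cell;

/// Counts the writable (i.e. dirty, not READ_ONLY) mappings in `region` without
/// modifying any descriptor.
fn count_dirty_mappings(
    page_table: &PageTable,
    region: &MemoryRegion,
) -> Result<usize, MapError> {
    let dirty = Cell::new(0usize);
    page_table.walk_range(region, &|_va: &MemoryRegion, desc: &Descriptor, _level: usize| {
        let flags = desc.flags().ok_or(())?;
        if !flags.contains(Attributes::READ_ONLY) {
            dirty.set(dirty.get() + 1);
        }
        Ok(())
    })?;
    Ok(dirty.get())
}
```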
diff --git a/vmbase/src/memory/shared.rs b/vmbase/src/memory/shared.rs
index 6c8a844..dd433d4 100644
--- a/vmbase/src/memory/shared.rs
+++ b/vmbase/src/memory/shared.rs
@@ -16,12 +16,14 @@
use super::dbm::{flush_dirty_range, mark_dirty_block, set_dbm_enabled};
use super::error::MemoryTrackerError;
-use super::page_table::{is_leaf_pte, PageTable, MMIO_LAZY_MAP_FLAG};
+use super::page_table::{PageTable, MMIO_LAZY_MAP_FLAG};
use super::util::{page_4kb_of, virt_to_phys};
use crate::dsb;
use crate::exceptions::HandleExceptionError;
use crate::util::RangeExt as _;
-use aarch64_paging::paging::{Attributes, Descriptor, MemoryRegion as VaRange, VirtualAddress};
+use aarch64_paging::paging::{
+ Attributes, Descriptor, MemoryRegion as VaRange, VirtualAddress, BITS_PER_LEVEL, PAGE_SIZE,
+};
use alloc::alloc::{alloc_zeroed, dealloc, handle_alloc_error};
use alloc::boxed::Box;
use alloc::vec::Vec;
@@ -253,7 +255,7 @@
if get_mmio_guard().is_some() {
for range in &self.mmio_regions {
self.page_table
- .modify_range(&get_va_range(range), &mmio_guard_unmap_page)
+ .walk_range(&get_va_range(range), &mmio_guard_unmap_page)
.map_err(|_| MemoryTrackerError::FailedToUnmap)?;
}
}
@@ -319,14 +321,24 @@
/// table entry and MMIO guard mapping the block. Breaks apart a block entry if required.
fn handle_mmio_fault(&mut self, addr: VirtualAddress) -> Result<()> {
let page_start = VirtualAddress(page_4kb_of(addr.0));
+ assert_eq!(page_start.0 % MMIO_GUARD_GRANULE_SIZE, 0);
let page_range: VaRange = (page_start..page_start + MMIO_GUARD_GRANULE_SIZE).into();
let mmio_guard = get_mmio_guard().unwrap();
+ // This must be safe and free from break-before-make (BBM) violations, given that the
+ // initial lazy mapping has the valid bit cleared, and each newly created valid descriptor
+ // inside the mapping has the same size and alignment.
self.page_table
- .modify_range(&page_range, &verify_lazy_mapped_block)
+ .modify_range(&page_range, &|_: &VaRange, desc: &mut Descriptor, _: usize| {
+ let flags = desc.flags().expect("Unsupported PTE flags set");
+ if flags.contains(MMIO_LAZY_MAP_FLAG) && !flags.contains(Attributes::VALID) {
+ desc.modify_flags(Attributes::VALID, Attributes::empty());
+ Ok(())
+ } else {
+ Err(())
+ }
+ })
.map_err(|_| MemoryTrackerError::InvalidPte)?;
- mmio_guard.map(page_start.0)?;
- // Maps a single device page, breaking up block mappings if necessary.
- self.page_table.map_device(&page_range).map_err(|_| MemoryTrackerError::FailedToMap)
+ Ok(mmio_guard.map(page_start.0)?)
}
/// Flush all memory regions marked as writable-dirty.
@@ -340,7 +352,7 @@
// Now flush writable-dirty pages in those regions.
for range in writable_regions.chain(self.payload_range.as_ref().into_iter()) {
self.page_table
- .modify_range(&get_va_range(range), &flush_dirty_range)
+ .walk_range(&get_va_range(range), &flush_dirty_range)
.map_err(|_| MemoryTrackerError::FlushRegionFailed)?;
}
Ok(())
@@ -467,33 +479,13 @@
}
}
-/// Checks whether block flags indicate it should be MMIO guard mapped.
-fn verify_lazy_mapped_block(
- _range: &VaRange,
- desc: &mut Descriptor,
- level: usize,
-) -> result::Result<(), ()> {
- let flags = desc.flags().expect("Unsupported PTE flags set");
- if !is_leaf_pte(&flags, level) {
- return Ok(()); // Skip table PTEs as they aren't tagged with MMIO_LAZY_MAP_FLAG.
- }
- if flags.contains(MMIO_LAZY_MAP_FLAG) && !flags.contains(Attributes::VALID) {
- Ok(())
- } else {
- Err(())
- }
-}
-
/// MMIO guard unmaps page
fn mmio_guard_unmap_page(
va_range: &VaRange,
- desc: &mut Descriptor,
+ desc: &Descriptor,
level: usize,
) -> result::Result<(), ()> {
let flags = desc.flags().expect("Unsupported PTE flags set");
- if !is_leaf_pte(&flags, level) {
- return Ok(());
- }
// This function will be called on an address range that corresponds to a device. Only if a
// page has been accessed (written to or read from), will it contain the VALID flag and be MMIO
// guard mapped. Therefore, we can skip unmapping invalid pages, they were never MMIO guard
@@ -503,9 +495,11 @@
flags.contains(MMIO_LAZY_MAP_FLAG),
"Attempting MMIO guard unmap for non-device pages"
);
+ const MMIO_GUARD_GRANULE_SHIFT: u32 = MMIO_GUARD_GRANULE_SIZE.ilog2() - PAGE_SIZE.ilog2();
+ const MMIO_GUARD_GRANULE_LEVEL: usize =
+ 3 - (MMIO_GUARD_GRANULE_SHIFT as usize / BITS_PER_LEVEL);
assert_eq!(
- va_range.len(),
- MMIO_GUARD_GRANULE_SIZE,
+ level, MMIO_GUARD_GRANULE_LEVEL,
"Failed to break down block mapping before MMIO guard mapping"
);
let page_base = va_range.start().0;
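
For concreteness, here is a worked example of the `MMIO_GUARD_GRANULE_LEVEL` arithmetic above, assuming (hypothetically) a 4 KiB MMIO guard granule, 4 KiB pages, and 9 bits per translation level:

```rust
fn main() {
    // Assumed values; the real constants come from the hypervisor and aarch64_paging.
    let (granule_size, page_size, bits_per_level) = (4096u32, 4096u32, 9u32);
    let shift = granule_size.ilog2() - page_size.ilog2(); // 12 - 12 = 0
    let level = 3 - (shift / bits_per_level) as usize; // 3 - 0 = 3
    // Level 3 descriptors are pages, so the assert above demands that the lazy
    // block mapping was broken down to individual pages before unmapping.
    assert_eq!(level, 3);
    println!("MMIO guard unmap expects descriptors at level {level}");
}
```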