[automerger skipped] Merge "[RESTRICT AUTOMERGE] Enable build MCTS on aosp-android13" into android13-tests-dev am: 41f7dbd101 am: 25586dde83 am: a2fef11e67 am: f6d3ca43fa -s ours
am skip reason: contains skip directive
Original change: https://android-review.googlesource.com/c/platform/build/+/3195473
Change-Id: Ic9207d5175e134f3dba5cb15bb64b913bf65476e
Signed-off-by: Automerger Merge Worker <android-build-automerger-merge-worker@system.gserviceaccount.com>
diff --git a/.gitignore b/.gitignore
index f1f4a52..54c90ed 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,4 @@
+*.iml
*.pyc
*.swp
blueprint/
diff --git a/Changes.md b/Changes.md
index cabbed6..daebd52 100644
--- a/Changes.md
+++ b/Changes.md
@@ -1,5 +1,104 @@
# Build System Changes for Android.mk Writers
+## Python 2 to 3 migration
+
+The path set when running builds now makes the `python` executable point to python 3,
+whereas in previous versions it pointed to python 2. If you still have python 2 scripts,
+you can change the shebang line to use `python2` explicitly. This applies only to
+scripts run directly from makefiles or from soong genrules. This behavior can be
+temporarily overridden by setting the `BUILD_BROKEN_PYTHON_IS_PYTHON2` environment
+variable to `true`. It's only an environment variable and not a product config variable
+because product config sometimes calls python code.
+
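+For example, a minimal sketch of the temporary override for a single build, assuming
+the standard `lunch`/`m` workflow:
+
+``` shell
+export BUILD_BROKEN_PYTHON_IS_PYTHON2=true
+m
+```
+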
+In addition, `python_*` soong modules no longer allow python 2. This can be temporarily
+overridden by setting the `BUILD_BROKEN_USES_SOONG_PYTHON2_MODULES` product configuration
+variable to `true`.
+
+Python 2 is slated for complete removal in V.
+
+## Stop referencing sysprop_library directly from cc modules
+
+For the migration to Bazel, we are no longer mapping sysprop_library targets
+to their generated `cc_library` counterparts when depending on them from a
+cc module. Instead, directly depend on the generated module by prefixing the
+module name with `lib`. For example, depending on the following module:
+
+```
+sysprop_library {
+ name: "foo",
+ srcs: ["foo.sysprop"],
+}
+```
+
+from a module named `bar` can be done like so:
+
+```
+cc_library {
+ name: "bar",
+ srcs: ["bar.cc"],
+ deps: ["libfoo"],
+}
+```
+
+Failure to do this will result in an error about a missing variant.
+
+## Gensrcs starts disallowing depfile property
+
+To migrate all gensrcs to Bazel, we are restricting the use of the `depfile` property
+because Bazel requires specifying the dependencies directly.
+
+To fix existing uses, remove `depfile` and directly specify all the dependencies
+in .bp files. For example:
+
+```
+gensrcs {
+ name: "framework-cppstream-protos",
+ tools: [
+ "aprotoc",
+ "protoc-gen-cppstream",
+ ],
+ cmd: "mkdir -p $(genDir)/$(in) " +
+ "&& $(location aprotoc) " +
+ " --plugin=$(location protoc-gen-cppstream) " +
+ " -I . " +
+ " $(in) ",
+ srcs: [
+ "bar.proto",
+ ],
+ output_extension: "srcjar",
+}
+```
+where `bar.proto` imports `external.proto`, would become
+
+```
+gensrcs {
+ name: "framework-cppstream-protos",
+ tools: [
+ "aprotoc",
+        "protoc-gen-cppstream",
+ ],
+ tool_files: [
+ "external.proto",
+ ],
+ cmd: "mkdir -p $(genDir)/$(in) " +
+ "&& $(location aprotoc) " +
+ " --plugin=$(location protoc-gen-cppstream) " +
+ " $(in) ",
+ srcs: [
+ "bar.proto",
+ ],
+ output_extension: "srcjar",
+}
+```
+as in https://android-review.googlesource.com/c/platform/frameworks/base/+/2125692/.
+
+`BUILD_BROKEN_DEPFILE` can be used to allowlist usage of depfile in `gensrcs`.
+
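+A minimal sketch of the allowlist, assuming the flag is set in the device's
+`BoardConfig.mk` like other `BUILD_BROKEN_*` flags:
+
+``` make
+BUILD_BROKEN_DEPFILE := true
+```
+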
+If `depfile` is needed for generating a javastream proto, a `java_library` with
+`proto.type` set to `stream` is the alternative solution. See
+https://android-review.googlesource.com/c/platform/packages/modules/Permission/+/2118004/
+for an example.
+
## Genrule starts disallowing directory inputs
To better specify the inputs to the build, we are restricting use of directories
@@ -733,6 +832,38 @@
Clang is the default and only supported Android compiler, so there is no reason
for this option to exist.
+### Stop using clang property
+
+The `clang` property has been deleted from Soong. To fix any build errors, remove the
+`clang` property from affected Android.bp files using bpmodify.
+
+
+``` shell
+go run bpmodify.go -w -m=module_name -remove-property=true -property=clang filepath
+```
+
+`BUILD_BROKEN_CLANG_PROPERTY` can be used as a temporary workaround.
+
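+For example, a hypothetical module that still sets the removed property:
+
+```
+cc_library {
+    name: "libexample",
+    srcs: ["example.cc"],
+    clang: true, // deleted property: remove this line, or use bpmodify as above
+}
+```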
+
+### Stop using clang_cflags and clang_asflags
+
+`clang_cflags` and `clang_asflags` are deprecated.
+To fix any build errors, use bpmodify to either
+ - move the contents of `clang_asflags`/`clang_cflags` into `asflags`/`cflags`, or
+ - delete `clang_cflags`/`clang_asflags` as necessary
+
+To move the contents:
+``` shell
+go run bpmodify.go -w -m=module_name -move-property=true -property=clang_cflags -new-location=cflags filepath
+```
+
+To delete:
+``` shell
+go run bpmodify.go -w -m=module_name -remove-property=true -property=clang_cflags filepath
+```
+
+`BUILD_BROKEN_CLANG_ASFLAGS` and `BUILD_BROKEN_CLANG_CFLAGS` can be used as temporary workarounds.
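+
+For example, moving the flags in a hypothetical module:
+
+```
+cc_library {
+    name: "libexample",
+    srcs: ["example.cc"],
+    // was: clang_cflags: ["-Wno-deprecated-declarations"],
+    cflags: ["-Wno-deprecated-declarations"],
+}
+```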
+
### Other envsetup.sh variables {#other_envsetup_variables}
* ANDROID_TOOLCHAIN
@@ -745,6 +876,39 @@
the makefile system. If you need one of them, you'll have to set up your own
version.
+## Soong config variables
+
+### Soong config string variables must list all values they can be set to
+
+To facilitate the transition to Bazel, all `soong_config_string_variable`s
+must only be set to a value listed in their `values` property, or to an empty string.
+Anything else is a build error.
+
+Example Android.bp:
+```
+soong_config_string_variable {
+ name: "my_string_variable",
+ values: [
+ "foo",
+ "bar",
+ ],
+}
+
+soong_config_module_type {
+ name: "my_cc_defaults",
+ module_type: "cc_defaults",
+ config_namespace: "my_namespace",
+ variables: ["my_string_variable"],
+ properties: [
+ "shared_libs",
+ "static_libs",
+ ],
+}
+```
+Product config:
+```
+$(call soong_config_set,my_namespace,my_string_variable,baz) # Will be an error as baz is not listed in my_string_variable's values.
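+# A valid call must use one of the listed values (or an empty string):
+$(call soong_config_set,my_namespace,my_string_variable,foo)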
+```
[build/soong/Changes.md]: https://android.googlesource.com/platform/build/soong/+/master/Changes.md
[build/soong/docs/best_practices.md#headers]: https://android.googlesource.com/platform/build/soong/+/master/docs/best_practices.md#headers
diff --git a/OWNERS b/OWNERS
index 1ee860c..3209665 100644
--- a/OWNERS
+++ b/OWNERS
@@ -1,2 +1,6 @@
include platform/build/soong:/OWNERS
+# Since this file affects all Android developers, lock it down. There is still
+# round-the-world timezone coverage.
+per-file envsetup.sh = joeo@google.com, jingwen@google.com, lberki@google.com
+per-file shell_utils.sh = joeo@google.com, jingwen@google.com, lberki@google.com
diff --git a/core/BUILD.bazel b/core/BUILD.bazel
new file mode 100644
index 0000000..3e69e62
--- /dev/null
+++ b/core/BUILD.bazel
@@ -0,0 +1,4 @@
+# Export tradefed templates for tests.
+exports_files(
+ glob(["*.xml"]),
+)
diff --git a/core/Makefile b/core/Makefile
index 72aa890..38dc37b 100644
--- a/core/Makefile
+++ b/core/Makefile
@@ -7,6 +7,7 @@
SYSTEM_NOTICE_DEPS :=
VENDOR_NOTICE_DEPS :=
UNMOUNTED_NOTICE_DEPS :=
+UNMOUNTED_NOTICE_VENDOR_DEPS :=
ODM_NOTICE_DEPS :=
OEM_NOTICE_DEPS :=
PRODUCT_NOTICE_DEPS :=
@@ -180,6 +181,7 @@
ifeq ($(HOST_OS),linux)
$(call dist-for-goals,sdk,$(API_FINGERPRINT))
+$(call dist-for-goals,droidcore,$(API_FINGERPRINT))
endif
INSTALLED_RECOVERYIMAGE_TARGET :=
@@ -305,6 +307,8 @@
# $(7): module archive
# $(8): staging dir for stripped modules
# $(9): module directory name
+# $(10): extra modules that might be a dependency of modules in this partition, but should not be copied to the output dir
+# $(11): mount point for extra modules
# Returns a list of src:dest pairs to install the modules using copy-many-files.
define build-image-kernel-modules
$(if $(9), \
@@ -316,7 +320,7 @@
$(eval _src := $(8)/$(notdir $(module))) \
$(eval $(call copy-and-strip-kernel-module,$(module),$(_src)))) \
$(_src):$(2)/lib/modules/$(_dir)$(notdir $(module))) \
- $(eval $(call build-image-kernel-modules-depmod,$(1),$(3),$(4),$(5),$(6),$(7),$(2),$(9))) \
+ $(eval $(call build-image-kernel-modules-depmod,$(1),$(3),$(4),$(5),$(6),$(7),$(2),$(9),$(10),$(11))) \
$(4)/$(DEPMOD_STAGING_SUBDIR)/modules.dep:$(2)/lib/modules/$(_dir)modules.dep \
$(4)/$(DEPMOD_STAGING_SUBDIR)/modules.alias:$(2)/lib/modules/$(_dir)modules.alias \
$(4)/$(DEPMOD_STAGING_SUBDIR)/modules.softdep:$(2)/lib/modules/$(_dir)modules.softdep \
@@ -331,6 +335,8 @@
# $(6): module archive
# $(7): output dir
# $(8): module directory name
+# $(9): extra modules which should not be copied to the output dir, but might be a dependency of modules in this partition
+# $(10): mount point for extra modules
# TODO(b/144844424): If a module archive is being used, this step (which
# generates obj/PACKAGING/.../modules.dep) also unzips the module archive into
# the output directory. This should be moved to a module with a
@@ -340,8 +346,11 @@
$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: .KATI_IMPLICIT_OUTPUTS := $(3)/$(DEPMOD_STAGING_SUBDIR)/modules.alias $(3)/$(DEPMOD_STAGING_SUBDIR)/modules.softdep $(3)/$(DEPMOD_STAGING_SUBDIR)/$(5)
$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: $(DEPMOD)
$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_MODULES := $(strip $(1))
+$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_EXTRA_MODULES := $(strip $(9))
$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_MOUNT_POINT := $(2)
+$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_EXTRA_MOUNT_POINT := $(10)
$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_MODULE_DIR := $(3)/$(DEPMOD_STAGING_SUBDIR)/$(2)/lib/modules/$(8)
+$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_EXTRA_MODULE_DIR := $(3)/$(DEPMOD_STAGING_SUBDIR)/$(10)/lib/modules/$(8)
$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_STAGING_DIR := $(3)
$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_LOAD_MODULES := $(strip $(4))
$(3)/$(DEPMOD_STAGING_SUBDIR)/modules.dep: PRIVATE_LOAD_FILE := $(3)/$(DEPMOD_STAGING_SUBDIR)/$(5)
@@ -363,6 +372,20 @@
basename $$$$MODULE >> $$(PRIVATE_LOAD_FILE); \
done; \
)
+ # The ln -sf + find -delete sequence is to remove any modules in
+  # PRIVATE_EXTRA_MODULES which have the same basename as modules in PRIVATE_MODULES.
+ # Basically, it computes a set difference. When there is a duplicate module
+ # present in both directories, we want modules in PRIVATE_MODULES to take
+ # precedence. Since depmod does not provide any guarantee about ordering of
+  # dependency resolution, we achieve this by manually removing any duplicate
+ # modules with lower priority.
+ $(if $(9),\
+ mkdir -p $$(PRIVATE_EXTRA_MODULE_DIR); \
+ find $$(PRIVATE_EXTRA_MODULE_DIR) -maxdepth 1 -type f -name "*.ko" -delete; \
+ cp $$(PRIVATE_EXTRA_MODULES) $$(PRIVATE_EXTRA_MODULE_DIR); \
+ ln -sf $$(PRIVATE_MODULE_DIR)/*.ko $$(PRIVATE_EXTRA_MODULE_DIR); \
+ find $$(PRIVATE_EXTRA_MODULE_DIR) -type l -delete; \
+ )
$(DEPMOD) -b $$(PRIVATE_STAGING_DIR) 0.0
# Turn paths in modules.dep into absolute paths
sed -i.tmp -e 's|\([^: ]*lib/modules/[^: ]*\)|/\1|g' $$(PRIVATE_STAGING_DIR)/$$(DEPMOD_STAGING_SUBDIR)/modules.dep
@@ -436,6 +459,9 @@
# $(4): module load filename
# $(5): stripped staging directory
# $(6): kernel module directory name (top is an out of band value for no directory)
+# $(7): list of extra modules that might be a dependency of modules in this partition
+# $(8): mount point for extra modules. e.g. system
+
define build-image-kernel-modules-dir
$(if $(filter top,$(6)),\
$(eval _kver :=)$(eval _sep :=),\
@@ -446,7 +472,12 @@
$(if $(strip $(BOARD_$(1)_KERNEL_MODULES$(_sep)$(_kver))$(BOARD_$(1)_KERNEL_MODULES_ARCHIVE$(_sep)$(_kver))),\
$(if $(BOARD_$(1)_KERNEL_MODULES_LOAD$(_sep)$(_kver)),,\
$(eval BOARD_$(1)_KERNEL_MODULES_LOAD$(_sep)$(_kver) := $(BOARD_$(1)_KERNEL_MODULES$(_sep)$(_kver)))) \
- $(call copy-many-files,$(call build-image-kernel-modules,$(BOARD_$(1)_KERNEL_MODULES$(_sep)$(_kver)),$(2),$(3),$(call intermediates-dir-for,PACKAGING,depmod_$(1)$(_sep)$(_kver)),$(BOARD_$(1)_KERNEL_MODULES_LOAD$(_sep)$(_kver)),$(4),$(BOARD_$(1)_KERNEL_MODULES_ARCHIVE$(_sep)$(_kver)),$(_stripped_staging_dir),$(_kver)))) \
+ $(if $(filter false,$(BOARD_$(1)_KERNEL_MODULES_LOAD$(_sep)$(_kver))),\
+ $(eval BOARD_$(1)_KERNEL_MODULES_LOAD$(_sep)$(_kver) :=),) \
+ $(eval _files := $(call build-image-kernel-modules,$(BOARD_$(1)_KERNEL_MODULES$(_sep)$(_kver)),$(2),$(3),$(call intermediates-dir-for,PACKAGING,depmod_$(1)$(_sep)$(_kver)),$(BOARD_$(1)_KERNEL_MODULES_LOAD$(_sep)$(_kver)),$(4),$(BOARD_$(1)_KERNEL_MODULES_ARCHIVE$(_sep)$(_kver)),$(_stripped_staging_dir),$(_kver),$(7),$(8))) \
+ $(call copy-many-files,$(_files)) \
+ $(eval _modules := $(BOARD_$(1)_KERNEL_MODULES$(_sep)$(_kver)) ANDROID-GEN ANDROID-GEN ANDROID-GEN ANDROID-GEN) \
+ $(eval KERNEL_MODULE_COPY_FILES += $(join $(addsuffix :,$(_modules)),$(_files)))) \
$(if $(_kver), \
$(eval _dir := $(_kver)/), \
$(eval _dir :=)) \
@@ -459,6 +490,7 @@
$(eval $(call build-image-kernel-modules-blocklist-file, \
$(BOARD_$(1)_KERNEL_MODULES_BLOCKLIST_FILE$(_sep)$(_kver)), \
$(2)/lib/modules/$(_dir)modules.blocklist)) \
+ $(eval ALL_KERNEL_MODULES_BLOCKLIST += $(2)/lib/modules/$(_dir)modules.blocklist) \
$(2)/lib/modules/$(_dir)modules.blocklist)
endef
@@ -481,6 +513,15 @@
endef
# $(1): kernel module directory name (top is an out of band value for no directory)
+define build-vendor-kernel-ramdisk-recovery-load
+$(if $(filter top,$(1)),\
+ $(eval _kver :=)$(eval _sep :=),\
+ $(eval _kver := $(1))$(eval _sep :=_))\
+ $(if $(BOARD_VENDOR_KERNEL_RAMDISK_RECOVERY_KERNEL_MODULES_LOAD$(_sep)$(_kver)),\
+ $(call copy-many-files,$(call module-load-list-copy-paths,$(call intermediates-dir-for,PACKAGING,vendor_kernel_ramdisk_recovery_module_list$(_sep)$(_kver)),$(BOARD_VENDOR_KERNEL_RAMDISK_KERNEL_MODULES$(_sep)$(_kver)),$(BOARD_VENDOR_KERNEL_RAMDISK_RECOVERY_KERNEL_MODULES_LOAD$(_sep)$(_kver)),modules.load.recovery,$(TARGET_VENDOR_KERNEL_RAMDISK_OUT))))
+endef
+
+# $(1): kernel module directory name (top is an out of band value for no directory)
define build-vendor-charger-load
$(if $(filter top,$(1)),\
$(eval _kver :=)$(eval _sep :=),\
@@ -531,6 +572,14 @@
endif
BOARD_KERNEL_MODULE_DIRS += top
+
+# Default to not generating modules.dep for kernel modules on the system
+# side. We should only load these modules if they are depended on by
+# vendor-side modules.
+ifeq ($(BOARD_SYSTEM_KERNEL_MODULES_LOAD),)
+ BOARD_SYSTEM_KERNEL_MODULES_LOAD := false
+endif
+
$(foreach kmd,$(BOARD_KERNEL_MODULE_DIRS), \
$(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-image-kernel-modules-dir,RECOVERY,$(TARGET_RECOVERY_ROOT_OUT),,modules.load.recovery,$(RECOVERY_STRIPPED_MODULE_STAGING_DIR),$(kmd))) \
$(eval vendor_ramdisk_fragment := $(KERNEL_MODULE_DIR_VENDOR_RAMDISK_FRAGMENT_$(kmd))) \
@@ -543,13 +592,28 @@
$(eval $(result_var) += $(call build-image-kernel-modules-dir,VENDOR_RAMDISK,$(output_dir),,modules.load,$(VENDOR_RAMDISK_STRIPPED_MODULE_STAGING_DIR),$(kmd))) \
$(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-image-kernel-modules-dir,VENDOR_KERNEL_RAMDISK,$(TARGET_VENDOR_KERNEL_RAMDISK_OUT),,modules.load,$(VENDOR_KERNEL_RAMDISK_STRIPPED_MODULE_STAGING_DIR),$(kmd))) \
$(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-vendor-ramdisk-recovery-load,$(kmd))) \
- $(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-image-kernel-modules-dir,VENDOR,$(if $(filter true,$(BOARD_USES_VENDOR_DLKMIMAGE)),$(TARGET_OUT_VENDOR_DLKM),$(TARGET_OUT_VENDOR)),vendor,modules.load,$(VENDOR_STRIPPED_MODULE_STAGING_DIR),$(kmd))) \
+ $(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-vendor-kernel-ramdisk-recovery-load,$(kmd))) \
+ $(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-image-kernel-modules-dir,VENDOR,$(if $(filter true,$(BOARD_USES_VENDOR_DLKMIMAGE)),$(TARGET_OUT_VENDOR_DLKM),$(TARGET_OUT_VENDOR)),vendor,modules.load,$(VENDOR_STRIPPED_MODULE_STAGING_DIR),$(kmd),$(BOARD_SYSTEM_KERNEL_MODULES),system)) \
$(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-vendor-charger-load,$(kmd))) \
$(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-image-kernel-modules-dir,ODM,$(if $(filter true,$(BOARD_USES_ODM_DLKMIMAGE)),$(TARGET_OUT_ODM_DLKM),$(TARGET_OUT_ODM)),odm,modules.load,,$(kmd))) \
+ $(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-image-kernel-modules-dir,SYSTEM,$(if $(filter true,$(BOARD_USES_SYSTEM_DLKMIMAGE)),$(TARGET_OUT_SYSTEM_DLKM),$(TARGET_OUT_SYSTEM)),system,modules.load,,$(kmd))) \
$(if $(filter true,$(BOARD_USES_RECOVERY_AS_BOOT)),\
$(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-recovery-as-boot-load,$(kmd))),\
$(eval ALL_DEFAULT_INSTALLED_MODULES += $(call build-image-kernel-modules-dir,GENERIC_RAMDISK,$(TARGET_RAMDISK_OUT),,modules.load,,$(kmd)))))
+ifeq ($(BOARD_SYSTEM_KERNEL_MODULES),)
+ifneq ($(BOARD_SYSTEM_DLKM_SRC),)
+ifneq ($(wildcard $(BOARD_SYSTEM_DLKM_SRC)/*),)
+ SYSTEM_KERNEL_MODULES := $(shell find $(BOARD_SYSTEM_DLKM_SRC) -type f)
+ SRC_SYSTEM_KERNEL_MODULES := $(SYSTEM_KERNEL_MODULES)
+ DST_SYSTEM_KERNEL_MODULES := $(patsubst $(BOARD_SYSTEM_DLKM_SRC)/%,:$(TARGET_OUT_SYSTEM_DLKM)/%,$(SRC_SYSTEM_KERNEL_MODULES))
+ SYSTEM_KERNEL_MODULE_COPY_PAIRS := $(join $(SRC_SYSTEM_KERNEL_MODULES),$(DST_SYSTEM_KERNEL_MODULES))
+ ALL_DEFAULT_INSTALLED_MODULES += $(call copy-many-files,$(SYSTEM_KERNEL_MODULE_COPY_PAIRS))
+endif
+endif
+endif
+
+
# -----------------------------------------------------------------
# Cert-to-package mapping. Used by the post-build signing tools.
# Use a macro to add newline to each echo command
@@ -599,8 +663,10 @@
$(if $(PACKAGES.$(p).EXTERNAL_KEY),\
$(call _apkcerts_write_line,$(PACKAGES.$(p).STEM),EXTERNAL,,$(PACKAGES.$(p).COMPRESSED),$(PACKAGES.$(p).PARTITION),$@),\
$(call _apkcerts_write_line,$(PACKAGES.$(p).STEM),$(PACKAGES.$(p).CERTIFICATE),$(PACKAGES.$(p).PRIVATE_KEY),$(PACKAGES.$(p).COMPRESSED),$(PACKAGES.$(p).PARTITION),$@))))
- $(if $(filter true,$(PRODUCT_SYSTEM_FSVERITY_GENERATE_METADATA)),\
- $(call _apkcerts_write_line,$(notdir $(basename $(FSVERITY_APK_OUT))),$(FSVERITY_APK_KEY_PATH).x509.pem,$(FSVERITY_APK_KEY_PATH).pk8,,system,$@))
+ $(if $(filter true,$(PRODUCT_FSVERITY_GENERATE_METADATA)),\
+ $(call _apkcerts_write_line,BuildManifest,$(FSVERITY_APK_KEY_PATH).x509.pem,$(FSVERITY_APK_KEY_PATH).pk8,,system,$@) \
+ $(if $(filter true,$(BUILDING_SYSTEM_EXT_IMAGE)),\
+ $(call _apkcerts_write_line,BuildManifestSystemExt,$(FSVERITY_APK_KEY_PATH).x509.pem,$(FSVERITY_APK_KEY_PATH).pk8,,system_ext,$@)))
# In case value of PACKAGES is empty.
$(hide) touch $@
@@ -694,20 +760,14 @@
@rm -f $@
echo "# Modules using -Wno-error" >> $@
for m in $(sort $(SOONG_MODULES_USING_WNO_ERROR) $(MODULES_USING_WNO_ERROR)); do echo $$m >> $@; done
- echo "# Modules added default -Wall" >> $@
- for m in $(sort $(SOONG_MODULES_ADDED_WALL) $(MODULES_ADDED_WALL)); do echo $$m >> $@; done
+ echo "# Modules that allow warnings" >> $@
+ for m in $(sort $(SOONG_MODULES_WARNINGS_ALLOWED) $(MODULES_WARNINGS_ALLOWED)); do echo $$m >> $@; done
$(call declare-0p-target,$(WALL_WERROR))
$(call dist-for-goals,droidcore-unbundled,$(WALL_WERROR))
# -----------------------------------------------------------------
-# C/C++ flag information for modules
-$(call dist-for-goals,droidcore-unbundled,$(SOONG_MODULES_CFLAG_ARTIFACTS))
-
-$(foreach a,$(SOONG_MODULES_CFLAG_ARTIFACTS),$(call declare-0p-target,$(call word-colon,1,$(a))))
-
-# -----------------------------------------------------------------
# Modules missing profile files
PGO_PROFILE_MISSING := $(PRODUCT_OUT)/pgo_profile_file_missing.txt
$(PGO_PROFILE_MISSING):
@@ -886,21 +946,24 @@
RAMDISK_EXT := .gz
endif
+# This file contains /dev nodes description added to the generic ramdisk
+RAMDISK_NODE_LIST := $(PRODUCT_OUT)/ramdisk_node_list
+
# We just build this directly to the install location.
INSTALLED_RAMDISK_TARGET := $(BUILT_RAMDISK_TARGET)
$(INSTALLED_RAMDISK_TARGET): PRIVATE_DIRS := debug_ramdisk dev metadata mnt proc second_stage_resources sys
-$(INSTALLED_RAMDISK_TARGET): $(MKBOOTFS) $(INTERNAL_RAMDISK_FILES) $(INSTALLED_FILES_FILE_RAMDISK) | $(COMPRESSION_COMMAND_DEPS)
+$(INSTALLED_RAMDISK_TARGET): $(MKBOOTFS) $(RAMDISK_NODE_LIST) $(INTERNAL_RAMDISK_FILES) $(INSTALLED_FILES_FILE_RAMDISK) | $(COMPRESSION_COMMAND_DEPS)
$(call pretty,"Target ramdisk: $@")
$(hide) mkdir -p $(addprefix $(TARGET_RAMDISK_OUT)/,$(PRIVATE_DIRS))
ifeq (true,$(BOARD_USES_GENERIC_KERNEL_IMAGE))
$(hide) mkdir -p $(addprefix $(TARGET_RAMDISK_OUT)/first_stage_ramdisk/,$(PRIVATE_DIRS))
endif
- $(hide) $(MKBOOTFS) -d $(TARGET_OUT) $(TARGET_RAMDISK_OUT) | $(COMPRESSION_COMMAND) > $@
+ $(hide) $(MKBOOTFS) -n $(RAMDISK_NODE_LIST) -d $(TARGET_OUT) $(TARGET_RAMDISK_OUT) | $(COMPRESSION_COMMAND) > $@
$(call declare-1p-container,$(INSTALLED_RAMDISK_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_RAMDISK_TARGET),$(INTERNAL_RAMDISK_FILE),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_RAMDISK_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_RAMDISK_TARGET)
.PHONY: ramdisk-nodeps
ramdisk-nodeps: $(MKBOOTFS) | $(COMPRESSION_COMMAND_DEPS)
@@ -964,20 +1027,16 @@
$(if $(1),--partition_size $(1),--dynamic_partition_size)
endef
+ifndef BOARD_PREBUILT_BOOTIMAGE
+
ifneq ($(strip $(TARGET_NO_KERNEL)),true)
INTERNAL_BOOTIMAGE_ARGS := \
$(addprefix --second ,$(INSTALLED_2NDBOOTLOADER_TARGET))
-INTERNAL_INIT_BOOT_IMAGE_ARGS :=
-
# TODO(b/229701033): clean up BOARD_BUILD_GKI_BOOT_IMAGE_WITHOUT_RAMDISK.
ifneq ($(BOARD_BUILD_GKI_BOOT_IMAGE_WITHOUT_RAMDISK),true)
- ifneq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
- ifneq ($(BUILDING_INIT_BOOT_IMAGE),true)
- INTERNAL_BOOTIMAGE_ARGS += --ramdisk $(INSTALLED_RAMDISK_TARGET)
- else
- INTERNAL_INIT_BOOT_IMAGE_ARGS += --ramdisk $(INSTALLED_RAMDISK_TARGET)
- endif
+ ifneq ($(BUILDING_INIT_BOOT_IMAGE),true)
+ INTERNAL_BOOTIMAGE_ARGS += --ramdisk $(INSTALLED_RAMDISK_TARGET)
endif
endif
@@ -989,15 +1048,6 @@
INTERNAL_BOOTIMAGE_FILES := $(filter-out --%,$(INTERNAL_BOOTIMAGE_ARGS))
-ifeq ($(PRODUCT_SUPPORTS_VERITY),true)
-ifeq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
-VERITY_KEYID := veritykeyid=id:`openssl x509 -in $(PRODUCT_VERITY_SIGNING_KEY).x509.pem -text \
- | grep keyid | sed 's/://g' | tr -d '[:space:]' | tr '[:upper:]' '[:lower:]' | sed 's/keyid//g'`
-endif
-endif
-
-INTERNAL_KERNEL_CMDLINE := $(strip $(INTERNAL_KERNEL_CMDLINE) buildvariant=$(TARGET_BUILD_VARIANT) $(VERITY_KEYID))
-
# kernel cmdline/base/pagesize in boot.
# - If using GKI, use GENERIC_KERNEL_CMDLINE. Remove kernel base and pagesize because they are
# device-specific.
@@ -1106,37 +1156,14 @@
$(call declare-container-license-metadata,$(INSTALLED_BOOTIMAGE_TARGET),SPDX-license-identifier-GPL-2.0-only SPDX-license-identifier-Apache-2.0,restricted notice,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING build/soong/licenses/LICENSE,"Boot Image",boot)
$(call declare-container-license-deps,$(INSTALLED_BOOTIMAGE_TARGET),$(INTERNAL_BOOTIMAGE_FILES) $(INTERNAL_GKI_CERTIFICATE_DEPS),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
.PHONY: bootimage-nodeps
bootimage-nodeps: $(MKBOOTIMG) $(AVBTOOL) $(BOARD_AVB_BOOT_KEY_PATH) $(INTERNAL_GKI_CERTIFICATE_DEPS)
@echo "make $@: ignoring dependencies"
$(foreach b,$(INSTALLED_BOOTIMAGE_TARGET),$(call build_boot_board_avb_enabled,$(b)))
-else ifeq (true,$(PRODUCT_SUPPORTS_BOOT_SIGNER)) # BOARD_AVB_ENABLE != true
-
-# $1: boot image target
-define build_boot_supports_boot_signer
- $(MKBOOTIMG) --kernel $(call bootimage-to-kernel,$(1)) $(INTERNAL_BOOTIMAGE_ARGS) $(INTERNAL_MKBOOTIMG_VERSION_ARGS) $(BOARD_MKBOOTIMG_ARGS) --output $(1)
- $(BOOT_SIGNER) /boot $@ $(PRODUCT_VERITY_SIGNING_KEY).pk8 $(PRODUCT_VERITY_SIGNING_KEY).x509.pem $(1)
- $(call assert-max-image-size,$(1),$(call get-bootimage-partition-size,$(1),boot))
-endef
-
-$(INSTALLED_BOOTIMAGE_TARGET): $(MKBOOTIMG) $(INTERNAL_BOOTIMAGE_FILES) $(BOOT_SIGNER)
- $(call pretty,"Target boot image: $@")
- $(call build_boot_supports_boot_signer,$@)
-
-$(call declare-1p-container,$(INSTALLED_BOOTIMAGE_TARGET),)
-$(call declare-container-license-deps,$(INSTALLED_BOOTIMAGE_TARGET),$(INTERNAL_BOOTIMAGE_FILES),$(PRODUCT_OUT)/:/)
-
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
-
-.PHONY: bootimage-nodeps
-bootimage-nodeps: $(MKBOOTIMG) $(BOOT_SIGNER)
- @echo "make $@: ignoring dependencies"
- $(foreach b,$(INSTALLED_BOOTIMAGE_TARGET),$(call build_boot_supports_boot_signer,$(b)))
-
-else ifeq (true,$(PRODUCT_SUPPORTS_VBOOT)) # PRODUCT_SUPPORTS_BOOT_SIGNER != true
+else ifeq (true,$(PRODUCT_SUPPORTS_VBOOT)) # BOARD_AVB_ENABLE != true
# $1: boot image target
define build_boot_supports_vboot
@@ -1152,7 +1179,7 @@
$(call declare-container-license-metadata,$(INSTALLED_BOOTIMAGE_TARGET),SPDX-license-identifier-GPL-2.0-only SPDX-license-identifier-Apache-2.0,restricted notice,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING build/soong/licenses/LICENSE,"Boot Image",boot)
$(call declare-container-license-deps,$(INSTALLED_BOOTIMAGE_TARGET),$(INTERNAL_BOOTIMAGE_FILES),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
.PHONY: bootimage-nodeps
bootimage-nodeps: $(MKBOOTIMG) $(VBOOT_SIGNER) $(FUTILITY)
@@ -1174,7 +1201,7 @@
$(call declare-container-license-metadata,$(INSTALLED_BOOTIMAGE_TARGET),SPDX-license-identifier-GPL-2.0-only SPDX-license-identifier-Apache-2.0,restricted notice,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING build/soong/licenses/LICENSE,"Boot Image",boot)
$(call declare-container-license-deps,$(INSTALLED_BOOTIMAGE_TARGET),$(INTERNAL_BOOTIMAGE_FILES),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
.PHONY: bootimage-nodeps
bootimage-nodeps: $(MKBOOTIMG)
@@ -1185,13 +1212,17 @@
endif # BUILDING_BOOT_IMAGE
else # TARGET_NO_KERNEL == "true"
-ifdef BOARD_PREBUILT_BOOTIMAGE
+INSTALLED_BOOTIMAGE_TARGET :=
+endif # TARGET_NO_KERNEL
+
+else # BOARD_PREBUILT_BOOTIMAGE defined
INTERNAL_PREBUILT_BOOTIMAGE := $(BOARD_PREBUILT_BOOTIMAGE)
INSTALLED_BOOTIMAGE_TARGET := $(PRODUCT_OUT)/boot.img
ifeq ($(BOARD_AVB_ENABLE),true)
$(INSTALLED_BOOTIMAGE_TARGET): $(INTERNAL_PREBUILT_BOOTIMAGE) $(AVBTOOL) $(BOARD_AVB_BOOT_KEY_PATH)
cp $(INTERNAL_PREBUILT_BOOTIMAGE) $@
+ chmod +w $@
$(AVBTOOL) add_hash_footer \
--image $@ \
$(call get-partition-size-argument,$(BOARD_BOOTIMAGE_PARTITION_SIZE)) \
@@ -1201,16 +1232,14 @@
$(call declare-container-license-metadata,$(INSTALLED_BOOTIMAGE_TARGET),SPDX-license-identifier-GPL-2.0-only SPDX-license-identifier-Apache-2.0,restricted notice,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING build/soong/licenses/LICENSE,"Boot Image",boot)
$(call declare-container-license-deps,$(INSTALLED_BOOTIMAGE_TARGET),$(INTERNAL_PREBUILT_BOOTIMAGE),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
else
$(INSTALLED_BOOTIMAGE_TARGET): $(INTERNAL_PREBUILT_BOOTIMAGE)
cp $(INTERNAL_PREBUILT_BOOTIMAGE) $@
endif # BOARD_AVB_ENABLE
-else # BOARD_PREBUILT_BOOTIMAGE not defined
-INSTALLED_BOOTIMAGE_TARGET :=
endif # BOARD_PREBUILT_BOOTIMAGE
-endif # TARGET_NO_KERNEL
+
endif # my_installed_prebuilt_gki_apex not defined
my_apex_extracted_boot_image :=
@@ -1223,6 +1252,8 @@
INSTALLED_INIT_BOOT_IMAGE_TARGET := $(PRODUCT_OUT)/init_boot.img
$(INSTALLED_INIT_BOOT_IMAGE_TARGET): $(MKBOOTIMG) $(INSTALLED_RAMDISK_TARGET)
+INTERNAL_INIT_BOOT_IMAGE_ARGS := --ramdisk $(INSTALLED_RAMDISK_TARGET)
+
ifdef BOARD_KERNEL_PAGESIZE
INTERNAL_INIT_BOOT_IMAGE_ARGS += --pagesize $(BOARD_KERNEL_PAGESIZE)
endif
@@ -1249,7 +1280,7 @@
$(call declare-1p-target,$(INSTALLED_INIT_BOOT_IMAGE_TARGET),)
endif
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_INIT_BOOT_IMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_INIT_BOOT_IMAGE_TARGET)
else # BUILDING_INIT_BOOT_IMAGE is not true
@@ -1260,10 +1291,11 @@
ifeq ($(BOARD_AVB_ENABLE),true)
$(INSTALLED_INIT_BOOT_IMAGE_TARGET): $(INTERNAL_PREBUILT_INIT_BOOT_IMAGE) $(AVBTOOL) $(BOARD_AVB_INIT_BOOT_KEY_PATH)
cp $(INTERNAL_PREBUILT_INIT_BOOT_IMAGE) $@
+ chmod +w $@
$(AVBTOOL) add_hash_footer \
--image $@ \
$(call get-partition-size-argument,$(BOARD_INIT_BOOT_IMAGE_PARTITION_SIZE)) \
- --partition_name boot $(INTERNAL_AVB_INIT_BOOT_SIGNING_ARGS) \
+ --partition_name init_boot $(INTERNAL_AVB_INIT_BOOT_SIGNING_ARGS) \
$(BOARD_AVB_INIT_BOOT_ADD_HASH_FOOTER_ARGS)
$(call declare-1p-container,$(INSTALLED_INIT_BOOT_IMAGE_TARGET),)
@@ -1275,7 +1307,7 @@
$(call declare-1p-target,$(INSTALLED_INIT_BOOT_IMAGE_TARGET),)
endif # BOARD_AVB_ENABLE
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_INIT_BOOT_IMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_INIT_BOOT_IMAGE_TARGET)
else # BOARD_PREBUILT_INIT_BOOT_IMAGE not defined
INSTALLED_INIT_BOOT_IMAGE_TARGET :=
@@ -1288,10 +1320,6 @@
INSTALLED_FILES_OUTSIDE_IMAGES := $(filter-out $(TARGET_VENDOR_RAMDISK_OUT)/%, $(INSTALLED_FILES_OUTSIDE_IMAGES))
ifeq ($(BUILDING_VENDOR_BOOT_IMAGE),true)
-ifeq ($(PRODUCT_SUPPORTS_VERITY),true)
- $(error vboot 1.0 does not support vendor_boot partition)
-endif
-
INTERNAL_VENDOR_RAMDISK_FILES := $(filter $(TARGET_VENDOR_RAMDISK_OUT)/%, \
$(ALL_DEFAULT_INSTALLED_MODULES))
@@ -1613,6 +1641,21 @@
target_system_dlkm_notice_file_xml_gz := $(TARGET_OUT_INTERMEDIATES)/NOTICE_SYSTEM_DLKM.xml.gz
installed_system_dlkm_notice_xml_gz := $(TARGET_OUT_SYSTEM_DLKM)/etc/NOTICE.xml.gz
+ALL_INSTALLED_NOTICE_FILES := \
+ $(installed_notice_html_or_xml_gz) \
+ $(installed_vendor_notice_xml_gz) \
+ $(installed_product_notice_xml_gz) \
+ $(installed_system_ext_notice_xml_gz) \
+ $(installed_odm_notice_xml_gz) \
+ $(installed_vendor_dlkm_notice_xml_gz) \
+ $(installed_odm_dlkm_notice_xml_gz) \
+ $(installed_system_dlkm_notice_xml_gz) \
+
+# $1 installed file path, e.g. out/target/product/vsoc_x86_64/system_ext/etc/NOTICE.xml.gz
+define is-notice-file
+$(if $(findstring $1,$(ALL_INSTALLED_NOTICE_FILES)),Y)
+endef
+
# Notice files are copied to TARGET_OUT_NOTICE_FILES as a side-effect of their module
# being built. A notice xml file must depend on all modules that could potentially
# install a license file relevant to it.
@@ -1762,19 +1805,6 @@
# Targets for user images
# #################################################################
-INTERNAL_USERIMAGES_EXT_VARIANT :=
-ifeq ($(TARGET_USERIMAGES_USE_EXT2),true)
-INTERNAL_USERIMAGES_EXT_VARIANT := ext2
-else
-ifeq ($(TARGET_USERIMAGES_USE_EXT3),true)
-INTERNAL_USERIMAGES_EXT_VARIANT := ext3
-else
-ifeq ($(TARGET_USERIMAGES_USE_EXT4),true)
-INTERNAL_USERIMAGES_EXT_VARIANT := ext4
-endif
-endif
-endif
-
# These options tell the recovery updater/installer how to mount the partitions writable.
# <fstype>=<fstype_opts>[|<fstype_opts>]...
# fstype_opts := <opt>[,<opt>]...
@@ -1782,19 +1812,6 @@
# The following worked on Nexus devices with Kernel 3.1, 3.4, 3.10
DEFAULT_TARGET_RECOVERY_FSTYPE_MOUNT_OPTIONS := ext4=max_batch_time=0,commit=1,data=ordered,barrier=1,errors=panic,nodelalloc
-ifneq (true,$(TARGET_USERIMAGES_SPARSE_EXT_DISABLED))
- INTERNAL_USERIMAGES_SPARSE_EXT_FLAG := -s
-endif
-ifneq (true,$(TARGET_USERIMAGES_SPARSE_EROFS_DISABLED))
- INTERNAL_USERIMAGES_SPARSE_EROFS_FLAG := -s
-endif
-ifneq (true,$(TARGET_USERIMAGES_SPARSE_SQUASHFS_DISABLED))
- INTERNAL_USERIMAGES_SPARSE_SQUASHFS_FLAG := -s
-endif
-ifneq (true,$(TARGET_USERIMAGES_SPARSE_F2FS_DISABLED))
- INTERNAL_USERIMAGES_SPARSE_F2FS_FLAG := -S
-endif
-
INTERNAL_USERIMAGES_DEPS := \
$(BUILD_IMAGE) \
$(MKE2FS_CONF) \
@@ -1837,13 +1854,6 @@
INTERNAL_USERIMAGES_DEPS += $(MKSQUASHFSUSERIMG)
endif
-ifeq (true,$(PRODUCT_SUPPORTS_VERITY))
-INTERNAL_USERIMAGES_DEPS += $(BUILD_VERITY_METADATA) $(BUILD_VERITY_TREE) $(APPEND2SIMG) $(VERITY_SIGNER)
-ifeq (true,$(PRODUCT_SUPPORTS_VERITY_FEC))
-INTERNAL_USERIMAGES_DEPS += $(FEC)
-endif
-endif
-
ifeq ($(BOARD_AVB_ENABLE),true)
INTERNAL_USERIMAGES_DEPS += $(AVBTOOL)
endif
@@ -1860,14 +1870,6 @@
INTERNAL_USERIMAGES_DEPS += $(SELINUX_FC)
-ifeq (true,$(PRODUCT_USE_DYNAMIC_PARTITIONS))
-
-ifeq ($(PRODUCT_SUPPORTS_VERITY),true)
- $(error vboot 1.0 doesn't support logical partition)
-endif
-
-endif # PRODUCT_USE_DYNAMIC_PARTITIONS
-
# $(1) the partition name (eg system)
# $(2) the image prop file
define add-common-flags-to-image-props
@@ -1881,6 +1883,7 @@
define add-common-ro-flags-to-image-props
$(eval _var := $(call to-upper,$(1)))
$(if $(BOARD_$(_var)IMAGE_EROFS_COMPRESSOR),$(hide) echo "$(1)_erofs_compressor=$(BOARD_$(_var)IMAGE_EROFS_COMPRESSOR)" >> $(2))
+$(if $(BOARD_$(_var)IMAGE_EROFS_COMPRESS_HINTS),$(hide) echo "$(1)_erofs_compress_hints=$(BOARD_$(_var)IMAGE_EROFS_COMPRESS_HINTS)" >> $(2))
$(if $(BOARD_$(_var)IMAGE_EROFS_PCLUSTER_SIZE),$(hide) echo "$(1)_erofs_pcluster_size=$(BOARD_$(_var)IMAGE_EROFS_PCLUSTER_SIZE)" >> $(2))
$(if $(BOARD_$(_var)IMAGE_EXTFS_INODE_COUNT),$(hide) echo "$(1)_extfs_inode_count=$(BOARD_$(_var)IMAGE_EXTFS_INODE_COUNT)" >> $(2))
$(if $(BOARD_$(_var)IMAGE_EXTFS_RSV_PCT),$(hide) echo "$(1)_extfs_rsv_pct=$(BOARD_$(_var)IMAGE_EXTFS_RSV_PCT)" >> $(2))
@@ -1960,23 +1963,22 @@
)
$(hide) echo "ext_mkuserimg=$(notdir $(MKEXTUSERIMG))" >> $(1)
-$(if $(INTERNAL_USERIMAGES_EXT_VARIANT),$(hide) echo "fs_type=$(INTERNAL_USERIMAGES_EXT_VARIANT)" >> $(1))
-$(if $(INTERNAL_USERIMAGES_SPARSE_EXT_FLAG),$(hide) echo "extfs_sparse_flag=$(INTERNAL_USERIMAGES_SPARSE_EXT_FLAG)" >> $(1))
-$(if $(INTERNAL_USERIMAGES_SPARSE_EROFS_FLAG),$(hide) echo "erofs_sparse_flag=$(INTERNAL_USERIMAGES_SPARSE_EROFS_FLAG)" >> $(1))
-$(if $(INTERNAL_USERIMAGES_SPARSE_SQUASHFS_FLAG),$(hide) echo "squashfs_sparse_flag=$(INTERNAL_USERIMAGES_SPARSE_SQUASHFS_FLAG)" >> $(1))
-$(if $(INTERNAL_USERIMAGES_SPARSE_F2FS_FLAG),$(hide) echo "f2fs_sparse_flag=$(INTERNAL_USERIMAGES_SPARSE_F2FS_FLAG)" >> $(1))
+$(if $(filter true,$(TARGET_USERIMAGES_USE_EXT2)),$(hide) echo "fs_type=ext2" >> $(1),
+ $(if $(filter true,$(TARGET_USERIMAGES_USE_EXT3)),$(hide) echo "fs_type=ext3" >> $(1),
+ $(if $(filter true,$(TARGET_USERIMAGES_USE_EXT4)),$(hide) echo "fs_type=ext4" >> $(1))))
+
+$(if $(filter true,$(TARGET_USERIMAGES_SPARSE_EXT_DISABLED)),,$(hide) echo "extfs_sparse_flag=-s" >> $(1))
+$(if $(filter true,$(TARGET_USERIMAGES_SPARSE_EROFS_DISABLED)),,$(hide) echo "erofs_sparse_flag=-s" >> $(1))
+$(if $(filter true,$(TARGET_USERIMAGES_SPARSE_SQUASHFS_DISABLED)),,$(hide) echo "squashfs_sparse_flag=-s" >> $(1))
+$(if $(filter true,$(TARGET_USERIMAGES_SPARSE_F2FS_DISABLED)),,$(hide) echo "f2fs_sparse_flag=-S" >> $(1))
$(if $(BOARD_EROFS_COMPRESSOR),$(hide) echo "erofs_default_compressor=$(BOARD_EROFS_COMPRESSOR)" >> $(1))
+$(if $(BOARD_EROFS_COMPRESS_HINTS),$(hide) echo "erofs_default_compress_hints=$(BOARD_EROFS_COMPRESS_HINTS)" >> $(1))
$(if $(BOARD_EROFS_PCLUSTER_SIZE),$(hide) echo "erofs_pcluster_size=$(BOARD_EROFS_PCLUSTER_SIZE)" >> $(1))
$(if $(BOARD_EROFS_SHARE_DUP_BLOCKS),$(hide) echo "erofs_share_dup_blocks=$(BOARD_EROFS_SHARE_DUP_BLOCKS)" >> $(1))
$(if $(BOARD_EROFS_USE_LEGACY_COMPRESSION),$(hide) echo "erofs_use_legacy_compression=$(BOARD_EROFS_USE_LEGACY_COMPRESSION)" >> $(1))
$(if $(BOARD_EXT4_SHARE_DUP_BLOCKS),$(hide) echo "ext4_share_dup_blocks=$(BOARD_EXT4_SHARE_DUP_BLOCKS)" >> $(1))
$(if $(BOARD_FLASH_LOGICAL_BLOCK_SIZE), $(hide) echo "flash_logical_block_size=$(BOARD_FLASH_LOGICAL_BLOCK_SIZE)" >> $(1))
$(if $(BOARD_FLASH_ERASE_BLOCK_SIZE), $(hide) echo "flash_erase_block_size=$(BOARD_FLASH_ERASE_BLOCK_SIZE)" >> $(1))
-$(if $(PRODUCT_SUPPORTS_BOOT_SIGNER),$(hide) echo "boot_signer=$(PRODUCT_SUPPORTS_BOOT_SIGNER)" >> $(1))
-$(if $(PRODUCT_SUPPORTS_VERITY),$(hide) echo "verity=$(PRODUCT_SUPPORTS_VERITY)" >> $(1))
-$(if $(PRODUCT_SUPPORTS_VERITY),$(hide) echo "verity_key=$(PRODUCT_VERITY_SIGNING_KEY)" >> $(1))
-$(if $(PRODUCT_SUPPORTS_VERITY),$(hide) echo "verity_signer_cmd=$(notdir $(VERITY_SIGNER))" >> $(1))
-$(if $(PRODUCT_SUPPORTS_VERITY_FEC),$(hide) echo "verity_fec=$(PRODUCT_SUPPORTS_VERITY_FEC)" >> $(1))
$(if $(filter eng, $(TARGET_BUILD_VARIANT)),$(hide) echo "verity_disable=true" >> $(1))
$(if $(PRODUCT_SYSTEM_VERITY_PARTITION),$(hide) echo "system_verity_block_device=$(PRODUCT_SYSTEM_VERITY_PARTITION)" >> $(1))
$(if $(PRODUCT_VENDOR_VERITY_PARTITION),$(hide) echo "vendor_verity_block_device=$(PRODUCT_VENDOR_VERITY_PARTITION)" >> $(1))
@@ -2059,8 +2061,6 @@
$(hide) echo "avb_system_dlkm_rollback_index_location=$(BOARD_SYSTEM_SYSTEM_DLKM_ROLLBACK_INDEX_LOCATION)" >> $(1)))
$(if $(filter true,$(BOARD_USES_RECOVERY_AS_BOOT)),\
$(hide) echo "recovery_as_boot=true" >> $(1))
-$(if $(filter true,$(BOARD_BUILD_SYSTEM_ROOT_IMAGE)),\
- $(hide) echo "system_root_image=true" >> $(1))
$(if $(filter true,$(BOARD_BUILD_GKI_BOOT_IMAGE_WITHOUT_RAMDISK)),\
$(hide) echo "gki_boot_image_without_ramdisk=true" >> $(1))
$(hide) echo "root_dir=$(TARGET_ROOT_OUT)" >> $(1)
@@ -2337,20 +2337,18 @@
# (BOARD_USES_FULL_RECOVERY_IMAGE = true);
# b) We build a single image that contains boot and recovery both - no recovery image to install
# (BOARD_USES_RECOVERY_AS_BOOT = true);
-# c) We mount the system image as / and therefore do not have a ramdisk in boot.img
-# (BOARD_BUILD_SYSTEM_ROOT_IMAGE = true).
-# d) We include the recovery DTBO image within recovery - not needing the resource file as we
+# c) We include the recovery DTBO image within recovery - not needing the resource file as we
# do bsdiff because boot and recovery will contain different number of entries
# (BOARD_INCLUDE_RECOVERY_DTBO = true).
-# e) We include the recovery ACPIO image within recovery - not needing the resource file as we
+# d) We include the recovery ACPIO image within recovery - not needing the resource file as we
# do bsdiff because boot and recovery will contain different number of entries
# (BOARD_INCLUDE_RECOVERY_ACPIO = true).
-# f) We build a single image that contains vendor_boot and recovery both - no recovery image to
+# e) We build a single image that contains vendor_boot and recovery both - no recovery image to
# install
# (BOARD_MOVE_RECOVERY_RESOURCES_TO_VENDOR_BOOT = true).
ifeq (,$(filter true, $(BOARD_USES_FULL_RECOVERY_IMAGE) $(BOARD_USES_RECOVERY_AS_BOOT) \
- $(BOARD_BUILD_SYSTEM_ROOT_IMAGE) $(BOARD_INCLUDE_RECOVERY_DTBO) $(BOARD_INCLUDE_RECOVERY_ACPIO) \
+ $(BOARD_INCLUDE_RECOVERY_DTBO) $(BOARD_INCLUDE_RECOVERY_ACPIO) \
$(BOARD_MOVE_RECOVERY_RESOURCES_TO_VENDOR_BOOT)))
# Named '.dat' so we don't attempt to use imgdiff for patching it.
RECOVERY_RESOURCE_ZIP := $(TARGET_OUT_VENDOR)/etc/recovery-resource.dat
@@ -2472,8 +2470,7 @@
# Use rsync because "cp -Rf" fails to overwrite broken symlinks on Mac.
rsync -a --exclude=sdcard $(IGNORE_RECOVERY_SEPOLICY) $(IGNORE_CACHE_LINK) $(TARGET_ROOT_OUT) $(TARGET_RECOVERY_OUT)
# Modifying ramdisk contents...
- $(if $(filter true,$(BOARD_BUILD_SYSTEM_ROOT_IMAGE)),, \
- ln -sf /system/bin/init $(TARGET_RECOVERY_ROOT_OUT)/init)
+ ln -sf /system/bin/init $(TARGET_RECOVERY_ROOT_OUT)/init
# Removes $(TARGET_RECOVERY_ROOT_OUT)/init*.rc EXCEPT init.recovery*.rc.
find $(TARGET_RECOVERY_ROOT_OUT) -maxdepth 1 -name 'init*.rc' -type f -not -name "init.recovery.*.rc" | xargs rm -f
cp $(TARGET_ROOT_OUT)/init.recovery.*.rc $(TARGET_RECOVERY_ROOT_OUT)/ 2> /dev/null || true # Ignore error when the src file doesn't exist.
@@ -2506,12 +2503,6 @@
$(MKBOOTIMG) $(if $(strip $(2)),--kernel $(strip $(2))) $(INTERNAL_RECOVERYIMAGE_ARGS) \
$(INTERNAL_MKBOOTIMG_VERSION_ARGS) \
$(BOARD_RECOVERY_MKBOOTIMG_ARGS) --output $(1))
- $(if $(filter true,$(PRODUCT_SUPPORTS_BOOT_SIGNER)),\
- $(if $(filter true,$(BOARD_USES_RECOVERY_AS_BOOT)),\
- $(BOOT_SIGNER) /boot $(1) $(PRODUCT_VERITY_SIGNING_KEY).pk8 $(PRODUCT_VERITY_SIGNING_KEY).x509.pem $(1),\
- $(BOOT_SIGNER) /recovery $(1) $(PRODUCT_VERITY_SIGNING_KEY).pk8 $(PRODUCT_VERITY_SIGNING_KEY).x509.pem $(1)\
- )\
- )
$(if $(filter true,$(PRODUCT_SUPPORTS_VBOOT)), \
$(VBOOT_SIGNER) $(FUTILITY) $(1).unsigned $(PRODUCT_VBOOT_SIGNING_KEY).vbpubk $(PRODUCT_VBOOT_SIGNING_KEY).vbprivk $(PRODUCT_VBOOT_SIGNING_SUBKEY).vbprivk $(1).keyblock $(1))
$(if $(filter true,$(BOARD_USES_RECOVERY_AS_BOOT)), \
@@ -2524,9 +2515,6 @@
endef
recoveryimage-deps := $(MKBOOTIMG) $(recovery_ramdisk) $(recovery_kernel)
-ifeq (true,$(PRODUCT_SUPPORTS_BOOT_SIGNER))
- recoveryimage-deps += $(BOOT_SIGNER)
-endif
ifeq (true,$(PRODUCT_SUPPORTS_VBOOT))
recoveryimage-deps += $(VBOOT_SIGNER)
endif
@@ -2556,7 +2544,7 @@
$(call declare-container-license-metadata,$(INSTALLED_BOOTIMAGE_TARGET),SPDX-license-identifier-GPL-2.0-only SPDX-license-identifier-Apache-2.0,restricted notice,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING build/soong/licenses/LICENSE,"Boot Image",boot)
$(call declare-container-license-deps,$(INSTALLED_BOOTIMAGE_TARGET),$(recoveryimage-deps),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_BOOTIMAGE_TARGET)
endif # BOARD_USES_RECOVERY_AS_BOOT
$(INSTALLED_RECOVERYIMAGE_TARGET): $(recoveryimage-deps)
@@ -2574,7 +2562,7 @@
$(call declare-1p-container,$(INSTALLED_RECOVERYIMAGE_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_RECOVERYIMAGE_TARGET),$(recoveryimage-deps),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_RECOVERYIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_RECOVERYIMAGE_TARGET)
.PHONY: recoveryimage-nodeps
recoveryimage-nodeps:
@@ -2660,7 +2648,7 @@
$(call declare-1p-container,$(INSTALLED_DEBUG_RAMDISK_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_DEBUG_RAMDISK_TARGET),$(INSTALLED_RAMDISK_TARGET),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_DEBUG_RAMDISK_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_DEBUG_RAMDISK_TARGET)
.PHONY: ramdisk_debug-nodeps
ramdisk_debug-nodeps: $(MKBOOTFS) | $(COMPRESSION_COMMAND_DEPS)
@@ -2727,7 +2715,7 @@
$(call declare-container-license-metadata,$(INSTALLED_DEBUG_BOOTIMAGE_TARGET),SPDX-license-identifier-GPL-2.0-only SPDX-license-identifier-Apache-2.0,restricted notice,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING build/soong/licenses/LICENSE,"Boot Image",boot)
$(call declare-container-license-deps,$(INSTALLED_DEBUG_BOOTIMAGE_TARGET),$(INSTALLED_BOOTIMAGE_TARGET),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_DEBUG_BOOTIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_DEBUG_BOOTIMAGE_TARGET)
.PHONY: bootimage_debug-nodeps
bootimage_debug-nodeps: $(MKBOOTIMG) $(AVBTOOL)
@@ -2880,7 +2868,7 @@
$(call declare-1p-container,$(INSTALLED_TEST_HARNESS_RAMDISK_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_TEST_HARNESS_RAMDISK_TARGET),$(INTERNAL_TEST_HARNESS_RAMDISK_SRC_DEPS),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_TEST_HARNESS_RAMDISK_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_TEST_HARNESS_RAMDISK_TARGET)
.PHONY: ramdisk_test_harness-nodeps
ramdisk_test_harness-nodeps: $(MKBOOTFS) | $(COMPRESSION_COMMAND_DEPS)
@@ -2929,7 +2917,7 @@
$(call declare-1p-container,$(INSTALLED_TEST_HARNESS_BOOTIMAGE_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_TEST_HARNESS_BOOTIMAGE_TARGET),$(INSTALLED_DEBUG_BOOTIMAGE_TARGET),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_TEST_HARNESS_BOOTIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_TEST_HARNESS_BOOTIMAGE_TARGET)
.PHONY: bootimage_test_harness-nodeps
bootimage_test_harness-nodeps: $(MKBOOTIMG) $(AVBTOOL)
@@ -2993,7 +2981,7 @@
endif # BUILDING_DEBUG_BOOT_IMAGE || BUILDING_DEBUG_VENDOR_BOOT_IMAGE
-
+PARTITION_COMPAT_SYMLINKS :=
# Creates a compatibility symlink between two partitions, e.g. /system/vendor to /vendor
# $1: from location (e.g $(TARGET_OUT)/vendor)
# $2: destination location (e.g. /vendor)
@@ -3011,28 +2999,36 @@
ln -sfn $2 $1
$1: .KATI_SYMLINK_OUTPUTS := $1
)
+$(eval PARTITION_COMPAT_SYMLINKS += $1)
$1
endef
-# -----------------------------------------------------------------
-# system image
-
# FSVerity metadata generation
# Generate fsverity metadata files (.fsv_meta) and build manifest
-# (system/etc/security/fsverity/BuildManifest.apk) BEFORE filtering systemimage files below
-ifeq ($(PRODUCT_SYSTEM_FSVERITY_GENERATE_METADATA),true)
+# (<partition>/etc/security/fsverity/BuildManifest<suffix>.apk) BEFORE filtering systemimage,
+# vendorimage, odmimage, productimage files below.
+ifeq ($(PRODUCT_FSVERITY_GENERATE_METADATA),true)
-# Generate fsv_meta
-fsverity-metadata-targets := $(sort $(filter \
+fsverity-metadata-targets-patterns := \
$(TARGET_OUT)/framework/% \
$(TARGET_OUT)/etc/boot-image.prof \
$(TARGET_OUT)/etc/dirty-image-objects \
$(TARGET_OUT)/etc/preloaded-classes \
- $(TARGET_OUT)/etc/classpaths/%.pb, \
+ $(TARGET_OUT)/etc/classpaths/%.pb \
+
+ifdef BUILDING_SYSTEM_EXT_IMAGE
+fsverity-metadata-targets-patterns += $(TARGET_OUT_SYSTEM_EXT)/framework/%
+endif
+
+# Generate fsv_meta
+fsverity-metadata-targets := $(sort $(filter \
+ $(fsverity-metadata-targets-patterns), \
$(ALL_DEFAULT_INSTALLED_MODULES)))
define fsverity-generate-metadata
+$(call declare-0p-target,$(1).fsv_meta)
+
$(1).fsv_meta: PRIVATE_SRC := $(1)
$(1).fsv_meta: PRIVATE_FSVERITY := $(HOST_OUT_EXECUTABLES)/fsverity
$(1).fsv_meta: $(HOST_OUT_EXECUTABLES)/fsverity_metadata_generator $(HOST_OUT_EXECUTABLES)/fsverity $(1)
@@ -3043,38 +3039,70 @@
$(foreach f,$(fsverity-metadata-targets),$(eval $(call fsverity-generate-metadata,$(f))))
ALL_DEFAULT_INSTALLED_MODULES += $(addsuffix .fsv_meta,$(fsverity-metadata-targets))
-# Generate BuildManifest.apk
FSVERITY_APK_KEY_PATH := $(DEFAULT_SYSTEM_DEV_CERTIFICATE)
-FSVERITY_APK_OUT := $(TARGET_OUT)/etc/security/fsverity/BuildManifest.apk
-FSVERITY_APK_MANIFEST_PATH := system/security/fsverity/AndroidManifest.xml
-$(FSVERITY_APK_OUT): PRIVATE_FSVERITY := $(HOST_OUT_EXECUTABLES)/fsverity
-$(FSVERITY_APK_OUT): PRIVATE_AAPT2 := $(HOST_OUT_EXECUTABLES)/aapt2
-$(FSVERITY_APK_OUT): PRIVATE_MIN_SDK_VERSION := $(DEFAULT_APP_TARGET_SDK)
-$(FSVERITY_APK_OUT): PRIVATE_VERSION_CODE := $(PLATFORM_SDK_VERSION)
-$(FSVERITY_APK_OUT): PRIVATE_VERSION_NAME := $(APPS_DEFAULT_VERSION_NAME)
-$(FSVERITY_APK_OUT): PRIVATE_APKSIGNER := $(HOST_OUT_EXECUTABLES)/apksigner
-$(FSVERITY_APK_OUT): PRIVATE_MANIFEST := $(FSVERITY_APK_MANIFEST_PATH)
-$(FSVERITY_APK_OUT): PRIVATE_FRAMEWORK_RES := $(call intermediates-dir-for,APPS,framework-res,,COMMON)/package-export.apk
-$(FSVERITY_APK_OUT): PRIVATE_KEY := $(FSVERITY_APK_KEY_PATH)
-$(FSVERITY_APK_OUT): PRIVATE_INPUTS := $(fsverity-metadata-targets)
-$(FSVERITY_APK_OUT): $(HOST_OUT_EXECUTABLES)/fsverity_manifest_generator \
+FSVERITY_APK_MANIFEST_TEMPLATE_PATH := system/security/fsverity/AndroidManifest.xml
+
+# Generate and install BuildManifest<suffix>.apk for the given partition
+# $(1): path of the output APK
+# $(2): partition name
+define fsverity-generate-and-install-manifest-apk
+fsverity-metadata-targets-$(2) := $(filter $(PRODUCT_OUT)/$(2)/%,\
+ $(fsverity-metadata-targets))
+$(1): PRIVATE_FSVERITY := $(HOST_OUT_EXECUTABLES)/fsverity
+$(1): PRIVATE_AAPT2 := $(HOST_OUT_EXECUTABLES)/aapt2
+$(1): PRIVATE_MIN_SDK_VERSION := $(DEFAULT_APP_TARGET_SDK)
+$(1): PRIVATE_VERSION_CODE := $(PLATFORM_SDK_VERSION)
+$(1): PRIVATE_VERSION_NAME := $(APPS_DEFAULT_VERSION_NAME)
+$(1): PRIVATE_APKSIGNER := $(HOST_OUT_EXECUTABLES)/apksigner
+$(1): PRIVATE_MANIFEST := $(FSVERITY_APK_MANIFEST_TEMPLATE_PATH)
+$(1): PRIVATE_FRAMEWORK_RES := $(call intermediates-dir-for,APPS,framework-res,,COMMON)/package-export.apk
+$(1): PRIVATE_KEY := $(FSVERITY_APK_KEY_PATH)
+$(1): PRIVATE_INPUTS := $$(fsverity-metadata-targets-$(2))
+$(1): PRIVATE_ASSETS := $(call intermediates-dir-for,ETC,build_manifest-$(2))/assets
+$(1): $(HOST_OUT_EXECUTABLES)/fsverity_manifest_generator \
$(HOST_OUT_EXECUTABLES)/fsverity $(HOST_OUT_EXECUTABLES)/aapt2 \
- $(HOST_OUT_EXECUTABLES)/apksigner $(FSVERITY_APK_MANIFEST_PATH) \
+ $(HOST_OUT_EXECUTABLES)/apksigner $(FSVERITY_APK_MANIFEST_TEMPLATE_PATH) \
$(FSVERITY_APK_KEY_PATH).x509.pem $(FSVERITY_APK_KEY_PATH).pk8 \
$(call intermediates-dir-for,APPS,framework-res,,COMMON)/package-export.apk \
- $(fsverity-metadata-targets)
- $< --fsverity-path $(PRIVATE_FSVERITY) --aapt2-path $(PRIVATE_AAPT2) \
- --min-sdk-version $(PRIVATE_MIN_SDK_VERSION) \
- --version-code $(PRIVATE_VERSION_CODE) \
- --version-name $(PRIVATE_VERSION_NAME) \
- --apksigner-path $(PRIVATE_APKSIGNER) --apk-key-path $(PRIVATE_KEY) \
- --apk-manifest-path $(PRIVATE_MANIFEST) --framework-res $(PRIVATE_FRAMEWORK_RES) \
- --output $@ \
- --base-dir $(PRODUCT_OUT) $(PRIVATE_INPUTS)
+ $$(fsverity-metadata-targets-$(2))
+ rm -rf $$(PRIVATE_ASSETS)
+ mkdir -p $$(PRIVATE_ASSETS)
+ $$< --fsverity-path $$(PRIVATE_FSVERITY) \
+ --base-dir $$(PRODUCT_OUT) \
+ --output $$(PRIVATE_ASSETS)/build_manifest.pb \
+ $$(PRIVATE_INPUTS)
+ $$(PRIVATE_AAPT2) link -o $$@ \
+ -A $$(PRIVATE_ASSETS) \
+ -I $$(PRIVATE_FRAMEWORK_RES) \
+ --min-sdk-version $$(PRIVATE_MIN_SDK_VERSION) \
+ --version-code $$(PRIVATE_VERSION_CODE) \
+ --version-name $$(PRIVATE_VERSION_NAME) \
+ --manifest $$(PRIVATE_MANIFEST) \
+ --rename-manifest-package com.android.security.fsverity_metadata.$(2)
+ $$(PRIVATE_APKSIGNER) sign --in $$@ \
+ --cert $$(PRIVATE_KEY).x509.pem \
+ --key $$(PRIVATE_KEY).pk8
-ALL_DEFAULT_INSTALLED_MODULES += $(FSVERITY_APK_OUT)
+$(1).idsig: $(1)
-endif # PRODUCT_SYSTEM_FSVERITY_GENERATE_METADATA
+ALL_DEFAULT_INSTALLED_MODULES += $(1) $(1).idsig
+
+endef # fsverity-generate-and-install-manifest-apk
+
+$(eval $(call fsverity-generate-and-install-manifest-apk, \
+ $(TARGET_OUT)/etc/security/fsverity/BuildManifest.apk,system))
+ALL_FSVERITY_BUILD_MANIFEST_APK += $(TARGET_OUT)/etc/security/fsverity/BuildManifest.apk $(TARGET_OUT)/etc/security/fsverity/BuildManifest.apk.idsig
+ifdef BUILDING_SYSTEM_EXT_IMAGE
+ $(eval $(call fsverity-generate-and-install-manifest-apk, \
+ $(TARGET_OUT_SYSTEM_EXT)/etc/security/fsverity/BuildManifestSystemExt.apk,system_ext))
+ ALL_FSVERITY_BUILD_MANIFEST_APK += $(TARGET_OUT_SYSTEM_EXT)/etc/security/fsverity/BuildManifestSystemExt.apk $(TARGET_OUT_SYSTEM_EXT)/etc/security/fsverity/BuildManifestSystemExt.apk.idsig
+endif
+
+endif # PRODUCT_FSVERITY_GENERATE_METADATA
+
+
+# -----------------------------------------------------------------
+# system image
INSTALLED_FILES_OUTSIDE_IMAGES := $(filter-out $(TARGET_OUT)/%, $(INSTALLED_FILES_OUTSIDE_IMAGES))
INTERNAL_SYSTEMIMAGE_FILES := $(sort $(filter $(TARGET_OUT)/%, \
@@ -3082,17 +3110,23 @@
# Create symlink /system/vendor to /vendor if necessary.
ifdef BOARD_USES_VENDORIMAGE
- INTERNAL_SYSTEMIMAGE_FILES += $(call create-partition-compat-symlink,$(TARGET_OUT)/vendor,/vendor,vendor.img)
+ _vendor_symlink := $(call create-partition-compat-symlink,$(TARGET_OUT)/vendor,/vendor,vendor.img)
+ INTERNAL_SYSTEMIMAGE_FILES += $(_vendor_symlink)
+ ALL_DEFAULT_INSTALLED_MODULES += $(_vendor_symlink)
endif
# Create symlink /system/product to /product if necessary.
ifdef BOARD_USES_PRODUCTIMAGE
- INTERNAL_SYSTEMIMAGE_FILES += $(call create-partition-compat-symlink,$(TARGET_OUT)/product,/product,product.img)
+ _product_symlink := $(call create-partition-compat-symlink,$(TARGET_OUT)/product,/product,product.img)
+ INTERNAL_SYSTEMIMAGE_FILES += $(_product_symlink)
+ ALL_DEFAULT_INSTALLED_MODULES += $(_product_symlink)
endif
# Create symlink /system/system_ext to /system_ext if necessary.
ifdef BOARD_USES_SYSTEM_EXTIMAGE
- INTERNAL_SYSTEMIMAGE_FILES += $(call create-partition-compat-symlink,$(TARGET_OUT)/system_ext,/system_ext,system_ext.img)
+ _systemext_symlink := $(call create-partition-compat-symlink,$(TARGET_OUT)/system_ext,/system_ext,system_ext.img)
+ INTERNAL_SYSTEMIMAGE_FILES += $(_systemext_symlink)
+ ALL_DEFAULT_INSTALLED_MODULES += $(_systemext_symlink)
endif
# -----------------------------------------------------------------
@@ -3105,7 +3139,9 @@
# - /system/lib/modules is a symlink to a directory that stores system DLKMs.
# - The system_dlkm partition is mounted at /system_dlkm at runtime.
ifdef BOARD_USES_SYSTEM_DLKMIMAGE
- INTERNAL_SYSTEMIMAGE_FILES += $(call create-partition-compat-symlink,$(TARGET_OUT)/lib/modules,/system_dlkm/lib/modules,system_dlkm.img)
+ _system_dlkm_lib_modules_symlink := $(call create-partition-compat-symlink,$(TARGET_OUT)/lib/modules,/system_dlkm/lib/modules,system_dlkm.img)
+ INTERNAL_SYSTEMIMAGE_FILES += $(_system_dlkm_lib_modules_symlink)
+ ALL_DEFAULT_INSTALLED_MODULES += $(_system_dlkm_lib_modules_symlink)
endif
FULL_SYSTEMIMAGE_DEPS := $(INTERNAL_SYSTEMIMAGE_FILES) $(INTERNAL_USERIMAGES_DEPS)
@@ -3125,17 +3161,24 @@
# Install system linker configuration
# Collect all available stub libraries installed in system and install with predefined linker configuration
+# Also append LLNDK libraries in the APEX as required libs
SYSTEM_LINKER_CONFIG := $(TARGET_OUT)/etc/linker.config.pb
SYSTEM_LINKER_CONFIG_SOURCE := $(call intermediates-dir-for,ETC,system_linker_config)/system_linker_config
$(SYSTEM_LINKER_CONFIG): PRIVATE_SYSTEM_LINKER_CONFIG_SOURCE := $(SYSTEM_LINKER_CONFIG_SOURCE)
-$(SYSTEM_LINKER_CONFIG) : $(INTERNAL_SYSTEMIMAGE_FILES) $(SYSTEM_LINKER_CONFIG_SOURCE) | conv_linker_config
+$(SYSTEM_LINKER_CONFIG): $(INTERNAL_SYSTEMIMAGE_FILES) $(SYSTEM_LINKER_CONFIG_SOURCE) | conv_linker_config
+ @echo Creating linker config: $@
+ @mkdir -p $(dir $@)
+ @rm -f $@
$(HOST_OUT_EXECUTABLES)/conv_linker_config systemprovide --source $(PRIVATE_SYSTEM_LINKER_CONFIG_SOURCE) \
- --output $@ --value "$(STUB_LIBRARIES)" --system "$(TARGET_OUT)"
+ --output $@ --value "$(STUB_LIBRARIES)" --system "$(TARGET_OUT)"
+ $(HOST_OUT_EXECUTABLES)/conv_linker_config append --source $@ --output $@ --key requireLibs \
+ --value "$(foreach lib,$(LLNDK_MOVED_TO_APEX_LIBRARIES), $(lib).so)"
$(call declare-1p-target,$(SYSTEM_LINKER_CONFIG),)
$(call declare-license-deps,$(SYSTEM_LINKER_CONFIG),$(INTERNAL_SYSTEMIMAGE_FILES) $(SYSTEM_LINKER_CONFIG_SOURCE))
FULL_SYSTEMIMAGE_DEPS += $(SYSTEM_LINKER_CONFIG)
+ALL_DEFAULT_INSTALLED_MODULES += $(SYSTEM_LINKER_CONFIG)
# installed file list
# Depending on anything that $(BUILT_SYSTEMIMAGE) depends on.
@@ -3200,7 +3243,7 @@
ifneq ($(INSTALLED_BOOTIMAGE_TARGET),)
ifneq ($(INSTALLED_RECOVERYIMAGE_TARGET),)
ifneq ($(BOARD_USES_FULL_RECOVERY_IMAGE),true)
-ifneq (,$(filter true, $(BOARD_BUILD_SYSTEM_ROOT_IMAGE) $(BOARD_INCLUDE_RECOVERY_DTBO) $(BOARD_INCLUDE_RECOVERY_ACPIO)))
+ifneq (,$(filter true,$(BOARD_INCLUDE_RECOVERY_DTBO) $(BOARD_INCLUDE_RECOVERY_ACPIO)))
diff_tool := $(HOST_OUT_EXECUTABLES)/bsdiff
else
diff_tool := $(HOST_OUT_EXECUTABLES)/imgdiff
@@ -3292,7 +3335,7 @@
$(call declare-1p-container,$(INSTALLED_USERDATAIMAGE_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_USERDATAIMAGE_TARGET),$(INSTALLED_USERDATAIMAGE_TARGET_DEPS),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_USERDATAIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_USERDATAIMAGE_TARGET)
.PHONY: userdataimage-nodeps
userdataimage-nodeps: | $(INTERNAL_USERIMAGES_DEPS)
@@ -3344,7 +3387,7 @@
$(call declare-1p-container,$(INSTALLED_BPTIMAGE_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_BPTIMAGE_TARGET),$(BOARD_BPT_INPUT_FILES),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_BPTIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_BPTIMAGE_TARGET)
.PHONY: bptimage-nodeps
bptimage-nodeps:
@@ -3383,7 +3426,7 @@
$(call declare-1p-container,$(INSTALLED_CACHEIMAGE_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_CACHEIMAGE_TARGET),$(INTERNAL_USERIMAGES_DEPS) $(INTERNAL_CACHEIMAGE_FILES),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_CACHEIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_CACHEIMAGE_TARGET)
.PHONY: cacheimage-nodeps
cacheimage-nodeps: | $(INTERNAL_USERIMAGES_DEPS)
@@ -3487,7 +3530,9 @@
# Create symlink /vendor/odm to /odm if necessary.
ifdef BOARD_USES_ODMIMAGE
- INTERNAL_VENDORIMAGE_FILES += $(call create-partition-compat-symlink,$(TARGET_OUT_VENDOR)/odm,/odm,odm.img)
+ _odm_symlink := $(call create-partition-compat-symlink,$(TARGET_OUT_VENDOR)/odm,/odm,odm.img)
+ INTERNAL_VENDORIMAGE_FILES += $(_odm_symlink)
+ ALL_DEFAULT_INSTALLED_MODULES += $(_odm_symlink)
endif
# Create symlinks for vendor_dlkm on devices with a vendor_dlkm partition:
@@ -3505,9 +3550,27 @@
# The vendor DLKMs and other vendor_dlkm files must not be accessed using other paths because they
# are not guaranteed to exist on all devices.
ifdef BOARD_USES_VENDOR_DLKMIMAGE
- INTERNAL_VENDORIMAGE_FILES += $(call create-partition-compat-symlink,$(TARGET_OUT_VENDOR)/lib/modules,/vendor_dlkm/lib/modules,vendor_dlkm.img)
+ _vendor_dlkm_lib_modules_symlink := $(call create-partition-compat-symlink,$(TARGET_OUT_VENDOR)/lib/modules,/vendor_dlkm/lib/modules,vendor_dlkm.img)
+ INTERNAL_VENDORIMAGE_FILES += $(_vendor_dlkm_lib_modules_symlink)
+ ALL_DEFAULT_INSTALLED_MODULES += $(_vendor_dlkm_lib_modules_symlink)
endif
+# Install vendor/etc/linker.config.pb with PRODUCT_VENDOR_LINKER_CONFIG_FRAGMENTS and STUB_LIBRARIES
+vendor_linker_config_file := $(TARGET_OUT_VENDOR)/etc/linker.config.pb
+$(vendor_linker_config_file): private_linker_config_fragments := $(PRODUCT_VENDOR_LINKER_CONFIG_FRAGMENTS)
+$(vendor_linker_config_file): $(INTERNAL_VENDORIMAGE_FILES) $(PRODUCT_VENDOR_LINKER_CONFIG_FRAGMENTS) | $(HOST_OUT_EXECUTABLES)/conv_linker_config
+ @echo Creating linker config: $@
+ @mkdir -p $(dir $@)
+ @rm -f $@
+ $(HOST_OUT_EXECUTABLES)/conv_linker_config proto \
+ --source $(call normalize-path-list,$(private_linker_config_fragments)) \
+ --output $@
+ $(HOST_OUT_EXECUTABLES)/conv_linker_config systemprovide --source $@ \
+ --output $@ --value "$(STUB_LIBRARIES)" --system "$(TARGET_OUT_VENDOR)"
+$(call declare-0p-target,$(vendor_linker_config_file),)
+INTERNAL_VENDORIMAGE_FILES += $(vendor_linker_config_file)
+ALL_DEFAULT_INSTALLED_MODULES += $(vendor_linker_config_file)
+
INSTALLED_FILES_FILE_VENDOR := $(PRODUCT_OUT)/installed-files-vendor.txt
INSTALLED_FILES_JSON_VENDOR := $(INSTALLED_FILES_FILE_VENDOR:.txt=.json)
$(INSTALLED_FILES_FILE_VENDOR): .KATI_IMPLICIT_OUTPUTS := $(INSTALLED_FILES_JSON_VENDOR)
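The vendor `linker.config.pb` rule above is driven entirely by `PRODUCT_VENDOR_LINKER_CONFIG_FRAGMENTS`: the fragments are merged by `conv_linker_config`, and the result is then extended so the listed `STUB_LIBRARIES` are marked as provided by the system. A minimal sketch of the product-side input, with a hypothetical device path:

```
# Hypothetical device.mk snippet; each fragment is a linker config source
# that conv_linker_config merges into vendor/etc/linker.config.pb.
PRODUCT_VENDOR_LINKER_CONFIG_FRAGMENTS += \
    device/acme/rocket/linker.config.json
```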
@@ -3561,7 +3624,7 @@
$(eval $(call copy-one-file,$(BOARD_PREBUILT_VENDORIMAGE),$(INSTALLED_VENDORIMAGE_TARGET)))
$(if $(strip $(ALL_TARGETS.$(INSTALLED_VENDORIMAGE_TARGET).META_LIC)),,\
$(if $(strip $(ALL_TARGETS.$(BOARD_PREBUILT_VENDORIMAGE).META_LIC)),\
- $(eval ALL_TARGETS.$(INSTALLED_VENDORIMAGE_TARGET).META_LIC:=$(ALL_TARGETS.$(BOARD_PREBUILT_VENDORIMAGE).META_LIC)),\
+ $(call declare-copy-target-license-metadata,$(INSTALLED_VENDORIMAGE_TARGET),$(BOARD_PREBUILT_VENDORIMAGE)),\
$(call declare-license-metadata,$(INSTALLED_VENDORIMAGE_TARGET),legacy_proprietary,proprietary,,"Vendor Image",vendor)))
endif
@@ -3710,7 +3773,9 @@
# The odm DLKMs and other odm_dlkm files must not be accessed using other paths because they
# are not guaranteed to exist on all devices.
ifdef BOARD_USES_ODM_DLKMIMAGE
- INTERNAL_ODMIMAGE_FILES += $(call create-partition-compat-symlink,$(TARGET_OUT_ODM)/lib/modules,/odm_dlkm/lib/modules,odm_dlkm.img)
+ _odm_dlkm_lib_modules_symlink := $(call create-partition-compat-symlink,$(TARGET_OUT_ODM)/lib/modules,/odm_dlkm/lib/modules,odm_dlkm.img)
+ INTERNAL_ODMIMAGE_FILES += $(_odm_dlkm_lib_modules_symlink)
+ ALL_DEFAULT_INSTALLED_MODULES += $(_odm_dlkm_lib_modules_symlink)
endif
INSTALLED_FILES_FILE_ODM := $(PRODUCT_OUT)/installed-files-odm.txt
@@ -3904,7 +3969,6 @@
$(INSTALLED_FILES_FILE_SYSTEM_DLKM): $(INTERNAL_SYSTEM_DLKMIMAGE_FILES) $(FILESLIST) $(FILESLIST_UTIL)
@echo Installed file list: $@
mkdir -p $(dir $@)
- if [ -d "$(BOARD_SYSTEM_DLKM_SRC)" ]; then rsync -rupE $(BOARD_SYSTEM_DLKM_SRC)/ $(TARGET_OUT_SYSTEM_DLKM); fi
rm -f $@
$(FILESLIST) $(TARGET_OUT_SYSTEM_DLKM) > $(@:.txt=.json)
$(FILESLIST_UTIL) -c $(@:.txt=.json) > $@
@@ -3960,6 +4024,7 @@
ifeq ($(BOARD_AVB_ENABLE),true)
$(INSTALLED_DTBOIMAGE_TARGET): $(BOARD_PREBUILT_DTBOIMAGE) $(AVBTOOL) $(BOARD_AVB_DTBO_KEY_PATH)
cp $(BOARD_PREBUILT_DTBOIMAGE) $@
+ chmod +w $@
$(AVBTOOL) add_hash_footer \
--image $@ \
$(call get-partition-size-argument,$(BOARD_DTBOIMG_PARTITION_SIZE)) \
@@ -3969,7 +4034,7 @@
$(call declare-1p-container,$(INSTALLED_DTBOIMAGE_TARGET),)
$(call declare-container-license-deps,$(INSTALLED_DTBOIMAGE_TARGET),$(BOARD_PREBUILT_DTBOIMAGE),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_DTBOIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_DTBOIMAGE_TARGET)
else
$(INSTALLED_DTBOIMAGE_TARGET): $(BOARD_PREBUILT_DTBOIMAGE)
cp $(BOARD_PREBUILT_DTBOIMAGE) $@
@@ -3980,33 +4045,26 @@
# -----------------------------------------------------------------
# Protected VM firmware image
ifeq ($(BOARD_USES_PVMFWIMAGE),true)
+
+.PHONY: pvmfwimage
+pvmfwimage: $(INSTALLED_PVMFWIMAGE_TARGET)
+
INSTALLED_PVMFWIMAGE_TARGET := $(PRODUCT_OUT)/pvmfw.img
INSTALLED_PVMFW_EMBEDDED_AVBKEY_TARGET := $(PRODUCT_OUT)/pvmfw_embedded.avbpubkey
-INTERNAL_PREBUILT_PVMFWIMAGE := packages/modules/Virtualization/pvmfw/pvmfw.img
-INTERNAL_PVMFW_EMBEDDED_AVBKEY := external/avb/test/data/testkey_rsa4096_pub.bin
-
-ifdef BOARD_PREBUILT_PVMFWIMAGE
-PREBUILT_PVMFWIMAGE_TARGET := $(BOARD_PREBUILT_PVMFWIMAGE)
-else
-PREBUILT_PVMFWIMAGE_TARGET := $(INTERNAL_PREBUILT_PVMFWIMAGE)
-endif
-
-ifeq ($(BOARD_AVB_ENABLE),true)
-$(INSTALLED_PVMFWIMAGE_TARGET): $(PREBUILT_PVMFWIMAGE_TARGET) $(AVBTOOL) $(BOARD_AVB_PVMFW_KEY_PATH)
- cp $< $@
- $(AVBTOOL) add_hash_footer \
- --image $@ \
- $(call get-partition-size-argument,$(BOARD_PVMFWIMAGE_PARTITION_SIZE)) \
- --partition_name pvmfw $(INTERNAL_AVB_PVMFW_SIGNING_ARGS) \
- $(BOARD_AVB_PVMFW_ADD_HASH_FOOTER_ARGS)
+INSTALLED_PVMFW_BINARY_TARGET := $(call module-installed-files,pvmfw_bin)
+INTERNAL_PVMFWIMAGE_FILES := $(call module-target-built-files,pvmfw_img)
+INTERNAL_PVMFW_EMBEDDED_AVBKEY := $(call module-target-built-files,pvmfw_embedded_key)
+INTERNAL_PVMFW_SYMBOL := $(TARGET_OUT_EXECUTABLES_UNSTRIPPED)/pvmfw
$(call declare-1p-container,$(INSTALLED_PVMFWIMAGE_TARGET),)
-$(call declare-container-license-deps,$(INSTALLED_PVMFWIMAGE_TARGET),$(PREBUILT_PVMFWIMAGE_TARGET),$(PRODUCT_OUT)/:/)
+$(call declare-container-license-deps,$(INSTALLED_PVMFWIMAGE_TARGET),$(INTERNAL_PVMFWIMAGE_FILES),$(PRODUCT_OUT)/:/)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_PVMFWIMAGE_TARGET)
-else
-$(eval $(call copy-one-file,$(PREBUILT_PVMFWIMAGE_TARGET),$(INSTALLED_PVMFWIMAGE_TARGET)))
-endif
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_PVMFWIMAGE_TARGET)
+
+# Place the unstripped pvmfw image in the symbols directory
+$(INTERNAL_PVMFWIMAGE_FILES): | $(INTERNAL_PVMFW_SYMBOL)
+
+$(eval $(call copy-one-file,$(INTERNAL_PVMFWIMAGE_FILES),$(INSTALLED_PVMFWIMAGE_TARGET)))
$(INSTALLED_PVMFWIMAGE_TARGET): $(INSTALLED_PVMFW_EMBEDDED_AVBKEY_TARGET)
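With the prebuilt pvmfw path removed, `$(PRODUCT_OUT)/pvmfw.img` is now a straight copy of the `pvmfw_img` Soong module's output, and the new `pvmfwimage` phony makes it buildable standalone. A short usage sketch in make comments:

```
# New standalone entry point:
#   m pvmfwimage   # copies the pvmfw_img module output to $(PRODUCT_OUT)/pvmfw.img
# The order-only dependency above also ensures the unstripped pvmfw binary
# is staged under $(TARGET_OUT_EXECUTABLES_UNSTRIPPED) for symbolization.
```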
@@ -4102,7 +4160,8 @@
INTERNAL_AVB_PARTITIONS_IN_CHAINED_VBMETA_IMAGES := \
$(BOARD_AVB_VBMETA_SYSTEM) \
- $(BOARD_AVB_VBMETA_VENDOR)
+ $(BOARD_AVB_VBMETA_VENDOR) \
+ $(foreach partition,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS),$(BOARD_AVB_VBMETA_$(call to-upper,$(partition))))
# Not allowing the same partition to appear in multiple groups.
ifneq ($(words $(sort $(INTERNAL_AVB_PARTITIONS_IN_CHAINED_VBMETA_IMAGES))),$(words $(INTERNAL_AVB_PARTITIONS_IN_CHAINED_VBMETA_IMAGES)))
@@ -4408,23 +4467,16 @@
$(eval $(call check-and-set-avb-args,vbmeta_vendor))
endif
+ifdef BOARD_AVB_VBMETA_CUSTOM_PARTITIONS
+$(foreach partition,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS),$(eval $(call check-and-set-avb-args,vbmeta_$(partition))))
+$(foreach partition,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS),$(eval BOARD_AVB_MAKE_VBMETA_$(call to-upper,$(partition))_IMAGE_ARGS += --padding_size 4096))
+endif
+
ifneq ($(strip $(BOARD_CUSTOMIMAGES_PARTITION_LIST)),)
$(foreach partition,$(BOARD_CUSTOMIMAGES_PARTITION_LIST), \
$(eval $(call check-and-set-custom-avb-chain-args,$(partition))))
endif
-# Add kernel cmdline descriptor for kernel to mount system.img as root with
-# dm-verity. This works when system.img is either chained or not-chained:
-# - chained: The --setup_as_rootfs_from_kernel option will add dm-verity kernel
-# cmdline descriptor to system.img
-# - not-chained: The --include_descriptors_from_image option for make_vbmeta_image
-# will include the kernel cmdline descriptor from system.img into vbmeta.img
-ifeq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
-ifeq ($(filter system, $(BOARD_SUPER_PARTITION_PARTITION_LIST)),)
-BOARD_AVB_SYSTEM_ADD_HASHTREE_FOOTER_ARGS += --setup_as_rootfs_from_kernel
-endif
-endif
-
BOARD_AVB_MAKE_VBMETA_IMAGE_ARGS += --padding_size 4096
BOARD_AVB_MAKE_VBMETA_SYSTEM_IMAGE_ARGS += --padding_size 4096
BOARD_AVB_MAKE_VBMETA_VENDOR_IMAGE_ARGS += --padding_size 4096
@@ -4448,6 +4500,13 @@
--rollback_index $(BOARD_AVB_VBMETA_VENDOR_ROLLBACK_INDEX)
endif
+ifdef BOARD_AVB_VBMETA_CUSTOM_PARTITIONS
+ $(foreach partition,$(call to-upper,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS)), \
+ $(if $(BOARD_AVB_VBMETA_$(partition)_ROLLBACK_INDEX),$(eval \
+ BOARD_AVB_MAKE_VBMETA_$(partition)_IMAGE_ARGS += \
+ --rollback_index $(BOARD_AVB_VBMETA_$(partition)_ROLLBACK_INDEX))))
+endif
+
# $(1): the directory to extract public keys to
define extract-avb-chain-public-keys
$(if $(BOARD_AVB_BOOT_KEY_PATH),\
@@ -4504,7 +4563,11 @@
$(if $(BOARD_CUSTOMIMAGES_PARTITION_LIST),\
$(hide) $(foreach partition,$(BOARD_CUSTOMIMAGES_PARTITION_LIST), \
$(AVBTOOL) extract_public_key --key $(BOARD_AVB_$(call to-upper,$(partition))_KEY_PATH) \
- --output $(1)/$(partition).avbpubkey;))
+ --output $(1)/$(partition).avbpubkey;)) \
+ $(if $(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS),\
+ $(hide) $(foreach partition,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS), \
+ $(AVBTOOL) extract_public_key --key $(BOARD_AVB_VBMETA_$(call to-upper,$(partition))_KEY_PATH) \
+ --output $(1)/vbmeta_$(partition).avbpubkey;))
endef
# Builds a chained VBMeta image. This VBMeta image will contain the descriptors for the partitions
@@ -4516,13 +4579,13 @@
# $(1): VBMeta image name, such as "vbmeta_system", "vbmeta_vendor" etc.
# $(2): Output filename.
define build-chained-vbmeta-image
- $(call pretty,"Target chained vbmeta image: $@")
- $(hide) $(AVBTOOL) make_vbmeta_image \
- $(INTERNAL_AVB_$(call to-upper,$(1))_SIGNING_ARGS) \
- $(BOARD_AVB_MAKE_$(call to-upper,$(1))_IMAGE_ARGS) \
- $(foreach image,$(BOARD_AVB_$(call to-upper,$(1))), \
- --include_descriptors_from_image $(call images-for-partitions,$(image))) \
- --output $@
+ $(call pretty,"Target chained vbmeta image: $@")
+ $(hide) $(AVBTOOL) make_vbmeta_image \
+ $(INTERNAL_AVB_$(call to-upper,$(1))_SIGNING_ARGS) \
+ $(BOARD_AVB_MAKE_$(call to-upper,$(1))_IMAGE_ARGS) \
+ $(foreach image,$(BOARD_AVB_$(call to-upper,$(1))), \
+ --include_descriptors_from_image $(call images-for-partitions,$(image))) \
+ --output $@
endef
ifdef BUILDING_SYSTEM_IMAGE
@@ -4550,7 +4613,26 @@
$(call declare-1p-container,$(INSTALLED_VBMETA_VENDORIMAGE_TARGET),)
-UNMOUNTED_NOTICE_DEPS += $(INSTALLED_VBMETA_VENDORIMAGE_TARGET)
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_VBMETA_VENDORIMAGE_TARGET)
+endif
+
+ifdef BOARD_AVB_VBMETA_CUSTOM_PARTITIONS
+define declare-custom-vbmeta-target
+INSTALLED_VBMETA_$(call to-upper,$(1))IMAGE_TARGET := $(PRODUCT_OUT)/vbmeta_$(call to-lower,$(1)).img
+$$(INSTALLED_VBMETA_$(call to-upper,$(1))IMAGE_TARGET): \
+ $(AVBTOOL) \
+ $(call images-for-partitions,$(BOARD_AVB_VBMETA_$(call to-upper,$(1)))) \
+ $(BOARD_AVB_VBMETA_$(call to-upper,$(1))_KEY_PATH)
+ $$(call build-chained-vbmeta-image,vbmeta_$(call to-lower,$(1)))
+
+$(call declare-1p-container,$(INSTALLED_VBMETA_$(call to-upper,$(1))IMAGE_TARGET),)
+
+UNMOUNTED_NOTICE_VENDOR_DEPS += $(INSTALLED_VBMETA_$(call to-upper,$(1))IMAGE_TARGET)
+endef
+
+$(foreach partition,\
+ $(call to-upper,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS)),\
+ $(eval $(call declare-custom-vbmeta-target,$(partition))))
endif
define build-vbmetaimage-target
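The `BOARD_AVB_VBMETA_CUSTOM_PARTITIONS` plumbing introduced above (chain args, padding, rollback indexes, public-key extraction, and the per-partition image rule) mirrors the existing vbmeta_system/vbmeta_vendor handling. A minimal sketch of the board configuration it expects; the grouping name, chained partition, key, and rollback values below are hypothetical:

```
# Hypothetical BoardConfig.mk: build a chained vbmeta_google.img carrying
# the descriptors for the oem partition.
BOARD_AVB_VBMETA_CUSTOM_PARTITIONS := google
BOARD_AVB_VBMETA_GOOGLE := oem
BOARD_AVB_VBMETA_GOOGLE_KEY_PATH := external/avb/test/data/testkey_rsa4096.pem
BOARD_AVB_VBMETA_GOOGLE_ALGORITHM := SHA256_RSA4096
BOARD_AVB_VBMETA_GOOGLE_ROLLBACK_INDEX := 0
BOARD_AVB_VBMETA_GOOGLE_ROLLBACK_INDEX_LOCATION := 2
```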
@@ -4570,6 +4652,7 @@
$(INSTALLED_VBMETAIMAGE_TARGET): PRIVATE_AVB_VBMETA_SIGNING_ARGS := \
--algorithm $(BOARD_AVB_ALGORITHM) --key $(BOARD_AVB_KEY_PATH)
+
$(INSTALLED_VBMETAIMAGE_TARGET): \
$(AVBTOOL) \
$(INSTALLED_BOOTIMAGE_TARGET) \
@@ -4590,8 +4673,10 @@
$(INSTALLED_RECOVERYIMAGE_TARGET) \
$(INSTALLED_VBMETA_SYSTEMIMAGE_TARGET) \
$(INSTALLED_VBMETA_VENDORIMAGE_TARGET) \
+ $(foreach partition,$(call to-upper,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS)),$(INSTALLED_VBMETA_$(partition)IMAGE_TARGET)) \
$(BOARD_AVB_VBMETA_SYSTEM_KEY_PATH) \
$(BOARD_AVB_VBMETA_VENDOR_KEY_PATH) \
+ $(foreach partition,$(call to-upper,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS)),$(BOARD_AVB_VBMETA_$(partition)_KEY_PATH)) \
$(BOARD_AVB_KEY_PATH)
$(build-vbmetaimage-target)
@@ -4620,6 +4705,7 @@
$(INTERNAL_VENDOR_DLKMIMAGE_FILES) \
$(INTERNAL_ODM_DLKMIMAGE_FILES) \
$(INTERNAL_SYSTEM_DLKMIMAGE_FILES) \
+ $(INTERNAL_PVMFWIMAGE_FILES) \
# -----------------------------------------------------------------
# Check VINTF of build
@@ -4630,14 +4716,39 @@
intermediates := $(call intermediates-dir-for,PACKAGING,check_vintf_all)
check_vintf_all_deps :=
+APEX_OUT := $(PRODUCT_OUT)/apex
+# -----------------------------------------------------------------
+# Create apex-info-file.xml
+
+apex_dirs := \
+ $(TARGET_OUT)/apex/% \
+ $(TARGET_OUT_SYSTEM_EXT)/apex/% \
+ $(TARGET_OUT_VENDOR)/apex/% \
+ $(TARGET_OUT_ODM)/apex/% \
+ $(TARGET_OUT_PRODUCT)/apex/% \
+
+apex_files := $(sort $(filter $(apex_dirs), $(INTERNAL_ALLIMAGES_FILES)))
+APEX_INFO_FILE := $(APEX_OUT)/apex-info-list.xml
+
+# dump_apex_info scans $(PRODUCT_OUT)/apex and writes apex-info-list.xml there.
+# This relies on the fact that rules for .apex files install the contents in $(PRODUCT_OUT)/apex.
+$(APEX_INFO_FILE): $(HOST_OUT_EXECUTABLES)/dump_apex_info $(apex_files)
+ @echo "Creating apex-info-file in $(PRODUCT_OUT) "
+ $< --root_dir $(PRODUCT_OUT)
+
+apex_files :=
+apex_dirs :=
+
# The build system only writes VINTF metadata to */etc/vintf paths. Legacy paths aren't needed here
# because they are only used for prebuilt images.
+# APEX files in /vendor/apex can have VINTF fragments as well.
check_vintf_common_srcs_patterns := \
$(TARGET_OUT)/etc/vintf/% \
$(TARGET_OUT_VENDOR)/etc/vintf/% \
$(TARGET_OUT_ODM)/etc/vintf/% \
$(TARGET_OUT_PRODUCT)/etc/vintf/% \
$(TARGET_OUT_SYSTEM_EXT)/etc/vintf/% \
+ $(TARGET_OUT_VENDOR)/apex/% \
check_vintf_common_srcs := $(sort $(filter $(check_vintf_common_srcs_patterns),$(INTERNAL_ALLIMAGES_FILES)))
check_vintf_common_srcs_patterns :=
@@ -4661,23 +4772,27 @@
check_vintf_all_deps += $(check_vintf_system_log)
$(check_vintf_system_log): $(HOST_OUT_EXECUTABLES)/checkvintf $(check_vintf_system_deps)
@( $< --check-one --dirmap /system:$(TARGET_OUT) > $@ 2>&1 ) || ( cat $@ && exit 1 )
-$(call declare-0p-target,$(check_vintf_system_log))
+$(call declare-1p-target,$(check_vintf_system_log))
check_vintf_system_log :=
-vintffm_log := $(intermediates)/vintffm.log
+# -- Check framework manifest against frozen manifests for GSI targets. They need to be compatible.
+ifneq (true, $(BUILDING_VENDOR_IMAGE))
+ vintffm_log := $(intermediates)/vintffm.log
+endif
check_vintf_all_deps += $(vintffm_log)
$(vintffm_log): $(HOST_OUT_EXECUTABLES)/vintffm $(check_vintf_system_deps)
@( $< --check --dirmap /system:$(TARGET_OUT) \
$(VINTF_FRAMEWORK_MANIFEST_FROZEN_DIR) > $@ 2>&1 ) || ( cat $@ && exit 1 )
-$(call declare-0p-target,$(vintffm_log))
+$(call declare-1p-target,$(vintffm_log))
endif # check_vintf_system_deps
check_vintf_system_deps :=
# -- Check vendor manifest / matrix including fragments (excluding other device manifests / matrices)
check_vintf_vendor_deps := $(filter $(TARGET_OUT_VENDOR)/etc/vintf/%, $(check_vintf_common_srcs))
-ifneq ($(check_vintf_vendor_deps),)
+check_vintf_vendor_deps += $(filter $(TARGET_OUT_VENDOR)/apex/%, $(check_vintf_common_srcs))
+ifneq ($(strip $(check_vintf_vendor_deps)),)
check_vintf_has_vendor := true
check_vintf_vendor_log := $(intermediates)/check_vintf_vendor.log
check_vintf_all_deps += $(check_vintf_vendor_log)
@@ -4688,12 +4803,12 @@
$(if $(DEVICE_MANIFEST_FILE),EMPTY_VENDOR_SKU_PLACEHOLDER,\
$(if $(DEVICE_MANIFEST_SKUS),,EMPTY_VENDOR_SKU_PLACEHOLDER)) \
$(DEVICE_MANIFEST_SKUS)
-$(check_vintf_vendor_log): $(HOST_OUT_EXECUTABLES)/checkvintf $(check_vintf_vendor_deps)
+$(check_vintf_vendor_log): $(HOST_OUT_EXECUTABLES)/checkvintf $(check_vintf_vendor_deps) $(APEX_INFO_FILE)
$(foreach vendor_sku,$(PRIVATE_VENDOR_SKUS), \
- ( $< --check-one --dirmap /vendor:$(TARGET_OUT_VENDOR) \
+ ( $< --check-one --dirmap /vendor:$(TARGET_OUT_VENDOR) --dirmap /apex:$(APEX_OUT) \
--property ro.boot.product.vendor.sku=$(filter-out EMPTY_VENDOR_SKU_PLACEHOLDER,$(vendor_sku)) \
> $@ 2>&1 ) || ( cat $@ && exit 1 ); )
-$(call declare-0p-target,$(check_vintf_vendor_log))
+$(call declare-1p-target,$(check_vintf_vendor_log))
check_vintf_vendor_log :=
endif # check_vintf_vendor_deps
check_vintf_vendor_deps :=
@@ -4715,8 +4830,8 @@
$(BUILT_KERNEL_VERSION_FILE):
echo $(BOARD_KERNEL_VERSION) > $@
-$(call declare-0p-target,$(BUILT_KERNEL_CONFIGS_FILE))
-$(call declare-0p-target,$(BUILT_KERNEL_VERSION_FILE))
+$(call declare-license-metadata,$(BUILT_KERNEL_CONFIGS_FILE),SPDX-license-identifier-GPL-2.0-only,restricted,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING,"Kernel",kernel)
+$(call declare-license-metadata,$(BUILT_KERNEL_VERSION_FILE),SPDX-license-identifier-GPL-2.0-only,restricted,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING,"Kernel",kernel)
my_board_extracted_kernel := true
endif # BOARD_KERNEL_VERSION
@@ -4741,7 +4856,7 @@
--output-configs $@ \
--output-release $(BUILT_KERNEL_VERSION_FILE)
-$(call declare-0p-target,$(BUILT_KERNEL_CONFIGS_FILE))
+$(call declare-license-metadata,$(BUILT_KERNEL_CONFIGS_FILE),SPDX-license-identifier-GPL-2.0-only,restricted,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING,"Kernel",kernel)
my_board_extracted_kernel := true
endif # INSTALLED_KERNEL_TARGET
@@ -4762,13 +4877,15 @@
--output-configs $@ \
--output-release $(BUILT_KERNEL_VERSION_FILE)
-$(call declare-0p-target,$(BUILT_KERNEL_CONFIGS_FILE))
+$(call declare-license-metadata,$(BUILT_KERNEL_CONFIGS_FILE),SPDX-license-identifier-GPL-2.0-only,restricted,$(BUILD_SYSTEM)/LINUX_KERNEL_COPYING,"Kernel",kernel)
my_board_extracted_kernel := true
endif # INSTALLED_BOOTIMAGE_TARGET
endif # my_board_extracted_kernel
-ifneq ($(my_board_extracted_kernel),true)
+ifeq ($(my_board_extracted_kernel),true)
+$(call dist-for-goals, droid_targets, $(BUILT_KERNEL_VERSION_FILE))
+else
$(warning Neither INSTALLED_KERNEL_TARGET nor INSTALLED_BOOTIMAGE_TARGET is defined when \
PRODUCT_OTA_ENFORCE_VINTF_KERNEL_REQUIREMENTS is true. Information about the updated kernel \
cannot be built into OTA update package. You can fix this by: \
@@ -4804,7 +4921,7 @@
check_vintf_all_deps += $(check_vintf_compatible_log)
check_vintf_compatible_args :=
-check_vintf_compatible_deps := $(check_vintf_common_srcs)
+check_vintf_compatible_deps := $(check_vintf_common_srcs) $(APEX_INFO_FILE)
ifeq ($(PRODUCT_OTA_ENFORCE_VINTF_KERNEL_REQUIREMENTS),true)
ifneq (,$(BUILT_KERNEL_VERSION_FILE)$(BUILT_KERNEL_CONFIGS_FILE))
@@ -4819,6 +4936,7 @@
--dirmap /odm:$(TARGET_OUT_ODM) \
--dirmap /product:$(TARGET_OUT_PRODUCT) \
--dirmap /system_ext:$(TARGET_OUT_SYSTEM_EXT) \
+ --dirmap /apex:$(APEX_OUT) \
ifdef PRODUCT_SHIPPING_API_LEVEL
check_vintf_compatible_args += --property ro.product.first_api_level=$(PRODUCT_SHIPPING_API_LEVEL)
@@ -4853,7 +4971,7 @@
--property ro.boot.product.vendor.sku=$(filter-out EMPTY_VENDOR_SKU_PLACEHOLDER,$(vendor_sku)) \
>> $@ 2>&1 ) || (cat $@ && exit 1); ))
-$(call declare-0p-target,$(check_vintf_compatible_log))
+$(call declare-1p-target,$(check_vintf_compatible_log))
check_vintf_compatible_log :=
check_vintf_compatible_args :=
@@ -4918,7 +5036,7 @@
$(call intermediates-dir-for,PACKAGING,check-all-partition-sizes)/misc_info.txt, \
$@)
-$(call declare-0p-target,$(check_all_partition_sizes_log))
+$(call declare-1p-target,$(check_all_partition_sizes_log))
.PHONY: check-all-partition-sizes
check-all-partition-sizes: $(check_all_partition_sizes_log)
@@ -5006,6 +5124,7 @@
check_target_files_signatures \
check_target_files_vintf \
checkvintf \
+ create_brick_ota \
delta_generator \
e2fsck \
e2fsdroid \
@@ -5020,12 +5139,14 @@
img2simg \
img_from_target_files \
imgdiff \
+ initrd_bootconfig \
libconscrypt_openjdk_jni \
lpmake \
lpunpack \
lz4 \
make_f2fs \
make_f2fs_casefold \
+ merge_ota \
merge_target_files \
minigzip \
mk_combined_img \
@@ -5034,9 +5155,9 @@
mke2fs \
mke2fs.conf \
mkfs.erofs \
- mkf2fsuserimg.sh \
+ mkf2fsuserimg \
mksquashfs \
- mksquashfsimage.sh \
+ mksquashfsimage \
mkuserimg_mke2fs \
ota_extractor \
ota_from_target_files \
@@ -5060,20 +5181,26 @@
verity_verifier \
zipalign \
zucchini \
+ zip2zip \
+
# Additional tools to unpack and repack the apex file.
INTERNAL_OTATOOLS_MODULES += \
apexer \
apex_compression_tool \
+ blkid_static \
deapexer \
debugfs_static \
+ dump_apex_info \
+ fsck.erofs \
+ make_erofs \
merge_zips \
resize2fs \
soong_zip \
ifeq (true,$(PRODUCT_SUPPORTS_VBOOT))
INTERNAL_OTATOOLS_MODULES += \
- futility \
+ futility-host \
vboot_signer
endif
@@ -5094,7 +5221,13 @@
INTERNAL_OTATOOLS_PACKAGE_FILES += \
$(sort $(shell find build/make/target/product/security -type f -name "*.x509.pem" -o \
- -name "*.pk8" -o -name verity_key))
+ -name "*.pk8"))
+
+ifneq (,$(wildcard packages/modules))
+INTERNAL_OTATOOLS_PACKAGE_FILES += \
+ $(sort $(shell find packages/modules -type f -name "*.x509.pem" -o -name "*.pk8" -o -name \
+ "key.pem"))
+endif
ifneq (,$(wildcard device))
INTERNAL_OTATOOLS_PACKAGE_FILES += \
@@ -5112,8 +5245,8 @@
endif
INTERNAL_OTATOOLS_RELEASETOOLS := \
- $(sort $(shell find build/make/tools/releasetools -name "*.pyc" -prune -o \
- \( -type f -o -type l \) -print))
+ $(shell find build/make/tools/releasetools -name "*.pyc" -prune -o \
+ \( -type f -o -type l \) -print | sort)
BUILT_OTATOOLS_PACKAGE := $(PRODUCT_OUT)/otatools.zip
$(BUILT_OTATOOLS_PACKAGE): PRIVATE_ZIP_ROOT := $(call intermediates-dir-for,PACKAGING,otatools)/otatools
@@ -5135,6 +5268,10 @@
.PHONY: otatools-package
otatools-package: $(BUILT_OTATOOLS_PACKAGE)
+$(call dist-for-goals, otatools-package, \
+ $(BUILT_OTATOOLS_PACKAGE) \
+)
+
endif # build_otatools_package
# -----------------------------------------------------------------
@@ -5298,11 +5435,20 @@
endif # BOARD_AVB_VBMETA_SYSTEM
ifneq (,$(strip $(BOARD_AVB_VBMETA_VENDOR)))
$(hide) echo "avb_vbmeta_vendor=$(BOARD_AVB_VBMETA_VENDOR)" >> $@
- $(hide) echo "avb_vbmeta_vendor_args=$(BOARD_AVB_MAKE_VBMETA_SYSTEM_IMAGE_ARGS)" >> $@
+ $(hide) echo "avb_vbmeta_vendor_args=$(BOARD_AVB_MAKE_VBMETA_VENDOR_IMAGE_ARGS)" >> $@
$(hide) echo "avb_vbmeta_vendor_key_path=$(BOARD_AVB_VBMETA_VENDOR_KEY_PATH)" >> $@
$(hide) echo "avb_vbmeta_vendor_algorithm=$(BOARD_AVB_VBMETA_VENDOR_ALGORITHM)" >> $@
$(hide) echo "avb_vbmeta_vendor_rollback_index_location=$(BOARD_AVB_VBMETA_VENDOR_ROLLBACK_INDEX_LOCATION)" >> $@
endif # BOARD_AVB_VBMETA_VENDOR_KEY_PATH
+ifneq (,$(strip $(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS)))
+ $(hide) echo "avb_custom_vbmeta_images_partition_list=$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS)" >> $@
+ $(hide) $(foreach partition,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS),\
+ echo "avb_vbmeta_$(partition)=$(BOARD_AVB_VBMETA_$(call to-upper,$(partition)))" >> $@ ;\
+ echo "avb_vbmeta_$(partition)_args=$(BOARD_AVB_MAKE_VBMETA_$(call to-upper,$(partition))_IMAGE_ARGS)" >> $@ ;\
+ echo "avb_vbmeta_$(partition)_key_path=$(BOARD_AVB_VBMETA_$(call to-upper,$(partition))_KEY_PATH)" >> $@ ;\
+ echo "avb_vbmeta_$(partition)_algorithm=$(BOARD_AVB_VBMETA_$(call to-upper,$(partition))_ALGORITHM)" >> $@ ;\
+ echo "avb_vbmeta_$(partition)_rollback_index_location=$(BOARD_AVB_VBMETA_$(call to-upper,$(partition))_ROLLBACK_INDEX_LOCATION)" >> $@ ;)
+endif # BOARD_AVB_VBMETA_CUSTOM_PARTITIONS
endif # BOARD_AVB_ENABLE
ifdef BOARD_BPT_INPUT_FILES
$(hide) echo "board_bpt_enable=true" >> $@
@@ -5425,7 +5571,7 @@
tool_extension := $(wildcard $(tool_extensions)/releasetools.py)
$(BUILT_TARGET_FILES_PACKAGE): PRIVATE_TOOL_EXTENSION := $(tool_extension)
-updaer_dep :=
+updater_dep :=
ifeq ($(AB_OTA_UPDATER),true)
updater_dep += system/update_engine/update_engine.conf
$(call declare-1p-target,system/update_engine/update_engine.conf,system/update_engine)
@@ -5673,6 +5819,7 @@
$(INSTALLED_CACHEIMAGE_TARGET) \
$(INSTALLED_DTBOIMAGE_TARGET) \
$(INSTALLED_PVMFWIMAGE_TARGET) \
+ $(INSTALLED_PVMFW_BINARY_TARGET) \
$(INSTALLED_PVMFW_EMBEDDED_AVBKEY_TARGET) \
$(INSTALLED_CUSTOMIMAGES_TARGET) \
$(INSTALLED_ANDROID_INFO_TXT_TARGET) \
@@ -5759,10 +5906,8 @@
$(TARGET_ROOT_OUT),$(zip_root)/ROOT)
@# If we are using recovery as boot, this is already done when processing recovery.
ifneq ($(BOARD_USES_RECOVERY_AS_BOOT),true)
-ifneq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
$(hide) $(call package_files-copy-root, \
$(TARGET_RAMDISK_OUT),$(zip_root)/BOOT/RAMDISK)
-endif
ifdef INSTALLED_KERNEL_TARGET
$(hide) cp $(INSTALLED_KERNEL_TARGET) $(zip_root)/BOOT/
endif
@@ -6025,6 +6170,8 @@
$(hide) mkdir -p $(zip_root)/PREBUILT_IMAGES
$(hide) cp $(INSTALLED_PVMFWIMAGE_TARGET) $(zip_root)/PREBUILT_IMAGES/
$(hide) cp $(INSTALLED_PVMFW_EMBEDDED_AVBKEY_TARGET) $(zip_root)/PREBUILT_IMAGES/
+ $(hide) mkdir -p $(zip_root)/PVMFW
+ $(hide) cp $(INSTALLED_PVMFW_BINARY_TARGET) $(zip_root)/PVMFW/
endif
ifdef BOARD_PREBUILT_BOOTLOADER
$(hide) mkdir -p $(zip_root)/IMAGES
@@ -6068,13 +6215,12 @@
endif
@# ROOT always contains the files for the root under normal boot.
$(hide) $(call fs_config,$(zip_root)/ROOT,) > $(zip_root)/META/root_filesystem_config.txt
-ifeq ($(BOARD_USES_RECOVERY_AS_BOOT),true)
- @# BOOT/RAMDISK exists and contains the ramdisk for recovery if using BOARD_USES_RECOVERY_AS_BOOT.
+ @# BOOT/RAMDISK contains the first stage and recovery ramdisk.
$(hide) $(call fs_config,$(zip_root)/BOOT/RAMDISK,) > $(zip_root)/META/boot_filesystem_config.txt
-endif
ifdef BUILDING_INIT_BOOT_IMAGE
$(hide) $(call package_files-copy-root, $(TARGET_RAMDISK_OUT),$(zip_root)/INIT_BOOT/RAMDISK)
$(hide) $(call fs_config,$(zip_root)/INIT_BOOT/RAMDISK,) > $(zip_root)/META/init_boot_filesystem_config.txt
+ $(hide) cp $(RAMDISK_NODE_LIST) $(zip_root)/META/ramdisk_node_list
ifdef BOARD_KERNEL_PAGESIZE
$(hide) echo "$(BOARD_KERNEL_PAGESIZE)" > $(zip_root)/INIT_BOOT/pagesize
endif # BOARD_KERNEL_PAGESIZE
@@ -6082,10 +6228,6 @@
ifneq ($(INSTALLED_VENDOR_BOOTIMAGE_TARGET),)
$(call fs_config,$(zip_root)/VENDOR_BOOT/RAMDISK,) > $(zip_root)/META/vendor_boot_filesystem_config.txt
endif
-ifneq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
- @# BOOT/RAMDISK also exists and contains the first stage ramdisk if not using BOARD_BUILD_SYSTEM_ROOT_IMAGE.
- $(hide) $(call fs_config,$(zip_root)/BOOT/RAMDISK,) > $(zip_root)/META/boot_filesystem_config.txt
-endif
ifneq ($(INSTALLED_RECOVERYIMAGE_TARGET),)
$(hide) $(call fs_config,$(zip_root)/RECOVERY/RAMDISK,) > $(zip_root)/META/recovery_filesystem_config.txt
endif
@@ -6158,12 +6300,14 @@
# -----------------------------------------------------------------
# NDK Sysroot Package
NDK_SYSROOT_TARGET := $(PRODUCT_OUT)/ndk_sysroot.tar.bz2
+.PHONY: ndk_sysroot
+ndk_sysroot: $(NDK_SYSROOT_TARGET)
$(NDK_SYSROOT_TARGET): $(SOONG_OUT_DIR)/ndk.timestamp
@echo Package NDK sysroot...
$(hide) tar cjf $@ -C $(SOONG_OUT_DIR) ndk
ifeq ($(HOST_OS),linux)
-$(call dist-for-goals,sdk,$(NDK_SYSROOT_TARGET))
+$(call dist-for-goals,sdk ndk_sysroot,$(NDK_SYSROOT_TARGET))
endif
ifeq ($(build_ota_package),true)
@@ -6270,7 +6414,7 @@
# The mac build doesn't build dex2oat, so create the zip file only if the build OS is linux.
ifeq ($(BUILD_OS),linux)
ifneq ($(DEX2OAT),)
-dexpreopt_tools_deps := $(DEXPREOPT_GEN_DEPS) $(DEXPREOPT_GEN) $(AAPT2)
+dexpreopt_tools_deps := $(DEXPREOPT_GEN_DEPS) $(DEXPREOPT_GEN)
dexpreopt_tools_deps += $(HOST_OUT_EXECUTABLES)/dexdump
dexpreopt_tools_deps += $(HOST_OUT_EXECUTABLES)/oatdump
DEXPREOPT_TOOLS_ZIP := $(PRODUCT_OUT)/dexpreopt_tools.zip
@@ -6387,7 +6531,7 @@
ifeq (true,$(CLANG_COVERAGE))
LLVM_PROFDATA := $(LLVM_PREBUILTS_BASE)/linux-x86/$(LLVM_PREBUILTS_VERSION)/bin/llvm-profdata
LLVM_COV := $(LLVM_PREBUILTS_BASE)/linux-x86/$(LLVM_PREBUILTS_VERSION)/bin/llvm-cov
- LIBCXX := $(LLVM_PREBUILTS_BASE)/linux-x86/$(LLVM_PREBUILTS_VERSION)/lib64/libc++.so.1
+ LIBCXX := $(LLVM_PREBUILTS_BASE)/linux-x86/$(LLVM_PREBUILTS_VERSION)/lib/x86_64-unknown-linux-gnu/libc++.so.1
# Use llvm-profdata.zip for backwards compatibility with tradefed code.
LLVM_COVERAGE_TOOLS_ZIP := $(PRODUCT_OUT)/llvm-profdata.zip
@@ -6444,6 +6588,14 @@
ifeq (,$(TARGET_BUILD_UNBUNDLED))
$(JACOCO_REPORT_CLASSES_ALL): $(INTERNAL_ALLIMAGES_FILES)
endif
+
+# This is not ideal, but it is difficult to correctly figure out the actual jacoco report
+# jars we need to add here as dependencies, so we add the device-tests as a dependency when
+# the env variable is set. This should guarantee that all the jacoco report jars are ready
+# when we package the final report jar here.
+ifeq ($(JACOCO_PACKAGING_INCLUDE_DEVICE_TESTS),true)
+ $(JACOCO_REPORT_CLASSES_ALL): $(COMPATIBILITY.device-tests.FILES)
+endif
endif # EMMA_INSTRUMENT=true
@@ -6605,22 +6757,22 @@
endif
endif
-# If BOARD_BUILD_SUPER_IMAGE_BY_DEFAULT is set, super.img is built from images in the
-# $(PRODUCT_OUT) directory, and is built to $(PRODUCT_OUT)/super.img. Also, it will
-# be built for non-dist builds. This is useful for devices that uses super.img directly, e.g.
-# virtual devices.
-ifeq (true,$(BOARD_BUILD_SUPER_IMAGE_BY_DEFAULT))
$(INSTALLED_SUPERIMAGE_TARGET): $(INSTALLED_SUPERIMAGE_DEPENDENCIES)
$(call pretty,"Target super fs image for debug: $@")
$(call build-superimage-target,$(INSTALLED_SUPERIMAGE_TARGET),\
$(call intermediates-dir-for,PACKAGING,superimage_debug)/misc_info.txt)
-droidcore-unbundled: $(INSTALLED_SUPERIMAGE_TARGET)
-
# For devices that use the super image directly, the superimage target points to the file in $(PRODUCT_OUT).
.PHONY: superimage
superimage: $(INSTALLED_SUPERIMAGE_TARGET)
+# If BOARD_BUILD_SUPER_IMAGE_BY_DEFAULT is set, super.img is built from images in the
+# $(PRODUCT_OUT) directory, and is built to $(PRODUCT_OUT)/super.img. Also, it will
+# be built for non-dist builds. This is useful for devices that use super.img directly, e.g.
+# virtual devices.
+ifeq (true,$(BOARD_BUILD_SUPER_IMAGE_BY_DEFAULT))
+droidcore-unbundled: $(INSTALLED_SUPERIMAGE_TARGET)
+
$(call dist-for-goals,dist_files,$(INSTALLED_MISC_INFO_TARGET):super_misc_info.txt)
endif # BOARD_BUILD_SUPER_IMAGE_BY_DEFAULT
@@ -6881,12 +7033,26 @@
$(HOST_OUT_EXECUTABLES)/atree \
$(HOST_OUT_EXECUTABLES)/line_endings
+# The name of the subdir within the platforms dir of the sdk. One of:
+# - android-<SDK_INT> (stable base dessert SDKs)
+# - android-<CODENAME> (codename SDKs)
+# - android-<SDK_INT>-ext<EXT_INT> (stable extension SDKs)
+sdk_platform_dir_name := $(strip \
+ $(if $(filter REL,$(PLATFORM_VERSION_CODENAME)), \
+ $(if $(filter $(PLATFORM_SDK_EXTENSION_VERSION),$(PLATFORM_BASE_SDK_EXTENSION_VERSION)), \
+ android-$(PLATFORM_SDK_VERSION), \
+ android-$(PLATFORM_SDK_VERSION)-ext$(PLATFORM_SDK_EXTENSION_VERSION) \
+ ), \
+ android-$(PLATFORM_VERSION_CODENAME) \
+ ) \
+)
+
INTERNAL_SDK_TARGET := $(sdk_dir)/$(sdk_name).zip
$(INTERNAL_SDK_TARGET): PRIVATE_NAME := $(sdk_name)
$(INTERNAL_SDK_TARGET): PRIVATE_DIR := $(sdk_dir)/$(sdk_name)
$(INTERNAL_SDK_TARGET): PRIVATE_DEP_FILE := $(sdk_dep_file)
$(INTERNAL_SDK_TARGET): PRIVATE_INPUT_FILES := $(sdk_atree_files)
-
+$(INTERNAL_SDK_TARGET): PRIVATE_PLATFORM_NAME := $(sdk_platform_dir_name)
# Set SDK_GNU_ERROR to non-empty to fail when a GNU target is built.
#
#SDK_GNU_ERROR := true
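A quick worked check of the `sdk_platform_dir_name` computation above, under assumed version values:

```
# PLATFORM_VERSION_CODENAME=REL, PLATFORM_SDK_VERSION=33,
# PLATFORM_SDK_EXTENSION_VERSION=4, PLATFORM_BASE_SDK_EXTENSION_VERSION=4
#   -> android-33             (stable base dessert SDK)
# Same, but PLATFORM_SDK_EXTENSION_VERSION=5
#   -> android-33-ext5        (stable extension SDK)
# PLATFORM_VERSION_CODENAME=UpsideDownCake (anything other than REL)
#   -> android-UpsideDownCake (codename SDK)
```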
@@ -6911,7 +7077,7 @@
-I $(PRODUCT_OUT) \
-I $(HOST_OUT) \
-I $(TARGET_COMMON_OUT_ROOT) \
- -v "PLATFORM_NAME=android-$(PLATFORM_VERSION)" \
+ -v "PLATFORM_NAME=$(PRIVATE_PLATFORM_NAME)" \
-v "OUT_DIR=$(OUT_DIR)" \
-v "HOST_OUT=$(HOST_OUT)" \
-v "TARGET_ARCH=$(TARGET_ARCH)" \
@@ -6996,14 +7162,14 @@
.PHONY: haiku
haiku: $(SOONG_FUZZ_PACKAGING_ARCH_MODULES) $(ALL_FUZZ_TARGETS)
$(call dist-for-goals,haiku,$(SOONG_FUZZ_PACKAGING_ARCH_MODULES))
-
+$(call dist-for-goals,haiku,$(PRODUCT_OUT)/module-info.json)
.PHONY: haiku-java
haiku-java: $(SOONG_JAVA_FUZZ_PACKAGING_ARCH_MODULES) $(ALL_JAVA_FUZZ_TARGETS)
$(call dist-for-goals,haiku-java,$(SOONG_JAVA_FUZZ_PACKAGING_ARCH_MODULES))
-
.PHONY: haiku-rust
haiku-rust: $(SOONG_RUST_FUZZ_PACKAGING_ARCH_MODULES) $(ALL_RUST_FUZZ_TARGETS)
$(call dist-for-goals,haiku-rust,$(SOONG_RUST_FUZZ_PACKAGING_ARCH_MODULES))
+$(call dist-for-goals,haiku-rust,$(PRODUCT_OUT)/module-info.json)
# -----------------------------------------------------------------
# Extract platform fonts used in Layoutlib
diff --git a/core/OWNERS b/core/OWNERS
index a016fb4..dbd7066 100644
--- a/core/OWNERS
+++ b/core/OWNERS
@@ -1,5 +1,6 @@
-per-file dex_preopt*.mk = ngeoffray@google.com,calin@google.com,mathewi@google.com,skvadrik@google.com
-per-file verify_uses_libraries.sh = ngeoffray@google.com,calin@google.com,skvadrik@google.com
+
+# For global Proguard rules
+per-file proguard*.flags = jdduke@google.com
# For version updates
per-file version_defaults.mk = ankurbakshi@google.com,bkhalife@google.com,jainne@google.com,lokeshgoel@google.com,lubomir@google.com,pscovanner@google.com
diff --git a/core/android_manifest.mk b/core/android_manifest.mk
index 254e09b..ff49262 100644
--- a/core/android_manifest.mk
+++ b/core/android_manifest.mk
@@ -87,13 +87,23 @@
endif
endif
+# TODO: Replace this hardcoded list of optional uses-libraries with build logic
+# that propagates optionality via the generated exported-sdk-libs files.
+# Hardcoding doesn't scale and enforces a single choice on each library, while in
+# reality this is a choice of the library users (which may differ).
+my_optional_sdk_lib_names := \
+ android.test.base \
+ android.test.mock \
+ androidx.window.extensions \
+ androidx.window.sidecar
+
$(fixed_android_manifest): PRIVATE_MANIFEST_FIXER_FLAGS := $(my_manifest_fixer_flags)
# These two libs are added as optional dependencies (<uses-library> with
# android:required set to false). This is because they haven't existed in pre-P
# devices, but classes in them were in bootclasspath jars, etc. So making them
# hard dependencies (android:required=true) would prevent apps from being
# installed to such legacy devices.
-$(fixed_android_manifest): PRIVATE_OPTIONAL_SDK_LIB_NAMES := android.test.base android.test.mock
+$(fixed_android_manifest): PRIVATE_OPTIONAL_SDK_LIB_NAMES := $(my_optional_sdk_lib_names)
$(fixed_android_manifest): $(MANIFEST_FIXER)
$(fixed_android_manifest): $(main_android_manifest)
echo $(PRIVATE_OPTIONAL_SDK_LIB_NAMES) | tr ' ' '\n' > $(PRIVATE_EXPORTED_SDK_LIBS_FILE).optional
@@ -109,3 +119,5 @@
) \
$< $@
rm $(PRIVATE_EXPORTED_SDK_LIBS_FILE).optional
+
+my_optional_sdk_lib_names :=
diff --git a/core/android_soong_config_vars.mk b/core/android_soong_config_vars.mk
index cba0e03..6bac52b 100644
--- a/core/android_soong_config_vars.mk
+++ b/core/android_soong_config_vars.mk
@@ -26,6 +26,8 @@
# Add variables to the namespace below:
+$(call add_soong_config_var,ANDROID,TARGET_DYNAMIC_64_32_MEDIASERVER)
+$(call add_soong_config_var,ANDROID,TARGET_DYNAMIC_64_32_DRMSERVER)
$(call add_soong_config_var,ANDROID,TARGET_ENABLE_MEDIADRM_64)
$(call add_soong_config_var,ANDROID,IS_TARGET_MIXED_SEPOLICY)
ifeq ($(IS_TARGET_MIXED_SEPOLICY),true)
@@ -33,7 +35,6 @@
endif
$(call add_soong_config_var,ANDROID,BOARD_USES_ODMIMAGE)
$(call add_soong_config_var,ANDROID,BOARD_USES_RECOVERY_AS_BOOT)
-$(call add_soong_config_var,ANDROID,BOARD_BUILD_SYSTEM_ROOT_IMAGE)
$(call add_soong_config_var,ANDROID,PRODUCT_INSTALL_DEBUG_POLICY_TO_SYSTEM_EXT)
# Default behavior for the tree wrt building modules or using prebuilts. This
@@ -60,13 +61,6 @@
BRANCH_DEFAULT_MODULE_BUILD_FROM_SOURCE := true
endif
-# TODO(b/172063604): Remove once products no longer use dex2oat(d)s.
-# If the product uses dex2oats and/or dex2oatds then build from sources as
-# ART does not currently provide prebuilts of those tools.
-ifneq (,$(filter dex2oats dex2oatds,$(PRODUCT_HOST_PACKAGES)))
- BRANCH_DEFAULT_MODULE_BUILD_FROM_SOURCE := true
-endif
-
# ART does not provide linux_bionic variants needed for products that
# set HOST_CROSS_OS=linux_bionic.
ifeq (linux_bionic,${HOST_CROSS_OS})
@@ -93,9 +87,9 @@
ifneq (,$(MODULE_BUILD_FROM_SOURCE))
# Keep an explicit setting.
-else ifeq (,$(filter docs sdk win_sdk sdk_addon,$(MAKECMDGOALS))$(findstring com.google.android.conscrypt,$(PRODUCT_PACKAGES)))
+else ifeq (,$(filter docs sdk win_sdk sdk_addon,$(MAKECMDGOALS))$(findstring com.google.android.conscrypt,$(PRODUCT_PACKAGES))$(findstring com.google.android.go.conscrypt,$(PRODUCT_PACKAGES)))
# Prebuilt module SDKs require prebuilt modules to work, and currently
- # prebuilt modules are only provided for com.google.android.xxx. If we can't
+ # prebuilt modules are only provided for com.google.android(.go)?.xxx. If we can't
# find one of them in PRODUCT_PACKAGES then assume com.android.xxx are in use,
# and disable prebuilt SDKs. In particular this applies to AOSP builds.
#
@@ -131,6 +125,7 @@
INDIVIDUALLY_TOGGLEABLE_PREBUILT_MODULES := \
btservices \
permission \
+ rkpd \
uwb \
wifi \
@@ -141,6 +136,10 @@
# Apex build mode variables
ifdef APEX_BUILD_FOR_PRE_S_DEVICES
$(call add_soong_config_var_value,ANDROID,library_linking_strategy,prefer_static)
+else
+ifdef KEEP_APEX_INHERIT
+$(call add_soong_config_var_value,ANDROID,library_linking_strategy,prefer_static)
+endif
endif
ifeq (true,$(MODULE_BUILD_FROM_SOURCE))
@@ -160,6 +159,10 @@
SYSTEMUI_USE_COMPOSE ?= false
$(call add_soong_config_var,ANDROID,SYSTEMUI_USE_COMPOSE)
+ifdef PRODUCT_AVF_ENABLED
+$(call add_soong_config_var_value,ANDROID,avf_enabled,$(PRODUCT_AVF_ENABLED))
+endif
+
# Enable system_server optimizations by default unless explicitly set or if
# there may be dependent runtime jars.
# TODO(b/240588226): Remove the off-by-default exceptions after handling
diff --git a/core/app_prebuilt_internal.mk b/core/app_prebuilt_internal.mk
index eb429cd..9fab44d 100644
--- a/core/app_prebuilt_internal.mk
+++ b/core/app_prebuilt_internal.mk
@@ -302,3 +302,7 @@
endif # LOCAL_PACKAGE_SPLITS
+###########################################################
+## SBOM generation
+###########################################################
+include $(BUILD_SBOM_GEN)
\ No newline at end of file
diff --git a/core/art_config.mk b/core/art_config.mk
new file mode 100644
index 0000000..1ea05db
--- /dev/null
+++ b/core/art_config.mk
@@ -0,0 +1,46 @@
+# ART configuration that has to be determined after product config is resolved.
+#
+# Inputs:
+# PRODUCT_ENABLE_UFFD_GC: See comments in build/make/core/product.mk.
+# OVERRIDE_ENABLE_UFFD_GC: Overrides PRODUCT_ENABLE_UFFD_GC. Can be passed from the commandline for
+# debugging purposes.
+# BOARD_API_LEVEL: See comments in build/make/core/main.mk.
+# BOARD_SHIPPING_API_LEVEL: See comments in build/make/core/main.mk.
+# PRODUCT_SHIPPING_API_LEVEL: See comments in build/make/core/product.mk.
+#
+# Outputs:
+# ENABLE_UFFD_GC: Whether to use userfaultfd GC.
+
+config_enable_uffd_gc := \
+ $(firstword $(OVERRIDE_ENABLE_UFFD_GC) $(PRODUCT_ENABLE_UFFD_GC))
+
+ifeq (,$(filter-out default,$(config_enable_uffd_gc)))
+ ENABLE_UFFD_GC := true
+
+ # Disable userfaultfd GC if the device doesn't support it (i.e., if
+ # `min(ro.board.api_level ?? ro.board.first_api_level ?? MAX_VALUE,
+ # ro.product.first_api_level ?? ro.build.version.sdk ?? MAX_VALUE) < 31`)
+ # This logic aligns with how `ro.vendor.api_level` is calculated in
+ # `system/core/init/property_service.cpp`.
+ # We omit the check on `ro.build.version.sdk` here because we are on the latest build system.
+ board_api_level := $(firstword $(BOARD_API_LEVEL) $(BOARD_SHIPPING_API_LEVEL))
+ ifneq (,$(board_api_level))
+ ifeq (true,$(call math_lt,$(board_api_level),31))
+ ENABLE_UFFD_GC := false
+ endif
+ endif
+
+ ifneq (,$(PRODUCT_SHIPPING_API_LEVEL))
+ ifeq (true,$(call math_lt,$(PRODUCT_SHIPPING_API_LEVEL),31))
+ ENABLE_UFFD_GC := false
+ endif
+ endif
+else ifeq (true,$(config_enable_uffd_gc))
+ ENABLE_UFFD_GC := true
+else ifeq (false,$(config_enable_uffd_gc))
+ ENABLE_UFFD_GC := false
+else
+ $(error Unknown PRODUCT_ENABLE_UFFD_GC value: $(config_enable_uffd_gc))
+endif
+
+ADDITIONAL_PRODUCT_PROPERTIES += ro.dalvik.vm.enable_uffd_gc=$(ENABLE_UFFD_GC)
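A worked example of the defaulting logic in this file, with hypothetical board values:

```
# PRODUCT_ENABLE_UFFD_GC unset or "default", no BOARD_API_LEVEL,
# BOARD_SHIPPING_API_LEVEL := 30:
#   board_api_level = 30, math_lt(30,31) = true
#   -> ENABLE_UFFD_GC := false
#   -> ro.dalvik.vm.enable_uffd_gc=false in ADDITIONAL_PRODUCT_PROPERTIES
# With both API levels at 31 or above (or OVERRIDE_ENABLE_UFFD_GC=true),
# the property is emitted as true instead.
```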
diff --git a/core/base_rules.mk b/core/base_rules.mk
index 175b06b..893091a 100644
--- a/core/base_rules.mk
+++ b/core/base_rules.mk
@@ -20,7 +20,11 @@
# Users can define base-rules-hook in their buildspec.mk to perform
# arbitrary operations as each module is included.
ifdef base-rules-hook
-$(if $(base-rules-hook),)
+ ifndef _has_warned_about_base_rules_hook
+ $(warning base-rules-hook is deprecated, please remove usages of it and/or convert to Soong.)
+ _has_warned_about_base_rules_hook := true
+ endif
+ $(if $(base-rules-hook),)
endif
###########################################################
@@ -596,7 +600,11 @@
# Manually handle the case where the
# output file is in the recovery or ramdisk partition.
ifneq (,$(filter $(TARGET_RECOVERY_ROOT_OUT)/%,$(my_module_path)))
- my_init_rc_path := $(TARGET_RECOVERY_ROOT_OUT)/system/etc
+ ifneq (,$(filter $(TARGET_RECOVERY_ROOT_OUT)/first_stage_ramdisk/%,$(my_module_path)))
+ my_init_rc_path := $(TARGET_RECOVERY_ROOT_OUT)/first_stage_ramdisk/system/etc
+ else
+ my_init_rc_path := $(TARGET_RECOVERY_ROOT_OUT)/system/etc
+ endif
else ifneq (,$(filter $(TARGET_RAMDISK_OUT)/%,$(my_module_path)))
my_init_rc_path := $(TARGET_RAMDISK_OUT)/system/etc
else
@@ -708,6 +716,23 @@
## Compatibility suite files.
###########################################################
ifdef LOCAL_COMPATIBILITY_SUITE
+
+ifneq (,$(LOCAL_FULL_TEST_CONFIG))
+ test_config := $(LOCAL_FULL_TEST_CONFIG)
+else ifneq (,$(LOCAL_TEST_CONFIG))
+ test_config := $(LOCAL_PATH)/$(LOCAL_TEST_CONFIG)
+else
+ test_config := $(wildcard $(LOCAL_PATH)/AndroidTest.xml)
+endif
+
+ifeq ($(EXCLUDE_MCTS),true)
+ ifneq (,$(test_config))
+ ifneq (,$(filter mcts-%,$(LOCAL_COMPATIBILITY_SUITE)))
+ LOCAL_COMPATIBILITY_SUITE := $(filter-out cts,$(LOCAL_COMPATIBILITY_SUITE))
+ endif
+ endif
+endif
+
ifneq (true,$(LOCAL_UNINSTALLABLE_MODULE))
ifeq ($(EXCLUDE_MCTS),true)
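A small illustration of the new MCTS filter above; the suite names are hypothetical:

```
# With EXCLUDE_MCTS := true and a test module declaring, for example,
#   LOCAL_COMPATIBILITY_SUITE := cts mcts-wifi
# the filter-out drops "cts", so the module is staged only into the
# mcts-% suite and no longer lands in the CTS package.
```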
@@ -762,13 +787,6 @@
# Auto-generate build config.
-ifneq (,$(LOCAL_FULL_TEST_CONFIG))
- test_config := $(LOCAL_FULL_TEST_CONFIG)
-else ifneq (,$(LOCAL_TEST_CONFIG))
- test_config := $(LOCAL_PATH)/$(LOCAL_TEST_CONFIG)
-else
- test_config := $(wildcard $(LOCAL_PATH)/AndroidTest.xml)
-endif
ifeq (,$(test_config))
ifneq (true,$(is_native))
is_instrumentation_test := true
@@ -847,16 +865,6 @@
endif
endif # $(my_prefix)$(LOCAL_MODULE_CLASS)_$(LOCAL_MODULE)_compat_files
-# HACK: pretend a soong LOCAL_FULL_TEST_CONFIG is autogenerated by setting the flag in
-# module-info.json
-# TODO: (b/113029686) Add explicit flag from Soong to determine if a test was
-# autogenerated.
-ifneq (,$(filter $(SOONG_OUT_DIR)%,$(LOCAL_FULL_TEST_CONFIG)))
- ifeq ($(LOCAL_MODULE_MAKEFILE),$(SOONG_ANDROID_MK))
- ALL_MODULES.$(my_register_name).auto_test_config := true
- endif
-endif
-
ifeq ($(use_testcase_folder),true)
ifneq ($(my_test_data_file_pairs),)
@@ -897,6 +905,17 @@
$(eval my_compat_dist_test_data_$(suite) := ))
endif # LOCAL_UNINSTALLABLE_MODULE
+
+# HACK: pretend a soong LOCAL_FULL_TEST_CONFIG is autogenerated by setting the flag in
+# module-info.json
+# TODO: (b/113029686) Add explicit flag from Soong to determine if a test was
+# autogenerated.
+ifneq (,$(filter $(SOONG_OUT_DIR)%,$(LOCAL_FULL_TEST_CONFIG)))
+ ifeq ($(LOCAL_MODULE_MAKEFILE),$(SOONG_ANDROID_MK))
+ ALL_MODULES.$(my_register_name).auto_test_config := true
+ endif
+endif
+
endif # LOCAL_COMPATIBILITY_SUITE
my_supported_variant :=
@@ -946,6 +965,8 @@
$(ALL_MODULES.$(my_register_name).CHECKED) $(my_checked_module)
ALL_MODULES.$(my_register_name).BUILT := \
$(ALL_MODULES.$(my_register_name).BUILT) $(LOCAL_BUILT_MODULE)
+ALL_MODULES.$(my_register_name).SOONG_MODULE_TYPE := \
+ $(ALL_MODULES.$(my_register_name).SOONG_MODULE_TYPE) $(LOCAL_SOONG_MODULE_TYPE)
ifndef LOCAL_IS_HOST_MODULE
ALL_MODULES.$(my_register_name).TARGET_BUILT := \
$(ALL_MODULES.$(my_register_name).TARGET_BUILT) $(LOCAL_BUILT_MODULE)
@@ -1016,7 +1037,11 @@
$(ALL_MODULES.$(my_register_name).SYSTEM_SHARED_LIBS) $(LOCAL_SYSTEM_SHARED_LIBRARIES)
ALL_MODULES.$(my_register_name).LOCAL_RUNTIME_LIBRARIES := \
- $(ALL_MODULES.$(my_register_name).LOCAL_RUNTIME_LIBRARIES) $(LOCAL_RUNTIME_LIBRARIES)
+ $(ALL_MODULES.$(my_register_name).LOCAL_RUNTIME_LIBRARIES) $(LOCAL_RUNTIME_LIBRARIES) \
+ $(LOCAL_JAVA_LIBRARIES)
+
+ALL_MODULES.$(my_register_name).LOCAL_STATIC_LIBRARIES := \
+ $(ALL_MODULES.$(my_register_name).LOCAL_STATIC_LIBRARIES) $(LOCAL_STATIC_JAVA_LIBRARIES)
ifdef LOCAL_TEST_DATA
# Export the list of targets that are handled as data inputs and required
@@ -1040,6 +1065,24 @@
$(filter-out $(ALL_MODULES.$(my_register_name).SUPPORTED_VARIANTS),$(my_supported_variant))
##########################################################################
+## When compiling against API imported module, use API import stub
+## libraries.
+##########################################################################
+ifneq ($(LOCAL_USE_VNDK),)
+ ifneq ($(LOCAL_MODULE_MAKEFILE),$(SOONG_ANDROID_MK))
+ apiimport_postfix := .apiimport
+ ifeq ($(LOCAL_USE_VNDK_PRODUCT),true)
+ apiimport_postfix := .apiimport.product
+ else
+ apiimport_postfix := .apiimport.vendor
+ endif
+
+ my_required_modules := $(foreach l,$(my_required_modules), \
+ $(if $(filter $(l), $(API_IMPORTED_SHARED_LIBRARIES)), $(l)$(apiimport_postfix), $(l)))
+ endif
+endif
+
+##########################################################################
## When compiling against the VNDK, add the .vendor or .product suffix to
## required modules.
##########################################################################
@@ -1125,6 +1168,9 @@
ifdef LOCAL_IS_UNIT_TEST
ALL_MODULES.$(my_register_name).IS_UNIT_TEST := $(LOCAL_IS_UNIT_TEST)
endif
+ifdef LOCAL_TEST_OPTIONS_TAGS
+ALL_MODULES.$(my_register_name).TEST_OPTIONS_TAGS := $(LOCAL_TEST_OPTIONS_TAGS)
+endif
test_config :=
INSTALLABLE_FILES.$(LOCAL_INSTALLED_MODULE).MODULE := $(my_register_name)
@@ -1212,3 +1258,8 @@
###########################################################
include $(BUILD_NOTICE_FILE)
+
+###########################################################
+## SBOM generation
+###########################################################
+include $(BUILD_SBOM_GEN)
\ No newline at end of file
diff --git a/core/binary.mk b/core/binary.mk
index 665270e..579e6b5 100644
--- a/core/binary.mk
+++ b/core/binary.mk
@@ -19,7 +19,11 @@
# supply that, for example, when building libc itself.
ifdef LOCAL_IS_HOST_MODULE
ifeq ($(LOCAL_SYSTEM_SHARED_LIBRARIES),none)
+ ifdef USE_HOST_MUSL
+ my_system_shared_libraries := libc_musl
+ else
my_system_shared_libraries :=
+ endif
else
my_system_shared_libraries := $(LOCAL_SYSTEM_SHARED_LIBRARIES)
endif
@@ -54,6 +58,9 @@
my_cppflags := $(LOCAL_CPPFLAGS)
my_cflags_no_override := $(GLOBAL_CLANG_CFLAGS_NO_OVERRIDE)
my_cppflags_no_override := $(GLOBAL_CLANG_CPPFLAGS_NO_OVERRIDE)
+ifeq ($(my_32_64_bit_suffix), 64)
+ my_cflags_no_override += $(GLOBAL_CLANG_CFLAGS_64_NO_OVERRIDE)
+endif
ifdef is_third_party
my_cflags_no_override += $(GLOBAL_CLANG_EXTERNAL_CFLAGS_NO_OVERRIDE)
my_cppflags_no_override += $(GLOBAL_CLANG_EXTERNAL_CFLAGS_NO_OVERRIDE)
@@ -161,7 +168,6 @@
endif
endif
-my_ndk_sysroot :=
my_ndk_sysroot_include :=
my_ndk_sysroot_lib :=
my_api_level := 10000
@@ -176,11 +182,7 @@
# Make sure we've built the NDK.
my_additional_dependencies += $(SOONG_OUT_DIR)/ndk_base.timestamp
- ifneq (,$(filter arm64 x86_64,$(my_arch)))
- my_min_sdk_version := 21
- else
- my_min_sdk_version := $(MIN_SUPPORTED_SDK_VERSION)
- endif
+ my_min_sdk_version := $(MIN_SUPPORTED_SDK_VERSION)
# Historically we've just set up a bunch of symlinks in prebuilts/ndk to map
# missing API levels to existing ones where necessary, but we're not doing
@@ -193,38 +195,19 @@
my_ndk_crt_version := $(my_ndk_api)
- my_ndk_hist_api := $(my_ndk_api)
- ifeq ($(my_ndk_api),current)
- # The last API level supported by the old prebuilt NDKs.
- my_ndk_hist_api := 24
- else
+ ifneq ($(my_ndk_api),current)
my_api_level := $(my_ndk_api)
endif
my_ndk_source_root := \
$(HISTORICAL_NDK_VERSIONS_ROOT)/$(LOCAL_NDK_VERSION)/sources
- my_ndk_sysroot := \
- $(HISTORICAL_NDK_VERSIONS_ROOT)/$(LOCAL_NDK_VERSION)/platforms/android-$(my_ndk_hist_api)/arch-$(my_arch)
my_built_ndk := $(SOONG_OUT_DIR)/ndk
my_ndk_triple := $($(LOCAL_2ND_ARCH_VAR_PREFIX)TARGET_NDK_TRIPLE)
my_ndk_sysroot_include := \
$(my_built_ndk)/sysroot/usr/include \
$(my_built_ndk)/sysroot/usr/include/$(my_ndk_triple) \
- $(my_ndk_sysroot)/usr/include \
- # x86_64 is a multilib toolchain, so their libraries are
- # installed in /usr/lib64. Aarch64, on the other hand, is not a multilib
- # compiler, so its libraries are in /usr/lib.
- ifneq (,$(filter x86_64,$(my_arch)))
- my_ndk_libdir_name := lib64
- else
- my_ndk_libdir_name := lib
- endif
-
- my_ndk_platform_dir := \
- $(my_built_ndk)/platforms/android-$(my_ndk_api)/arch-$(my_arch)
- my_built_ndk_libs := $(my_ndk_platform_dir)/usr/$(my_ndk_libdir_name)
- my_ndk_sysroot_lib := $(my_ndk_sysroot)/usr/$(my_ndk_libdir_name)
+ my_ndk_sysroot_lib := $(my_built_ndk)/sysroot/usr/lib/$(my_ndk_triple)/$(my_ndk_api)
# The bionic linker now has support for packed relocations and gnu style
# hashes (which are much faster!), but shipping to older devices requires
@@ -348,9 +331,11 @@
else # LOCAL_IS_HOST_MODULE
# Add -ldl, -lpthread, -lm and -lrt to host builds to match the default behavior of
# device builds
- my_ldlibs += -ldl -lpthread -lm
- ifneq ($(HOST_OS),darwin)
- my_ldlibs += -lrt
+ ifndef USE_HOST_MUSL
+ my_ldlibs += -ldl -lpthread -lm
+ ifneq ($(HOST_OS),darwin)
+ my_ldlibs += -lrt
+ endif
endif
endif
@@ -1145,6 +1130,28 @@
$(my_static_libraries),hwasan)
endif
+###################################################################
+## When compiling against API imported module, use API import stub
+## libraries.
+##################################################################
+
+apiimport_postfix := .apiimport
+
+ifneq ($(LOCAL_USE_VNDK),)
+ ifeq ($(LOCAL_USE_VNDK_PRODUCT),true)
+ apiimport_postfix := .apiimport.product
+ else
+ apiimport_postfix := .apiimport.vendor
+ endif
+endif
+
+my_shared_libraries := $(foreach l,$(my_shared_libraries), \
+ $(if $(filter $(l), $(API_IMPORTED_SHARED_LIBRARIES)), $(l)$(apiimport_postfix), $(l)))
+my_system_shared_libraries := $(foreach l,$(my_system_shared_libraries), \
+ $(if $(filter $(l), $(API_IMPORTED_SHARED_LIBRARIES)), $(l)$(apiimport_postfix), $(l)))
+my_header_libraries := $(foreach l,$(my_header_libraries), \
+ $(if $(filter $(l), $(API_IMPORTED_HEADER_LIBRARIES)), $(l)$(apiimport_postfix), $(l)))
+
###########################################################
## When compiling against the VNDK, use LL-NDK libraries
###########################################################
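The API-import substitution above is purely name-based; a worked sketch with assumed inputs:

```
# Assume:
#   LOCAL_USE_VNDK := true           (LOCAL_USE_VNDK_PRODUCT unset)
#   API_IMPORTED_SHARED_LIBRARIES := libfoo
#   my_shared_libraries := libfoo libutils
# Then apiimport_postfix = .apiimport.vendor and the foreach rewrites:
#   my_shared_libraries -> libfoo.apiimport.vendor libutils
# Names not in the API-imported list pass through unchanged.
```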
@@ -1397,7 +1404,6 @@
my_ndk_shared_libraries_fullpath := \
$(foreach _lib,$(my_ndk_shared_libraries),\
$(if $(filter $(NDK_KNOWN_LIBS),$(_lib)),\
- $(my_built_ndk_libs)/$(_lib)$(so_suffix),\
$(my_ndk_sysroot_lib)/$(_lib)$(so_suffix)))
built_shared_libraries += \
@@ -1506,7 +1512,7 @@
ifeq (,$(strip $(call find_warning_allowed_projects,$(LOCAL_PATH))))
my_cflags := -Wall -Werror $(my_cflags)
else
- $(eval MODULES_ADDED_WALL := $(MODULES_ADDED_WALL) $(LOCAL_MODULE_MAKEFILE):$(LOCAL_MODULE))
+ $(eval MODULES_WARNINGS_ALLOWED := $(MODULES_WARNINGS_ALLOWED) $(LOCAL_MODULE_MAKEFILE):$(LOCAL_MODULE))
my_cflags := -Wall $(my_cflags)
endif
endif
diff --git a/core/board_config.mk b/core/board_config.mk
index dc50a68..fae7aaa 100644
--- a/core/board_config.mk
+++ b/core/board_config.mk
@@ -174,6 +174,10 @@
_build_broken_var_list := \
+ BUILD_BROKEN_CLANG_PROPERTY \
+ BUILD_BROKEN_CLANG_ASFLAGS \
+ BUILD_BROKEN_CLANG_CFLAGS \
+ BUILD_BROKEN_DEPFILE \
BUILD_BROKEN_DUP_RULES \
BUILD_BROKEN_DUP_SYSPROP \
BUILD_BROKEN_ELF_PREBUILT_PRODUCT_COPY_FILES \
@@ -184,6 +188,7 @@
BUILD_BROKEN_PREBUILT_ELF_FILES \
BUILD_BROKEN_TREBLE_SYSPROP_NEVERALLOW \
BUILD_BROKEN_USES_NETWORK \
+ BUILD_BROKEN_USES_SOONG_PYTHON2_MODULES \
BUILD_BROKEN_VENDOR_PROPERTY_NAMESPACE \
BUILD_BROKEN_VINTF_PRODUCT_COPY_FILES \
@@ -234,10 +239,7 @@
.KATI_READONLY := TARGET_DEVICE_DIR
endif
-# TODO(colefaust) change this if to RBC_PRODUCT_CONFIG when
-# the board configuration is known to work on everything
-# the product config works on.
-ifndef RBC_BOARD_CONFIG
+ifndef RBC_PRODUCT_CONFIG
include $(board_config_mk)
else
$(shell mkdir -p $(OUT_DIR)/rbc)
@@ -285,6 +287,8 @@
$(if $(filter-out true false,$($(var))), \
$(error Valid values of $(var) are "true", "false", and "". Not "$($(var))")))
+include $(BUILD_SYSTEM)/board_config_wifi.mk
+
# Default *_CPU_VARIANT_RUNTIME to CPU_VARIANT if unspecified.
TARGET_CPU_VARIANT_RUNTIME := $(or $(TARGET_CPU_VARIANT_RUNTIME),$(TARGET_CPU_VARIANT))
TARGET_2ND_CPU_VARIANT_RUNTIME := $(or $(TARGET_2ND_CPU_VARIANT_RUNTIME),$(TARGET_2ND_CPU_VARIANT))
@@ -402,12 +406,6 @@
endef
###########################################
-# Now we can substitute with the real value of TARGET_COPY_OUT_RAMDISK
-ifeq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
-TARGET_COPY_OUT_RAMDISK := $(TARGET_COPY_OUT_ROOT)
-endif
-
-###########################################
# Configure whether we're building the system image
BUILDING_SYSTEM_IMAGE := true
ifeq ($(PRODUCT_BUILD_SYSTEM_IMAGE),)
@@ -556,15 +554,8 @@
# Are we building a debug vendor_boot image
BUILDING_DEBUG_VENDOR_BOOT_IMAGE :=
-# Can't build vendor_boot-debug.img if BOARD_BUILD_SYSTEM_ROOT_IMAGE is true,
-# because building debug vendor_boot image requires a ramdisk.
-ifeq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
- ifeq ($(PRODUCT_BUILD_DEBUG_VENDOR_BOOT_IMAGE),true)
- $(warning PRODUCT_BUILD_DEBUG_VENDOR_BOOT_IMAGE is true, but so is BOARD_BUILD_SYSTEM_ROOT_IMAGE. \
- Skip building the debug vendor_boot image.)
- endif
# Can't build vendor_boot-debug.img if we're not building a ramdisk.
-else ifndef BUILDING_RAMDISK_IMAGE
+ifndef BUILDING_RAMDISK_IMAGE
ifeq ($(PRODUCT_BUILD_DEBUG_VENDOR_BOOT_IMAGE),true)
$(warning PRODUCT_BUILD_DEBUG_VENDOR_BOOT_IMAGE is true, but we're not building a ramdisk image. \
Skip building the debug vendor_boot image.)
@@ -601,15 +592,8 @@
# Are we building a debug boot image
BUILDING_DEBUG_BOOT_IMAGE :=
-# Can't build boot-debug.img if BOARD_BUILD_SYSTEM_ROOT_IMAGE is true,
-# because building debug boot image requires a ramdisk.
-ifeq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
- ifeq ($(PRODUCT_BUILD_DEBUG_BOOT_IMAGE),true)
- $(warning PRODUCT_BUILD_DEBUG_BOOT_IMAGE is true, but so is BOARD_BUILD_SYSTEM_ROOT_IMAGE. \
- Skip building the debug boot image.)
- endif
# Can't build boot-debug.img if we're not building a ramdisk.
-else ifndef BUILDING_RAMDISK_IMAGE
+ifndef BUILDING_RAMDISK_IMAGE
ifeq ($(PRODUCT_BUILD_DEBUG_BOOT_IMAGE),true)
$(warning PRODUCT_BUILD_DEBUG_BOOT_IMAGE is true, but we're not building a ramdisk image. \
Skip building the debug boot image.)
@@ -930,23 +914,11 @@
.KATI_READONLY := BUILDING_SYSTEM_DLKM_IMAGE
BOARD_USES_PVMFWIMAGE :=
-ifdef BOARD_PREBUILT_PVMFWIMAGE
- BOARD_USES_PVMFWIMAGE := true
-endif
ifeq ($(PRODUCT_BUILD_PVMFW_IMAGE),true)
BOARD_USES_PVMFWIMAGE := true
endif
.KATI_READONLY := BOARD_USES_PVMFWIMAGE
-BUILDING_PVMFW_IMAGE :=
-ifeq ($(PRODUCT_BUILD_PVMFW_IMAGE),true)
- BUILDING_PVMFW_IMAGE := true
-endif
-ifdef BOARD_PREBUILT_PVMFWIMAGE
- BUILDING_PVMFW_IMAGE :=
-endif
-.KATI_READONLY := BUILDING_PVMFW_IMAGE
-
###########################################
# Ensure consistency among TARGET_RECOVERY_UPDATER_LIBS, AB_OTA_UPDATER, and PRODUCT_OTA_FORCE_NON_AB_PACKAGE.
TARGET_RECOVERY_UPDATER_LIBS ?=
@@ -1018,19 +990,13 @@
endif
###########################################
-# APEXes are by default flattened, i.e. non-updatable, if not building unbundled
-# apps. It can be unflattened (and updatable) by inheriting from
-# updatable_apex.mk
+# APEXes are by default not flattened, i.e. updatable.
#
# APEX flattening can also be forcibly enabled (resp. disabled) by
# setting OVERRIDE_TARGET_FLATTEN_APEX to true (resp. false), e.g. by
# setting the OVERRIDE_TARGET_FLATTEN_APEX environment variable.
ifdef OVERRIDE_TARGET_FLATTEN_APEX
TARGET_FLATTEN_APEX := $(OVERRIDE_TARGET_FLATTEN_APEX)
-else
- ifeq (,$(TARGET_BUILD_APPS)$(TARGET_FLATTEN_APEX))
- TARGET_FLATTEN_APEX := true
- endif
endif
ifeq (,$(TARGET_BUILD_UNBUNDLED))
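With flattened-by-default gone, `OVERRIDE_TARGET_FLATTEN_APEX` remains the only knob. A usage sketch in its environment-variable form, per the comment above:

```
# Force flattened (non-updatable) APEXes for a local build:
#   OVERRIDE_TARGET_FLATTEN_APEX=true m
# Left unset, TARGET_FLATTEN_APEX keeps whatever the product set
# (empty by default), so APEXes are built unflattened and updatable.
```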
diff --git a/core/board_config_wifi.mk b/core/board_config_wifi.mk
new file mode 100644
index 0000000..8289bf2
--- /dev/null
+++ b/core/board_config_wifi.mk
@@ -0,0 +1,83 @@
+#
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# ###############################################################
+# This file adds WIFI variables into soong config namespace (`wifi`)
+# ###############################################################
+
+ifdef BOARD_WLAN_DEVICE
+ $(call soong_config_set,wifi,board_wlan_device,$(BOARD_WLAN_DEVICE))
+endif
+ifdef WIFI_DRIVER_MODULE_PATH
+ $(call soong_config_set,wifi,driver_module_path,$(WIFI_DRIVER_MODULE_PATH))
+endif
+ifdef WIFI_DRIVER_MODULE_ARG
+ $(call soong_config_set,wifi,driver_module_arg,$(WIFI_DRIVER_MODULE_ARG))
+endif
+ifdef WIFI_DRIVER_MODULE_NAME
+ $(call soong_config_set,wifi,driver_module_name,$(WIFI_DRIVER_MODULE_NAME))
+endif
+ifdef WIFI_DRIVER_FW_PATH_STA
+ $(call soong_config_set,wifi,driver_fw_path_sta,$(WIFI_DRIVER_FW_PATH_STA))
+endif
+ifdef WIFI_DRIVER_FW_PATH_AP
+ $(call soong_config_set,wifi,driver_fw_path_ap,$(WIFI_DRIVER_FW_PATH_AP))
+endif
+ifdef WIFI_DRIVER_FW_PATH_P2P
+ $(call soong_config_set,wifi,driver_fw_path_p2p,$(WIFI_DRIVER_FW_PATH_P2P))
+endif
+ifdef WIFI_DRIVER_FW_PATH_PARAM
+ $(call soong_config_set,wifi,driver_fw_path_param,$(WIFI_DRIVER_FW_PATH_PARAM))
+endif
+ifdef WIFI_DRIVER_STATE_CTRL_PARAM
+ $(call soong_config_set,wifi,driver_state_ctrl_param,$(WIFI_DRIVER_STATE_CTRL_PARAM))
+endif
+ifdef WIFI_DRIVER_STATE_ON
+ $(call soong_config_set,wifi,driver_state_on,$(WIFI_DRIVER_STATE_ON))
+endif
+ifdef WIFI_DRIVER_STATE_OFF
+ $(call soong_config_set,wifi,driver_state_off,$(WIFI_DRIVER_STATE_OFF))
+endif
+ifdef WIFI_MULTIPLE_VENDOR_HALS
+ $(call soong_config_set,wifi,multiple_vendor_hals,$(WIFI_MULTIPLE_VENDOR_HALS))
+endif
+ifneq ($(wildcard vendor/google/libraries/GoogleWifiConfigLib),)
+ $(call soong_config_set,wifi,google_wifi_config_lib,true)
+endif
+ifdef WIFI_HAL_INTERFACE_COMBINATIONS
+ $(call soong_config_set,wifi,hal_interface_combinations,$(WIFI_HAL_INTERFACE_COMBINATIONS))
+endif
+ifdef WIFI_HIDL_FEATURE_AWARE
+ $(call soong_config_set,wifi,hidl_feature_aware,true)
+endif
+ifdef WIFI_HIDL_FEATURE_DUAL_INTERFACE
+ $(call soong_config_set,wifi,hidl_feature_dual_interface,true)
+endif
+ifdef WIFI_HIDL_FEATURE_DISABLE_AP
+ $(call soong_config_set,wifi,hidl_feature_disable_ap,true)
+endif
+ifdef WIFI_HIDL_FEATURE_DISABLE_AP_MAC_RANDOMIZATION
+ $(call soong_config_set,wifi,hidl_feature_disable_ap_mac_randomization,true)
+endif
+ifdef WIFI_AVOID_IFACE_RESET_MAC_CHANGE
+ $(call soong_config_set,wifi,avoid_iface_reset_mac_change,true)
+endif
+ifdef WIFI_SKIP_STATE_TOGGLE_OFF_ON_FOR_NAN
+ $(call soong_config_set,wifi,wifi_skip_state_toggle_off_on_for_nan,true)
+endif
+ifeq ($(strip $(TARGET_USES_AOSP_FOR_WLAN)),true)
+ $(call soong_config_set,wifi,target_uses_aosp_for_wlan,true)
+endif
\ No newline at end of file
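For illustration, a device's BoardConfig.mk might feed a few of these inputs, which later makefiles can read back from the `wifi` namespace. All values here are hypothetical, and the read-back assumes the soong_config_get helper from soong_config.mk:

    BOARD_WLAN_DEVICE := bcmdhd
    WIFI_DRIVER_MODULE_NAME := wlan
    WIFI_HIDL_FEATURE_AWARE := true
    # After board_config_wifi.mk has run:
    $(info wlan device: $(call soong_config_get,wifi,board_wlan_device))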
diff --git a/core/build_id.mk b/core/build_id.mk
index a489788..8a68056 100644
--- a/core/build_id.mk
+++ b/core/build_id.mk
@@ -18,4 +18,4 @@
# (like "CRB01"). It must be a single word, and is
# capitalized by convention.
-BUILD_ID=TM
+BUILD_ID=UDC
diff --git a/core/cc_prebuilt_internal.mk b/core/cc_prebuilt_internal.mk
index e8e01d8..2de4115 100644
--- a/core/cc_prebuilt_internal.mk
+++ b/core/cc_prebuilt_internal.mk
@@ -139,6 +139,27 @@
# my_shared_libraries).
include $(BUILD_SYSTEM)/cxx_stl_setup.mk
+# When compiling against an API-imported module, use the API import stub libraries.
+apiimport_postfix := .apiimport
+
+ifneq ($(LOCAL_USE_VNDK),)
+ ifeq ($(LOCAL_USE_VNDK_PRODUCT),true)
+ apiimport_postfix := .apiimport.product
+ else
+ apiimport_postfix := .apiimport.vendor
+ endif
+endif
+
+ifdef my_shared_libraries
+my_shared_libraries := $(foreach l,$(my_shared_libraries), \
+ $(if $(filter $(l), $(API_IMPORTED_SHARED_LIBRARIES)), $(l)$(apiimport_postfix), $(l)))
+endif #my_shared_libraries
+
+ifdef my_system_shared_libraries
+my_system_shared_libraries := $(foreach l,$(my_system_shared_libraries), \
+ $(if $(filter $(l), $(API_IMPORTED_SHARED_LIBRARIES)), $(l)$(apiimport_postfix), $(l)))
+endif #my_system_shared_libraries
+
ifdef my_shared_libraries
ifdef LOCAL_USE_VNDK
ifeq ($(LOCAL_USE_VNDK_PRODUCT),true)
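A quick trace of the rename above, with hypothetical module names:

    my_shared_libraries := libfoo libbar
    API_IMPORTED_SHARED_LIBRARIES := libfoo
    LOCAL_USE_VNDK := true      # vendor variant; LOCAL_USE_VNDK_PRODUCT unset
    # The foreach rewrites the list to: libfoo.apiimport.vendor libbar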
diff --git a/core/clang/OWNERS b/core/clang/OWNERS
deleted file mode 100644
index d41d3fc..0000000
--- a/core/clang/OWNERS
+++ /dev/null
@@ -1,4 +0,0 @@
-chh@google.com
-pirama@google.com
-srhines@google.com
-yikong@google.com
diff --git a/core/clang/TARGET_riscv64.mk b/core/clang/TARGET_riscv64.mk
new file mode 100644
index 0000000..cfb5c7d
--- /dev/null
+++ b/core/clang/TARGET_riscv64.mk
@@ -0,0 +1,10 @@
+RS_TRIPLE := renderscript64-linux-android
+RS_TRIPLE_CFLAGS := -D__riscv64__
+RS_COMPAT_TRIPLE := riscv64-linux-android
+
+TARGET_LIBPROFILE_RT := $(LLVM_RTLIB_PATH)/libclang_rt.profile-riscv64-android.a
+TARGET_LIBCRT_BUILTINS := $(LLVM_RTLIB_PATH)/libclang_rt.builtins-riscv64-android.a
+
+# Address sanitizer clang config
+ADDRESS_SANITIZER_LINKER := /system/bin/linker_asan64
+ADDRESS_SANITIZER_LINKER_FILE := /system/bin/bootstrap/linker_asan64
diff --git a/core/clang/config.mk b/core/clang/config.mk
index 28a75ec..d03c541 100644
--- a/core/clang/config.mk
+++ b/core/clang/config.mk
@@ -2,7 +2,7 @@
LLVM_READOBJ := $(LLVM_PREBUILTS_BASE)/$(BUILD_OS)-x86/$(LLVM_PREBUILTS_VERSION)/bin/llvm-readobj
-LLVM_RTLIB_PATH := $(LLVM_PREBUILTS_BASE)/linux-x86/$(LLVM_PREBUILTS_VERSION)/lib64/clang/$(LLVM_RELEASE_VERSION)/lib/linux/
+LLVM_RTLIB_PATH := $(LLVM_PREBUILTS_BASE)/linux-x86/$(LLVM_PREBUILTS_VERSION)/lib/clang/$(LLVM_RELEASE_VERSION)/lib/linux/
define convert-to-clang-flags
$(strip $(filter-out $(CLANG_CONFIG_UNKNOWN_CFLAGS),$(1)))
diff --git a/core/cleanbuild.mk b/core/cleanbuild.mk
index 5576785..f41f1b7 100644
--- a/core/cleanbuild.mk
+++ b/core/cleanbuild.mk
@@ -33,8 +33,6 @@
# CTS-specific config.
-include cts/build/config.mk
-# VTS-specific config.
--include test/vts/tools/vts-tradefed/build/config.mk
# device-tests-specific-config.
-include tools/tradefederation/build/suites/device-tests/config.mk
# general-tests-specific-config.
diff --git a/core/cleanspec.mk b/core/cleanspec.mk
index af28954..0232a17 100644
--- a/core/cleanspec.mk
+++ b/core/cleanspec.mk
@@ -58,6 +58,12 @@
#$(call add-clean-step, rm -rf $(OUT_DIR)/target/common/obj/JAVA_LIBRARIES/core_intermediates)
#$(call add-clean-step, find $(OUT_DIR) -type f -name "IGTalkSession*" -print0 | xargs -0 rm -f)
#$(call add-clean-step, rm -rf $(PRODUCT_OUT)/data/*)
+$(call add-clean-step, rm -rf $(OUT_DIR)/obj/ETC/build_manifest-vendor_intermediates)
+$(call add-clean-step, rm -rf $(OUT_DIR)/obj/ETC/build_manifest-odm_intermediates)
+$(call add-clean-step, rm -rf $(OUT_DIR)/obj/ETC/build_manifest-product_intermediates)
+$(call add-clean-step, rm -rf $(TARGET_OUT_VENDOR)/etc/security/fsverity)
+$(call add-clean-step, rm -rf $(TARGET_OUT_ODM)/etc/security/fsverity)
+$(call add-clean-step, rm -rf $(TARGET_OUT_PRODUCT)/etc/security/fsverity)
# ************************************************
# NEWER CLEAN STEPS MUST BE AT THE END OF THE LIST
diff --git a/core/clear_vars.mk b/core/clear_vars.mk
index b5b371c..bb7ba1b 100644
--- a/core/clear_vars.mk
+++ b/core/clear_vars.mk
@@ -134,6 +134,7 @@
LOCAL_IS_HOST_MODULE:=
LOCAL_IS_RUNTIME_RESOURCE_OVERLAY:=
LOCAL_IS_UNIT_TEST:=
+LOCAL_TEST_OPTIONS_TAGS:=
LOCAL_JACK_CLASSPATH:=
LOCAL_JACK_COVERAGE_EXCLUDE_FILTER:=
LOCAL_JACK_COVERAGE_INCLUDE_FILTER:=
@@ -152,7 +153,6 @@
LOCAL_JAR_PROCESSOR_ARGS:=
LOCAL_JAVACFLAGS:=
LOCAL_JAVA_LANGUAGE_VERSION:=
-LOCAL_JAVA_LAYERS_FILE:=
LOCAL_JAVA_LIBRARIES:=
LOCAL_JAVA_RESOURCE_DIRS:=
LOCAL_JAVA_RESOURCE_FILES:=
@@ -293,6 +293,7 @@
LOCAL_SOONG_LICENSE_METADATA :=
LOCAL_SOONG_LINK_TYPE :=
LOCAL_SOONG_LINT_REPORTS :=
+LOCAL_SOONG_MODULE_TYPE :=
LOCAL_SOONG_PROGUARD_DICT :=
LOCAL_SOONG_PROGUARD_USAGE_ZIP :=
LOCAL_SOONG_RESOURCE_EXPORT_PACKAGE :=
@@ -502,6 +503,7 @@
# Robolectric variables
LOCAL_INSTRUMENT_SOURCE_DIRS :=
+LOCAL_INSTRUMENT_SRCJARS :=
LOCAL_ROBOTEST_FAILURE_FATAL :=
LOCAL_ROBOTEST_FILES :=
LOCAL_ROBOTEST_TIMEOUT :=
diff --git a/core/combo/TARGET_linux-riscv64.mk b/core/combo/TARGET_linux-riscv64.mk
new file mode 100644
index 0000000..8f8fd3c
--- /dev/null
+++ b/core/combo/TARGET_linux-riscv64.mk
@@ -0,0 +1,40 @@
+#
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Configuration for Linux on riscv64 as a target.
+# Included by combo/select.mk
+
+# Provide a default variant.
+ifeq ($(strip $(TARGET_ARCH_VARIANT)),)
+TARGET_ARCH_VARIANT := riscv64
+endif
+
+# Include the arch-variant-specific configuration file.
+# Its role is to define various arch-specific feature macros,
+# plus initial values for TARGET_GLOBAL_CFLAGS
+#
+TARGET_ARCH_SPECIFIC_MAKEFILE := $(BUILD_COMBOS)/arch/$(TARGET_ARCH)/$(TARGET_ARCH_VARIANT).mk
+ifeq ($(strip $(wildcard $(TARGET_ARCH_SPECIFIC_MAKEFILE))),)
+$(error Unknown $(TARGET_ARCH) architecture version: $(TARGET_ARCH_VARIANT))
+endif
+
+include $(TARGET_ARCH_SPECIFIC_MAKEFILE)
+
+define $(combo_var_prefix)transform-shared-lib-to-toc
+$(call _gen_toc_command_for_elf,$(1),$(2))
+endef
+
+TARGET_LINKER := /system/bin/linker64
diff --git a/core/combo/arch/arm64/armv9-a.mk b/core/combo/arch/arm64/armv9-a.mk
new file mode 100644
index 0000000..de0760a
--- /dev/null
+++ b/core/combo/arch/arm64/armv9-a.mk
@@ -0,0 +1,19 @@
+#
+# Copyright (C) 2023 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# .mk file required to support builds for the new armv9-a Arm64 arch
+# variant. The file just needs to be present, but it is not required to
+# contain anything.
diff --git a/core/combo/arch/riscv64/riscv64.mk b/core/combo/arch/riscv64/riscv64.mk
new file mode 100644
index 0000000..0505541
--- /dev/null
+++ b/core/combo/arch/riscv64/riscv64.mk
@@ -0,0 +1,2 @@
+# This file contains feature macro definitions specific to the
+# base 'riscv64' platform ABI.
diff --git a/core/combo/arch/x86/goldmont-plus.mk b/core/combo/arch/x86/goldmont-plus.mk
new file mode 100644
index 0000000..4ce2053
--- /dev/null
+++ b/core/combo/arch/x86/goldmont-plus.mk
@@ -0,0 +1,7 @@
+# This file contains feature macro definitions specific to the
+# goldmont-plus arch variant.
+#
+# See build/make/core/combo/arch/x86/x86-atom.mk for differences.
+#
+
+ARCH_X86_HAVE_SSE4_1 := true
diff --git a/core/combo/arch/x86/goldmont.mk b/core/combo/arch/x86/goldmont.mk
new file mode 100644
index 0000000..b5a6ff2
--- /dev/null
+++ b/core/combo/arch/x86/goldmont.mk
@@ -0,0 +1,7 @@
+# This file contains feature macro definitions specific to the
+# goldmont arch variant.
+#
+# See build/make/core/combo/arch/x86/x86-atom.mk for differences.
+#
+
+ARCH_X86_HAVE_SSE4_1 := true
diff --git a/core/combo/arch/x86/tremont.mk b/core/combo/arch/x86/tremont.mk
new file mode 100644
index 0000000..b80d228
--- /dev/null
+++ b/core/combo/arch/x86/tremont.mk
@@ -0,0 +1,7 @@
+# This file contains feature macro definitions specific to the
+# tremont arch variant.
+#
+# See build/make/core/combo/arch/x86/x86-atom.mk for differences.
+#
+
+ARCH_X86_HAVE_SSE4_1 := true
diff --git a/core/combo/arch/x86_64/goldmont-plus.mk b/core/combo/arch/x86_64/goldmont-plus.mk
new file mode 100644
index 0000000..4ce2053
--- /dev/null
+++ b/core/combo/arch/x86_64/goldmont-plus.mk
@@ -0,0 +1,7 @@
+# This file contains feature macro definitions specific to the
+# goldmont-plus arch variant.
+#
+# See build/make/core/combo/arch/x86/x86-atom.mk for differences.
+#
+
+ARCH_X86_HAVE_SSE4_1 := true
diff --git a/core/combo/arch/x86_64/goldmont.mk b/core/combo/arch/x86_64/goldmont.mk
new file mode 100644
index 0000000..b5a6ff2
--- /dev/null
+++ b/core/combo/arch/x86_64/goldmont.mk
@@ -0,0 +1,7 @@
+# This file contains feature macro definitions specific to the
+# goldmont arch variant.
+#
+# See build/make/core/combo/arch/x86/x86-atom.mk for differences.
+#
+
+ARCH_X86_HAVE_SSE4_1 := true
diff --git a/core/combo/arch/x86_64/tremont.mk b/core/combo/arch/x86_64/tremont.mk
new file mode 100644
index 0000000..b80d228
--- /dev/null
+++ b/core/combo/arch/x86_64/tremont.mk
@@ -0,0 +1,7 @@
+# This file contains feature macro definitions specific to the
+# tremont arch variant.
+#
+# See build/make/core/combo/arch/x86/x86-atom.mk for differences.
+#
+
+ARCH_X86_HAVE_SSE4_1 := true
diff --git a/core/config.mk b/core/config.mk
index 7f0e98e..0c086ee 100644
--- a/core/config.mk
+++ b/core/config.mk
@@ -155,12 +155,19 @@
$(KATI_obsolete_var COVERAGE_EXCLUDE_PATHS,Use NATIVE_COVERAGE_EXCLUDE_PATHS instead)
$(KATI_obsolete_var BOARD_VNDK_RUNTIME_DISABLE,VNDK-Lite is no longer supported)
$(KATI_obsolete_var LOCAL_SANITIZE_BLACKLIST,Use LOCAL_SANITIZE_BLOCKLIST instead)
-$(KATI_deprecated_var BOARD_PLAT_PUBLIC_SEPOLICY_DIR,Use SYSTEM_EXT_PUBLIC_SEPOLICY_DIRS instead)
-$(KATI_deprecated_var BOARD_PLAT_PRIVATE_SEPOLICY_DIR,Use SYSTEM_EXT_PRIVATE_SEPOLICY_DIRS instead)
+$(KATI_obsolete_var BOARD_PLAT_PUBLIC_SEPOLICY_DIR,Use SYSTEM_EXT_PUBLIC_SEPOLICY_DIRS instead)
+$(KATI_obsolete_var BOARD_PLAT_PRIVATE_SEPOLICY_DIR,Use SYSTEM_EXT_PRIVATE_SEPOLICY_DIRS instead)
$(KATI_obsolete_var TARGET_NO_VENDOR_BOOT,Use PRODUCT_BUILD_VENDOR_BOOT_IMAGE instead)
$(KATI_obsolete_var PRODUCT_CHECK_ELF_FILES,Use BUILD_BROKEN_PREBUILT_ELF_FILES instead)
$(KATI_obsolete_var ALL_GENERATED_SOURCES,ALL_GENERATED_SOURCES is no longer used)
$(KATI_obsolete_var ALL_ORIGINAL_DYNAMIC_BINARIES,ALL_ORIGINAL_DYNAMIC_BINARIES is no longer used)
+$(KATI_obsolete_var PRODUCT_SUPPORTS_VERITY,VB 1.0 and related variables are no longer supported)
+$(KATI_obsolete_var PRODUCT_SUPPORTS_VERITY_FEC,VB 1.0 and related variables are no longer supported)
+$(KATI_obsolete_var PRODUCT_SUPPORTS_BOOT_SIGNER,VB 1.0 and related variables are no longer supported)
+$(KATI_obsolete_var PRODUCT_VERITY_SIGNING_KEY,VB 1.0 and related variables are no longer supported)
+$(KATI_obsolete_var BOARD_PREBUILT_PVMFWIMAGE,pvmfw.bin is now built in AOSP and custom versions are no longer supported)
+$(KATI_obsolete_var BUILDING_PVMFW_IMAGE,BUILDING_PVMFW_IMAGE is no longer used)
+$(KATI_obsolete_var BOARD_BUILD_SYSTEM_ROOT_IMAGE)
# Used to force goals to build. Only use for conditionally defined goals.
.PHONY: FORCE
@@ -226,8 +233,7 @@
BUILD_FUZZ_TEST :=$= $(BUILD_SYSTEM)/fuzz_test.mk
BUILD_NOTICE_FILE :=$= $(BUILD_SYSTEM)/notice_files.mk
-BUILD_HOST_DALVIK_JAVA_LIBRARY :=$= $(BUILD_SYSTEM)/host_dalvik_java_library.mk
-BUILD_HOST_DALVIK_STATIC_JAVA_LIBRARY :=$= $(BUILD_SYSTEM)/host_dalvik_static_java_library.mk
+BUILD_SBOM_GEN :=$= $(BUILD_SYSTEM)/sbom.mk
include $(BUILD_SYSTEM)/deprecation.mk
@@ -352,6 +358,51 @@
# are specific to the user's build configuration.
include $(BUILD_SYSTEM)/envsetup.mk
+# Returns true if this is a low-memory device, false otherwise.
+define is-low-mem-device
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_PROPERTY_OVERRIDES)),true,\
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_DEFAULT_PROPERTY_OVERRIDES)),true,\
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_COMPATIBLE_PROPERTY_OVERRIDE)),true,\
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_COMPATIBLE_PROPERTY)),true,\
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_SYSTEM_DEFAULT_PROPERTIES)),true,\
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_SYSTEM_EXT_PROPERTIES)),true,\
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_PRODUCT_PROPERTIES)),true,\
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_VENDOR_PROPERTIES)),true,\
+$(if $(findstring ro.config.low_ram=true,$(PRODUCT_ODM_PROPERTIES)),true,false)))))))))
+endef
+
+# Get the board API level.
+board_api_level := $(PLATFORM_SDK_VERSION)
+ifdef BOARD_API_LEVEL
+ board_api_level := $(BOARD_API_LEVEL)
+else ifdef BOARD_SHIPPING_API_LEVEL
+ # Vendors with GRF must define BOARD_SHIPPING_API_LEVEL for the vendor API level.
+ board_api_level := $(BOARD_SHIPPING_API_LEVEL)
+endif
+
+# Calculate the VSR vendor API level.
+vsr_vendor_api_level := $(board_api_level)
+
+ifdef PRODUCT_SHIPPING_API_LEVEL
+ vsr_vendor_api_level := $(call math_min,$(PRODUCT_SHIPPING_API_LEVEL),$(board_api_level))
+endif
+
+# Set TARGET_MAX_PAGE_SIZE_SUPPORTED.
+ifdef PRODUCT_MAX_PAGE_SIZE_SUPPORTED
+ TARGET_MAX_PAGE_SIZE_SUPPORTED := $(PRODUCT_MAX_PAGE_SIZE_SUPPORTED)
+else ifeq ($(strip $(call is-low-mem-device)),true)
+ # Low memory device will have 4096 binary alignment.
+ TARGET_MAX_PAGE_SIZE_SUPPORTED := 4096
+else
+ # The default binary alignment for userspace is 4096.
+ TARGET_MAX_PAGE_SIZE_SUPPORTED := 4096
+ # When VSR vendor API level >= 34, binary alignment will be 65536.
+ ifeq ($(call math_gt_or_eq,$(vsr_vendor_api_level),34),true)
+ TARGET_MAX_PAGE_SIZE_SUPPORTED := 65536
+ endif
+endif
+.KATI_READONLY := TARGET_MAX_PAGE_SIZE_SUPPORTED
+
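Worked through with hypothetical values (no low-ram property set, PRODUCT_MAX_PAGE_SIZE_SUPPORTED unset):

    # BOARD_API_LEVEL unset, BOARD_SHIPPING_API_LEVEL := 34 -> board_api_level = 34
    # PRODUCT_SHIPPING_API_LEVEL := 33 -> vsr = min(33, 34) = 33 -> 4096
    # PRODUCT_SHIPPING_API_LEVEL := 34 -> vsr = min(34, 34) = 34 -> 65536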
# Pruned directory options used when using findleaves.py
# See envsetup.mk for a description of SCAN_EXCLUDE_DIRS
FIND_LEAVES_EXCLUDES := $(addprefix --prune=, $(SCAN_EXCLUDE_DIRS) .repo .git)
@@ -427,6 +478,9 @@
$(hide) $(HOST_NM) -gP $(1) | cut -f1-2 -d" " | (grep -v U$$ >> $(2) || true)
endef
+# Pick a Java compiler.
+include $(BUILD_SYSTEM)/combo/javac.mk
+
ifeq ($(CALLED_FROM_SETUP),true)
include $(BUILD_SYSTEM)/ccache.mk
include $(BUILD_SYSTEM)/goma.mk
@@ -449,9 +503,6 @@
WITH_TIDY_ONLY :=
endif
-# Pick a Java compiler.
-include $(BUILD_SYSTEM)/combo/javac.mk
-
# ---------------------------------------------------------------
# Check that the configuration is current. We check that
# BUILD_ENV_SEQUENCE_NUMBER is current against this value.
@@ -494,8 +545,10 @@
TARGET_BUILD_USE_PREBUILT_SDKS :=
DISABLE_PREOPT :=
+DISABLE_PREOPT_BOOT_IMAGES :=
ifneq (,$(TARGET_BUILD_APPS)$(TARGET_BUILD_UNBUNDLED_IMAGE))
DISABLE_PREOPT := true
+ DISABLE_PREOPT_BOOT_IMAGES := true
endif
ifeq (true,$(TARGET_BUILD_UNBUNDLED))
ifneq (true,$(UNBUNDLED_BUILD_SDKS_FROM_SOURCE))
@@ -506,6 +559,7 @@
.KATI_READONLY := \
TARGET_BUILD_USE_PREBUILT_SDKS \
DISABLE_PREOPT \
+ DISABLE_PREOPT_BOOT_IMAGES \
prebuilt_sdk_tools := prebuilts/sdk/tools
prebuilt_sdk_tools_bin := $(prebuilt_sdk_tools)/$(HOST_OS)/bin
@@ -577,7 +631,6 @@
endif
PROTOC := $(HOST_OUT_EXECUTABLES)/aprotoc$(HOST_EXECUTABLE_SUFFIX)
NANOPB_SRCS := $(HOST_OUT_EXECUTABLES)/protoc-gen-nanopb
-VTSC := $(HOST_OUT_EXECUTABLES)/vtsc$(HOST_EXECUTABLE_SUFFIX)
MKBOOTFS := $(HOST_OUT_EXECUTABLES)/mkbootfs$(HOST_EXECUTABLE_SUFFIX)
MINIGZIP := $(HOST_OUT_EXECUTABLES)/minigzip$(HOST_EXECUTABLE_SUFFIX)
LZ4 := $(HOST_OUT_EXECUTABLES)/lz4$(HOST_EXECUTABLE_SUFFIX)
@@ -602,19 +655,23 @@
MKEXTUSERIMG := $(HOST_OUT_EXECUTABLES)/mkuserimg_mke2fs
MKE2FS_CONF := system/extras/ext4_utils/mke2fs.conf
MKEROFS := $(HOST_OUT_EXECUTABLES)/mkfs.erofs
-MKSQUASHFSUSERIMG := $(HOST_OUT_EXECUTABLES)/mksquashfsimage.sh
-MKF2FSUSERIMG := $(HOST_OUT_EXECUTABLES)/mkf2fsuserimg.sh
+MKSQUASHFSUSERIMG := $(HOST_OUT_EXECUTABLES)/mksquashfsimage
+MKF2FSUSERIMG := $(HOST_OUT_EXECUTABLES)/mkf2fsuserimg
SIMG2IMG := $(HOST_OUT_EXECUTABLES)/simg2img$(HOST_EXECUTABLE_SUFFIX)
E2FSCK := $(HOST_OUT_EXECUTABLES)/e2fsck$(HOST_EXECUTABLE_SUFFIX)
TUNE2FS := $(HOST_OUT_EXECUTABLES)/tune2fs$(HOST_EXECUTABLE_SUFFIX)
JARJAR := $(HOST_OUT_JAVA_LIBRARIES)/jarjar.jar
DATA_BINDING_COMPILER := $(HOST_OUT_JAVA_LIBRARIES)/databinding-compiler.jar
FAT16COPY := build/make/tools/fat16copy.py
-CHECK_ELF_FILE := build/make/tools/check_elf_file.py
+CHECK_ELF_FILE := $(HOST_OUT_EXECUTABLES)/check_elf_file$(HOST_EXECUTABLE_SUFFIX)
LPMAKE := $(HOST_OUT_EXECUTABLES)/lpmake$(HOST_EXECUTABLE_SUFFIX)
ADD_IMG_TO_TARGET_FILES := $(HOST_OUT_EXECUTABLES)/add_img_to_target_files$(HOST_EXECUTABLE_SUFFIX)
BUILD_IMAGE := $(HOST_OUT_EXECUTABLES)/build_image$(HOST_EXECUTABLE_SUFFIX)
+ifeq (,$(strip $(BOARD_CUSTOM_BUILD_SUPER_IMAGE)))
BUILD_SUPER_IMAGE := $(HOST_OUT_EXECUTABLES)/build_super_image$(HOST_EXECUTABLE_SUFFIX)
+else
+BUILD_SUPER_IMAGE := $(BOARD_CUSTOM_BUILD_SUPER_IMAGE)
+endif
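A board needing its own packaging step could point this hook at a custom tool; the path below is hypothetical:

    BOARD_CUSTOM_BUILD_SUPER_IMAGE := device/acme/common/tools/build_super_image.py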
IMG_FROM_TARGET_FILES := $(HOST_OUT_EXECUTABLES)/img_from_target_files$(HOST_EXECUTABLE_SUFFIX)
MAKE_RECOVERY_PATCH := $(HOST_OUT_EXECUTABLES)/make_recovery_patch$(HOST_EXECUTABLE_SUFFIX)
OTA_FROM_TARGET_FILES := $(HOST_OUT_EXECUTABLES)/ota_from_target_files$(HOST_EXECUTABLE_SUFFIX)
@@ -631,14 +688,14 @@
VERITY_SIGNER := $(HOST_OUT_EXECUTABLES)/verity_signer
BUILD_VERITY_METADATA := $(HOST_OUT_EXECUTABLES)/build_verity_metadata
BUILD_VERITY_TREE := $(HOST_OUT_EXECUTABLES)/build_verity_tree
-BOOT_SIGNER := $(HOST_OUT_EXECUTABLES)/boot_signer
FUTILITY := $(HOST_OUT_EXECUTABLES)/futility-host
VBOOT_SIGNER := $(HOST_OUT_EXECUTABLES)/vboot_signer
-FEC := $(HOST_OUT_EXECUTABLES)/fec
DEXDUMP := $(HOST_OUT_EXECUTABLES)/dexdump$(BUILD_EXECUTABLE_SUFFIX)
PROFMAN := $(HOST_OUT_EXECUTABLES)/profman
+GEN_SBOM := $(HOST_OUT_EXECUTABLES)/generate-sbom
+
FINDBUGS_DIR := external/owasp/sanitizer/tools/findbugs/bin
FINDBUGS := $(FINDBUGS_DIR)/findbugs
@@ -693,6 +750,14 @@
PRODUCT_FULL_TREBLE_OVERRIDE ?=
$(foreach req,$(requirements),$(eval $(req)_OVERRIDE ?=))
+ifneq ($(PRODUCT_SEPOLICY_SPLIT),true)
+# WARNING: DO NOT CHANGE. If you are downstream of AOSP and you change this
+# without letting upstream know it's important to you, we may do cleanup that
+# breaks it significantly. Please let us know if you are changing this.
+# TODO(b/257176017) - unsplit sepolicy is no longer supported
+PRODUCT_SEPOLICY_SPLIT := true
+endif
+
# TODO(b/114488870): disallow PRODUCT_FULL_TREBLE_OVERRIDE from being used.
.KATI_READONLY := \
PRODUCT_FULL_TREBLE_OVERRIDE \
@@ -713,27 +778,16 @@
BOARD_PROPERTY_OVERRIDES_SPLIT_ENABLED ?= true
endif
-# If PRODUCT_USE_VNDK is true and BOARD_VNDK_VERSION is not defined yet,
-# BOARD_VNDK_VERSION will be set to "current" as default.
-# PRODUCT_USE_VNDK will be true in Android-P or later launching devices.
-PRODUCT_USE_VNDK := false
-ifneq ($(PRODUCT_USE_VNDK_OVERRIDE),)
- PRODUCT_USE_VNDK := $(PRODUCT_USE_VNDK_OVERRIDE)
-else ifeq ($(PRODUCT_SHIPPING_API_LEVEL),)
- # No shipping level defined
-else ifeq ($(call math_gt,$(PRODUCT_SHIPPING_API_LEVEL),27),true)
- PRODUCT_USE_VNDK := $(PRODUCT_FULL_TREBLE)
+# Starting in Android U, non-VNDK devices are not supported.
+# WARNING: DO NOT CHANGE. If you are downstream of AOSP and you change this
+# without letting upstream know it's important to you, we may do cleanup that
+# breaks it significantly. Please let us know if you are changing this.
+ifndef BOARD_VNDK_VERSION
+# READ WARNING - DO NOT CHANGE
+BOARD_VNDK_VERSION := current
+# READ WARNING - DO NOT CHANGE
endif
-ifeq ($(PRODUCT_USE_VNDK),true)
- ifndef BOARD_VNDK_VERSION
- BOARD_VNDK_VERSION := current
- endif
-endif
-
-$(KATI_obsolete_var PRODUCT_USE_VNDK,Use BOARD_VNDK_VERSION instead)
-$(KATI_obsolete_var PRODUCT_USE_VNDK_OVERRIDE,Use BOARD_VNDK_VERSION instead)
-
ifdef PRODUCT_PRODUCT_VNDK_VERSION
ifndef BOARD_VNDK_VERSION
# VNDK for product partition is not available unless BOARD_VNDK_VERSION
@@ -805,6 +859,7 @@
else
MAINLINE_SEPOLICY_DEV_CERTIFICATES := $(dir $(DEFAULT_SYSTEM_DEV_CERTIFICATE))
endif
+.KATI_READONLY := MAINLINE_SEPOLICY_DEV_CERTIFICATES
BUILD_NUMBER_FROM_FILE := $$(cat $(SOONG_OUT_DIR)/build_number.txt)
BUILD_DATETIME_FROM_FILE := $$(cat $(BUILD_DATETIME_FILE))
@@ -821,7 +876,7 @@
# is made which breaks compatibility with the previous platform sepolicy version,
# not just on every increase in PLATFORM_SDK_VERSION. The minor version should
# be reset to 0 on every bump of the PLATFORM_SDK_VERSION.
-sepolicy_major_vers := 33
+sepolicy_major_vers := 34
sepolicy_minor_vers := 0
ifneq ($(sepolicy_major_vers), $(PLATFORM_SDK_VERSION))
@@ -856,11 +911,11 @@
# A list of SEPolicy versions, besides PLATFORM_SEPOLICY_VERSION, that the framework supports.
PLATFORM_SEPOLICY_COMPAT_VERSIONS := \
- 28.0 \
29.0 \
30.0 \
31.0 \
32.0 \
+ 33.0 \
.KATI_READONLY := \
PLATFORM_SEPOLICY_COMPAT_VERSIONS \
@@ -881,9 +936,6 @@
endif
ifeq ($(PRODUCT_USE_DYNAMIC_PARTITIONS),true)
- ifeq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
- $(error BOARD_BUILD_SYSTEM_ROOT_IMAGE cannot be true for devices with dynamic partitions)
- endif
ifneq ($(PRODUCT_USE_DYNAMIC_PARTITION_SIZE),true)
$(error PRODUCT_USE_DYNAMIC_PARTITION_SIZE must be true for devices with dynamic partitions)
endif
@@ -974,16 +1026,6 @@
$(eval .KATI_READONLY := BOARD_$(group)_PARTITION_LIST) \
)
-# BOARD_*_PARTITION_LIST: a list of the following tokens
-valid_super_partition_list := system vendor product system_ext odm vendor_dlkm odm_dlkm system_dlkm
-$(foreach group,$(call to-upper,$(BOARD_SUPER_PARTITION_GROUPS)), \
- $(if $(filter-out $(valid_super_partition_list),$(BOARD_$(group)_PARTITION_LIST)), \
- $(error BOARD_$(group)_PARTITION_LIST contains invalid partition name \
- $(filter-out $(valid_super_partition_list),$(BOARD_$(group)_PARTITION_LIST)). \
- Valid names are $(valid_super_partition_list))))
-valid_super_partition_list :=
-
-
# Define BOARD_SUPER_PARTITION_PARTITION_LIST, the sum of all BOARD_*_PARTITION_LIST
ifdef BOARD_SUPER_PARTITION_PARTITION_LIST
$(error BOARD_SUPER_PARTITION_PARTITION_LIST should not be defined, but computed from \
@@ -1074,14 +1116,6 @@
BOARD_PREBUILT_HIDDENAPI_DIR ?=
.KATI_READONLY := BOARD_PREBUILT_HIDDENAPI_DIR
-ifdef USE_HOST_MUSL
- ifneq (,$(or $(BUILD_BROKEN_USES_BUILD_HOST_EXECUTABLE),\
- $(BUILD_BROKEN_USES_BUILD_HOST_SHARED_LIBRARY),\
- $(BUILD_BROKEN_USES_BUILD_HOST_STATIC_LIBRARY)))
- $(error USE_HOST_MUSL can't be set when native host builds are enabled in Make with BUILD_BROKEN_USES_BUILD_HOST_*)
- endif
-endif
-
# ###############################################################
# Set up final options.
# ###############################################################
diff --git a/core/config_sanitizers.mk b/core/config_sanitizers.mk
index a0ff119..ebce4c2 100644
--- a/core/config_sanitizers.mk
+++ b/core/config_sanitizers.mk
@@ -155,23 +155,36 @@
endif
endif
+# Enable HWASan in included paths.
+ifeq ($(filter hwaddress, $(my_sanitize)),)
+ combined_include_paths := $(HWASAN_INCLUDE_PATHS) \
+ $(PRODUCT_HWASAN_INCLUDE_PATHS)
+
+ ifneq ($(strip $(foreach dir,$(subst $(comma),$(space),$(combined_include_paths)),\
+ $(filter $(dir)%,$(LOCAL_PATH)))),)
+ my_sanitize := hwaddress $(my_sanitize)
+ endif
+endif
+
# If CFI is disabled globally, remove it from my_sanitize.
ifeq ($(strip $(ENABLE_CFI)),false)
my_sanitize := $(filter-out cfi,$(my_sanitize))
my_sanitize_diag := $(filter-out cfi,$(my_sanitize_diag))
endif
-# Also disable CFI if ASAN is enabled.
+# Also disable CFI and MTE if ASAN is enabled.
ifneq ($(filter address,$(my_sanitize)),)
my_sanitize := $(filter-out cfi,$(my_sanitize))
+ my_sanitize := $(filter-out memtag_stack,$(my_sanitize))
+ my_sanitize := $(filter-out memtag_heap,$(my_sanitize))
my_sanitize_diag := $(filter-out cfi,$(my_sanitize_diag))
endif
# Disable memtag for host targets. Host executables in AndroidMk files are
# deprecated, but some partners still have them floating around.
ifdef LOCAL_IS_HOST_MODULE
- my_sanitize := $(filter-out memtag_heap,$(my_sanitize))
- my_sanitize_diag := $(filter-out memtag_heap,$(my_sanitize_diag))
+ my_sanitize := $(filter-out memtag_heap memtag_stack,$(my_sanitize))
+ my_sanitize_diag := $(filter-out memtag_heap memtag_stack,$(my_sanitize_diag))
endif
# Disable sanitizers which need the UBSan runtime for host targets.
@@ -205,10 +218,13 @@
ifneq ($(filter arm x86 x86_64,$(TARGET_$(LOCAL_2ND_ARCH_VAR_PREFIX)ARCH)),)
my_sanitize := $(filter-out hwaddress,$(my_sanitize))
my_sanitize := $(filter-out memtag_heap,$(my_sanitize))
+ my_sanitize := $(filter-out memtag_stack,$(my_sanitize))
endif
ifneq ($(filter hwaddress,$(my_sanitize)),)
my_sanitize := $(filter-out address,$(my_sanitize))
+ my_sanitize := $(filter-out memtag_stack,$(my_sanitize))
+ my_sanitize := $(filter-out memtag_heap,$(my_sanitize))
my_sanitize := $(filter-out thread,$(my_sanitize))
my_sanitize := $(filter-out cfi,$(my_sanitize))
endif
@@ -224,21 +240,34 @@
endif
endif
-ifneq ($(filter memtag_heap,$(my_sanitize)),)
- # Add memtag ELF note.
- ifneq ($(filter EXECUTABLES NATIVE_TESTS,$(LOCAL_MODULE_CLASS)),)
- ifneq ($(filter memtag_heap,$(my_sanitize_diag)),)
- my_whole_static_libraries += note_memtag_heap_sync
- else
- my_whole_static_libraries += note_memtag_heap_async
- endif
+ifneq ($(filter memtag_heap memtag_stack,$(my_sanitize)),)
+ ifneq ($(filter memtag_heap,$(my_sanitize_diag)),)
+ my_cflags += -fsanitize-memtag-mode=sync
+ my_sanitize_diag := $(filter-out memtag_heap,$(my_sanitize_diag))
+ else
+ my_cflags += -fsanitize-memtag-mode=async
endif
- # This is all that memtag_heap does - it is not an actual -fsanitize argument.
- # Remove it from the list.
+endif
+
+# Ignore SANITIZE_TARGET_DIAG=memtag_heap without SANITIZE_TARGET=memtag_heap.
+# This can happen if a condition above filters out memtag_heap from
+# my_sanitize. It is easier to handle all of these cases here centrally.
+ifneq ($(filter memtag_heap,$(my_sanitize_diag)),)
+ my_sanitize_diag := $(filter-out memtag_heap,$(my_sanitize_diag))
+endif
+
+ifneq ($(filter memtag_heap,$(my_sanitize)),)
+ my_cflags += -fsanitize=memtag-heap
my_sanitize := $(filter-out memtag_heap,$(my_sanitize))
endif
-my_sanitize_diag := $(filter-out memtag_heap,$(my_sanitize_diag))
+ifneq ($(filter memtag_stack,$(my_sanitize)),)
+ my_cflags += -fsanitize=memtag-stack
+ my_cflags += -march=armv8a+memtag
+ my_ldflags += -march=armv8a+memtag
+ my_asflags += -march=armv8a+memtag
+ my_sanitize := $(filter-out memtag_stack,$(my_sanitize))
+endif
# TSAN is not supported on 32-bit architectures. For non-multilib cases, make
# its use an error. For multilib cases, don't use it for the 32-bit case.
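Tracing the blocks above for a hypothetical arm64 module with SANITIZE_TARGET := memtag_heap memtag_stack and no diag mode, the net effect is roughly:

    my_cflags  += -fsanitize-memtag-mode=async
    my_cflags  += -fsanitize=memtag-heap
    my_cflags  += -fsanitize=memtag-stack -march=armv8a+memtag
    my_ldflags += -march=armv8a+memtag
    my_asflags += -march=armv8a+memtag
    # and both memtag_* tokens are filtered back out of my_sanitize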
diff --git a/core/definitions.mk b/core/definitions.mk
index 0c46de9..e4cee7a 100644
--- a/core/definitions.mk
+++ b/core/definitions.mk
@@ -41,6 +41,9 @@
ALL_NON_MODULES:=
NON_MODULES_WITHOUT_LICENSE_METADATA:=
+# List of copied targets that need license metadata copied.
+ALL_COPIED_TARGETS:=
+
# Full paths to targets that should be added to the "make droid"
# set of installed targets.
ALL_DEFAULT_INSTALLED_MODULES:=
@@ -567,19 +570,35 @@
## Target directory for license metadata files.
###########################################################
define license-metadata-dir
-$(call generated-sources-dir-for,META,lic,)
+$(call generated-sources-dir-for,META,lic,$(filter-out $(PRODUCT_OUT)%,$(1)))
endef
+TARGETS_MISSING_LICENSE_METADATA:=
+
###########################################################
# License metadata targets corresponding to targets in $(1)
###########################################################
define corresponding-license-metadata
-$(strip $(foreach target, $(sort $(1)), \
+$(strip $(filter-out 0p,$(foreach target, $(sort $(1)), \
$(if $(strip $(ALL_MODULES.$(target).META_LIC)), \
$(ALL_MODULES.$(target).META_LIC), \
$(if $(strip $(ALL_TARGETS.$(target).META_LIC)), \
$(ALL_TARGETS.$(target).META_LIC), \
- $(call append-path,$(call license-metadata-dir),$(patsubst $(OUT_DIR)%,out%,$(target).meta_lic))))))
+ $(eval TARGETS_MISSING_LICENSE_METADATA += $(target)) \
+ ) \
+ ) \
+)))
+endef
+
+###########################################################
+## Record a target $(1) copied from another target(s) $(2) that will need
+## license metadata.
+###########################################################
+define declare-copy-target-license-metadata
+$(strip $(if $(filter $(OUT_DIR)%,$(2)),\
+ $(eval _tgt:=$(strip $(1)))\
+ $(eval ALL_COPIED_TARGETS.$(_tgt).SOURCES := $(sort $(ALL_COPIED_TARGETS.$(_tgt).SOURCES) $(filter $(OUT_DIR)%,$(2))))\
+ $(eval ALL_COPIED_TARGETS += $(_tgt))))
endef
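A hypothetical call site, recording that an installed file was copied from a built intermediate (both paths illustrative; note only sources under $(OUT_DIR) are recorded):

    $(call declare-copy-target-license-metadata,\
        $(PRODUCT_OUT)/system/etc/foo.conf,\
        $(OUT_DIR)/soong/.intermediates/foo/foo.conf)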
###########################################################
@@ -620,6 +639,7 @@
$(2): PRIVATE_IS_CONTAINER := $(ALL_MODULES.$(1).IS_CONTAINER)
$(2): PRIVATE_PACKAGE_NAME := $(strip $(ALL_MODULES.$(1).LICENSE_PACKAGE_NAME))
$(2): PRIVATE_INSTALL_MAP := $(_map)
+$(2): PRIVATE_MODULE_NAME := $(1)
$(2): PRIVATE_MODULE_TYPE := $(ALL_MODULES.$(1).MODULE_TYPE)
$(2): PRIVATE_MODULE_CLASS := $(ALL_MODULES.$(1).MODULE_CLASS)
$(2): PRIVATE_INSTALL_MAP := $(_map)
@@ -630,6 +650,7 @@
mkdir -p $$(dir $$@)
mkdir -p $$(dir $$(PRIVATE_ARGUMENT_FILE))
$$(call dump-words-to-file,\
+ $$(addprefix -mn ,$$(PRIVATE_MODULE_NAME))\
$$(addprefix -mt ,$$(PRIVATE_MODULE_TYPE))\
$$(addprefix -mc ,$$(PRIVATE_MODULE_CLASS))\
$$(addprefix -k ,$$(PRIVATE_KINDS))\
@@ -654,20 +675,13 @@
## License metadata build rule for non-module target $(1)
###########################################################
define non-module-license-metadata-rule
-$(strip $(eval _dir := $(call license-metadata-dir)))
+$(strip $(eval _dir := $(call license-metadata-dir,$(1))))
$(strip $(eval _tgt := $(strip $(1))))
$(strip $(eval _meta := $(call append-path,$(_dir),$(patsubst $(OUT_DIR)%,out%,$(_tgt).meta_lic))))
$(strip $(eval _deps := $(sort $(filter-out 0p: :,$(foreach d,$(strip $(ALL_NON_MODULES.$(_tgt).DEPENDENCIES)),$(ALL_TARGETS.$(call word-colon,1,$(d)).META_LIC):$(call wordlist-colon,2,9999,$(d)))))))
$(strip $(eval _notices := $(sort $(ALL_NON_MODULES.$(_tgt).NOTICES))))
$(strip $(eval _path := $(sort $(ALL_NON_MODULES.$(_tgt).PATH))))
$(strip $(eval _install_map := $(ALL_NON_MODULES.$(_tgt).ROOT_MAPPINGS)))
-$(strip $(eval \
- $$(foreach d,$(strip $(ALL_NON_MODULES.$(_tgt).DEPENDENCIES)), \
- $$(if $$(strip $$(ALL_TARGETS.$$(d).META_LIC)), \
- , \
- $$(eval NON_MODULES_WITHOUT_LICENSE_METADATA += $$(d))) \
- )) \
-)
$(_meta): PRIVATE_KINDS := $(sort $(ALL_NON_MODULES.$(_tgt).LICENSE_KINDS))
$(_meta): PRIVATE_CONDITIONS := $(sort $(ALL_NON_MODULES.$(_tgt).LICENSE_CONDITIONS))
@@ -705,6 +719,64 @@
endef
###########################################################
+## Record missing dependencies for non-module target $(1)
+###########################################################
+define record-missing-non-module-dependencies
+$(strip $(eval _tgt := $(strip $(1))))
+$(strip $(foreach d,$(strip $(ALL_NON_MODULES.$(_tgt).DEPENDENCIES)), \
+ $(if $(strip $(ALL_TARGETS.$(d).META_LIC)), \
+ , \
+ $(eval NON_MODULES_WITHOUT_LICENSE_METADATA += $(d))) \
+))
+endef
+
+###########################################################
+## License metadata build rule for copied target $(1)
+###########################################################
+define copied-target-license-metadata-rule
+$(if $(strip $(ALL_TARGETS.$(1).META_LIC)),,$(call _copied-target-license-metadata-rule,$(1)))
+endef
+
+define _copied-target-license-metadata-rule
+$(strip $(eval _dir := $(call license-metadata-dir,$(1))))
+$(strip $(eval _meta := $(call append-path,$(_dir),$(patsubst $(OUT_DIR)%,out%,$(1).meta_lic))))
+$(strip $(eval ALL_TARGETS.$(1).META_LIC:=$(_meta)))
+$(strip $(eval _dep:=))
+$(strip $(foreach s,$(ALL_COPIED_TARGETS.$(1).SOURCES),\
+ $(eval _dmeta:=$(ALL_TARGETS.$(s).META_LIC))\
+ $(if $(filter-out 0p,$(_dep)),\
+ $(if $(filter-out $(_dep),$(_dmeta)),$(error cannot copy target from multiple modules: $(1) from $(_dep) and $(_dmeta))),\
+ $(eval _dep:=$(_dmeta)))))
+$(if $(filter 0p,$(_dep)),$(eval ALL_TARGETS.$(1).META_LIC:=0p))
+$(strip $(if $(strip $(_dep)),,$(error cannot copy target from unknown module: $(1) from $(ALL_COPIED_TARGETS.$(1).SOURCES))))
+
+ifneq (0p,$(ALL_TARGETS.$(1).META_LIC))
+$(_meta): PRIVATE_DEST_TARGET := $(1)
+$(_meta): PRIVATE_SOURCE_TARGETS := $(ALL_COPIED_TARGETS.$(1).SOURCES)
+$(_meta): PRIVATE_SOURCE_METADATA := $(_dep)
+$(_meta): PRIVATE_ARGUMENT_FILE := $(call intermediates-dir-for,PACKAGING,copynotice)/$(_meta)/arguments
+$(_meta) : $(_dep) $(COPY_LICENSE_METADATA)
+ rm -f $$@
+ mkdir -p $$(dir $$@)
+ mkdir -p $$(dir $$(PRIVATE_ARGUMENT_FILE))
+ $$(call dump-words-to-file,\
+ $$(addprefix -i ,$$(PRIVATE_DEST_TARGET))\
+ $$(addprefix -s ,$$(PRIVATE_SOURCE_TARGETS))\
+ $$(addprefix -d ,$$(PRIVATE_SOURCE_METADATA)),\
+ $$(PRIVATE_ARGUMENT_FILE))
+ OUT_DIR=$(OUT_DIR) $(COPY_LICENSE_METADATA) \
+ @$$(PRIVATE_ARGUMENT_FILE) \
+ -o $$@
+
+endif
+
+$(eval _dep:=)
+$(eval _dmeta:=)
+$(eval _meta:=)
+$(eval _dir:=)
+endef
+
+###########################################################
## Declare the license metadata for non-module target $(1).
##
## $(2) -- license kinds e.g. SPDX-license-identifier-Apache-2.0
@@ -717,6 +789,7 @@
$(strip \
$(eval _tgt := $(subst //,/,$(strip $(1)))) \
$(eval ALL_NON_MODULES += $(_tgt)) \
+ $(eval ALL_TARGETS.$(_tgt).META_LIC := $(call license-metadata-dir,$(1))/$(patsubst $(OUT_DIR)%,out%,$(_tgt)).meta_lic) \
$(eval ALL_NON_MODULES.$(_tgt).LICENSE_KINDS := $(strip $(2))) \
$(eval ALL_NON_MODULES.$(_tgt).LICENSE_CONDITIONS := $(strip $(3))) \
$(eval ALL_NON_MODULES.$(_tgt).NOTICES := $(strip $(4))) \
@@ -757,6 +830,7 @@
$(strip \
$(eval _tgt := $(subst //,/,$(strip $(1)))) \
$(eval ALL_NON_MODULES += $(_tgt)) \
+ $(eval ALL_TARGETS.$(_tgt).META_LIC := $(call license-metadata-dir,$(1))/$(patsubst $(OUT_DIR)%,out%,$(_tgt)).meta_lic) \
$(eval ALL_NON_MODULES.$(_tgt).LICENSE_KINDS := $(strip $(2))) \
$(eval ALL_NON_MODULES.$(_tgt).LICENSE_CONDITIONS := $(strip $(3))) \
$(eval ALL_NON_MODULES.$(_tgt).NOTICES := $(strip $(4))) \
@@ -827,8 +901,9 @@
###########################################################
define declare-license-deps
$(strip \
- $(eval _tgt := $(strip $(1))) \
+ $(eval _tgt := $(subst //,/,$(strip $(1)))) \
$(eval ALL_NON_MODULES += $(_tgt)) \
+ $(eval ALL_TARGETS.$(_tgt).META_LIC := $(call license-metadata-dir,$(1))/$(patsubst $(OUT_DIR)%,out%,$(_tgt)).meta_lic) \
$(eval ALL_NON_MODULES.$(_tgt).DEPENDENCIES := $(strip $(ALL_NON_MODULES.$(_tgt).DEPENDENCIES) $(2))) \
)
endef
@@ -843,8 +918,9 @@
###########################################################
define declare-container-license-deps
$(strip \
- $(eval _tgt := $(strip $(1))) \
+ $(eval _tgt := $(subst //,/,$(strip $(1)))) \
$(eval ALL_NON_MODULES += $(_tgt)) \
+ $(eval ALL_TARGETS.$(_tgt).META_LIC := $(call license-metadata-dir,$(1))/$(patsubst $(OUT_DIR)%,out%,$(_tgt)).meta_lic) \
$(eval ALL_NON_MODULES.$(_tgt).DEPENDENCIES := $(strip $(ALL_NON_MODULES.$(_tgt).DEPENDENCIES) $(2))) \
$(eval ALL_NON_MODULES.$(_tgt).IS_CONTAINER := true) \
$(eval ALL_NON_MODULES.$(_tgt).ROOT_MAPPINGS := $(strip $(ALL_NON_MODULES.$(_tgt).ROOT_MAPPINGS) $(3))) \
@@ -856,12 +932,14 @@
###########################################################
define report-missing-licenses-rule
.PHONY: reportmissinglicenses
-reportmissinglicenses: PRIVATE_NON_MODULES:=$(sort $(NON_MODULES_WITHOUT_LICENSE_METADATA))
-reportmissinglicenses: PRIVATE_COPIED_FILES:=$(sort $(filter $(NON_MODULES_WITHOUT_LICENSE_METADATA),$(foreach _pair,$(PRODUCT_COPY_FILES), $(PRODUCT_OUT)/$(call word-colon,2,$(_pair)))))
+reportmissinglicenses: PRIVATE_NON_MODULES:=$(sort $(NON_MODULES_WITHOUT_LICENSE_METADATA) $(TARGETS_MISSING_LICENSE_METADATA))
+reportmissinglicenses: PRIVATE_COPIED_FILES:=$(sort $(filter $(NON_MODULES_WITHOUT_LICENSE_METADATA) $(TARGETS_MISSING_LICENSE_METADATA),\
+ $(foreach _pair,$(PRODUCT_COPY_FILES), $(PRODUCT_OUT)/$(call word-colon,2,$(_pair)))))
reportmissinglicenses:
@echo Reporting $$(words $$(PRIVATE_NON_MODULES)) targets without license metadata
$$(foreach t,$$(PRIVATE_NON_MODULES),if ! [ -h $$(t) ]; then echo No license metadata for $$(t) >&2; fi;)
$$(foreach t,$$(PRIVATE_COPIED_FILES),if ! [ -h $$(t) ]; then echo No license metadata for copied file $$(t) >&2; fi;)
+ echo $$(words $$(PRIVATE_NON_MODULES)) targets missing license metadata >&2
endef
@@ -883,7 +961,7 @@
$(strip $(eval _all := $(call all-license-metadata)))
.PHONY: reportallnoticelibrarynames
-reportallnoticelibrarynames: PRIVATE_LIST_FILE := $(call license-metadata-dir)/filelist
+reportallnoticelibrarynames: PRIVATE_LIST_FILE := $(call license-metadata-dir,COMMON)/filelist
reportallnoticelibrarynames: | $(COMPLIANCENOTICE_SHIPPEDLIBS)
reportallnoticelibrarynames: $(_all)
@echo Reporting notice library names for at least $$(words $(_all)) license metadata files
@@ -910,17 +988,12 @@
###########################################################
define build-license-metadata
$(strip \
- $(strip $(eval _dir := $(call license-metadata-dir))) \
$(foreach t,$(sort $(ALL_0P_TARGETS)), \
$(eval ALL_TARGETS.$(t).META_LIC := 0p) \
) \
- $(foreach t,$(sort $(ALL_NON_MODULES)), \
- $(eval ALL_TARGETS.$(t).META_LIC := $(call append-path,$(_dir),$(patsubst $(OUT_DIR)%,out%,$(t).meta_lic))) \
- ) \
+ $(foreach t,$(sort $(ALL_COPIED_TARGETS)),$(eval $(call copied-target-license-metadata-rule,$(t)))) \
$(foreach t,$(sort $(ALL_NON_MODULES)),$(eval $(call non-module-license-metadata-rule,$(t)))) \
$(foreach m,$(sort $(ALL_MODULES)),$(eval $(call license-metadata-rule,$(m)))) \
- $(eval $(call report-missing-licenses-rule)) \
- $(eval $(call report-all-notice-library-names-rule)) \
$(eval $(call build-all-license-metadata-rule)))
endef
@@ -992,6 +1065,22 @@
)
endef
+# Uses LOCAL_MODULE_CLASS, LOCAL_MODULE, and LOCAL_IS_HOST_MODULE
+# to determine the intermediates directory.
+#
+# $(1): if non-empty, force the intermediates to be COMMON
+# $(2): if non-empty, force the intermediates to be for the 2nd arch
+# $(3): if non-empty, force the intermediates to be for the host cross os
+define local-meta-intermediates-dir
+$(strip \
+ $(if $(strip $(LOCAL_MODULE_CLASS)),, \
+ $(error $(LOCAL_PATH): LOCAL_MODULE_CLASS not defined before call to local-meta-intermediates-dir)) \
+ $(if $(strip $(LOCAL_MODULE)),, \
+ $(error $(LOCAL_PATH): LOCAL_MODULE not defined before call to local-meta-intermediates-dir)) \
+ $(call intermediates-dir-for,META$(LOCAL_MODULE_CLASS),$(LOCAL_MODULE),$(if $(strip $(LOCAL_IS_HOST_MODULE)),HOST),$(1),$(2),$(3)) \
+)
+endef
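Sketch of a call site, with hypothetical module values:

    LOCAL_MODULE_CLASS := ETC
    LOCAL_MODULE := foo.conf
    meta_dir := $(call local-meta-intermediates-dir)
    # resolves to an intermediates dir of class METAETC, roughly
    # $(PRODUCT_OUT)/obj/METAETC/foo.conf_intermediates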
+
###########################################################
## The generated sources directory. Placing generated
## source files directly in the intermediates directory
@@ -2034,6 +2123,7 @@
$(PRIVATE_HOST_GLOBAL_LDFLAGS) \
) \
$(PRIVATE_LDFLAGS) \
+ $(PRIVATE_CRTBEGIN) \
$(PRIVATE_ALL_OBJECTS) \
-Wl,--whole-archive \
$(PRIVATE_ALL_WHOLE_STATIC_LIBRARIES) \
@@ -2042,8 +2132,10 @@
$(PRIVATE_ALL_STATIC_LIBRARIES) \
$(if $(PRIVATE_GROUP_STATIC_LIBRARIES),-Wl$(comma)--end-group) \
$(if $(filter true,$(NATIVE_COVERAGE)),$(PRIVATE_HOST_LIBPROFILE_RT)) \
+ $(PRIVATE_LIBCRT_BUILTINS) \
$(PRIVATE_ALL_SHARED_LIBRARIES) \
-o $@ \
+ $(PRIVATE_CRTEND) \
$(PRIVATE_LDLIBS)
endef
endif
@@ -2177,6 +2269,7 @@
ifneq ($(HOST_CUSTOM_LD_COMMAND),true)
define transform-host-o-to-executable-inner
$(hide) $(PRIVATE_CXX_LINK) \
+ $(PRIVATE_CRTBEGIN) \
$(PRIVATE_ALL_OBJECTS) \
-Wl,--whole-archive \
$(PRIVATE_ALL_WHOLE_STATIC_LIBRARIES) \
@@ -2185,6 +2278,7 @@
$(PRIVATE_ALL_STATIC_LIBRARIES) \
$(if $(PRIVATE_GROUP_STATIC_LIBRARIES),-Wl$(comma)--end-group) \
$(if $(filter true,$(NATIVE_COVERAGE)),$(PRIVATE_HOST_LIBPROFILE_RT)) \
+ $(PRIVATE_LIBCRT_BUILTINS) \
$(PRIVATE_ALL_SHARED_LIBRARIES) \
$(foreach path,$(PRIVATE_RPATHS), \
-Wl,-rpath,\$$ORIGIN/$(path)) \
@@ -2193,6 +2287,7 @@
) \
$(PRIVATE_LDFLAGS) \
-o $@ \
+ $(PRIVATE_CRTEND) \
$(PRIVATE_LDLIBS)
endef
endif
@@ -2411,7 +2506,127 @@
@$(call emit-line,$(wordlist 38001,38500,$(1)),$(2))
@$(call emit-line,$(wordlist 38501,39000,$(1)),$(2))
@$(call emit-line,$(wordlist 39001,39500,$(1)),$(2))
- @$(if $(wordlist 39501,39502,$(1)),$(error Too many words ($(words $(1)))))
+ @$(call emit-line,$(wordlist 39501,40000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 40001,40500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 40501,41000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 41001,41500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 41501,42000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 42001,42500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 42501,43000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 43001,43500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 43501,44000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 44001,44500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 44501,45000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 45001,45500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 45501,46000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 46001,46500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 46501,47000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 47001,47500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 47501,48000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 48001,48500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 48501,49000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 49001,49500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 49501,50000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 50001,50500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 50501,51000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 51001,51500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 51501,52000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 52001,52500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 52501,53000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 53001,53500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 53501,54000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 54001,54500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 54501,55000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 55001,55500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 55501,56000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 56001,56500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 56501,57000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 57001,57500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 57501,58000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 58001,58500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 58501,59000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 59001,59500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 59501,60000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 60001,60500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 60501,61000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 61001,61500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 61501,62000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 62001,62500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 62501,63000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 63001,63500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 63501,64000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 64001,64500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 64501,65000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 65001,65500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 65501,66000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 66001,66500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 66501,67000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 67001,67500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 67501,68000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 68001,68500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 68501,69000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 69001,69500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 69501,70000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 70001,70500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 70501,71000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 71001,71500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 71501,72000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 72001,72500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 72501,73000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 73001,73500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 73501,74000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 74001,74500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 74501,75000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 75001,75500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 75501,76000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 76001,76500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 76501,77000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 77001,77500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 77501,78000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 78001,78500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 78501,79000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 79001,79500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 79501,80000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 80001,80500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 80501,81000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 81001,81500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 81501,82000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 82001,82500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 82501,83000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 83001,83500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 83501,84000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 84001,84500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 84501,85000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 85001,85500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 85501,86000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 86001,86500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 86501,87000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 87001,87500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 87501,88000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 88001,88500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 88501,89000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 89001,89500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 89501,90000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 90001,90500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 90501,91000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 91001,91500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 91501,92000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 92001,92500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 92501,93000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 93001,93500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 93501,94000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 94001,94500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 94501,95000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 95001,95500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 95501,96000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 96001,96500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 96501,97000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 97001,97500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 97501,98000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 98001,98500,$(1)),$(2))
+ @$(call emit-line,$(wordlist 98501,99000,$(1)),$(2))
+ @$(call emit-line,$(wordlist 99001,99500,$(1)),$(2))
+ @$(if $(wordlist 99501,99502,$(1)),$(error dump-words-to-file: Too many words ($(words $(1)))))
endef
# Return jar arguments to compress files in a given directory
# $(1): directory
@@ -2477,8 +2692,6 @@
$(if $(PRIVATE_SRCJARS),\@$(PRIVATE_SRCJAR_LIST_FILE)) \
|| ( rm -rf $(PRIVATE_CLASS_INTERMEDIATES_DIR) ; exit 41 ) \
fi
-$(if $(PRIVATE_JAVA_LAYERS_FILE), $(hide) build/make/tools/java-layers.py \
- $(PRIVATE_JAVA_LAYERS_FILE) @$(PRIVATE_JAVA_SOURCE_LIST),)
$(if $(PRIVATE_JAR_EXCLUDE_FILES), $(hide) find $(PRIVATE_CLASS_INTERMEDIATES_DIR) \
-name $(word 1, $(PRIVATE_JAR_EXCLUDE_FILES)) \
$(addprefix -o -name , $(wordlist 2, 999, $(PRIVATE_JAR_EXCLUDE_FILES))) \
@@ -2609,7 +2822,7 @@
@mkdir -p $(dir $@)tmp
$(hide) rm -f $(dir $@)classes*.dex $(dir $@)d8_input.jar
$(hide) $(ZIP2ZIP) -j -i $< -o $(dir $@)d8_input.jar "**/*.class"
-$(hide) $(D8_WRAPPER) $(DX_COMMAND) $(DEX_FLAGS) \
+$(hide) $(D8_WRAPPER) $(D8_COMMAND) \
--output $(dir $@)tmp \
$(addprefix --lib ,$(PRIVATE_D8_LIBS)) \
--min-api $(PRIVATE_MIN_SDK_VERSION) \
@@ -2747,7 +2960,7 @@
$(extract-package) \
echo "Module name in Android tree: $(PRIVATE_MODULE)" >> $(PRODUCT_OUT)/appcompat/$(PRIVATE_MODULE).log && \
echo "Local path in Android tree: $(PRIVATE_PATH)" >> $(PRODUCT_OUT)/appcompat/$(PRIVATE_MODULE).log && \
- echo "Install path on $(TARGET_PRODUCT)-$(TARGET_BUILD_VARIANT): $(PRIVATE_INSTALLED_MODULE)" >> $(PRODUCT_OUT)/appcompat/$(PRIVATE_MODULE).log && \
+ echo "Install path: $(patsubst $(PRODUCT_OUT)/%,%,$(PRIVATE_INSTALLED_MODULE))" >> $(PRODUCT_OUT)/appcompat/$(PRIVATE_MODULE).log && \
echo >> $(PRODUCT_OUT)/appcompat/$(PRIVATE_MODULE).log
endef
ART_VERIDEX_APPCOMPAT_SCRIPT:=$(HOST_OUT)/bin/appcompat.sh
@@ -2851,6 +3064,19 @@
$$(copy-file-to-target)
endef
+# Define a rule to copy a license metadata file. For use via $(eval).
+# $(1): source license metadata file
+# $(2): destination license metadata file
+# $(3): built targets
+# $(4): installed targets
+define copy-one-license-metadata-file
+$(2): PRIVATE_BUILT=$(3)
+$(2): PRIVATE_INSTALLED=$(4)
+$(2): $(1)
+ @echo "Copy: $$@"
+ $$(call copy-license-metadata-file-to-target,$$(PRIVATE_BUILT),$$(PRIVATE_INSTALLED))
+endef
+
define copy-and-uncompress-dexs
$(2): $(1) $(ZIPALIGN) $(ZIP2ZIP)
@echo "Uncompress dexs in: $$@"
@@ -2899,7 +3125,7 @@
# $(2): destination file
define copy-init-script-file-checked
ifdef TARGET_BUILD_UNBUNDLED
-# TODO (b/185624993): Remove the chck on TARGET_BUILD_UNBUNDLED when host_init_verifier can run
+# TODO (b/185624993): Remove the check on TARGET_BUILD_UNBUNDLED when host_init_verifier can run
# without requiring the HIDL interface map.
$(2): $(1)
else ifneq ($(HOST_OS),darwin)
@@ -3038,6 +3264,17 @@
$(hide) cp "$<" "$@"
endef
+# Same as copy-file-to-target, but assumes the file is a license metadata file,
+# and appends built targets from $(1) and installed targets from $(2).
+define copy-license-metadata-file-to-target
+@mkdir -p $(dir $@)
+$(hide) rm -f $@
+$(hide) cp "$<" "$@" $(strip \
+ $(foreach b,$(1), && (grep -F 'built: "'"$(b)"'"' "$@" >/dev/null || echo 'built: "'"$(b)"'"' >>"$@")) \
+ $(foreach i,$(2), && (grep -F 'installed: "'"$(i)"'"' "$@" >/dev/null || echo 'installed: "'"$(i)"'"' >>"$@")) \
+)
+endef
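The effect is that a copied .meta_lic file gains appended records for its new locations, e.g. (paths hypothetical):

    built: "out/target/product/generic/obj/ETC/foo.conf_intermediates/foo.conf"
    installed: "out/target/product/generic/system/etc/foo.conf"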
+
# The same as copy-file-to-target, but use the local
# cp command instead of acp.
define copy-file-to-target-with-cp
@@ -3214,7 +3451,7 @@
define transform-jar-to-dex-r8
@echo R8: $@
$(hide) rm -f $(PRIVATE_PROGUARD_DICTIONARY)
-$(hide) $(R8_WRAPPER) $(R8_COMPAT_PROGUARD) $(DEX_FLAGS) \
+$(hide) $(R8_WRAPPER) $(R8_COMMAND) \
-injars '$<' \
--min-api $(PRIVATE_MIN_SDK_VERSION) \
--no-data-resources \
@@ -3361,8 +3598,6 @@
STATIC_TEST_LIBRARY \
HOST_STATIC_TEST_LIBRARY \
NOTICE_FILE \
- HOST_DALVIK_JAVA_LIBRARY \
- HOST_DALVIK_STATIC_JAVA_LIBRARY \
base_rules \
HEADER_LIBRARY \
HOST_TEST_CONFIG \
@@ -3405,12 +3640,12 @@
define create-suite-dependencies
$(foreach suite, $(LOCAL_COMPATIBILITY_SUITE), \
$(eval $(if $(strip $(module_license_metadata)),\
- $$(foreach f,$$(my_compat_dist_$(suite)),$$(eval ALL_TARGETS.$$(call word-colon,2,$$(f)).META_LIC := $(module_license_metadata))),\
- $$(eval my_test_data += $$(foreach f,$$(my_compat_dist_$(suite)), $$(call word-colon,2,$$(f)))) \
+ $$(foreach f,$$(my_compat_dist_$(suite)),$$(call declare-copy-target-license-metadata,$$(call word-colon,2,$$(f)),$$(call word-colon,1,$$(f)))),\
+ $$(eval my_test_data += $$(my_compat_dist_$(suite))) \
)) \
$(eval $(if $(strip $(module_license_metadata)),\
- $$(foreach f,$$(my_compat_dist_config_$(suite)),$$(eval ALL_TARGETS.$$(call word-colon,2,$$(f)).META_LIC := $(module_license_metadata))),\
- $$(eval my_test_config += $$(foreach f,$$(my_compat_dist_config_$(suite)), $$(call word-colon,2,$$(f)))) \
+ $$(foreach f,$$(my_compat_dist_config_$(suite)),$$(call declare-copy-target-license-metadata,$$(call word-colon,2,$$(f)),$$(call word-colon,1,$$(f)))),\
+ $$(eval my_test_config += $$(my_compat_dist_config_$(suite))) \
)) \
$(if $(filter $(suite),$(ALL_COMPATIBILITY_SUITES)),,\
$(eval ALL_COMPATIBILITY_SUITES += $(suite)) \
@@ -3688,6 +3923,10 @@
-include $(TOPDIR)vendor/*/build/core/definitions.mk
-include $(TOPDIR)device/*/build/core/definitions.mk
-include $(TOPDIR)product/*/build/core/definitions.mk
+# Also include any project-specific definitions.mk files.
+-include $(TOPDIR)vendor/*/*/build/core/definitions.mk
+-include $(TOPDIR)device/*/*/build/core/definitions.mk
+-include $(TOPDIR)product/*/*/build/core/definitions.mk
# broken:
# $(foreach file,$^,$(if $(findstring,.a,$(suffix $file)),-l$(file),$(file)))
diff --git a/core/deprecation.mk b/core/deprecation.mk
index 2b7a869..ed4215e 100644
--- a/core/deprecation.mk
+++ b/core/deprecation.mk
@@ -3,8 +3,6 @@
BUILD_EXECUTABLE \
BUILD_FUZZ_TEST \
BUILD_HEADER_LIBRARY \
- BUILD_HOST_DALVIK_JAVA_LIBRARY \
- BUILD_HOST_DALVIK_STATIC_JAVA_LIBRARY \
BUILD_HOST_JAVA_LIBRARY \
BUILD_HOST_PREBUILT \
BUILD_JAVA_LIBRARY \
@@ -39,6 +37,8 @@
OBSOLETE_BUILD_MODULE_TYPES :=$= \
BUILD_AUX_EXECUTABLE \
BUILD_AUX_STATIC_LIBRARY \
+ BUILD_HOST_DALVIK_JAVA_LIBRARY \
+ BUILD_HOST_DALVIK_STATIC_JAVA_LIBRARY \
BUILD_HOST_FUZZ_TEST \
BUILD_HOST_NATIVE_TEST \
BUILD_HOST_SHARED_TEST_LIBRARY \
diff --git a/core/device.mk b/core/device.mk
deleted file mode 100644
index 20ff447..0000000
--- a/core/device.mk
+++ /dev/null
@@ -1,76 +0,0 @@
-#
-# Copyright (C) 2007 The Android Open Source Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-_device_var_list := \
- DEVICE_NAME \
- DEVICE_BOARD \
- DEVICE_REGION
-
-define dump-device
-$(info ==== $(1) ====)\
-$(foreach v,$(_device_var_list),\
-$(info DEVICES.$(1).$(v) := $(DEVICES.$(1).$(v))))\
-$(info --------)
-endef
-
-define dump-devices
-$(foreach p,$(DEVICES),$(call dump-device,$(p)))
-endef
-
-#
-# $(1): device to inherit
-#
-define inherit-device
- $(foreach v,$(_device_var_list), \
- $(eval $(v) := $($(v)) $(INHERIT_TAG)$(strip $(1))))
-endef
-
-#
-# $(1): device makefile list
-#
-#TODO: check to make sure that devices have all the necessary vars defined
-define import-devices
-$(call import-nodes,DEVICES,$(1),$(_device_var_list))
-endef
-
-
-#
-# $(1): short device name like "sooner"
-#
-define _resolve-short-device-name
- $(eval dn := $(strip $(1)))
- $(eval d := \
- $(foreach d,$(DEVICES), \
- $(if $(filter $(dn),$(DEVICES.$(d).DEVICE_NAME)), \
- $(d) \
- )) \
- )
- $(eval d := $(sort $(d)))
- $(if $(filter 1,$(words $(d))), \
- $(d), \
- $(if $(filter 0,$(words $(d))), \
- $(error No matches for device "$(dn)"), \
- $(error Device "$(dn)" ambiguous: matches $(d)) \
- ) \
- )
-endef
-
-#
-# $(1): short device name like "sooner"
-#
-define resolve-short-device-name
-$(strip $(call _resolve-short-device-name,$(1)))
-endef
diff --git a/core/dex_preopt.mk b/core/dex_preopt.mk
index 593ad66..86ca729 100644
--- a/core/dex_preopt.mk
+++ b/core/dex_preopt.mk
@@ -62,22 +62,92 @@
boot_zip := $(PRODUCT_OUT)/boot.zip
bootclasspath_jars := $(DEXPREOPT_BOOTCLASSPATH_DEX_FILES)
+
+# TODO: remove system_server_jars usages from boot.zip and depend directly on the system_server.zip file.
+
+# Use "/system" path for JARs with "platform:" prefix.
+# These JARs counterintuitively use "platform" prefix but they will
+# be actually installed to /system partition.
+platform_system_server_jars = $(filter platform:%, $(PRODUCT_SYSTEM_SERVER_JARS))
system_server_jars := \
- $(foreach m,$(PRODUCT_SYSTEM_SERVER_JARS),\
+ $(foreach m,$(platform_system_server_jars),\
$(PRODUCT_OUT)/system/framework/$(call word-colon,2,$(m)).jar)
+# For the remaining system server JARs use the partition signified by the prefix.
+# For example, prefix "system_ext:" will use "/system_ext" path.
+other_system_server_jars = $(filter-out $(platform_system_server_jars), $(PRODUCT_SYSTEM_SERVER_JARS))
+system_server_jars += \
+ $(foreach m,$(other_system_server_jars),\
+ $(PRODUCT_OUT)/$(call word-colon,1,$(m))/framework/$(call word-colon,2,$(m)).jar)
+
+# Infix can be 'art' (ART image for testing), 'boot' (primary), or 'mainline' (mainline extension).
+# Soong creates a set of variables for Make, one for each boot image. The only reason why the ART
+# image is exposed to Make is testing (art gtests) and benchmarking (art golem benchmarks). Install
+# rules that use those variables are in dex_preopt_libart.mk. Here for dexpreopt purposes the infix
+# is always 'boot' or 'mainline'.
+DEXPREOPT_INFIX := $(if $(filter true,$(DEX_PREOPT_WITH_UPDATABLE_BCP)),mainline,boot)
+
+# The input variables are written by build/soong/java/dexpreopt_bootjars.go. Examples can be found
+# at the bottom of build/soong/java/dexpreopt_config_testing.go.
+dexpreopt_root_dir := $(dir $(patsubst %/,%,$(dir $(firstword $(bootclasspath_jars)))))
+booclasspath_arg := $(subst $(space),:,$(patsubst $(dexpreopt_root_dir)%,%,$(DEXPREOPT_BOOTCLASSPATH_DEX_FILES)))
+booclasspath_locations_arg := $(subst $(space),:,$(DEXPREOPT_BOOTCLASSPATH_DEX_LOCATIONS))
+boot_images := $(subst :,$(space),$(DEXPREOPT_IMAGE_LOCATIONS_ON_DEVICE$(DEXPREOPT_INFIX)))
+boot_image_arg := $(subst $(space),:,$(patsubst /%,%,$(boot_images)))
+
+boot_zip_metadata_txt := $(dir $(boot_zip))boot_zip/METADATA.txt
+$(boot_zip_metadata_txt):
+ rm -f $@
+ echo "booclasspath = $(booclasspath_arg)" >> $@
+ echo "booclasspath-locations = $(booclasspath_locations_arg)" >> $@
+ echo "boot-image = $(boot_image_arg)" >> $@
+
+$(call dist-for-goals, droidcore, $(boot_zip_metadata_txt))
+
$(boot_zip): PRIVATE_BOOTCLASSPATH_JARS := $(bootclasspath_jars)
$(boot_zip): PRIVATE_SYSTEM_SERVER_JARS := $(system_server_jars)
-$(boot_zip): $(bootclasspath_jars) $(system_server_jars) $(SOONG_ZIP) $(MERGE_ZIPS) $(DEXPREOPT_IMAGE_ZIP_boot) $(DEXPREOPT_IMAGE_ZIP_art)
+$(boot_zip): $(bootclasspath_jars) $(system_server_jars) $(SOONG_ZIP) $(MERGE_ZIPS) $(DEXPREOPT_IMAGE_ZIP_boot) $(DEXPREOPT_IMAGE_ZIP_art) $(DEXPREOPT_IMAGE_ZIP_mainline) $(boot_zip_metadata_txt)
@echo "Create boot package: $@"
rm -f $@
$(SOONG_ZIP) -o $@.tmp \
-C $(dir $(firstword $(PRIVATE_BOOTCLASSPATH_JARS)))/.. $(addprefix -f ,$(PRIVATE_BOOTCLASSPATH_JARS)) \
- -C $(PRODUCT_OUT) $(addprefix -f ,$(PRIVATE_SYSTEM_SERVER_JARS))
- $(MERGE_ZIPS) $@ $@.tmp $(DEXPREOPT_IMAGE_ZIP_boot) $(DEXPREOPT_IMAGE_ZIP_art)
+ -C $(PRODUCT_OUT) $(addprefix -f ,$(PRIVATE_SYSTEM_SERVER_JARS)) \
+ -j -f $(boot_zip_metadata_txt)
+ $(MERGE_ZIPS) $@ $@.tmp $(DEXPREOPT_IMAGE_ZIP_boot) $(DEXPREOPT_IMAGE_ZIP_art) $(DEXPREOPT_IMAGE_ZIP_mainline)
rm -f $@.tmp
$(call dist-for-goals, droidcore, $(boot_zip))
+ifneq (,$(filter true,$(ART_MODULE_BUILD_FROM_SOURCE) $(MODULE_BUILD_FROM_SOURCE)))
+# Build system_server.zip, which contains the APEX system server jars and the standalone system server jars
+system_server_zip := $(PRODUCT_OUT)/system_server.zip
+apex_system_server_jars := \
+ $(foreach m,$(PRODUCT_APEX_SYSTEM_SERVER_JARS),\
+ $(PRODUCT_OUT)/apex/$(call word-colon,1,$(m))/javalib/$(call word-colon,2,$(m)).jar)
+
+apex_standalone_system_server_jars := \
+ $(foreach m,$(PRODUCT_APEX_STANDALONE_SYSTEM_SERVER_JARS),\
+ $(PRODUCT_OUT)/apex/$(call word-colon,1,$(m))/javalib/$(call word-colon,2,$(m)).jar)
+
+standalone_system_server_jars := \
+ $(foreach m,$(PRODUCT_STANDALONE_SYSTEM_SERVER_JARS),\
+ $(PRODUCT_OUT)/apex/$(call word-colon,1,$(m))/javalib/$(call word-colon,2,$(m)).jar)
+
+$(system_server_zip): PRIVATE_SYSTEM_SERVER_JARS := $(system_server_jars)
+$(system_server_zip): PRIVATE_APEX_SYSTEM_SERVER_JARS := $(apex_system_server_jars)
+$(system_server_zip): PRIVATE_APEX_STANDALONE_SYSTEM_SERVER_JARS := $(apex_standalone_system_server_jars)
+$(system_server_zip): PRIVATE_STANDALONE_SYSTEM_SERVER_JARS := $(standalone_system_server_jars)
+$(system_server_zip): $(system_server_jars) $(apex_system_server_jars) $(apex_standalone_system_server_jars) $(standalone_system_server_jars) $(SOONG_ZIP)
+ @echo "Create system server package: $@"
+ rm -f $@
+ $(SOONG_ZIP) -o $@ \
+ -C $(PRODUCT_OUT) $(addprefix -f ,$(PRIVATE_SYSTEM_SERVER_JARS)) \
+ -C $(PRODUCT_OUT) $(addprefix -f ,$(PRIVATE_APEX_SYSTEM_SERVER_JARS)) \
+ -C $(PRODUCT_OUT) $(addprefix -f ,$(PRIVATE_APEX_STANDALONE_SYSTEM_SERVER_JARS)) \
+ -C $(PRODUCT_OUT) $(addprefix -f ,$(PRIVATE_STANDALONE_SYSTEM_SERVER_JARS))
+
+$(call dist-for-goals, droidcore, $(system_server_zip))
+
+endif #ART_MODULE_BUILD_FROM_SOURCE || MODULE_BUILD_FROM_SOURCE
endif #PRODUCT_USES_DEFAULT_ART_CONFIG
endif #WITH_DEXPREOPT
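
The partition-prefix handling above leans on the build system's `word-colon` helper to split `partition:name` entries. A minimal standalone sketch of the same idea (product configuration hypothetical):

```
empty :=
space := $(empty) $(empty)
# word-colon as defined by the build system: word $(1) of $(2), splitting on ':'.
word-colon = $(word $(1),$(subst :,$(space),$(2)))

# Hypothetical configuration: "platform:" entries install under /system,
# any other prefix names the partition directly.
PRODUCT_SYSTEM_SERVER_JARS := platform:services system_ext:foo
$(foreach m,$(PRODUCT_SYSTEM_SERVER_JARS),\
  $(info $(m) -> /$(patsubst platform,system,$(call word-colon,1,$(m)))/framework/$(call word-colon,2,$(m)).jar))
# Prints:
#   platform:services -> /system/framework/services.jar
#   system_ext:foo -> /system_ext/framework/foo.jar
```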
diff --git a/core/dex_preopt_config.mk b/core/dex_preopt_config.mk
index d5293cf..7b9c4db 100644
--- a/core/dex_preopt_config.mk
+++ b/core/dex_preopt_config.mk
@@ -12,9 +12,15 @@
# would result in passing bad arguments to dex2oat and failing the build.
ENABLE_PREOPT :=
ENABLE_PREOPT_BOOT_IMAGES :=
-else ifeq (true,$(DISABLE_PREOPT))
- # Disable dexpreopt for libraries/apps, but do compile boot images.
- ENABLE_PREOPT :=
+else
+ ifeq (true,$(DISABLE_PREOPT))
+ # Disable dexpreopt for libraries/apps; boot images may still be compiled.
+ ENABLE_PREOPT :=
+ endif
+ ifeq (true,$(DISABLE_PREOPT_BOOT_IMAGES))
+ # Disable dexpreopt for boot images; libraries/apps may still be compiled.
+ ENABLE_PREOPT_BOOT_IMAGES :=
+ endif
endif
# The default value for LOCAL_DEX_PREOPT
@@ -96,7 +102,6 @@
$(call add_json_list, DisablePreoptModules, $(DEXPREOPT_DISABLED_MODULES))
$(call add_json_bool, OnlyPreoptBootImageAndSystemServer, $(filter true,$(WITH_DEXPREOPT_BOOT_IMG_AND_SYSTEM_SERVER_ONLY)))
$(call add_json_bool, PreoptWithUpdatableBcp, $(filter true,$(DEX_PREOPT_WITH_UPDATABLE_BCP)))
- $(call add_json_bool, UseArtImage, $(filter true,$(DEXPREOPT_USE_ART_IMAGE)))
$(call add_json_bool, DontUncompressPrivAppsDex, $(filter true,$(DONT_UNCOMPRESS_PRIV_APPS_DEXS)))
$(call add_json_list, ModulesLoadedByPrivilegedModules, $(PRODUCT_LOADED_BY_PRIVILEGED_MODULES))
$(call add_json_bool, HasSystemOther, $(BOARD_USES_SYSTEM_OTHER_ODEX))
@@ -131,6 +136,7 @@
$(call add_json_str, Dex2oatXmx, $(DEX2OAT_XMX))
$(call add_json_str, Dex2oatXms, $(DEX2OAT_XMS))
$(call add_json_str, EmptyDirectory, $(OUT_DIR)/empty)
+ $(call add_json_bool, EnableUffdGc, $(filter true,$(ENABLE_UFFD_GC)))
ifdef TARGET_ARCH
$(call add_json_map, CpuVariant)
diff --git a/core/dex_preopt_odex_install.mk b/core/dex_preopt_odex_install.mk
index ea50313..8ebf34e 100644
--- a/core/dex_preopt_odex_install.mk
+++ b/core/dex_preopt_odex_install.mk
@@ -245,7 +245,7 @@
$(my_enforced_uses_libraries): PRIVATE_OPTIONAL_USES_LIBRARIES := $(my_optional_uses_libs_args)
$(my_enforced_uses_libraries): PRIVATE_DEXPREOPT_CONFIGS := $(my_dexpreopt_config_args)
$(my_enforced_uses_libraries): PRIVATE_RELAX_CHECK := $(my_relax_check_arg)
- $(my_enforced_uses_libraries): $(AAPT)
+ $(my_enforced_uses_libraries): $(AAPT2)
$(my_enforced_uses_libraries): $(my_verify_script)
$(my_enforced_uses_libraries): $(my_dexpreopt_dep_configs)
$(my_enforced_uses_libraries): $(my_manifest_or_apk)
@@ -254,7 +254,7 @@
$(my_verify_script) \
--enforce-uses-libraries \
--enforce-uses-libraries-status $@ \
- --aapt $(AAPT) \
+ --aapt $(AAPT2) \
$(PRIVATE_USES_LIBRARIES) \
$(PRIVATE_OPTIONAL_USES_LIBRARIES) \
$(PRIVATE_DEXPREOPT_CONFIGS) \
@@ -272,11 +272,8 @@
my_dexpreopt_images_deps :=
my_dexpreopt_image_locations_on_host :=
my_dexpreopt_image_locations_on_device :=
-my_dexpreopt_infix := boot
+my_dexpreopt_infix := $(DEXPREOPT_INFIX)
my_create_dexpreopt_config :=
-ifeq (true, $(DEXPREOPT_USE_ART_IMAGE))
- my_dexpreopt_infix := art
-endif
ifdef LOCAL_DEX_PREOPT
ifeq (,$(filter PRESIGNED,$(LOCAL_CERTIFICATE)))
@@ -445,6 +442,7 @@
my_dexpreopt_script := $(intermediates)/dexpreopt.sh
my_dexpreopt_zip := $(intermediates)/dexpreopt.zip
+ DEXPREOPT.$(LOCAL_MODULE).POST_INSTALLED_DEXPREOPT_ZIP := $(my_dexpreopt_zip)
.KATI_RESTAT: $(my_dexpreopt_script)
$(my_dexpreopt_script): PRIVATE_MODULE := $(LOCAL_MODULE)
$(my_dexpreopt_script): PRIVATE_GLOBAL_SOONG_CONFIG := $(DEX_PREOPT_SOONG_CONFIG_FOR_MAKE)
@@ -504,4 +502,4 @@
my_dexpreopt_zip :=
my_dexpreopt_config_for_postprocessing :=
endif # LOCAL_DEX_PREOPT
-endif # my_create_dexpreopt_config
\ No newline at end of file
+endif # my_create_dexpreopt_config
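
The DEXPREOPT.<module>.POST_INSTALLED_DEXPREOPT_ZIP export added above is read back later (the sbom-metadata.csv rule in core/main.mk below consumes it). A minimal sketch of this namespaced-variable pattern, with hypothetical names:

```
# Per-module values live in variables namespaced by the module name and are
# read back by interpolating that name.
DEXPREOPT.MyApp.POST_INSTALLED_DEXPREOPT_ZIP := out/obj/APPS/MyApp_intermediates/dexpreopt.zip
m := MyApp
$(info $(DEXPREOPT.$(m).POST_INSTALLED_DEXPREOPT_ZIP))
```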
diff --git a/core/distdir.mk b/core/distdir.mk
index aad8ff3..bce8e7f 100644
--- a/core/distdir.mk
+++ b/core/distdir.mk
@@ -45,6 +45,140 @@
$(eval _all_dist_goal_output_pairs += $$(goal):$$(dst))))
endef
+.PHONY: shareprojects
+
+define __share-projects-rule
+$(1) : PRIVATE_TARGETS := $(2)
+$(1): $(2) $(COMPLIANCE_LISTSHARE)
+ $(hide) rm -f $$@
+ mkdir -p $$(dir $$@)
+ $$(if $$(strip $$(PRIVATE_TARGETS)),OUT_DIR=$(OUT_DIR) $(COMPLIANCE_LISTSHARE) -o $$@ $$(PRIVATE_TARGETS),touch $$@)
+endef
+
+# build list of projects to share in $(1) for meta_lic in $(2)
+#
+# $(1): the intermediate project sharing file
+# $(2): the license metadata to base the sharing on
+define _share-projects-rule
+$(eval $(call __share-projects-rule,$(1),$(2)))
+endef
+
+.PHONY: alllicensetexts
+
+define __license-texts-rule
+$(2) : PRIVATE_GOAL := $(1)
+$(2) : PRIVATE_TARGETS := $(3)
+$(2) : PRIVATE_ROOTS := $(4)
+$(2) : PRIVATE_ARGUMENT_FILE := $(call intermediates-dir-for,METAPACKAGING,licensetexts)/$(2)/arguments
+$(2): $(3) $(TEXTNOTICE)
+ $(hide) rm -f $$@
+ mkdir -p $$(dir $$@)
+ mkdir -p $$(dir $$(PRIVATE_ARGUMENT_FILE))
+ $$(if $$(strip $$(PRIVATE_TARGETS)),$$(call dump-words-to-file,\
+ -product="$$(PRIVATE_GOAL)" -title="$$(PRIVATE_GOAL)" \
+ $$(addprefix -strip_prefix ,$$(PRIVATE_ROOTS)) \
+ -strip_prefix=$(PRODUCT_OUT)/ -strip_prefix=$(HOST_OUT)/\
+ $$(PRIVATE_TARGETS),\
+ $$(PRIVATE_ARGUMENT_FILE)))
+ $$(if $$(strip $$(PRIVATE_TARGETS)),OUT_DIR=$(OUT_DIR) $(TEXTNOTICE) -o $$@ @$$(PRIVATE_ARGUMENT_FILE),touch $$@)
+endef
+
+# build list of projects to share in $(2) for meta_lic in $(3) for dist goals $(1)
+# Strip `out/dist/` used as proxy for 'DIST_DIR'
+#
+# $(1): the name of the dist goals
+# $(2): the intermediate project sharing file
+# $(3): the license metadata to base the sharing on
+define _license-texts-rule
+$(eval $(call __license-texts-rule,$(1),$(2),$(3),out/dist/))
+endef
+
+###########################################################
+## License metadata build rule for dist target $(1) with meta_lic $(2) copied from $(3)
+###########################################################
+define _dist-target-license-metadata-rule
+$(strip $(eval _meta :=$(2)))
+$(strip $(eval _dep:=))
+# 0p is the indicator for a non-copyrightable file where no party owns the copyright,
+# i.e. pure data with no copyrightable expression.
+# If all of the sources are 0p and only 0p, treat the copied file as 0p. Otherwise, all
+# of the sources must either be 0p or originate from a single metadata file to copy.
+$(strip $(foreach s,$(strip $(3)),\
+ $(eval _dmeta:=$(ALL_TARGETS.$(s).META_LIC))\
+ $(if $(strip $(_dmeta)),\
+ $(if $(filter-out 0p,$(_dep)),\
+ $(if $(filter-out $(_dep) 0p,$(_dmeta)),\
+ $(error cannot copy target from multiple modules: $(1) from $(_dep) and $(_dmeta)),\
+ $(if $(filter 0p,$(_dep)),$(eval _dep:=$(_dmeta)))),\
+ $(eval _dep:=$(_dmeta))\
+ ),\
+ $(eval TARGETS_MISSING_LICENSE_METADATA += $(s) $(1)))))
+
+
+ifeq (0p,$(strip $(_dep)))
+# Not copyrightable. No encumbrances, no license text, no license kind, etc.
+$(_meta): PRIVATE_CONDITIONS := unencumbered
+$(_meta): PRIVATE_SOURCES := $(3)
+$(_meta): PRIVATE_INSTALLED := $(1)
+# use `$(1)` which is the unique and relatively short `out/dist/$(target)`
+$(_meta): PRIVATE_ARGUMENT_FILE := $(call intermediates-dir-for,METAPACKAGING,notice)/$(1)/arguments
+$(_meta): $(BUILD_LICENSE_METADATA)
+$(_meta) :
+ rm -f $$@
+ mkdir -p $$(dir $$@)
+ mkdir -p $$(dir $$(PRIVATE_ARGUMENT_FILE))
+ $$(call dump-words-to-file,\
+ $$(addprefix -c ,$$(PRIVATE_CONDITIONS))\
+ $$(addprefix -s ,$$(PRIVATE_SOURCES))\
+ $$(addprefix -t ,$$(PRIVATE_TARGETS))\
+ $$(addprefix -i ,$$(PRIVATE_INSTALLED)),\
+ $$(PRIVATE_ARGUMENT_FILE))
+ OUT_DIR=$(OUT_DIR) $(BUILD_LICENSE_METADATA) \
+ @$$(PRIVATE_ARGUMENT_FILE) \
+ -o $$@
+
+else ifneq (,$(strip $(_dep)))
+# Not a missing target; copy metadata, `is_container`, etc. from the license metadata file `$(_dep)`
+$(_meta): PRIVATE_DEST_TARGET := $(1)
+$(_meta): PRIVATE_SOURCE_TARGETS := $(3)
+$(_meta): PRIVATE_SOURCE_METADATA := $(_dep)
+# use `$(1)` which is the unique and relatively short `out/dist/$(target)`
+$(_meta): PRIVATE_ARGUMENT_FILE := $(call intermediates-dir-for,METAPACKAGING,copynotice)/$(1)/arguments
+$(_meta) : $(_dep) $(COPY_LICENSE_METADATA)
+ rm -f $$@
+ mkdir -p $$(dir $$@)
+ mkdir -p $$(dir $$(PRIVATE_ARGUMENT_FILE))
+ $$(call dump-words-to-file,\
+ $$(addprefix -i ,$$(PRIVATE_DEST_TARGET))\
+ $$(addprefix -s ,$$(PRIVATE_SOURCE_TARGETS))\
+ $$(addprefix -d ,$$(PRIVATE_SOURCE_METADATA)),\
+ $$(PRIVATE_ARGUMENT_FILE))
+ OUT_DIR=$(OUT_DIR) $(COPY_LICENSE_METADATA) \
+ @$$(PRIVATE_ARGUMENT_FILE) \
+ -o $$@
+
+endif
+endef
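
To make the source-resolution logic above concrete, here is a hedged standalone sketch over hypothetical sources (real values come from ALL_TARGETS.<target>.META_LIC):

```
# Hypothetical: two pure-data (0p) sources and one source with real metadata.
ALL_TARGETS.a.META_LIC := 0p
ALL_TARGETS.b.META_LIC := out/obj/PACKAGING/libfoo.meta_lic
ALL_TARGETS.c.META_LIC := 0p

_dep :=
$(foreach s,a b c,\
  $(eval _dmeta := $(ALL_TARGETS.$(s).META_LIC))\
  $(if $(filter-out 0p,$(_dep)),\
    $(if $(filter-out $(_dep) 0p,$(_dmeta)),\
      $(error cannot copy target from multiple modules)),\
    $(eval _dep := $(_dmeta))))
$(info resolved source metadata: $(_dep))
# Prints "resolved source metadata: out/obj/PACKAGING/libfoo.meta_lic"; had all
# three sources been 0p, _dep would stay 0p and the target would be treated as
# unencumbered data.
```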
+
+# use `out/dist/` as a proxy for 'DIST_DIR'
+define _add_projects_to_share
+$(strip $(eval _mdir := $(call intermediates-dir-for,METAPACKAGING,meta)/out/dist)) \
+$(strip $(eval _idir := $(call intermediates-dir-for,METAPACKAGING,shareprojects))) \
+$(strip $(eval _tdir := $(call intermediates-dir-for,METAPACKAGING,licensetexts))) \
+$(strip $(eval _allt := $(sort $(foreach goal,$(_all_dist_goal_output_pairs),$(call word-colon,2,$(goal)))))) \
+$(foreach target,$(_allt), \
+ $(eval _goals := $(sort $(foreach dg,$(filter %:$(target),$(_all_dist_goal_output_pairs)),$(call word-colon,1,$(dg))))) \
+ $(eval _srcs := $(sort $(foreach sdp,$(filter %:$(target),$(_all_dist_src_dst_pairs)),$(call word-colon,1,$(sdp))))) \
+ $(eval $(call _dist-target-license-metadata-rule,out/dist/$(target),$(_mdir)/out/dist/$(target).meta_lic,$(_srcs))) \
+ $(eval _f := $(_idir)/$(target).shareprojects) \
+ $(eval _n := $(_tdir)/$(target).txt) \
+ $(eval $(call dist-for-goals,$(_goals),$(_f):shareprojects/$(target).shareprojects)) \
+ $(eval $(call dist-for-goals,$(_goals),$(_n):licensetexts/$(target).txt)) \
+ $(eval $(call _share-projects-rule,$(_f),$(foreach t, $(filter-out $(TARGETS_MISSING_LICENSE_METADATA),out/dist/$(target)),$(_mdir)/$(t).meta_lic))) \
+ $(eval $(call _license-texts-rule,$(_goals),$(_n),$(foreach t,$(filter-out $(TARGETS_MISSING_LICENSE_METADATA),out/dist/$(target)),$(_mdir)/$(t).meta_lic))) \
+)
+endef
+
#------------------------------------------------------------------
# To be used at the end of the build to collect all the uses of
# dist-for-goals, and write them into a file for the packaging step to use.
@@ -52,6 +186,15 @@
# $(1): The file to write
define dist-write-file
$(strip \
+ $(call _add_projects_to_share)\
+ $(if $(strip $(ANDROID_REQUIRE_LICENSE_METADATA)),\
+ $(foreach target,$(sort $(TARGETS_MISSING_LICENSE_METADATA)),$(warning target $(target) missing license metadata))\
+ $(if $(strip $(TARGETS_MISSING_LICENSE_METADATA)),\
+ $(if $(filter true error,$(ANDROID_REQUIRE_LICENSE_METADATA)),\
+ $(error $(words $(sort $(TARGETS_MISSING_LICENSE_METADATA))) targets need license metadata))))\
+ $(foreach t,$(sort $(ALL_NON_MODULES)),$(call record-missing-non-module-dependencies,$(t))) \
+ $(eval $(call report-missing-licenses-rule)) \
+ $(eval $(call report-all-notice-library-names-rule)) \
$(KATI_obsolete_var dist-for-goals,Cannot be used after dist-write-file) \
$(foreach goal,$(sort $(_all_dist_goals)), \
$(eval $$(goal): _dist_$$(goal))) \
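
A hedged sketch of the enforcement switch in dist-write-file (values hypothetical): any non-empty ANDROID_REQUIRE_LICENSE_METADATA warns once per missing target, and `true` or `error` fails the build.

```
TARGETS_MISSING_LICENSE_METADATA := out/dist/foo.zip out/dist/bar.txt
ANDROID_REQUIRE_LICENSE_METADATA := error

$(foreach t,$(sort $(TARGETS_MISSING_LICENSE_METADATA)),\
  $(warning target $(t) missing license metadata))
$(if $(filter true error,$(ANDROID_REQUIRE_LICENSE_METADATA)),\
  $(error $(words $(sort $(TARGETS_MISSING_LICENSE_METADATA))) targets need license metadata))
```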
diff --git a/core/dumpvar.mk b/core/dumpvar.mk
index 6f3d14f..4f313bf 100644
--- a/core/dumpvar.mk
+++ b/core/dumpvar.mk
@@ -3,18 +3,6 @@
# what to add to the path given the config we have chosen.
ifeq ($(CALLED_FROM_SETUP),true)
-ifneq ($(filter /%,$(SOONG_HOST_OUT_EXECUTABLES)),)
-ABP := $(SOONG_HOST_OUT_EXECUTABLES)
-else
-ABP := $(PWD)/$(SOONG_HOST_OUT_EXECUTABLES)
-endif
-ifneq ($(filter /%,$(HOST_OUT_EXECUTABLES)),)
-ABP := $(ABP):$(HOST_OUT_EXECUTABLES)
-else
-ABP := $(ABP):$(PWD)/$(HOST_OUT_EXECUTABLES)
-endif
-
-ANDROID_BUILD_PATHS := $(ABP)
ANDROID_PREBUILTS := prebuilt/$(HOST_PREBUILT_TAG)
ANDROID_GCC_PREBUILTS := prebuilts/gcc/$(HOST_PREBUILT_TAG)
ANDROID_CLANG_PREBUILTS := prebuilts/clang/host/$(HOST_PREBUILT_TAG)
diff --git a/core/envsetup.mk b/core/envsetup.mk
index c32d380..7dd9b12 100644
--- a/core/envsetup.mk
+++ b/core/envsetup.mk
@@ -135,15 +135,17 @@
HOST_OS := darwin
endif
-HOST_OS_EXTRA := $(shell uname -rsm)
-ifeq ($(HOST_OS),linux)
- ifneq ($(wildcard /etc/os-release),)
- HOST_OS_EXTRA += $(shell source /etc/os-release; echo $$PRETTY_NAME)
+ifeq ($(CALLED_FROM_SETUP),true)
+ HOST_OS_EXTRA := $(shell uname -rsm)
+ ifeq ($(HOST_OS),linux)
+ ifneq ($(wildcard /etc/os-release),)
+ HOST_OS_EXTRA += $(shell source /etc/os-release; echo $$PRETTY_NAME)
+ endif
+ else ifeq ($(HOST_OS),darwin)
+ HOST_OS_EXTRA += $(shell sw_vers -productVersion)
endif
-else ifeq ($(HOST_OS),darwin)
- HOST_OS_EXTRA += $(shell sw_vers -productVersion)
+ HOST_OS_EXTRA := $(subst $(space),-,$(HOST_OS_EXTRA))
endif
-HOST_OS_EXTRA := $(subst $(space),-,$(HOST_OS_EXTRA))
# BUILD_OS is the real host doing the build.
BUILD_OS := $(HOST_OS)
@@ -323,11 +325,11 @@
# likely to be relevant to the product or board configuration.
# Soong config variables are dumped as $(call soong_config_set) calls
# instead of the raw variable values, because mk2rbc can't read the
-# raw ones.
+# raw ones. There is a final sed command on the output file to
+# remove leading spaces because I couldn't figure out how to remove
+# them in pure make code.
define dump-variables-rbc
$(eval _dump_variables_rbc_excluded := \
- BOARD_PLAT_PRIVATE_SEPOLICY_DIR \
- BOARD_PLAT_PUBLIC_SEPOLICY_DIR \
BUILD_NUMBER \
DATE \
LOCAL_PATH \
@@ -347,6 +349,7 @@
$(foreach ns,$(sort $(SOONG_CONFIG_NAMESPACES)),\
$(foreach v,$(sort $(SOONG_CONFIG_$(ns))),\
$$(call soong_config_set,$(ns),$(v),$(SOONG_CONFIG_$(ns)_$(v)))$(newline))))
+$(shell sed -i "s/^ *//g" $(1))
endef
# Read the product specs so we can get TARGET_DEVICE and other
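
The leading spaces that the sed pass removes come from $(foreach): it joins each iteration's expansion with a single space, and when every iteration ends in $(newline) that join space lands at the start of the next generated line. A minimal reproduction (file name hypothetical):

```
define newline


endef

a := 1
b := 2
$(file >demo.rbc,$(foreach v,a b,$(v) := $($(v))$(newline)))
# demo.rbc now contains "a := 1\n b := 2\n" -- note the leading space on the
# second line. The sed pass strips it in place:
$(shell sed -i "s/^ *//g" demo.rbc)
```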
diff --git a/core/generate_enforce_rro.mk b/core/generate_enforce_rro.mk
index 9079981..ed258cc 100644
--- a/core/generate_enforce_rro.mk
+++ b/core/generate_enforce_rro.mk
@@ -38,7 +38,7 @@
LOCAL_FULL_MANIFEST_FILE := $(rro_android_manifest_file)
-LOCAL_AAPT_FLAGS += --auto-add-overlay
+LOCAL_AAPT_FLAGS += --auto-add-overlay --keep-raw-values
LOCAL_RESOURCE_DIR := $(enforce_rro_source_overlays)
ifeq (product,$(enforce_rro_partition))
diff --git a/core/host_dalvik_java_library.mk b/core/host_dalvik_java_library.mk
deleted file mode 100644
index 5eeb8ac..0000000
--- a/core/host_dalvik_java_library.mk
+++ /dev/null
@@ -1,191 +0,0 @@
-#
-# Copyright (C) 2013 The Android Open Source Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-$(call record-module-type,HOST_DALVIK_JAVA_LIBRARY)
-
-#
-# Rules for building a host dalvik java library. These libraries
-# are meant to be used by a dalvik VM instance running on the host.
-# They will be compiled against libcore and not the host JRE.
-#
-
-ifeq ($(HOST_OS),linux)
-USE_CORE_LIB_BOOTCLASSPATH := true
-
-#######################################
-include $(BUILD_SYSTEM)/host_java_library_common.mk
-#######################################
-
-full_classes_turbine_jar := $(intermediates.COMMON)/classes-turbine.jar
-full_classes_header_jarjar := $(intermediates.COMMON)/classes-header-jarjar.jar
-full_classes_header_jar := $(intermediates.COMMON)/classes-header.jar
-full_classes_compiled_jar := $(intermediates.COMMON)/classes-full-debug.jar
-full_classes_combined_jar := $(intermediates.COMMON)/classes-combined.jar
-full_classes_jarjar_jar := $(intermediates.COMMON)/classes-jarjar.jar
-full_classes_jar := $(intermediates.COMMON)/classes.jar
-built_dex := $(intermediates.COMMON)/classes.dex
-java_source_list_file := $(intermediates.COMMON)/java-source-list
-
-LOCAL_INTERMEDIATE_TARGETS += \
- $(full_classes_turbine_jar) \
- $(full_classes_compiled_jar) \
- $(full_classes_combined_jar) \
- $(full_classes_jarjar_jar) \
- $(full_classes_jar) \
- $(built_dex) \
- $(java_source_list_file)
-
-# See comment in java.mk
-ifndef LOCAL_CHECKED_MODULE
-ifeq ($(LOCAL_IS_STATIC_JAVA_LIBRARY),true)
-LOCAL_CHECKED_MODULE := $(full_classes_compiled_jar)
-else
-LOCAL_CHECKED_MODULE := $(built_dex)
-endif
-endif
-
-#######################################
-include $(BUILD_SYSTEM)/base_rules.mk
-#######################################
-java_sources := $(addprefix $(LOCAL_PATH)/, $(filter %.java,$(LOCAL_SRC_FILES))) \
- $(filter %.java,$(LOCAL_GENERATED_SOURCES))
-all_java_sources := $(java_sources)
-
-include $(BUILD_SYSTEM)/java_common.mk
-
-include $(BUILD_SYSTEM)/sdk_check.mk
-
-$(cleantarget): PRIVATE_CLEAN_FILES += $(intermediates.COMMON)
-
-# List of dependencies for anything that needs all java sources in place
-java_sources_deps := \
- $(java_sources) \
- $(java_resource_sources) \
- $(LOCAL_SRCJARS) \
- $(LOCAL_ADDITIONAL_DEPENDENCIES)
-
-$(java_source_list_file): $(java_sources_deps)
- $(write-java-source-list)
-
-# TODO(b/143658984): goma can't handle the --system argument to javac.
-#$(full_classes_compiled_jar): .KATI_NINJA_POOL := $(GOMA_POOL)
-$(full_classes_compiled_jar): PRIVATE_JAVA_LAYERS_FILE := $(layers_file)
-$(full_classes_compiled_jar): PRIVATE_JAVACFLAGS := $(LOCAL_JAVACFLAGS) $(annotation_processor_flags)
-$(full_classes_compiled_jar): PRIVATE_JAR_EXCLUDE_FILES :=
-$(full_classes_compiled_jar): PRIVATE_JAR_PACKAGES :=
-$(full_classes_compiled_jar): PRIVATE_JAR_EXCLUDE_PACKAGES :=
-$(full_classes_compiled_jar): PRIVATE_SRCJARS := $(LOCAL_SRCJARS)
-$(full_classes_compiled_jar): PRIVATE_SRCJAR_LIST_FILE := $(intermediates.COMMON)/srcjar-list
-$(full_classes_compiled_jar): PRIVATE_SRCJAR_INTERMEDIATES_DIR := $(intermediates.COMMON)/srcjars
-$(full_classes_compiled_jar): \
- $(java_source_list_file) \
- $(java_sources_deps) \
- $(full_java_header_libs) \
- $(full_java_bootclasspath_libs) \
- $(full_java_system_modules_deps) \
- $(annotation_processor_deps) \
- $(NORMALIZE_PATH) \
- $(JAR_ARGS) \
- $(ZIPSYNC) \
- $(SOONG_ZIP) \
- | $(SOONG_JAVAC_WRAPPER)
- $(transform-host-java-to-dalvik-package)
-
-ifneq ($(TURBINE_ENABLED),false)
-
-$(full_classes_turbine_jar): PRIVATE_JAVACFLAGS := $(LOCAL_JAVACFLAGS) $(annotation_processor_flags)
-$(full_classes_turbine_jar): PRIVATE_SRCJARS := $(LOCAL_SRCJARS)
-$(full_classes_turbine_jar): \
- $(java_source_list_file) \
- $(java_sources_deps) \
- $(full_java_header_libs) \
- $(full_java_bootclasspath_libs) \
- $(NORMALIZE_PATH) \
- $(JAR_ARGS) \
- $(ZIPTIME) \
- | $(TURBINE) \
- $(MERGE_ZIPS)
- $(transform-java-to-header.jar)
-
-.KATI_RESTAT: $(full_classes_turbine_jar)
-
-# Run jarjar before generate classes-header.jar if necessary.
-ifneq ($(strip $(LOCAL_JARJAR_RULES)),)
-$(full_classes_header_jarjar): PRIVATE_JARJAR_RULES := $(LOCAL_JARJAR_RULES)
-$(full_classes_header_jarjar): $(full_classes_turbine_jar) $(LOCAL_JARJAR_RULES) | $(JARJAR)
- $(call transform-jarjar)
-else
-full_classes_header_jarjar := $(full_classes_turbine_jar)
-endif
-
-$(eval $(call copy-one-file,$(full_classes_header_jarjar),$(full_classes_header_jar)))
-
-endif # TURBINE_ENABLED != false
-
-$(full_classes_combined_jar): PRIVATE_DONT_DELETE_JAR_META_INF := $(LOCAL_DONT_DELETE_JAR_META_INF)
-$(full_classes_combined_jar): $(full_classes_compiled_jar) \
- $(jar_manifest_file) \
- $(full_static_java_libs) | $(MERGE_ZIPS)
- $(if $(PRIVATE_JAR_MANIFEST), $(hide) sed -e "s/%BUILD_NUMBER%/$(BUILD_NUMBER_FROM_FILE)/" \
- $(PRIVATE_JAR_MANIFEST) > $(dir $@)/manifest.mf)
- $(MERGE_ZIPS) -j --ignore-duplicates $(if $(PRIVATE_JAR_MANIFEST),-m $(dir $@)/manifest.mf) \
- $(if $(PRIVATE_DONT_DELETE_JAR_META_INF),,-stripDir META-INF -zipToNotStrip $<) \
- $@ $< $(PRIVATE_STATIC_JAVA_LIBRARIES)
-
-# Run jarjar if necessary, otherwise just copy the file.
-ifneq ($(strip $(LOCAL_JARJAR_RULES)),)
-$(full_classes_jarjar_jar): PRIVATE_JARJAR_RULES := $(LOCAL_JARJAR_RULES)
-$(full_classes_jarjar_jar): $(full_classes_combined_jar) $(LOCAL_JARJAR_RULES) | $(JARJAR)
- $(call transform-jarjar)
-else
-full_classes_jarjar_jar := $(full_classes_combined_jar)
-endif
-
-$(eval $(call copy-one-file,$(full_classes_jarjar_jar),$(full_classes_jar)))
-
-ifeq ($(LOCAL_IS_STATIC_JAVA_LIBRARY),true)
-# No dex; all we want are the .class files with resources.
-$(LOCAL_BUILT_MODULE) : $(java_resource_sources)
-$(LOCAL_BUILT_MODULE) : $(full_classes_jar)
- @echo "host Static Jar: $(PRIVATE_MODULE) ($@)"
- $(copy-file-to-target)
-
-else # !LOCAL_IS_STATIC_JAVA_LIBRARY
-$(built_dex): PRIVATE_INTERMEDIATES_DIR := $(intermediates.COMMON)
-$(built_dex): PRIVATE_DX_FLAGS := $(LOCAL_DX_FLAGS)
-$(built_dex): $(full_classes_jar) $(DX) $(ZIP2ZIP)
- $(transform-classes.jar-to-dex)
-
-$(LOCAL_BUILT_MODULE): PRIVATE_DEX_FILE := $(built_dex)
-$(LOCAL_BUILT_MODULE): PRIVATE_SOURCE_ARCHIVE := $(full_classes_jarjar_jar)
-$(LOCAL_BUILT_MODULE): $(MERGE_ZIPS) $(SOONG_ZIP) $(ZIP2ZIP)
-$(LOCAL_BUILT_MODULE): $(built_dex) $(java_resource_sources)
- @echo "Host Jar: $(PRIVATE_MODULE) ($@)"
- rm -rf $@.parts
- mkdir -p $@.parts
- $(call create-dex-jar,$@.parts/dex.zip,$(PRIVATE_DEX_FILE))
- $(call extract-resources-jar,$@.parts/res.zip,$(PRIVATE_SOURCE_ARCHIVE))
- $(MERGE_ZIPS) -j $@ $@.parts/dex.zip $@.parts/res.zip
- rm -rf $@.parts
-
-endif # !LOCAL_IS_STATIC_JAVA_LIBRARY
-
-$(LOCAL_INTERMEDIATE_TARGETS): PRIVATE_DEFAULT_APP_TARGET_SDK := $(call module-target-sdk-version)
-$(LOCAL_INTERMEDIATE_TARGETS): PRIVATE_SDK_VERSION := $(call module-sdk-version)
-$(LOCAL_INTERMEDIATE_TARGETS): PRIVATE_MIN_SDK_VERSION := $(call codename-or-sdk-to-sdk,$(call module-min-sdk-version))
-
-USE_CORE_LIB_BOOTCLASSPATH :=
-
-endif
diff --git a/core/host_dalvik_static_java_library.mk b/core/host_dalvik_static_java_library.mk
deleted file mode 100644
index 78faf73..0000000
--- a/core/host_dalvik_static_java_library.mk
+++ /dev/null
@@ -1,28 +0,0 @@
-#
-# Copyright (C) 2013 The Android Open Source Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-$(call record-module-type,HOST_DALVIK_STATIC_JAVA_LIBRARY)
-
-#
-# Rules for building a host dalvik static java library.
-# These libraries will be compiled against libcore and not the host
-# JRE.
-#
-LOCAL_UNINSTALLABLE_MODULE := true
-LOCAL_IS_STATIC_JAVA_LIBRARY := true
-
-include $(BUILD_SYSTEM)/host_dalvik_java_library.mk
-
-LOCAL_IS_STATIC_JAVA_LIBRARY :=
diff --git a/core/host_executable_internal.mk b/core/host_executable_internal.mk
index 0cf62a4..2ff9ff2 100644
--- a/core/host_executable_internal.mk
+++ b/core/host_executable_internal.mk
@@ -39,6 +39,21 @@
endif
my_libdir :=
+my_crtbegin :=
+my_crtend :=
+my_libcrt_builtins :=
+ifdef USE_HOST_MUSL
+ my_crtbegin := $(SOONG_$(LOCAL_2ND_ARCH_VAR_PREFIX)HOST_OBJECT_libc_musl_crtbegin_dynamic)
+ my_crtend := $(SOONG_$(LOCAL_2ND_ARCH_VAR_PREFIX)HOST_OBJECT_libc_musl_crtend)
+ my_libcrt_builtins := $($(LOCAL_2ND_ARCH_VAR_PREFIX)$(my_prefix)LIBCRT_BUILTINS)
+ $(LOCAL_BUILT_MODULE): PRIVATE_LDFLAGS += -Wl,--no-dynamic-linker
+endif
+
+$(LOCAL_BUILT_MODULE): PRIVATE_CRTBEGIN := $(my_crtbegin)
+$(LOCAL_BUILT_MODULE): PRIVATE_CRTEND := $(my_crtend)
+$(LOCAL_BUILT_MODULE): PRIVATE_LIBCRT_BUILTINS := $(my_libcrt_builtins)
+$(LOCAL_BUILT_MODULE): $(my_crtbegin) $(my_crtend) $(my_libcrt_builtins)
+
$(LOCAL_BUILT_MODULE): $(all_objects) $(all_libraries) $(CLANG_CXX)
$(transform-host-o-to-executable)
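
The PRIVATE_* assignments above are Make target-specific variables, so each linked module carries its own CRT objects into the link recipe. A generic, hypothetical sketch of the pattern:

```
# Target-specific variables: the values are visible only while building this
# target, so modules built in parallel can use different CRT files.
out/hello: PRIVATE_CRTBEGIN := prebuilts/musl/crtbegin_dynamic.o
out/hello: PRIVATE_CRTEND := prebuilts/musl/crtend.o
out/hello: hello.o prebuilts/musl/crtbegin_dynamic.o prebuilts/musl/crtend.o
	$(CC) -nostartfiles $(PRIVATE_CRTBEGIN) hello.o $(PRIVATE_CRTEND) -o $@
```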
diff --git a/core/host_java_library.mk b/core/host_java_library.mk
index 0f95202..89aa53c 100644
--- a/core/host_java_library.mk
+++ b/core/host_java_library.mk
@@ -56,10 +56,6 @@
include $(BUILD_SYSTEM)/java_common.mk
-# The layers file allows you to enforce a layering between java packages.
-# Run build/make/tools/java-layers.py for more details.
-layers_file := $(addprefix $(LOCAL_PATH)/, $(LOCAL_JAVA_LAYERS_FILE))
-
# List of dependencies for anything that needs all java sources in place
java_sources_deps := \
$(java_sources) \
@@ -72,7 +68,6 @@
# TODO(b/143658984): goma can't handle the --system argument to javac.
#$(full_classes_compiled_jar): .KATI_NINJA_POOL := $(GOMA_POOL)
-$(full_classes_compiled_jar): PRIVATE_JAVA_LAYERS_FILE := $(layers_file)
$(full_classes_compiled_jar): PRIVATE_JAVACFLAGS := $(LOCAL_JAVACFLAGS) $(annotation_processor_flags)
$(full_classes_compiled_jar): PRIVATE_JAR_EXCLUDE_FILES :=
$(full_classes_compiled_jar): PRIVATE_JAR_PACKAGES :=
diff --git a/core/host_shared_library_internal.mk b/core/host_shared_library_internal.mk
index da20874..ae8b798 100644
--- a/core/host_shared_library_internal.mk
+++ b/core/host_shared_library_internal.mk
@@ -36,6 +36,17 @@
my_host_libprofile_rt := $($(LOCAL_2ND_ARCH_VAR_PREFIX)$(my_prefix)LIBPROFILE_RT)
$(LOCAL_BUILT_MODULE): PRIVATE_HOST_LIBPROFILE_RT := $(my_host_libprofile_rt)
+ifdef USE_HOST_MUSL
+ my_crtbegin := $(SOONG_$(LOCAL_2ND_ARCH_VAR_PREFIX)HOST_OBJECT_libc_musl_crtbegin_so)
+ my_crtend := $(SOONG_$(LOCAL_2ND_ARCH_VAR_PREFIX)HOST_OBJECT_libc_musl_crtend_so)
+ my_libcrt_builtins := $($(LOCAL_2ND_ARCH_VAR_PREFIX)$(my_prefix)LIBCRT_BUILTINS)
+endif
+
+$(LOCAL_BUILT_MODULE): PRIVATE_CRTBEGIN := $(my_crtbegin)
+$(LOCAL_BUILT_MODULE): PRIVATE_CRTEND := $(my_crtend)
+$(LOCAL_BUILT_MODULE): PRIVATE_LIBCRT_BUILTINS := $(my_libcrt_builtins)
+$(LOCAL_BUILT_MODULE): $(my_crtbegin) $(my_crtend) $(my_libcrt_builtins)
+
$(LOCAL_BUILT_MODULE): \
$(all_objects) \
$(all_libraries) \
diff --git a/core/install_jni_libs_internal.mk b/core/install_jni_libs_internal.mk
index 289d16f..5491247 100644
--- a/core/install_jni_libs_internal.mk
+++ b/core/install_jni_libs_internal.mk
@@ -5,6 +5,7 @@
# my_prebuilt_jni_libs
# my_installed_module_stem (from configure_module_stem.mk)
# partition_tag (from base_rules.mk)
+# partition_lib_pairs
# my_prebuilt_src_file (from prebuilt_internal.mk)
#
# Output variables:
@@ -66,13 +67,32 @@
ifeq ($(filter address,$(SANITIZE_TARGET)),)
my_symlink_target_dir := $(patsubst $(PRODUCT_OUT)%,%,\
$(my_shared_library_path))
- $(foreach lib,$(my_jni_filenames),\
- $(call symlink-file, \
- $(my_shared_library_path)/$(lib), \
- $(my_symlink_target_dir)/$(lib), \
- $(my_app_lib_path)/$(lib)) \
- $(eval $$(LOCAL_INSTALLED_MODULE) : $$(my_app_lib_path)/$$(lib)) \
- $(eval ALL_MODULES.$(my_register_name).INSTALLED += $$(my_app_lib_path)/$$(lib)))
+
+ ifdef partition_lib_pairs
+ # Support cross-partition JNI lib dependencies for bp modules.
+ # The API domain check is done in Soong.
+ $(foreach pl_pair,$(partition_lib_pairs),\
+ $(eval lib_name := $(call word-colon, 1, $(pl_pair)))\
+ $(eval lib_partition := $(call word-colon, 2, $(pl_pair)))\
+ $(eval shared_library_path := $(call get_non_asan_path,\
+ $($(my_2nd_arch_prefix)TARGET_OUT$(lib_partition)_SHARED_LIBRARIES)))\
+ $(call symlink-file,\
+ $(shared_library_path)/$(lib_name).so,\
+ $(my_symlink_target_dir)/$(lib_name).so,\
+ $(my_app_lib_path)/$(lib_name).so)\
+ $(eval $$(LOCAL_INSTALLED_MODULE) : $$(my_app_lib_path)/$$(lib_name).so)\
+ $(eval ALL_MODULES.$(my_register_name).INSTALLED += $$(my_app_lib_path)/$$(lib_name).so))
+
+ else
+ # Cross-partition JNI lib dependencies are currently not supported for mk modules.
+ $(foreach lib,$(my_jni_filenames),\
+ $(call symlink-file, \
+ $(my_shared_library_path)/$(lib), \
+ $(my_symlink_target_dir)/$(lib), \
+ $(my_app_lib_path)/$(lib)) \
+ $(eval $$(LOCAL_INSTALLED_MODULE) : $$(my_app_lib_path)/$$(lib)) \
+ $(eval ALL_MODULES.$(my_register_name).INSTALLED += $$(my_app_lib_path)/$$(lib)))
+ endif # partition_lib_pairs
endif
# Clear jni_shared_libraries to not embed it into the apk.
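
A hedged sketch of the pair handling above, with hypothetical pair values (the real `lib:partition` pairs come from Soong, and word-colon is the build system's ':'-splitting helper):

```
empty :=
space := $(empty) $(empty)
word-colon = $(word $(1),$(subst :,$(space),$(2)))

# Hypothetical pairs: <lib name>:<partition variable suffix>.
partition_lib_pairs := libfoo:_VENDOR libbar:_ODM
$(foreach pl_pair,$(partition_lib_pairs),\
  $(eval lib_name := $(call word-colon,1,$(pl_pair)))\
  $(eval lib_partition := $(call word-colon,2,$(pl_pair)))\
  $(info $(lib_name).so resolves via TARGET_OUT$(lib_partition)_SHARED_LIBRARIES))
# Prints:
#   libfoo.so resolves via TARGET_OUT_VENDOR_SHARED_LIBRARIES
#   libbar.so resolves via TARGET_OUT_ODM_SHARED_LIBRARIES
```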
diff --git a/core/instrumentation_test_config_template.xml b/core/instrumentation_test_config_template.xml
index 6ca964e..379126c 100644
--- a/core/instrumentation_test_config_template.xml
+++ b/core/instrumentation_test_config_template.xml
@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="utf-8"?>
-<!-- Copyright (C) 2017 The Android Open Source Project
+<!-- Copyright (C) 2023 The Android Open Source Project
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,7 +24,7 @@
</target_preparer>
<test class="com.android.tradefed.testtype.{TEST_TYPE}" >
- <option name="package" value="{PACKAGE}" />
+ {EXTRA_TEST_RUNNER_CONFIGS}<option name="package" value="{PACKAGE}" />
<option name="runner" value="{RUNNER}" />
</test>
</configuration>
diff --git a/core/java.mk b/core/java.mk
index a29f820..b13ef4d 100644
--- a/core/java.mk
+++ b/core/java.mk
@@ -200,10 +200,6 @@
$(eval $(call copy-one-file,$(full_classes_jar),$(full_classes_stubs_jar)))
ALL_MODULES.$(my_register_name).STUBS := $(full_classes_stubs_jar)
-# The layers file allows you to enforce a layering between java packages.
-# Run build/make/tools/java-layers.py for more details.
-layers_file := $(addprefix $(LOCAL_PATH)/, $(LOCAL_JAVA_LAYERS_FILE))
-$(full_classes_compiled_jar): PRIVATE_JAVA_LAYERS_FILE := $(layers_file)
$(full_classes_compiled_jar): PRIVATE_WARNINGS_ENABLE := $(LOCAL_WARNINGS_ENABLE)
# Compile the java files to a .jar file.
@@ -494,13 +490,13 @@
$(built_dex_intermediate): PRIVATE_EXTRA_INPUT_JAR := $(extra_input_jar)
$(built_dex_intermediate): PRIVATE_PROGUARD_FLAGS := $(legacy_proguard_flags) $(common_proguard_flags) $(LOCAL_PROGUARD_FLAGS)
$(built_dex_intermediate): PRIVATE_PROGUARD_DICTIONARY := $(proguard_dictionary)
- $(built_dex_intermediate) : $(full_classes_pre_proguard_jar) $(extra_input_jar) $(my_proguard_sdk_raise) $(common_proguard_flag_files) $(legacy_proguard_lib_deps) $(R8_COMPAT_PROGUARD) $(LOCAL_PROGUARD_FLAGS_DEPS)
+ $(built_dex_intermediate) : $(full_classes_pre_proguard_jar) $(extra_input_jar) $(my_proguard_sdk_raise) $(common_proguard_flag_files) $(legacy_proguard_lib_deps) $(R8) $(LOCAL_PROGUARD_FLAGS_DEPS)
$(transform-jar-to-dex-r8)
else # !LOCAL_PROGUARD_ENABLED
$(built_dex_intermediate): .KATI_NINJA_POOL := $(D8_NINJA_POOL)
$(built_dex_intermediate): PRIVATE_D8_LIBS := $(full_java_bootclasspath_libs) $(full_shared_java_header_libs)
$(built_dex_intermediate): $(full_java_bootclasspath_libs) $(full_shared_java_header_libs)
- $(built_dex_intermediate): $(full_classes_pre_proguard_jar) $(DX) $(ZIP2ZIP)
+ $(built_dex_intermediate): $(full_classes_pre_proguard_jar) $(D8) $(ZIP2ZIP)
$(transform-classes.jar-to-dex)
endif
diff --git a/core/java_common.mk b/core/java_common.mk
index 5981b60..0e03d0b 100644
--- a/core/java_common.mk
+++ b/core/java_common.mk
@@ -296,16 +296,16 @@
# Note: the lib naming scheme must be kept in sync with build/soong/java/sdk_library.go.
sdk_lib_suffix = $(call pretty-error,sdk_lib_suffix was not set correctly)
ifeq (current,$(LOCAL_SDK_VERSION))
- sdk_module := android_stubs_current
+ sdk_module := $(ANDROID_PUBLIC_STUBS)
sdk_lib_suffix := .stubs
else ifeq (system_current,$(LOCAL_SDK_VERSION))
- sdk_module := android_system_stubs_current
+ sdk_module := $(ANDROID_SYSTEM_STUBS)
sdk_lib_suffix := .stubs.system
else ifeq (test_current,$(LOCAL_SDK_VERSION))
- sdk_module := android_test_stubs_current
+ sdk_module := $(ANDROID_TEST_STUBS)
sdk_lib_suffix := .stubs.test
else ifeq (core_current,$(LOCAL_SDK_VERSION))
- sdk_module := core.current.stubs
+ sdk_module := $(ANDROID_CORE_STUBS)
sdk_lib_suffix = $(call pretty-error,LOCAL_SDK_LIBRARIES not supported for LOCAL_SDK_VERSION = core_current)
endif
sdk_libs := $(foreach lib_name,$(LOCAL_SDK_LIBRARIES),$(lib_name)$(sdk_lib_suffix))
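
For example, with hypothetical module settings, the mapping above resolves LOCAL_SDK_LIBRARIES entries to per-API-surface stub modules:

```
# Hypothetical module settings:
LOCAL_SDK_VERSION := system_current
LOCAL_SDK_LIBRARIES := foo bar
# The mapping selects sdk_lib_suffix := .stubs.system, so
#   sdk_libs = foo.stubs.system bar.stubs.system
# and the stubs module comes from $(ANDROID_SYSTEM_STUBS) instead of the
# previously hardcoded android_system_stubs_current.
```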
diff --git a/core/java_host_test_config_template.xml b/core/java_host_test_config_template.xml
index 26c1caf..e123dc7 100644
--- a/core/java_host_test_config_template.xml
+++ b/core/java_host_test_config_template.xml
@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="utf-8"?>
-<!-- Copyright (C) 2018 The Android Open Source Project
+<!-- Copyright (C) 2023 The Android Open Source Project
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -21,6 +21,6 @@
{EXTRA_CONFIGS}
<test class="com.android.tradefed.testtype.HostTest" >
- <option name="jar" value="{MODULE}.jar" />
+ {EXTRA_TEST_RUNNER_CONFIGS}<option name="jar" value="{MODULE}.jar" />
</test>
</configuration>
diff --git a/core/java_host_unit_test_config_template.xml b/core/java_host_unit_test_config_template.xml
index d8795f9..5d8b254 100644
--- a/core/java_host_unit_test_config_template.xml
+++ b/core/java_host_unit_test_config_template.xml
@@ -23,5 +23,14 @@
<test class="com.android.tradefed.testtype.IsolatedHostTest" >
<option name="jar" value="{MODULE}.jar" />
+ <option name="java-flags" value="--add-modules=jdk.compiler"/>
+ <option name="java-flags" value="--add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED"/>
+ <option name="java-flags" value="--add-exports=jdk.compiler/com.sun.tools.javac.code=ALL-UNNAMED"/>
+ <option name="java-flags" value="--add-exports=jdk.compiler/com.sun.tools.javac.comp=ALL-UNNAMED"/>
+ <option name="java-flags" value="--add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED"/>
+ <option name="java-flags" value="--add-exports=jdk.compiler/com.sun.tools.javac.main=ALL-UNNAMED"/>
+ <option name="java-flags" value="--add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED"/>
+ <option name="java-flags" value="--add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED"/>
+ <option name="java-flags" value="--add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED"/>
</test>
</configuration>
diff --git a/core/main.mk b/core/main.mk
index c63c6df..04b6b33 100644
--- a/core/main.mk
+++ b/core/main.mk
@@ -72,8 +72,6 @@
# CTS-specific config.
-include cts/build/config.mk
-# VTS-specific config.
--include test/vts/tools/vts-tradefed/build/config.mk
# device-tests-specific-config.
-include tools/tradefederation/build/suites/device-tests/config.mk
# general-tests-specific-config.
@@ -92,6 +90,9 @@
-include test/catbox/tools/build/config.mk
# CTS-Root-specific config.
-include test/cts-root/tools/build/config.mk
+# WVTS-specific config.
+-include test/wvts/tools/build/config.mk
+
# Clean rules
.PHONY: clean-dex-files
@@ -346,6 +347,10 @@
ADDITIONAL_PRODUCT_PROPERTIES += ro.product.ab_ota_partitions=$(subst $(space),$(comma),$(sort $(AB_OTA_PARTITIONS)))
endif
+# Set this property for VTS to skip large page size tests on unsupported devices.
+ADDITIONAL_PRODUCT_PROPERTIES += \
+ ro.product.cpu.pagesize.max=$(TARGET_MAX_PAGE_SIZE_SUPPORTED)
+
# -----------------------------------------------------------------
###
### In this section we set up the things that are different
@@ -460,6 +465,9 @@
ADDITIONAL_SYSTEM_PROPERTIES += net.bt.name=Android
+# This property is set by flashing a debug boot image, so it defaults to false.
+ADDITIONAL_SYSTEM_PROPERTIES += ro.force.debuggable=0
+
# ------------------------------------------------------------
# Define a function that, given a list of module tags, returns
# non-empty if that module should be installed in /system.
@@ -490,6 +498,8 @@
# Typical build; include any Android.mk files we can find.
#
+include $(BUILD_SYSTEM)/art_config.mk
+
# Bring in dex_preopt.mk
# This creates some modules so it needs to be included after
# should-install-to-system is defined (in order for base_rules.mk to function
@@ -756,6 +766,9 @@
$(info $(word 1,$(r)) module $(word 2,$(r)) requires non-existent $(word 3,$(r)) module: $(word 4,$(r))) \
)
$(warning Set BUILD_BROKEN_MISSING_REQUIRED_MODULES := true to bypass this check if this is intentional)
+ ifneq (,$(PRODUCT_SOURCE_ROOT_DIRS))
+ $(warning PRODUCT_SOURCE_ROOT_DIRS is non-empty. Some necessary modules may have been skipped by Soong)
+ endif
$(error Build failed)
endif # _nonexistent_required != empty
endif # check_missing_required_modules == true
@@ -931,11 +944,11 @@
$(eval my_deps := $(call get-all-shared-libs-deps,$(m)))\
$(foreach dep,$(my_deps),\
$(foreach f,$(ALL_MODULES.$(dep).HOST_SHARED_LIBRARY_FILES),\
- $(if $(filter $(suite),device-tests general-tests),\
+ $(if $(filter $(suite),device-tests general-tests art-host-tests host-unit-tests),\
$(eval my_testcases := $(HOST_OUT_TESTCASES)),\
$(eval my_testcases := $$(COMPATIBILITY_TESTCASES_OUT_$(suite))))\
$(eval target := $(my_testcases)/$(lastword $(subst /, ,$(dir $(f))))/$(notdir $(f)))\
- $(if $(strip $(ALL_TARGETS.$(target).META_LIC)),,$(eval ALL_TARGETS.$(target).META_LIC:=$(module_license_metadata)))\
+ $(if $(strip $(ALL_TARGETS.$(target).META_LIC)),,$(call declare-copy-target-license-metadata,$(target),$(f)))\
$(eval COMPATIBILITY.$(suite).HOST_SHARED_LIBRARY.FILES := \
$$(COMPATIBILITY.$(suite).HOST_SHARED_LIBRARY.FILES) $(f):$(target))\
$(eval COMPATIBILITY.$(suite).HOST_SHARED_LIBRARY.FILES := \
@@ -1243,6 +1256,7 @@
$(if $(filter tests,$(tags_to_install)),$(call get-product-var,$(1),PRODUCT_PACKAGES_TESTS)) \
$(if $(filter asan,$(tags_to_install)),$(call get-product-var,$(1),PRODUCT_PACKAGES_DEBUG_ASAN)) \
$(if $(filter java_coverage,$(tags_to_install)),$(call get-product-var,$(1),PRODUCT_PACKAGES_DEBUG_JAVA_COVERAGE)) \
+ $(if $(filter arm64,$(TARGET_ARCH) $(TARGET_2ND_ARCH)),$(call get-product-var,$(1),PRODUCT_PACKAGES_ARM64)) \
$(call auto-included-modules) \
) \
$(eval ### Filter out the overridden packages and executables before doing expansion) \
@@ -1337,6 +1351,13 @@
$(if $(ALL_MODULES.$(m).INSTALLED),\
$(if $(filter-out $(HOST_OUT_ROOT)/%,$(ALL_MODULES.$(m).INSTALLED)),,\
$(m))))
+ ifeq ($(TARGET_ARCH),riscv64)
+ # HACK: riscv64 can't build the device version of bcc and ld.mc due to a
+ # dependency on an old version of LLVM, but they are listed in
+ # base_system.mk which can't add them conditionally based on the target
+ # architecture.
+ _host_modules := $(filter-out bcc ld.mc,$(_host_modules))
+ endif
$(call maybe-print-list-and-error,$(sort $(_host_modules)),\
Host modules should be in PRODUCT_HOST_PACKAGES$(comma) not PRODUCT_PACKAGES)
endif
@@ -1369,29 +1390,6 @@
$(CUSTOM_MODULES) \
)
-ifdef FULL_BUILD
-#
-# Used by the cleanup logic in soong_ui to remove files that should no longer
-# be installed.
-#
-
-# Include all tests, so that we remove them from the test suites / testcase
-# folders when they are removed.
-test_files := $(foreach ts,$(ALL_COMPATIBILITY_SUITES),$(COMPATIBILITY.$(ts).FILES))
-
-$(shell mkdir -p $(PRODUCT_OUT) $(HOST_OUT))
-
-$(file >$(PRODUCT_OUT)/.installable_files$(if $(filter address,$(SANITIZE_TARGET)),_asan), \
- $(sort $(patsubst $(PRODUCT_OUT)/%,%,$(filter $(PRODUCT_OUT)/%, \
- $(modules_to_install) $(test_files)))))
-
-$(file >$(HOST_OUT)/.installable_test_files,$(sort \
- $(patsubst $(HOST_OUT)/%,%,$(filter $(HOST_OUT)/%, \
- $(test_files)))))
-
-test_files :=
-endif
-
# Deduplicate compatibility suite dist files across modules and packages before
# copying them to their requested locations. Assign the eval result to an unused
# var to prevent Make from trying to make a sense of it.
@@ -1450,6 +1448,28 @@
modules_to_install := $(sort $(ALL_DEFAULT_INSTALLED_MODULES))
ALL_DEFAULT_INSTALLED_MODULES :=
+ifdef FULL_BUILD
+#
+# Used by the cleanup logic in soong_ui to remove files that should no longer
+# be installed.
+#
+
+# Include all tests, so that we remove them from the test suites / testcase
+# folders when they are removed.
+test_files := $(foreach ts,$(ALL_COMPATIBILITY_SUITES),$(COMPATIBILITY.$(ts).FILES))
+
+$(shell mkdir -p $(PRODUCT_OUT) $(HOST_OUT))
+
+$(file >$(PRODUCT_OUT)/.installable_files$(if $(filter address,$(SANITIZE_TARGET)),_asan), \
+ $(sort $(patsubst $(PRODUCT_OUT)/%,%,$(filter $(PRODUCT_OUT)/%, \
+ $(modules_to_install) $(test_files)))))
+
+$(file >$(HOST_OUT)/.installable_test_files,$(sort \
+ $(patsubst $(HOST_OUT)/%,%,$(filter $(HOST_OUT)/%, \
+ $(test_files)))))
+
+test_files :=
+endif
# Some notice deps refer to module names without prefix or arch suffix where
# only the variants with them get built.
@@ -1597,6 +1617,9 @@
.PHONY: vbmetavendorimage
vbmetavendorimage: $(INSTALLED_VBMETA_VENDORIMAGE_TARGET)
+.PHONY: vbmetacustomimages
+vbmetacustomimages: $(foreach partition,$(call to-upper,$(BOARD_AVB_VBMETA_CUSTOM_PARTITIONS)),$(INSTALLED_VBMETA_$(partition)IMAGE_TARGET))
+
# The droidcore-unbundled target depends on the subset of targets necessary to
# perform a full system build (either unbundled or not).
.PHONY: droidcore-unbundled
@@ -1843,30 +1866,28 @@
$(INSTALLED_FILES_JSON_ROOT) \
)
- ifneq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
- $(call dist-for-goals, droidcore-unbundled, \
- $(INSTALLED_FILES_FILE_RAMDISK) \
- $(INSTALLED_FILES_JSON_RAMDISK) \
- $(INSTALLED_FILES_FILE_DEBUG_RAMDISK) \
- $(INSTALLED_FILES_JSON_DEBUG_RAMDISK) \
- $(INSTALLED_FILES_FILE_VENDOR_RAMDISK) \
- $(INSTALLED_FILES_JSON_VENDOR_RAMDISK) \
- $(INSTALLED_FILES_FILE_VENDOR_KERNEL_RAMDISK) \
- $(INSTALLED_FILES_JSON_VENDOR_KERNEL_RAMDISK) \
- $(INSTALLED_FILES_FILE_VENDOR_DEBUG_RAMDISK) \
- $(INSTALLED_FILES_JSON_VENDOR_DEBUG_RAMDISK) \
- $(INSTALLED_DEBUG_RAMDISK_TARGET) \
- $(INSTALLED_DEBUG_BOOTIMAGE_TARGET) \
- $(INSTALLED_TEST_HARNESS_RAMDISK_TARGET) \
- $(INSTALLED_TEST_HARNESS_BOOTIMAGE_TARGET) \
- $(INSTALLED_VENDOR_DEBUG_BOOTIMAGE_TARGET) \
- $(INSTALLED_VENDOR_TEST_HARNESS_RAMDISK_TARGET) \
- $(INSTALLED_VENDOR_TEST_HARNESS_BOOTIMAGE_TARGET) \
- $(INSTALLED_VENDOR_RAMDISK_TARGET) \
- $(INSTALLED_VENDOR_DEBUG_RAMDISK_TARGET) \
- $(INSTALLED_VENDOR_KERNEL_RAMDISK_TARGET) \
- )
- endif
+ $(call dist-for-goals, droidcore-unbundled, \
+ $(INSTALLED_FILES_FILE_RAMDISK) \
+ $(INSTALLED_FILES_JSON_RAMDISK) \
+ $(INSTALLED_FILES_FILE_DEBUG_RAMDISK) \
+ $(INSTALLED_FILES_JSON_DEBUG_RAMDISK) \
+ $(INSTALLED_FILES_FILE_VENDOR_RAMDISK) \
+ $(INSTALLED_FILES_JSON_VENDOR_RAMDISK) \
+ $(INSTALLED_FILES_FILE_VENDOR_KERNEL_RAMDISK) \
+ $(INSTALLED_FILES_JSON_VENDOR_KERNEL_RAMDISK) \
+ $(INSTALLED_FILES_FILE_VENDOR_DEBUG_RAMDISK) \
+ $(INSTALLED_FILES_JSON_VENDOR_DEBUG_RAMDISK) \
+ $(INSTALLED_DEBUG_RAMDISK_TARGET) \
+ $(INSTALLED_DEBUG_BOOTIMAGE_TARGET) \
+ $(INSTALLED_TEST_HARNESS_RAMDISK_TARGET) \
+ $(INSTALLED_TEST_HARNESS_BOOTIMAGE_TARGET) \
+ $(INSTALLED_VENDOR_DEBUG_BOOTIMAGE_TARGET) \
+ $(INSTALLED_VENDOR_TEST_HARNESS_RAMDISK_TARGET) \
+ $(INSTALLED_VENDOR_TEST_HARNESS_BOOTIMAGE_TARGET) \
+ $(INSTALLED_VENDOR_RAMDISK_TARGET) \
+ $(INSTALLED_VENDOR_DEBUG_RAMDISK_TARGET) \
+ $(INSTALLED_VENDOR_KERNEL_RAMDISK_TARGET) \
+ )
ifeq ($(PRODUCT_EXPORT_BOOT_IMAGE_TO_DIST),true)
$(call dist-for-goals, droidcore-unbundled, $(INSTALLED_BOOTIMAGE_TARGET))
@@ -1883,11 +1904,11 @@
endif
# Put XML formatted API files in the dist dir.
- $(TARGET_OUT_COMMON_INTERMEDIATES)/api.xml: $(call java-lib-files,android_stubs_current) $(APICHECK)
- $(TARGET_OUT_COMMON_INTERMEDIATES)/system-api.xml: $(call java-lib-files,android_system_stubs_current) $(APICHECK)
- $(TARGET_OUT_COMMON_INTERMEDIATES)/module-lib-api.xml: $(call java-lib-files,android_module_lib_stubs_current) $(APICHECK)
- $(TARGET_OUT_COMMON_INTERMEDIATES)/system-server-api.xml: $(call java-lib-files,android_system_server_stubs_current) $(APICHECK)
- $(TARGET_OUT_COMMON_INTERMEDIATES)/test-api.xml: $(call java-lib-files,android_test_stubs_current) $(APICHECK)
+ $(TARGET_OUT_COMMON_INTERMEDIATES)/api.xml: $(call java-lib-files,$(ANDROID_PUBLIC_STUBS)) $(APICHECK)
+ $(TARGET_OUT_COMMON_INTERMEDIATES)/system-api.xml: $(call java-lib-files,$(ANDROID_SYSTEM_STUBS)) $(APICHECK)
+ $(TARGET_OUT_COMMON_INTERMEDIATES)/module-lib-api.xml: $(call java-lib-files,$(ANDROID_MODULE_LIB_STUBS)) $(APICHECK)
+ $(TARGET_OUT_COMMON_INTERMEDIATES)/system-server-api.xml: $(call java-lib-files,$(ANDROID_SYSTEM_SERVER_STUBS)) $(APICHECK)
+ $(TARGET_OUT_COMMON_INTERMEDIATES)/test-api.xml: $(call java-lib-files,$(ANDROID_TEST_STUBS)) $(APICHECK)
api_xmls := $(addprefix $(TARGET_OUT_COMMON_INTERMEDIATES)/,api.xml system-api.xml module-lib-api.xml system-server-api.xml test-api.xml)
$(api_xmls):
@@ -2010,6 +2031,243 @@
# missing dependency errors.
$(call build-license-metadata)
+# Generate SBOM in SPDX format
+product_copy_files_without_owner := $(foreach pcf,$(PRODUCT_COPY_FILES),$(call word-colon,1,$(pcf)):$(call word-colon,2,$(pcf)))
+ifeq ($(TARGET_BUILD_APPS),)
+dest_files_without_source := $(sort $(foreach pcf,$(product_copy_files_without_owner),$(if $(wildcard $(call word-colon,1,$(pcf))),,$(call word-colon,2,$(pcf)))))
+dest_files_without_source := $(addprefix $(PRODUCT_OUT)/,$(dest_files_without_source))
+filter_out_files := \
+ $(PRODUCT_OUT)/apex/% \
+ $(PRODUCT_OUT)/fake_packages/% \
+ $(PRODUCT_OUT)/testcases/% \
+ $(dest_files_without_source)
+# Check whether each partition image is built; if not, filter out all of its installed files.
+# Also check whether a partition uses a prebuilt image file, and save that info if so.
+PREBUILT_PARTITION_COPY_FILES :=
+# product.img
+ifndef BUILDING_PRODUCT_IMAGE
+filter_out_files += $(PRODUCT_OUT)/product/%
+ifdef BOARD_PREBUILT_PRODUCTIMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_PRODUCTIMAGE):$(INSTALLED_PRODUCTIMAGE_TARGET)
+endif
+endif
+
+# system.img
+ifndef BUILDING_SYSTEM_IMAGE
+filter_out_files += $(PRODUCT_OUT)/system/%
+endif
+# system_dlkm.img
+ifndef BUILDING_SYSTEM_DLKM_IMAGE
+filter_out_files += $(PRODUCT_OUT)/system_dlkm/%
+ifdef BOARD_PREBUILT_SYSTEM_DLKMIMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_SYSTEM_DLKMIMAGE):$(INSTALLED_SYSTEM_DLKMIMAGE_TARGET)
+endif
+endif
+# system_ext.img
+ifndef BUILDING_SYSTEM_EXT_IMAGE
+filter_out_files += $(PRODUCT_OUT)/system_ext/%
+ifdef BOARD_PREBUILT_SYSTEM_EXTIMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_SYSTEM_EXTIMAGE):$(INSTALLED_SYSTEM_EXTIMAGE_TARGET)
+endif
+endif
+# system_other.img
+ifndef BUILDING_SYSTEM_OTHER_IMAGE
+filter_out_files += $(PRODUCT_OUT)/system_other/%
+endif
+
+# odm.img
+ifndef BUILDING_ODM_IMAGE
+filter_out_files += $(PRODUCT_OUT)/odm/%
+ifdef BOARD_PREBUILT_ODMIMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_ODMIMAGE):$(INSTALLED_ODMIMAGE_TARGET)
+endif
+endif
+# odm_dlkm.img
+ifndef BUILDING_ODM_DLKM_IMAGE
+filter_out_files += $(PRODUCT_OUT)/odm_dlkm/%
+ifdef BOARD_PREBUILT_ODM_DLKMIMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_ODM_DLKMIMAGE):$(INSTALLED_ODM_DLKMIMAGE_TARGET)
+endif
+endif
+
+# vendor.img
+ifndef BUILDING_VENDOR_IMAGE
+filter_out_files += $(PRODUCT_OUT)/vendor/%
+ifdef BOARD_PREBUILT_VENDORIMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_VENDORIMAGE):$(INSTALLED_VENDORIMAGE_TARGET)
+endif
+endif
+# vendor_dlkm.img
+ifndef BUILDING_VENDOR_DLKM_IMAGE
+filter_out_files += $(PRODUCT_OUT)/vendor_dlkm/%
+ifdef BOARD_PREBUILT_VENDOR_DLKMIMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_VENDOR_DLKMIMAGE):$(INSTALLED_VENDOR_DLKMIMAGE_TARGET)
+endif
+endif
+
+# cache.img
+ifndef BUILDING_CACHE_IMAGE
+filter_out_files += $(PRODUCT_OUT)/cache/%
+endif
+
+# boot.img
+ifndef BUILDING_BOOT_IMAGE
+ifdef BOARD_PREBUILT_BOOTIMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_BOOTIMAGE):$(INSTALLED_BOOTIMAGE_TARGET)
+endif
+endif
+# init_boot.img
+ifndef BUILDING_INIT_BOOT_IMAGE
+ifdef BOARD_PREBUILT_INIT_BOOT_IMAGE
+PREBUILT_PARTITION_COPY_FILES += $(BOARD_PREBUILT_INIT_BOOT_IMAGE):$(INSTALLED_INIT_BOOT_IMAGE_TARGET)
+endif
+endif
+
+# ramdisk.img
+ifndef BUILDING_RAMDISK_IMAGE
+filter_out_files += $(PRODUCT_OUT)/ramdisk/%
+endif
+
+# recovery.img
+ifndef INSTALLED_RECOVERYIMAGE_TARGET
+filter_out_files += $(PRODUCT_OUT)/recovery/%
+endif
+
+installed_files := $(sort $(filter-out $(filter_out_files),$(filter $(PRODUCT_OUT)/%,$(modules_to_install))))
+else
+installed_files := $(apps_only_installed_files)
+endif # TARGET_BUILD_APPS
+
+# sbom-metadata.csv contains all raw data collected in Make for generating SBOM in generate-sbom.py.
+# There are multiple columns and each identifies the source of an installed file for a specific case.
+# The columns and their uses are described below:
+# installed_file: the file path on device, e.g. /product/app/Browser2/Browser2.apk
+# module_path: the path of the module that generates the installed file, e.g. packages/apps/Browser2
+# soong_module_type: Soong module type, e.g. android_app, cc_binary
+# is_prebuilt_make_module: Y if the installed file is from a prebuilt Make module, see prebuilt_internal.mk
+# product_copy_files: the installed file is from variable PRODUCT_COPY_FILES, e.g. device/google/cuttlefish/shared/config/init.product.rc:product/etc/init/init.rc
+# kernel_module_copy_files: the installed file is from variable KERNEL_MODULE_COPY_FILES, similar to product_copy_files
+# is_platform_generated: an aggregated value covering several small cases instead of adding more columns; it is set to Y if any of the following cases is Y
+# is_build_prop: build.prop in each partition, see sysprop.mk.
+# is_notice_file: NOTICE.xml.gz in each partition, see Makefile.
+# is_dexpreopt_image_profile: see the usage of DEXPREOPT_IMAGE_PROFILE_BUILT_INSTALLED in Soong and Make
+# is_product_system_other_avbkey: see INSTALLED_PRODUCT_SYSTEM_OTHER_AVBKEY_TARGET
+# is_system_other_odex_marker: see INSTALLED_SYSTEM_OTHER_ODEX_MARKER
+# is_event_log_tags_file: see variable event_log_tags_file in Makefile
+# is_kernel_modules_blocklist: modules.blocklist created for _dlkm partitions, see macro build-image-kernel-modules-dir in Makefile.
+# is_fsverity_build_manifest_apk: BuildManifest<part>.apk files for system and system_ext partition, see ALL_FSVERITY_BUILD_MANIFEST_APK in Makefile.
+# is_linker_config: see SYSTEM_LINKER_CONFIG and vendor_linker_config_file in Makefile.
+# build_output_path: the path of the built file, used to calculate checksum
+# static_libraries/whole_static_libraries: list of module names of the static libraries the file links against, e.g. libclang_rt.builtins or libclang_rt.builtins_32
+# Info about the static libraries of all installed files is collected in the variable _all_static_libs, which is used to list all the static library files in sbom-metadata.csv.
+# See the second foreach loop in the sbom-metadata.csv rule for the details of the static library info collected in _all_static_libs.
+# is_static_lib: whether the file is a static library
+
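
To make the column list concrete, a hypothetical row for an app installed on the product partition might look like this (all values illustrative only):

```
installed_file,module_path,soong_module_type,is_prebuilt_make_module,product_copy_files,kernel_module_copy_files,is_platform_generated,build_output_path,static_libraries,whole_static_libraries,is_static_lib
/product/app/Browser2/Browser2.apk,packages/apps/Browser2,android_app,,,,,out/target/product/generic/product/app/Browser2/Browser2.apk,,,
```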
+metadata_list := $(OUT_DIR)/.module_paths/METADATA.list
+metadata_files := $(subst $(newline),$(space),$(file <$(metadata_list)))
+# TODO(b/272358583): find another way of always rebuilding this target.
+# Remove sbom-metadata.csv whenever the makefile is evaluated.
+$(shell rm $(PRODUCT_OUT)/sbom-metadata.csv >/dev/null 2>&1)
+$(PRODUCT_OUT)/sbom-metadata.csv: $(installed_files) $(metadata_list) $(metadata_files)
+ rm -f $@
+ echo installed_file,module_path,soong_module_type,is_prebuilt_make_module,product_copy_files,kernel_module_copy_files,is_platform_generated,build_output_path,static_libraries,whole_static_libraries,is_static_lib >> $@
+ $(eval _all_static_libs :=)
+ $(foreach f,$(installed_files),\
+ $(eval _module_name := $(ALL_INSTALLED_FILES.$f)) \
+ $(eval _path_on_device := $(patsubst $(PRODUCT_OUT)/%,%,$f)) \
+ $(eval _build_output_path := $(PRODUCT_OUT)/$(_path_on_device)) \
+ $(eval _module_path := $(strip $(sort $(ALL_MODULES.$(_module_name).PATH)))) \
+ $(eval _soong_module_type := $(strip $(sort $(ALL_MODULES.$(_module_name).SOONG_MODULE_TYPE)))) \
+ $(eval _is_prebuilt_make_module := $(ALL_MODULES.$(_module_name).IS_PREBUILT_MAKE_MODULE)) \
+ $(eval _post_installed_dexpreopt_zip := $(DEXPREOPT.$(_module_name).POST_INSTALLED_DEXPREOPT_ZIP)) \
+ $(eval _product_copy_files := $(sort $(filter %:$(_path_on_device),$(product_copy_files_without_owner)))) \
+ $(eval _kernel_module_copy_files := $(sort $(filter %$(_path_on_device),$(KERNEL_MODULE_COPY_FILES)))) \
+ $(eval _is_build_prop := $(call is-build-prop,$f)) \
+ $(eval _is_notice_file := $(call is-notice-file,$f)) \
+ $(eval _is_dexpreopt_image_profile := $(if $(filter %:/$(_path_on_device),$(DEXPREOPT_IMAGE_PROFILE_BUILT_INSTALLED)),Y)) \
+ $(eval _is_product_system_other_avbkey := $(if $(findstring $f,$(INSTALLED_PRODUCT_SYSTEM_OTHER_AVBKEY_TARGET)),Y)) \
+ $(eval _is_event_log_tags_file := $(if $(findstring $f,$(event_log_tags_file)),Y)) \
+ $(eval _is_system_other_odex_marker := $(if $(findstring $f,$(INSTALLED_SYSTEM_OTHER_ODEX_MARKER)),Y)) \
+ $(eval _is_kernel_modules_blocklist := $(if $(findstring $f,$(ALL_KERNEL_MODULES_BLOCKLIST)),Y)) \
+ $(eval _is_fsverity_build_manifest_apk := $(if $(findstring $f,$(ALL_FSVERITY_BUILD_MANIFEST_APK)),Y)) \
+ $(eval _is_linker_config := $(if $(findstring $f,$(SYSTEM_LINKER_CONFIG) $(vendor_linker_config_file)),Y)) \
+ $(eval _is_partition_compat_symlink := $(if $(findstring $f,$(PARTITION_COMPAT_SYMLINKS)),Y)) \
+ $(eval _is_platform_generated := $(_is_build_prop)$(_is_notice_file)$(_is_dexpreopt_image_profile)$(_is_product_system_other_avbkey)$(_is_event_log_tags_file)$(_is_system_other_odex_marker)$(_is_kernel_modules_blocklist)$(_is_fsverity_build_manifest_apk)$(_is_linker_config)$(_is_partition_compat_symlink)) \
+ $(eval _static_libs := $(ALL_INSTALLED_FILES.$f.STATIC_LIBRARIES)) \
+ $(eval _whole_static_libs := $(ALL_INSTALLED_FILES.$f.WHOLE_STATIC_LIBRARIES)) \
+ $(foreach l,$(_static_libs),$(eval _all_static_libs += $l:$(strip $(sort $(ALL_MODULES.$l.PATH))):$(strip $(sort $(ALL_MODULES.$l.SOONG_MODULE_TYPE))):$(ALL_STATIC_LIBRARIES.$l.BUILT_FILE))) \
+ $(foreach l,$(_whole_static_libs),$(eval _all_static_libs += $l:$(strip $(sort $(ALL_MODULES.$l.PATH))):$(strip $(sort $(ALL_MODULES.$l.SOONG_MODULE_TYPE))):$(ALL_STATIC_LIBRARIES.$l.BUILT_FILE))) \
+ echo /$(_path_on_device),$(_module_path),$(_soong_module_type),$(_is_prebuilt_make_module),$(_product_copy_files),$(_kernel_module_copy_files),$(_is_platform_generated),$(_build_output_path),$(_static_libs),$(_whole_static_libs), >> $@; \
+ $(if $(_post_installed_dexpreopt_zip), \
+ for i in $$(zipinfo -1 $(_post_installed_dexpreopt_zip)); do echo /$$i$(comma)$(_module_path)$(comma)$(_soong_module_type)$(comma)$(_is_prebuilt_make_module)$(comma)$(_product_copy_files)$(comma)$(_kernel_module_copy_files)$(comma)$(_is_platform_generated)$(comma)$(PRODUCT_OUT)/$$i$(comma)$(_static_libs)$(comma)$(_whole_static_libs)$(comma) >> $@ ; done ; \
+ ) \
+ )
+ $(foreach l,$(sort $(_all_static_libs)), \
+ $(eval _lib_stem := $(call word-colon,1,$l)) \
+ $(eval _module_path := $(call word-colon,2,$l)) \
+ $(eval _soong_module_type := $(call word-colon,3,$l)) \
+ $(eval _built_file := $(call word-colon,4,$l)) \
+ $(eval _static_libs := $(ALL_STATIC_LIBRARIES.$l.STATIC_LIBRARIES)) \
+ $(eval _whole_static_libs := $(ALL_STATIC_LIBRARIES.$l.WHOLE_STATIC_LIBRARIES)) \
+ $(eval _is_static_lib := Y) \
+ echo $(_lib_stem).a,$(_module_path),$(_soong_module_type),,,,,$(_built_file),$(_static_libs),$(_whole_static_libs),$(_is_static_lib) >> $@; \
+ )
+
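+# Usage sketch: building the `sbom` goal below (e.g. `m sbom`, assuming the
+# standard build wrapper) produces $(PRODUCT_OUT)/sbom.spdx.json for platform
+# builds, or one SBOM per unbundled APK/APEX for TARGET_BUILD_APPS builds.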
+.PHONY: sbom
+ifeq ($(TARGET_BUILD_APPS),)
+sbom: $(PRODUCT_OUT)/sbom.spdx.json
+$(PRODUCT_OUT)/sbom.spdx.json: $(PRODUCT_OUT)/sbom.spdx
+$(PRODUCT_OUT)/sbom.spdx: $(PRODUCT_OUT)/sbom-metadata.csv $(GEN_SBOM)
+ rm -rf $@
+ $(GEN_SBOM) --output_file $@ --metadata $(PRODUCT_OUT)/sbom-metadata.csv --build_version $(BUILD_FINGERPRINT_FROM_FILE) --product_mfr "$(PRODUCT_MANUFACTURER)" --json
+
+$(call dist-for-goals,droid,$(PRODUCT_OUT)/sbom.spdx.json:sbom/sbom.spdx.json)
+else
+# Create build rules for generating SBOMs of unbundled APKs and APEXs
+# $1: sbom file
+# $2: sbom fragment file
+# $3: installed file
+# $4: sbom-metadata.csv file
+define generate-app-sbom
+$(eval _path_on_device := $(patsubst $(PRODUCT_OUT)/%,%,$(3)))
+$(eval _module_name := $(ALL_INSTALLED_FILES.$(3)))
+$(eval _module_path := $(strip $(sort $(ALL_MODULES.$(_module_name).PATH))))
+$(eval _soong_module_type := $(strip $(sort $(ALL_MODULES.$(_module_name).SOONG_MODULE_TYPE))))
+$(eval _dep_modules := $(filter %.$(_module_name),$(ALL_MODULES)) $(filter %.$(_module_name)$(TARGET_2ND_ARCH_MODULE_SUFFIX),$(ALL_MODULES)))
+$(eval _is_apex := $(filter %.apex,$(3)))
+
+$(4): $(3) $(metadata_list) $(metadata_files)
+ rm -rf $$@
+ echo installed_file,module_path,soong_module_type,is_prebuilt_make_module,product_copy_files,kernel_module_copy_files,is_platform_generated,build_output_path,static_libraries,whole_static_libraries,is_static_lib >> $$@
+ echo /$(_path_on_device),$(_module_path),$(_soong_module_type),,,,,$(3),,, >> $$@
+ $(if $(filter %.apex,$(3)),\
+ $(foreach m,$(_dep_modules),\
+ echo $(patsubst $(PRODUCT_OUT)/apex/$(_module_name)/%,%,$(ALL_MODULES.$m.INSTALLED)),$(sort $(ALL_MODULES.$m.PATH)),$(sort $(ALL_MODULES.$m.SOONG_MODULE_TYPE)),,,,,$(strip $(ALL_MODULES.$m.BUILT)),,, >> $$@;))
+
+$(2): $(1)
+$(1): $(4) $(GEN_SBOM)
+ rm -rf $$@
+ $(GEN_SBOM) --output_file $$@ --metadata $(4) --build_version $$(BUILD_FINGERPRINT_FROM_FILE) --product_mfr "$(PRODUCT_MANUFACTURER)" --json $(if $(filter %.apk,$(3)),--unbundled_apk,--unbundled_apex)
+endef
+
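+# For illustration (file names hypothetical): for an unbundled app
+# $(PRODUCT_OUT)/Browser2.apk, the loop below generates Browser2.apk.spdx.json
+# (the SBOM), Browser2.apk-fragment.spdx and Browser2.apk-sbom-metadata.csv.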
+apps_only_sbom_files :=
+apps_only_fragment_files :=
+$(foreach f,$(filter %.apk %.apex,$(installed_files)), \
+ $(eval _metadata_csv_file := $(patsubst %,%-sbom-metadata.csv,$f)) \
+ $(eval _sbom_file := $(patsubst %,%.spdx.json,$f)) \
+ $(eval _fragment_file := $(patsubst %,%-fragment.spdx,$f)) \
+ $(eval apps_only_sbom_files += $(_sbom_file)) \
+ $(eval apps_only_fragment_files += $(_fragment_file)) \
+ $(eval $(call generate-app-sbom,$(_sbom_file),$(_fragment_file),$f,$(_metadata_csv_file))) \
+)
+
+sbom: $(apps_only_sbom_files)
+
+$(foreach f,$(apps_only_fragment_files),$(eval apps_only_fragment_dist_files += :sbom/$(notdir $f)))
+$(foreach f,$(apps_only_sbom_files),$(eval apps_only_sbom_dist_files += :sbom/$(notdir $f)))
+$(call dist-for-goals,apps_only,$(join $(apps_only_sbom_files),$(apps_only_sbom_dist_files)) $(join $(apps_only_fragment_files),$(apps_only_fragment_dist_files)))
+endif
+
$(call dist-write-file,$(KATI_PACKAGE_MK_DIR)/dist.mk)
$(info [$(call inc_and_print,subdir_makefiles_inc)/$(subdir_makefiles_total)] writing build rules ...)
diff --git a/core/native_test_config_template.xml b/core/native_test_config_template.xml
index ea982cf..788157c 100644
--- a/core/native_test_config_template.xml
+++ b/core/native_test_config_template.xml
@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="utf-8"?>
-<!-- Copyright (C) 2017 The Android Open Source Project
+<!-- Copyright (C) 2023 The Android Open Source Project
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -26,7 +26,7 @@
</target_preparer>
<test class="com.android.tradefed.testtype.GTest" >
- <option name="native-test-device-path" value="{TEST_INSTALL_BASE}" />
+ {EXTRA_TEST_RUNNER_CONFIGS}<option name="native-test-device-path" value="{TEST_INSTALL_BASE}" />
<option name="module-name" value="{MODULE}" />
</test>
</configuration>
diff --git a/core/ninja_config.mk b/core/ninja_config.mk
index e436b2c..2b5ceee 100644
--- a/core/ninja_config.mk
+++ b/core/ninja_config.mk
@@ -25,6 +25,7 @@
cts \
custom_images \
dicttool_aosp \
+ docs \
eng \
oem_image \
online-system-api-sdk-docs \
diff --git a/core/node_fns.mk b/core/node_fns.mk
index 2243cd7..144eb8b 100644
--- a/core/node_fns.mk
+++ b/core/node_fns.mk
@@ -83,27 +83,17 @@
# If needle appears multiple times, only the first occurrence
# will survive.
#
-# How it works:
-#
-# - Stick everything in haystack into a single word,
-# with "|||" separating the words.
-# - Replace occurrances of "|||$(needle)|||" with "||| |||",
-# breaking haystack back into multiple words, with spaces
-# where needle appeared.
-# - Add needle between the first and second words of haystack.
-# - Replace "|||" with spaces, breaking haystack back into
-# individual words.
-#
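+# For example (hypothetical input), $(call uniq-word,a b c b d,b) evaluates to
+# "a b c d": every word is emitted as-is, except that occurrences of needle
+# after the first are dropped. Note that needle may be a $(filter) pattern.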
define uniq-word
$(strip \
$(if $(filter-out 0 1,$(words $(filter $(2),$(1)))), \
- $(eval h := |||$(subst $(space),|||,$(strip $(1)))|||) \
- $(eval h := $(subst |||$(strip $(2))|||,|||$(space)|||,$(h))) \
- $(eval h := $(word 1,$(h)) $(2) $(wordlist 2,9999,$(h))) \
- $(subst |||,$(space),$(h)) \
- , \
- $(1) \
- ))
+ $(eval _uniq_word_seen :=) \
+ $(foreach w,$(1), \
+ $(if $(filter $(2),$(w)), \
+ $(if $(_uniq_word_seen),, \
+ $(w) \
+ $(eval _uniq_word_seen := true)), \
+ $(w))), \
+ $(1)))
endef
INHERIT_TAG := @inherit:
diff --git a/core/notice_files.mk b/core/notice_files.mk
index c05d4ea..a5852cc 100644
--- a/core/notice_files.mk
+++ b/core/notice_files.mk
@@ -11,6 +11,8 @@
ifneq (,$(strip $(LOCAL_LICENSE_PACKAGE_NAME)))
license_package_name:=$(strip $(LOCAL_LICENSE_PACKAGE_NAME))
+else
+license_package_name:=
endif
ifneq (,$(strip $(LOCAL_LICENSE_INSTALL_MAP)))
@@ -125,16 +127,21 @@
module_license_metadata :=
ifdef my_register_name
- module_license_metadata := $(call local-intermediates-dir)/$(my_register_name).meta_lic
+ module_license_metadata := $(call local-meta-intermediates-dir)/$(my_register_name).meta_lic
- $(foreach target,$(ALL_MODULES.$(my_register_name).BUILT) $(ALL_MODULES.$(my_register_name).INSTALLED) $(my_test_data) $(my_test_config),\
+ $(foreach target,$(ALL_MODULES.$(my_register_name).BUILT) $(ALL_MODULES.$(my_register_name).INSTALLED) $(foreach bi,$(LOCAL_SOONG_BUILT_INSTALLED),$(call word-colon,1,$(bi))),\
$(eval ALL_TARGETS.$(target).META_LIC := $(module_license_metadata)))
+ $(foreach f,$(my_test_data) $(my_test_config),\
+ $(if $(strip $(ALL_TARGETS.$(call word-colon,1,$(f)).META_LIC)), \
+ $(call declare-copy-target-license-metadata,$(call word-colon,2,$(f)),$(call word-colon,1,$(f))), \
+ $(eval ALL_TARGETS.$(call word-colon,2,$(f)).META_LIC := $(module_license_metadata))))
+
ALL_MODULES.$(my_register_name).META_LIC := $(strip $(ALL_MODULES.$(my_register_name).META_LIC) $(module_license_metadata))
ifdef LOCAL_SOONG_LICENSE_METADATA
# Soong modules have already produced a license metadata file, copy it to where Make expects it.
- $(eval $(call copy-one-file, $(LOCAL_SOONG_LICENSE_METADATA), $(module_license_metadata)))
+	$(eval $(call copy-one-license-metadata-file, $(LOCAL_SOONG_LICENSE_METADATA), $(module_license_metadata),$(ALL_MODULES.$(my_register_name).BUILT),$(ALL_MODULES.$(my_register_name).INSTALLED)))
else
# Make modules don't have enough information to produce a license metadata rule until after fix-notice-deps
# has been called, store the necessary information until later.
diff --git a/core/os_licensing.mk b/core/os_licensing.mk
index 416e4b2..1e1b7df 100644
--- a/core/os_licensing.mk
+++ b/core/os_licensing.mk
@@ -5,7 +5,7 @@
ifneq (,$(SYSTEM_NOTICE_DEPS))
-SYSTEM_NOTICE_DEPS += $(UNMOUNTED_NOTICE_DEPS)
+SYSTEM_NOTICE_DEPS += $(UNMOUNTED_NOTICE_DEPS) $(UNMOUNTED_NOTICE_VENDOR_DEPS)
ifneq ($(PRODUCT_NOTICE_SPLIT),true)
$(eval $(call html-notice-rule,$(target_notice_file_html_gz),"System image",$(system_notice_file_message),$(SYSTEM_NOTICE_DEPS),$(SYSTEM_NOTICE_DEPS)))
@@ -21,8 +21,8 @@
$(copy-file-to-target)
endif
-$(call declare-0p-target,$(target_notice_file_xml_gz))
-$(call declare-0p-target,$(installed_notice_html_or_xml_gz))
+$(call declare-1p-target,$(target_notice_file_xml_gz))
+$(call declare-1p-target,$(installed_notice_html_or_xml_gz))
endif
.PHONY: vendorlicense
@@ -30,7 +30,7 @@
ifneq (,$(VENDOR_NOTICE_DEPS))
-VENDOR_NOTICE_DEPS += $(UNMOUNTED_NOTICE_DEPS)
+VENDOR_NOTICE_DEPS += $(UNMOUNTED_NOTICE_VENDOR_DEPS)
$(eval $(call text-notice-rule,$(target_vendor_notice_file_txt),"Vendor image", \
"Notices for files contained in all filesystem images except system/system_ext/product/odm/vendor_dlkm/odm_dlkm in this directory:", \
@@ -43,8 +43,8 @@
$(installed_vendor_notice_xml_gz): $(target_vendor_notice_file_xml_gz)
$(copy-file-to-target)
-$(call declare-0p-target,$(target_vendor_notice_file_xml_gz))
-$(call declare-0p-target,$(installed_vendor_notice_xml_gz))
+$(call declare-1p-target,$(target_vendor_notice_file_xml_gz))
+$(call declare-1p-target,$(installed_vendor_notice_xml_gz))
endif
.PHONY: odmlicense
@@ -62,8 +62,8 @@
$(installed_odm_notice_xml_gz): $(target_odm_notice_file_xml_gz)
$(copy-file-to-target)
-$(call declare-0p-target,$(target_odm_notice_file_xml_gz))
-$(call declare-0p-target,$(installed_odm_notice_xml_gz))
+$(call declare-1p-target,$(target_odm_notice_file_xml_gz))
+$(call declare-1p-target,$(installed_odm_notice_xml_gz))
endif
.PHONY: oemlicense
@@ -84,8 +84,8 @@
$(installed_product_notice_xml_gz): $(target_product_notice_file_xml_gz)
$(copy-file-to-target)
-$(call declare-0p-target,$(target_product_notice_file_xml_gz))
-$(call declare-0p-target,$(installed_product_notice_xml_gz))
+$(call declare-1p-target,$(target_product_notice_file_xml_gz))
+$(call declare-1p-target,$(installed_product_notice_xml_gz))
endif
.PHONY: systemextlicense
@@ -103,8 +103,8 @@
$(installed_system_ext_notice_xml_gz): $(target_system_ext_notice_file_xml_gz)
$(copy-file-to-target)
-$(call declare-0p-target,$(target_system_ext_notice_file_xml_gz))
-$(call declare-0p-target,$(installed_system_ext_notice_xml_gz))
+$(call declare-1p-target,$(target_system_ext_notice_file_xml_gz))
+$(call declare-1p-target,$(installed_system_ext_notice_xml_gz))
endif
.PHONY: vendor_dlkmlicense
@@ -122,8 +122,8 @@
$(installed_vendor_dlkm_notice_xml_gz): $(target_vendor_dlkm_notice_file_xml_gz)
$(copy-file-to-target)
-$(call declare-0p-target,$(target_vendor_dlkm_notice_file_xml_gz))
-$(call declare-0p-target,$(installed_vendor_dlkm_notice_xml_gz))
+$(call declare-1p-target,$(target_vendor_dlkm_notice_file_xml_gz))
+$(call declare-1p-target,$(installed_vendor_dlkm_notice_xml_gz))
endif
.PHONY: odm_dlkmlicense
@@ -141,8 +141,8 @@
$(installed_odm_dlkm_notice_xml_gz): $(target_odm_dlkm_notice_file_xml_gz)
$(copy-file-to-target)
-$(call declare-0p-target,$(target_odm_dlkm_notice_file_xml_gz))
-$(call declare-0p-target,$(installed_odm_dlkm_notice_xml_gz))
+$(call declare-1p-target,$(target_odm_dlkm_notice_file_xml_gz))
+$(call declare-1p-target,$(installed_odm_dlkm_notice_xml_gz))
endif
.PHONY: system_dlkmlicense
@@ -160,8 +160,8 @@
$(installed_system_dlkm_notice_xml_gz): $(target_system_dlkm_notice_file_xml_gz)
$(copy-file-to-target)
-$(call declare-0p-target,$(target_system_dlkm_notice_file_xml_gz))
-$(call declare-0p-target,$(installed_sysetm_dlkm_notice_xml_gz))
+$(call declare-1p-target,$(target_system_dlkm_notice_file_xml_gz))
+$(call declare-1p-target,$(installed_system_dlkm_notice_xml_gz))
endif
endif # not TARGET_BUILD_APPS
diff --git a/core/package_internal.mk b/core/package_internal.mk
index c7a173b..3e9106b 100644
--- a/core/package_internal.mk
+++ b/core/package_internal.mk
@@ -111,24 +111,26 @@
# Determine whether auto-RRO is enabled for this package.
enforce_rro_enabled :=
-ifneq (,$(filter *, $(PRODUCT_ENFORCE_RRO_TARGETS)))
- # * means all system and system_ext APKs, so enable conditionally based on module path.
+ifeq (,$(filter tests,$(LOCAL_MODULE_TAGS)))
+ ifneq (,$(filter *, $(PRODUCT_ENFORCE_RRO_TARGETS)))
+ # * means all system and system_ext APKs, so enable conditionally based on module path.
- # Note that base_rules.mk has not yet been included, so it's likely that only
- # one of LOCAL_MODULE_PATH and the LOCAL_X_MODULE flags has been set.
- ifeq (,$(LOCAL_MODULE_PATH))
- non_rro_target_module := $(filter true,\
- $(LOCAL_ODM_MODULE) \
- $(LOCAL_OEM_MODULE) \
- $(LOCAL_PRODUCT_MODULE) \
- $(LOCAL_PROPRIETARY_MODULE) \
- $(LOCAL_VENDOR_MODULE))
- enforce_rro_enabled := $(if $(non_rro_target_module),,true)
- else ifneq ($(filter $(TARGET_OUT)/%,$(LOCAL_MODULE_PATH)),)
+ # Note that base_rules.mk has not yet been included, so it's likely that only
+ # one of LOCAL_MODULE_PATH and the LOCAL_X_MODULE flags has been set.
+ ifeq (,$(LOCAL_MODULE_PATH))
+ non_rro_target_module := $(filter true,\
+ $(LOCAL_ODM_MODULE) \
+ $(LOCAL_OEM_MODULE) \
+ $(LOCAL_PRODUCT_MODULE) \
+ $(LOCAL_PROPRIETARY_MODULE) \
+ $(LOCAL_VENDOR_MODULE))
+ enforce_rro_enabled := $(if $(non_rro_target_module),,true)
+ else ifneq ($(filter $(TARGET_OUT)/%,$(LOCAL_MODULE_PATH)),)
+ enforce_rro_enabled := true
+ endif
+ else ifneq (,$(filter $(LOCAL_PACKAGE_NAME), $(PRODUCT_ENFORCE_RRO_TARGETS)))
enforce_rro_enabled := true
endif
-else ifneq (,$(filter $(LOCAL_PACKAGE_NAME), $(PRODUCT_ENFORCE_RRO_TARGETS)))
- enforce_rro_enabled := true
endif
product_package_overlays := $(strip \
diff --git a/core/prebuilt_internal.mk b/core/prebuilt_internal.mk
index ef1471d..5bea9b6 100644
--- a/core/prebuilt_internal.mk
+++ b/core/prebuilt_internal.mk
@@ -57,6 +57,9 @@
$(error $(LOCAL_MODULE) : unexpected LOCAL_MODULE_CLASS for prebuilts: $(LOCAL_MODULE_CLASS))
endif
+$(if $(filter-out $(SOONG_ANDROID_MK),$(LOCAL_MODULE_MAKEFILE)), \
+ $(eval ALL_MODULES.$(my_register_name).IS_PREBUILT_MAKE_MODULE := Y))
+
$(built_module) : $(LOCAL_ADDITIONAL_DEPENDENCIES)
my_prebuilt_src_file :=
diff --git a/core/product.mk b/core/product.mk
index e57ca13..e90e27b 100644
--- a/core/product.mk
+++ b/core/product.mk
@@ -24,8 +24,16 @@
_product_single_value_vars += PRODUCT_NAME
_product_single_value_vars += PRODUCT_MODEL
+_product_single_value_vars += PRODUCT_NAME_FOR_ATTESTATION
+_product_single_value_vars += PRODUCT_MODEL_FOR_ATTESTATION
-# The resoure configuration options to use for this product.
+# Defines the ELF segment alignment for binaries (executables and shared libraries).
+# The ELF segment alignment has to be a PAGE_SIZE multiple. For example, if
+# PRODUCT_MAX_PAGE_SIZE_SUPPORTED=65536, the possible values for PAGE_SIZE are
+# 4096, 16384 and 65536.
+_product_single_value_vars += PRODUCT_MAX_PAGE_SIZE_SUPPORTED
+
+# The resource configuration options to use for this product.
_product_list_vars += PRODUCT_LOCALES
_product_list_vars += PRODUCT_AAPT_CONFIG
_product_single_value_vars += PRODUCT_AAPT_PREF_CONFIG
@@ -34,6 +42,7 @@
_product_list_vars += PRODUCT_PACKAGES
_product_list_vars += PRODUCT_PACKAGES_DEBUG
_product_list_vars += PRODUCT_PACKAGES_DEBUG_ASAN
+_product_list_vars += PRODUCT_PACKAGES_ARM64
# Packages included only for eng/userdebug builds, when building with EMMA_INSTRUMENT=true
_product_list_vars += PRODUCT_PACKAGES_DEBUG_JAVA_COVERAGE
_product_list_vars += PRODUCT_PACKAGES_ENG
@@ -43,6 +52,7 @@
_product_single_value_vars += PRODUCT_DEVICE
_product_single_value_vars += PRODUCT_MANUFACTURER
_product_single_value_vars += PRODUCT_BRAND
+_product_single_value_vars += PRODUCT_BRAND_FOR_ATTESTATION
# These PRODUCT_SYSTEM_* flags, if defined, are used in place of the
# corresponding PRODUCT_* flags for the sysprops on /system.
@@ -136,10 +146,7 @@
# PRODUCT_BOOT_JARS, so that device-specific jars go after common jars.
_product_list_vars += PRODUCT_BOOT_JARS_EXTRA
-_product_single_value_vars += PRODUCT_SUPPORTS_BOOT_SIGNER
_product_single_value_vars += PRODUCT_SUPPORTS_VBOOT
-_product_single_value_vars += PRODUCT_SUPPORTS_VERITY
-_product_single_value_vars += PRODUCT_SUPPORTS_VERITY_FEC
_product_list_vars += PRODUCT_SYSTEM_SERVER_APPS
# List of system_server classpath jars on the platform.
_product_list_vars += PRODUCT_SYSTEM_SERVER_JARS
@@ -168,7 +175,6 @@
_product_list_vars += PRODUCT_LOADED_BY_PRIVILEGED_MODULES
_product_single_value_vars += PRODUCT_VBOOT_SIGNING_KEY
_product_single_value_vars += PRODUCT_VBOOT_SIGNING_SUBKEY
-_product_single_value_vars += PRODUCT_VERITY_SIGNING_KEY
_product_single_value_vars += PRODUCT_SYSTEM_VERITY_PARTITION
_product_single_value_vars += PRODUCT_VENDOR_VERITY_PARTITION
_product_single_value_vars += PRODUCT_PRODUCT_VERITY_PARTITION
@@ -238,6 +244,9 @@
# Whether any paths are excluded from sanitization when SANITIZE_TARGET=cfi
_product_list_vars += PRODUCT_CFI_EXCLUDE_PATHS
+# List of paths for which HWASan should be enabled
+_product_list_vars += PRODUCT_HWASAN_INCLUDE_PATHS
+
# Whether the Scudo hardened allocator is disabled platform-wide
_product_single_value_vars += PRODUCT_DISABLE_SCUDO
@@ -270,6 +279,9 @@
# List of tags that will be used to gate blueprint modules from the build graph
_product_list_vars += PRODUCT_INCLUDE_TAGS
+# List of directories that will be used to gate blueprint modules from the build graph
+_product_list_vars += PRODUCT_SOURCE_ROOT_DIRS
+
# When this is true, various build time as well as runtime debugfs restrictions are enabled.
_product_single_value_vars += PRODUCT_SET_DEBUGFS_RESTRICTIONS
@@ -363,20 +375,37 @@
# This option is only meant to be set by compliance GSI targets.
_product_single_value_vars += PRODUCT_INSTALL_DEBUG_POLICY_TO_SYSTEM_EXT
-# If set, metadata files for the following artifacts will be generated.
-# - system/framework/*.jar
-# - system/framework/oat/<arch>/*.{oat,vdex,art}
-# - system/etc/boot-image.prof
-# - system/etc/dirty-image-objects
-# One fsverity metadata container file per one input file will be generated in
-# system.img, with a suffix ".fsv_meta". e.g. a container file for
-# "/system/framework/foo.jar" will be "system/framework/foo.jar.fsv_meta".
-_product_single_value_vars += PRODUCT_SYSTEM_FSVERITY_GENERATE_METADATA
+# If set, fsverity metadata files will be generated for each file in the
+# allowlist, plus a manifest APK per partition. For example,
+# /system/framework/service.jar will come with service.jar.fsv_meta in the same
+# directory; the file information will also be included in
+# /system/etc/security/fsverity/BuildManifest.apk
+_product_single_value_vars += PRODUCT_FSVERITY_GENERATE_METADATA
# If true, sets the default for MODULE_BUILD_FROM_SOURCE. This overrides
# BRANCH_DEFAULT_MODULE_BUILD_FROM_SOURCE but not an explicitly set value.
_product_single_value_vars += PRODUCT_MODULE_BUILD_FROM_SOURCE
+# If true, installs a full version of com.android.virt APEX.
+_product_single_value_vars += PRODUCT_AVF_ENABLED
+
+# List of .json files to be merged/compiled into vendor/etc/linker.config.pb
+_product_list_vars += PRODUCT_VENDOR_LINKER_CONFIG_FRAGMENTS
+
+# Whether to use userfaultfd GC.
+# Possible values are:
+# - "default" or empty: both the build system and the runtime determine whether to use userfaultfd
+# GC based on the vendor API level
+# - "true": forces the build system to use userfaultfd GC regardless of the vendor API level; the
+# runtime determines whether to use userfaultfd GC based on the kernel support. Note that the
+# device may have to re-compile everything on the first boot if the kernel doesn't support
+# userfaultfd
+#   - "false": disallows the build system and the runtime from using userfaultfd GC even if the device
+# supports it
+_product_single_value_vars += PRODUCT_ENABLE_UFFD_GC
+
+_product_list_vars += PRODUCT_AFDO_PROFILES
+
.KATI_READONLY := _product_single_value_vars _product_list_vars
_product_var_list :=$= $(_product_single_value_vars) $(_product_list_vars)
@@ -407,7 +436,7 @@
$(eval current_mk := $(strip $(word 1,$(_include_stack)))) \
$(eval inherit_var := PRODUCTS.$(current_mk).INHERITS_FROM) \
$(eval $(inherit_var) := $(sort $($(inherit_var)) $(np))) \
- $(call dump-inherit,$(strip $(word 1,$(_include_stack))),$(1)) \
+ $(call dump-inherit,$(current_mk),$(1)) \
$(call dump-config-vals,$(current_mk),inherit))
endef
diff --git a/core/product_config.mk b/core/product_config.mk
index c55d07b..5d76eeb 100644
--- a/core/product_config.mk
+++ b/core/product_config.mk
@@ -74,7 +74,7 @@
###########################################################
define find-copy-subdir-files
-$(sort $(shell find $(2) -name "$(1)" -type f | $(SED_EXTENDED) "s:($(2)/?(.*)):\\1\\:$(3)/\\2:" | sed "s://:/:g"))
+$(shell find $(2) -name "$(1)" -type f | $(SED_EXTENDED) "s:($(2)/?(.*)):\\1\\:$(3)/\\2:" | sed "s://:/:g" | sort)
endef
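+# For example (paths hypothetical), $(call find-copy-subdir-files,*.xml,device/acme/overlay,vendor/overlay)
+# expands each found file device/acme/overlay/foo/a.xml to the copy pair
+# device/acme/overlay/foo/a.xml:vendor/overlay/foo/a.xml, with the output sorted.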
#
@@ -144,7 +144,6 @@
#
include $(BUILD_SYSTEM)/node_fns.mk
include $(BUILD_SYSTEM)/product.mk
-include $(BUILD_SYSTEM)/device.mk
# Read all product definitions.
#
@@ -210,7 +209,6 @@
# Dedup, extract product names, etc.
product_paths := $(sort $(product_paths))
all_named_products := $(sort $(call _first,$(product_paths),:))
-all_product_makefiles := $(sort $(call _second,$(product_paths),:))
current_product_makefile := $(call _second,$(filter $(TARGET_PRODUCT):%,$(product_paths)),:)
COMMON_LUNCH_CHOICES := $(sort $(common_lunch_choices))
@@ -230,7 +228,6 @@
ifneq (,$(filter $(TARGET_PRODUCT),$(products_using_starlark_config)))
RBC_PRODUCT_CONFIG := true
- RBC_BOARD_CONFIG := true
endif
ifndef RBC_PRODUCT_CONFIG
@@ -274,8 +271,6 @@
############################################################################
current_product_makefile :=
-all_product_makefiles :=
-all_product_configs :=
#############################################################################
# Check product include tag allowlist
@@ -494,6 +489,9 @@
ifneq (,$(call math_gt_or_eq,29,$(PRODUCT_SHIPPING_API_LEVEL)))
PRODUCT_PACKAGES += $(PRODUCT_PACKAGES_SHIPPING_API_LEVEL_29)
endif
+ ifneq (,$(call math_gt_or_eq,33,$(PRODUCT_SHIPPING_API_LEVEL)))
+ PRODUCT_PACKAGES += $(PRODUCT_PACKAGES_SHIPPING_API_LEVEL_33)
+ endif
endif
# If build command defines OVERRIDE_PRODUCT_EXTRA_VNDK_VERSIONS,
diff --git a/core/product_config.rbc b/core/product_config.rbc
index 0189323..97c1d00 100644
--- a/core/product_config.rbc
+++ b/core/product_config.rbc
@@ -59,6 +59,12 @@
if _options.format == "pretty":
print(attr, "=", repr(value))
elif _options.format == "make":
+ value = list(value)
+ for i, x in enumerate(value):
+ if type(x) == "tuple" and len(x) == 1:
+ value[i] = "@inherit:" + x[0] + ".mk"
+ elif type(x) != "string":
+ fail("Wasn't a list of strings:", attr, " value:", value)
print(attr, ":=", " ".join(value))
elif _options.format == "pretty":
print(attr, "=", repr(value))
@@ -147,7 +153,7 @@
# Run this one, obtaining its configuration and child PCMs.
if _options.trace_modules:
- print("#%d: %s" % (n, name))
+ rblf_log("%d: %s" % (n, name))
# Run PCM.
handle = __h_new()
@@ -167,7 +173,7 @@
# Now we know everything about this PCM, record it in 'configs'.
children = handle.inherited_modules
if _options.trace_modules:
- print("# ", " ".join(children.keys()))
+ rblf_log(" ", " ".join(children.keys()))
# Starlark dictionaries are guaranteed to iterate through in insertion order,
# so children.keys() will be ordered by the inherit() calls
configs[name] = (pcm, handle.cfg, children.keys(), False)
@@ -234,9 +240,9 @@
configs = cloned_configs
if trace:
- print("\n#---Postfix---")
+ rblf_log("\n---Postfix---")
for x in configs_postfix:
- print("# ", x)
+ rblf_log(" ", x)
# Traverse the tree from the bottom, evaluating inherited values
for pcm_name in configs_postfix:
@@ -309,7 +315,7 @@
old_val = val
new_val = _value_expand(configs, attr, val)
if new_val != old_val:
- print("%s(i): %s=%s (was %s)" % (pcm_name, attr, new_val, old_val))
+ rblf_log("%s(i): %s=%s (was %s)" % (pcm_name, attr, new_val, old_val))
cfg[attr] = new_val
def _value_expand(configs, attr, values_list):
@@ -363,7 +369,7 @@
for attr in _options.trace_variables:
if attr in percolated_attrs:
- print("%s: %s^=%s" % (cfg_name, attr, cfg[attr]))
+ rblf_log("%s: %s^=%s" % (cfg_name, attr, cfg[attr]))
def __move_items(to_list, from_cfg, attr):
value = from_cfg.get(attr, [])
@@ -456,6 +462,9 @@
def __words(string_or_list):
if type(string_or_list) == "list":
+ for x in string_or_list:
+ if type(x) != "string":
+ return string_or_list
string_or_list = " ".join(string_or_list)
return _mkstrip(string_or_list).split()
@@ -536,8 +545,11 @@
"""If from file exists, returns [from:to] pair."""
value = path_pair.split(":", 2)
+ if value[0].find('*') != -1:
+ fail("copy_if_exists: input file cannot contain *")
+
# Check that l[0] exists
- return [":".join(value)] if rblf_file_exists(value[0]) else []
+ return [":".join(value)] if rblf_wildcard(value[0]) else []
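+
+# For example (path hypothetical), _copy_if_exists("device/acme/foo.conf:system/etc/foo.conf")
+# returns ["device/acme/foo.conf:system/etc/foo.conf"] if the source file exists,
+# and [] otherwise; a source containing "*" now fails instead of being globbed.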
def _enforce_product_packages_exist(handle, pkg_string_or_list=[]):
"""Makes including non-existent modules in PRODUCT_PACKAGES an error."""
@@ -552,10 +564,6 @@
_setdefault(handle, "PRODUCT_DEX_PREOPT_MODULE_CONFIGS")
handle.cfg["PRODUCT_DEX_PREOPT_MODULE_CONFIGS"] += [m + "=" + config for m in modules]
-def _file_wildcard_exists(file_pattern):
- """Return True if there are files matching given bash pattern."""
- return len(rblf_wildcard(file_pattern)) > 0
-
def _find_and_copy(pattern, from_dir, to_dir):
"""Return a copy list for the files matching the pattern."""
return sorted([("%s/%s:%s/%s" % (from_dir, f, to_dir, f))
@@ -605,6 +613,27 @@
break
return res
+def _first_word(input):
+ """Equivalent to the GNU make function $(firstword)."""
+ input = __words(input)
+ if len(input) == 0:
+ return ""
+ return input[0]
+
+def _last_word(input):
+ """Equivalent to the GNU make function $(lastword)."""
+ input = __words(input)
+ l = len(input)
+ if l == 0:
+ return ""
+ return input[l-1]
+
+def _flatten_2d_list(list):
+ result = []
+ for x in list:
+ result += x
+ return result
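+
+# For example: _first_word("a b c") returns "a", _last_word(["a", "b"]) returns
+# "b", and _flatten_2d_list([[1, 2], [3]]) returns [1, 2, 3].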
+
def _dir(paths):
"""Equivalent to the GNU make function $(dir).
@@ -767,8 +796,11 @@
That is, removes string's leading and trailing whitespace characters and
replaces any sequence of whitespace characters with a single space.
"""
- if type(s) != "string":
- return s
+ t = type(s)
+ if t == "list":
+ s = " ".join(s)
+ elif t != "string":
+ fail("Argument to mkstrip must be a string or list, got: "+t)
result = ""
was_space = False
for ch in s.strip().elems():
@@ -807,6 +839,41 @@
return [ __mkpatsubst_word(parsed_percent, parsed_src, x) + ":" + __mkpatsubst_word(parsed_percent, parsed_dest, x) for x in words]
+__zero_values = {
+ "string": "",
+ "list": [],
+ "int": 0,
+ "float": 0,
+ "bool": False,
+ "dict": {},
+ "NoneType": None,
+ "tuple": (),
+}
+def __zero_value(x):
+ t = type(x)
+ if t in __zero_values:
+ return __zero_values[t]
+ else:
+ fail("Unknown type: "+t)
+
+
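+# Usage sketch (variable names hypothetical): clear_var_list(g, h, "PRODUCT_PACKAGES MY_FLAG")
+# resets an existing list variable to [] and an existing string to "", and defines
+# any variable that was never set as "" so it still appears set, as in make.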
+def _clear_var_list(g, h, var_list):
+ cfg = __h_cfg(h)
+ for v in __words(var_list):
+        # Set these variables to their zero values rather than to None,
+        # and do not remove them from the dictionary: if they were removed
+        # entirely, ?= would set their value, which it would not do after
+        # a make-based clear_var_list call.
+ if v in g:
+ g[v] = __zero_value(g[v])
+ if v in cfg:
+ cfg[v] = __zero_value(cfg[v])
+
+ if v not in cfg and v not in g:
+ # Cause the variable to appear set like the make version does
+ g[v] = ""
+
+
def __get_options():
"""Returns struct containing runtime global settings."""
settings = dict(
@@ -850,18 +917,20 @@
addsuffix = _addsuffix,
board_platform_in = _board_platform_in,
board_platform_is = _board_platform_is,
+ clear_var_list = _clear_var_list,
copy_files = _copy_files,
copy_if_exists = _copy_if_exists,
cfg = __h_cfg,
dir = _dir,
enforce_product_packages_exist = _enforce_product_packages_exist,
expand_wildcard = _expand_wildcard,
- file_exists = rblf_file_exists,
- file_wildcard_exists = _file_wildcard_exists,
filter = _filter,
filter_out = _filter_out,
find_and_copy = _find_and_copy,
findstring = _findstring,
+ first_word = _first_word,
+ last_word = _last_word,
+ flatten_2d_list = _flatten_2d_list,
inherit = _inherit,
indirect = _indirect,
mk2rbc_error = _mk2rbc_error,
diff --git a/core/proguard.flags b/core/proguard.flags
index 185275e..d790061 100644
--- a/core/proguard.flags
+++ b/core/proguard.flags
@@ -9,10 +9,26 @@
# Add this flag in your package's own configuration if it's needed.
#-flattenpackagehierarchy
-# Keep classes and methods that have the guava @VisibleForTesting annotation
--keep @**.VisibleForTesting class *
+# Keep classes and members with the platform-defined @VisibleForTesting annotation.
+-keep @com.android.internal.annotations.VisibleForTesting class *
-keepclassmembers class * {
-@**.VisibleForTesting *;
+ @com.android.internal.annotations.VisibleForTesting *;
+}
+
+# Keep classes and members with non-platform @VisibleForTesting annotations, but
+# only within platform-defined packages. This avoids keeping external, library-specific
+# test code that isn't actually needed for platform testing.
+# TODO(b/239961360): Migrate away from androidx.annotation.VisibleForTesting
+# and com.google.common.annotations.VisibleForTesting use in platform code.
+-keep @**.VisibleForTesting class android.**,com.android.**,com.google.android.**
+-keepclassmembers class android.**,com.android.**,com.google.android.** {
+ @**.VisibleForTesting *;
+}
+
+# Keep rule for members that are needed solely to keep alive downstream weak
+# references, and could otherwise be removed after tree shaking optimizations.
+-keepclassmembers,allowaccessmodification,allowobfuscation,allowshrinking class * {
+ @com.android.internal.annotations.KeepForWeakReference <fields>;
}
# Understand the common @Keep annotation from various Android packages:
diff --git a/core/proguard_basic_keeps.flags b/core/proguard_basic_keeps.flags
index 30c2341..b59527a 100644
--- a/core/proguard_basic_keeps.flags
+++ b/core/proguard_basic_keeps.flags
@@ -2,6 +2,14 @@
# that isn't explicitly part of the API
-dontskipnonpubliclibraryclasses -dontskipnonpubliclibraryclassmembers
+# Preserve line number information for debugging stack traces.
+-keepattributes SourceFile,LineNumberTable
+
+# Annotations are implemented as attributes, so we have to explicitly keep them.
+# Keep all runtime-visible annotations like RuntimeVisibleParameterAnnotations
+# and RuntimeVisibleTypeAnnotations, as well as associated defaults.
+-keepattributes RuntimeVisible*Annotation*,AnnotationDefault
+
# For enumeration classes, see http://proguard.sourceforge.net/manual/examples.html#enumerations
-keepclassmembers enum * {
public static **[] values();
@@ -33,6 +41,11 @@
java.lang.Object readResolve();
}
+# Keep all Javascript API methods
+-keepclassmembers class * {
+ @android.webkit.JavascriptInterface <methods>;
+}
+
# Keep Throwable's constructor that takes a String argument.
-keepclassmembers class * extends java.lang.Throwable {
<init>(java.lang.String);
@@ -48,7 +61,7 @@
# -keep class * extends android.app.BackupAgent
# Parcelable CREATORs must be kept for Parcelable functionality
--keep class * implements android.os.Parcelable {
+-keepclassmembers class * implements android.os.Parcelable {
public static final ** CREATOR;
}
@@ -70,9 +83,23 @@
# has a fallback, but again, don't use Futures.getChecked on Android regardless.
-dontwarn java.lang.ClassValue
+# Ignore missing annotation references for various support libraries.
+# While this is not ideal, it should be relatively safe given that
+# 1) runtime-visible annotations will still be kept, and 2) compile-time
+# annotations are stripped by R8 anyway.
+# Note: The ** prefix is used to accommodate jarjar repackaging.
+# TODO(b/242088131): Remove these exemptions after resolving transitive libs
+# dependencies that are provided to R8.
+-dontwarn **android**.annotation*.**
+-dontwarn **com.google.errorprone.annotations.**
+-dontwarn javax.annotation.**
+-dontwarn org.checkerframework.**
+-dontwarn org.jetbrains.annotations.**
+
# Less spammy.
-dontnote
# The lite proto runtime uses reflection to access fields based on the names in
-# the schema, keep all the fields.
--keepclassmembers class * extends com.google.protobuf.MessageLite { <fields>; }
+# the schema; keep all the fields. A wildcard is used to apply the rule to classes
+# that have been renamed with jarjar.
+-keepclassmembers class * extends **.protobuf.MessageLite { <fields>; }
diff --git a/core/python_binary_host_mobly_test_config_template.xml b/core/python_binary_host_mobly_test_config_template.xml
new file mode 100644
index 0000000..a6576cd
--- /dev/null
+++ b/core/python_binary_host_mobly_test_config_template.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2023 The Android Open Source Project
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<configuration description="Config for {MODULE} mobly test">
+ {EXTRA_CONFIGS}
+
+ <device name="device1"></device>
+ <device name="device2"></device>
+
+ <test class="com.android.tradefed.testtype.mobly.MoblyBinaryHostTest">
+ <!-- The mobly-par-file-name should match the module name -->
+ <option name="mobly-par-file-name" value="{MODULE}" />
+ <!-- Timeout limit in milliseconds for all test cases of the python binary -->
+ <option name="mobly-test-timeout" value="300000" />
+ </test>
+</configuration>
diff --git a/core/rbe.mk b/core/rbe.mk
index fd3427a..6754b0a 100644
--- a/core/rbe.mk
+++ b/core/rbe.mk
@@ -81,11 +81,11 @@
endif
ifdef RBE_R8
- R8_WRAPPER := $(strip $(RBE_WRAPPER) --labels=type=compile,compiler=r8 --exec_strategy=$(r8_exec_strategy) --platform=$(java_r8_d8_platform) --inputs=out/soong/host/linux-x86/framework/r8-compat-proguard.jar,build/make/core/proguard_basic_keeps.flags --toolchain_inputs=prebuilts/jdk/jdk11/linux-x86/bin/java)
+ R8_WRAPPER := $(strip $(RBE_WRAPPER) --labels=type=compile,compiler=r8 --exec_strategy=$(r8_exec_strategy) --platform=$(java_r8_d8_platform) --inputs=$(OUT_DIR)/host/linux-x86/framework/r8.jar,build/make/core/proguard_basic_keeps.flags --toolchain_inputs=$(firstword $(JAVA)))
endif
ifdef RBE_D8
- D8_WRAPPER := $(strip $(RBE_WRAPPER) --labels=type=compile,compiler=d8 --exec_strategy=$(d8_exec_strategy) --platform=$(java_r8_d8_platform) --inputs=out/soong/host/linux-x86/framework/d8.jar --toolchain_inputs=prebuilts/jdk/jdk11/linux-x86/bin/java)
+ D8_WRAPPER := $(strip $(RBE_WRAPPER) --labels=type=compile,compiler=d8 --exec_strategy=$(d8_exec_strategy) --platform=$(java_r8_d8_platform) --inputs=$(OUT_DIR)/host/linux-x86/framework/d8.jar --toolchain_inputs=$(firstword $(JAVA)))
endif
rbe_dir :=
diff --git a/core/robolectric_test_config_template.xml b/core/robolectric_test_config_template.xml
index 483b957..56d2312 100644
--- a/core/robolectric_test_config_template.xml
+++ b/core/robolectric_test_config_template.xml
@@ -18,7 +18,7 @@
<option name="test-suite-tag" value="robolectric" />
<option name="test-suite-tag" value="robolectric-tests" />
- <option name="java-folder" value="prebuilts/jdk/jdk11/linux-x86/" />
+ <option name="java-folder" value="prebuilts/jdk/jdk17/linux-x86/" />
<option name="exclude-paths" value="java" />
<option name="use-robolectric-resources" value="true" />
@@ -26,5 +26,12 @@
<test class="com.android.tradefed.testtype.IsolatedHostTest" >
<option name="jar" value="{MODULE}.jar" />
+ <option name="java-flags" value="--add-modules=jdk.compiler"/>
+ <option name="java-flags" value="--add-opens=java.base/java.lang=ALL-UNNAMED"/>
+ <option name="java-flags" value="--add-opens=java.base/java.lang.reflect=ALL-UNNAMED"/>
+ <!-- b/238100560 -->
+ <option name="java-flags" value="--add-opens=java.base/jdk.internal.util.random=ALL-UNNAMED"/>
+ <!-- b/251387255 -->
+ <option name="java-flags" value="--add-opens=java.base/java.io=ALL-UNNAMED"/>
</test>
</configuration>
diff --git a/core/sbom.mk b/core/sbom.mk
new file mode 100644
index 0000000..39c251a
--- /dev/null
+++ b/core/sbom.mk
@@ -0,0 +1,22 @@
+# For SBOM generation
+# This is included by base_rules.mk and does not need to be included in other
+# .mk files, unless a .mk file changes its installed files after including base_rules.mk.
+
+ifdef my_register_name
+  # ALL_INSTALLED_FILES.$(installed_file).STATIC_LIBRARIES: list of module names of static libraries, e.g. libc++demangle libclang_rt.builtins, for the primary arch
+  # ALL_INSTALLED_FILES.$(installed_file).WHOLE_STATIC_LIBRARIES: list of module names of static libraries, e.g. libc++demangle_32 libclang_rt.builtins_32, for the 2nd arch.
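+  # For illustration (module name hypothetical): if module libfoo installs
+  # $(PRODUCT_OUT)/system/lib64/libfoo.so, the block below sets
+  # ALL_INSTALLED_FILES.$(PRODUCT_OUT)/system/lib64/libfoo.so := libfoo and
+  # records its (whole) static libraries under the same key.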
+ ifneq (, $(strip $(ALL_MODULES.$(my_register_name).INSTALLED)))
+ $(foreach installed_file,$(ALL_MODULES.$(my_register_name).INSTALLED),\
+ $(eval ALL_INSTALLED_FILES.$(installed_file) := $(my_register_name))\
+ $(eval ALL_INSTALLED_FILES.$(installed_file).STATIC_LIBRARIES := $(foreach l,$(strip $(sort $(LOCAL_STATIC_LIBRARIES))),$l$(if $(LOCAL_2ND_ARCH_VAR_PREFIX),$($(my_prefix)2ND_ARCH_MODULE_SUFFIX))))\
+ $(eval ALL_INSTALLED_FILES.$(installed_file).WHOLE_STATIC_LIBRARIES := $(foreach l,$(strip $(sort $(LOCAL_WHOLE_STATIC_LIBRARIES))),$l$(if $(LOCAL_2ND_ARCH_VAR_PREFIX),$($(my_prefix)2ND_ARCH_MODULE_SUFFIX))))\
+ )
+ endif
+ ifeq (STATIC_LIBRARIES,$(LOCAL_MODULE_CLASS))
+ ALL_STATIC_LIBRARIES.$(my_register_name).STATIC_LIBRARIES := $(foreach l,$(strip $(sort $(LOCAL_STATIC_LIBRARIES))),$l$($(my_prefix)2ND_ARCH_MODULE_SUFFIX))
+ ALL_STATIC_LIBRARIES.$(my_register_name).WHOLE_STATIC_LIBRARIES := $(foreach l,$(strip $(sort $(LOCAL_WHOLE_STATIC_LIBRARIES))),$l$($(my_prefix)2ND_ARCH_MODULE_SUFFIX))
+ ifdef LOCAL_SOONG_MODULE_TYPE
+ ALL_STATIC_LIBRARIES.$(my_register_name).BUILT_FILE := $(LOCAL_PREBUILT_MODULE_FILE)
+ endif
+ endif
+endif
\ No newline at end of file
diff --git a/core/soong_app_prebuilt.mk b/core/soong_app_prebuilt.mk
index d771d22..ccc5449 100644
--- a/core/soong_app_prebuilt.mk
+++ b/core/soong_app_prebuilt.mk
@@ -162,22 +162,27 @@
# embedded JNI will already have been handled by soong
my_embed_jni :=
my_prebuilt_jni_libs :=
-ifdef LOCAL_SOONG_JNI_LIBS_$(TARGET_ARCH)
- my_2nd_arch_prefix :=
- LOCAL_JNI_SHARED_LIBRARIES := $(LOCAL_SOONG_JNI_LIBS_$(TARGET_ARCH))
- include $(BUILD_SYSTEM)/install_jni_libs_internal.mk
-endif
-ifdef TARGET_2ND_ARCH
- ifdef LOCAL_SOONG_JNI_LIBS_$(TARGET_2ND_ARCH)
- my_2nd_arch_prefix := $(TARGET_2ND_ARCH_VAR_PREFIX)
- LOCAL_JNI_SHARED_LIBRARIES := $(LOCAL_SOONG_JNI_LIBS_$(TARGET_2ND_ARCH))
+ifneq (true,$(LOCAL_UNINSTALLABLE_MODULE))
+ ifdef LOCAL_SOONG_JNI_LIBS_$(TARGET_ARCH)
+ my_2nd_arch_prefix :=
+ LOCAL_JNI_SHARED_LIBRARIES := $(LOCAL_SOONG_JNI_LIBS_$(TARGET_ARCH))
+ partition_lib_pairs := $(LOCAL_SOONG_JNI_LIBS_PARTITION_$(TARGET_ARCH))
include $(BUILD_SYSTEM)/install_jni_libs_internal.mk
endif
+ ifdef TARGET_2ND_ARCH
+ ifdef LOCAL_SOONG_JNI_LIBS_$(TARGET_2ND_ARCH)
+ my_2nd_arch_prefix := $(TARGET_2ND_ARCH_VAR_PREFIX)
+ LOCAL_JNI_SHARED_LIBRARIES := $(LOCAL_SOONG_JNI_LIBS_$(TARGET_2ND_ARCH))
+ partition_lib_pairs := $(LOCAL_SOONG_JNI_LIBS_PARTITION_$(TARGET_2ND_ARCH))
+ include $(BUILD_SYSTEM)/install_jni_libs_internal.mk
+ endif
+ endif
endif
LOCAL_SHARED_JNI_LIBRARIES :=
my_embed_jni :=
my_prebuilt_jni_libs :=
my_2nd_arch_prefix :=
+partition_lib_pairs :=
PACKAGES := $(PACKAGES) $(LOCAL_MODULE)
ifndef LOCAL_CERTIFICATE
@@ -234,26 +239,28 @@
include $(BUILD_SYSTEM)/link_type.mk
endif # !LOCAL_IS_HOST_MODULE
-ifdef LOCAL_SOONG_DEVICE_RRO_DIRS
- $(call append_enforce_rro_sources, \
- $(my_register_name), \
- false, \
- $(LOCAL_FULL_MANIFEST_FILE), \
- $(if $(LOCAL_EXPORT_PACKAGE_RESOURCES),true,false), \
- $(LOCAL_SOONG_DEVICE_RRO_DIRS), \
- vendor \
- )
-endif
+ifeq (,$(filter tests,$(LOCAL_MODULE_TAGS)))
+ ifdef LOCAL_SOONG_DEVICE_RRO_DIRS
+ $(call append_enforce_rro_sources, \
+ $(my_register_name), \
+ false, \
+ $(LOCAL_FULL_MANIFEST_FILE), \
+ $(if $(LOCAL_EXPORT_PACKAGE_RESOURCES),true,false), \
+ $(LOCAL_SOONG_DEVICE_RRO_DIRS), \
+ vendor \
+ )
+ endif
-ifdef LOCAL_SOONG_PRODUCT_RRO_DIRS
- $(call append_enforce_rro_sources, \
- $(my_register_name), \
- false, \
- $(LOCAL_FULL_MANIFEST_FILE), \
- $(if $(LOCAL_EXPORT_PACKAGE_RESOURCES),true,false), \
- $(LOCAL_SOONG_PRODUCT_RRO_DIRS), \
- product \
- )
+ ifdef LOCAL_SOONG_PRODUCT_RRO_DIRS
+ $(call append_enforce_rro_sources, \
+ $(my_register_name), \
+ false, \
+ $(LOCAL_FULL_MANIFEST_FILE), \
+ $(if $(LOCAL_EXPORT_PACKAGE_RESOURCES),true,false), \
+ $(LOCAL_SOONG_PRODUCT_RRO_DIRS), \
+ product \
+ )
+ endif
endif
ifdef LOCAL_PREBUILT_COVERAGE_ARCHIVE
@@ -264,3 +271,8 @@
endif
SOONG_ALREADY_CONV += $(LOCAL_MODULE)
+
+###########################################################
+## SBOM generation
+###########################################################
+include $(BUILD_SBOM_GEN)
diff --git a/core/soong_cc_rust_prebuilt.mk b/core/soong_cc_rust_prebuilt.mk
index 07e577a..05b4b6b 100644
--- a/core/soong_cc_rust_prebuilt.mk
+++ b/core/soong_cc_rust_prebuilt.mk
@@ -50,6 +50,28 @@
# to avoid checkbuilds making an extra copy of every module.
LOCAL_CHECKED_MODULE := $(LOCAL_PREBUILT_MODULE_FILE)
+my_check_same_vndk_variants :=
+same_vndk_variants_stamp :=
+ifeq ($(LOCAL_CHECK_SAME_VNDK_VARIANTS),true)
+ ifeq ($(filter hwaddress address, $(SANITIZE_TARGET)),)
+ ifneq ($(CLANG_COVERAGE),true)
+ # Do not compare VNDK variant for special cases e.g. coverage builds.
+ ifneq ($(SKIP_VNDK_VARIANTS_CHECK),true)
+ my_check_same_vndk_variants := true
+ same_vndk_variants_stamp := $(call local-intermediates-dir,,$(LOCAL_2ND_ARCH_VAR_PREFIX))/same_vndk_variants.timestamp
+ endif
+ endif
+ endif
+endif
+
+ifeq ($(my_check_same_vndk_variants),true)
+ # Add the timestamp to the CHECKED list so that `checkbuild` can run it.
+  # Note that because `checkbuild` doesn't check LOCAL_BUILT_MODULE for soong-built modules,
+  # adding the timestamp to LOCAL_BUILT_MODULE isn't enough. The timestamp is skipped when
+  # the vendor variant isn't used at all, and that may break in downstream trees.
+ LOCAL_ADDITIONAL_CHECKED_MODULE := $(same_vndk_variants_stamp)
+endif
+
#######################################
include $(BUILD_SYSTEM)/base_rules.mk
#######################################
@@ -125,21 +147,7 @@
endif
endif
-my_check_same_vndk_variants :=
-ifeq ($(LOCAL_CHECK_SAME_VNDK_VARIANTS),true)
- ifeq ($(filter hwaddress address, $(SANITIZE_TARGET)),)
- ifneq ($(CLANG_COVERAGE),true)
- # Do not compare VNDK variant for special cases e.g. coverage builds.
- ifneq ($(SKIP_VNDK_VARIANTS_CHECK),true)
- my_check_same_vndk_variants := true
- endif
- endif
- endif
-endif
-
ifeq ($(my_check_same_vndk_variants),true)
- same_vndk_variants_stamp := $(intermediates)/same_vndk_variants.timestamp
-
my_core_register_name := $(subst .vendor,,$(subst .product,,$(my_register_name)))
my_core_variant_files := $(call module-target-built-files,$(my_core_register_name))
my_core_shared_lib := $(sort $(filter %.so,$(my_core_variant_files)))
diff --git a/core/soong_config.mk b/core/soong_config.mk
index 16b7fae..a149e2a 100644
--- a/core/soong_config.mk
+++ b/core/soong_config.mk
@@ -9,8 +9,19 @@
endif
endif
+include $(BUILD_SYSTEM)/art_config.mk
include $(BUILD_SYSTEM)/dex_preopt_config.mk
+ifndef AFDO_PROFILES
+# Set AFDO_PROFILES
+-include vendor/google_data/pgo_profile/sampling/afdo_profiles.mk
+else
+$(error AFDO_PROFILES can only be set from soong_config.mk. For product-specific fdo_profiles, please use PRODUCT_AFDO_PROFILES)
+endif
+
+# PRODUCT_AFDO_PROFILES takes precedence over product-agnostic profiles in AFDO_PROFILES
+ALL_AFDO_PROFILES := $(PRODUCT_AFDO_PROFILES) $(AFDO_PROFILES)
+
ifeq ($(WRITE_SOONG_VARIABLES),true)
# Create soong.variables with copies of makefile settings. Runs every build,
@@ -30,10 +41,12 @@
$(call add_json_val, Platform_sdk_extension_version, $(PLATFORM_SDK_EXTENSION_VERSION))
$(call add_json_val, Platform_base_sdk_extension_version, $(PLATFORM_BASE_SDK_EXTENSION_VERSION))
$(call add_json_csv, Platform_version_active_codenames, $(PLATFORM_VERSION_ALL_CODENAMES))
+$(call add_json_csv, Platform_version_all_preview_codenames, $(PLATFORM_VERSION_ALL_PREVIEW_CODENAMES))
$(call add_json_str, Platform_security_patch, $(PLATFORM_SECURITY_PATCH))
$(call add_json_str, Platform_preview_sdk_version, $(PLATFORM_PREVIEW_SDK_VERSION))
$(call add_json_str, Platform_base_os, $(PLATFORM_BASE_OS))
$(call add_json_str, Platform_version_last_stable, $(PLATFORM_VERSION_LAST_STABLE))
+$(call add_json_str, Platform_version_known_codenames, $(PLATFORM_VERSION_KNOWN_CODENAMES))
$(call add_json_str, Platform_min_supported_target_sdk_version, $(PLATFORM_MIN_SUPPORTED_TARGET_SDK_VERSION))
@@ -93,6 +106,7 @@
$(call add_json_list, AAPTPrebuiltDPI, $(PRODUCT_AAPT_PREBUILT_DPI))
$(call add_json_str, DefaultAppCertificate, $(PRODUCT_DEFAULT_DEV_CERTIFICATE))
+$(call add_json_str, MainlineSepolicyDevCertificates, $(MAINLINE_SEPOLICY_DEV_CERTIFICATES))
$(call add_json_str, AppsDefaultVersionName, $(APPS_DEFAULT_VERSION_NAME))
@@ -106,6 +120,7 @@
$(call add_json_list, CFIExcludePaths, $(CFI_EXCLUDE_PATHS) $(PRODUCT_CFI_EXCLUDE_PATHS))
$(call add_json_list, CFIIncludePaths, $(CFI_INCLUDE_PATHS) $(PRODUCT_CFI_INCLUDE_PATHS))
$(call add_json_list, IntegerOverflowExcludePaths, $(INTEGER_OVERFLOW_EXCLUDE_PATHS) $(PRODUCT_INTEGER_OVERFLOW_EXCLUDE_PATHS))
+$(call add_json_list, HWASanIncludePaths, $(HWASAN_INCLUDE_PATHS) $(PRODUCT_HWASAN_INCLUDE_PATHS))
$(call add_json_list, MemtagHeapExcludePaths, $(MEMTAG_HEAP_EXCLUDE_PATHS) $(PRODUCT_MEMTAG_HEAP_EXCLUDE_PATHS))
$(call add_json_list, MemtagHeapAsyncIncludePaths, $(MEMTAG_HEAP_ASYNC_INCLUDE_PATHS) $(PRODUCT_MEMTAG_HEAP_ASYNC_INCLUDE_PATHS))
@@ -142,6 +157,7 @@
$(call add_json_bool, Malloc_zero_contents, $(call invert_bool,$(filter false,$(MALLOC_ZERO_CONTENTS))))
$(call add_json_bool, Malloc_pattern_fill_contents, $(MALLOC_PATTERN_FILL_CONTENTS))
$(call add_json_str, Override_rs_driver, $(OVERRIDE_RS_DRIVER))
+$(call add_json_str, DeviceMaxPageSizeSupported, $(TARGET_MAX_PAGE_SIZE_SUPPORTED))
$(call add_json_bool, UncompressPrivAppDex, $(call invert_bool,$(filter true,$(DONT_UNCOMPRESS_PRIV_APPS_DEXS))))
$(call add_json_list, ModulesLoadedByPrivilegedModules, $(PRODUCT_LOADED_BY_PRIVILEGED_MODULES))
@@ -170,6 +186,8 @@
$(call add_json_list, RecoverySnapshotDirsExcluded, $(RECOVERY_SNAPSHOT_DIRS_EXCLUDED))
$(call add_json_bool, HostFakeSnapshotEnabled, $(HOST_FAKE_SNAPSHOT_ENABLE))
+$(call add_json_bool, MultitreeUpdateMeta, $(filter true,$(TARGET_MULTITREE_UPDATE_META)))
+
$(call add_json_bool, Treble_linker_namespaces, $(filter true,$(PRODUCT_TREBLE_LINKER_NAMESPACES)))
$(call add_json_bool, Enforce_vintf_manifest, $(filter true,$(PRODUCT_ENFORCE_VINTF_MANIFEST)))
@@ -205,9 +223,8 @@
$(call add_json_list, BoardVendorDlkmSepolicyDirs, $(BOARD_VENDOR_DLKM_SEPOLICY_DIRS))
$(call add_json_list, BoardOdmDlkmSepolicyDirs, $(BOARD_ODM_DLKM_SEPOLICY_DIRS))
$(call add_json_list, BoardSystemDlkmSepolicyDirs, $(BOARD_SYSTEM_DLKM_SEPOLICY_DIRS))
-# TODO: BOARD_PLAT_* dirs only kept for compatibility reasons. Will be a hard error on API level 31
-$(call add_json_list, SystemExtPublicSepolicyDirs, $(SYSTEM_EXT_PUBLIC_SEPOLICY_DIRS) $(BOARD_PLAT_PUBLIC_SEPOLICY_DIR))
-$(call add_json_list, SystemExtPrivateSepolicyDirs, $(SYSTEM_EXT_PRIVATE_SEPOLICY_DIRS) $(BOARD_PLAT_PRIVATE_SEPOLICY_DIR))
+$(call add_json_list, SystemExtPublicSepolicyDirs, $(SYSTEM_EXT_PUBLIC_SEPOLICY_DIRS))
+$(call add_json_list, SystemExtPrivateSepolicyDirs, $(SYSTEM_EXT_PRIVATE_SEPOLICY_DIRS))
$(call add_json_list, BoardSepolicyM4Defs, $(BOARD_SEPOLICY_M4DEFS))
$(call add_json_str, BoardSepolicyVers, $(BOARD_SEPOLICY_VERS))
$(call add_json_str, SystemExtSepolicyPrebuiltApiDir, $(BOARD_SYSTEM_EXT_PREBUILT_DIR))
@@ -245,10 +262,10 @@
$(call add_json_list, MissingUsesLibraries, $(INTERNAL_PLATFORM_MISSING_USES_LIBRARIES))
$(call add_json_map, VendorVars)
-$(foreach namespace,$(SOONG_CONFIG_NAMESPACES),\
+$(foreach namespace,$(sort $(SOONG_CONFIG_NAMESPACES)),\
$(call add_json_map, $(namespace))\
- $(foreach key,$(SOONG_CONFIG_$(namespace)),\
- $(call add_json_str,$(key),$(SOONG_CONFIG_$(namespace)_$(key))))\
+ $(foreach key,$(sort $(SOONG_CONFIG_$(namespace))),\
+ $(call add_json_str,$(key),$(subst ",\",$(SOONG_CONFIG_$(namespace)_$(key)))))\
$(call end_json_map))
$(call end_json_map)
@@ -262,6 +279,10 @@
$(call add_json_bool, CompressedApex, $(filter true,$(PRODUCT_COMPRESSED_APEX)))
+ifndef APEX_BUILD_FOR_PRE_S_DEVICES
+$(call add_json_bool, TrimmedApex, $(filter true,$(PRODUCT_TRIMMED_APEX)))
+endif
+
$(call add_json_bool, BoardUsesRecoveryAsBoot, $(filter true,$(BOARD_USES_RECOVERY_AS_BOOT)))
$(call add_json_list, BoardKernelBinaries, $(BOARD_KERNEL_BINARIES))
@@ -272,8 +293,13 @@
$(call add_json_str, ShippingApiLevel, $(PRODUCT_SHIPPING_API_LEVEL))
+$(call add_json_bool, BuildBrokenClangProperty, $(filter true,$(BUILD_BROKEN_CLANG_PROPERTY)))
+$(call add_json_bool, BuildBrokenClangAsFlags, $(filter true,$(BUILD_BROKEN_CLANG_ASFLAGS)))
+$(call add_json_bool, BuildBrokenClangCFlags, $(filter true,$(BUILD_BROKEN_CLANG_CFLAGS)))
+$(call add_json_bool, BuildBrokenDepfile, $(filter true,$(BUILD_BROKEN_DEPFILE)))
$(call add_json_bool, BuildBrokenEnforceSyspropOwner, $(filter true,$(BUILD_BROKEN_ENFORCE_SYSPROP_OWNER)))
$(call add_json_bool, BuildBrokenTrebleSyspropNeverallow, $(filter true,$(BUILD_BROKEN_TREBLE_SYSPROP_NEVERALLOW)))
+$(call add_json_bool, BuildBrokenUsesSoongPython2Modules, $(filter true,$(BUILD_BROKEN_USES_SOONG_PYTHON2_MODULES)))
$(call add_json_bool, BuildBrokenVendorPropertyNamespace, $(filter true,$(BUILD_BROKEN_VENDOR_PROPERTY_NAMESPACE)))
$(call add_json_list, BuildBrokenInputDirModules, $(BUILD_BROKEN_INPUT_DIR_MODULES))
@@ -290,9 +316,16 @@
$(call add_json_bool, GenerateAidlNdkPlatformBackend, $(filter true,$(NEED_AIDL_NDK_PLATFORM_BACKEND)))
-$(call add_json_bool, ForceMultilibFirstOnDevice, $(filter true,$(FORCE_MULTILIB_FIRST_ON_DEVICE)))
+$(call add_json_bool, IgnorePrefer32OnDevice, $(filter true,$(IGNORE_PREFER32_ON_DEVICE)))
$(call add_json_list, IncludeTags, $(PRODUCT_INCLUDE_TAGS))
+$(call add_json_list, SourceRootDirs, $(PRODUCT_SOURCE_ROOT_DIRS))
+
+$(call add_json_list, AfdoProfiles, $(ALL_AFDO_PROFILES))
+
+$(call add_json_str, ProductManufacturer, $(PRODUCT_MANUFACTURER))
+$(call add_json_str, ProductBrand, $(PRODUCT_BRAND))
+$(call add_json_list, BuildVersionTags, $(BUILD_VERSION_TAGS))
$(call json_end)
diff --git a/core/sysprop.mk b/core/sysprop.mk
index 570702a..bd6f3d9 100644
--- a/core/sysprop.mk
+++ b/core/sysprop.mk
@@ -46,6 +46,10 @@
echo "ro.product.$(1).manufacturer=$(PRODUCT_MANUFACTURER)" >> $(2);\
echo "ro.product.$(1).model=$(PRODUCT_MODEL)" >> $(2);\
echo "ro.product.$(1).name=$(TARGET_PRODUCT)" >> $(2);\
+	# Attestation-specific properties for an AOSP/GSI build running on the device.
+ echo "ro.product.model_for_attestation=$(PRODUCT_MODEL_FOR_ATTESTATION)" >> $(2);\
+ echo "ro.product.brand_for_attestation=$(PRODUCT_BRAND_FOR_ATTESTATION)" >> $(2);\
+ echo "ro.product.name_for_attestation=$(PRODUCT_NAME_FOR_ATTESTATION)" >> $(2);\
)\
$(if $(filter true,$(ZYGOTE_FORCE_64)),\
$(if $(filter vendor,$(1)),\
@@ -137,7 +141,7 @@
fi;)
$(hide) echo "# end of file" >> $$@
-$(call declare-0p-target,$(2))
+$(call declare-1p-target,$(2))
endef
# -----------------------------------------------------------------
@@ -269,7 +273,6 @@
BUILD_USERNAME="$(BUILD_USERNAME)" \
BUILD_HOSTNAME="$(BUILD_HOSTNAME)" \
BUILD_NUMBER="$(BUILD_NUMBER_FROM_FILE)" \
- BOARD_BUILD_SYSTEM_ROOT_IMAGE="$(BOARD_BUILD_SYSTEM_ROOT_IMAGE)" \
BOARD_USE_VBMETA_DIGTEST_IN_FINGERPRINT="$(BOARD_USE_VBMETA_DIGTEST_IN_FINGERPRINT)" \
PLATFORM_VERSION="$(PLATFORM_VERSION)" \
PLATFORM_DISPLAY_VERSION="$(PLATFORM_DISPLAY_VERSION)" \
@@ -540,3 +543,19 @@
$(empty)))
$(eval $(call declare-1p-target,$(INSTALLED_RAMDISK_BUILD_PROP_TARGET)))
+
+ALL_INSTALLED_BUILD_PROP_FILES := \
+ $(INSTALLED_BUILD_PROP_TARGET) \
+ $(INSTALLED_VENDOR_BUILD_PROP_TARGET) \
+ $(INSTALLED_PRODUCT_BUILD_PROP_TARGET) \
+ $(INSTALLED_ODM_BUILD_PROP_TARGET) \
+ $(INSTALLED_VENDOR_DLKM_BUILD_PROP_TARGET) \
+ $(INSTALLED_ODM_DLKM_BUILD_PROP_TARGET) \
+ $(INSTALLED_SYSTEM_DLKM_BUILD_PROP_TARGET) \
+ $(INSTALLED_SYSTEM_EXT_BUILD_PROP_TARGET) \
+ $(INSTALLED_RAMDISK_BUILD_PROP_TARGET)
+
+# $1 installed file path, e.g. out/target/product/vsoc_x86_64/system/build.prop
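+# Usage sketch: $(call is-build-prop,$(INSTALLED_BUILD_PROP_TARGET)) expands to Y,
+# assuming that target is defined, since it is listed in ALL_INSTALLED_BUILD_PROP_FILES above.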
+define is-build-prop
+$(if $(findstring $1,$(ALL_INSTALLED_BUILD_PROP_FILES)),Y)
+endef
\ No newline at end of file
diff --git a/core/tasks/OWNERS b/core/tasks/OWNERS
deleted file mode 100644
index 594930d..0000000
--- a/core/tasks/OWNERS
+++ /dev/null
@@ -1 +0,0 @@
-per-file art-host-tests.mk = dshi@google.com,dsrbecky@google.com,jdesprez@google.com,rpl@google.com
diff --git a/core/tasks/README.dex_preopt_check.md b/core/tasks/README.dex_preopt_check.md
new file mode 100644
index 0000000..b0baa9e
--- /dev/null
+++ b/core/tasks/README.dex_preopt_check.md
@@ -0,0 +1,43 @@
+# `dex_preopt_check`
+
+`dex_preopt_check` is a build-time check to make sure that all system server
+jars are dexpreopted. When the check fails, you will see the following error
+message:
+
+```
+FAILED:
+build/make/core/tasks/dex_preopt_check.mk:13: warning: Missing compilation artifacts. Dexpreopting is not working for some system server jars
+Offending entries:
+```
+
+Possible causes are:
+
+1. There is an APEX/SDK mismatch. (E.g., the APEX is built from source while
+ the SDK is built from prebuilt.)
+
+1. The `systemserverclasspath_fragment` is not added as
+ `systemserverclasspath_fragments` of the corresponding `apex` module, or not
+ added as `exported_systemserverclasspath_fragments` of the corresponding
+ `prebuilt_apex`/`apex_set` module when building from prebuilt.
+
+1. The expected version of the system server java library is not preferred.
+ (E.g., the `java_import` module has `prefer: false` when building from
+ prebuilt.)
+
+1. Dexpreopting is disabled for the system server java library. This can be due
+ to various reasons including but not limited to:
+
+ - The java library has `dex_preopt: { enabled: false }` in the Android.bp
+ file.
+
+ - The java library is listed in `DEXPREOPT_DISABLED_MODULES` in a Makefile.
+
+ - The java library is missing `installable: true` in the Android.bp
+ file when building from source.
+
+ - Sanitizer is enabled.
+
+1. `PRODUCT_SYSTEM_SERVER_JARS`, `PRODUCT_APEX_SYSTEM_SERVER_JARS`,
+ `PRODUCT_STANDALONE_SYSTEM_SERVER_JARS`, or
+ `PRODUCT_APEX_STANDALONE_SYSTEM_SERVER_JARS` has an extra entry that is not
+ needed by the product.
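+
+As an illustration of cause 2, a minimal sketch of the expected wiring (module
+names are hypothetical and unrelated required properties are omitted). When
+building from source:
+
+```
+apex {
+    name: "com.android.foo",
+    systemserverclasspath_fragments: ["foo-systemserverclasspath-fragment"],
+}
+```
+
+and when building from prebuilt:
+
+```
+prebuilt_apex {
+    name: "com.android.foo",
+    exported_systemserverclasspath_fragments: ["foo-systemserverclasspath-fragment"],
+}
+```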
diff --git a/core/tasks/art-host-tests.mk b/core/tasks/art-host-tests.mk
index 2af1ded..ff9eb09 100644
--- a/core/tasks/art-host-tests.mk
+++ b/core/tasks/art-host-tests.mk
@@ -24,25 +24,55 @@
$(eval _cmf_src := $(word 1,$(_cmf_tuple))) \
$(_cmf_src)))
-$(art_host_tests_zip) : PRIVATE_HOST_SHARED_LIBS := $(my_host_shared_lib_for_art_host_tests)
+# Create an artifact to include a list of test config files in art-host-tests.
+art_host_tests_list_zip := $(PRODUCT_OUT)/art-host-tests_list.zip
+# Create an artifact to include all test config files in art-host-tests.
+art_host_tests_configs_zip := $(PRODUCT_OUT)/art-host-tests_configs.zip
+# Create an artifact to include all shared library files in art-host-tests.
+art_host_tests_host_shared_libs_zip := $(PRODUCT_OUT)/art-host-tests_host-shared-libs.zip
+$(art_host_tests_zip) : PRIVATE_HOST_SHARED_LIBS := $(my_host_shared_lib_for_art_host_tests)
+$(art_host_tests_zip) : PRIVATE_art_host_tests_list_zip := $(art_host_tests_list_zip)
+$(art_host_tests_zip) : PRIVATE_art_host_tests_configs_zip := $(art_host_tests_configs_zip)
+$(art_host_tests_zip) : PRIVATE_art_host_tests_host_shared_libs_zip := $(art_host_tests_host_shared_libs_zip)
+$(art_host_tests_zip) : .KATI_IMPLICIT_OUTPUTS := $(art_host_tests_list_zip) $(art_host_tests_configs_zip) $(art_host_tests_host_shared_libs_zip)
+$(art_host_tests_zip) : PRIVATE_INTERMEDIATES_DIR := $(intermediates_dir)
$(art_host_tests_zip) : $(COMPATIBILITY.art-host-tests.FILES) $(my_host_shared_lib_for_art_host_tests) $(SOONG_ZIP)
- echo $(sort $(COMPATIBILITY.art-host-tests.FILES)) | tr " " "\n" > $@.list
- grep $(HOST_OUT_TESTCASES) $@.list > $@-host.list || true
- $(hide) touch $@-host-libs.list
+ rm -rf $(PRIVATE_INTERMEDIATES_DIR)
+ rm -f $@ $(PRIVATE_art_host_tests_list_zip)
+ mkdir -p $(PRIVATE_INTERMEDIATES_DIR)
+ echo $(sort $(COMPATIBILITY.art-host-tests.FILES)) | tr " " "\n" > $(PRIVATE_INTERMEDIATES_DIR)/list
+ grep $(HOST_OUT_TESTCASES) $(PRIVATE_INTERMEDIATES_DIR)/list > $(PRIVATE_INTERMEDIATES_DIR)/host.list || true
+ $(hide) touch $(PRIVATE_INTERMEDIATES_DIR)/shared-libs.list
$(hide) for shared_lib in $(PRIVATE_HOST_SHARED_LIBS); do \
- echo $$shared_lib >> $@-host-libs.list; \
+ echo $$shared_lib >> $(PRIVATE_INTERMEDIATES_DIR)/shared-libs.list; \
done
- grep $(TARGET_OUT_TESTCASES) $@.list > $@-target.list || true
- $(hide) $(SOONG_ZIP) -d -o $@ -P host -C $(HOST_OUT) -l $@-host.list \
- -P target -C $(PRODUCT_OUT) -l $@-target.list \
- -P host/testcases -C $(HOST_OUT) -l $@-host-libs.list
- rm -f $@.list $@-host.list $@-target.list $@-host-libs.list
+ grep $(TARGET_OUT_TESTCASES) $(PRIVATE_INTERMEDIATES_DIR)/list > $(PRIVATE_INTERMEDIATES_DIR)/target.list || true
+ $(hide) $(SOONG_ZIP) -d -o $@ -P host -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/host.list \
+ -P target -C $(PRODUCT_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/target.list \
+ -P host/testcases -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/shared-libs.list
+ grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/host.list > $(PRIVATE_INTERMEDIATES_DIR)/host-test-configs.list || true
+ grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/target.list > $(PRIVATE_INTERMEDIATES_DIR)/target-test-configs.list || true
+ $(hide) $(SOONG_ZIP) -d -o $(PRIVATE_art_host_tests_configs_zip) \
+ -P host -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/host-test-configs.list \
+ -P target -C $(PRODUCT_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/target-test-configs.list
+ grep $(HOST_OUT) $(PRIVATE_INTERMEDIATES_DIR)/shared-libs.list > $(PRIVATE_INTERMEDIATES_DIR)/host-shared-libs.list || true
+ $(hide) $(SOONG_ZIP) -d -o $(PRIVATE_art_host_tests_host_shared_libs_zip) \
+ -P host -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/host-shared-libs.list
+ grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/host.list | sed s%$(HOST_OUT)%host%g > $(PRIVATE_INTERMEDIATES_DIR)/art-host-tests_list
+ grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/target.list | sed s%$(PRODUCT_OUT)%target%g >> $(PRIVATE_INTERMEDIATES_DIR)/art-host-tests_list
+ $(hide) $(SOONG_ZIP) -d -o $(PRIVATE_art_host_tests_list_zip) -C $(PRIVATE_INTERMEDIATES_DIR) -f $(PRIVATE_INTERMEDIATES_DIR)/art-host-tests_list
art-host-tests: $(art_host_tests_zip)
-$(call dist-for-goals, art-host-tests, $(art_host_tests_zip))
+$(call dist-for-goals, art-host-tests, $(art_host_tests_zip) $(art_host_tests_list_zip) $(art_host_tests_configs_zip) $(art_host_tests_host_shared_libs_zip))
$(call declare-1p-container,$(art_host_tests_zip),)
$(call declare-container-license-deps,$(art_host_tests_zip),$(COMPATIBILITY.art-host-tests.FILES) $(my_host_shared_lib_for_art_host_tests),$(PRODUCT_OUT)/:/)
tests: art-host-tests
+
+intermediates_dir :=
+art_host_tests_zip :=
+art_host_tests_list_zip :=
+art_host_tests_configs_zip :=
+art_host_tests_host_shared_libs_zip :=
diff --git a/core/tasks/automotive-general-tests.mk b/core/tasks/automotive-general-tests.mk
new file mode 100644
index 0000000..44b62be
--- /dev/null
+++ b/core/tasks/automotive-general-tests.mk
@@ -0,0 +1,89 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+.PHONY: automotive-general-tests
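+# Usage sketch (assumed invocation): `m automotive-general-tests dist` builds
+# automotive-general-tests.zip and, via the dist-for-goals call below, also
+# dists the companion _list/_configs/_host-shared-libs zips.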
+
+automotive_general_tests_tools := \
+ $(HOST_OUT_JAVA_LIBRARIES)/cts-tradefed.jar \
+ $(HOST_OUT_JAVA_LIBRARIES)/compatibility-host-util.jar \
+ $(HOST_OUT_JAVA_LIBRARIES)/vts-tradefed.jar \
+
+intermediates_dir := $(call intermediates-dir-for,PACKAGING,automotive-general-tests)
+automotive_general_tests_zip := $(PRODUCT_OUT)/automotive-general-tests.zip
+# Create an artifact to include a list of test config files in automotive-general-tests.
+automotive_general_tests_list_zip := $(PRODUCT_OUT)/automotive-general-tests_list.zip
+
+# Filter out the entries that automotive-general-tests shares with automotive-tests'
+# HOST_SHARED_LIBRARY.FILES, to avoid warnings about overriding commands.
+my_host_shared_lib_for_automotive_general_tests := \
+ $(foreach m,$(filter $(COMPATIBILITY.automotive-tests.HOST_SHARED_LIBRARY.FILES),\
+ $(COMPATIBILITY.automotive-general-tests.HOST_SHARED_LIBRARY.FILES)),$(call word-colon,2,$(m)))
+my_automotive_general_tests_shared_lib_files := \
+ $(filter-out $(COMPATIBILITY.automotive-tests.HOST_SHARED_LIBRARY.FILES),\
+ $(COMPATIBILITY.automotive-general-tests.HOST_SHARED_LIBRARY.FILES))
+
+my_host_shared_lib_for_automotive_general_tests += $(call copy-many-files,$(my_automotive_general_tests_shared_lib_files))
+
+# Create an artifact to include all test config files in automotive-general-tests.
+automotive_general_tests_configs_zip := $(PRODUCT_OUT)/automotive-general-tests_configs.zip
+# Create an artifact to include all shared library files in automotive-general-tests.
+automotive_general_tests_host_shared_libs_zip := $(PRODUCT_OUT)/automotive-general-tests_host-shared-libs.zip
+
+$(automotive_general_tests_zip) : PRIVATE_automotive_general_tests_list_zip := $(automotive_general_tests_list_zip)
+$(automotive_general_tests_zip) : .KATI_IMPLICIT_OUTPUTS := $(automotive_general_tests_list_zip) $(automotive_general_tests_configs_zip) $(automotive_general_tests_host_shared_libs_zip)
+$(automotive_general_tests_zip) : PRIVATE_TOOLS := $(automotive_general_tests_tools)
+$(automotive_general_tests_zip) : PRIVATE_INTERMEDIATES_DIR := $(intermediates_dir)
+$(automotive_general_tests_zip) : PRIVATE_HOST_SHARED_LIBS := $(my_host_shared_lib_for_automotive_general_tests)
+$(automotive_general_tests_zip) : PRIVATE_automotive_general_tests_configs_zip := $(automotive_general_tests_configs_zip)
+$(automotive_general_tests_zip) : PRIVATE_general_host_shared_libs_zip := $(automotive_general_tests_host_shared_libs_zip)
+$(automotive_general_tests_zip) : $(COMPATIBILITY.automotive-general-tests.FILES) $(automotive_general_tests_tools) $(my_host_shared_lib_for_automotive_general_tests) $(SOONG_ZIP)
+ rm -rf $(PRIVATE_INTERMEDIATES_DIR)
+ rm -f $@ $(PRIVATE_automotive_general_tests_list_zip)
+ mkdir -p $(PRIVATE_INTERMEDIATES_DIR) $(PRIVATE_INTERMEDIATES_DIR)/tools
+ echo $(sort $(COMPATIBILITY.automotive-general-tests.FILES)) | tr " " "\n" > $(PRIVATE_INTERMEDIATES_DIR)/list
+ grep $(HOST_OUT_TESTCASES) $(PRIVATE_INTERMEDIATES_DIR)/list > $(PRIVATE_INTERMEDIATES_DIR)/host.list || true
+ grep $(TARGET_OUT_TESTCASES) $(PRIVATE_INTERMEDIATES_DIR)/list > $(PRIVATE_INTERMEDIATES_DIR)/target.list || true
+ grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/host.list > $(PRIVATE_INTERMEDIATES_DIR)/host-test-configs.list || true
+ grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/target.list > $(PRIVATE_INTERMEDIATES_DIR)/target-test-configs.list || true
+ $(hide) for shared_lib in $(PRIVATE_HOST_SHARED_LIBS); do \
+ echo $$shared_lib >> $(PRIVATE_INTERMEDIATES_DIR)/host.list; \
+ echo $$shared_lib >> $(PRIVATE_INTERMEDIATES_DIR)/shared-libs.list; \
+ done
+ grep $(HOST_OUT_TESTCASES) $(PRIVATE_INTERMEDIATES_DIR)/shared-libs.list > $(PRIVATE_INTERMEDIATES_DIR)/host-shared-libs.list || true
+ cp -fp $(PRIVATE_TOOLS) $(PRIVATE_INTERMEDIATES_DIR)/tools/
+ $(SOONG_ZIP) -d -o $@ \
+ -P host -C $(PRIVATE_INTERMEDIATES_DIR) -D $(PRIVATE_INTERMEDIATES_DIR)/tools \
+ -P host -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/host.list \
+ -P target -C $(PRODUCT_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/target.list
+ $(SOONG_ZIP) -d -o $(PRIVATE_automotive_general_tests_configs_zip) \
+ -P host -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/host-test-configs.list \
+ -P target -C $(PRODUCT_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/target-test-configs.list
+ $(SOONG_ZIP) -d -o $(PRIVATE_general_host_shared_libs_zip) \
+ -P host -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/host-shared-libs.list
+ grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/host.list | sed s%$(HOST_OUT)%host%g > $(PRIVATE_INTERMEDIATES_DIR)/automotive-general-tests_list
+ grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/target.list | sed s%$(PRODUCT_OUT)%target%g >> $(PRIVATE_INTERMEDIATES_DIR)/automotive-general-tests_list
+ $(SOONG_ZIP) -d -o $(PRIVATE_automotive_general_tests_list_zip) -C $(PRIVATE_INTERMEDIATES_DIR) -f $(PRIVATE_INTERMEDIATES_DIR)/automotive-general-tests_list
+
+automotive-general-tests: $(automotive_general_tests_zip)
+$(call dist-for-goals, automotive-general-tests, $(automotive_general_tests_zip) $(automotive_general_tests_list_zip) $(automotive_general_tests_configs_zip) $(automotive_general_tests_host_shared_libs_zip))
+
+$(call declare-1p-container,$(automotive_general_tests_zip),)
+$(call declare-container-license-deps,$(automotive_general_tests_zip),$(COMPATIBILITY.automotive-general-tests.FILES) $(automotive_general_tests_tools) $(my_host_shared_lib_for_automotive_general_tests),$(PRODUCT_OUT)/:/)
+
+intermediates_dir :=
+automotive_general_tests_tools :=
+automotive_general_tests_zip :=
+automotive_general_tests_list_zip :=
+automotive_general_tests_configs_zip :=
+automotive_general_tests_host_shared_libs_zip :=
diff --git a/core/tasks/automotive-tests.mk b/core/tasks/automotive-tests.mk
new file mode 100644
index 0000000..da6af6b
--- /dev/null
+++ b/core/tasks/automotive-tests.mk
@@ -0,0 +1,61 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+.PHONY: automotive-tests
+
+automotive-tests-zip := $(PRODUCT_OUT)/automotive-tests.zip
+# Create an artifact to include a list of test config files in automotive-tests.
+automotive-tests-list-zip := $(PRODUCT_OUT)/automotive-tests_list.zip
+# Create an artifact to include all test config files in automotive-tests.
+automotive-tests-configs-zip := $(PRODUCT_OUT)/automotive-tests_configs.zip
+my_host_shared_lib_for_automotive_tests := $(call copy-many-files,$(COMPATIBILITY.automotive-tests.HOST_SHARED_LIBRARY.FILES))
+automotive_tests_host_shared_libs_zip := $(PRODUCT_OUT)/automotive-tests_host-shared-libs.zip
+
+$(automotive-tests-zip) : .KATI_IMPLICIT_OUTPUTS := $(automotive-tests-list-zip) $(automotive-tests-configs-zip) $(automotive_tests_host_shared_libs_zip)
+$(automotive-tests-zip) : PRIVATE_automotive_tests_list := $(PRODUCT_OUT)/automotive-tests_list
+$(automotive-tests-zip) : PRIVATE_HOST_SHARED_LIBS := $(my_host_shared_lib_for_automotive_tests)
+$(automotive-tests-zip) : PRIVATE_automotive_host_shared_libs_zip := $(automotive_tests_host_shared_libs_zip)
+$(automotive-tests-zip) : $(COMPATIBILITY.automotive-tests.FILES) $(my_host_shared_lib_for_automotive_tests) $(SOONG_ZIP)
+ rm -f $@-shared-libs.list
+ echo $(sort $(COMPATIBILITY.automotive-tests.FILES)) | tr " " "\n" > $@.list
+ grep $(HOST_OUT_TESTCASES) $@.list > $@-host.list || true
+ grep -e .*\\.config$$ $@-host.list > $@-host-test-configs.list || true
+ $(hide) for shared_lib in $(PRIVATE_HOST_SHARED_LIBS); do \
+ echo $$shared_lib >> $@-host.list; \
+ echo $$shared_lib >> $@-shared-libs.list; \
+ done
+ grep $(HOST_OUT_TESTCASES) $@-shared-libs.list > $@-host-shared-libs.list || true
+ grep $(TARGET_OUT_TESTCASES) $@.list > $@-target.list || true
+ grep -e .*\\.config$$ $@-target.list > $@-target-test-configs.list || true
+ $(hide) $(SOONG_ZIP) -d -o $@ -P host -C $(HOST_OUT) -l $@-host.list -P target -C $(PRODUCT_OUT) -l $@-target.list
+ $(hide) $(SOONG_ZIP) -d -o $(automotive-tests-configs-zip) \
+ -P host -C $(HOST_OUT) -l $@-host-test-configs.list \
+ -P target -C $(PRODUCT_OUT) -l $@-target-test-configs.list
+ $(SOONG_ZIP) -d -o $(PRIVATE_automotive_host_shared_libs_zip) \
+ -P host -C $(HOST_OUT) -l $@-host-shared-libs.list
+ rm -f $(PRIVATE_automotive_tests_list)
+ $(hide) grep -e .*\\.config$$ $@-host.list | sed s%$(HOST_OUT)%host%g > $(PRIVATE_automotive_tests_list)
+ $(hide) grep -e .*\\.config$$ $@-target.list | sed s%$(PRODUCT_OUT)%target%g >> $(PRIVATE_automotive_tests_list)
+ $(hide) $(SOONG_ZIP) -d -o $(automotive-tests-list-zip) -C $(dir $@) -f $(PRIVATE_automotive_tests_list)
+ rm -f $@.list $@-host.list $@-target.list $@-host-test-configs.list $@-target-test-configs.list \
+ $@-shared-libs.list $@-host-shared-libs.list $(PRIVATE_automotive_tests_list)
+
+automotive-tests: $(automotive-tests-zip)
+$(call dist-for-goals, automotive-tests, $(automotive-tests-zip) $(automotive-tests-list-zip) $(automotive-tests-configs-zip) $(automotive_tests_host_shared_libs_zip))
+
+$(call declare-1p-container,$(automotive-tests-zip),)
+$(call declare-container-license-deps,$(automotive-tests-zip),$(COMPATIBILITY.automotive-tests.FILES) $(my_host_shared_lib_for_automotive_tests),$(PRODUCT_OUT)/:/)
+
+tests: automotive-tests
diff --git a/core/tasks/build_custom_images.mk b/core/tasks/build_custom_images.mk
index c9b07da..680ad11 100644
--- a/core/tasks/build_custom_images.mk
+++ b/core/tasks/build_custom_images.mk
@@ -62,8 +62,6 @@
CUSTOM_IMAGE_MODULES \
CUSTOM_IMAGE_COPY_FILES \
CUSTOM_IMAGE_SELINUX \
- CUSTOM_IMAGE_SUPPORT_VERITY \
- CUSTOM_IMAGE_SUPPORT_VERITY_FEC \
CUSTOM_IMAGE_VERITY_BLOCK_DEVICE \
CUSTOM_IMAGE_AVB_HASH_ENABLE \
CUSTOM_IMAGE_AVB_ADD_HASH_FOOTER_ARGS \
diff --git a/core/tasks/cts.mk b/core/tasks/cts.mk
index c282268..674afb5 100644
--- a/core/tasks/cts.mk
+++ b/core/tasks/cts.mk
@@ -16,6 +16,8 @@
test_suite_tradefed := cts-tradefed
test_suite_dynamic_config := cts/tools/cts-tradefed/DynamicConfig.xml
test_suite_readme := cts/tools/cts-tradefed/README
+test_suite_tools := $(HOST_OUT_JAVA_LIBRARIES)/ats_console_deploy.jar \
+ $(HOST_OUT_JAVA_LIBRARIES)/ats_olc_server_local_mode_deploy.jar
$(call declare-1p-target,$(test_suite_dynamic_config),cts)
$(call declare-1p-target,$(test_suite_readme),cts)
@@ -211,7 +213,7 @@
# 3 - Format of the report
define generate-coverage-report-cts
$(hide) mkdir -p $(dir $@)
- $(hide) $(PRIVATE_CTS_API_COVERAGE_EXE) -d $(PRIVATE_DEXDEPS_EXE) -a $(PRIVATE_API_XML_DESC) -n $(PRIVATE_NAPI_XML_DESC) -f $(3) -o $@ $(2)
+ $(hide) $(PRIVATE_CTS_API_COVERAGE_EXE) -j 8 -d $(PRIVATE_DEXDEPS_EXE) -a $(PRIVATE_API_XML_DESC) -n $(PRIVATE_NAPI_XML_DESC) -f $(3) -o $@ $(2)
@ echo $(1): file://$$(cd $(dir $@); pwd)/$(notdir $@)
endef
diff --git a/core/tasks/device-tests.mk b/core/tasks/device-tests.mk
index 3196f52..4167a7e 100644
--- a/core/tasks/device-tests.mk
+++ b/core/tasks/device-tests.mk
@@ -39,7 +39,7 @@
grep $(HOST_OUT_TESTCASES) $@-shared-libs.list > $@-host-shared-libs.list || true
grep $(TARGET_OUT_TESTCASES) $@.list > $@-target.list || true
grep -e .*\\.config$$ $@-target.list > $@-target-test-configs.list || true
- $(hide) $(SOONG_ZIP) -d -o $@ -P host -C $(HOST_OUT) -l $@-host.list -P target -C $(PRODUCT_OUT) -l $@-target.list
+ $(hide) $(SOONG_ZIP) -d -o $@ -P host -C $(HOST_OUT) -l $@-host.list -P target -C $(PRODUCT_OUT) -l $@-target.list -sha256
$(hide) $(SOONG_ZIP) -d -o $(device-tests-configs-zip) \
-P host -C $(HOST_OUT) -l $@-host-test-configs.list \
-P target -C $(PRODUCT_OUT) -l $@-target-test-configs.list
diff --git a/core/tasks/dex_preopt_check.mk b/core/tasks/dex_preopt_check.mk
index bfa1ec5..5fd60c8 100644
--- a/core/tasks/dex_preopt_check.mk
+++ b/core/tasks/dex_preopt_check.mk
@@ -12,7 +12,8 @@
ifneq (,$(filter services,$(PRODUCT_PACKAGES)))
$(call maybe-print-list-and-error,\
$(filter-out $(ALL_DEFAULT_INSTALLED_MODULES),$(DEXPREOPT_SYSTEMSERVER_ARTIFACTS)),\
- Missing compilation artifacts. Dexpreopting is not working for some system server jars \
+ Missing compilation artifacts. Dexpreopting is not working for some system server jars. See \
+ https://cs.android.com/android/platform/superproject/+/master:build/make/core/tasks/README.dex_preopt_check.md \
)
endif
endif
diff --git a/core/tasks/general-tests.mk b/core/tasks/general-tests.mk
index 5252394..8dbc76f 100644
--- a/core/tasks/general-tests.mk
+++ b/core/tasks/general-tests.mk
@@ -42,16 +42,24 @@
# Copy kernel test modules to testcases directories
include $(BUILD_SYSTEM)/tasks/tools/vts-kernel-tests.mk
-kernel_test_copy_pairs := \
- $(call target-native-copy-pairs,$(kernel_test_modules),$(kernel_test_host_out))
-copy_kernel_tests := $(call copy-many-files,$(kernel_test_copy_pairs))
+ltp_copy_pairs := \
+ $(call target-native-copy-pairs,$(kernel_ltp_modules),$(kernel_ltp_host_out))
+kselftest_copy_pairs := \
+ $(call target-native-copy-pairs,$(kernel_kselftest_modules),$(kernel_kselftest_host_out))
+copy_ltp_tests := $(call copy-many-files,$(ltp_copy_pairs))
+copy_kselftest_tests := $(call copy-many-files,$(kselftest_copy_pairs))
-# PHONY target to be used to build and test `vts_kernel_tests` without building full vts
-.PHONY: vts_kernel_tests
-vts_kernel_tests: $(copy_kernel_tests)
+# PHONY targets to build and test `vts_kernel_ltp_tests` and `vts_kernel_kselftest_tests` without building full vts
+.PHONY: vts_kernel_ltp_tests
+vts_kernel_ltp_tests: $(copy_ltp_tests)
-$(general_tests_zip) : $(copy_kernel_tests)
-$(general_tests_zip) : PRIVATE_KERNEL_TEST_HOST_OUT := $(kernel_test_host_out)
+.PHONY: vts_kernel_kselftest_tests
+vts_kernel_kselftest_tests: $(copy_kselftest_tests)
+
+$(general_tests_zip) : $(copy_ltp_tests)
+$(general_tests_zip) : $(copy_kselftest_tests)
+$(general_tests_zip) : PRIVATE_KERNEL_LTP_HOST_OUT := $(kernel_ltp_host_out)
+$(general_tests_zip) : PRIVATE_KERNEL_KSELFTEST_HOST_OUT := $(kernel_kselftest_host_out)
$(general_tests_zip) : PRIVATE_general_tests_list_zip := $(general_tests_list_zip)
$(general_tests_zip) : .KATI_IMPLICIT_OUTPUTS := $(general_tests_list_zip) $(general_tests_configs_zip) $(general_tests_host_shared_libs_zip)
$(general_tests_zip) : PRIVATE_TOOLS := $(general_tests_tools)
@@ -64,7 +72,8 @@
rm -f $@ $(PRIVATE_general_tests_list_zip)
mkdir -p $(PRIVATE_INTERMEDIATES_DIR) $(PRIVATE_INTERMEDIATES_DIR)/tools
echo $(sort $(COMPATIBILITY.general-tests.FILES)) | tr " " "\n" > $(PRIVATE_INTERMEDIATES_DIR)/list
- find $(PRIVATE_KERNEL_TEST_HOST_OUT) >> $(PRIVATE_INTERMEDIATES_DIR)/list
+ find $(PRIVATE_KERNEL_LTP_HOST_OUT) >> $(PRIVATE_INTERMEDIATES_DIR)/list
+ find $(PRIVATE_KERNEL_KSELFTEST_HOST_OUT) >> $(PRIVATE_INTERMEDIATES_DIR)/list
grep $(HOST_OUT_TESTCASES) $(PRIVATE_INTERMEDIATES_DIR)/list > $(PRIVATE_INTERMEDIATES_DIR)/host.list || true
grep $(TARGET_OUT_TESTCASES) $(PRIVATE_INTERMEDIATES_DIR)/list > $(PRIVATE_INTERMEDIATES_DIR)/target.list || true
grep -e .*\\.config$$ $(PRIVATE_INTERMEDIATES_DIR)/host.list > $(PRIVATE_INTERMEDIATES_DIR)/host-test-configs.list || true
@@ -78,7 +87,8 @@
$(SOONG_ZIP) -d -o $@ \
-P host -C $(PRIVATE_INTERMEDIATES_DIR) -D $(PRIVATE_INTERMEDIATES_DIR)/tools \
-P host -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/host.list \
- -P target -C $(PRODUCT_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/target.list
+ -P target -C $(PRODUCT_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/target.list \
+ -sha256
$(SOONG_ZIP) -d -o $(PRIVATE_general_tests_configs_zip) \
-P host -C $(HOST_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/host-test-configs.list \
-P target -C $(PRODUCT_OUT) -l $(PRIVATE_INTERMEDIATES_DIR)/target-test-configs.list
diff --git a/core/tasks/host-unit-tests.mk b/core/tasks/host-unit-tests.mk
index 4453c29..733a2e2 100644
--- a/core/tasks/host-unit-tests.mk
+++ b/core/tasks/host-unit-tests.mk
@@ -39,9 +39,9 @@
echo $$shared_lib >> $@-host-libs.list; \
done
grep $(TARGET_OUT_TESTCASES) $@.list > $@-target.list || true
- $(hide) $(SOONG_ZIP) -L 0 -d -o $@ -P host -C $(HOST_OUT) -l $@-host.list \
+ $(hide) $(SOONG_ZIP) -d -o $@ -P host -C $(HOST_OUT) -l $@-host.list \
-P target -C $(PRODUCT_OUT) -l $@-target.list \
- -P host/testcases -C $(HOST_OUT) -l $@-host-libs.list
+ -P host/testcases -C $(HOST_OUT) -l $@-host-libs.list -sha256
rm -f $@.list $@-host.list $@-target.list $@-host-libs.list
host-unit-tests: $(host_unit_tests_zip)
diff --git a/core/tasks/module-info.mk b/core/tasks/module-info.mk
index 8097535..e83d408 100644
--- a/core/tasks/module-info.mk
+++ b/core/tasks/module-info.mk
@@ -24,10 +24,14 @@
'"classes_jar": [$(foreach w,$(sort $(ALL_MODULES.$(m).CLASSES_JAR)),"$(w)", )], ' \
'"test_mainline_modules": [$(foreach w,$(sort $(ALL_MODULES.$(m).TEST_MAINLINE_MODULES)),"$(w)", )], ' \
'"is_unit_test": "$(ALL_MODULES.$(m).IS_UNIT_TEST)", ' \
+ '"test_options_tags": [$(foreach w,$(sort $(ALL_MODULES.$(m).TEST_OPTIONS_TAGS)),"$(w)", )], ' \
'"data": [$(foreach w,$(sort $(ALL_MODULES.$(m).TEST_DATA)),"$(w)", )], ' \
'"runtime_dependencies": [$(foreach w,$(sort $(ALL_MODULES.$(m).LOCAL_RUNTIME_LIBRARIES)),"$(w)", )], ' \
+ '"static_dependencies": [$(foreach w,$(sort $(ALL_MODULES.$(m).LOCAL_STATIC_LIBRARIES)),"$(w)", )], ' \
'"data_dependencies": [$(foreach w,$(sort $(ALL_MODULES.$(m).TEST_DATA_BINS)),"$(w)", )], ' \
'"supported_variants": [$(foreach w,$(sort $(ALL_MODULES.$(m).SUPPORTED_VARIANTS)),"$(w)", )], ' \
+ '"host_dependencies": [$(foreach w,$(sort $(ALL_MODULES.$(m).HOST_REQUIRED_FROM_TARGET)),"$(w)", )], ' \
+ '"target_dependencies": [$(foreach w,$(sort $(ALL_MODULES.$(m).TARGET_REQUIRED_FROM_HOST)),"$(w)", )], ' \
'},\n' \
) | sed -e 's/, *\]/]/g' -e 's/, *\}/ }/g' -e '$$s/,$$//' >> $@
$(hide) echo '}' >> $@
@@ -37,3 +41,9 @@
$(call dist-for-goals, general-tests, $(MODULE_INFO_JSON))
$(call dist-for-goals, droidcore-unbundled, $(MODULE_INFO_JSON))
+
+# On every build, generate an all_modules.txt file to be used for autocompleting
+# the m command. After timing this using $(shell date +"%s.%3N"), it only adds
+# 0.01 seconds to the internal master build, and will only rerun on builds that
+# rerun kati.
+$(file >$(PRODUCT_OUT)/all_modules.txt,$(subst $(space),$(newline),$(ALL_MODULES)))
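+# (Consumers such as the m autocompletion can read this file directly: it is
+# plain text with one module name per line.)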
diff --git a/core/tasks/multitree.mk b/core/tasks/multitree.mk
new file mode 100644
index 0000000..225477e
--- /dev/null
+++ b/core/tasks/multitree.mk
@@ -0,0 +1,16 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+.PHONY: update-meta
+update-meta: $(SOONG_MULTITREE_METADATA)
diff --git a/core/tasks/owners.mk b/core/tasks/owners.mk
index 806b8ee..29f3c44 100644
--- a/core/tasks/owners.mk
+++ b/core/tasks/owners.mk
@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-# Create an artifact to include TEST_MAPPING files in source tree.
+# Create an artifact to include OWNERS files in source tree.
.PHONY: owners
diff --git a/core/tasks/test_mapping.mk b/core/tasks/test_mapping.mk
index 0b0c93c..eb2a585 100644
--- a/core/tasks/test_mapping.mk
+++ b/core/tasks/test_mapping.mk
@@ -21,17 +21,17 @@
intermediates := $(call intermediates-dir-for,PACKAGING,test_mapping)
test_mappings_zip := $(intermediates)/test_mappings.zip
test_mapping_list := $(OUT_DIR)/.module_paths/TEST_MAPPING.list
-test_mappings := $(file <$(test_mapping_list))
-$(test_mappings_zip) : PRIVATE_test_mappings := $(subst $(newline),\n,$(test_mappings))
$(test_mappings_zip) : PRIVATE_all_disabled_presubmit_tests := $(ALL_DISABLED_PRESUBMIT_TESTS)
+$(test_mappings_zip) : PRIVATE_test_mapping_list := $(test_mapping_list)
-$(test_mappings_zip) : $(test_mappings) $(SOONG_ZIP)
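+# The recipe below writes $(test_mappings_zip).d listing every TEST_MAPPING
+# file that went into the zip; kati re-reads it via .KATI_DEPFILE, so the zip
+# is rebuilt whenever any TEST_MAPPING file changes.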
+$(test_mappings_zip) : .KATI_DEPFILE := $(test_mappings_zip).d
+$(test_mappings_zip) : $(test_mapping_list) $(SOONG_ZIP)
@echo "Building artifact to include TEST_MAPPING files and tests to skip in presubmit check."
rm -rf $@ $(dir $@)/disabled-presubmit-tests
echo $(sort $(PRIVATE_all_disabled_presubmit_tests)) | tr " " "\n" > $(dir $@)/disabled-presubmit-tests
- echo -e "$(PRIVATE_test_mappings)" > $@.list
- $(SOONG_ZIP) -o $@ -C . -l $@.list -C $(dir $@) -f $(dir $@)/disabled-presubmit-tests
- rm -f $@.list $(dir $@)/disabled-presubmit-tests
+ $(SOONG_ZIP) -o $@ -C . -l $(PRIVATE_test_mapping_list) -C $(dir $@) -f $(dir $@)/disabled-presubmit-tests
+ echo "$@ : " $$(cat $(PRIVATE_test_mapping_list)) > $@.d
+ rm -f $(dir $@)/disabled-presubmit-tests
test_mapping : $(test_mappings_zip)
diff --git a/core/tasks/tools/build_custom_image.mk b/core/tasks/tools/build_custom_image.mk
index f9ae2c1..2626120 100644
--- a/core/tasks/tools/build_custom_image.mk
+++ b/core/tasks/tools/build_custom_image.mk
@@ -91,9 +91,6 @@
$(my_built_custom_image): PRIVATE_COPY_PAIRS := $(my_copy_pairs)
$(my_built_custom_image): PRIVATE_PICKUP_FILES := $(my_pickup_files)
$(my_built_custom_image): PRIVATE_SELINUX := $(CUSTOM_IMAGE_SELINUX)
-$(my_built_custom_image): PRIVATE_SUPPORT_VERITY := $(CUSTOM_IMAGE_SUPPORT_VERITY)
-$(my_built_custom_image): PRIVATE_SUPPORT_VERITY_FEC := $(CUSTOM_IMAGE_SUPPORT_VERITY_FEC)
-$(my_built_custom_image): PRIVATE_VERITY_KEY := $(PRODUCT_VERITY_SIGNING_KEY)
$(my_built_custom_image): PRIVATE_VERITY_BLOCK_DEVICE := $(CUSTOM_IMAGE_VERITY_BLOCK_DEVICE)
$(my_built_custom_image): PRIVATE_DICT_FILE := $(CUSTOM_IMAGE_DICT_FILE)
$(my_built_custom_image): PRIVATE_AVB_AVBTOOL := $(AVBTOOL)
@@ -108,9 +105,6 @@
else ifneq (,$(filter true, $(CUSTOM_IMAGE_AVB_HASH_ENABLE) $(CUSTOM_IMAGE_AVB_HASHTREE_ENABLE)))
$(error Cannot set both CUSTOM_IMAGE_AVB_HASH_ENABLE and CUSTOM_IMAGE_AVB_HASHTREE_ENABLE to true)
endif
-ifeq (true,$(CUSTOM_IMAGE_SUPPORT_VERITY_FEC))
- $(my_built_custom_image): $(FEC)
-endif
$(my_built_custom_image): $(INTERNAL_USERIMAGES_DEPS) $(my_built_modules) $(my_image_copy_files) $(my_custom_image_modules_dep) \
$(CUSTOM_IMAGE_DICT_FILE)
@echo "Build image $@"
@@ -130,13 +124,6 @@
$(hide) echo "partition_size=$(PRIVATE_PARTITION_SIZE)" >> $(PRIVATE_INTERMEDIATES)/image_info.txt
$(hide) echo "ext_mkuserimg=$(notdir $(MKEXTUSERIMG))" >> $(PRIVATE_INTERMEDIATES)/image_info.txt
$(if $(PRIVATE_SELINUX),$(hide) echo "selinux_fc=$(SELINUX_FC)" >> $(PRIVATE_INTERMEDIATES)/image_info.txt)
- $(if $(PRIVATE_SUPPORT_VERITY),\
- $(hide) echo "verity=$(PRIVATE_SUPPORT_VERITY)" >> $(PRIVATE_INTERMEDIATES)/image_info.txt;\
- echo "verity_key=$(PRIVATE_VERITY_KEY)" >> $(PRIVATE_INTERMEDIATES)/image_info.txt;\
- echo "verity_signer_cmd=$(VERITY_SIGNER)" >> $(PRIVATE_INTERMEDIATES)/image_info.txt;\
- echo "verity_block_device=$(PRIVATE_VERITY_BLOCK_DEVICE)" >> $(PRIVATE_INTERMEDIATES)/image_info.txt)
- $(if $(PRIVATE_SUPPORT_VERITY_FEC),\
- $(hide) echo "verity_fec=$(PRIVATE_SUPPORT_VERITY_FEC)" >> $(PRIVATE_INTERMEDIATES)/image_info.txt)
$(if $(filter eng, $(TARGET_BUILD_VARIANT)),$(hide) echo "verity_disable=true" >> $(PRIVATE_INTERMEDIATES)/image_info.txt)
$(hide) echo "avb_avbtool=$(PRIVATE_AVB_AVBTOOL)" >> $(PRIVATE_INTERMEDIATES)/image_info.txt
$(if $(PRIVATE_AVB_KEY_PATH),\
diff --git a/core/tasks/tools/compatibility.mk b/core/tasks/tools/compatibility.mk
index cfae490..b42476d 100644
--- a/core/tasks/tools/compatibility.mk
+++ b/core/tasks/tools/compatibility.mk
@@ -50,11 +50,22 @@
$(test_suite_jdk): PRIVATE_SUBDIR := $(test_suite_subdir)
$(test_suite_jdk): $(shell find $(test_suite_jdk_dir) -type f | sort)
$(test_suite_jdk): $(SOONG_ZIP)
- $(SOONG_ZIP) -o $@ -P $(PRIVATE_SUBDIR)/jdk -C $(PRIVATE_JDK_DIR) -D $(PRIVATE_JDK_DIR)
+ $(SOONG_ZIP) -o $@ -P $(PRIVATE_SUBDIR)/jdk -C $(PRIVATE_JDK_DIR) -D $(PRIVATE_JDK_DIR) -sha256
-$(call declare-license-metadata,$(test_suite_jdk),SPDX-license-identifier-GPL-2.0-with-classpath-exception,restricted,\
+$(call declare-license-metadata,$(test_suite_jdk),SPDX-license-identifier-GPL-2.0-with-classpath-exception,permissive,\
$(test_suite_jdk_dir)/legal/java.base/LICENSE,JDK,prebuilts/jdk/$(notdir $(patsubst %/,%,$(dir $(test_suite_jdk_dir)))))
+# Copy license metadata
+$(call declare-copy-target-license-metadata,$(out_dir)/$(notdir $(test_suite_jdk)),$(test_suite_jdk))
+$(foreach t,$(test_tools) $(test_suite_prebuilt_tools),\
+ $(eval _dst := $(out_dir)/tools/$(notdir $(t)))\
+ $(if $(strip $(ALL_TARGETS.$(t).META_LIC)),\
+ $(call declare-copy-target-license-metadata,$(_dst),$(t)),\
+ $(warning $(t) has no license metadata)\
+ )\
+)
+test_copied_tools := $(foreach t,$(test_tools) $(test_suite_prebuilt_tools), $(out_dir)/tools/$(notdir $(t))) $(out_dir)/$(notdir $(test_suite_jdk))
+
# Include host shared libraries
host_shared_libs := $(call copy-many-files, $(COMPATIBILITY.$(test_suite_name).HOST_SHARED_LIBRARY.FILES))
@@ -64,7 +75,7 @@
$(eval _src := $(call word-colon,1,$(p)))\
$(eval _dst := $(call word-colon,2,$(p)))\
$(if $(strip $(ALL_TARGETS.$(_src).META_LIC)),\
- $(eval ALL_TARGETS.$(_dst).META_LIC := $(ALL_TARGETS.$(_src).META_LIC)),\
+ $(call declare-copy-target-license-metadata,$(_dst),$(_src)),\
$(warning $(_src) has no license metadata for $(_dst))\
)\
)\
@@ -111,7 +122,7 @@
cp $(PRIVATE_TOOLS) $(PRIVATE_OUT_DIR)/tools
$(if $(PRIVATE_DYNAMIC_CONFIG),$(hide) cp $(PRIVATE_DYNAMIC_CONFIG) $(PRIVATE_OUT_DIR)/testcases/$(PRIVATE_SUITE_NAME).dynamic)
find $(PRIVATE_RESOURCES) | sort >$@.list
- $(SOONG_ZIP) -d -o $@.tmp -C $(dir $@) -l $@.list
+ $(SOONG_ZIP) -d -o $@.tmp -C $(dir $@) -l $@.list -sha256
$(MERGE_ZIPS) $@ $@.tmp $(PRIVATE_JDK)
rm -f $@.tmp
# Build a list of tests
@@ -123,7 +134,7 @@
$(call declare-0p-target,$(compatibility_tests_list_zip),)
$(call declare-1p-container,$(compatibility_zip),)
-$(call declare-container-license-deps,$(compatibility_zip),$(compatibility_zip_deps) $(test_suite_jdk), $(out_dir)/:/)
+$(call declare-container-license-deps,$(compatibility_zip),$(compatibility_zip_deps) $(test_copied_tools), $(out_dir)/:/)
$(eval $(call html-notice-rule,$(test_suite_notice_html),"Test suites","Notices for files contained in the test suites filesystem image:",$(compatibility_zip),$(compatibility_zip)))
$(eval $(call text-notice-rule,$(test_suite_notice_txt),"Test suites","Notices for files contained in the test suites filesystem image:",$(compatibility_zip),$(compatibility_zip)))
diff --git a/core/tasks/tools/package-modules.mk b/core/tasks/tools/package-modules.mk
index f89d51e..b15df28 100644
--- a/core/tasks/tools/package-modules.mk
+++ b/core/tasks/tools/package-modules.mk
@@ -27,7 +27,7 @@
LOCAL_MODULE_STEM := $(my_package_name).zip
LOCAL_UNINSTALLABLE_MODULE := true
include $(BUILD_SYSTEM)/base_rules.mk
-my_staging_dir := $(intermediates)
+my_staging_dir := $(intermediates)/staging
my_package_zip := $(LOCAL_BUILT_MODULE)
my_built_modules := $(foreach p,$(my_copy_pairs),$(call word-colon,1,$(p)))
@@ -50,12 +50,12 @@
$(error done)
endif
-my_missing_files = $(shell $(call echo-warning,$(my_makefile),$(my_package_name): Unknown installed file for module '$(1)'))
+my_missing_files = $(shell $(call echo-warning,$(my_makefile),$(my_package_name): Unknown installed file for module '$(1)'))$(shell $(call echo-warning,$(my_makefile),$(my_package_name): Some necessary modules may have been skipped by Soong. Check if PRODUCT_SOURCE_ROOT_DIRS is pruning necessary Android.bp files.))
ifeq ($(ALLOW_MISSING_DEPENDENCIES),true)
# Ignore unknown installed files on partial builds
my_missing_files =
else ifneq ($(my_modules_strict),false)
- my_missing_files = $(shell $(call echo-error,$(my_makefile),$(my_package_name): Unknown installed file for module '$(1)'))$(eval my_missing_error := true)
+  my_missing_files = $(shell $(call echo-error,$(my_makefile),$(my_package_name): Unknown installed file for module '$(1)'))$(shell $(call echo-warning,$(my_makefile),$(my_package_name): Some necessary modules may have been skipped by Soong. Check if PRODUCT_SOURCE_ROOT_DIRS is pruning necessary Android.bp files.))$(eval my_missing_error := true)
endif
# Iterate over modules' built files and installed files;
@@ -94,17 +94,18 @@
endif
$(my_package_zip): PRIVATE_COPY_PAIRS := $(my_copy_pairs)
+$(my_package_zip): PRIVATE_STAGING_DIR := $(my_staging_dir)
$(my_package_zip): PRIVATE_PICKUP_FILES := $(my_pickup_files)
$(my_package_zip) : $(my_built_modules)
@echo "Package $@"
- @rm -rf $(dir $@) && mkdir -p $(dir $@)
+ @rm -rf $(PRIVATE_STAGING_DIR) && mkdir -p $(PRIVATE_STAGING_DIR)
$(foreach p, $(PRIVATE_COPY_PAIRS),\
$(eval pair := $(subst :,$(space),$(p)))\
mkdir -p $(dir $(word 2,$(pair))) && \
cp -Rf $(word 1,$(pair)) $(word 2,$(pair)) && ) true
$(hide) $(foreach f, $(PRIVATE_PICKUP_FILES),\
- cp -RfL $(f) $(dir $@) && ) true
- $(hide) cd $(dir $@) && zip -rqX $(notdir $@) *
+ cp -RfL $(f) $(PRIVATE_STAGING_DIR) && ) true
+ $(hide) cd $(PRIVATE_STAGING_DIR) && zip -rqX ../$(notdir $@) *
my_makefile :=
my_staging_dir :=
diff --git a/core/tasks/tools/vts-kernel-tests.mk b/core/tasks/tools/vts-kernel-tests.mk
index 5fbb589..bd115c9 100644
--- a/core/tasks/tools/vts-kernel-tests.mk
+++ b/core/tasks/tools/vts-kernel-tests.mk
@@ -18,9 +18,12 @@
include $(BUILD_SYSTEM)/tasks/tools/vts_package_utils.mk
# Copy kernel test modules to testcases directories
-kernel_test_host_out := $(HOST_OUT_TESTCASES)/vts_kernel_tests
-kernel_test_vts_out := $(HOST_OUT)/$(test_suite_name)/android-$(test_suite_name)/testcases/vts_kernel_tests
-kernel_test_modules := \
- $(kselftest_modules) \
+kernel_ltp_host_out := $(HOST_OUT_TESTCASES)/vts_kernel_ltp_tests
+kernel_ltp_vts_out := $(HOST_OUT)/$(test_suite_name)/android-$(test_suite_name)/testcases/vts_kernel_ltp_tests
+kernel_ltp_modules := \
ltp \
- $(ltp_packages)
\ No newline at end of file
+ $(ltp_packages)
+
+kernel_kselftest_host_out := $(HOST_OUT_TESTCASES)/vts_kernel_kselftest_tests
+kernel_kselftest_vts_out := $(HOST_OUT)/$(test_suite_name)/android-$(test_suite_name)/testcases/vts_kernel_kselftest_tests
+kernel_kselftest_modules := $(kselftest_modules)
diff --git a/core/tasks/tools/vts_package_utils.mk b/core/tasks/tools/vts_package_utils.mk
index f1159b3..06161f0 100644
--- a/core/tasks/tools/vts_package_utils.mk
+++ b/core/tasks/tools/vts_package_utils.mk
@@ -29,6 +29,6 @@
$(eval my_copy_dest := $(patsubst data/%,DATA/%,\
$(patsubst system/%,DATA/%,\
$(patsubst $(PRODUCT_OUT)/%,%,$(ins)))))\
- $(eval ALL_TARGETS.$(2)/$(my_copy_dest).META_LIC := $(if $(strip $(ALL_MODULES.$(m).META_LIC)),$(ALL_MODULES.$(m).META_LIC),$(ALL_MODULES.$(m).DELAYED_META_LIC)))\
+ $(call declare-copy-target-license-metadata,$(2)/$(my_copy_dest),$(bui))\
$(bui):$(2)/$(my_copy_dest))))
endef
diff --git a/core/tasks/vts-core-tests.mk b/core/tasks/vts-core-tests.mk
index 5e1b5d5..bd7652b 100644
--- a/core/tasks/vts-core-tests.mk
+++ b/core/tasks/vts-core-tests.mk
@@ -18,12 +18,15 @@
include $(BUILD_SYSTEM)/tasks/tools/vts-kernel-tests.mk
-kernel_test_copy_pairs := \
- $(call target-native-copy-pairs,$(kernel_test_modules),$(kernel_test_vts_out))
+ltp_copy_pairs := \
+ $(call target-native-copy-pairs,$(kernel_ltp_modules),$(kernel_ltp_vts_out))
+kselftest_copy_pairs := \
+ $(call target-native-copy-pairs,$(kernel_kselftest_modules),$(kernel_kselftest_vts_out))
-copy_kernel_tests := $(call copy-many-files,$(kernel_test_copy_pairs))
+copy_ltp_tests := $(call copy-many-files,$(ltp_copy_pairs))
+copy_kselftest_tests := $(call copy-many-files,$(kselftest_copy_pairs))
-test_suite_extra_deps := $(copy_kernel_tests)
+test_suite_extra_deps := $(copy_ltp_tests) $(copy_kselftest_tests)
include $(BUILD_SYSTEM)/tasks/tools/compatibility.mk
diff --git a/core/tasks/wvts.mk b/core/tasks/wvts.mk
new file mode 100644
index 0000000..a79f613
--- /dev/null
+++ b/core/tasks/wvts.mk
@@ -0,0 +1,30 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Widevine test suite for non-GMS partners: go/android-wvts
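+# The definitions below are guarded so the suite is only set up when the
+# test/wvts project is actually checked out in the tree.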
+ifneq ($(wildcard test/wvts/tools/wvts-tradefed/README),)
+test_suite_name := wvts
+test_suite_tradefed := wvts-tradefed
+test_suite_dynamic_config := test/wvts/tools/wvts-tradefed/DynamicConfig.xml
+test_suite_readme := test/wvts/tools/wvts-tradefed/README
+
+$(call declare-1p-target,$(test_suite_dynamic_config),wvts)
+$(call declare-1p-target,$(test_suite_readme),wvts)
+
+include $(BUILD_SYSTEM)/tasks/tools/compatibility.mk
+
+.PHONY: wvts
+wvts: $(compatibility_zip) $(compatibility_tests_list_zip)
+$(call dist-for-goals, wvts, $(compatibility_zip) $(compatibility_tests_list_zip))
+endif
diff --git a/core/version_defaults.mk b/core/version_defaults.mk
index 9e0d571..d31574f 100644
--- a/core/version_defaults.mk
+++ b/core/version_defaults.mk
@@ -40,25 +40,26 @@
include $(INTERNAL_BUILD_ID_MAKEFILE)
endif
-DEFAULT_PLATFORM_VERSION := TP1A
+DEFAULT_PLATFORM_VERSION := UP1A
.KATI_READONLY := DEFAULT_PLATFORM_VERSION
-MIN_PLATFORM_VERSION := TP1A
-MAX_PLATFORM_VERSION := TP1A
+MIN_PLATFORM_VERSION := UP1A
+MAX_PLATFORM_VERSION := VP1A
# The last stable version name of the platform that was released. During
# development, this stays at that previous version, while the codename indicates
# further work based on the previous version.
-PLATFORM_VERSION_LAST_STABLE := 13
+PLATFORM_VERSION_LAST_STABLE := 14
.KATI_READONLY := PLATFORM_VERSION_LAST_STABLE
# These are the current development codenames, if the build is not a final
# release build. If this is a final release build, it is simply "REL".
-PLATFORM_VERSION_CODENAME.TP1A := REL
+PLATFORM_VERSION_CODENAME.UP1A := REL
+PLATFORM_VERSION_CODENAME.VP1A := VanillaIceCream
# This is the user-visible version. In a final release build it should
# be empty to use PLATFORM_VERSION as the user-visible version. For
# a preview release it can be set to a user-friendly value like `12 Preview 1`
-PLATFORM_DISPLAY_VERSION := 13
+PLATFORM_DISPLAY_VERSION :=
ifndef PLATFORM_SDK_VERSION
# This is the canonical definition of the SDK version, which defines
@@ -73,16 +74,16 @@
# When you increment the PLATFORM_SDK_VERSION please ensure you also
# clear out the following text file of all older PLATFORM_VERSIONs:
# cts/tests/tests/os/assets/platform_versions.txt
- PLATFORM_SDK_VERSION := 33
+ PLATFORM_SDK_VERSION := 34
endif
.KATI_READONLY := PLATFORM_SDK_VERSION
# This is the sdk extension version of this tree.
-PLATFORM_SDK_EXTENSION_VERSION := 3
+PLATFORM_SDK_EXTENSION_VERSION := 7
.KATI_READONLY := PLATFORM_SDK_EXTENSION_VERSION
# This is the sdk extension version that PLATFORM_SDK_VERSION ships with.
-PLATFORM_BASE_SDK_EXTENSION_VERSION := 3
+PLATFORM_BASE_SDK_EXTENSION_VERSION := $(PLATFORM_SDK_EXTENSION_VERSION)
.KATI_READONLY := PLATFORM_BASE_SDK_EXTENSION_VERSION
# These are all known codenames.
@@ -90,7 +91,7 @@
Base Base11 Cupcake Donut Eclair Eclair01 EclairMr1 Froyo Gingerbread GingerbreadMr1 \
Honeycomb HoneycombMr1 HoneycombMr2 IceCreamSandwich IceCreamSandwichMr1 \
JellyBean JellyBeanMr1 JellyBeanMr2 Kitkat KitkatWatch Lollipop LollipopMr1 M N NMr1 O OMr1 P \
-Q R S Sv2 Tiramisu
+Q R S Sv2 Tiramisu UpsideDownCake
# Convert from space separated list to comma separated
PLATFORM_VERSION_KNOWN_CODENAMES := \
diff --git a/core/version_util.mk b/core/version_util.mk
index 3a0d4b5..0a45296 100644
--- a/core/version_util.mk
+++ b/core/version_util.mk
@@ -56,39 +56,52 @@
# unreleased API level targetable by this branch, not just those that are valid
# lunch targets for this branch.
+PLATFORM_VERSION_CODENAME := $(PLATFORM_VERSION_CODENAME.$(TARGET_PLATFORM_VERSION))
ifndef PLATFORM_VERSION_CODENAME
- PLATFORM_VERSION_CODENAME := $(PLATFORM_VERSION_CODENAME.$(TARGET_PLATFORM_VERSION))
- ifndef PLATFORM_VERSION_CODENAME
- # PLATFORM_VERSION_CODENAME falls back to TARGET_PLATFORM_VERSION
- PLATFORM_VERSION_CODENAME := $(TARGET_PLATFORM_VERSION)
- endif
-
- # This is all of the *active* development codenames.
- # This confusing name is needed because
- # all_codenames has been baked into build.prop for ages.
- #
- # Should be either the same as PLATFORM_VERSION_CODENAME or a comma-separated
- # list of additional codenames after PLATFORM_VERSION_CODENAME.
- PLATFORM_VERSION_ALL_CODENAMES :=
-
- # Build a list of all active code names. Avoid duplicates, and stop when we
- # reach a codename that matches PLATFORM_VERSION_CODENAME (anything beyond
- # that is not included in our build).
- _versions_in_target := \
- $(call find_and_earlier,$(ALL_VERSIONS),$(TARGET_PLATFORM_VERSION))
- $(foreach version,$(_versions_in_target),\
- $(eval _codename := $(PLATFORM_VERSION_CODENAME.$(version)))\
- $(if $(filter $(_codename),$(PLATFORM_VERSION_ALL_CODENAMES)),,\
- $(eval PLATFORM_VERSION_ALL_CODENAMES += $(_codename))))
-
- # And convert from space separated to comma separated.
- PLATFORM_VERSION_ALL_CODENAMES := \
- $(subst $(space),$(comma),$(strip $(PLATFORM_VERSION_ALL_CODENAMES)))
-
+ # PLATFORM_VERSION_CODENAME falls back to TARGET_PLATFORM_VERSION
+ PLATFORM_VERSION_CODENAME := $(TARGET_PLATFORM_VERSION)
endif
+
+# This is all of the *active* development codenames.
+# This confusing name is needed because
+# all_codenames has been baked into build.prop for ages.
+#
+# Should be either the same as PLATFORM_VERSION_CODENAME or a comma-separated
+# list of additional codenames after PLATFORM_VERSION_CODENAME.
+PLATFORM_VERSION_ALL_CODENAMES :=
+
+# Build a list of all active code names. Avoid duplicates, and stop when we
+# reach a codename that matches PLATFORM_VERSION_CODENAME (anything beyond
+# that is not included in our build).
+_versions_in_target := \
+ $(call find_and_earlier,$(ALL_VERSIONS),$(TARGET_PLATFORM_VERSION))
+$(foreach version,$(_versions_in_target),\
+ $(eval _codename := $(PLATFORM_VERSION_CODENAME.$(version)))\
+ $(if $(filter $(_codename),$(PLATFORM_VERSION_ALL_CODENAMES)),,\
+ $(eval PLATFORM_VERSION_ALL_CODENAMES += $(_codename))))
+
+# And this is the list of all the codenames that are in preview. The
+# ALL_CODENAMES variable is sort of a lie for historical reasons and only
+# includes codenames up to and including the currently active codename, whereas
+# this variable also includes future codenames. For example, while AOSP is still
+# merging into U but V development has already started, ALL_CODENAMES will only
+# be U, but ALL_PREVIEW_CODENAMES will be U and V.
+PLATFORM_VERSION_ALL_PREVIEW_CODENAMES :=
+$(foreach version,$(ALL_VERSIONS),\
+ $(eval _codename := $(PLATFORM_VERSION_CODENAME.$(version)))\
+ $(if $(filter $(_codename),$(PLATFORM_VERSION_ALL_PREVIEW_CODENAMES)),,\
+ $(eval PLATFORM_VERSION_ALL_PREVIEW_CODENAMES += $(_codename))))
+
+# And convert from space separated to comma separated.
+PLATFORM_VERSION_ALL_CODENAMES := \
+ $(subst $(space),$(comma),$(strip $(PLATFORM_VERSION_ALL_CODENAMES)))
+PLATFORM_VERSION_ALL_PREVIEW_CODENAMES := \
+ $(subst $(space),$(comma),$(strip $(PLATFORM_VERSION_ALL_PREVIEW_CODENAMES)))
+
.KATI_READONLY := \
PLATFORM_VERSION_CODENAME \
- PLATFORM_VERSION_ALL_CODENAMES
+ PLATFORM_VERSION_ALL_CODENAMES \
+  PLATFORM_VERSION_ALL_PREVIEW_CODENAMES
ifneq (REL,$(PLATFORM_VERSION_CODENAME))
codenames := \
@@ -172,7 +185,7 @@
# to the public SDK where platform essentially supports all previous SDK versions,
# platform supports only a few recent system SDK versions, as some of the
# old system APIs are gradually deprecated, removed and then deleted.
- PLATFORM_SYSTEMSDK_MIN_VERSION := 28
+ PLATFORM_SYSTEMSDK_MIN_VERSION := 29
endif
.KATI_READONLY := PLATFORM_SYSTEMSDK_MIN_VERSION
@@ -253,6 +266,6 @@
# Used to set the minimum supported target sdk version. Apps targeting an sdk
# version lower than the set value will trigger a warning when any of their
# activities is started.
- PLATFORM_MIN_SUPPORTED_TARGET_SDK_VERSION := 23
+ PLATFORM_MIN_SUPPORTED_TARGET_SDK_VERSION := 28
endif
.KATI_READONLY := PLATFORM_MIN_SUPPORTED_TARGET_SDK_VERSION
diff --git a/envsetup.sh b/envsetup.sh
index be6061d..905635c 100644
--- a/envsetup.sh
+++ b/envsetup.sh
@@ -1,3 +1,55 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# gettop is duplicated here and in shell_utils.sh, because it's difficult
+# to find shell_utils.sh without it for all the novel ways this file can be
+# sourced. Other common functions should only be in one place or the other.
+function _gettop_once
+{
+ local TOPFILE=build/make/core/envsetup.mk
+ if [ -n "$TOP" -a -f "$TOP/$TOPFILE" ] ; then
+ # The following circumlocution ensures we remove symlinks from TOP.
+ (cd "$TOP"; PWD= /bin/pwd)
+ else
+ if [ -f $TOPFILE ] ; then
+ # The following circumlocution (repeated below as well) ensures
+ # that we record the true directory name and not one that is
+ # faked up with symlink names.
+ PWD= /bin/pwd
+ else
+ local HERE=$PWD
+ local T=
+ while [ \( ! \( -f $TOPFILE \) \) -a \( "$PWD" != "/" \) ]; do
+ \cd ..
+ T=`PWD= /bin/pwd -P`
+ done
+ \cd "$HERE"
+ if [ -f "$T/$TOPFILE" ]; then
+ echo "$T"
+ fi
+ fi
+ fi
+}
+T=$(_gettop_once)
+if [ ! "$T" ]; then
+ echo "Couldn't locate the top of the tree. Always source build/envsetup.sh from the root of the tree." >&2
+ return 1
+fi
+IMPORTING_ENVSETUP=true source $T/build/make/shell_utils.sh
+
+
+# Help
function hmm() {
cat <<EOF
@@ -10,7 +62,8 @@
invocations of 'm' etc.
- tapas: tapas [<App1> <App2> ...] [arm|x86|arm64|x86_64] [eng|userdebug|user]
Sets up the build environment for building unbundled apps (APKs).
-- banchan: banchan <module1> [<module2> ...] [arm|x86|arm64|x86_64] [eng|userdebug|user]
+- banchan: banchan <module1> [<module2> ...] [arm|x86|arm64|x86_64|arm64only|x86_64only] \
+ [eng|userdebug|user]
Sets up the build environment for building unbundled modules (APEXes).
- croot: Changes directory to the top of the tree, or a subdirectory thereof.
- m: Makes from the top of the tree.
@@ -26,6 +79,7 @@
- ggrep: Greps on all local Gradle files.
- gogrep: Greps on all local Go files.
- jgrep: Greps on all local Java files.
+- jsongrep: Greps on all local JSON files.
- ktgrep: Greps on all local Kotlin files.
- resgrep: Greps on all local res/*.xml files.
- mangrep: Greps on all local AndroidManifest.xml files.
@@ -34,9 +88,12 @@
- rsgrep: Greps on all local Rust files.
- sepgrep: Greps on all local sepolicy files.
- sgrep: Greps on all local source files.
+- tomlgrep: Greps on all local TOML files.
+- pygrep: Greps on all local Python files.
- godir: Go to the directory containing a file.
- allmod: List all modules.
- gomod: Go to the directory containing a module.
+- bmod: Get the Bazel label of a Soong module if it is converted with bp2build.
- pathmod: Get the directory containing a module.
- outmod: Gets the location of a module's installed outputs with a certain extension.
- dirmods: Gets the modules defined in a given directory.
@@ -171,7 +228,10 @@
return 1
}
-function setpaths()
+
+# Add directories to PATH that are dependent on the lunch target.
+# For directories that are not lunch-specific, add them in set_global_paths
+function set_lunch_paths()
{
local T=$(gettop)
if [ ! "$T" ]; then
@@ -183,129 +243,80 @@
# #
# Read me before you modify this code #
# #
- # This function sets ANDROID_BUILD_PATHS to what it is adding #
- # to PATH, and the next time it is run, it removes that from #
- # PATH. This is required so lunch can be run more than once #
- # and still have working paths. #
+ # This function sets ANDROID_LUNCH_BUILD_PATHS to what it is #
+ # adding to PATH, and the next time it is run, it removes that #
+ # from PATH. This is required so lunch can be run more than #
+ # once and still have working paths. #
# #
##################################################################
- # Note: on windows/cygwin, ANDROID_BUILD_PATHS will contain spaces
+ # Note: on windows/cygwin, ANDROID_LUNCH_BUILD_PATHS will contain spaces
# due to "C:\Program Files" being in the path.
- # out with the old
- if [ -n "$ANDROID_BUILD_PATHS" ] ; then
- export PATH=${PATH/$ANDROID_BUILD_PATHS/}
+ # Handle compat with the old ANDROID_BUILD_PATHS variable.
+ # TODO: Remove this after we think everyone has lunched again.
+ if [ -z "$ANDROID_LUNCH_BUILD_PATHS" -a -n "$ANDROID_BUILD_PATHS" ] ; then
+ ANDROID_LUNCH_BUILD_PATHS="$ANDROID_BUILD_PATHS"
+ ANDROID_BUILD_PATHS=
fi
if [ -n "$ANDROID_PRE_BUILD_PATHS" ] ; then
export PATH=${PATH/$ANDROID_PRE_BUILD_PATHS/}
# strip leading ':', if any
export PATH=${PATH/:%/}
+ ANDROID_PRE_BUILD_PATHS=
fi
- # and in with the new
- local prebuiltdir=$(getprebuilt)
- local gccprebuiltdir=$(get_abs_build_var ANDROID_GCC_PREBUILTS)
-
- # defined in core/config.mk
- local targetgccversion=$(get_build_var TARGET_GCC_VERSION)
- local targetgccversion2=$(get_build_var 2ND_TARGET_GCC_VERSION)
- export TARGET_GCC_VERSION=$targetgccversion
-
- # The gcc toolchain does not exists for windows/cygwin. In this case, do not reference it.
- export ANDROID_TOOLCHAIN=
- export ANDROID_TOOLCHAIN_2ND_ARCH=
- local ARCH=$(get_build_var TARGET_ARCH)
- local toolchaindir toolchaindir2=
- case $ARCH in
- x86) toolchaindir=x86/x86_64-linux-android-$targetgccversion/bin
- ;;
- x86_64) toolchaindir=x86/x86_64-linux-android-$targetgccversion/bin
- ;;
- arm) toolchaindir=arm/arm-linux-androideabi-$targetgccversion/bin
- ;;
- arm64) toolchaindir=aarch64/aarch64-linux-android-$targetgccversion/bin;
- toolchaindir2=arm/arm-linux-androideabi-$targetgccversion2/bin
- ;;
- *)
- echo "Can't find toolchain for unknown architecture: $ARCH"
- toolchaindir=xxxxxxxxx
- ;;
- esac
- if [ -d "$gccprebuiltdir/$toolchaindir" ]; then
- export ANDROID_TOOLCHAIN=$gccprebuiltdir/$toolchaindir
+ # Out with the old...
+ if [ -n "$ANDROID_LUNCH_BUILD_PATHS" ] ; then
+ export PATH=${PATH/$ANDROID_LUNCH_BUILD_PATHS/}
fi
- if [ "$toolchaindir2" -a -d "$gccprebuiltdir/$toolchaindir2" ]; then
- export ANDROID_TOOLCHAIN_2ND_ARCH=$gccprebuiltdir/$toolchaindir2
- fi
+ # And in with the new...
+ ANDROID_LUNCH_BUILD_PATHS=$(get_abs_build_var SOONG_HOST_OUT_EXECUTABLES)
+ ANDROID_LUNCH_BUILD_PATHS+=:$(get_abs_build_var HOST_OUT_EXECUTABLES)
- export ANDROID_DEV_SCRIPTS=$T/development/scripts:$T/prebuilts/devtools/tools
-
- # add kernel specific binaries
- case $(uname -s) in
- Linux)
- export ANDROID_DEV_SCRIPTS=$ANDROID_DEV_SCRIPTS:$T/prebuilts/misc/linux-x86/dtc:$T/prebuilts/misc/linux-x86/libufdt
- ;;
- *)
- ;;
- esac
-
- ANDROID_BUILD_PATHS=$(get_build_var ANDROID_BUILD_PATHS):$ANDROID_TOOLCHAIN
- ANDROID_BUILD_PATHS=$ANDROID_BUILD_PATHS:$ANDROID_TOOLCHAIN_2ND_ARCH
- ANDROID_BUILD_PATHS=$ANDROID_BUILD_PATHS:$ANDROID_DEV_SCRIPTS
-
- # Append llvm binutils prebuilts path to ANDROID_BUILD_PATHS.
+ # Append llvm binutils prebuilts path to ANDROID_LUNCH_BUILD_PATHS.
local ANDROID_LLVM_BINUTILS=$(get_abs_build_var ANDROID_CLANG_PREBUILTS)/llvm-binutils-stable
- ANDROID_BUILD_PATHS=$ANDROID_BUILD_PATHS:$ANDROID_LLVM_BINUTILS
+ ANDROID_LUNCH_BUILD_PATHS+=:$ANDROID_LLVM_BINUTILS
# Set up ASAN_SYMBOLIZER_PATH for SANITIZE_HOST=address builds.
export ASAN_SYMBOLIZER_PATH=$ANDROID_LLVM_BINUTILS/llvm-symbolizer
- # If prebuilts/android-emulator/<system>/ exists, prepend it to our PATH
- # to ensure that the corresponding 'emulator' binaries are used.
- case $(uname -s) in
- Darwin)
- ANDROID_EMULATOR_PREBUILTS=$T/prebuilts/android-emulator/darwin-x86_64
- ;;
- Linux)
- ANDROID_EMULATOR_PREBUILTS=$T/prebuilts/android-emulator/linux-x86_64
- ;;
- *)
- ANDROID_EMULATOR_PREBUILTS=
- ;;
- esac
- if [ -n "$ANDROID_EMULATOR_PREBUILTS" -a -d "$ANDROID_EMULATOR_PREBUILTS" ]; then
- ANDROID_BUILD_PATHS=$ANDROID_BUILD_PATHS:$ANDROID_EMULATOR_PREBUILTS
- export ANDROID_EMULATOR_PREBUILTS
- fi
-
- # Append asuite prebuilts path to ANDROID_BUILD_PATHS.
+ # Append asuite prebuilts path to ANDROID_LUNCH_BUILD_PATHS.
local os_arch=$(get_build_var HOST_PREBUILT_TAG)
- local ACLOUD_PATH="$T/prebuilts/asuite/acloud/$os_arch"
- local AIDEGEN_PATH="$T/prebuilts/asuite/aidegen/$os_arch"
- local ATEST_PATH="$T/prebuilts/asuite/atest/$os_arch"
- ANDROID_BUILD_PATHS=$ANDROID_BUILD_PATHS:$ACLOUD_PATH:$AIDEGEN_PATH:$ATEST_PATH
-
- export ANDROID_BUILD_PATHS=$(tr -s : <<<"${ANDROID_BUILD_PATHS}:")
- export PATH=$ANDROID_BUILD_PATHS$PATH
-
- # out with the duplicate old
- if [ -n $ANDROID_PYTHONPATH ]; then
- export PYTHONPATH=${PYTHONPATH//$ANDROID_PYTHONPATH/}
- fi
- # and in with the new
- export ANDROID_PYTHONPATH=$T/development/python-packages:
- if [ -n $VENDOR_PYTHONPATH ]; then
- ANDROID_PYTHONPATH=$ANDROID_PYTHONPATH$VENDOR_PYTHONPATH
- fi
- export PYTHONPATH=$ANDROID_PYTHONPATH$PYTHONPATH
+ ANDROID_LUNCH_BUILD_PATHS+=:$T/prebuilts/asuite/acloud/$os_arch
+ ANDROID_LUNCH_BUILD_PATHS+=:$T/prebuilts/asuite/aidegen/$os_arch
+ ANDROID_LUNCH_BUILD_PATHS+=:$T/prebuilts/asuite/atest/$os_arch
export ANDROID_JAVA_HOME=$(get_abs_build_var ANDROID_JAVA_HOME)
export JAVA_HOME=$ANDROID_JAVA_HOME
export ANDROID_JAVA_TOOLCHAIN=$(get_abs_build_var ANDROID_JAVA_TOOLCHAIN)
- export ANDROID_PRE_BUILD_PATHS=$ANDROID_JAVA_TOOLCHAIN:
- export PATH=$ANDROID_PRE_BUILD_PATHS$PATH
+ ANDROID_LUNCH_BUILD_PATHS+=:$ANDROID_JAVA_TOOLCHAIN
+
+ # Fix up PYTHONPATH
+ if [ -n "$ANDROID_PYTHONPATH" ]; then
+ export PYTHONPATH=${PYTHONPATH//$ANDROID_PYTHONPATH/}
+ fi
+ # //development/python-packages contains both a pseudo-PYTHONPATH, which
+ # mimics an already assembled venv, and real Python packages that are not
+ # in that layout until they are installed. We can fake it for the latter
+ # type by adding the package source directories to the PYTHONPATH directly.
+ # For the former group, we only need to add the python-packages directory
+ # itself.
+ #
+ # This could be cleaned up by converting the remaining packages that are in
+ # the first category into a typical python source layout (that is, another
+ # layer of directory nesting) and automatically adding all subdirectories of
+ # python-packages to the PYTHONPATH instead of manually curating this. We
+ # can't convert the packages like adb to the other style because doing so
+ # would prevent exporting type info from those packages.
+ #
+ # http://b/266688086
+ export ANDROID_PYTHONPATH=$T/development/python-packages/adb:$T/development/python-packages:
+ if [ -n "$VENDOR_PYTHONPATH" ]; then
+ ANDROID_PYTHONPATH=$ANDROID_PYTHONPATH$VENDOR_PYTHONPATH
+ fi
+ export PYTHONPATH=$ANDROID_PYTHONPATH$PYTHONPATH
unset ANDROID_PRODUCT_OUT
export ANDROID_PRODUCT_OUT=$(get_abs_build_var PRODUCT_OUT)
@@ -323,25 +334,67 @@
unset ANDROID_TARGET_OUT_TESTCASES
export ANDROID_TARGET_OUT_TESTCASES=$(get_abs_build_var TARGET_OUT_TESTCASES)
- # needed for building linux on MacOS
- # TODO: fix the path
- #export HOST_EXTRACFLAGS="-I "$T/system/kernel_headers/host_include
+ # Finally, set PATH
+ export PATH=$ANDROID_LUNCH_BUILD_PATHS:$PATH
}
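The python-packages comment above distinguishes a venv-like layout (the package directory sits directly under python-packages) from a conventional source layout (one extra level of nesting, as with adb). The sketch below illustrates why each layout needs a different PYTHONPATH entry; the scratch paths and package names are hypothetical.

```bash
# Two hypothetical layouts under a scratch directory.
mkdir -p /tmp/pp/foo                 # venv-like: importable from /tmp/pp itself
echo 'NAME = "foo"' > /tmp/pp/foo/__init__.py
mkdir -p /tmp/pp/adbdemo/adbdemo     # source layout: one extra directory of nesting
echo 'NAME = "adbdemo"' > /tmp/pp/adbdemo/adbdemo/__init__.py

# The venv-like package only needs /tmp/pp on PYTHONPATH; the source-layout
# package also needs its own source dir, which is why envsetup adds both
# .../python-packages/adb and .../python-packages.
PYTHONPATH=/tmp/pp/adbdemo:/tmp/pp python3 -c 'import foo, adbdemo; print(foo.NAME, adbdemo.NAME)'
```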
-function bazel()
+# Add directories to PATH that are NOT dependent on the lunch target.
+# For directories that are lunch-specific, add them in set_lunch_paths
+function set_global_paths()
{
- if which bazel &>/dev/null; then
- >&2 echo "NOTE: bazel() function sourced from Android's envsetup.sh is being used instead of $(which bazel)"
- >&2 echo
- fi
-
- local T="$(gettop)"
+ local T=$(gettop)
if [ ! "$T" ]; then
- >&2 echo "Couldn't locate the top of the Android tree. Try setting TOP. This bazel() function cannot be used outside of the AOSP directory."
+ echo "Couldn't locate the top of the tree. Try setting TOP."
return
fi
- "$T/tools/bazel" "$@"
+ ##################################################################
+ # #
+ # Read me before you modify this code #
+ # #
+ # This function sets ANDROID_GLOBAL_BUILD_PATHS to what it is #
+ # adding to PATH, and the next time it is run, it removes that #
+ # from PATH. This is required so envsetup.sh can be sourced #
+ # more than once and still have working paths. #
+ # #
+ ##################################################################
+
+ # Out with the old...
+ if [ -n "$ANDROID_GLOBAL_BUILD_PATHS" ] ; then
+ export PATH=${PATH/$ANDROID_GLOBAL_BUILD_PATHS/}
+ fi
+
+ # And in with the new...
+ ANDROID_GLOBAL_BUILD_PATHS=$T/build/bazel/bin
+ ANDROID_GLOBAL_BUILD_PATHS+=:$T/development/scripts
+ ANDROID_GLOBAL_BUILD_PATHS+=:$T/prebuilts/devtools/tools
+
+ # add kernel specific binaries
+ if [ $(uname -s) = Linux ] ; then
+ ANDROID_GLOBAL_BUILD_PATHS+=:$T/prebuilts/misc/linux-x86/dtc
+ ANDROID_GLOBAL_BUILD_PATHS+=:$T/prebuilts/misc/linux-x86/libufdt
+ fi
+
+ # If prebuilts/android-emulator/<system>/ exists, prepend it to our PATH
+ # to ensure that the corresponding 'emulator' binaries are used.
+ case $(uname -s) in
+ Darwin)
+ ANDROID_EMULATOR_PREBUILTS=$T/prebuilts/android-emulator/darwin-x86_64
+ ;;
+ Linux)
+ ANDROID_EMULATOR_PREBUILTS=$T/prebuilts/android-emulator/linux-x86_64
+ ;;
+ *)
+ ANDROID_EMULATOR_PREBUILTS=
+ ;;
+ esac
+ if [ -n "$ANDROID_EMULATOR_PREBUILTS" -a -d "$ANDROID_EMULATOR_PREBUILTS" ]; then
+ ANDROID_GLOBAL_BUILD_PATHS+=:$ANDROID_EMULATOR_PREBUILTS
+ export ANDROID_EMULATOR_PREBUILTS
+ fi
+
+ # Finally, set PATH
+ export PATH=$ANDROID_GLOBAL_BUILD_PATHS:$PATH
}
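Both set_lunch_paths and set_global_paths rely on the same strip-then-prepend idiom described in the comment box above, which is what lets envsetup.sh be sourced repeatedly without PATH growing. A minimal standalone sketch of the pattern, with hypothetical variable names and directories:

```bash
# Idempotent PATH update: remove whatever we added last time, then prepend
# the fresh set. Any number of calls leaves exactly one copy in PATH.
function update_demo_paths()
{
    # Out with the old...
    if [ -n "$DEMO_BUILD_PATHS" ]; then
        PATH=${PATH/$DEMO_BUILD_PATHS:/}
    fi
    # And in with the new...
    DEMO_BUILD_PATHS=/opt/demo/bin
    DEMO_BUILD_PATHS+=:/opt/demo/scripts
    export PATH=$DEMO_BUILD_PATHS:$PATH
}
update_demo_paths
update_demo_paths   # PATH still contains /opt/demo/bin exactly once
```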
function printconfig()
@@ -356,12 +409,10 @@
function set_stuff_for_environment()
{
- setpaths
+ set_lunch_paths
set_sequence_number
export ANDROID_BUILD_TOP=$(gettop)
- # With this environment variable new GCC can apply colors to warnings/errors
- export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
}
function set_sequence_number()
@@ -395,16 +446,21 @@
fi
local completion_files=(
- system/core/adb/adb.bash
+ packages/modules/adb/adb.bash
system/core/fastboot/fastboot.bash
tools/asuite/asuite.sh
+ prebuilts/bazel/common/bazel-complete.bash
)
# Completion can be disabled selectively to allow users to use non-standard completion.
# e.g.
# ENVSETUP_NO_COMPLETION=adb # -> disable adb completion
# ENVSETUP_NO_COMPLETION=adb:bit # -> disable adb and bit completion
+ local T=$(gettop)
for f in ${completion_files[*]}; do
- if [ -f "$f" ] && should_add_completion "$f"; then
+ f="$T/$f"
+ if [ ! -f "$f" ]; then
+ echo "Warning: completion file $f not found"
+ elif should_add_completion "$f"; then
. $f
fi
done
@@ -415,6 +471,8 @@
if [ -z "$ZSH_VERSION" ]; then
# Doesn't work in zsh.
complete -o nospace -F _croot croot
+ # TODO(b/244559459): Support b autocompletion for zsh
+ complete -F _bazel__complete -o nospace b
fi
complete -F _lunch lunch
@@ -422,6 +480,7 @@
complete -F _complete_android_module_names gomod
complete -F _complete_android_module_names outmod
complete -F _complete_android_module_names installmod
+ complete -F _complete_android_module_names bmod
complete -F _complete_android_module_names m
}
@@ -451,10 +510,17 @@
{
local code
local results
+ # Lunch must be run from the top of the tree; checking that here gives a
+ # clear error message instead of a raw FileNotFound.
+ local T=$(multitree_gettop)
+ if [ -z "$T" ]; then
+ _multitree_lunch_error
+ return 1
+ fi
if $(echo "$1" | grep -q '^-') ; then
# Calls starting with a -- argument are passed directly and the function
# returns with the lunch.py exit code.
- build/make/orchestrator/core/lunch.py "$@"
+ "${T}/orchestrator/build/orchestrator/core/lunch.py" "$@"
code=$?
if [[ $code -eq 2 ]] ; then
echo 1>&2
@@ -465,7 +531,7 @@
fi
else
# All other calls go through the --lunch variant of lunch.py
- results=($(build/make/orchestrator/core/lunch.py --lunch "$@"))
+ results=($(${T}/orchestrator/build/orchestrator/core/lunch.py --lunch "$@"))
code=$?
if [[ $code -eq 2 ]] ; then
echo 1>&2
@@ -880,7 +946,7 @@
function banchan()
{
local showHelp="$(echo $* | xargs -n 1 echo | \grep -E '^(help)$' | xargs)"
- local product="$(echo $* | xargs -n 1 echo | \grep -E '^(.*_)?(arm|x86|arm64|x86_64)$' | xargs)"
+ local product="$(echo $* | xargs -n 1 echo | \grep -E '^(.*_)?(arm|x86|arm64|x86_64|arm64only|x86_64only)$' | xargs)"
local variant="$(echo $* | xargs -n 1 echo | \grep -E '^(user|userdebug|eng)$' | xargs)"
local apps="$(echo $* | xargs -n 1 echo | \grep -E -v '^(user|userdebug|eng|(.*_)?(arm|x86|arm64|x86_64))$' | xargs)"
@@ -890,7 +956,7 @@
fi
if [ -z "$product" ]; then
- product=arm
+ product=arm64
elif [ $(echo $product | wc -w) -gt 1 ]; then
echo "banchan: Error: Multiple build archs or products supplied: $products"
return
@@ -909,6 +975,8 @@
x86) product=module_x86;;
arm64) product=module_arm64;;
x86_64) product=module_x86_64;;
+ arm64only) product=module_arm64only;;
+ x86_64only) product=module_x86_64only;;
esac
if [ -z "$variant" ]; then
variant=eng
@@ -929,9 +997,10 @@
destroy_build_var_cache
}
-function gettop
+# TODO: Merge into gettop as part of launching multitree
+function multitree_gettop
{
- local TOPFILE=build/make/core/envsetup.mk
+ local TOPFILE=orchestrator/build/make/core/envsetup.mk
if [ -n "$TOP" -a -f "$TOP/$TOPFILE" ] ; then
# The following circumlocution ensures we remove symlinks from TOP.
(cd "$TOP"; PWD= /bin/pwd)
@@ -1028,7 +1097,7 @@
# Easy way to make system.img/etc writable
function syswrite() {
adb wait-for-device && adb root || return 1
- if [[ $(adb disable-verity | grep "reboot") ]]; then
+ if [[ $(adb disable-verity | grep -i "reboot") ]]; then
echo "rebooting"
adb reboot && adb wait-for-device && adb root || return 1
fi
@@ -1079,7 +1148,7 @@
return;
fi;
echo "Setting core limit for $PID to infinite...";
- adb shell /system/bin/ulimit -p $PID -c unlimited
+ adb shell /system/bin/ulimit -P $PID -c unlimited
}
# core - send SIGV and pull the core for process
@@ -1141,7 +1210,7 @@
Darwin)
function sgrep()
{
- find -E . -name .repo -prune -o -name .git -prune -o -type f -iregex '.*\.(c|h|cc|cpp|hpp|S|java|kt|xml|sh|mk|aidl|vts|proto)' \
+ find -E . -name .repo -prune -o -name .git -prune -o -type f -iregex '.*\.(c|h|cc|cpp|hpp|S|java|kt|xml|sh|mk|aidl|vts|proto|rs|go)' \
-exec grep --color -n "$@" {} +
}
@@ -1149,7 +1218,7 @@
*)
function sgrep()
{
- find . -name .repo -prune -o -name .git -prune -o -type f -iregex '.*\.\(c\|h\|cc\|cpp\|hpp\|S\|java\|kt\|xml\|sh\|mk\|aidl\|vts\|proto\)' \
+ find . -name .repo -prune -o -name .git -prune -o -type f -iregex '.*\.\(c\|h\|cc\|cpp\|hpp\|S\|java\|kt\|xml\|sh\|mk\|aidl\|vts\|proto\|rs\|go\)' \
-exec grep --color -n "$@" {} +
}
;;
@@ -1184,6 +1253,18 @@
-exec grep --color -n "$@" {} +
}
+function jsongrep()
+{
+ find . -name .repo -prune -o -name .git -prune -o -name out -prune -o -type f -name "*\.json" \
+ -exec grep --color -n "$@" {} +
+}
+
+function tomlgrep()
+{
+ find . -name .repo -prune -o -name .git -prune -o -name out -prune -o -type f -name "*\.toml" \
+ -exec grep --color -n "$@" {} +
+}
+
function ktgrep()
{
find . -name .repo -prune -o -name .git -prune -o -name out -prune -o -type f -name "*\.kt" \
@@ -1228,6 +1309,12 @@
-exec grep --color -n "$@" {} +
}
+function pygrep()
+{
+ find . -name .repo -prune -o -name .git -prune -o -name out -prune -o -type f -name "*\.py" \
+ -exec grep --color -n "$@" {} +
+}
+
case `uname -s` in
Darwin)
function mgrep()
@@ -1553,14 +1640,49 @@
fi
}
-# List all modules for the current device, as cached in module-info.json. If any build change is
-# made and it should be reflected in the output, you should run 'refreshmod' first.
+# List all modules for the current device, as cached in all_modules.txt. If a build change
+# was made that should be reflected in the output, run `m nothing` first.
function allmod() {
- verifymodinfo || return 1
-
- python3 -c "import json; print('\n'.join(sorted(json.load(open('$ANDROID_PRODUCT_OUT/module-info.json')).keys())))"
+ cat $ANDROID_PRODUCT_OUT/all_modules.txt 2>/dev/null
}
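Since allmod is now just a cat of a generated file, it is cheap enough to pipe freely; for example (the module prefix is illustrative):

```bash
# List modules starting with "adb" for the currently lunched product.
allmod | grep -E '^adb'

# Count every module the current product knows about.
allmod | wc -l
```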
+# Return the Bazel label of a Soong module if it is converted with bp2build.
+function bmod()
+(
+ if [ $# -ne 1 ]; then
+ echo "usage: bmod <module>" >&2
+ return 1
+ fi
+
+ # We could run bp2build here, but it might trigger bp2build invalidation
+ # when used with `b` (e.g. --run_soong_tests) and/or add unnecessary waiting
+ # time overhead.
+ #
+ # For a snappy result, use the latest generated version in soong_injection,
+ # and ask users to run m bp2build if it doesn't exist.
+ converted_json="$(get_abs_build_var OUT_DIR)/soong/soong_injection/metrics/converted_modules_path_map.json"
+
+ if [ ! -f "${converted_json}" ]; then
+ echo "bp2build files not found. Have you run 'm bp2build'?" >&2
+ return 1
+ fi
+
+ local target_label=$(python3 -c "import json
+module = '$1'
+converted_json='$converted_json'
+bp2build_converted_map = json.load(open(converted_json))
+if module not in bp2build_converted_map:
+ exit(1)
+print(bp2build_converted_map[module] + ':' + module)")
+
+ if [ -z "${target_label}" ]; then
+ echo "$1 is not converted to Bazel." >&2
+ return 1
+ else
+ echo "${target_label}"
+ fi
+)
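A possible bmod workflow, assuming the bp2build metadata has already been generated; the module name libbase and the printed label are illustrative only:

```bash
m bp2build                   # generate the converted-modules map once
bmod libbase                 # prints e.g. //system/libbase:libbase if converted
b build "$(bmod libbase)"    # feed the label straight to b
```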
+
# Get the path of a specific module in the android tree, as cached in module-info.json.
# If any build change is made, and it should be reflected in the output, you should run
# 'refreshmod' first. Note: This is the inverse of dirmods.
@@ -1700,7 +1822,7 @@
function _complete_android_module_names() {
local word=${COMP_WORDS[COMP_CWORD]}
- COMPREPLY=( $(QUIET_VERIFYMODINFO=true allmod | grep -E "^$word") )
+ COMPREPLY=( $(allmod | grep -E "^$word") )
}
# Print colored exit condition
@@ -1759,11 +1881,6 @@
color_reset=""
fi
- if [[ "x${USE_RBE}" == "x" && $mins -gt 15 && "${ANDROID_BUILD_ENVIRONMENT_CONFIG}" == "googler" ]]; then
- echo
- echo "${color_warning}Start using RBE (http://go/build-fast) to get faster builds!${color_reset}"
- fi
-
echo
if [ $ret -eq 0 ] ; then
echo -n "${color_success}#### build completed successfully "
@@ -1785,7 +1902,8 @@
function _trigger_build()
(
local -r bc="$1"; shift
- if T="$(gettop)"; then
+ local T=$(gettop)
+ if [ -n "$T" ]; then
_wrap_build "$T/build/soong/soong_ui.bash" --build-mode --${bc} --dir="$(pwd)" "$@"
else
>&2 echo "Couldn't locate the top of the tree. Try setting TOP."
@@ -1793,21 +1911,6 @@
fi
)
-# Convenience entry point (like m) to use Bazel in AOSP.
-function b()
-(
- # Generate BUILD, bzl files into the synthetic Bazel workspace (out/soong/workspace).
- _trigger_build "all-modules" bp2build USE_BAZEL_ANALYSIS= || return 1
- # Then, run Bazel using the synthetic workspace as the --package_path.
- if [[ -z "$@" ]]; then
- # If there are no args, show help.
- bazel help
- else
- # Else, always run with the bp2build configuration, which sets Bazel's package path to the synthetic workspace.
- bazel "$@" --config=bp2build
- fi
-)
-
function m()
(
_trigger_build "all-modules" "$@"
@@ -1838,6 +1941,22 @@
_wrap_build $(get_make_command "$@") "$@"
}
+function _multitree_lunch_error()
+{
+ >&2 echo "Couldn't locate the top of the tree. Please run \'source build/envsetup.sh\' and multitree_lunch from the root of your workspace."
+}
+
+function multitree_build()
+{
+ local T=$(multitree_gettop)
+ if [ -n "$T" ]; then
+ "$T/orchestrator/build/orchestrator/core/orchestrator.py" "$@"
+ else
+ _multitree_lunch_error
+ return 1
+ fi
+}
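A hedged sketch of how the multitree helpers compose; the combo name is hypothetical:

```bash
# From the root of a multitree workspace (multitree_gettop must succeed):
source build/envsetup.sh
multitree_lunch my_combo-eng    # hypothetical combo name
multitree_build                 # arguments are forwarded to orchestrator.py
```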
+
function provision()
{
if [ ! "$ANDROID_PRODUCT_OUT" ]; then
@@ -1943,13 +2062,7 @@
return
;;
esac
- if [[ -z "$OUT_DIR" ]]; then
- if [[ -z "$OUT_DIR_COMMON_BASE" ]]; then
- OUT_DIR=out
- else
- OUT_DIR=${OUT_DIR_COMMON_BASE}/${PWD##*/}
- fi
- fi
+ OUT_DIR="$(get_abs_build_var OUT_DIR)"
if [[ "$1" == "--regenerate" ]]; then
shift 1
NINJA_ARGS="-t commands $@" m
@@ -1960,6 +2073,15 @@
fi
}
+function avbtool() {
+ if [[ ! -f "$ANDROID_SOONG_HOST_OUT"/bin/avbtool ]]; then
+ m avbtool
+ fi
+ "$ANDROID_SOONG_HOST_OUT"/bin/avbtool $@
+}
+
validate_current_shell
+set_global_paths
source_vendorsetup
addcompletions
+
diff --git a/finalize_branch_for_release.sh b/finalize_branch_for_release.sh
deleted file mode 100755
index d498beb..0000000
--- a/finalize_branch_for_release.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-
-set -e
-
-source ../envsetup.sh
-
-# default target to modify tree and build SDK
-lunch aosp_arm64-userdebug
-
-set -x
-
-# This script is WIP and only finalizes part of the Android branch for release.
-# The full process can be found at (INTERNAL) go/android-sdk-finalization.
-
-# VNDK snapshot (TODO)
-# SDK snapshots (TODO)
-# Update references in the codebase to new API version (TODO)
-# ...
-
-AIDL_TRANSITIVE_FREEZE=true m aidl-freeze-api
-
-# TODO(b/229413853): test while simulating 'rel' for more requirements AIDL_FROZEN_REL=true
-m # test build
-
-# Build SDK (TODO)
-# lunch sdk...
-# m ...
diff --git a/help.sh b/help.sh
index e51adc1..c405959 100755
--- a/help.sh
+++ b/help.sh
@@ -26,6 +26,8 @@
clean (aka clobber) equivalent to rm -rf out/
checkbuild Build every module defined in the source tree
droid Default target
+ sync Build everything in the default target except the images,
+ for use with adb sync.
nothing Do not build anything, just parse and validate the build structure
java Build all the java code in the source tree
diff --git a/orchestrator/core/lunch.py b/orchestrator/core/lunch.py
deleted file mode 100755
index 35dac73..0000000
--- a/orchestrator/core/lunch.py
+++ /dev/null
@@ -1,329 +0,0 @@
-#!/usr/bin/python3
-#
-# Copyright (C) 2022 The Android Open Source Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import glob
-import json
-import os
-import sys
-
-EXIT_STATUS_OK = 0
-EXIT_STATUS_ERROR = 1
-EXIT_STATUS_NEED_HELP = 2
-
-def FindDirs(path, name, ttl=6):
- """Search at most ttl directories deep inside path for a directory called name."""
- # The dance with subdirs is so that we recurse in sorted order.
- subdirs = []
- with os.scandir(path) as it:
- for dirent in sorted(it, key=lambda x: x.name):
- try:
- if dirent.is_dir():
- if dirent.name == name:
- yield os.path.join(path, dirent.name)
- elif ttl > 0:
- subdirs.append(dirent.name)
- except OSError:
- # Consume filesystem errors, e.g. too many links, permission etc.
- pass
- for subdir in subdirs:
- yield from FindDirs(os.path.join(path, subdir), name, ttl-1)
-
-
-def WalkPaths(path, matcher, ttl=10):
- """Do a traversal of all files under path yielding each file that matches
- matcher."""
- # First look for files, then recurse into directories as needed.
- # The dance with subdirs is so that we recurse in sorted order.
- subdirs = []
- with os.scandir(path) as it:
- for dirent in sorted(it, key=lambda x: x.name):
- try:
- if dirent.is_file():
- if matcher(dirent.name):
- yield os.path.join(path, dirent.name)
- if dirent.is_dir():
- if ttl > 0:
- subdirs.append(dirent.name)
- except OSError:
- # Consume filesystem errors, e.g. too many links, permission etc.
- pass
- for subdir in sorted(subdirs):
- yield from WalkPaths(os.path.join(path, subdir), matcher, ttl-1)
-
-
-def FindFile(path, filename):
- """Return a file called filename inside path, no more than ttl levels deep.
-
- Directories are searched alphabetically.
- """
- for f in WalkPaths(path, lambda x: x == filename):
- return f
-
-
-def FindConfigDirs(workspace_root):
- """Find the configuration files in the well known locations inside workspace_root
-
- <workspace_root>/build/orchestrator/multitree_combos
- (AOSP devices, such as cuttlefish)
-
- <workspace_root>/vendor/**/multitree_combos
- (specific to a vendor and not open sourced)
-
- <workspace_root>/device/**/multitree_combos
- (specific to a vendor and are open sourced)
-
- Directories are returned specifically in this order, so that aosp can't be
- overridden, but vendor overrides device.
- """
-
- # TODO: When orchestrator is in its own git project remove the "make/" here
- yield os.path.join(workspace_root, "build/make/orchestrator/multitree_combos")
-
- dirs = ["vendor", "device"]
- for d in dirs:
- yield from FindDirs(os.path.join(workspace_root, d), "multitree_combos")
-
-
-def FindNamedConfig(workspace_root, shortname):
- """Find the config with the given shortname inside workspace_root.
-
- Config directories are searched in the order described in FindConfigDirs,
- and inside those directories, alphabetically."""
- filename = shortname + ".mcombo"
- for config_dir in FindConfigDirs(workspace_root):
- found = FindFile(config_dir, filename)
- if found:
- return found
- return None
-
-
-def ParseProductVariant(s):
- """Split a PRODUCT-VARIANT name, or return None if it doesn't match that pattern."""
- split = s.split("-")
- if len(split) != 2:
- return None
- return split
-
-
-def ChooseConfigFromArgs(workspace_root, args):
- """Return the config file we should use for the given argument,
- or null if there's no file that matches that."""
- if len(args) == 1:
- # Prefer PRODUCT-VARIANT syntax so if there happens to be a matching
- # file we don't match that.
- pv = ParseProductVariant(args[0])
- if pv:
- config = FindNamedConfig(workspace_root, pv[0])
- if config:
- return (config, pv[1])
- return None, None
- # Look for a specifically named file
- if os.path.isfile(args[0]):
- return (args[0], args[1] if len(args) > 1 else None)
- # That file didn't exist, return that we didn't find it.
- return None, None
-
-
-class ConfigException(Exception):
- ERROR_PARSE = "parse"
- ERROR_CYCLE = "cycle"
-
- def __init__(self, kind, message, locations, line=0):
- """Error thrown when loading and parsing configurations.
-
- Args:
- message: Error message to display to user
- locations: List of filenames of the include history. The 0 index one
- the location where the actual error occurred
- """
- if len(locations):
- s = locations[0]
- if line:
- s += ":"
- s += str(line)
- s += ": "
- else:
- s = ""
- s += message
- if len(locations):
- for loc in locations[1:]:
- s += "\n included from %s" % loc
- super().__init__(s)
- self.kind = kind
- self.message = message
- self.locations = locations
- self.line = line
-
-
-def LoadConfig(filename):
- """Load a config, including processing the inherits fields.
-
- Raises:
- ConfigException on errors
- """
- def LoadAndMerge(fn, visited):
- with open(fn) as f:
- try:
- contents = json.load(f)
- except json.decoder.JSONDecodeError as ex:
- if True:
- raise ConfigException(ConfigException.ERROR_PARSE, ex.msg, visited, ex.lineno)
- else:
- sys.stderr.write("exception %s" % ex.__dict__)
- raise ex
- # Merge all the parents into one data, with first-wins policy
- inherited_data = {}
- for parent in contents.get("inherits", []):
- if parent in visited:
- raise ConfigException(ConfigException.ERROR_CYCLE, "Cycle detected in inherits",
- visited)
- DeepMerge(inherited_data, LoadAndMerge(parent, [parent,] + visited))
- # Then merge inherited_data into contents, but what's already there will win.
- DeepMerge(contents, inherited_data)
- contents.pop("inherits", None)
- return contents
- return LoadAndMerge(filename, [filename,])
-
-
-def DeepMerge(merged, addition):
- """Merge all fields of addition into merged. Pre-existing fields win."""
- for k, v in addition.items():
- if k in merged:
- if isinstance(v, dict) and isinstance(merged[k], dict):
- DeepMerge(merged[k], v)
- else:
- merged[k] = v
-
-
-def Lunch(args):
- """Handle the lunch command."""
- # Check that we're at the top of a multitree workspace
- # TODO: Choose the right sentinel file
- if not os.path.exists("build/make/orchestrator"):
- sys.stderr.write("ERROR: lunch.py must be run from the root of a multi-tree workspace\n")
- return EXIT_STATUS_ERROR
-
- # Choose the config file
- config_file, variant = ChooseConfigFromArgs(".", args)
-
- if config_file == None:
- sys.stderr.write("Can't find lunch combo file for: %s\n" % " ".join(args))
- return EXIT_STATUS_NEED_HELP
- if variant == None:
- sys.stderr.write("Can't find variant for: %s\n" % " ".join(args))
- return EXIT_STATUS_NEED_HELP
-
- # Parse the config file
- try:
- config = LoadConfig(config_file)
- except ConfigException as ex:
- sys.stderr.write(str(ex))
- return EXIT_STATUS_ERROR
-
- # Fail if the lunchable bit isn't set, because this isn't a usable config
- if not config.get("lunchable", False):
- sys.stderr.write("%s: Lunch config file (or inherited files) does not have the 'lunchable'"
- % config_file)
- sys.stderr.write(" flag set, which means it is probably not a complete lunch spec.\n")
-
- # All the validation has passed, so print the name of the file and the variant
- sys.stdout.write("%s\n" % config_file)
- sys.stdout.write("%s\n" % variant)
-
- return EXIT_STATUS_OK
-
-
-def FindAllComboFiles(workspace_root):
- """Find all .mcombo files in the prescribed locations in the tree."""
- for dir in FindConfigDirs(workspace_root):
- for file in WalkPaths(dir, lambda x: x.endswith(".mcombo")):
- yield file
-
-
-def IsFileLunchable(config_file):
- """Parse config_file, flatten the inheritance, and return whether it can be
- used as a lunch target."""
- try:
- config = LoadConfig(config_file)
- except ConfigException as ex:
- sys.stderr.write("%s" % ex)
- return False
- return config.get("lunchable", False)
-
-
-def FindAllLunchable(workspace_root):
- """Find all mcombo files in the tree (rooted at workspace_root) that when
- parsed (and inheritance is flattened) have lunchable: true."""
- for f in [x for x in FindAllComboFiles(workspace_root) if IsFileLunchable(x)]:
- yield f
-
-
-def List():
- """Handle the --list command."""
- for f in sorted(FindAllLunchable(".")):
- print(f)
-
-
-def Print(args):
- """Handle the --print command."""
- # Parse args
- if len(args) == 0:
- config_file = os.environ.get("TARGET_BUILD_COMBO")
- if not config_file:
- sys.stderr.write("TARGET_BUILD_COMBO not set. Run lunch or pass a combo file.\n")
- return EXIT_STATUS_NEED_HELP
- elif len(args) == 1:
- config_file = args[0]
- else:
- return EXIT_STATUS_NEED_HELP
-
- # Parse the config file
- try:
- config = LoadConfig(config_file)
- except ConfigException as ex:
- sys.stderr.write(str(ex))
- return EXIT_STATUS_ERROR
-
- # Print the config in json form
- json.dump(config, sys.stdout, indent=4)
-
- return EXIT_STATUS_OK
-
-
-def main(argv):
- if len(argv) < 2 or argv[1] == "-h" or argv[1] == "--help":
- return EXIT_STATUS_NEED_HELP
-
- if len(argv) == 2 and argv[1] == "--list":
- List()
- return EXIT_STATUS_OK
-
- if len(argv) == 2 and argv[1] == "--print":
- return Print(argv[2:])
- return EXIT_STATUS_OK
-
- if (len(argv) == 2 or len(argv) == 3) and argv[1] == "--lunch":
- return Lunch(argv[2:])
-
- sys.stderr.write("Unknown lunch command: %s\n" % " ".join(argv[1:]))
- return EXIT_STATUS_NEED_HELP
-
-if __name__ == "__main__":
- sys.exit(main(sys.argv))
-
-
-# vim: sts=4:ts=4:sw=4
diff --git a/orchestrator/core/test/configs/another/bad.mcombo b/orchestrator/core/test/configs/another/bad.mcombo
deleted file mode 100644
index 0967ef4..0000000
--- a/orchestrator/core/test/configs/another/bad.mcombo
+++ /dev/null
@@ -1 +0,0 @@
-{}
diff --git a/orchestrator/core/test/configs/another/dir/a b/orchestrator/core/test/configs/another/dir/a
deleted file mode 100644
index 7898192..0000000
--- a/orchestrator/core/test/configs/another/dir/a
+++ /dev/null
@@ -1 +0,0 @@
-a
diff --git a/orchestrator/core/test/configs/b-eng b/orchestrator/core/test/configs/b-eng
deleted file mode 100644
index eceb3f3..0000000
--- a/orchestrator/core/test/configs/b-eng
+++ /dev/null
@@ -1 +0,0 @@
-INVALID FILE
diff --git a/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/b.mcombo b/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/b.mcombo
deleted file mode 100644
index 8cc8370..0000000
--- a/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/b.mcombo
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "lunchable": "true"
-}
diff --git a/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/nested/nested.mcombo b/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/nested/nested.mcombo
deleted file mode 100644
index 0967ef4..0000000
--- a/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/nested/nested.mcombo
+++ /dev/null
@@ -1 +0,0 @@
-{}
diff --git a/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/not_a_combo.txt b/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/not_a_combo.txt
deleted file mode 100644
index f9805f2..0000000
--- a/orchestrator/core/test/configs/build/make/orchestrator/multitree_combos/not_a_combo.txt
+++ /dev/null
@@ -1 +0,0 @@
-not a combo file
diff --git a/orchestrator/core/test/configs/device/aa/bb/multitree_combos/b.mcombo b/orchestrator/core/test/configs/device/aa/bb/multitree_combos/b.mcombo
deleted file mode 100644
index 0967ef4..0000000
--- a/orchestrator/core/test/configs/device/aa/bb/multitree_combos/b.mcombo
+++ /dev/null
@@ -1 +0,0 @@
-{}
diff --git a/orchestrator/core/test/configs/device/aa/bb/multitree_combos/d.mcombo b/orchestrator/core/test/configs/device/aa/bb/multitree_combos/d.mcombo
deleted file mode 100644
index 0967ef4..0000000
--- a/orchestrator/core/test/configs/device/aa/bb/multitree_combos/d.mcombo
+++ /dev/null
@@ -1 +0,0 @@
-{}
diff --git a/orchestrator/core/test/configs/device/aa/bb/multitree_combos/v.mcombo b/orchestrator/core/test/configs/device/aa/bb/multitree_combos/v.mcombo
deleted file mode 100644
index 0967ef4..0000000
--- a/orchestrator/core/test/configs/device/aa/bb/multitree_combos/v.mcombo
+++ /dev/null
@@ -1 +0,0 @@
-{}
diff --git a/orchestrator/core/test/configs/device/this/one/is/deeper/than/will/be/found/by/the/ttl/multitree_combos/too_deep.mcombo b/orchestrator/core/test/configs/device/this/one/is/deeper/than/will/be/found/by/the/ttl/multitree_combos/too_deep.mcombo
deleted file mode 100644
index e69de29..0000000
--- a/orchestrator/core/test/configs/device/this/one/is/deeper/than/will/be/found/by/the/ttl/multitree_combos/too_deep.mcombo
+++ /dev/null
diff --git a/orchestrator/core/test/configs/parsing/cycles/1.mcombo b/orchestrator/core/test/configs/parsing/cycles/1.mcombo
deleted file mode 100644
index ab8fe33..0000000
--- a/orchestrator/core/test/configs/parsing/cycles/1.mcombo
+++ /dev/null
@@ -1,5 +0,0 @@
-{
- "inherits": [
- "test/configs/parsing/cycles/2.mcombo"
- ]
-}
diff --git a/orchestrator/core/test/configs/parsing/cycles/2.mcombo b/orchestrator/core/test/configs/parsing/cycles/2.mcombo
deleted file mode 100644
index 2b774d0..0000000
--- a/orchestrator/core/test/configs/parsing/cycles/2.mcombo
+++ /dev/null
@@ -1,6 +0,0 @@
-{
- "inherits": [
- "test/configs/parsing/cycles/3.mcombo"
- ]
-}
-
diff --git a/orchestrator/core/test/configs/parsing/cycles/3.mcombo b/orchestrator/core/test/configs/parsing/cycles/3.mcombo
deleted file mode 100644
index 41b629b..0000000
--- a/orchestrator/core/test/configs/parsing/cycles/3.mcombo
+++ /dev/null
@@ -1,6 +0,0 @@
-{
- "inherits": [
- "test/configs/parsing/cycles/1.mcombo"
- ]
-}
-
diff --git a/orchestrator/core/test/configs/parsing/merge/1.mcombo b/orchestrator/core/test/configs/parsing/merge/1.mcombo
deleted file mode 100644
index a5a57d7..0000000
--- a/orchestrator/core/test/configs/parsing/merge/1.mcombo
+++ /dev/null
@@ -1,13 +0,0 @@
-{
- "inherits": [
- "test/configs/parsing/merge/2.mcombo",
- "test/configs/parsing/merge/3.mcombo"
- ],
- "in_1": "1",
- "in_1_2": "1",
- "merged": {
- "merged_1": "1",
- "merged_1_2": "1"
- },
- "dict_1": { "a" : "b" }
-}
diff --git a/orchestrator/core/test/configs/parsing/merge/2.mcombo b/orchestrator/core/test/configs/parsing/merge/2.mcombo
deleted file mode 100644
index 00963e2..0000000
--- a/orchestrator/core/test/configs/parsing/merge/2.mcombo
+++ /dev/null
@@ -1,12 +0,0 @@
-{
- "in_1_2": "2",
- "in_2": "2",
- "in_2_3": "2",
- "merged": {
- "merged_1_2": "2",
- "merged_2": "2",
- "merged_2_3": "2"
- },
- "dict_2": { "a" : "b" }
-}
-
diff --git a/orchestrator/core/test/configs/parsing/merge/3.mcombo b/orchestrator/core/test/configs/parsing/merge/3.mcombo
deleted file mode 100644
index 5fc9d90..0000000
--- a/orchestrator/core/test/configs/parsing/merge/3.mcombo
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "in_3": "3",
- "in_2_3": "3",
- "merged": {
- "merged_3": "3",
- "merged_2_3": "3"
- },
- "dict_3": { "a" : "b" }
-}
-
diff --git a/orchestrator/core/test/configs/vendor/aa/bb/multitree_combos/b.mcombo b/orchestrator/core/test/configs/vendor/aa/bb/multitree_combos/b.mcombo
deleted file mode 100644
index 0967ef4..0000000
--- a/orchestrator/core/test/configs/vendor/aa/bb/multitree_combos/b.mcombo
+++ /dev/null
@@ -1 +0,0 @@
-{}
diff --git a/orchestrator/core/test/configs/vendor/aa/bb/multitree_combos/v.mcombo b/orchestrator/core/test/configs/vendor/aa/bb/multitree_combos/v.mcombo
deleted file mode 100644
index 0967ef4..0000000
--- a/orchestrator/core/test/configs/vendor/aa/bb/multitree_combos/v.mcombo
+++ /dev/null
@@ -1 +0,0 @@
-{}
diff --git a/orchestrator/core/test/configs/vendor/this/one/is/deeper/than/will/be/found/by/the/ttl/multitree_combos/too_deep.mcombo b/orchestrator/core/test/configs/vendor/this/one/is/deeper/than/will/be/found/by/the/ttl/multitree_combos/too_deep.mcombo
deleted file mode 100644
index e69de29..0000000
--- a/orchestrator/core/test/configs/vendor/this/one/is/deeper/than/will/be/found/by/the/ttl/multitree_combos/too_deep.mcombo
+++ /dev/null
diff --git a/orchestrator/core/test_lunch.py b/orchestrator/core/test_lunch.py
deleted file mode 100755
index 3c39493..0000000
--- a/orchestrator/core/test_lunch.py
+++ /dev/null
@@ -1,128 +0,0 @@
-#!/usr/bin/env python3
-#
-# Copyright (C) 2008 The Android Open Source Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-import unittest
-
-sys.dont_write_bytecode = True
-import lunch
-
-class TestStringMethods(unittest.TestCase):
-
- def test_find_dirs(self):
- self.assertEqual([x for x in lunch.FindDirs("test/configs", "multitree_combos")], [
- "test/configs/build/make/orchestrator/multitree_combos",
- "test/configs/device/aa/bb/multitree_combos",
- "test/configs/vendor/aa/bb/multitree_combos"])
-
- def test_find_file(self):
- # Finds the one in device first because this is searching from the root,
- # not using FindNamedConfig.
- self.assertEqual(lunch.FindFile("test/configs", "v.mcombo"),
- "test/configs/device/aa/bb/multitree_combos/v.mcombo")
-
- def test_find_config_dirs(self):
- self.assertEqual([x for x in lunch.FindConfigDirs("test/configs")], [
- "test/configs/build/make/orchestrator/multitree_combos",
- "test/configs/vendor/aa/bb/multitree_combos",
- "test/configs/device/aa/bb/multitree_combos"])
-
- def test_find_named_config(self):
- # Inside build/orchestrator, overriding device and vendor
- self.assertEqual(lunch.FindNamedConfig("test/configs", "b"),
- "test/configs/build/make/orchestrator/multitree_combos/b.mcombo")
-
- # Nested dir inside a combo dir
- self.assertEqual(lunch.FindNamedConfig("test/configs", "nested"),
- "test/configs/build/make/orchestrator/multitree_combos/nested/nested.mcombo")
-
- # Inside vendor, overriding device
- self.assertEqual(lunch.FindNamedConfig("test/configs", "v"),
- "test/configs/vendor/aa/bb/multitree_combos/v.mcombo")
-
- # Inside device
- self.assertEqual(lunch.FindNamedConfig("test/configs", "d"),
- "test/configs/device/aa/bb/multitree_combos/d.mcombo")
-
- # Make sure we don't look too deep (for performance)
- self.assertIsNone(lunch.FindNamedConfig("test/configs", "too_deep"))
-
-
- def test_choose_config_file(self):
- # Empty string argument
- self.assertEqual(lunch.ChooseConfigFromArgs("test/configs", [""]),
- (None, None))
-
- # A PRODUCT-VARIANT name
- self.assertEqual(lunch.ChooseConfigFromArgs("test/configs", ["v-eng"]),
- ("test/configs/vendor/aa/bb/multitree_combos/v.mcombo", "eng"))
-
- # A PRODUCT-VARIANT name that conflicts with a file
- self.assertEqual(lunch.ChooseConfigFromArgs("test/configs", ["b-eng"]),
- ("test/configs/build/make/orchestrator/multitree_combos/b.mcombo", "eng"))
-
- # A PRODUCT-VARIANT that doesn't exist
- self.assertEqual(lunch.ChooseConfigFromArgs("test/configs", ["z-user"]),
- (None, None))
-
- # An explicit file
- self.assertEqual(lunch.ChooseConfigFromArgs("test/configs",
- ["test/configs/build/make/orchestrator/multitree_combos/b.mcombo", "eng"]),
- ("test/configs/build/make/orchestrator/multitree_combos/b.mcombo", "eng"))
-
- # An explicit file that doesn't exist
- self.assertEqual(lunch.ChooseConfigFromArgs("test/configs",
- ["test/configs/doesnt_exist.mcombo", "eng"]),
- (None, None))
-
- # An explicit file without a variant should fail
- self.assertEqual(lunch.ChooseConfigFromArgs("test/configs",
- ["test/configs/build/make/orchestrator/multitree_combos/b.mcombo"]),
- ("test/configs/build/make/orchestrator/multitree_combos/b.mcombo", None))
-
-
- def test_config_cycles(self):
- # Test that we catch cycles
- with self.assertRaises(lunch.ConfigException) as context:
- lunch.LoadConfig("test/configs/parsing/cycles/1.mcombo")
- self.assertEqual(context.exception.kind, lunch.ConfigException.ERROR_CYCLE)
-
- def test_config_merge(self):
- # Test the merge logic
- self.assertEqual(lunch.LoadConfig("test/configs/parsing/merge/1.mcombo"), {
- "in_1": "1",
- "in_1_2": "1",
- "merged": {"merged_1": "1",
- "merged_1_2": "1",
- "merged_2": "2",
- "merged_2_3": "2",
- "merged_3": "3"},
- "dict_1": {"a": "b"},
- "in_2": "2",
- "in_2_3": "2",
- "dict_2": {"a": "b"},
- "in_3": "3",
- "dict_3": {"a": "b"}
- })
-
- def test_list(self):
- self.assertEqual(sorted(lunch.FindAllLunchable("test/configs")),
- ["test/configs/build/make/orchestrator/multitree_combos/b.mcombo"])
-
-if __name__ == "__main__":
- unittest.main()
-
-# vim: sts=4:ts=4:sw=4
diff --git a/orchestrator/multitree_combos/test.mcombo b/orchestrator/multitree_combos/test.mcombo
deleted file mode 100644
index 3ad0717..0000000
--- a/orchestrator/multitree_combos/test.mcombo
+++ /dev/null
@@ -1,16 +0,0 @@
-{
- "lunchable": true,
- "system": {
- "tree": "inner_tree_system",
- "product": "system_lunch_product"
- },
- "vendor": {
- "tree": "inner_tree_vendor",
- "product": "vendor_lunch_product"
- },
- "modules": {
- "com.android.something": {
- "tree": "inner_tree_module"
- }
- }
-}
diff --git a/rbesetup.sh b/rbesetup.sh
index 3b0e7cf..8386628 100644
--- a/rbesetup.sh
+++ b/rbesetup.sh
@@ -33,20 +33,15 @@
# This function prefixes the given command with appropriate variables needed
# for the build to be executed with RBE.
function use_rbe() {
- local RBE_LOG_DIR="/tmp"
local RBE_BINARIES_DIR="prebuilts/remoteexecution-client/latest"
local DOCKER_IMAGE="gcr.io/androidbuild-re-dockerimage/android-build-remoteexec-image@sha256:582efb38f0c229ea39952fff9e132ccbe183e14869b39888010dacf56b360d62"
# Do not set an invocation-ID and let reproxy auto-generate one.
USE_RBE="true" \
- FLAG_server_address="unix:///tmp/reproxy_$RANDOM.sock" \
FLAG_exec_root="$(gettop)" \
FLAG_platform="container-image=docker://${DOCKER_IMAGE}" \
RBE_use_application_default_credentials="true" \
- RBE_log_dir="${RBE_LOG_DIR}" \
RBE_reproxy_wait_seconds="20" \
- RBE_output_dir="${RBE_LOG_DIR}" \
- RBE_log_path="text://${RBE_LOG_DIR}/reproxy_log.txt" \
RBE_CXX_EXEC_STRATEGY="remote_local_fallback" \
RBE_cpp_dependency_scanner_plugin="${RBE_BINARIES_DIR}/dependency_scanner_go_plugin.so" \
RBE_DIR=${RBE_BINARIES_DIR} \
diff --git a/shell_utils.sh b/shell_utils.sh
new file mode 100644
index 0000000..9de5a50
--- /dev/null
+++ b/shell_utils.sh
@@ -0,0 +1,74 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+function gettop
+{
+ local TOPFILE=build/make/core/envsetup.mk
+ # The ${TOP:-} expansion allows this to work even with set -u
+ if [ -n "${TOP:-}" -a -f "${TOP:-}/$TOPFILE" ] ; then
+ # The following circumlocution ensures we remove symlinks from TOP.
+ (cd "$TOP"; PWD= /bin/pwd)
+ else
+ if [ -f $TOPFILE ] ; then
+ # The following circumlocution (repeated below as well) ensures
+ # that we record the true directory name and not one that is
+ # faked up with symlink names.
+ PWD= /bin/pwd
+ else
+ local HERE=$PWD
+ local T=
+ while [ \( ! \( -f $TOPFILE \) \) -a \( "$PWD" != "/" \) ]; do
+ \cd ..
+ T=`PWD= /bin/pwd -P`
+ done
+ \cd "$HERE"
+ if [ -f "$T/$TOPFILE" ]; then
+ echo "$T"
+ fi
+ fi
+ fi
+}
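The `PWD= /bin/pwd` circumlocution clears the shell's logical $PWD for that single command, forcing the external pwd to compute the physical, symlink-free path. A quick demonstration in a scratch directory:

```bash
mkdir -p /tmp/real/tree && ln -sfn /tmp/real /tmp/link
cd /tmp/link/tree
pwd              # /tmp/link/tree -- logical path, keeps the symlink name
PWD= /bin/pwd    # /tmp/real/tree -- physical path, symlinks resolved
```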
+
+# Sets TOP, or if the root of the tree can't be found, prints a message and
+# exits. Since this function exits, it should not be called from functions
+# defined in envsetup.sh.
+if [ -z "${IMPORTING_ENVSETUP:-}" ] ; then
+function require_top
+{
+ TOP=$(gettop)
+ if [[ ! $TOP ]] ; then
+ echo "Can not locate root of source tree. $(basename $0) must be run from within the Android source tree." >&2
+ exit 1
+ fi
+}
+fi
+
+function getoutdir
+{
+ local top=$(gettop)
+ local out_dir="${OUT_DIR:-}"
+ if [[ -z "${out_dir}" ]]; then
+ if [[ -n "${OUT_DIR_COMMON_BASE:-}" && -n "${top}" ]]; then
+ out_dir="${OUT_DIR_COMMON_BASE}/$(basename ${top})"
+ else
+ out_dir="out"
+ fi
+ fi
+ if [[ "${out_dir}" != /* ]]; then
+ out_dir="${top}/${out_dir}"
+ fi
+ echo "${out_dir}"
+}
+
+
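getoutdir resolves the output directory in a fixed priority order. The following illustrates the outcomes, assuming the tree is checked out at /src/aosp:

```bash
unset OUT_DIR OUT_DIR_COMMON_BASE
getoutdir                                 # /src/aosp/out (the default)
OUT_DIR_COMMON_BASE=/big/disk getoutdir   # /big/disk/aosp (basename of the top)
OUT_DIR=custom getoutdir                  # /src/aosp/custom (relative paths anchor at the top)
OUT_DIR=/tmp/out getoutdir                # /tmp/out (absolute OUT_DIR is used as-is)
```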
diff --git a/target/OWNERS b/target/OWNERS
deleted file mode 100644
index feb2742..0000000
--- a/target/OWNERS
+++ /dev/null
@@ -1 +0,0 @@
-hansson@google.com
diff --git a/target/board/Android.mk b/target/board/Android.mk
index baa3d3a..21c0c10 100644
--- a/target/board/Android.mk
+++ b/target/board/Android.mk
@@ -19,8 +19,11 @@
ifndef board_info_txt
board_info_txt := $(wildcard $(TARGET_DEVICE_DIR)/board-info.txt)
endif
-$(INSTALLED_ANDROID_INFO_TXT_TARGET): $(board_info_txt) build/make/tools/check_radio_versions.py
- $(hide) build/make/tools/check_radio_versions.py $< $(BOARD_INFO_CHECK)
+CHECK_RADIO_VERSIONS := $(HOST_OUT_EXECUTABLES)/check_radio_versions$(HOST_EXECUTABLE_SUFFIX)
+$(INSTALLED_ANDROID_INFO_TXT_TARGET): $(board_info_txt) $(CHECK_RADIO_VERSIONS)
+ $(hide) $(CHECK_RADIO_VERSIONS) \
+ --board_info_txt $(board_info_txt) \
+ --board_info_check $(BOARD_INFO_CHECK)
$(call pretty,"Generated: ($@)")
ifdef board_info_txt
$(hide) grep -v '#' $< > $@
diff --git a/target/board/BoardConfigEmuCommon.mk b/target/board/BoardConfigEmuCommon.mk
index cc5e3ab..6ed08f0 100644
--- a/target/board/BoardConfigEmuCommon.mk
+++ b/target/board/BoardConfigEmuCommon.mk
@@ -33,7 +33,7 @@
BOARD_BUILD_SUPER_IMAGE_BY_DEFAULT := true
# 8G + 8M
-BOARD_SUPER_PARTITION_SIZE := 8598323200
+BOARD_SUPER_PARTITION_SIZE ?= 8598323200
BOARD_SUPER_PARTITION_GROUPS := emulator_dynamic_partitions
BOARD_EMULATOR_DYNAMIC_PARTITIONS_PARTITION_LIST := \
@@ -53,7 +53,7 @@
TARGET_COPY_OUT_SYSTEM_DLKM := system_dlkm
# 8G
-BOARD_EMULATOR_DYNAMIC_PARTITIONS_SIZE := 8589934592
+BOARD_EMULATOR_DYNAMIC_PARTITIONS_SIZE ?= 8589934592
#vendor boot
BOARD_INCLUDE_DTB_IN_BOOTIMG := false
@@ -70,6 +70,5 @@
BOARD_VENDORIMAGE_FILE_SYSTEM_TYPE := ext4
BOARD_FLASH_BLOCK_SIZE := 512
-DEVICE_MATRIX_FILE := device/generic/goldfish/compatibility_matrix.xml
BOARD_SEPOLICY_DIRS += device/generic/goldfish/sepolicy/common
diff --git a/target/board/BoardConfigGsiCommon.mk b/target/board/BoardConfigGsiCommon.mk
index 53714a8..4d95b33 100644
--- a/target/board/BoardConfigGsiCommon.mk
+++ b/target/board/BoardConfigGsiCommon.mk
@@ -3,6 +3,8 @@
# Common compile-time definitions for GSI
# Builds upon the mainline config.
#
+# See device/generic/common/README.md for more details.
+#
include build/make/target/board/BoardConfigMainlineCommon.mk
@@ -17,6 +19,12 @@
TARGET_USERIMAGES_SPARSE_EXT_DISABLED := true
TARGET_USERIMAGES_SPARSE_EROFS_DISABLED := true
+# Enable the system_dlkm image, creating a symlink in GSI to support
+# devices with a system_dlkm partition
+BOARD_USES_SYSTEM_DLKMIMAGE := true
+BOARD_SYSTEM_DLKMIMAGE_FILE_SYSTEM_TYPE := ext4
+TARGET_COPY_OUT_SYSTEM_DLKM := system_dlkm
+
 # GSI also includes make_f2fs to support userdata partition in f2fs
# for some devices
TARGET_USERIMAGES_USE_F2FS := true
@@ -80,6 +88,3 @@
# Setup a vendor image to let PRODUCT_VENDOR_PROPERTIES does not affect GSI
BOARD_VENDORIMAGE_FILE_SYSTEM_TYPE := ext4
-
-# Disable 64 bit mediadrmserver
-TARGET_ENABLE_MEDIADRM_64 :=
diff --git a/target/board/emulator_arm64/device.mk b/target/board/emulator_arm64/device.mk
index dc84192..d221e64 100644
--- a/target/board/emulator_arm64/device.mk
+++ b/target/board/emulator_arm64/device.mk
@@ -17,12 +17,3 @@
PRODUCT_SOONG_NAMESPACES += device/generic/goldfish # for libwifi-hal-emu
PRODUCT_SOONG_NAMESPACES += device/generic/goldfish-opengl # for goldfish deps.
-# Cuttlefish has GKI kernel prebuilts, so use those for the GKI boot.img.
-ifeq ($(TARGET_PREBUILT_KERNEL),)
- LOCAL_KERNEL := kernel/prebuilts/5.4/arm64/kernel-5.4-lz4
-else
- LOCAL_KERNEL := $(TARGET_PREBUILT_KERNEL)
-endif
-
-PRODUCT_COPY_FILES += \
- $(LOCAL_KERNEL):kernel
diff --git a/target/board/generic_arm64/BoardConfig.mk b/target/board/generic_arm64/BoardConfig.mk
index 45ed3da..e2d5fb4 100644
--- a/target/board/generic_arm64/BoardConfig.mk
+++ b/target/board/generic_arm64/BoardConfig.mk
@@ -52,6 +52,11 @@
TARGET_2ND_CPU_VARIANT := generic
endif
+# Include 64-bit mediaserver to support 64-bit only devices
+TARGET_DYNAMIC_64_32_MEDIASERVER := true
+# Include 64-bit drmserver to support 64-bit only devices
+TARGET_DYNAMIC_64_32_DRMSERVER := true
+
include build/make/target/board/BoardConfigGsiCommon.mk
# Some vendors still haven't cleaned up all device specific directories under
diff --git a/target/board/generic_riscv64/BoardConfig.mk b/target/board/generic_riscv64/BoardConfig.mk
new file mode 100644
index 0000000..906f7f0
--- /dev/null
+++ b/target/board/generic_riscv64/BoardConfig.mk
@@ -0,0 +1,28 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# riscv64 emulator specific definitions
+TARGET_ARCH := riscv64
+TARGET_ARCH_VARIANT :=
+TARGET_CPU_VARIANT := generic
+TARGET_CPU_ABI := riscv64
+
+# Include 64-bit mediaserver to support 64-bit only devices
+TARGET_DYNAMIC_64_32_MEDIASERVER := true
+
+include build/make/target/board/BoardConfigGsiCommon.mk
+
+# Temporary hack while prebuilt modules are missing riscv64.
+ALLOW_MISSING_DEPENDENCIES := true
diff --git a/target/board/generic_riscv64/README.txt b/target/board/generic_riscv64/README.txt
new file mode 100644
index 0000000..9811982
--- /dev/null
+++ b/target/board/generic_riscv64/README.txt
@@ -0,0 +1,7 @@
+The "generic_riscv64" product defines a non-hardware-specific riscv64 target
+without a bootloader.
+
+It is also the target to build the generic kernel image (GKI).
+
+It is not a product "base class"; no other products inherit
+from it or use it in any way.
diff --git a/target/board/generic_riscv64/device.mk b/target/board/generic_riscv64/device.mk
new file mode 100644
index 0000000..27a4175
--- /dev/null
+++ b/target/board/generic_riscv64/device.mk
@@ -0,0 +1,15 @@
+#
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
diff --git a/target/board/generic_riscv64/system_ext.prop b/target/board/generic_riscv64/system_ext.prop
new file mode 100644
index 0000000..42c4ef5
--- /dev/null
+++ b/target/board/generic_riscv64/system_ext.prop
@@ -0,0 +1,5 @@
+#
+# system.prop for generic riscv64 sdk
+#
+
+rild.libpath=/vendor/lib64/libreference-ril.so
diff --git a/target/board/generic_x86_64/BoardConfig.mk b/target/board/generic_x86_64/BoardConfig.mk
index 93694f2..36136f4 100755
--- a/target/board/generic_x86_64/BoardConfig.mk
+++ b/target/board/generic_x86_64/BoardConfig.mk
@@ -22,6 +22,11 @@
TARGET_2ND_ARCH := x86
TARGET_2ND_ARCH_VARIANT := x86_64
+# Include 64-bit mediaserver to support 64-bit only devices
+TARGET_DYNAMIC_64_32_MEDIASERVER := true
+# Include 64-bit drmserver to support 64-bit only devices
+TARGET_DYNAMIC_64_32_DRMSERVER := true
+
include build/make/target/board/BoardConfigGsiCommon.mk
ifndef BUILDING_GSI
diff --git a/target/board/gsi_arm64/BoardConfig.mk b/target/board/gsi_arm64/BoardConfig.mk
index db6f3f0..7910b1d 100644
--- a/target/board/gsi_arm64/BoardConfig.mk
+++ b/target/board/gsi_arm64/BoardConfig.mk
@@ -27,6 +27,11 @@
TARGET_2ND_CPU_ABI2 := armeabi
TARGET_2ND_CPU_VARIANT := generic
+# Include 64-bit mediaserver to support 64-bit only devices
+TARGET_DYNAMIC_64_32_MEDIASERVER := true
+# Include 64-bit drmserver to support 64-bit only devices
+TARGET_DYNAMIC_64_32_DRMSERVER := true
+
# TODO(b/111434759, b/111287060) SoC specific hacks
BOARD_ROOT_EXTRA_SYMLINKS += /vendor/lib/dsp:/dsp
BOARD_ROOT_EXTRA_SYMLINKS += /mnt/vendor/persist:/persist
diff --git a/target/board/linux_bionic/BoardConfig.mk b/target/board/linux_bionic/BoardConfig.mk
new file mode 100644
index 0000000..7fca911
--- /dev/null
+++ b/target/board/linux_bionic/BoardConfig.mk
@@ -0,0 +1,28 @@
+# Copyright (C) 2020 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# This "device" is only intended to be used for host Bionic build targets, so
+# (device) target architectures are irrelevant. However, the build system isn't
+# prepared to handle no target architectures at all, so pick something
+# arbitrarily.
+TARGET_ARCH := arm
+TARGET_ARCH_VARIANT := armv7-a-neon
+TARGET_CPU_VARIANT := generic
+TARGET_CPU_ABI := armeabi-v7a
+TARGET_CPU_ABI2 := armeabi
+
+HOST_CROSS_OS := linux_bionic
+HOST_CROSS_ARCH := x86_64
+HOST_CROSS_2ND_ARCH :=
diff --git a/target/board/linux_bionic/README.md b/target/board/linux_bionic/README.md
new file mode 100644
index 0000000..8db77f2
--- /dev/null
+++ b/target/board/linux_bionic/README.md
@@ -0,0 +1,6 @@
+This "device" is suitable for Soong-only builds to create Bionic binaries for
+Linux hosts:
+
+```
+build/soong/soong_ui.bash --make-mode --soong-only TARGET_PRODUCT=linux_bionic ...
+```
diff --git a/target/board/mainline_sdk/BoardConfig.mk b/target/board/mainline_sdk/BoardConfig.mk
index 84f8b2d..f5c2dc6 100644
--- a/target/board/mainline_sdk/BoardConfig.mk
+++ b/target/board/mainline_sdk/BoardConfig.mk
@@ -18,3 +18,6 @@
HOST_CROSS_OS := linux_bionic
HOST_CROSS_ARCH := x86_64
HOST_CROSS_2ND_ARCH :=
+
+# Required flag for non-64 bit devices from P.
+TARGET_USES_64_BIT_BINDER := true
diff --git a/target/board/module_arm64only/BoardConfig.mk b/target/board/module_arm64only/BoardConfig.mk
new file mode 100644
index 0000000..3cabf05
--- /dev/null
+++ b/target/board/module_arm64only/BoardConfig.mk
@@ -0,0 +1,21 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+include build/make/target/board/BoardConfigModuleCommon.mk
+
+TARGET_ARCH := arm64
+TARGET_ARCH_VARIANT := armv8-a
+TARGET_CPU_VARIANT := generic
+TARGET_CPU_ABI := arm64-v8a
diff --git a/target/board/module_arm64only/README.md b/target/board/module_arm64only/README.md
new file mode 100644
index 0000000..0dd1699
--- /dev/null
+++ b/target/board/module_arm64only/README.md
@@ -0,0 +1,2 @@
+This device is suitable for an unbundled module targeted specifically to an
+arm64 device. 32 bit binaries will not be built.
diff --git a/target/board/module_x86_64only/BoardConfig.mk b/target/board/module_x86_64only/BoardConfig.mk
new file mode 100644
index 0000000..b0676cb
--- /dev/null
+++ b/target/board/module_x86_64only/BoardConfig.mk
@@ -0,0 +1,20 @@
+# Copyright (C) 2020 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+include build/make/target/board/BoardConfigModuleCommon.mk
+
+TARGET_CPU_ABI := x86_64
+TARGET_ARCH := x86_64
+TARGET_ARCH_VARIANT := x86_64
diff --git a/target/board/module_x86_64only/README.md b/target/board/module_x86_64only/README.md
new file mode 100644
index 0000000..8fd7dc4
--- /dev/null
+++ b/target/board/module_x86_64only/README.md
@@ -0,0 +1,2 @@
+This device is suitable for an unbundled module targeted specifically to an
+x86_64 device. 32 bit binaries will not be built.
diff --git a/target/product/AndroidProducts.mk b/target/product/AndroidProducts.mk
index ee702e5..133dc73 100644
--- a/target/product/AndroidProducts.mk
+++ b/target/product/AndroidProducts.mk
@@ -35,7 +35,9 @@
ifneq ($(TARGET_BUILD_APPS),)
PRODUCT_MAKEFILES := \
$(LOCAL_DIR)/aosp_arm64.mk \
+ $(LOCAL_DIR)/aosp_arm64_fullmte.mk \
$(LOCAL_DIR)/aosp_arm.mk \
+ $(LOCAL_DIR)/aosp_riscv64.mk \
$(LOCAL_DIR)/aosp_x86_64.mk \
$(LOCAL_DIR)/aosp_x86.mk \
$(LOCAL_DIR)/full.mk \
@@ -45,7 +47,9 @@
PRODUCT_MAKEFILES := \
$(LOCAL_DIR)/aosp_64bitonly_x86_64.mk \
$(LOCAL_DIR)/aosp_arm64.mk \
+ $(LOCAL_DIR)/aosp_arm64_fullmte.mk \
$(LOCAL_DIR)/aosp_arm.mk \
+ $(LOCAL_DIR)/aosp_riscv64.mk \
$(LOCAL_DIR)/aosp_x86_64.mk \
$(LOCAL_DIR)/aosp_x86_arm.mk \
$(LOCAL_DIR)/aosp_x86.mk \
@@ -74,11 +78,14 @@
endif
PRODUCT_MAKEFILES += \
+ $(LOCAL_DIR)/linux_bionic.mk \
$(LOCAL_DIR)/mainline_sdk.mk \
$(LOCAL_DIR)/module_arm.mk \
$(LOCAL_DIR)/module_arm64.mk \
+ $(LOCAL_DIR)/module_arm64only.mk \
$(LOCAL_DIR)/module_x86.mk \
$(LOCAL_DIR)/module_x86_64.mk \
+ $(LOCAL_DIR)/module_x86_64only.mk \
COMMON_LUNCH_CHOICES := \
aosp_arm64-eng \
diff --git a/target/product/OWNERS b/target/product/OWNERS
index 30b1af6..008e4a2 100644
--- a/target/product/OWNERS
+++ b/target/product/OWNERS
@@ -7,4 +7,4 @@
# Android Go
per-file go_defaults.mk = gkaiser@google.com, kushg@google.com, rajekumar@google.com
per-file go_defaults_512.mk = gkaiser@google.com, kushg@google.com, rajekumar@google.com
-per-file go_defaults_common.mk = gkaiser@google.com, kushg@google.com, rajekumar@google.com
+per-file go_defaults_common.mk = gkaiser@google.com, kushg@google.com, rajekumar@google.com
\ No newline at end of file
diff --git a/target/product/angle_default.mk b/target/product/angle_default.mk
new file mode 100644
index 0000000..bea0be6
--- /dev/null
+++ b/target/product/angle_default.mk
@@ -0,0 +1,23 @@
+#
+# Copyright 2023 The Android Open-Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# To enable ANGLE as the default system GLES driver, add
+# $(call inherit-product, $(SRC_TARGET_DIR)/product/angle_default.mk) to the Makefile.
+
+$(call inherit-product, $(SRC_TARGET_DIR)/product/angle_supported.mk)
+
+PRODUCT_VENDOR_PROPERTIES += \
+ persist.graphics.egl=angle
diff --git a/target/product/angle_supported.mk b/target/product/angle_supported.mk
new file mode 100644
index 0000000..c83ff5f
--- /dev/null
+++ b/target/product/angle_supported.mk
@@ -0,0 +1,27 @@
+#
+# Copyright 2023 The Android Open-Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# To include ANGLE in the image build, add
+# $(call inherit-product, $(SRC_TARGET_DIR)/product/angle_supported.mk) to the Makefile.
+# By default, this will allow ANGLE binaries to coexist with native GLES drivers.
+
+PRODUCT_PACKAGES += \
+ libEGL_angle \
+ libGLESv1_CM_angle \
+ libGLESv2_angle
+
+# Set ro.gfx.angle.supported based on whether ANGLE is installed in the vendor partition
+PRODUCT_VENDOR_PROPERTIES += ro.gfx.angle.supported=true
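+
+# Illustrative check, not part of this makefile's contract (assumes a device
+# built with this config and reachable over adb): the property can be
+# inspected at runtime with
+#   adb shell getprop ro.gfx.angle.supported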
diff --git a/target/product/aosp_64bitonly_x86_64.mk b/target/product/aosp_64bitonly_x86_64.mk
index 4de4e0c..75fd3c8 100644
--- a/target/product/aosp_64bitonly_x86_64.mk
+++ b/target/product/aosp_64bitonly_x86_64.mk
@@ -51,7 +51,6 @@
#
# All components inherited here go to vendor image
#
-$(call inherit-product-if-exists, device/generic/goldfish/x86_64-vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/board/generic_x86_64/device.mk)
@@ -59,6 +58,9 @@
# Special settings for GSI releasing
#
ifeq (aosp_64bitonly_x86_64,$(TARGET_PRODUCT))
+# Build modules from source if this has not been pre-configured
+MODULE_BUILD_FROM_SOURCE ?= true
+
$(call inherit-product, $(SRC_TARGET_DIR)/product/gsi_release.mk)
endif
diff --git a/target/product/aosp_arm.mk b/target/product/aosp_arm.mk
index 90acc17..61c1316 100644
--- a/target/product/aosp_arm.mk
+++ b/target/product/aosp_arm.mk
@@ -49,7 +49,7 @@
#
# All components inherited here go to vendor image
#
-$(call inherit-product-if-exists, device/generic/goldfish/arm32-vendor.mk)
+$(call inherit-product-if-exists, build/make/target/product/ramdisk_stub.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/board/generic_x86/device.mk)
@@ -57,6 +57,9 @@
# Special settings for GSI releasing
#
ifeq (aosp_arm,$(TARGET_PRODUCT))
+# Build modules from source if this has not been pre-configured
+MODULE_BUILD_FROM_SOURCE ?= true
+
$(call inherit-product, $(SRC_TARGET_DIR)/product/gsi_release.mk)
endif
diff --git a/target/product/aosp_arm64.mk b/target/product/aosp_arm64.mk
index 01897b7..6c907db 100644
--- a/target/product/aosp_arm64.mk
+++ b/target/product/aosp_arm64.mk
@@ -43,6 +43,9 @@
$(call inherit-product, $(SRC_TARGET_DIR)/product/handheld_system_ext.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/telephony_system_ext.mk)
+# pKVM
+$(call inherit-product, packages/modules/Virtualization/apex/product_packages.mk)
+
#
# All components inherited here go to product image
#
@@ -59,6 +62,9 @@
# Special settings for GSI releasing
#
ifeq (aosp_arm64,$(TARGET_PRODUCT))
+# Build modules from source if this has not been pre-configured
+MODULE_BUILD_FROM_SOURCE ?= true
+
$(call inherit-product, $(SRC_TARGET_DIR)/product/gsi_release.mk)
endif
diff --git a/target/product/aosp_arm64_fullmte.mk b/target/product/aosp_arm64_fullmte.mk
new file mode 100644
index 0000000..ed6bd4a
--- /dev/null
+++ b/target/product/aosp_arm64_fullmte.mk
@@ -0,0 +1,27 @@
+# Copyright (C) 2023 The Android Open-Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+include $(SRC_TARGET_DIR)/product/fullmte.mk
+
+PRODUCT_ENFORCE_ARTIFACT_PATH_REQUIREMENTS := relaxed
+
+$(call inherit-product, $(SRC_TARGET_DIR)/product/aosp_arm64.mk)
+
+# Build modules from source if this has not been pre-configured
+MODULE_BUILD_FROM_SOURCE ?= true
+
+$(call inherit-product, $(SRC_TARGET_DIR)/product/gsi_release.mk)
+
+PRODUCT_NAME := aosp_arm64_fullmte
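+
+# Hypothetical build invocation for this product (the userdebug variant is an
+# assumption; MTE detection additionally requires MTE-capable hardware):
+#   lunch aosp_arm64_fullmte-userdebug && m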
diff --git a/target/product/aosp_product.mk b/target/product/aosp_product.mk
index e396ad1..a4c3a91 100644
--- a/target/product/aosp_product.mk
+++ b/target/product/aosp_product.mk
@@ -29,6 +29,7 @@
# More AOSP packages
PRODUCT_PACKAGES += \
+ initial-package-stopped-states-aosp.xml \
messaging \
PhotoTable \
preinstalled-packages-platform-aosp-product.xml \
diff --git a/target/product/aosp_riscv64.mk b/target/product/aosp_riscv64.mk
new file mode 100644
index 0000000..270a989
--- /dev/null
+++ b/target/product/aosp_riscv64.mk
@@ -0,0 +1,75 @@
+#
+# Copyright 2022 The Android Open-Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+PRODUCT_USE_DYNAMIC_PARTITIONS := true
+
+# The system image of aosp_riscv64-userdebug is a GSI for devices with:
+# - riscv64 user space
+# - 64-bit binder interface
+# - system-as-root
+# - VNDK enforcement
+# - compatible property override enabled
+
+# This is a build configuration for a full-featured build of the
+# Open-Source part of the tree. It's geared toward a US-centric
+# build quite specifically for the emulator, and might not be
+# entirely appropriate to inherit from for on-device configurations.
+
+# GSI for system/product & support 64-bit apps only
+$(call inherit-product, $(SRC_TARGET_DIR)/product/core_64_bit_only.mk)
+$(call inherit-product, $(SRC_TARGET_DIR)/product/mainline_system.mk)
+
+#
+# All components inherited here go to system_ext image
+#
+$(call inherit-product, $(SRC_TARGET_DIR)/product/handheld_system_ext.mk)
+$(call inherit-product, $(SRC_TARGET_DIR)/product/telephony_system_ext.mk)
+
+#
+# All components inherited here go to product image
+#
+$(call inherit-product, $(SRC_TARGET_DIR)/product/aosp_product.mk)
+
+#
+# All components inherited here go to vendor image
+#
+$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
+$(call inherit-product, $(SRC_TARGET_DIR)/board/generic_riscv64/device.mk)
+
+#
+# Special settings for GSI releasing
+#
+ifeq (aosp_riscv64,$(TARGET_PRODUCT))
+# Build modules from source if this has not been pre-configured
+MODULE_BUILD_FROM_SOURCE ?= true
+
+$(call inherit-product, $(SRC_TARGET_DIR)/product/gsi_release.mk)
+endif
+
+PRODUCT_ARTIFACT_PATH_REQUIREMENT_ALLOWED_LIST += \
+ root/init.zygote64.rc
+
+# TODO(b/206676167): This property can be removed when renderscript is removed.
+# Prevents framework from attempting to load renderscript libraries, which are
+# not supported on this architecture.
+PRODUCT_SYSTEM_PROPERTIES += \
+ config.disable_renderscript=1 \
+
+# This build configuration supports 64-bit apps only
+PRODUCT_NAME := aosp_riscv64
+PRODUCT_DEVICE := generic_riscv64
+PRODUCT_BRAND := Android
+PRODUCT_MODEL := AOSP on Riscv64
diff --git a/target/product/aosp_x86.mk b/target/product/aosp_x86.mk
index 7db2c0f..a2f0390 100644
--- a/target/product/aosp_x86.mk
+++ b/target/product/aosp_x86.mk
@@ -47,7 +47,6 @@
#
# All components inherited here go to vendor image
#
-$(call inherit-product-if-exists, device/generic/goldfish/x86-vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/board/generic_x86/device.mk)
@@ -56,6 +55,9 @@
# Special settings for GSI releasing
#
ifeq (aosp_x86,$(TARGET_PRODUCT))
+# Build modules from source if this has not been pre-configured
+MODULE_BUILD_FROM_SOURCE ?= true
+
$(call inherit-product, $(SRC_TARGET_DIR)/product/gsi_release.mk)
endif
diff --git a/target/product/aosp_x86_64.mk b/target/product/aosp_x86_64.mk
index b3cfae4..535ee3f 100644
--- a/target/product/aosp_x86_64.mk
+++ b/target/product/aosp_x86_64.mk
@@ -45,6 +45,9 @@
$(call inherit-product, $(SRC_TARGET_DIR)/product/handheld_system_ext.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/telephony_system_ext.mk)
+# pKVM
+$(call inherit-product, packages/modules/Virtualization/apex/product_packages.mk)
+
#
# All components inherited here go to product image
#
@@ -53,7 +56,6 @@
#
# All components inherited here go to vendor image
#
-$(call inherit-product-if-exists, device/generic/goldfish/x86_64-vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/board/generic_x86_64/device.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/non_ab_device.mk)
@@ -62,6 +64,9 @@
# Special settings for GSI releasing
#
ifeq (aosp_x86_64,$(TARGET_PRODUCT))
+# Build modules from source if this has not been pre-configured
+MODULE_BUILD_FROM_SOURCE ?= true
+
$(call inherit-product, $(SRC_TARGET_DIR)/product/gsi_release.mk)
endif
diff --git a/target/product/aosp_x86_arm.mk b/target/product/aosp_x86_arm.mk
index f96e068..39ad0d8 100644
--- a/target/product/aosp_x86_arm.mk
+++ b/target/product/aosp_x86_arm.mk
@@ -45,7 +45,6 @@
#
# All components inherited here go to vendor image
#
-$(call inherit-product-if-exists, device/generic/goldfish/x86-vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/board/generic_x86_arm/device.mk)
diff --git a/target/product/base_system.mk b/target/product/base_system.mk
index b0870c3..a3f5ab3 100644
--- a/target/product/base_system.mk
+++ b/target/product/base_system.mk
@@ -24,7 +24,7 @@
android.hidl.manager-V1.0-java \
android.hidl.memory@1.0-impl \
android.hidl.memory@1.0-impl.vendor \
- android.system.suspend@1.0-service \
+ android.system.suspend-service \
android.test.base \
android.test.mock \
android.test.runner \
@@ -53,8 +53,11 @@
com.android.adservices \
com.android.appsearch \
com.android.btservices \
+ com.android.configinfrastructure \
com.android.conscrypt \
+ com.android.devicelock \
com.android.extservices \
+ com.android.healthfitness \
com.android.i18n \
com.android.ipsec \
com.android.location.provider \
@@ -65,12 +68,14 @@
com.android.os.statsd \
com.android.permission \
com.android.resolv \
+ com.android.rkpd \
com.android.neuralnetworks \
com.android.scheduling \
com.android.sdkext \
com.android.tethering \
com.android.tzdata \
com.android.uwb \
+ com.android.virt \
com.android.wifi \
ContactsProvider \
content \
@@ -118,6 +123,7 @@
incident-helper-cmd \
init.environ.rc \
init_system \
+ initial-package-stopped-states.xml \
input \
installd \
IntentResolver \
@@ -202,7 +208,6 @@
libvulkan \
libwilhelm \
linker \
- linkerconfig \
llkd \
lmkd \
LocalTransport \
@@ -221,6 +226,7 @@
mke2fs \
mkfs.erofs \
monkey \
+ mtectrl \
mtpd \
ndc \
netd \
@@ -236,6 +242,7 @@
platform.xml \
pm \
pppd \
+ preinstalled-packages-asl-files.xml \
preinstalled-packages-platform.xml \
privapp-permissions-platform.xml \
prng_seeder \
@@ -275,7 +282,6 @@
traced \
traced_probes \
tune2fs \
- tzdatacheck \
uiautomator \
uinput \
uncrypt \
@@ -284,7 +290,6 @@
viewcompiler \
voip-common \
vold \
- WallpaperBackup \
watchdogd \
wificond \
wifi.rc \
@@ -295,11 +300,9 @@
system_manifest.xml \
system_compatibility_matrix.xml \
-# HWASAN runtime for SANITIZE_TARGET=hwaddress builds
-ifneq (,$(filter hwaddress,$(SANITIZE_TARGET)))
- PRODUCT_PACKAGES += \
- libclang_rt.hwasan.bootstrap
-endif
+PRODUCT_PACKAGES_ARM64 := libclang_rt.hwasan \
+ libclang_rt.hwasan.bootstrap \
+ libc_hwasan \
# Jacoco agent JARS to be built and installed, if any.
ifeq ($(EMMA_INSTRUMENT),true)
@@ -318,6 +321,16 @@
endif # EMMA_INSTRUMENT_STATIC
endif # EMMA_INSTRUMENT
+ifeq (,$(DISABLE_WALLPAPER_BACKUP))
+ PRODUCT_PACKAGES += \
+ WallpaperBackup
+endif
+
+# For testing purposes
+ifeq ($(FORCE_AUDIO_SILENT), true)
+ PRODUCT_SYSTEM_PROPERTIES += ro.audio.silent=1
+endif
+
# Host tools to install
PRODUCT_HOST_PACKAGES += \
BugReport \
@@ -345,7 +358,6 @@
sqlite3 \
tinyplay \
tune2fs \
- tzdatacheck \
unwind_info \
unwind_reg_info \
unwind_symbols \
@@ -378,12 +390,14 @@
iotop \
iperf3 \
iw \
+ layertracegenerator \
+ libclang_rt.ubsan_standalone \
logpersist.start \
logtagd.rc \
procrank \
profcollectd \
profcollectctl \
- remount \
+ record_binder \
servicedispatcher \
showmap \
sqlite3 \
@@ -402,7 +416,11 @@
# The set of packages whose code can be loaded by the system server.
PRODUCT_SYSTEM_SERVER_APPS += \
SettingsProvider \
+
+ifeq (,$(DISABLE_WALLPAPER_BACKUP))
+ PRODUCT_SYSTEM_SERVER_APPS += \
WallpaperBackup
+endif
PRODUCT_PACKAGES_DEBUG_JAVA_COVERAGE := \
libdumpcoverage
diff --git a/target/product/base_vendor.mk b/target/product/base_vendor.mk
index 5004b85..97809c2 100644
--- a/target/product/base_vendor.mk
+++ b/target/product/base_vendor.mk
@@ -29,6 +29,11 @@
shell_and_utilities_recovery \
watchdogd.recovery \
+PRODUCT_VENDOR_PROPERTIES += \
+ ro.recovery.usb.vid?=18D1 \
+ ro.recovery.usb.adb.pid?=D001 \
+ ro.recovery.usb.fastboot.pid?=4EE0 \
+
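+# Note (descriptive comment, based on the general semantics of "?=" in
+# property assignments): these are overridable defaults, so a device that
+# assigns its own VID/PID without "?=" takes precedence.
+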
# These had been pulled in via init_second_stage.recovery, but may not be needed.
PRODUCT_HOST_PACKAGES += \
e2fsdroid \
@@ -41,12 +46,12 @@
# Base modules and settings for the vendor partition.
PRODUCT_PACKAGES += \
- android.hardware.cas@1.2-service \
- android.hardware.media.omx@1.0-service \
+ android.hardware.cas-service.example \
boringssl_self_test_vendor \
dumpsys_vendor \
fs_config_files_nonsystem \
fs_config_dirs_nonsystem \
+ gpu_counter_producer \
gralloc.default \
group_odm \
group_vendor \
@@ -69,7 +74,19 @@
selinux_policy_nonsystem \
shell_and_utilities_vendor \
-# Base module when shipping api level is less than or equal to 29
+# OMX is not supported on 64bit_only builds
+# It is only supported when SHIPPING_API_LEVEL is less than or equal to 33
+ifneq ($(TARGET_SUPPORTS_OMX_SERVICE),false)
+ PRODUCT_PACKAGES_SHIPPING_API_LEVEL_33 += \
+ android.hardware.media.omx@1.0-service \
+
+endif
+
+# Base modules when shipping api level is less than or equal to 33
+PRODUCT_PACKAGES_SHIPPING_API_LEVEL_33 += \
+ android.hardware.cas@1.2-service \
+
+# Base modules when shipping api level is less than or equal to 29
PRODUCT_PACKAGES_SHIPPING_API_LEVEL_29 += \
android.hardware.configstore@1.1-service \
vndservice \
diff --git a/target/product/core_64_bit.mk b/target/product/core_64_bit.mk
index b9d22a6..e0c4d53 100644
--- a/target/product/core_64_bit.mk
+++ b/target/product/core_64_bit.mk
@@ -23,7 +23,9 @@
# for 32-bit only.
# Copy the 64-bit primary, 32-bit secondary zygote startup script
-PRODUCT_COPY_FILES += system/core/rootdir/init.zygote64_32.rc:system/etc/init/hw/init.zygote64_32.rc
+PRODUCT_COPY_FILES += \
+ system/core/rootdir/init.zygote64.rc:system/etc/init/hw/init.zygote64.rc \
+ system/core/rootdir/init.zygote64_32.rc:system/etc/init/hw/init.zygote64_32.rc \
# Set the zygote property to select the 64-bit primary, 32-bit secondary script
# This line must be parsed before the one in core_minimal.mk
diff --git a/target/product/core_64_bit_only.mk b/target/product/core_64_bit_only.mk
index 061728f..fc2b8e5 100644
--- a/target/product/core_64_bit_only.mk
+++ b/target/product/core_64_bit_only.mk
@@ -31,3 +31,4 @@
TARGET_SUPPORTS_32_BIT_APPS := false
TARGET_SUPPORTS_64_BIT_APPS := true
+TARGET_SUPPORTS_OMX_SERVICE := false
diff --git a/target/product/default_art_config.mk b/target/product/default_art_config.mk
index e2bb9d5..d970203 100644
--- a/target/product/default_art_config.mk
+++ b/target/product/default_art_config.mk
@@ -55,7 +55,10 @@
com.android.adservices:framework-sdksandbox \
com.android.appsearch:framework-appsearch \
com.android.btservices:framework-bluetooth \
+ com.android.configinfrastructure:framework-configinfrastructure \
com.android.conscrypt:conscrypt \
+ com.android.devicelock:framework-devicelock \
+ com.android.healthfitness:framework-healthfitness \
com.android.i18n:core-icu4j \
com.android.ipsec:android.net.ipsec.ike \
com.android.media:updatable-media \
@@ -70,6 +73,7 @@
com.android.tethering:framework-connectivity-t \
com.android.tethering:framework-tethering \
com.android.uwb:framework-uwb \
+ com.android.virt:framework-virtualization \
com.android.wifi:framework-wifi \
# List of system_server classpath jars delivered via apex.
@@ -80,10 +84,18 @@
com.android.adservices:service-sdksandbox \
com.android.appsearch:service-appsearch \
com.android.art:service-art \
+ com.android.configinfrastructure:service-configinfrastructure \
+ com.android.healthfitness:service-healthfitness \
com.android.media:service-media-s \
+ com.android.ondevicepersonalization:service-ondevicepersonalization \
com.android.permission:service-permission \
+ com.android.rkpd:service-rkp \
-PRODUCT_DEX_PREOPT_BOOT_IMAGE_PROFILE_LOCATION += art/build/boot/boot-image-profile.txt
+# Use $(wildcard) to avoid referencing the profile in thin manifests that don't have the
+# art project.
+ifneq (,$(wildcard art))
+ PRODUCT_DEX_PREOPT_BOOT_IMAGE_PROFILE_LOCATION += art/build/boot/boot-image-profile.txt
+endif
# List of jars on the platform that system_server loads dynamically using separate classloaders.
# Keep the list sorted by library name.
@@ -94,6 +106,7 @@
# Note: For modules available in Q, DO NOT add new entries here.
PRODUCT_APEX_STANDALONE_SYSTEM_SERVER_JARS := \
com.android.btservices:service-bluetooth \
+ com.android.devicelock:service-devicelock \
com.android.os.statsd:service-statsd \
com.android.scheduling:service-scheduling \
com.android.tethering:service-connectivity \
@@ -108,3 +121,5 @@
dalvik.vm.image-dex2oat-Xmx=64m \
dalvik.vm.dex2oat-Xms=64m \
dalvik.vm.dex2oat-Xmx=512m \
+
+PRODUCT_ENABLE_UFFD_GC := false # TODO(jiakaiz): Change this to "default".
diff --git a/target/product/emulator.mk b/target/product/emulator.mk
deleted file mode 100644
index 36da1f7..0000000
--- a/target/product/emulator.mk
+++ /dev/null
@@ -1,60 +0,0 @@
-#
-# Copyright (C) 2012 The Android Open Source Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#
-# This file is included by other product makefiles to add all the
-# emulator-related modules to PRODUCT_PACKAGES.
-#
-
-# Device modules
-PRODUCT_PACKAGES += \
- CarrierConfig \
-
-# need this for gles libraries to load properly
-# after moving to /vendor/lib/
-PRODUCT_PACKAGES += \
- vndk-sp
-
-# WiFi: system side
-PRODUCT_PACKAGES += \
- ip \
- iw \
- wificond \
-
-
-PRODUCT_PACKAGE_OVERLAYS := device/generic/goldfish/overlay
-
-PRODUCT_CHARACTERISTICS := emulator
-
-PRODUCT_FULL_TREBLE_OVERRIDE := true
-
-# goldfish vendor partition configurations
-$(call inherit-product-if-exists, device/generic/goldfish/vendor.mk)
-
-#watchdog tiggers reboot because location service is not
-#responding, disble it for now.
-#still keep it on internal master as it is still working
-#once it is fixed in aosp, remove this block of comment.
-#PRODUCT_VENDOR_PROPERTIES += \
-#config.disable_location=true
-
-# enable Google-specific location features,
-# like NetworkLocationProvider and LocationCollector
-PRODUCT_SYSTEM_EXT_PROPERTIES += \
- ro.com.google.locationfeatures=1
-
-# disable setupwizard
-PRODUCT_SYSTEM_EXT_PROPERTIES += \
- ro.setupwizard.mode=DISABLED
diff --git a/target/product/full.mk b/target/product/full.mk
index adb54ab..945957f 100644
--- a/target/product/full.mk
+++ b/target/product/full.mk
@@ -19,8 +19,8 @@
# build quite specifically for the emulator, and might not be
# entirely appropriate to inherit from for on-device configurations.
-$(call inherit-product-if-exists, device/generic/goldfish/arm32-vendor.mk)
-$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator.mk)
+$(call inherit-product-if-exists, build/make/target/product/ramdisk_stub.mk)
+$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/aosp_base_telephony.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/board/generic/device.mk)
diff --git a/target/product/full_x86.mk b/target/product/full_x86.mk
index 2f40c03..0f3be91 100644
--- a/target/product/full_x86.mk
+++ b/target/product/full_x86.mk
@@ -23,7 +23,7 @@
# that isn't a wifi connection. This will instruct init.rc to enable the
# network connection so that you can use it with ADB
-$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator.mk)
+$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/aosp_base_telephony.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/board/generic_x86/device.mk)
diff --git a/target/product/fullmte.mk b/target/product/fullmte.mk
new file mode 100644
index 0000000..d47c685
--- /dev/null
+++ b/target/product/fullmte.mk
@@ -0,0 +1,26 @@
+#
+# Copyright (C) 2023 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Enables more comprehensive detection of memory errors on hardware that
+# supports the ARM Memory Tagging Extension (MTE), by building the image with
+# MTE stack instrumentation and forcing MTE on in SYNC mode in all processes.
+# For more details, see:
+# https://source.android.com/docs/security/test/memory-safety/arm-mte
+ifeq ($(filter memtag_heap,$(SANITIZE_TARGET)),)
+ SANITIZE_TARGET := $(strip $(SANITIZE_TARGET) memtag_heap memtag_stack)
+ SANITIZE_TARGET_DIAG := $(strip $(SANITIZE_TARGET_DIAG) memtag_heap)
+endif
+PRODUCT_PRODUCT_PROPERTIES += persist.arm64.memtag.default=sync
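+
+# A product opts in by including this file ahead of its other configuration,
+# as the aosp_arm64_fullmte product added in this same change does:
+#   include $(SRC_TARGET_DIR)/product/fullmte.mk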
diff --git a/target/product/generic_ramdisk.mk b/target/product/generic_ramdisk.mk
index fb0370e..ebac62f 100644
--- a/target/product/generic_ramdisk.mk
+++ b/target/product/generic_ramdisk.mk
@@ -22,21 +22,23 @@
# Ramdisk
PRODUCT_PACKAGES += \
init_first_stage \
- e2fsck.ramdisk \
- fsck.f2fs.ramdisk \
- tune2fs.ramdisk \
- snapuserd.ramdisk \
+ snapuserd_ramdisk \
# Debug ramdisk
PRODUCT_PACKAGES += \
adb_debug.prop \
userdebug_plat_sepolicy.cil \
+
+# For targets using a dedicated recovery partition, the generic ramdisk
+# might be relocated to the recovery partition
_my_paths := \
$(TARGET_COPY_OUT_RAMDISK)/ \
$(TARGET_COPY_OUT_DEBUG_RAMDISK)/ \
system/usr/share/zoneinfo/tz_version \
system/usr/share/zoneinfo/tzdata \
+ $(TARGET_COPY_OUT_RECOVERY)/root/first_stage_ramdisk/system \
+
# We use the "relaxed" version here because tzdata / tz_version is only produced
# by this makefile on a subset of devices.
diff --git a/target/product/generic_system.mk b/target/product/generic_system.mk
index 1a639ef..98d6046 100644
--- a/target/product/generic_system.mk
+++ b/target/product/generic_system.mk
@@ -32,6 +32,7 @@
PRODUCT_PACKAGES += \
LiveWallpapersPicker \
PartnerBookmarksProvider \
+ preinstalled-packages-platform-generic-system.xml \
Stk \
Tag \
@@ -67,7 +68,7 @@
android.hardware.radio.config@1.0 \
android.hardware.radio.deprecated@1.0 \
android.hardware.secure_element@1.0 \
- android.hardware.wifi@1.0 \
+ android.hardware.wifi \
libaudio-resampler \
libaudiohal \
libdrm \
diff --git a/target/product/gsi/34.txt b/target/product/gsi/34.txt
new file mode 100644
index 0000000..ceb2060
--- /dev/null
+++ b/target/product/gsi/34.txt
@@ -0,0 +1,210 @@
+LLNDK: libEGL.so
+LLNDK: libGLESv1_CM.so
+LLNDK: libGLESv2.so
+LLNDK: libGLESv3.so
+LLNDK: libRS.so
+LLNDK: libandroid_net.so
+LLNDK: libbinder_ndk.so
+LLNDK: libc.so
+LLNDK: libcgrouprc.so
+LLNDK: libcom.android.tethering.connectivity_native.so
+LLNDK: libdl.so
+LLNDK: libft2.so
+LLNDK: liblog.so
+LLNDK: libm.so
+LLNDK: libmediandk.so
+LLNDK: libnativewindow.so
+LLNDK: libneuralnetworks.so
+LLNDK: libselinux.so
+LLNDK: libsync.so
+LLNDK: libvndksupport.so
+LLNDK: libvulkan.so
+VNDK-SP: android.hardware.common-V2-ndk.so
+VNDK-SP: android.hardware.common.fmq-V1-ndk.so
+VNDK-SP: android.hardware.graphics.common-V4-ndk.so
+VNDK-SP: android.hardware.graphics.common@1.0.so
+VNDK-SP: android.hardware.graphics.common@1.1.so
+VNDK-SP: android.hardware.graphics.common@1.2.so
+VNDK-SP: android.hardware.graphics.composer3-V1-ndk.so
+VNDK-SP: android.hardware.graphics.mapper@2.0.so
+VNDK-SP: android.hardware.graphics.mapper@2.1.so
+VNDK-SP: android.hardware.graphics.mapper@3.0.so
+VNDK-SP: android.hardware.graphics.mapper@4.0.so
+VNDK-SP: android.hardware.graphics.allocator-V2-ndk.so
+VNDK-SP: android.hardware.renderscript@1.0.so
+VNDK-SP: android.hidl.memory.token@1.0.so
+VNDK-SP: android.hidl.memory@1.0-impl.so
+VNDK-SP: android.hidl.memory@1.0.so
+VNDK-SP: android.hidl.safe_union@1.0.so
+VNDK-SP: libRSCpuRef.so
+VNDK-SP: libRSDriver.so
+VNDK-SP: libRS_internal.so
+VNDK-SP: libbase.so
+VNDK-SP: libbcinfo.so
+VNDK-SP: libblas.so
+VNDK-SP: libc++.so
+VNDK-SP: libcompiler_rt.so
+VNDK-SP: libcutils.so
+VNDK-SP: libdmabufheap.so
+VNDK-SP: libgralloctypes.so
+VNDK-SP: libhardware.so
+VNDK-SP: libhidlbase.so
+VNDK-SP: libhidlmemory.so
+VNDK-SP: libion.so
+VNDK-SP: libjsoncpp.so
+VNDK-SP: liblzma.so
+VNDK-SP: libprocessgroup.so
+VNDK-SP: libunwindstack.so
+VNDK-SP: libutils.so
+VNDK-SP: libutilscallstack.so
+VNDK-SP: libz.so
+VNDK-core: android.frameworks.cameraservice.common-V1-ndk.so
+VNDK-core: android.frameworks.cameraservice.device-V1-ndk.so
+VNDK-core: android.frameworks.cameraservice.service-V1-ndk.so
+VNDK-core: android.hardware.audio.common@2.0.so
+VNDK-core: android.hardware.configstore-utils.so
+VNDK-core: android.hardware.configstore@1.0.so
+VNDK-core: android.hardware.configstore@1.1.so
+VNDK-core: android.hardware.confirmationui-support-lib.so
+VNDK-core: android.hardware.graphics.allocator@2.0.so
+VNDK-core: android.hardware.graphics.allocator@3.0.so
+VNDK-core: android.hardware.graphics.allocator@4.0.so
+VNDK-core: android.hardware.graphics.bufferqueue@1.0.so
+VNDK-core: android.hardware.graphics.bufferqueue@2.0.so
+VNDK-core: android.hardware.media.bufferpool@2.0.so
+VNDK-core: android.hardware.media.omx@1.0.so
+VNDK-core: android.hardware.media@1.0.so
+VNDK-core: android.hardware.memtrack-V1-ndk.so
+VNDK-core: android.hardware.memtrack@1.0.so
+VNDK-core: android.hardware.soundtrigger@2.0-core.so
+VNDK-core: android.hardware.soundtrigger@2.0.so
+VNDK-core: android.hidl.token@1.0-utils.so
+VNDK-core: android.hidl.token@1.0.so
+VNDK-core: android.system.suspend-V1-ndk.so
+VNDK-core: android.system.suspend@1.0.so
+VNDK-core: libaudioroute.so
+VNDK-core: libaudioutils.so
+VNDK-core: libbinder.so
+VNDK-core: libbufferqueueconverter.so
+VNDK-core: libcamera_metadata.so
+VNDK-core: libcap.so
+VNDK-core: libcn-cbor.so
+VNDK-core: libcodec2.so
+VNDK-core: libcrypto.so
+VNDK-core: libcrypto_utils.so
+VNDK-core: libcurl.so
+VNDK-core: libdiskconfig.so
+VNDK-core: libdumpstateutil.so
+VNDK-core: libevent.so
+VNDK-core: libexif.so
+VNDK-core: libexpat.so
+VNDK-core: libfmq.so
+VNDK-core: libgatekeeper.so
+VNDK-core: libgui.so
+VNDK-core: libhardware_legacy.so
+VNDK-core: libhidlallocatorutils.so
+VNDK-core: libjpeg.so
+VNDK-core: libldacBT_abr.so
+VNDK-core: libldacBT_enc.so
+VNDK-core: liblz4.so
+VNDK-core: libmedia_helper.so
+VNDK-core: libmedia_omx.so
+VNDK-core: libmemtrack.so
+VNDK-core: libminijail.so
+VNDK-core: libmkbootimg_abi_check.so
+VNDK-core: libnetutils.so
+VNDK-core: libnl.so
+VNDK-core: libpcre2.so
+VNDK-core: libpiex.so
+VNDK-core: libpng.so
+VNDK-core: libpower.so
+VNDK-core: libprocinfo.so
+VNDK-core: libradio_metadata.so
+VNDK-core: libspeexresampler.so
+VNDK-core: libsqlite.so
+VNDK-core: libssl.so
+VNDK-core: libstagefright_bufferpool@2.0.so
+VNDK-core: libstagefright_bufferqueue_helper.so
+VNDK-core: libstagefright_foundation.so
+VNDK-core: libstagefright_omx.so
+VNDK-core: libstagefright_omx_utils.so
+VNDK-core: libstagefright_xmlparser.so
+VNDK-core: libsysutils.so
+VNDK-core: libtinyalsa.so
+VNDK-core: libtinyxml2.so
+VNDK-core: libui.so
+VNDK-core: libusbhost.so
+VNDK-core: libwifi-system-iface.so
+VNDK-core: libxml2.so
+VNDK-core: libyuv.so
+VNDK-core: libziparchive.so
+VNDK-private: libblas.so
+VNDK-private: libcompiler_rt.so
+VNDK-private: libft2.so
+VNDK-private: libgui.so
+VNDK-product: android.hardware.audio.common@2.0.so
+VNDK-product: android.hardware.configstore@1.0.so
+VNDK-product: android.hardware.configstore@1.1.so
+VNDK-product: android.hardware.graphics.allocator@2.0.so
+VNDK-product: android.hardware.graphics.allocator@3.0.so
+VNDK-product: android.hardware.graphics.allocator@4.0.so
+VNDK-product: android.hardware.graphics.bufferqueue@1.0.so
+VNDK-product: android.hardware.graphics.bufferqueue@2.0.so
+VNDK-product: android.hardware.graphics.common@1.0.so
+VNDK-product: android.hardware.graphics.common@1.1.so
+VNDK-product: android.hardware.graphics.common@1.2.so
+VNDK-product: android.hardware.graphics.mapper@2.0.so
+VNDK-product: android.hardware.graphics.mapper@2.1.so
+VNDK-product: android.hardware.graphics.mapper@3.0.so
+VNDK-product: android.hardware.graphics.mapper@4.0.so
+VNDK-product: android.hardware.media.bufferpool@2.0.so
+VNDK-product: android.hardware.media.omx@1.0.so
+VNDK-product: android.hardware.media@1.0.so
+VNDK-product: android.hardware.memtrack@1.0.so
+VNDK-product: android.hardware.renderscript@1.0.so
+VNDK-product: android.hardware.soundtrigger@2.0.so
+VNDK-product: android.hidl.memory.token@1.0.so
+VNDK-product: android.hidl.memory@1.0.so
+VNDK-product: android.hidl.safe_union@1.0.so
+VNDK-product: android.hidl.token@1.0.so
+VNDK-product: android.system.suspend@1.0.so
+VNDK-product: libaudioutils.so
+VNDK-product: libbase.so
+VNDK-product: libc++.so
+VNDK-product: libcamera_metadata.so
+VNDK-product: libcap.so
+VNDK-product: libcompiler_rt.so
+VNDK-product: libcrypto.so
+VNDK-product: libcurl.so
+VNDK-product: libcutils.so
+VNDK-product: libevent.so
+VNDK-product: libexpat.so
+VNDK-product: libfmq.so
+VNDK-product: libhidlbase.so
+VNDK-product: libhidlmemory.so
+VNDK-product: libion.so
+VNDK-product: libjpeg.so
+VNDK-product: libjsoncpp.so
+VNDK-product: libldacBT_abr.so
+VNDK-product: libldacBT_enc.so
+VNDK-product: liblz4.so
+VNDK-product: liblzma.so
+VNDK-product: libminijail.so
+VNDK-product: libnl.so
+VNDK-product: libpcre2.so
+VNDK-product: libpiex.so
+VNDK-product: libpng.so
+VNDK-product: libprocessgroup.so
+VNDK-product: libprocinfo.so
+VNDK-product: libspeexresampler.so
+VNDK-product: libssl.so
+VNDK-product: libtinyalsa.so
+VNDK-product: libtinyxml2.so
+VNDK-product: libunwindstack.so
+VNDK-product: libutils.so
+VNDK-product: libutilscallstack.so
+VNDK-product: libwifi-system-iface.so
+VNDK-product: libxml2.so
+VNDK-product: libyuv.so
+VNDK-product: libz.so
+VNDK-product: libziparchive.so
diff --git a/target/product/gsi/Android.mk b/target/product/gsi/Android.mk
index 85e551d..107c94f 100644
--- a/target/product/gsi/Android.mk
+++ b/target/product/gsi/Android.mk
@@ -126,8 +126,13 @@
endef
VNDK_ABI_DUMP_DIR := prebuilts/abi-dumps/vndk/$(PLATFORM_VNDK_VERSION)
-NDK_ABI_DUMP_DIR := prebuilts/abi-dumps/ndk/$(PLATFORM_VNDK_VERSION)
-PLATFORM_ABI_DUMP_DIR := prebuilts/abi-dumps/platform/$(PLATFORM_VNDK_VERSION)
+ifeq (REL,$(PLATFORM_VERSION_CODENAME))
+ NDK_ABI_DUMP_DIR := prebuilts/abi-dumps/ndk/$(PLATFORM_SDK_VERSION)
+ PLATFORM_ABI_DUMP_DIR := prebuilts/abi-dumps/platform/$(PLATFORM_SDK_VERSION)
+else
+ NDK_ABI_DUMP_DIR := prebuilts/abi-dumps/ndk/current
+ PLATFORM_ABI_DUMP_DIR := prebuilts/abi-dumps/platform/current
+endif
VNDK_ABI_DUMPS := $(call find-abi-dump-paths,$(VNDK_ABI_DUMP_DIR))
NDK_ABI_DUMPS := $(call find-abi-dump-paths,$(NDK_ABI_DUMP_DIR))
PLATFORM_ABI_DUMPS := $(call find-abi-dump-paths,$(PLATFORM_ABI_DUMP_DIR))
@@ -141,7 +146,7 @@
$(check-vndk-abi-dump-list-timestamp): PRIVATE_STUB_LIBRARIES := $(STUB_LIBRARIES)
$(check-vndk-abi-dump-list-timestamp):
$(eval added_vndk_abi_dumps := $(strip $(sort $(filter-out \
- $(call filter-abi-dump-paths,LLNDK VNDK-SP VNDK-core,$(PRIVATE_LSDUMP_PATHS)), \
+ $(call filter-abi-dump-paths,VNDK-SP VNDK-core,$(PRIVATE_LSDUMP_PATHS)), \
$(notdir $(VNDK_ABI_DUMPS))))))
$(if $(added_vndk_abi_dumps), \
echo -e "Found unexpected ABI reference dump files under $(VNDK_ABI_DUMP_DIR). It is caused by mismatch between Android.bp and the dump files. Run \`find \$${ANDROID_BUILD_TOP}/$(VNDK_ABI_DUMP_DIR) '(' -name $(subst $(space), -or -name ,$(added_vndk_abi_dumps)) ')' -delete\` to delete the dump files.")
@@ -154,7 +159,7 @@
echo -e "Found unexpected ABI reference dump files under $(NDK_ABI_DUMP_DIR). It is caused by mismatch between Android.bp and the dump files. Run \`find \$${ANDROID_BUILD_TOP}/$(NDK_ABI_DUMP_DIR) '(' -name $(subst $(space), -or -name ,$(added_ndk_abi_dumps)) ')' -delete\` to delete the dump files.")
$(eval added_platform_abi_dumps := $(strip $(sort $(filter-out \
- $(call filter-abi-dump-paths,PLATFORM,$(PRIVATE_LSDUMP_PATHS)) \
+ $(call filter-abi-dump-paths,LLNDK PLATFORM,$(PRIVATE_LSDUMP_PATHS)) \
$(addsuffix .lsdump,$(PRIVATE_STUB_LIBRARIES)), \
$(notdir $(PLATFORM_ABI_DUMPS))))))
$(if $(added_platform_abi_dumps), \
@@ -185,6 +190,10 @@
$(addsuffix .vendor,$(VNDK_SAMEPROCESS_LIBRARIES)) \
$(VNDK_USING_CORE_VARIANT_LIBRARIES) \
com.android.vndk.current
+
+LOCAL_ADDITIONAL_DEPENDENCIES += $(call module-built-files,\
+ $(addsuffix .vendor,$(VNDK_CORE_LIBRARIES) $(VNDK_SAMEPROCESS_LIBRARIES)))
+
endif
include $(BUILD_PHONY_PACKAGE)
diff --git a/target/product/gsi/current.txt b/target/product/gsi/current.txt
index 03a143d..ceb2060 100644
--- a/target/product/gsi/current.txt
+++ b/target/product/gsi/current.txt
@@ -7,6 +7,7 @@
LLNDK: libbinder_ndk.so
LLNDK: libc.so
LLNDK: libcgrouprc.so
+LLNDK: libcom.android.tethering.connectivity_native.so
LLNDK: libdl.so
LLNDK: libft2.so
LLNDK: liblog.so
@@ -20,8 +21,7 @@
LLNDK: libvulkan.so
VNDK-SP: android.hardware.common-V2-ndk.so
VNDK-SP: android.hardware.common.fmq-V1-ndk.so
-VNDK-SP: android.hardware.graphics.allocator-V1-ndk.so
-VNDK-SP: android.hardware.graphics.common-V3-ndk.so
+VNDK-SP: android.hardware.graphics.common-V4-ndk.so
VNDK-SP: android.hardware.graphics.common@1.0.so
VNDK-SP: android.hardware.graphics.common@1.1.so
VNDK-SP: android.hardware.graphics.common@1.2.so
@@ -30,6 +30,7 @@
VNDK-SP: android.hardware.graphics.mapper@2.1.so
VNDK-SP: android.hardware.graphics.mapper@3.0.so
VNDK-SP: android.hardware.graphics.mapper@4.0.so
+VNDK-SP: android.hardware.graphics.allocator-V2-ndk.so
VNDK-SP: android.hardware.renderscript@1.0.so
VNDK-SP: android.hidl.memory.token@1.0.so
VNDK-SP: android.hidl.memory@1.0-impl.so
@@ -38,7 +39,6 @@
VNDK-SP: libRSCpuRef.so
VNDK-SP: libRSDriver.so
VNDK-SP: libRS_internal.so
-VNDK-SP: libbacktrace.so
VNDK-SP: libbase.so
VNDK-SP: libbcinfo.so
VNDK-SP: libblas.so
@@ -58,70 +58,28 @@
VNDK-SP: libutils.so
VNDK-SP: libutilscallstack.so
VNDK-SP: libz.so
-VNDK-core: android.hardware.audio.common-V1-ndk.so
+VNDK-core: android.frameworks.cameraservice.common-V1-ndk.so
+VNDK-core: android.frameworks.cameraservice.device-V1-ndk.so
+VNDK-core: android.frameworks.cameraservice.service-V1-ndk.so
VNDK-core: android.hardware.audio.common@2.0.so
-VNDK-core: android.hardware.authsecret-V1-ndk.so
-VNDK-core: android.hardware.automotive.occupant_awareness-V1-ndk.so
-VNDK-core: android.hardware.bluetooth.audio-V2-ndk.so
-VNDK-core: android.hardware.camera.common-V1-ndk.so
-VNDK-core: android.hardware.camera.device-V1-ndk.so
-VNDK-core: android.hardware.camera.metadata-V1-ndk.so
-VNDK-core: android.hardware.camera.provider-V1-ndk.so
VNDK-core: android.hardware.configstore-utils.so
VNDK-core: android.hardware.configstore@1.0.so
VNDK-core: android.hardware.configstore@1.1.so
VNDK-core: android.hardware.confirmationui-support-lib.so
-VNDK-core: android.hardware.drm-V1-ndk.so
-VNDK-core: android.hardware.dumpstate-V1-ndk.so
-VNDK-core: android.hardware.gnss-V2-ndk.so
VNDK-core: android.hardware.graphics.allocator@2.0.so
VNDK-core: android.hardware.graphics.allocator@3.0.so
VNDK-core: android.hardware.graphics.allocator@4.0.so
VNDK-core: android.hardware.graphics.bufferqueue@1.0.so
VNDK-core: android.hardware.graphics.bufferqueue@2.0.so
-VNDK-core: android.hardware.health-V1-ndk.so
-VNDK-core: android.hardware.health.storage-V1-ndk.so
-VNDK-core: android.hardware.identity-V4-ndk.so
-VNDK-core: android.hardware.ir-V1-ndk.so
-VNDK-core: android.hardware.keymaster-V3-ndk.so
-VNDK-core: android.hardware.light-V2-ndk.so
VNDK-core: android.hardware.media.bufferpool@2.0.so
VNDK-core: android.hardware.media.omx@1.0.so
VNDK-core: android.hardware.media@1.0.so
VNDK-core: android.hardware.memtrack-V1-ndk.so
VNDK-core: android.hardware.memtrack@1.0.so
-VNDK-core: android.hardware.nfc-V1-ndk.so
-VNDK-core: android.hardware.oemlock-V1-ndk.so
-VNDK-core: android.hardware.power-V3-ndk.so
-VNDK-core: android.hardware.power.stats-V1-ndk.so
-VNDK-core: android.hardware.radio-V1-ndk.so
-VNDK-core: android.hardware.radio.config-V1-ndk.so
-VNDK-core: android.hardware.radio.data-V1-ndk.so
-VNDK-core: android.hardware.radio.messaging-V1-ndk.so
-VNDK-core: android.hardware.radio.modem-V1-ndk.so
-VNDK-core: android.hardware.radio.network-V1-ndk.so
-VNDK-core: android.hardware.radio.sim-V1-ndk.so
-VNDK-core: android.hardware.radio.voice-V1-ndk.so
-VNDK-core: android.hardware.rebootescrow-V1-ndk.so
-VNDK-core: android.hardware.security.dice-V1-ndk.so
-VNDK-core: android.hardware.security.keymint-V2-ndk.so
-VNDK-core: android.hardware.security.secureclock-V1-ndk.so
-VNDK-core: android.hardware.security.sharedsecret-V1-ndk.so
-VNDK-core: android.hardware.sensors-V1-ndk.so
-VNDK-core: android.hardware.soundtrigger3-V1-ndk.so
VNDK-core: android.hardware.soundtrigger@2.0-core.so
VNDK-core: android.hardware.soundtrigger@2.0.so
-VNDK-core: android.hardware.usb-V1-ndk.so
-VNDK-core: android.hardware.uwb-V1-ndk.so
-VNDK-core: android.hardware.vibrator-V2-ndk.so
-VNDK-core: android.hardware.weaver-V1-ndk.so
-VNDK-core: android.hardware.wifi.hostapd-V1-ndk.so
-VNDK-core: android.hardware.wifi.supplicant-V1-ndk.so
VNDK-core: android.hidl.token@1.0-utils.so
VNDK-core: android.hidl.token@1.0.so
-VNDK-core: android.media.audio.common.types-V1-ndk.so
-VNDK-core: android.media.soundtrigger.types-V1-ndk.so
-VNDK-core: android.system.keystore2-V2-ndk.so
VNDK-core: android.system.suspend-V1-ndk.so
VNDK-core: android.system.suspend@1.0.so
VNDK-core: libaudioroute.so
@@ -180,7 +138,6 @@
VNDK-core: libxml2.so
VNDK-core: libyuv.so
VNDK-core: libziparchive.so
-VNDK-private: libbacktrace.so
VNDK-private: libblas.so
VNDK-private: libcompiler_rt.so
VNDK-private: libft2.so
@@ -212,7 +169,6 @@
VNDK-product: android.hidl.token@1.0.so
VNDK-product: android.system.suspend@1.0.so
VNDK-product: libaudioutils.so
-VNDK-product: libbacktrace.so
VNDK-product: libbase.so
VNDK-product: libc++.so
VNDK-product: libcamera_metadata.so
diff --git a/target/product/gsi_release.mk b/target/product/gsi_release.mk
index 74501cd..9d102ea 100644
--- a/target/product/gsi_release.mk
+++ b/target/product/gsi_release.mk
@@ -23,6 +23,8 @@
# - Released GSI contains more VNDK packages to support old version vendors
# - etc.
#
+# See device/generic/common/README.md for more details.
+#
BUILDING_GSI := true
@@ -34,7 +36,7 @@
# GSI should always support up-to-date platform features.
# Keep this value at the latest API level to ensure latest build system
# default configs are applied.
-PRODUCT_SHIPPING_API_LEVEL := 31
+PRODUCT_SHIPPING_API_LEVEL := 34
# Enable dynamic partitions to facilitate mixing onto Cuttlefish
PRODUCT_USE_DYNAMIC_PARTITIONS := true
@@ -62,13 +64,18 @@
init.gsi.rc \
init.vndk-nodef.rc \
+# Overlay the GSI-specific SystemUI setting
+PRODUCT_PACKAGES += gsi_overlay_systemui
+PRODUCT_COPY_FILES += \
+ device/generic/common/overlays/overlay-config.xml:$(TARGET_COPY_OUT_SYSTEM_EXT)/overlay/config/config.xml
+
# Support additional VNDK snapshots
PRODUCT_EXTRA_VNDK_VERSIONS := \
- 28 \
29 \
30 \
31 \
32 \
+ 33 \
# Do not build non-GSI partition images.
PRODUCT_BUILD_CACHE_IMAGE := false
@@ -78,11 +85,12 @@
PRODUCT_BUILD_VENDOR_IMAGE := false
PRODUCT_BUILD_SUPER_PARTITION := false
PRODUCT_BUILD_SUPER_EMPTY_IMAGE := false
+PRODUCT_BUILD_SYSTEM_DLKM_IMAGE := false
PRODUCT_EXPORT_BOOT_IMAGE_TO_DIST := true
-# Always build modules from source
-MODULE_BUILD_FROM_SOURCE := true
-
# Additional settings used in all GSI builds
PRODUCT_PRODUCT_PROPERTIES += \
ro.crypto.metadata_init_delete_all_keys.enabled=false \
+
+# Window Extensions
+$(call inherit-product, $(SRC_TARGET_DIR)/product/window_extensions.mk)
\ No newline at end of file
diff --git a/target/product/handheld_product.mk b/target/product/handheld_product.mk
index 2199c57..8755ae6 100644
--- a/target/product/handheld_product.mk
+++ b/target/product/handheld_product.mk
@@ -30,7 +30,6 @@
Gallery2 \
LatinIME \
Music \
- OneTimeInitializer \
preinstalled-packages-platform-handheld-product.xml \
QuickSearchBox \
SettingsIntelligence \
diff --git a/target/product/handheld_system.mk b/target/product/handheld_system.mk
index 41233b2..52f9ee1 100644
--- a/target/product/handheld_system.mk
+++ b/target/product/handheld_system.mk
@@ -42,6 +42,7 @@
CameraExtensionsProxy \
CaptivePortalLogin \
CertInstaller \
+ CredentialManager \
DocumentsUI \
DownloadProviderUi \
EasterEgg \
@@ -56,6 +57,7 @@
MusicFX \
NfcNci \
PacProcessor \
+ preinstalled-packages-platform-handheld-system.xml \
PrintRecommendationService \
PrintSpooler \
ProxyHandler \
@@ -79,7 +81,8 @@
Telecom \
PRODUCT_COPY_FILES += \
- frameworks/av/media/libeffects/data/audio_effects.conf:system/etc/audio_effects.conf
+ frameworks/av/media/libeffects/data/audio_effects.xml:system/etc/audio_effects.xml \
+ frameworks/native/data/etc/android.software.window_magnification.xml:$(TARGET_COPY_OUT_SYSTEM)/etc/permissions/android.software.window_magnification.xml \
PRODUCT_VENDOR_PROPERTIES += \
ro.carrier?=unknown \
diff --git a/target/product/handheld_system_ext.mk b/target/product/handheld_system_ext.mk
index d935fbf..187b627 100644
--- a/target/product/handheld_system_ext.mk
+++ b/target/product/handheld_system_ext.mk
@@ -22,6 +22,7 @@
# /system_ext packages
PRODUCT_PACKAGES += \
+ AccessibilityMenu \
Launcher3QuickStep \
Provision \
Settings \
diff --git a/target/product/linux_bionic.mk b/target/product/linux_bionic.mk
new file mode 100644
index 0000000..da6b890
--- /dev/null
+++ b/target/product/linux_bionic.mk
@@ -0,0 +1,18 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+PRODUCT_NAME := linux_bionic
+PRODUCT_BRAND := Android
+PRODUCT_DEVICE := linux_bionic
diff --git a/target/product/module_arm.mk b/target/product/module_arm.mk
index d99dce8..434f7ad 100644
--- a/target/product/module_arm.mk
+++ b/target/product/module_arm.mk
@@ -17,5 +17,4 @@
$(call inherit-product, $(SRC_TARGET_DIR)/product/module_common.mk)
PRODUCT_NAME := module_arm
-PRODUCT_BRAND := Android
PRODUCT_DEVICE := module_arm
diff --git a/target/product/module_arm64.mk b/target/product/module_arm64.mk
index fc9529c..2e8c8a7 100644
--- a/target/product/module_arm64.mk
+++ b/target/product/module_arm64.mk
@@ -18,5 +18,4 @@
$(call inherit-product, $(SRC_TARGET_DIR)/product/core_64_bit.mk)
PRODUCT_NAME := module_arm64
-PRODUCT_BRAND := Android
PRODUCT_DEVICE := module_arm64
diff --git a/target/product/module_arm64only.mk b/target/product/module_arm64only.mk
new file mode 100644
index 0000000..c0769bf
--- /dev/null
+++ b/target/product/module_arm64only.mk
@@ -0,0 +1,21 @@
+#
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+$(call inherit-product, $(SRC_TARGET_DIR)/product/module_common.mk)
+$(call inherit-product, $(SRC_TARGET_DIR)/product/core_64_bit_only.mk)
+
+PRODUCT_NAME := module_arm64only
+PRODUCT_DEVICE := module_arm64only
diff --git a/target/product/module_common.mk b/target/product/module_common.mk
index 54f3949..84bd799 100644
--- a/target/product/module_common.mk
+++ b/target/product/module_common.mk
@@ -25,3 +25,10 @@
# Builds using a module product should build modules from source, even if
# BRANCH_DEFAULT_MODULE_BUILD_FROM_SOURCE says otherwise.
PRODUCT_MODULE_BUILD_FROM_SOURCE := true
+
+# Build the SDK from source if the branch is not using slim manifests.
+ifneq (,$(strip $(wildcard frameworks/base/Android.bp)))
+ UNBUNDLED_BUILD_SDKS_FROM_SOURCE := true
+endif
+
+PRODUCT_BRAND := Android
diff --git a/target/product/module_x86.mk b/target/product/module_x86.mk
index b852e7a..f38e2b9 100644
--- a/target/product/module_x86.mk
+++ b/target/product/module_x86.mk
@@ -17,5 +17,4 @@
$(call inherit-product, $(SRC_TARGET_DIR)/product/module_common.mk)
PRODUCT_NAME := module_x86
-PRODUCT_BRAND := Android
PRODUCT_DEVICE := module_x86
diff --git a/target/product/module_x86_64.mk b/target/product/module_x86_64.mk
index f6bc1fc..20f443a 100644
--- a/target/product/module_x86_64.mk
+++ b/target/product/module_x86_64.mk
@@ -18,5 +18,4 @@
$(call inherit-product, $(SRC_TARGET_DIR)/product/core_64_bit.mk)
PRODUCT_NAME := module_x86_64
-PRODUCT_BRAND := Android
PRODUCT_DEVICE := module_x86_64
diff --git a/target/product/module_x86_64only.mk b/target/product/module_x86_64only.mk
new file mode 100644
index 0000000..b0d72bf
--- /dev/null
+++ b/target/product/module_x86_64only.mk
@@ -0,0 +1,21 @@
+#
+# Copyright (C) 2021 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+$(call inherit-product, $(SRC_TARGET_DIR)/product/module_common.mk)
+$(call inherit-product, $(SRC_TARGET_DIR)/product/core_64_bit_only.mk)
+
+PRODUCT_NAME := module_x86_64only
+PRODUCT_DEVICE := module_x86_64only
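+
+# Hypothetical selection of this product for an unbundled (banchan) module
+# build; whether the envsetup helper accepts this arch suffix on your branch
+# is an assumption, and the module name is only an example:
+#   banchan com.android.adbd x86_64only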
diff --git a/target/product/ramdisk_stub.mk b/target/product/ramdisk_stub.mk
new file mode 100644
index 0000000..2a0b752
--- /dev/null
+++ b/target/product/ramdisk_stub.mk
@@ -0,0 +1,18 @@
+#
+# Copyright 2022 The Android Open-Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+PRODUCT_COPY_FILES += \
+ build/make/target/product/ramdisk_stub.mk:$(TARGET_COPY_OUT_VENDOR_RAMDISK)/nonempty
diff --git a/target/product/runtime_libart.mk b/target/product/runtime_libart.mk
index b6560fc..68ed249 100644
--- a/target/product/runtime_libart.mk
+++ b/target/product/runtime_libart.mk
@@ -95,7 +95,6 @@
# The thermal cutoff value is currently set to THERMAL_STATUS_MODERATE.
PRODUCT_SYSTEM_PROPERTIES += \
dalvik.vm.usejit=true \
- dalvik.vm.usejitprofiles=true \
dalvik.vm.dexopt.secondary=true \
dalvik.vm.dexopt.thermal-cutoff=2 \
dalvik.vm.appimageformat=lz4
@@ -123,6 +122,7 @@
# without exceptions).
PRODUCT_SYSTEM_PROPERTIES += \
pm.dexopt.post-boot?=extract \
+ pm.dexopt.boot-after-mainline-update?=verify \
pm.dexopt.install?=speed-profile \
pm.dexopt.install-fast?=skip \
pm.dexopt.install-bulk?=speed-profile \
@@ -157,3 +157,24 @@
dalvik.vm.madvise.vdexfile.size=104857600 \
dalvik.vm.madvise.odexfile.size=104857600 \
dalvik.vm.madvise.artfile.size=4294967295
+
+# Properties for the Unspecialized App Process Pool
+PRODUCT_SYSTEM_PROPERTIES += \
+ dalvik.vm.usap_pool_enabled?=false \
+ dalvik.vm.usap_refill_threshold?=1 \
+ dalvik.vm.usap_pool_size_max?=3 \
+ dalvik.vm.usap_pool_size_min?=1 \
+ dalvik.vm.usap_pool_refill_delay_ms?=3000
+
+# Allow dexopt files that are side-effects of already allowlisted files.
+# This is only necessary when ART is prebuilt.
+ifeq (false,$(ART_MODULE_BUILD_FROM_SOURCE))
+ PRODUCT_ARTIFACT_PATH_REQUIREMENT_ALLOWED_LIST += \
+ system/framework/%.art \
+ system/framework/%.oat \
+ system/framework/%.odex \
+ system/framework/%.vdex
+endif
+
+PRODUCT_SYSTEM_PROPERTIES += \
+ dalvik.vm.useartservice=true
diff --git a/target/product/sdk_phone_arm64.mk b/target/product/sdk_phone_arm64.mk
index 4203d45..3f81615 100644
--- a/target/product/sdk_phone_arm64.mk
+++ b/target/product/sdk_phone_arm64.mk
@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-QEMU_USE_SYSTEM_EXT_PARTITIONS := true
PRODUCT_USE_DYNAMIC_PARTITIONS := true
# This is a build configuration for a full-featured build of the
diff --git a/target/product/sdk_phone_armv7.mk b/target/product/sdk_phone_armv7.mk
index 6c88b44..48a0e3b 100644
--- a/target/product/sdk_phone_armv7.mk
+++ b/target/product/sdk_phone_armv7.mk
@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-QEMU_USE_SYSTEM_EXT_PARTITIONS := true
PRODUCT_USE_DYNAMIC_PARTITIONS := true
# This is a build configuration for a full-featured build of the
@@ -45,7 +44,7 @@
#
# All components inherited here go to vendor image
#
-$(call inherit-product-if-exists, device/generic/goldfish/arm32-vendor.mk)
+$(call inherit-product-if-exists, build/make/target/product/ramdisk_stub.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/product/emulator_vendor.mk)
$(call inherit-product, $(SRC_TARGET_DIR)/board/emulator_arm/device.mk)
diff --git a/target/product/sdk_phone_x86.mk b/target/product/sdk_phone_x86.mk
index a324e5f..0f8b508 100644
--- a/target/product/sdk_phone_x86.mk
+++ b/target/product/sdk_phone_x86.mk
@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-QEMU_USE_SYSTEM_EXT_PARTITIONS := true
PRODUCT_USE_DYNAMIC_PARTITIONS := true
# This is a build configuration for a full-featured build of the
diff --git a/target/product/sdk_phone_x86_64.mk b/target/product/sdk_phone_x86_64.mk
index ff9018d..f5d9028 100644
--- a/target/product/sdk_phone_x86_64.mk
+++ b/target/product/sdk_phone_x86_64.mk
@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-QEMU_USE_SYSTEM_EXT_PARTITIONS := true
PRODUCT_USE_DYNAMIC_PARTITIONS := true
# This is a build configuration for a full-featured build of the
diff --git a/target/product/security/Android.mk b/target/product/security/Android.mk
index ad25a92..4bd8efc 100644
--- a/target/product/security/Android.mk
+++ b/target/product/security/Android.mk
@@ -1,43 +1,6 @@
LOCAL_PATH:= $(call my-dir)
#######################################
-# verity_key (installed to /, i.e. part of system.img)
-include $(CLEAR_VARS)
-
-LOCAL_MODULE := verity_key
-LOCAL_LICENSE_KINDS := SPDX-license-identifier-Apache-2.0
-LOCAL_LICENSE_CONDITIONS := notice
-LOCAL_NOTICE_FILE := build/soong/licenses/LICENSE
-LOCAL_SRC_FILES := $(LOCAL_MODULE)
-LOCAL_MODULE_CLASS := ETC
-LOCAL_MODULE_PATH := $(TARGET_ROOT_OUT)
-
-# For devices using a separate ramdisk, we need a copy there to establish the chain of trust.
-ifneq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
-LOCAL_REQUIRED_MODULES := verity_key_ramdisk
-endif
-
-include $(BUILD_PREBUILT)
-
-#######################################
-# verity_key (installed to ramdisk)
-#
-# Enabling the target when using system-as-root would cause build failure, as TARGET_RAMDISK_OUT
-# points to the same location as TARGET_ROOT_OUT.
-ifneq ($(BOARD_BUILD_SYSTEM_ROOT_IMAGE),true)
- include $(CLEAR_VARS)
- LOCAL_MODULE := verity_key_ramdisk
- LOCAL_LICENSE_KINDS := SPDX-license-identifier-Apache-2.0
- LOCAL_LICENSE_CONDITIONS := notice
- LOCAL_NOTICE_FILE := build/soong/licenses/LICENSE
- LOCAL_MODULE_CLASS := ETC
- LOCAL_SRC_FILES := verity_key
- LOCAL_MODULE_STEM := verity_key
- LOCAL_MODULE_PATH := $(TARGET_RAMDISK_OUT)
- include $(BUILD_PREBUILT)
-endif
-
-#######################################
# adb key, if configured via PRODUCT_ADB_KEYS
ifdef PRODUCT_ADB_KEYS
ifneq ($(filter eng userdebug,$(TARGET_BUILD_VARIANT)),)
diff --git a/target/product/security/BUILD.bazel b/target/product/security/BUILD.bazel
new file mode 100644
index 0000000..c12be79
--- /dev/null
+++ b/target/product/security/BUILD.bazel
@@ -0,0 +1,8 @@
+filegroup(
+ name = "android_certificate_directory",
+ srcs = glob([
+ "*.pk8",
+ "*.pem",
+ ]),
+ visibility = ["//visibility:public"],
+)
diff --git a/target/product/security/verity.pk8 b/target/product/security/verity.pk8
deleted file mode 100644
index bebf216..0000000
--- a/target/product/security/verity.pk8
+++ /dev/null
Binary files differ
diff --git a/target/product/security/verity.x509.pem b/target/product/security/verity.x509.pem
deleted file mode 100644
index 86399c3..0000000
--- a/target/product/security/verity.x509.pem
+++ /dev/null
@@ -1,24 +0,0 @@
------BEGIN CERTIFICATE-----
-MIID/TCCAuWgAwIBAgIJAJcPmDkJqolJMA0GCSqGSIb3DQEBBQUAMIGUMQswCQYD
-VQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNTW91bnRhaW4g
-VmlldzEQMA4GA1UECgwHQW5kcm9pZDEQMA4GA1UECwwHQW5kcm9pZDEQMA4GA1UE
-AwwHQW5kcm9pZDEiMCAGCSqGSIb3DQEJARYTYW5kcm9pZEBhbmRyb2lkLmNvbTAe
-Fw0xNDExMDYxOTA3NDBaFw00MjAzMjQxOTA3NDBaMIGUMQswCQYDVQQGEwJVUzET
-MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNTW91bnRhaW4gVmlldzEQMA4G
-A1UECgwHQW5kcm9pZDEQMA4GA1UECwwHQW5kcm9pZDEQMA4GA1UEAwwHQW5kcm9p
-ZDEiMCAGCSqGSIb3DQEJARYTYW5kcm9pZEBhbmRyb2lkLmNvbTCCASIwDQYJKoZI
-hvcNAQEBBQADggEPADCCAQoCggEBAOjreE0vTVSRenuzO9vnaWfk0eQzYab0gqpi
-6xAzi6dmD+ugoEKJmbPiuE5Dwf21isZ9uhUUu0dQM46dK4ocKxMRrcnmGxydFn6o
-fs3ODJMXOkv2gKXL/FdbEPdDbxzdu8z3yk+W67udM/fW7WbaQ3DO0knu+izKak/3
-T41c5uoXmQ81UNtAzRGzGchNVXMmWuTGOkg6U+0I2Td7K8yvUMWhAWPPpKLtVH9r
-AL5TzjYNR92izdKcz3AjRsI3CTjtpiVABGeX0TcjRSuZB7K9EK56HV+OFNS6I1NP
-jdD7FIShyGlqqZdUOkAUZYanbpgeT5N7QL6uuqcGpoTOkalu6kkCAwEAAaNQME4w
-HQYDVR0OBBYEFH5DM/m7oArf4O3peeKO0ZIEkrQPMB8GA1UdIwQYMBaAFH5DM/m7
-oArf4O3peeKO0ZIEkrQPMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB
-AHO3NSvDE5jFvMehGGtS8BnFYdFKRIglDMc4niWSzhzOVYRH4WajxdtBWc5fx0ix
-NF/+hVKVhP6AIOQa+++sk+HIi7RvioPPbhjcsVlZe7cUEGrLSSveGouQyc+j0+m6
-JF84kszIl5GGNMTnx0XRPO+g8t6h5LWfnVydgZfpGRRg+WHewk1U2HlvTjIceb0N
-dcoJ8WKJAFWdcuE7VIm4w+vF/DYX/A2Oyzr2+QRhmYSv1cusgAeC1tvH4ap+J1Lg
-UnOu5Kh/FqPLLSwNVQp4Bu7b9QFfqK8Moj84bj88NqRGZgDyqzuTrFxn6FW7dmyA
-yttuAJAEAymk1mipd9+zp38=
------END CERTIFICATE-----
diff --git a/target/product/security/verity_key b/target/product/security/verity_key
deleted file mode 100644
index 31982d9..0000000
--- a/target/product/security/verity_key
+++ /dev/null
Binary files differ
diff --git a/target/product/sysconfig/Android.bp b/target/product/sysconfig/Android.bp
index 29122e4..95042a7 100644
--- a/target/product/sysconfig/Android.bp
+++ b/target/product/sysconfig/Android.bp
@@ -30,8 +30,34 @@
}
prebuilt_etc {
+ name: "preinstalled-packages-platform-generic-system.xml",
+ sub_dir: "sysconfig",
+ src: "preinstalled-packages-platform-generic-system.xml",
+}
+
+prebuilt_etc {
name: "preinstalled-packages-platform-handheld-product.xml",
product_specific: true,
sub_dir: "sysconfig",
src: "preinstalled-packages-platform-handheld-product.xml",
}
+
+prebuilt_etc {
+ name: "preinstalled-packages-platform-handheld-system.xml",
+ sub_dir: "sysconfig",
+ src: "preinstalled-packages-platform-handheld-system.xml",
+}
+
+prebuilt_etc {
+ name: "preinstalled-packages-platform-telephony-product.xml",
+ product_specific: true,
+ sub_dir: "sysconfig",
+ src: "preinstalled-packages-platform-telephony-product.xml",
+}
+
+prebuilt_etc {
+ name: "initial-package-stopped-states-aosp.xml",
+ product_specific: true,
+ sub_dir: "sysconfig",
+ src: "initial-package-stopped-states-aosp.xml",
+}
diff --git a/target/product/sysconfig/initial-package-stopped-states-aosp.xml b/target/product/sysconfig/initial-package-stopped-states-aosp.xml
new file mode 100644
index 0000000..1704ff2
--- /dev/null
+++ b/target/product/sysconfig/initial-package-stopped-states-aosp.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!--
+ ~ Copyright (C) 2023 The Android Open Source Project
+ ~
+ ~ Licensed under the Apache License, Version 2.0 (the "License");
+ ~ you may not use this file except in compliance with the License.
+ ~ You may obtain a copy of the License at
+ ~
+ ~ http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing, software
+ ~ distributed under the License is distributed on an "AS IS" BASIS,
+ ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ ~ See the License for the specific language governing permissions and
+ ~ limitations under the License.
+ -->
+
+<!--
+This XML defines an allowlist for packages that should not be scanned in a "stopped" state.
+When this feature is turned on (controlled by config_stopSystemPackagesByDefault in
+core/res/res/values/config.xml), packages on the system partition that the
+PackageManagerService encounters for the first time are scanned in the "stopped" state.
+This allowlist is also consulted when new users are created on the device. The stopped
+state is not set again on subsequent reboots.
+
+Example usage
+ 1. <initial-package-state package="com.example.app" stopped="false"/>
+     Indicates that the Package Manager should not set the initial stopped state of
+     the system package com.example.app. By default, system apps are marked as stopped.
+ 2. <initial-package-state package="com.example.app" stopped="true"/>
+     Indicates that the Package Manager should set the initial state of the system
+     package com.example.app to "stopped=true". The effect on the package's stopped
+     state is the same as if the package were not included in the allowlist.
+ 3. <initial-package-state package="com.example.app"/>
+ Invalid usage.
+-->
+
+<config>
+ <initial-package-state package="com.android.calendar" stopped="false"/>
+ <initial-package-state package="com.android.camera2" stopped="false"/>
+ <initial-package-state package="com.android.contacts" stopped="false"/>
+ <initial-package-state package="com.android.documentsui" stopped="false"/>
+ <initial-package-state package="com.android.messaging" stopped="false"/>
+ <initial-package-state package="com.android.quicksearchbox" stopped="false"/>
+ <initial-package-state package="com.android.settings" stopped="false"/>
+ <initial-package-state package="com.android.stk" stopped="false"/>
+</config>
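For illustration only, here is a small Python sketch of how the allowlist above could be validated, enforcing that every entry carries the mandatory `stopped` attribute (usage 3 in the comment is invalid). The file name and schema come from this change; the helper itself is hypothetical.

```
import xml.etree.ElementTree as ET

def read_initial_states(path):
    """Map package name -> stopped flag, rejecting entries without one."""
    states = {}
    for entry in ET.parse(path).getroot().iter("initial-package-state"):
        pkg = entry.get("package")
        stopped = entry.get("stopped")
        if stopped is None:
            raise ValueError(f"{pkg}: missing 'stopped' attribute (invalid usage)")
        states[pkg] = (stopped == "true")
    return states

print(read_initial_states("initial-package-stopped-states-aosp.xml"))
```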
diff --git a/target/product/sysconfig/preinstalled-packages-platform-aosp-product.xml b/target/product/sysconfig/preinstalled-packages-platform-aosp-product.xml
index eec1326..1295e1c 100644
--- a/target/product/sysconfig/preinstalled-packages-platform-aosp-product.xml
+++ b/target/product/sysconfig/preinstalled-packages-platform-aosp-product.xml
@@ -20,4 +20,12 @@
<install-in-user-type package="com.android.wallpaperpicker">
<install-in user-type="FULL" />
</install-in-user-type>
+
+ <!-- System packages that should not be pre-installed on the CLONE profile. -->
+ <!-- Messages -->
+ <install-in-user-type package="com.android.messaging">
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
</config>
diff --git a/target/product/sysconfig/preinstalled-packages-platform-generic-system.xml b/target/product/sysconfig/preinstalled-packages-platform-generic-system.xml
new file mode 100644
index 0000000..e2482e1
--- /dev/null
+++ b/target/product/sysconfig/preinstalled-packages-platform-generic-system.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2022 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<!-- System packages to preinstall on all devices with generic_system, per user type.
+ Documentation at frameworks/base/data/etc/preinstalled-packages-platform.xml
+-->
+<config>
+ <!-- Stk (SIM ToolKit)
+ TODO(b/258055479): Check if this should be preinstalled on SYSTEM user -->
+ <install-in-user-type package="com.android.stk">
+ <install-in user-type="SYSTEM" />
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+</config>
+
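As a rough sketch of how entries like the one above read (assumption: the real matching in PackageManager treats FULL/PROFILE as user-type categories, so a concrete type such as the CLONE profile can match PROFILE; this toy version only compares literal strings):

```
import xml.etree.ElementTree as ET

def packages_for_user_type(path, user_type):
    """Packages whose <install-in> lists user_type and whose
    <do-not-install-in> does not."""
    pkgs = []
    for entry in ET.parse(path).getroot().iter("install-in-user-type"):
        allowed = {e.get("user-type") for e in entry.findall("install-in")}
        denied = {e.get("user-type") for e in entry.findall("do-not-install-in")}
        if user_type in allowed and user_type not in denied:
            pkgs.append(entry.get("package"))
    return pkgs

# com.android.stk is explicitly excluded from the CLONE profile above:
print(packages_for_user_type(
    "preinstalled-packages-platform-generic-system.xml",
    "android.os.usertype.profile.CLONE"))
```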
diff --git a/target/product/sysconfig/preinstalled-packages-platform-handheld-product.xml b/target/product/sysconfig/preinstalled-packages-platform-handheld-product.xml
index a5d9ba2..54add22 100644
--- a/target/product/sysconfig/preinstalled-packages-platform-handheld-product.xml
+++ b/target/product/sysconfig/preinstalled-packages-platform-handheld-product.xml
@@ -17,6 +17,56 @@
Documentation at frameworks/base/data/etc/preinstalled-packages-platform.xml
-->
<config>
+ <!-- Android Keyboard (AOSP) (LatinIME) TODO(b/258055479) -->
+ <install-in-user-type package="com.android.inputmethod.latin">
+ <install-in user-type="SYSTEM" />
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ </install-in-user-type>
+
+ <!-- Calendar -->
+ <install-in-user-type package="com.android.calendar">
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+
+ <!-- Camera (Camera2) -->
+ <install-in-user-type package="com.android.camera2">
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+
+ <!-- Clock (DeskClock) -->
+ <install-in-user-type package="com.android.deskclock">
+ <install-in user-type="FULL" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+
+ <!-- Contacts -->
+ <install-in-user-type package="com.android.contacts">
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+
+ <!-- Gallery (Gallery2) -->
+ <install-in-user-type package="com.android.gallery3d">
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+
+ <!-- Search (QuickSearchBox) TODO(b/258055479) -->
+ <install-in-user-type package="com.android.quicksearchbox">
+ <install-in user-type="SYSTEM" />
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+
+ <!-- WallpaperCropper -->
<install-in-user-type package="com.android.wallpapercropper">
<install-in user-type="FULL" />
</install-in-user-type>
diff --git a/target/product/sysconfig/preinstalled-packages-platform-handheld-system.xml b/target/product/sysconfig/preinstalled-packages-platform-handheld-system.xml
new file mode 100644
index 0000000..02b03f1
--- /dev/null
+++ b/target/product/sysconfig/preinstalled-packages-platform-handheld-system.xml
@@ -0,0 +1,34 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2022 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<!-- System packages to preinstall on all devices with handheld_system, per user type.
+ Documentation at frameworks/base/data/etc/preinstalled-packages-platform.xml
+-->
+<config>
+ <!-- Files (DocumentsUI) TODO(b/258055479) -->
+ <install-in-user-type package="com.android.documentsui">
+ <install-in user-type="SYSTEM" />
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+
+ <!-- Printer (BuiltInPrintService) (Does not show on launcher but shows on the share sheet) -->
+ <install-in-user-type package="com.android.bips">
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+</config>
diff --git a/target/product/sysconfig/preinstalled-packages-platform-telephony-product.xml b/target/product/sysconfig/preinstalled-packages-platform-telephony-product.xml
new file mode 100644
index 0000000..cc1c135
--- /dev/null
+++ b/target/product/sysconfig/preinstalled-packages-platform-telephony-product.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2022 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<!-- System packages to preinstall on all devices with telephony_product, per user type.
+ Documentation at frameworks/base/data/etc/preinstalled-packages-platform.xml
+-->
+<config>
+ <!-- Phone
+ TODO(b/258055373): Check if this should be preinstalled on SYSTEM user -->
+ <install-in-user-type package="com.android.dialer">
+ <install-in user-type="SYSTEM" />
+ <install-in user-type="FULL" />
+ <install-in user-type="PROFILE" />
+ <do-not-install-in user-type="android.os.usertype.profile.CLONE" />
+ </install-in-user-type>
+</config>
+
diff --git a/target/product/telephony_product.mk b/target/product/telephony_product.mk
index 18374d4..aa70f46 100644
--- a/target/product/telephony_product.mk
+++ b/target/product/telephony_product.mk
@@ -21,3 +21,4 @@
PRODUCT_PACKAGES += \
Dialer \
ImsServiceEntitlement \
+ preinstalled-packages-platform-telephony-product.xml
diff --git a/target/product/verity.mk b/target/product/verity.mk
deleted file mode 100644
index 5f09283..0000000
--- a/target/product/verity.mk
+++ /dev/null
@@ -1,29 +0,0 @@
-#
-# Copyright (C) 2014 The Android Open Source Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-# Provides dependencies necessary for verified boot.
-
-PRODUCT_SUPPORTS_BOOT_SIGNER := true
-PRODUCT_SUPPORTS_VERITY := true
-PRODUCT_SUPPORTS_VERITY_FEC := true
-
-# The dev key is used to sign boot and recovery images, and the verity
-# metadata table. Actual product deliverables will be re-signed by hand.
-# We expect this file to exist with the suffixes ".x509.pem" and ".pk8".
-PRODUCT_VERITY_SIGNING_KEY := build/make/target/product/security/verity
-
-PRODUCT_PACKAGES += \
- verity_key
diff --git a/target/product/virtual_ab_ota/android_t_baseline.mk b/target/product/virtual_ab_ota/android_t_baseline.mk
index 18e08e4..418aaa4 100644
--- a/target/product/virtual_ab_ota/android_t_baseline.mk
+++ b/target/product/virtual_ab_ota/android_t_baseline.mk
@@ -12,41 +12,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+#
-# This file enables baseline features, such as io_uring,
-# userspace merge, etc. But sets compression method to none.
-# This .mk file also removes snapuserd from vendor ramdisk,
-# as T launching devices will have init_boot which has snapuserd
-# in generic ramdisk.
-# T launching devices should include this .mk file, and configure
-# compression algorithm by setting
-# PRODUCT_VIRTUAL_AB_COMPRESSION_METHOD to gz or brotli. Complete
-# set of supported algorithms can be found in
-# system/core/fs_mgr/libsnapshot/cow_writer.cpp
-
-PRODUCT_VIRTUAL_AB_OTA := true
-
-PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.enabled=true
-
-PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.compression.enabled=true
-PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.userspace.snapshots.enabled=true
-PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.io_uring.enabled=true
-PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.compression.xor.enabled=true
-
-PRODUCT_VIRTUAL_AB_COMPRESSION := true
-PRODUCT_VIRTUAL_AB_COMPRESSION_METHOD ?= none
-PRODUCT_PACKAGES += \
- snapuserd \
-
-# For dedicated recovery partitions, we need to include snapuserd
-# For GKI devices, BOARD_USES_RECOVERY_AS_BOOT is empty, but
-# so is BOARD_MOVE_RECOVERY_RESOURCES_TO_VENDOR_BOOT.
-ifdef BUILDING_RECOVERY_IMAGE
-ifneq ($(BOARD_USES_RECOVERY_AS_BOOT),true)
-ifneq ($(BOARD_MOVE_RECOVERY_RESOURCES_TO_VENDOR_BOOT),true)
-PRODUCT_PACKAGES += \
- snapuserd.recovery
-endif
-endif
-endif
-
+# This file should be used only by T launching devices. It is kept
+# solely for backward compatibility, so that builds for T launch
+# devices don't break.
+#
+# All U+ launching devices should instead use vabc_features.mk.
+$(call inherit-product, $(SRC_TARGET_DIR)/product/virtual_ab_ota/vabc_features.mk)
diff --git a/target/product/virtual_ab_ota/compression.mk b/target/product/virtual_ab_ota/compression.mk
index d5bd2a5..dc1ee3e 100644
--- a/target/product/virtual_ab_ota/compression.mk
+++ b/target/product/virtual_ab_ota/compression.mk
@@ -19,6 +19,12 @@
PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.compression.enabled=true
PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.userspace.snapshots.enabled=true
PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.io_uring.enabled=true
+PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.batch_writes=true
+
+# Enabling this property will improve OTA install time
+# but will use an additional CPU core.
+# PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.compression.threads=true
+
PRODUCT_VIRTUAL_AB_COMPRESSION := true
PRODUCT_PACKAGES += \
snapuserd.vendor_ramdisk \
diff --git a/target/product/virtual_ab_ota/vabc_features.mk b/target/product/virtual_ab_ota/vabc_features.mk
new file mode 100644
index 0000000..874eb9c
--- /dev/null
+++ b/target/product/virtual_ab_ota/vabc_features.mk
@@ -0,0 +1,46 @@
+#
+# Copyright (C) 2022 The Android Open-Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This file enables baseline Virtual A/B features such as io_uring and
+# userspace merge, but sets the compression method to none. It also
+# removes snapuserd from the vendor ramdisk, as T launching devices
+# have init_boot, whose generic ramdisk already contains snapuserd.
+#
+# T and U launching devices should include this .mk file and configure
+# the compression algorithm by setting
+# PRODUCT_VIRTUAL_AB_COMPRESSION_METHOD to lz4, gz or brotli. The
+# complete set of supported algorithms can be found in
+# system/core/fs_mgr/libsnapshot/cow_writer.cpp
+
+PRODUCT_VIRTUAL_AB_OTA := true
+
+PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.enabled=true
+
+PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.compression.enabled=true
+PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.userspace.snapshots.enabled=true
+PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.io_uring.enabled=true
+PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.compression.xor.enabled=true
+PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.batch_writes=true
+
+# Enabling this property will improve OTA install time
+# but will use an additional CPU core.
+# PRODUCT_VENDOR_PROPERTIES += ro.virtual_ab.compression.threads=true
+
+PRODUCT_VIRTUAL_AB_COMPRESSION := true
+PRODUCT_VIRTUAL_AB_COMPRESSION_METHOD ?= none
+PRODUCT_PACKAGES += \
+ snapuserd \
+
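A quick illustrative check (not part of the build system; the method names below come from the comment in this file, and the authoritative list lives in system/core/fs_mgr/libsnapshot/cow_writer.cpp) of the values a device might set:

```
# Hypothetical validator for PRODUCT_VIRTUAL_AB_COMPRESSION_METHOD values.
KNOWN_METHODS = {"none", "lz4", "gz", "brotli"}

def check_compression_method(value):
    if value not in KNOWN_METHODS:
        raise ValueError(
            f"PRODUCT_VIRTUAL_AB_COMPRESSION_METHOD={value!r}; "
            f"expected one of {sorted(KNOWN_METHODS)}")

check_compression_method("lz4")   # ok
check_compression_method("none")  # the default when unset
```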
diff --git a/tests/b_tests.sh b/tests/b_tests.sh
new file mode 100755
index 0000000..491d762
--- /dev/null
+++ b/tests/b_tests.sh
@@ -0,0 +1,41 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# These commands are expected to always return successfully
+
+trap 'exit 1' ERR
+
+source $(dirname $0)/../envsetup.sh
+
+# lunch required to set up PATH to use b
+lunch aosp_arm64
+
+test_target=//build/bazel/scripts/difftool:difftool
+
+b build "$test_target"
+b build -- "$test_target"
+b build "$test_target" --run-soong-tests
+b build --run-soong-tests "$test_target"
+b --run-soong-tests build "$test_target"
+# Test that the bazel server can be restarted once shut down. If run in a
+# docker container, you need to run the docker container with --init or
+# have some other process as PID 1 that can reap zombies.
+b shutdown
+b cquery 'kind(test, //build/bazel/examples/android_app/...)' --config=android
+b run $test_target -- --help >/dev/null
+
+# Workflow tests for bmod
+bmod libm
+b run $(bmod fastboot) -- help
+b build $(bmod libm) $(bmod libcutils) --config=android
diff --git a/tests/envsetup_tests.sh b/tests/envsetup_tests.sh
index abdcd56..6b41766 100755
--- a/tests/envsetup_tests.sh
+++ b/tests/envsetup_tests.sh
@@ -1,37 +1,22 @@
#!/bin/bash -e
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
-source $(dirname $0)/../envsetup.sh
-
-unset TARGET_PRODUCT TARGET_BUILD_VARIANT TARGET_PLATFORM_VERSION
-
-function check_lunch
-(
- echo lunch $1
- set +e
- lunch $1 > /dev/null 2> /dev/null
- set -e
- [ "$TARGET_PRODUCT" = "$2" ] || ( echo "lunch $1: expected TARGET_PRODUCT='$2', got '$TARGET_PRODUCT'" && exit 1 )
- [ "$TARGET_BUILD_VARIANT" = "$3" ] || ( echo "lunch $1: expected TARGET_BUILD_VARIANT='$3', got '$TARGET_BUILD_VARIANT'" && exit 1 )
- [ "$TARGET_PLATFORM_VERSION" = "$4" ] || ( echo "lunch $1: expected TARGET_PLATFORM_VERSION='$4', got '$TARGET_PLATFORM_VERSION'" && exit 1 )
+tests=(
+ $(dirname $0)/lunch_tests.sh
)
-default_version=$(get_build_var DEFAULT_PLATFORM_VERSION)
-valid_version=PPR1
-
-# lunch tests
-check_lunch "aosp_arm64" "aosp_arm64" "eng" ""
-check_lunch "aosp_arm64-userdebug" "aosp_arm64" "userdebug" ""
-check_lunch "aosp_arm64-userdebug-$default_version" "aosp_arm64" "userdebug" "$default_version"
-check_lunch "aosp_arm64-userdebug-$valid_version" "aosp_arm64" "userdebug" "$valid_version"
-check_lunch "abc" "" "" ""
-check_lunch "aosp_arm64-abc" "" "" ""
-check_lunch "aosp_arm64-userdebug-abc" "" "" ""
-check_lunch "aosp_arm64-abc-$valid_version" "" "" ""
-check_lunch "abc-userdebug-$valid_version" "" "" ""
-check_lunch "-" "" "" ""
-check_lunch "--" "" "" ""
-check_lunch "-userdebug" "" "" ""
-check_lunch "-userdebug-" "" "" ""
-check_lunch "-userdebug-$valid_version" "" "" ""
-check_lunch "aosp_arm64-userdebug-$valid_version-" "" "" ""
-check_lunch "aosp_arm64-userdebug-$valid_version-abc" "" "" ""
+for test in "${tests[@]}"; do
+ bash -x $test
+done
diff --git a/tests/inherits_in_regular_variables/inherit1.rbc b/tests/inherits_in_regular_variables/inherit1.rbc
new file mode 100644
index 0000000..4ce8825
--- /dev/null
+++ b/tests/inherits_in_regular_variables/inherit1.rbc
@@ -0,0 +1,21 @@
+# Copyright 2023 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+load("//build/make/core:product_config.rbc", "rblf")
+
+def init(g, handle):
+ cfg = rblf.cfg(handle)
+
+ cfg.setdefault("PRODUCT_PACKAGES", [])
+ cfg["PRODUCT_PACKAGES"] += ["bar"]
diff --git a/tests/inherits_in_regular_variables/product.rbc b/tests/inherits_in_regular_variables/product.rbc
new file mode 100644
index 0000000..c193c65
--- /dev/null
+++ b/tests/inherits_in_regular_variables/product.rbc
@@ -0,0 +1,27 @@
+# Copyright 2023 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+load("//build/make/core:product_config.rbc", "rblf")
+load(":inherit1.rbc", _inherit1_init = "init")
+
+def init(g, handle):
+ cfg = rblf.cfg(handle)
+
+ cfg.setdefault("PRODUCT_PACKAGES", [])
+ cfg["PRODUCT_PACKAGES"] += ["foo"]
+
+ g["PRODUCT_PACKAGES_COPY"] = cfg["PRODUCT_PACKAGES"]
+
+ rblf.inherit(handle, "test/inherit1", _inherit1_init)
+
diff --git a/tests/inherits_in_regular_variables/test.rbc b/tests/inherits_in_regular_variables/test.rbc
new file mode 100644
index 0000000..3a76d8a
--- /dev/null
+++ b/tests/inherits_in_regular_variables/test.rbc
@@ -0,0 +1,30 @@
+# Copyright 2023 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+load("//build/make/core:product_config.rbc", "rblf")
+load("//build/make/tests/input_variables.rbc", input_variables_init = "init")
+load(":product.rbc", "init")
+
+
+def assert_eq(expected, actual):
+ if expected != actual:
+ fail("Expected '%s', got '%s'" % (expected, actual))
+
+def test():
+ (globals, globals_base) = rblf.product_configuration("test/device", init, input_variables_init)
+ assert_eq(["foo", "bar"], globals["PRODUCTS.test/device.mk.PRODUCT_PACKAGES"])
+ assert_eq(["foo", ("test/inherit1",)], globals["PRODUCT_PACKAGES_COPY"])
+
+ # Ideally we would check that rblf.printvars returns the correct result, but we don't have
+ # a good way to intercept its output or mock rblf_cli
diff --git a/tests/lunch_tests.sh b/tests/lunch_tests.sh
new file mode 100755
index 0000000..4285d13
--- /dev/null
+++ b/tests/lunch_tests.sh
@@ -0,0 +1,48 @@
+#!/usr/bin/env bash
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+source $(dirname $0)/../envsetup.sh
+
+unset TARGET_PRODUCT TARGET_BUILD_VARIANT TARGET_PLATFORM_VERSION
+
+function check_lunch
+(
+ echo lunch $1
+ set +e
+ lunch $1 > /dev/null 2> /dev/null
+ set -e
+ [ "$TARGET_PRODUCT" = "$2" ] || ( echo "lunch $1: expected TARGET_PRODUCT='$2', got '$TARGET_PRODUCT'" && exit 1 )
+ [ "$TARGET_BUILD_VARIANT" = "$3" ] || ( echo "lunch $1: expected TARGET_BUILD_VARIANT='$3', got '$TARGET_BUILD_VARIANT'" && exit 1 )
+ [ "$TARGET_PLATFORM_VERSION" = "$4" ] || ( echo "lunch $1: expected TARGET_PLATFORM_VERSION='$4', got '$TARGET_PLATFORM_VERSION'" && exit 1 )
+)
+
+default_version=$(get_build_var DEFAULT_PLATFORM_VERSION)
+
+# lunch tests
+check_lunch "aosp_arm64" "aosp_arm64" "eng" ""
+check_lunch "aosp_arm64-userdebug" "aosp_arm64" "userdebug" ""
+check_lunch "aosp_arm64-userdebug-$default_version" "aosp_arm64" "userdebug" "$default_version"
+check_lunch "abc" "" "" ""
+check_lunch "aosp_arm64-abc" "" "" ""
+check_lunch "aosp_arm64-userdebug-abc" "" "" ""
+check_lunch "aosp_arm64-abc-$default_version" "" "" ""
+check_lunch "abc-userdebug-$default_version" "" "" ""
+check_lunch "-" "" "" ""
+check_lunch "--" "" "" ""
+check_lunch "-userdebug" "" "" ""
+check_lunch "-userdebug-" "" "" ""
+check_lunch "-userdebug-$default_version" "" "" ""
+check_lunch "aosp_arm64-userdebug-$default_version-" "" "" ""
+check_lunch "aosp_arm64-userdebug-$default_version-abc" "" "" ""
diff --git a/tests/roboleaf_tests.sh b/tests/roboleaf_tests.sh
new file mode 100755
index 0000000..2d13766
--- /dev/null
+++ b/tests/roboleaf_tests.sh
@@ -0,0 +1,22 @@
+#!/bin/bash -e
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tests=(
+ $(dirname $0)/b_tests.sh
+)
+
+for test in "${tests[@]}"; do
+ bash -x $test
+done
diff --git a/tests/run.rbc b/tests/run.rbc
index 2d35e85..33583eb 100644
--- a/tests/run.rbc
+++ b/tests/run.rbc
@@ -28,6 +28,7 @@
load("//build/make/tests/single_value_inheritance:test.rbc", test_single_value_inheritance = "test")
load("//build/make/tests/artifact_path_requirements:test.rbc", test_artifact_path_requirements = "test")
load("//build/make/tests/prefixed_sort_order:test.rbc", test_prefixed_sort_order = "test")
+load("//build/make/tests/inherits_in_regular_variables:test.rbc", test_inherits_in_regular_variables = "test")
def assert_eq(expected, actual):
if expected != actual:
@@ -43,7 +44,12 @@
assert_eq("", rblf.mkstrip(" \n \t "))
assert_eq("a b c", rblf.mkstrip(" a b \n c \t"))
-assert_eq(1, rblf.mkstrip(1))
+assert_eq("1", rblf.mkstrip("1 "))
+
+assert_eq(["a", "b"], rblf.words("a b"))
+assert_eq(["a", "b", "c"], rblf.words(["a b", "c"]))
+# 1-tuple like we use in product variables
+assert_eq(["a b", ("c",)], rblf.words(["a b", ("c",)]))
assert_eq("b1 b2", rblf.mksubst("a", "b", "a1 a2"))
assert_eq(["b1", "x2"], rblf.mksubst("a", "b", ["a1", "x2"]))
@@ -81,6 +87,19 @@
assert_eq(cwd+"/foo/bar "+cwd+"/foo/baz", rblf.abspath("foo/bar foo/baz"))
assert_eq("/baz", rblf.abspath("/../../../../../../../../../../../../../../../../baz"))
+assert_eq("foo", rblf.first_word("foo bar"))
+assert_eq("foo", rblf.first_word(["foo", "bar"]))
+assert_eq("", rblf.first_word(""))
+assert_eq("", rblf.first_word([]))
+assert_eq("bar", rblf.last_word("foo bar"))
+assert_eq("bar", rblf.last_word(["foo", "bar"]))
+assert_eq("", rblf.last_word(""))
+assert_eq("", rblf.last_word([]))
+
+assert_eq(["foo", "bar"], rblf.flatten_2d_list([["foo", "bar"]]))
+assert_eq(["foo", "bar"], rblf.flatten_2d_list([["foo"], ["bar"]]))
+assert_eq([], rblf.flatten_2d_list([]))
+
assert_eq(
["build/make/tests/board.rbc", "build/make/tests/board_input_vars.rbc"],
rblf.expand_wildcard("build/make/tests/board*.rbc")
@@ -149,6 +168,18 @@
assert_eq({"A_LIST_VARIABLE": ["foo", "bar"]}, board_globals)
assert_eq({"A_LIST_VARIABLE": ["foo"]}, board_globals_base)
+g = {"FOO": "a", "BAR": "c", "BAZ": "e"}
+cfg = {"FOO": "b", "BAR": "d", "BAZ": "f"}
+rblf.clear_var_list(g, struct(cfg=cfg), "FOO BAR NEWVAR")
+assert_eq("", g["FOO"])
+assert_eq("", cfg["FOO"])
+assert_eq("", g["BAR"])
+assert_eq("", cfg["BAR"])
+assert_eq("e", g["BAZ"])
+assert_eq("f", cfg["BAZ"])
+assert_eq("", g.get("NEWVAR"))
+
test_single_value_inheritance()
test_artifact_path_requirements()
test_prefixed_sort_order()
+test_inherits_in_regular_variables()
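The new assertions in run.rbc above double as a spec for the added rblf helpers. Here is a Python rendering of the behavior those assertions imply (the real implementations are Starlark builtins in product_config.rbc; corner cases beyond the asserted ones are guesses):

```
def words(value):
    # Strings split on whitespace. A list of plain strings is re-split
    # element by element; a list containing non-strings (the 1-tuples used
    # in product variables) is passed through unchanged.
    if isinstance(value, str):
        return value.split()
    if all(isinstance(item, str) for item in value):
        return [w for item in value for w in item.split()]
    return list(value)

def first_word(value):
    w = words(value)
    return w[0] if w else ""

def last_word(value):
    w = words(value)
    return w[-1] if w else ""

def flatten_2d_list(lst):
    return [x for sub in lst for x in sub]

assert words(["a b", "c"]) == ["a", "b", "c"]
assert words(["a b", ("c",)]) == ["a b", ("c",)]
assert first_word("foo bar") == "foo" and last_word([]) == ""
assert flatten_2d_list([["foo"], ["bar"]]) == ["foo", "bar"]
```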
diff --git a/tools/Android.bp b/tools/Android.bp
index 6601c60..bea0602 100644
--- a/tools/Android.bp
+++ b/tools/Android.bp
@@ -49,3 +49,36 @@
out: ["kernel_release.txt"],
cmd: "$(location) --tools lz4:$(location lz4) --input $(in) --output-release > $(out)"
}
+
+cc_binary_host {
+ name: "build-runfiles",
+ srcs: ["build-runfiles.cc"],
+}
+
+python_binary_host {
+ name: "check_radio_versions",
+ srcs: ["check_radio_versions.py"],
+}
+
+python_binary_host {
+ name: "check_elf_file",
+ srcs: ["check_elf_file.py"],
+}
+
+python_binary_host {
+ name: "generate_gts_shared_report",
+ srcs: ["generate_gts_shared_report.py"],
+}
+
+python_binary_host {
+ name: "list_files",
+ main: "list_files.py",
+ srcs: [
+ "list_files.py",
+ ],
+ version: {
+ py3: {
+ embedded_launcher: true,
+ }
+ }
+}
diff --git a/tools/BUILD.bazel b/tools/BUILD.bazel
index 3170820..0de178b 100644
--- a/tools/BUILD.bazel
+++ b/tools/BUILD.bazel
@@ -1,20 +1,27 @@
py_library(
- name="event_log_tags",
+ name = "event_log_tags",
srcs = ["event_log_tags.py"],
)
py_binary(
- name="java-event-log-tags",
- srcs=["java-event-log-tags.py"],
- deps=[":event_log_tags"],
- visibility = ["//visibility:public"],
+ name = "java-event-log-tags",
+ srcs = ["java-event-log-tags.py"],
python_version = "PY3",
+ visibility = ["//visibility:public"],
+ deps = [":event_log_tags"],
)
py_binary(
- name="merge-event-log-tags",
- srcs=["merge-event-log-tags.py"],
- deps=[":event_log_tags"],
- visibility = ["//visibility:public"],
+ name = "merge-event-log-tags",
+ srcs = ["merge-event-log-tags.py"],
python_version = "PY3",
+ visibility = ["//visibility:public"],
+ deps = [":event_log_tags"],
+)
+
+py_binary(
+ name = "check_elf_file",
+ srcs = ["check_elf_file.py"],
+ python_version = "PY3",
+ visibility = ["//visibility:public"],
)
diff --git a/tools/atree/fs.cpp b/tools/atree/fs.cpp
index 6cd080e..d004e97 100644
--- a/tools/atree/fs.cpp
+++ b/tools/atree/fs.cpp
@@ -177,7 +177,7 @@
} else {
// Split the arguments if more than 1
char* cmd = strdup(strip_cmd);
- const char** args = (const char**) malloc(sizeof(const char*) * (num_args + 2));
+ const char** args = (const char**) calloc((num_args + 2), sizeof(const char*));
const char** curr = args;
char* s = cmd;
diff --git a/tools/auto_gen_test_config.py b/tools/auto_gen_test_config.py
index 943f238..ce64160 100755
--- a/tools/auto_gen_test_config.py
+++ b/tools/auto_gen_test_config.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
#
# Copyright (C) 2017 The Android Open Source Project
#
@@ -69,7 +69,7 @@
module = os.path.splitext(os.path.basename(target_config))[0]
instrumentation = instrumentation_elements[0]
manifest = manifest_elements[0]
- if instrumentation.attributes.has_key(ATTRIBUTE_LABEL):
+ if ATTRIBUTE_LABEL in instrumentation.attributes:
label = instrumentation.attributes[ATTRIBUTE_LABEL].value
else:
label = module
diff --git a/tools/build-runfiles.cc b/tools/build-runfiles.cc
new file mode 100644
index 0000000..b6197f0
--- /dev/null
+++ b/tools/build-runfiles.cc
@@ -0,0 +1,426 @@
+// Copyright 2014 The Bazel Authors. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// This program creates a "runfiles tree" from a "runfiles manifest".
+//
+// The command line arguments are an input manifest INPUT and an output
+// directory RUNFILES. First, the files in the RUNFILES directory are scanned
+// and any extraneous ones are removed. Second, any missing files are created.
+// Finally, a copy of the input manifest is written to RUNFILES/MANIFEST.
+//
+// The input manifest consists of lines, each containing a relative path within
+// the runfiles, a space, and an optional absolute path. If this second path
+// is present, a symlink is created pointing to it; otherwise an empty file is
+// created.
+//
+// Given the line
+// <workspace root>/output/path /real/path
+// we will create directories
+// RUNFILES/<workspace root>
+// RUNFILES/<workspace root>/output
+// a symlink
+// RUNFILES/<workspace root>/output/path -> /real/path
+// and the output manifest will contain a line
+// <workspace root>/output/path /real/path
+//
+// If --use_metadata is supplied, every other line is treated as opaque
+// metadata, and is ignored here.
+//
+// All output paths must be relative and generally (but not always) begin with
+// <workspace root>. No output path may be equal to another. No output path may
+// be a path prefix of another.
+
+#define _FILE_OFFSET_BITS 64
+
+#include <dirent.h>
+#include <err.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <limits.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/stat.h>
+#include <unistd.h>
+
+#include <map>
+#include <string>
+
+// program_invocation_short_name is not portable.
+static const char *argv0;
+
+const char *input_filename;
+const char *output_base_dir;
+
+enum FileType {
+ FILE_TYPE_REGULAR,
+ FILE_TYPE_DIRECTORY,
+ FILE_TYPE_SYMLINK
+};
+
+struct FileInfo {
+ FileType type;
+ std::string symlink_target;
+
+ bool operator==(const FileInfo &other) const {
+ return type == other.type && symlink_target == other.symlink_target;
+ }
+
+ bool operator!=(const FileInfo &other) const {
+ return !(*this == other);
+ }
+};
+
+typedef std::map<std::string, FileInfo> FileInfoMap;
+
+class RunfilesCreator {
+ public:
+ explicit RunfilesCreator(const std::string &output_base)
+ : output_base_(output_base),
+ output_filename_("MANIFEST"),
+ temp_filename_(output_filename_ + ".tmp") {
+ SetupOutputBase();
+ if (chdir(output_base_.c_str()) != 0) {
+ err(2, "chdir '%s'", output_base_.c_str());
+ }
+ }
+
+ void ReadManifest(const std::string &manifest_file, bool allow_relative,
+ bool use_metadata) {
+ FILE *outfile = fopen(temp_filename_.c_str(), "w");
+ if (!outfile) {
+ err(2, "opening '%s/%s' for writing", output_base_.c_str(),
+ temp_filename_.c_str());
+ }
+ FILE *infile = fopen(manifest_file.c_str(), "r");
+ if (!infile) {
+ err(2, "opening '%s' for reading", manifest_file.c_str());
+ }
+
+ // read input manifest
+ int lineno = 0;
+ char buf[3 * PATH_MAX];
+ while (fgets(buf, sizeof buf, infile)) {
+ // copy line to output manifest
+ if (fputs(buf, outfile) == EOF) {
+ err(2, "writing to '%s/%s'", output_base_.c_str(),
+ temp_filename_.c_str());
+ }
+
+ // parse line
+ ++lineno;
+ // Skip metadata lines. They are used solely for
+ // dependency checking.
+ if (use_metadata && lineno % 2 == 0) continue;
+
+ char *tok = strtok(buf, " \n");
+ if (tok == nullptr) {
+ continue;
+ } else if (*tok == '/') {
+ errx(2, "%s:%d: paths must not be absolute", input_filename, lineno);
+ }
+ std::string link(tok);
+
+ const char *target = strtok(nullptr, " \n");
+ if (target == nullptr) {
+ target = "";
+ } else if (strtok(nullptr, " \n") != nullptr) {
+ errx(2, "%s:%d: link or target filename contains space", input_filename, lineno);
+ } else if (!allow_relative && target[0] != '/') {
+ errx(2, "%s:%d: expected absolute path", input_filename, lineno);
+ }
+
+ FileInfo *info = &manifest_[link];
+ if (target[0] == '\0') {
+ // No target means an empty file.
+ info->type = FILE_TYPE_REGULAR;
+ } else {
+ info->type = FILE_TYPE_SYMLINK;
+ info->symlink_target = target;
+ }
+
+ FileInfo parent_info;
+ parent_info.type = FILE_TYPE_DIRECTORY;
+
+ while (true) {
+ int k = link.rfind('/');
+ if (k < 0) break;
+ link.erase(k, std::string::npos);
+ if (!manifest_.insert(std::make_pair(link, parent_info)).second) break;
+ }
+ }
+ if (fclose(outfile) != 0) {
+ err(2, "writing to '%s/%s'", output_base_.c_str(),
+ temp_filename_.c_str());
+ }
+ fclose(infile);
+
+ // Don't delete the temp manifest file.
+ manifest_[temp_filename_].type = FILE_TYPE_REGULAR;
+ }
+
+ void CreateRunfiles() {
+ if (unlink(output_filename_.c_str()) != 0 && errno != ENOENT) {
+ err(2, "removing previous file at '%s/%s'", output_base_.c_str(),
+ output_filename_.c_str());
+ }
+
+ ScanTreeAndPrune(".");
+ CreateFiles();
+
+ // rename output file into place
+ if (rename(temp_filename_.c_str(), output_filename_.c_str()) != 0) {
+ err(2, "renaming '%s/%s' to '%s/%s'",
+ output_base_.c_str(), temp_filename_.c_str(),
+ output_base_.c_str(), output_filename_.c_str());
+ }
+ }
+
+ private:
+ void SetupOutputBase() {
+ struct stat st;
+ if (stat(output_base_.c_str(), &st) != 0) {
+ // Technically, this will cause problems if the user's umask contains
+ // 0200, but we don't care. Anyone who does that deserves what's coming.
+ if (mkdir(output_base_.c_str(), 0777) != 0) {
+ err(2, "creating directory '%s'", output_base_.c_str());
+ }
+ } else {
+ EnsureDirReadAndWritePerms(output_base_);
+ }
+ }
+
+ void ScanTreeAndPrune(const std::string &path) {
+ // A note on non-empty files:
+ // We don't distinguish between empty and non-empty files. That is, if
+ // there's a file that has contents, we don't truncate it here, even though
+    // the manifest can only request creation of empty files. Given that
+ // .runfiles are *supposed* to be immutable, this shouldn't be a problem.
+ EnsureDirReadAndWritePerms(path);
+
+ struct dirent *entry;
+ DIR *dh = opendir(path.c_str());
+ if (!dh) {
+ err(2, "opendir '%s'", path.c_str());
+ }
+
+ errno = 0;
+ const std::string prefix = (path == "." ? "" : path + "/");
+ while ((entry = readdir(dh)) != nullptr) {
+ if (!strcmp(entry->d_name, ".") || !strcmp(entry->d_name, "..")) continue;
+
+ std::string entry_path = prefix + entry->d_name;
+ FileInfo actual_info;
+ actual_info.type = DentryToFileType(entry_path, entry);
+
+ if (actual_info.type == FILE_TYPE_SYMLINK) {
+ ReadLinkOrDie(entry_path, &actual_info.symlink_target);
+ }
+
+ FileInfoMap::iterator expected_it = manifest_.find(entry_path);
+ if (expected_it == manifest_.end() ||
+ expected_it->second != actual_info) {
+ DelTree(entry_path, actual_info.type);
+ } else {
+ manifest_.erase(expected_it);
+ if (actual_info.type == FILE_TYPE_DIRECTORY) {
+ ScanTreeAndPrune(entry_path);
+ }
+ }
+
+ errno = 0;
+ }
+ if (errno != 0) {
+ err(2, "reading directory '%s'", path.c_str());
+ }
+ closedir(dh);
+ }
+
+ void CreateFiles() {
+ for (FileInfoMap::const_iterator it = manifest_.begin();
+ it != manifest_.end(); ++it) {
+ const std::string &path = it->first;
+ switch (it->second.type) {
+ case FILE_TYPE_DIRECTORY:
+ if (mkdir(path.c_str(), 0777) != 0) {
+ err(2, "mkdir '%s'", path.c_str());
+ }
+ break;
+ case FILE_TYPE_REGULAR:
+ {
+ int fd = open(path.c_str(), O_CREAT|O_EXCL|O_WRONLY, 0555);
+ if (fd < 0) {
+ err(2, "creating empty file '%s'", path.c_str());
+ }
+ close(fd);
+ }
+ break;
+ case FILE_TYPE_SYMLINK:
+ {
+ const std::string& target = it->second.symlink_target;
+ if (symlink(target.c_str(), path.c_str()) != 0) {
+ err(2, "symlinking '%s' -> '%s'", path.c_str(), target.c_str());
+ }
+ }
+ break;
+ }
+ }
+ }
+
+ FileType DentryToFileType(const std::string &path, struct dirent *ent) {
+#ifdef _DIRENT_HAVE_D_TYPE
+ if (ent->d_type != DT_UNKNOWN) {
+ if (ent->d_type == DT_DIR) {
+ return FILE_TYPE_DIRECTORY;
+ } else if (ent->d_type == DT_LNK) {
+ return FILE_TYPE_SYMLINK;
+ } else {
+ return FILE_TYPE_REGULAR;
+ }
+ } else // NOLINT (the brace is in the next line)
+#endif
+ {
+ struct stat st;
+ LStatOrDie(path, &st);
+ if (S_ISDIR(st.st_mode)) {
+ return FILE_TYPE_DIRECTORY;
+ } else if (S_ISLNK(st.st_mode)) {
+ return FILE_TYPE_SYMLINK;
+ } else {
+ return FILE_TYPE_REGULAR;
+ }
+ }
+ }
+
+ void LStatOrDie(const std::string &path, struct stat *st) {
+ if (lstat(path.c_str(), st) != 0) {
+ err(2, "lstating file '%s'", path.c_str());
+ }
+ }
+
+ void StatOrDie(const std::string &path, struct stat *st) {
+ if (stat(path.c_str(), st) != 0) {
+ err(2, "stating file '%s'", path.c_str());
+ }
+ }
+
+ void ReadLinkOrDie(const std::string &path, std::string *output) {
+ char readlink_buffer[PATH_MAX];
+ int sz = readlink(path.c_str(), readlink_buffer, sizeof(readlink_buffer));
+ if (sz < 0) {
+ err(2, "reading symlink '%s'", path.c_str());
+ }
+    // readlink does not null-terminate the string it returns.
+ std::string(readlink_buffer, sz).swap(*output);
+ }
+
+ void EnsureDirReadAndWritePerms(const std::string &path) {
+ const int kMode = 0700;
+ struct stat st;
+ LStatOrDie(path, &st);
+ if ((st.st_mode & kMode) != kMode) {
+ int new_mode = st.st_mode | kMode;
+ if (chmod(path.c_str(), new_mode) != 0) {
+ err(2, "chmod '%s'", path.c_str());
+ }
+ }
+ }
+
+ bool DelTree(const std::string &path, FileType file_type) {
+ if (file_type != FILE_TYPE_DIRECTORY) {
+ if (unlink(path.c_str()) != 0) {
+ err(2, "unlinking '%s'", path.c_str());
+ return false;
+ }
+ return true;
+ }
+
+ EnsureDirReadAndWritePerms(path);
+
+ struct dirent *entry;
+ DIR *dh = opendir(path.c_str());
+ if (!dh) {
+ err(2, "opendir '%s'", path.c_str());
+ }
+ errno = 0;
+ while ((entry = readdir(dh)) != nullptr) {
+ if (!strcmp(entry->d_name, ".") || !strcmp(entry->d_name, "..")) continue;
+ const std::string entry_path = path + '/' + entry->d_name;
+ FileType entry_file_type = DentryToFileType(entry_path, entry);
+ DelTree(entry_path, entry_file_type);
+ errno = 0;
+ }
+ if (errno != 0) {
+ err(2, "readdir '%s'", path.c_str());
+ }
+ closedir(dh);
+ if (rmdir(path.c_str()) != 0) {
+ err(2, "rmdir '%s'", path.c_str());
+ }
+ return true;
+ }
+
+ private:
+ std::string output_base_;
+ std::string output_filename_;
+ std::string temp_filename_;
+
+ FileInfoMap manifest_;
+};
+
+int main(int argc, char **argv) {
+ argv0 = argv[0];
+
+ argc--; argv++;
+ bool allow_relative = false;
+ bool use_metadata = false;
+
+ while (argc >= 1) {
+ if (strcmp(argv[0], "--allow_relative") == 0) {
+ allow_relative = true;
+ argc--; argv++;
+ } else if (strcmp(argv[0], "--use_metadata") == 0) {
+ use_metadata = true;
+ argc--; argv++;
+ } else {
+ break;
+ }
+ }
+
+ if (argc != 2) {
+ fprintf(stderr, "usage: %s "
+ "[--allow_relative] [--use_metadata] "
+ "INPUT RUNFILES\n",
+ argv0);
+ return 1;
+ }
+
+ input_filename = argv[0];
+ output_base_dir = argv[1];
+
+ std::string manifest_file = input_filename;
+ if (input_filename[0] != '/') {
+ char cwd_buf[PATH_MAX];
+ if (getcwd(cwd_buf, sizeof(cwd_buf)) == nullptr) {
+ err(2, "getcwd failed");
+ }
+ manifest_file = std::string(cwd_buf) + '/' + manifest_file;
+ }
+
+ RunfilesCreator runfiles_creator(output_base_dir);
+ runfiles_creator.ReadManifest(manifest_file, allow_relative, use_metadata);
+ runfiles_creator.CreateRunfiles();
+
+ return 0;
+}
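To make the manifest format in the header comment concrete, here is a small Python sketch of the same parsing rules (illustrative only; the C++ above is the real tool):

```
def plan_runfiles(manifest_lines, use_metadata=False):
    """Return (relative_link, target_or_None) pairs; None means 'create
    an empty file'. With use_metadata, every second line is skipped."""
    plan = []
    for lineno, line in enumerate(manifest_lines, start=1):
        if use_metadata and lineno % 2 == 0:
            continue  # opaque metadata, used only for dependency checking
        fields = line.split()
        if not fields:
            continue
        link = fields[0]
        if link.startswith("/"):
            raise ValueError(f"line {lineno}: paths must not be absolute")
        if len(fields) > 2:
            raise ValueError(f"line {lineno}: name contains a space")
        plan.append((link, fields[1] if len(fields) > 1 else None))
    return plan

print(plan_runfiles(["ws/output/path /real/path", "ws/empty"]))
# [('ws/output/path', '/real/path'), ('ws/empty', None)]
```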
diff --git a/tools/buildinfo.sh b/tools/buildinfo.sh
index 536a381..c2e36df 100755
--- a/tools/buildinfo.sh
+++ b/tools/buildinfo.sh
@@ -30,9 +30,6 @@
echo "ro.build.host=$BUILD_HOSTNAME"
echo "ro.build.tags=$BUILD_VERSION_TAGS"
echo "ro.build.flavor=$TARGET_BUILD_FLAVOR"
-if [ -n "$BOARD_BUILD_SYSTEM_ROOT_IMAGE" ] ; then
- echo "ro.build.system_root_image=$BOARD_BUILD_SYSTEM_ROOT_IMAGE"
-fi
# These values are deprecated, use "ro.product.cpu.abilist"
# instead (see below).
diff --git a/tools/canoninja/go.mod b/tools/canoninja/go.mod
index c5a924e..9e668a5 100644
--- a/tools/canoninja/go.mod
+++ b/tools/canoninja/go.mod
@@ -1 +1,3 @@
module canoninja
+
+go 1.19
diff --git a/tools/check_elf_file.py b/tools/check_elf_file.py
index 045cb1d..51ec23b 100755
--- a/tools/check_elf_file.py
+++ b/tools/check_elf_file.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
#
# Copyright (C) 2019 The Android Open Source Project
#
@@ -72,9 +72,9 @@
def _get_os_name():
"""Get the host OS name."""
- if sys.platform == 'linux2':
+ if sys.platform.startswith('linux'):
return 'linux'
- if sys.platform == 'darwin':
+ if sys.platform.startswith('darwin'):
return 'darwin'
raise ValueError(sys.platform + ' is not supported')
@@ -196,11 +196,7 @@
def _read_llvm_readobj(cls, elf_file_path, header, llvm_readobj):
"""Run llvm-readobj and parse the output."""
cmd = [llvm_readobj, '--dynamic-table', '--dyn-symbols', elf_file_path]
- proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- out, _ = proc.communicate()
- rc = proc.returncode
- if rc != 0:
- raise subprocess.CalledProcessError(rc, cmd, out)
+ out = subprocess.check_output(cmd, text=True)
lines = out.splitlines()
return cls._parse_llvm_readobj(elf_file_path, header, lines)
@@ -411,8 +407,7 @@
    # Check whether all DT_NEEDED entries are specified.
for lib in self._file_under_test.dt_needed:
if lib not in specified_sonames:
- self._error('DT_NEEDED "{}" is not specified in shared_libs.'
- .format(lib.decode('utf-8')))
+ self._error(f'DT_NEEDED "{lib}" is not specified in shared_libs.')
missing_shared_libs = True
if missing_shared_libs:
@@ -467,7 +462,7 @@
"""Check whether all undefined symbols are resolved to a definition."""
all_elf_files = [self._file_under_test] + self._shared_libs
missing_symbols = []
- for sym, imported_vers in self._file_under_test.imported.iteritems():
+ for sym, imported_vers in self._file_under_test.imported.items():
for imported_ver in imported_vers:
lib = self._find_symbol_from_libs(all_elf_files, sym, imported_ver)
if not lib:
@@ -475,16 +470,14 @@
if missing_symbols:
for sym, ver in sorted(missing_symbols):
- sym = sym.decode('utf-8')
if ver:
- sym += '@' + ver.decode('utf-8')
- self._error('Unresolved symbol: {}'.format(sym))
+ sym += '@' + ver
+ self._error(f'Unresolved symbol: {sym}')
self._note()
self._note('Some dependencies might be changed, thus the symbol(s) '
'above cannot be resolved.')
- self._note('Please re-build the prebuilt file: "{}".'
- .format(self._file_path))
+ self._note(f'Please re-build the prebuilt file: "{self._file_path}".')
self._note()
self._note('If this is a new prebuilt file and it is designed to have '
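The check_elf_file.py hunks follow a recurring Python 2 to 3 cleanup: Popen plus communicate plus a hand-rolled returncode check collapses into check_output, and text=True yields str instead of bytes, which is why the later .decode('utf-8') calls could be dropped. A minimal standalone rendering of the pattern:

```
import subprocess

def run_lines(cmd):
    # check_output raises subprocess.CalledProcessError on a nonzero exit,
    # replacing the manual returncode check; text=True decodes stdout.
    return subprocess.check_output(cmd, text=True).splitlines()

print(run_lines(["echo", "hello"]))  # ['hello']
```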
diff --git a/tools/check_radio_versions.py b/tools/check_radio_versions.py
index ebe621f..d1d50e6 100755
--- a/tools/check_radio_versions.py
+++ b/tools/check_radio_versions.py
@@ -22,11 +22,18 @@
except ImportError:
from sha import sha as sha1
-if len(sys.argv) < 2:
+import argparse
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--board_info_txt", nargs="?", required=True)
+parser.add_argument("--board_info_check", nargs="*", required=True)
+args = parser.parse_args()
+
+if not args.board_info_txt:
sys.exit(0)
build_info = {}
-f = open(sys.argv[1])
+f = open(args.board_info_txt)
for line in f:
line = line.strip()
if line.startswith("require"):
@@ -36,7 +43,7 @@
bad = False
-for item in sys.argv[2:]:
+for item in args.board_info_check:
key, fn = item.split(":", 1)
values = build_info.get(key, None)
@@ -52,8 +59,8 @@
try:
f = open(fn + ".sha1")
except IOError:
- if not bad: print
- print "*** Error opening \"%s.sha1\"; can't verify %s" % (fn, key)
+ if not bad: print()
+ print("*** Error opening \"%s.sha1\"; can't verify %s" % (fn, key))
bad = True
continue
for line in f:
@@ -63,17 +70,17 @@
versions[h] = v
if digest not in versions:
- if not bad: print
- print "*** SHA-1 hash of \"%s\" doesn't appear in \"%s.sha1\"" % (fn, fn)
+ if not bad: print()
+ print("*** SHA-1 hash of \"%s\" doesn't appear in \"%s.sha1\"" % (fn, fn))
bad = True
continue
if versions[digest] not in values:
- if not bad: print
- print "*** \"%s\" is version %s; not any %s allowed by \"%s\"." % (
- fn, versions[digest], key, sys.argv[1])
+ if not bad: print()
+ print("*** \"%s\" is version %s; not any %s allowed by \"%s\"." % (
+ fn, versions[digest], key, args.board_info_txt))
bad = True
if bad:
- print
+ print()
sys.exit(1)
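Condensed, the verification this script performs under its new argparse interface looks roughly like the following (a sketch, assuming each `file.sha1` line is a `hash version` pair, as the lookup above implies):

```
import hashlib

def verify(board_info_txt, board_info_check):
    """board_info_check entries look like 'version-baseband:radio.img'."""
    required = {}
    with open(board_info_txt) as f:
        for line in f:
            if line.strip().startswith("require"):
                key, val = line.strip().split()[1].split("=", 1)
                required[key] = val.split("|")
    for item in board_info_check:
        key, fn = item.split(":", 1)
        with open(fn, "rb") as img:
            digest = hashlib.sha1(img.read()).hexdigest()
        with open(fn + ".sha1") as f:
            versions = dict(line.split() for line in f if line.strip())
        if versions.get(digest) not in required.get(key, []):
            raise SystemExit(f"{fn}: not an allowed {key}")
```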
diff --git a/tools/compare_fileslist.py b/tools/compare_fileslist.py
deleted file mode 100755
index 1f507d8..0000000
--- a/tools/compare_fileslist.py
+++ /dev/null
@@ -1,106 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright (C) 2009 The Android Open Source Project
-#
-# Licensed under the Apache License, Version 2.0 (the 'License');
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an 'AS IS' BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-import cgi, os, string, sys
-
-def IsDifferent(row):
- val = None
- for v in row:
- if v:
- if not val:
- val = v
- else:
- if val != v:
- return True
- return False
-
-def main(argv):
- inputs = argv[1:]
- data = {}
- index = 0
- for input in inputs:
- f = file(input, "r")
- lines = f.readlines()
- f.close()
- lines = map(string.split, lines)
- lines = map(lambda (x,y): (y,int(x)), lines)
- for fn,sz in lines:
- if not data.has_key(fn):
- data[fn] = {}
- data[fn][index] = sz
- index = index + 1
- rows = []
- for fn,sizes in data.iteritems():
- row = [fn]
- for i in range(0,index):
- if sizes.has_key(i):
- row.append(sizes[i])
- else:
- row.append(None)
- rows.append(row)
- rows = sorted(rows, key=lambda x: x[0])
- print """<html>
- <head>
- <style type="text/css">
- .fn, .sz, .z, .d {
- padding-left: 10px;
- padding-right: 10px;
- }
- .sz, .z, .d {
- text-align: right;
- }
- .fn {
- background-color: #ffffdd;
- }
- .sz {
- background-color: #ffffcc;
- }
- .z {
- background-color: #ffcccc;
- }
- .d {
- background-color: #99ccff;
- }
- </style>
- </head>
- <body>
- """
- print "<table>"
- print "<tr>"
- for input in inputs:
- combo = input.split(os.path.sep)[1]
- print " <td class='fn'>%s</td>" % cgi.escape(combo)
- print "</tr>"
-
- for row in rows:
- print "<tr>"
- for sz in row[1:]:
- if not sz:
- print " <td class='z'> </td>"
- elif IsDifferent(row[1:]):
- print " <td class='d'>%d</td>" % sz
- else:
- print " <td class='sz'>%d</td>" % sz
- print " <td class='fn'>%s</td>" % cgi.escape(row[0])
- print "</tr>"
- print "</table>"
- print "</body></html>"
-
-if __name__ == '__main__':
- main(sys.argv)
-
-
diff --git a/tools/compliance/Android.bp b/tools/compliance/Android.bp
index 225f3a5..ef5c760 100644
--- a/tools/compliance/Android.bp
+++ b/tools/compliance/Android.bp
@@ -18,6 +18,17 @@
}
blueprint_go_binary {
+ name: "compliance_checkmetadata",
+ srcs: ["cmd/checkmetadata/checkmetadata.go"],
+ deps: [
+ "compliance-module",
+ "projectmetadata-module",
+ "soong-response",
+ ],
+ testSrcs: ["cmd/checkmetadata/checkmetadata_test.go"],
+}
+
+blueprint_go_binary {
name: "compliance_checkshare",
srcs: ["cmd/checkshare/checkshare.go"],
deps: [
@@ -120,6 +131,22 @@
testSrcs: ["cmd/xmlnotice/xmlnotice_test.go"],
}
+blueprint_go_binary {
+ name: "compliance_sbom",
+ srcs: ["cmd/sbom/sbom.go"],
+ deps: [
+ "compliance-module",
+ "blueprint-deptools",
+ "soong-response",
+ "spdx-tools-spdxv2_2",
+ "spdx-tools-builder2v2",
+ "spdx-tools-spdxcommon",
+ "spdx-tools-spdx-json",
+ "spdx-tools-spdxlib",
+ ],
+ testSrcs: ["cmd/sbom/sbom_test.go"],
+}
+
bootstrap_go_package {
name: "compliance-module",
srcs: [
@@ -156,6 +183,8 @@
"test_util.go",
],
deps: [
+ "compliance-test-fs-module",
+ "projectmetadata-module",
"golang-protobuf-proto",
"golang-protobuf-encoding-prototext",
"license_metadata_proto",
diff --git a/tools/compliance/README.md b/tools/compliance/README.md
new file mode 100644
index 0000000..995d9ca
--- /dev/null
+++ b/tools/compliance/README.md
@@ -0,0 +1,101 @@
+# Compliance
+
+<!-- Much of this content also appears in doc.go.
+When changing this file, consider whether the change also applies to doc.go. -->
+
+Package compliance provides an approved means for reading, consuming, and
+analyzing license metadata graphs.
+
+Assuming the license metadata and dependencies are fully and accurately
+recorded in the build system, any discrepancy between the official policy for
+open source license compliance and this code is **a bug in this code.**
+
+## Naming
+
+All of the code that directly reflects a policy decision belongs in a file with
+a name beginning `policy_`. Changes to these files need to be authored or
+reviewed by someone in OSPO or whichever successor group governs policy.
+
+The files with names not beginning `policy_` describe data types and general,
+reusable algorithms.
+
+The source code for binary tools and utilities appears under the `cmd/`
+subdirectory. Other subdirectories contain reusable components that are not
+`compliance` per se.
+
+## Data Types
+
+A few principal types to understand are LicenseGraph, LicenseCondition, and
+ResolutionSet.
+
+### LicenseGraph
+
+A LicenseGraph is an immutable graph of the targets and dependencies reachable
+from a specific set of root targets. In general, the root targets will be the
+artifacts in a release or distribution. While conceptually immutable, parts of
+the graph may be loaded or evaluated lazily.
+
+Conceptually, the graph itself will always be a directed acyclic graph. One
+representation is a set of directed edges. Another is a set of nodes with
+directed edges to their dependencies.
+
+The edges have annotations, which can distinguish between build tools, runtime
+dependencies, and dependencies like 'contains' that make a derivative work.
+
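+A minimal sketch of one such representation, using hypothetical types for
+illustration only (not the package's actual API):
+
+```
+// TargetNode is a hypothetical node: a target plus its license conditions.
+type TargetNode struct {
+    Name       string
+    Conditions []string // e.g. "notice", "restricted"
+}
+
+// TargetEdge is a hypothetical annotated edge from a target to one of its
+// dependencies.
+type TargetEdge struct {
+    Target      *TargetNode
+    Dependency  *TargetNode
+    Annotations []string // e.g. "static", "dynamic", "toolchain"
+}
+
+// LicenseGraph pairs the root targets with the edges reachable from them.
+type LicenseGraph struct {
+    RootTargets []*TargetNode
+    Edges       []*TargetEdge
+}
+```
+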
+### LicenseCondition
+
+A LicenseCondition is an immutable tuple pairing a condition name with an
+originating target. E.g., per current policy, a static library licensed under
+an MIT license would pair a "notice" condition with the static library target,
+and a dynamic library licensed under the GPL would pair a "restricted"
+condition with the dynamic library target.
+
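+As a hypothetical sketch, reusing the `TargetNode` type from the sketch above:
+
+```
+// LicenseCondition is a hypothetical pairing of a condition name with the
+// target where the condition originates.
+type LicenseCondition struct {
+    Name   string      // e.g. "notice", "restricted"
+    Origin *TargetNode // originating target
+}
+```
+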
+### ResolutionSet
+
+A ResolutionSet is an immutable set of `AttachesTo`, `ActsOn`, `Resolves`
+tuples describing how license conditions apply to targets.
+
+`AttachesTo` is the trigger for acting. Distribution of the target invokes
+the policy.
+
+`ActsOn` is the target to share, give notice for, hide etc.
+
+`Resolves` is the set of conditions that the action resolves.
+
+For most condition types, `ActsOn` will be the target where the condition
+originated. For example, a notice condition policy means attribution or notice
+must be given for the target where the condition originates. Likewise, a
+proprietary condition policy means the privacy of the target where the
+condition originates must be respected; i.e., the thing acted on is the origin.
+
+Restricted conditions are different. The infectious nature of restricted often
+means sharing code that is not the target where the restricted condition
+originates. Linking an MIT library to a GPL library implies a policy to share
+the MIT library despite the MIT license having no source sharing requirement.
+
+In this case, one or more resolution tuples will have the MIT-licensed module
+in `ActsOn` and the restricted condition originating at the GPL library module
+in `Resolves`. These tuples will attach to every target that depends on the
+GPL library because shipping any of those targets triggers the policy to share
+the code.
+
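+As a hypothetical sketch, one resolution tuple might look like the following,
+again reusing the types from the sketches above:
+
+```
+// Resolution is a hypothetical AttachesTo/ActsOn/Resolves tuple.
+type Resolution struct {
+    AttachesTo *TargetNode        // distributing this target invokes the policy
+    ActsOn     *TargetNode        // the target to share, give notice for, etc.
+    Resolves   []LicenseCondition // the conditions the action resolves
+}
+```
+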
+## Processes
+
+### ReadLicenseGraph
+
+The principal means to ingest license metadata. Given the distribution targets,
+ReadLicenseGraph populates the LicenseGraph for those root targets.
+
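+A hedged usage sketch (the exact signature may differ; the tools under `cmd/`
+are the authoritative call sites):
+
+```
+// loadGraph reads the license graph rooted at the named metadata files,
+// assuming fs.FS, os, and this compliance package are imported.
+func loadGraph(rootFS fs.FS, files []string) (*compliance.LicenseGraph, error) {
+    return compliance.ReadLicenseGraph(rootFS, os.Stderr, files)
+}
+```
+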
+### NoticeIndex.IndexLicenseTexts
+
+IndexLicenseTexts reads, deduplicates, and caches license texts for notice
+files. It also reads and caches project metadata for deriving library names.
+
+The algorithm for deriving library names has not been dictated by OSPO policy,
+but reflects a pragmatic attempt to comply with Android policy regarding
+unreleased product names, proprietary partner names, etc.
+
+### projectmetadata.Index.MetadataForProjects
+
+MetadataForProjects reads, deduplicates, and caches the project METADATA files
+used for notice library names and for various properties appearing in SBOMs.
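+
+A short usage sketch mirroring `checkProjectMetadata` in `cmd/checkmetadata`:
+
+```
+// countProjectMetadata reports how many METADATA files were found for the
+// named projects, assuming fs.FS and the projectmetadata package are imported.
+func countProjectMetadata(rootFS fs.FS, projects ...string) (int, error) {
+    ix := projectmetadata.NewIndex(rootFS)
+    pms, err := ix.MetadataForProjects(projects...)
+    if err != nil {
+        return 0, err
+    }
+    return len(pms), nil
+}
+```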
diff --git a/tools/compliance/cmd/checkmetadata/checkmetadata.go b/tools/compliance/cmd/checkmetadata/checkmetadata.go
new file mode 100644
index 0000000..c6c84e4
--- /dev/null
+++ b/tools/compliance/cmd/checkmetadata/checkmetadata.go
@@ -0,0 +1,148 @@
+// Copyright 2022 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "bytes"
+ "flag"
+ "fmt"
+ "io"
+ "io/fs"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "android/soong/response"
+ "android/soong/tools/compliance"
+ "android/soong/tools/compliance/projectmetadata"
+)
+
+var (
+ failNoneRequested = fmt.Errorf("\nNo projects requested")
+)
+
+func main() {
+ var expandedArgs []string
+ for _, arg := range os.Args[1:] {
+ if strings.HasPrefix(arg, "@") {
+ f, err := os.Open(strings.TrimPrefix(arg, "@"))
+ if err != nil {
+ fmt.Fprintln(os.Stderr, err.Error())
+ os.Exit(1)
+ }
+
+ respArgs, err := response.ReadRspFile(f)
+ f.Close()
+ if err != nil {
+ fmt.Fprintln(os.Stderr, err.Error())
+ os.Exit(1)
+ }
+ expandedArgs = append(expandedArgs, respArgs...)
+ } else {
+ expandedArgs = append(expandedArgs, arg)
+ }
+ }
+
+ flags := flag.NewFlagSet("flags", flag.ExitOnError)
+
+ flags.Usage = func() {
+ fmt.Fprintf(os.Stderr, `Usage: %s {-o outfile} projectdir {projectdir...}
+
+Tries to open the METADATA.android or METADATA file in each projectdir,
+reporting any errors on stderr.
+
+Reports "FAIL" to stdout if any errors are found and exits with status 1.
+
+Otherwise, reports "PASS" and the number of project metadata files
+found, exiting with status 0.
+`, filepath.Base(os.Args[0]))
+ flags.PrintDefaults()
+ }
+
+ outputFile := flags.String("o", "-", "Where to write the output. (default stdout)")
+
+ flags.Parse(expandedArgs)
+
+ // Must specify at least one project directory.
+ if flags.NArg() == 0 {
+ flags.Usage()
+ os.Exit(2)
+ }
+
+ if len(*outputFile) == 0 {
+ flags.Usage()
+ fmt.Fprintf(os.Stderr, "must specify file for -o; use - for stdout\n")
+ os.Exit(2)
+ } else {
+ dir, err := filepath.Abs(filepath.Dir(*outputFile))
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "cannot determine path to %q: %s\n", *outputFile, err)
+ os.Exit(1)
+ }
+ fi, err := os.Stat(dir)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "cannot read directory %q of %q: %s\n", dir, *outputFile, err)
+ os.Exit(1)
+ }
+ if !fi.IsDir() {
+ fmt.Fprintf(os.Stderr, "parent %q of %q is not a directory\n", dir, *outputFile)
+ os.Exit(1)
+ }
+ }
+
+ var ofile io.Writer
+ ofile = os.Stdout
+ var obuf *bytes.Buffer
+ if *outputFile != "-" {
+ obuf = &bytes.Buffer{}
+ ofile = obuf
+ }
+
+ err := checkProjectMetadata(ofile, os.Stderr, compliance.FS, flags.Args()...)
+ if err != nil {
+ if err == failNoneRequested {
+ flags.Usage()
+ }
+ fmt.Fprintf(os.Stderr, "%s\n", err.Error())
+ fmt.Fprintln(ofile, "FAIL")
+ os.Exit(1)
+ }
+ if *outputFile != "-" {
+ err := os.WriteFile(*outputFile, obuf.Bytes(), 0666)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "could not write output to %q from %q: %s\n", *outputFile, os.Getenv("PWD"), err)
+ os.Exit(1)
+ }
+ }
+ os.Exit(0)
+}
+
+// checkProjectMetadata implements the checkmetadata utility.
+func checkProjectMetadata(stdout, stderr io.Writer, rootFS fs.FS, projects ...string) error {
+
+ if len(projects) < 1 {
+ return failNoneRequested
+ }
+
+ // Read the project metadata files from `projects`
+ ix := projectmetadata.NewIndex(rootFS)
+ pms, err := ix.MetadataForProjects(projects...)
+ if err != nil {
+ return fmt.Errorf("Unable to read project metadata file(s) %q from %q: %w\n", projects, os.Getenv("PWD"), err)
+ }
+
+ fmt.Fprintf(stdout, "PASS -- parsed %d project metadata files for %d projects\n", len(pms), len(projects))
+ return nil
+}
diff --git a/tools/compliance/cmd/checkmetadata/checkmetadata_test.go b/tools/compliance/cmd/checkmetadata/checkmetadata_test.go
new file mode 100644
index 0000000..cf2090b
--- /dev/null
+++ b/tools/compliance/cmd/checkmetadata/checkmetadata_test.go
@@ -0,0 +1,191 @@
+// Copyright 2022 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "os"
+ "strings"
+ "testing"
+
+ "android/soong/tools/compliance"
+)
+
+func TestMain(m *testing.M) {
+ // Change into the parent directory before running the tests
+ // so they can find the testdata directory.
+ if err := os.Chdir(".."); err != nil {
+ fmt.Printf("failed to change to testdata directory: %s\n", err)
+ os.Exit(1)
+ }
+ os.Exit(m.Run())
+}
+
+func Test(t *testing.T) {
+ tests := []struct {
+ name string
+ projects []string
+ expectedStdout string
+ }{
+ {
+ name: "1p",
+ projects: []string{"firstparty"},
+ expectedStdout: "PASS -- parsed 1 project metadata files for 1 projects",
+ },
+ {
+ name: "notice",
+ projects: []string{"notice"},
+ expectedStdout: "PASS -- parsed 1 project metadata files for 1 projects",
+ },
+ {
+ name: "1p+notice",
+ projects: []string{"firstparty", "notice"},
+ expectedStdout: "PASS -- parsed 2 project metadata files for 2 projects",
+ },
+ {
+ name: "reciprocal",
+ projects: []string{"reciprocal"},
+ expectedStdout: "PASS -- parsed 1 project metadata files for 1 projects",
+ },
+ {
+ name: "1p+notice+reciprocal",
+ projects: []string{"firstparty", "notice", "reciprocal"},
+ expectedStdout: "PASS -- parsed 3 project metadata files for 3 projects",
+ },
+ {
+ name: "restricted",
+ projects: []string{"restricted"},
+ expectedStdout: "PASS -- parsed 1 project metadata files for 1 projects",
+ },
+ {
+ name: "1p+notice+reciprocal+restricted",
+ projects: []string{
+ "firstparty",
+ "notice",
+ "reciprocal",
+ "restricted",
+ },
+ expectedStdout: "PASS -- parsed 4 project metadata files for 4 projects",
+ },
+ {
+ name: "proprietary",
+ projects: []string{"proprietary"},
+ expectedStdout: "PASS -- parsed 1 project metadata files for 1 projects",
+ },
+ {
+ name: "1p+notice+reciprocal+restricted+proprietary",
+ projects: []string{
+ "firstparty",
+ "notice",
+ "reciprocal",
+ "restricted",
+ "proprietary",
+ },
+ expectedStdout: "PASS -- parsed 5 project metadata files for 5 projects",
+ },
+ {
+ name: "missing1",
+ projects: []string{"regressgpl1"},
+ expectedStdout: "PASS -- parsed 0 project metadata files for 1 projects",
+ },
+ {
+ name: "1p+notice+reciprocal+restricted+proprietary+missing1",
+ projects: []string{
+ "firstparty",
+ "notice",
+ "reciprocal",
+ "restricted",
+ "proprietary",
+ "regressgpl1",
+ },
+ expectedStdout: "PASS -- parsed 5 project metadata files for 6 projects",
+ },
+ {
+ name: "missing2",
+ projects: []string{"regressgpl2"},
+ expectedStdout: "PASS -- parsed 0 project metadata files for 1 projects",
+ },
+ {
+ name: "1p+notice+reciprocal+restricted+proprietary+missing1+missing2",
+ projects: []string{
+ "firstparty",
+ "notice",
+ "reciprocal",
+ "restricted",
+ "proprietary",
+ "regressgpl1",
+ "regressgpl2",
+ },
+ expectedStdout: "PASS -- parsed 5 project metadata files for 7 projects",
+ },
+ {
+ name: "missing2+1p+notice+reciprocal+restricted+proprietary+missing1",
+ projects: []string{
+ "regressgpl2",
+ "firstparty",
+ "notice",
+ "reciprocal",
+ "restricted",
+ "proprietary",
+ "regressgpl1",
+ },
+ expectedStdout: "PASS -- parsed 5 project metadata files for 7 projects",
+ },
+ {
+ name: "missing2+1p+notice+missing1+reciprocal+restricted+proprietary",
+ projects: []string{
+ "regressgpl2",
+ "firstparty",
+ "notice",
+ "regressgpl1",
+ "reciprocal",
+ "restricted",
+ "proprietary",
+ },
+ expectedStdout: "PASS -- parsed 5 project metadata files for 7 projects",
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ stdout := &bytes.Buffer{}
+ stderr := &bytes.Buffer{}
+
+ projects := make([]string, 0, len(tt.projects))
+ for _, project := range tt.projects {
+ projects = append(projects, "testdata/"+project)
+ }
+ err := checkProjectMetadata(stdout, stderr, compliance.GetFS(""), projects...)
+ if err != nil {
+ t.Fatalf("checkmetadata: error = %v, stderr = %v", err, stderr)
+ return
+ }
+ var actualStdout string
+ for _, s := range strings.Split(stdout.String(), "\n") {
+ ts := strings.TrimLeft(s, " \t")
+ if len(ts) < 1 {
+ continue
+ }
+ if len(actualStdout) > 0 {
+ t.Errorf("checkmetadata: unexpected multiple output lines %q, want %q", actualStdout+"\n"+ts, tt.expectedStdout)
+ }
+ actualStdout = ts
+ }
+ if actualStdout != tt.expectedStdout {
+ t.Errorf("checkmetadata: unexpected stdout %q, want %q", actualStdout, tt.expectedStdout)
+ }
+ })
+ }
+}
diff --git a/tools/compliance/cmd/checkshare/checkshare_test.go b/tools/compliance/cmd/checkshare/checkshare_test.go
index fdcab29..079691c 100644
--- a/tools/compliance/cmd/checkshare/checkshare_test.go
+++ b/tools/compliance/cmd/checkshare/checkshare_test.go
@@ -241,6 +241,59 @@
roots: []string{"lib/libd.so.meta_lic"},
expectedStdout: "PASS",
},
+ {
+ condition: "regressconcur",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
+ expectedStdout: "FAIL",
+ expectedOutcomes: outcomeList{
+ &outcome{
+ target: "testdata/regressconcur/bin/bin1.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ &outcome{
+ target: "testdata/regressconcur/bin/bin2.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ &outcome{
+ target: "testdata/regressconcur/bin/bin3.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ &outcome{
+ target: "testdata/regressconcur/bin/bin4.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ &outcome{
+ target: "testdata/regressconcur/bin/bin5.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ &outcome{
+ target: "testdata/regressconcur/bin/bin6.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ &outcome{
+ target: "testdata/regressconcur/bin/bin7.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ &outcome{
+ target: "testdata/regressconcur/bin/bin8.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ &outcome{
+ target: "testdata/regressconcur/bin/bin9.meta_lic",
+ privacyCondition: "proprietary",
+ shareCondition: "restricted",
+ },
+ },
+ },
}
for _, tt := range tests {
t.Run(tt.condition+" "+tt.name, func(t *testing.T) {
diff --git a/tools/compliance/cmd/dumpgraph/dumpgraph_test.go b/tools/compliance/cmd/dumpgraph/dumpgraph_test.go
index d1deed3..e2d0db0 100644
--- a/tools/compliance/cmd/dumpgraph/dumpgraph_test.go
+++ b/tools/compliance/cmd/dumpgraph/dumpgraph_test.go
@@ -341,13 +341,13 @@
roots: []string{"highest.apex.meta_lic"},
ctx: context{stripPrefix: []string{"testdata/restricted/"}, labelConditions: true},
expectedOut: []string{
- "bin/bin1.meta_lic:notice lib/liba.so.meta_lic:restricted_allows_dynamic_linking static",
+ "bin/bin1.meta_lic:notice lib/liba.so.meta_lic:restricted_if_statically_linked static",
"bin/bin1.meta_lic:notice lib/libc.a.meta_lic:reciprocal static",
"bin/bin2.meta_lic:notice lib/libb.so.meta_lic:restricted dynamic",
"bin/bin2.meta_lic:notice lib/libd.so.meta_lic:notice dynamic",
"highest.apex.meta_lic:notice bin/bin1.meta_lic:notice static",
"highest.apex.meta_lic:notice bin/bin2.meta_lic:notice static",
- "highest.apex.meta_lic:notice lib/liba.so.meta_lic:restricted_allows_dynamic_linking static",
+ "highest.apex.meta_lic:notice lib/liba.so.meta_lic:restricted_if_statically_linked static",
"highest.apex.meta_lic:notice lib/libb.so.meta_lic:restricted static",
},
},
@@ -1011,7 +1011,7 @@
matchTarget("bin/bin1.meta_lic", "notice"),
matchTarget("bin/bin2.meta_lic", "notice"),
matchTarget("highest.apex.meta_lic", "notice"),
- matchTarget("lib/liba.so.meta_lic", "restricted_allows_dynamic_linking"),
+ matchTarget("lib/liba.so.meta_lic", "restricted_if_statically_linked"),
matchTarget("lib/libb.so.meta_lic", "restricted"),
matchTarget("lib/libc.a.meta_lic", "reciprocal"),
matchTarget("lib/libd.so.meta_lic", "notice"),
diff --git a/tools/compliance/cmd/dumpresolutions/dumpresolutions_test.go b/tools/compliance/cmd/dumpresolutions/dumpresolutions_test.go
index 63fd157..227942b 100644
--- a/tools/compliance/cmd/dumpresolutions/dumpresolutions_test.go
+++ b/tools/compliance/cmd/dumpresolutions/dumpresolutions_test.go
@@ -529,18 +529,18 @@
name: "apex",
roots: []string{"highest.apex.meta_lic"},
expectedOut: []string{
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_allows_dynamic_linking",
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_if_statically_linked",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
"testdata/restricted/bin/bin2.meta_lic testdata/restricted/bin/bin2.meta_lic notice:restricted",
"testdata/restricted/bin/bin2.meta_lic testdata/restricted/lib/libb.so.meta_lic restricted",
- "testdata/restricted/highest.apex.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_allows_dynamic_linking",
+ "testdata/restricted/highest.apex.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_if_statically_linked",
"testdata/restricted/highest.apex.meta_lic testdata/restricted/bin/bin2.meta_lic notice:restricted",
- "testdata/restricted/highest.apex.meta_lic testdata/restricted/highest.apex.meta_lic notice:restricted:restricted_allows_dynamic_linking",
- "testdata/restricted/highest.apex.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "testdata/restricted/highest.apex.meta_lic testdata/restricted/highest.apex.meta_lic notice:restricted:restricted_if_statically_linked",
+ "testdata/restricted/highest.apex.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
"testdata/restricted/highest.apex.meta_lic testdata/restricted/lib/libb.so.meta_lic restricted",
- "testdata/restricted/highest.apex.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
- "testdata/restricted/lib/liba.so.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "testdata/restricted/highest.apex.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
+ "testdata/restricted/lib/liba.so.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
"testdata/restricted/lib/libb.so.meta_lic testdata/restricted/lib/libb.so.meta_lic restricted",
},
},
@@ -550,18 +550,18 @@
roots: []string{"highest.apex.meta_lic"},
ctx: context{stripPrefix: []string{"testdata/restricted/"}},
expectedOut: []string{
- "bin/bin1.meta_lic bin/bin1.meta_lic notice:restricted_allows_dynamic_linking",
- "bin/bin1.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
- "bin/bin1.meta_lic lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
+ "bin/bin1.meta_lic bin/bin1.meta_lic notice:restricted_if_statically_linked",
+ "bin/bin1.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
+ "bin/bin1.meta_lic lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
"bin/bin2.meta_lic bin/bin2.meta_lic notice:restricted",
"bin/bin2.meta_lic lib/libb.so.meta_lic restricted",
- "highest.apex.meta_lic bin/bin1.meta_lic notice:restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic bin/bin1.meta_lic notice:restricted_if_statically_linked",
"highest.apex.meta_lic bin/bin2.meta_lic notice:restricted",
- "highest.apex.meta_lic highest.apex.meta_lic notice:restricted:restricted_allows_dynamic_linking",
- "highest.apex.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic highest.apex.meta_lic notice:restricted:restricted_if_statically_linked",
+ "highest.apex.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
"highest.apex.meta_lic lib/libb.so.meta_lic restricted",
- "highest.apex.meta_lic lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
- "lib/liba.so.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
+ "lib/liba.so.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
"lib/libb.so.meta_lic lib/libb.so.meta_lic restricted",
},
},
@@ -590,18 +590,18 @@
stripPrefix: []string{"testdata/restricted/"},
},
expectedOut: []string{
- "bin/bin1.meta_lic bin/bin1.meta_lic restricted_allows_dynamic_linking",
- "bin/bin1.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
- "bin/bin1.meta_lic lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
+ "bin/bin1.meta_lic bin/bin1.meta_lic restricted_if_statically_linked",
+ "bin/bin1.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
+ "bin/bin1.meta_lic lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
"bin/bin2.meta_lic bin/bin2.meta_lic restricted",
"bin/bin2.meta_lic lib/libb.so.meta_lic restricted",
- "highest.apex.meta_lic bin/bin1.meta_lic restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic bin/bin1.meta_lic restricted_if_statically_linked",
"highest.apex.meta_lic bin/bin2.meta_lic restricted",
- "highest.apex.meta_lic highest.apex.meta_lic restricted:restricted_allows_dynamic_linking",
- "highest.apex.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic highest.apex.meta_lic restricted:restricted_if_statically_linked",
+ "highest.apex.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
"highest.apex.meta_lic lib/libb.so.meta_lic restricted",
- "highest.apex.meta_lic lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
- "lib/liba.so.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
+ "lib/liba.so.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
"lib/libb.so.meta_lic lib/libb.so.meta_lic restricted",
},
},
@@ -624,18 +624,18 @@
stripPrefix: []string{"testdata/restricted/"},
},
expectedOut: []string{
- "bin/bin1.meta_lic bin/bin1.meta_lic restricted_allows_dynamic_linking",
- "bin/bin1.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
- "bin/bin1.meta_lic lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
+ "bin/bin1.meta_lic bin/bin1.meta_lic restricted_if_statically_linked",
+ "bin/bin1.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
+ "bin/bin1.meta_lic lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
"bin/bin2.meta_lic bin/bin2.meta_lic restricted",
"bin/bin2.meta_lic lib/libb.so.meta_lic restricted",
- "highest.apex.meta_lic bin/bin1.meta_lic restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic bin/bin1.meta_lic restricted_if_statically_linked",
"highest.apex.meta_lic bin/bin2.meta_lic restricted",
- "highest.apex.meta_lic highest.apex.meta_lic restricted:restricted_allows_dynamic_linking",
- "highest.apex.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic highest.apex.meta_lic restricted:restricted_if_statically_linked",
+ "highest.apex.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
"highest.apex.meta_lic lib/libb.so.meta_lic restricted",
- "highest.apex.meta_lic lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
- "lib/liba.so.meta_lic lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
+ "lib/liba.so.meta_lic lib/liba.so.meta_lic restricted_if_statically_linked",
"lib/libb.so.meta_lic lib/libb.so.meta_lic restricted",
},
},
@@ -645,18 +645,18 @@
roots: []string{"highest.apex.meta_lic"},
ctx: context{stripPrefix: []string{"testdata/restricted/"}, labelConditions: true},
expectedOut: []string{
- "bin/bin1.meta_lic:notice bin/bin1.meta_lic:notice notice:restricted_allows_dynamic_linking",
- "bin/bin1.meta_lic:notice lib/liba.so.meta_lic:restricted_allows_dynamic_linking restricted_allows_dynamic_linking",
- "bin/bin1.meta_lic:notice lib/libc.a.meta_lic:reciprocal reciprocal:restricted_allows_dynamic_linking",
+ "bin/bin1.meta_lic:notice bin/bin1.meta_lic:notice notice:restricted_if_statically_linked",
+ "bin/bin1.meta_lic:notice lib/liba.so.meta_lic:restricted_if_statically_linked restricted_if_statically_linked",
+ "bin/bin1.meta_lic:notice lib/libc.a.meta_lic:reciprocal reciprocal:restricted_if_statically_linked",
"bin/bin2.meta_lic:notice bin/bin2.meta_lic:notice notice:restricted",
"bin/bin2.meta_lic:notice lib/libb.so.meta_lic:restricted restricted",
- "highest.apex.meta_lic:notice bin/bin1.meta_lic:notice notice:restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic:notice bin/bin1.meta_lic:notice notice:restricted_if_statically_linked",
"highest.apex.meta_lic:notice bin/bin2.meta_lic:notice notice:restricted",
- "highest.apex.meta_lic:notice highest.apex.meta_lic:notice notice:restricted:restricted_allows_dynamic_linking",
- "highest.apex.meta_lic:notice lib/liba.so.meta_lic:restricted_allows_dynamic_linking restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic:notice highest.apex.meta_lic:notice notice:restricted:restricted_if_statically_linked",
+ "highest.apex.meta_lic:notice lib/liba.so.meta_lic:restricted_if_statically_linked restricted_if_statically_linked",
"highest.apex.meta_lic:notice lib/libb.so.meta_lic:restricted restricted",
- "highest.apex.meta_lic:notice lib/libc.a.meta_lic:reciprocal reciprocal:restricted_allows_dynamic_linking",
- "lib/liba.so.meta_lic:restricted_allows_dynamic_linking lib/liba.so.meta_lic:restricted_allows_dynamic_linking restricted_allows_dynamic_linking",
+ "highest.apex.meta_lic:notice lib/libc.a.meta_lic:reciprocal reciprocal:restricted_if_statically_linked",
+ "lib/liba.so.meta_lic:restricted_if_statically_linked lib/liba.so.meta_lic:restricted_if_statically_linked restricted_if_statically_linked",
"lib/libb.so.meta_lic:restricted lib/libb.so.meta_lic:restricted restricted",
},
},
@@ -665,18 +665,18 @@
name: "container",
roots: []string{"container.zip.meta_lic"},
expectedOut: []string{
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_allows_dynamic_linking",
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_if_statically_linked",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
"testdata/restricted/bin/bin2.meta_lic testdata/restricted/bin/bin2.meta_lic notice:restricted",
"testdata/restricted/bin/bin2.meta_lic testdata/restricted/lib/libb.so.meta_lic restricted",
- "testdata/restricted/container.zip.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_allows_dynamic_linking",
+ "testdata/restricted/container.zip.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_if_statically_linked",
"testdata/restricted/container.zip.meta_lic testdata/restricted/bin/bin2.meta_lic notice:restricted",
- "testdata/restricted/container.zip.meta_lic testdata/restricted/container.zip.meta_lic notice:restricted:restricted_allows_dynamic_linking",
- "testdata/restricted/container.zip.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "testdata/restricted/container.zip.meta_lic testdata/restricted/container.zip.meta_lic notice:restricted:restricted_if_statically_linked",
+ "testdata/restricted/container.zip.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
"testdata/restricted/container.zip.meta_lic testdata/restricted/lib/libb.so.meta_lic restricted",
- "testdata/restricted/container.zip.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
- "testdata/restricted/lib/liba.so.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "testdata/restricted/container.zip.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
+ "testdata/restricted/lib/liba.so.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
"testdata/restricted/lib/libb.so.meta_lic testdata/restricted/lib/libb.so.meta_lic restricted",
},
},
@@ -685,8 +685,8 @@
name: "application",
roots: []string{"application.meta_lic"},
expectedOut: []string{
- "testdata/restricted/application.meta_lic testdata/restricted/application.meta_lic notice:restricted:restricted_allows_dynamic_linking",
- "testdata/restricted/application.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted:restricted_allows_dynamic_linking",
+ "testdata/restricted/application.meta_lic testdata/restricted/application.meta_lic notice:restricted:restricted_if_statically_linked",
+ "testdata/restricted/application.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted:restricted_if_statically_linked",
},
},
{
@@ -694,9 +694,9 @@
name: "binary",
roots: []string{"bin/bin1.meta_lic"},
expectedOut: []string{
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_allows_dynamic_linking",
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
- "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_allows_dynamic_linking",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/bin/bin1.meta_lic notice:restricted_if_statically_linked",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
+ "testdata/restricted/bin/bin1.meta_lic testdata/restricted/lib/libc.a.meta_lic reciprocal:restricted_if_statically_linked",
},
},
{
@@ -2235,17 +2235,17 @@
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/bin/bin2.meta_lic",
"testdata/restricted/bin/bin2.meta_lic",
@@ -2258,7 +2258,7 @@
matchResolution(
"testdata/restricted/highest.apex.meta_lic",
"testdata/restricted/bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"testdata/restricted/highest.apex.meta_lic",
@@ -2269,12 +2269,12 @@
"testdata/restricted/highest.apex.meta_lic",
"testdata/restricted/highest.apex.meta_lic",
"restricted",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"testdata/restricted/highest.apex.meta_lic",
"testdata/restricted/lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/highest.apex.meta_lic",
"testdata/restricted/lib/libb.so.meta_lic",
@@ -2283,11 +2283,11 @@
"testdata/restricted/highest.apex.meta_lic",
"testdata/restricted/lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/lib/liba.so.meta_lic",
"testdata/restricted/lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/lib/libb.so.meta_lic",
"testdata/restricted/lib/libb.so.meta_lic",
@@ -2309,17 +2309,17 @@
matchResolution(
"bin/bin1.meta_lic",
"bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"bin/bin1.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin1.meta_lic",
"lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin2.meta_lic",
"bin/bin2.meta_lic",
@@ -2332,7 +2332,7 @@
matchResolution(
"highest.apex.meta_lic",
"bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"highest.apex.meta_lic",
@@ -2343,12 +2343,12 @@
"highest.apex.meta_lic",
"highest.apex.meta_lic",
"restricted",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"highest.apex.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"highest.apex.meta_lic",
"lib/libb.so.meta_lic",
@@ -2357,11 +2357,11 @@
"highest.apex.meta_lic",
"lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"lib/liba.so.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"lib/libb.so.meta_lic",
"lib/libb.so.meta_lic",
@@ -2420,16 +2420,16 @@
matchResolution(
"bin/bin1.meta_lic",
"bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin1.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin1.meta_lic",
"lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin2.meta_lic",
"bin/bin2.meta_lic",
@@ -2441,7 +2441,7 @@
matchResolution(
"highest.apex.meta_lic",
"bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"highest.apex.meta_lic",
"bin/bin2.meta_lic",
@@ -2450,11 +2450,11 @@
"highest.apex.meta_lic",
"highest.apex.meta_lic",
"restricted",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"highest.apex.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"highest.apex.meta_lic",
"lib/libb.so.meta_lic",
@@ -2463,11 +2463,11 @@
"highest.apex.meta_lic",
"lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"lib/liba.so.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"lib/libb.so.meta_lic",
"lib/libb.so.meta_lic",
@@ -2502,16 +2502,16 @@
matchResolution(
"bin/bin1.meta_lic",
"bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin1.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin1.meta_lic",
"lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin2.meta_lic",
"bin/bin2.meta_lic",
@@ -2523,7 +2523,7 @@
matchResolution(
"highest.apex.meta_lic",
"bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"highest.apex.meta_lic",
"bin/bin2.meta_lic",
@@ -2532,11 +2532,11 @@
"highest.apex.meta_lic",
"highest.apex.meta_lic",
"restricted",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"highest.apex.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"highest.apex.meta_lic",
"lib/libb.so.meta_lic",
@@ -2545,11 +2545,11 @@
"highest.apex.meta_lic",
"lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"lib/liba.so.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"lib/libb.so.meta_lic",
"lib/libb.so.meta_lic",
@@ -2563,7 +2563,7 @@
ctx: context{stripPrefix: []string{"testdata/restricted/"}, labelConditions: true},
expectedOut: []getMatcher{
matchTarget("bin/bin1.meta_lic", "notice"),
- matchTarget("lib/liba.so.meta_lic", "restricted_allows_dynamic_linking"),
+ matchTarget("lib/liba.so.meta_lic", "restricted_if_statically_linked"),
matchTarget("lib/libc.a.meta_lic", "reciprocal"),
matchTarget("bin/bin2.meta_lic", "notice"),
matchTarget("lib/libb.so.meta_lic", "restricted"),
@@ -2571,17 +2571,17 @@
matchResolution(
"bin/bin1.meta_lic",
"bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"bin/bin1.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin1.meta_lic",
"lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"bin/bin2.meta_lic",
"bin/bin2.meta_lic",
@@ -2594,7 +2594,7 @@
matchResolution(
"highest.apex.meta_lic",
"bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"highest.apex.meta_lic",
@@ -2605,12 +2605,12 @@
"highest.apex.meta_lic",
"highest.apex.meta_lic",
"restricted",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"highest.apex.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"highest.apex.meta_lic",
"lib/libb.so.meta_lic",
@@ -2619,11 +2619,11 @@
"highest.apex.meta_lic",
"lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"lib/liba.so.meta_lic",
"lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"lib/libb.so.meta_lic",
"lib/libb.so.meta_lic",
@@ -2644,17 +2644,17 @@
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/bin/bin2.meta_lic",
"testdata/restricted/bin/bin2.meta_lic",
@@ -2667,7 +2667,7 @@
matchResolution(
"testdata/restricted/container.zip.meta_lic",
"testdata/restricted/bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"testdata/restricted/container.zip.meta_lic",
@@ -2678,12 +2678,12 @@
"testdata/restricted/container.zip.meta_lic",
"testdata/restricted/container.zip.meta_lic",
"restricted",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"testdata/restricted/container.zip.meta_lic",
"testdata/restricted/lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/container.zip.meta_lic",
"testdata/restricted/lib/libb.so.meta_lic",
@@ -2692,11 +2692,11 @@
"testdata/restricted/container.zip.meta_lic",
"testdata/restricted/lib/libc.a.meta_lic",
"reciprocal",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/lib/liba.so.meta_lic",
"testdata/restricted/lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/lib/libb.so.meta_lic",
"testdata/restricted/lib/libb.so.meta_lic",
@@ -2714,12 +2714,12 @@
"testdata/restricted/application.meta_lic",
"testdata/restricted/application.meta_lic",
"restricted",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"testdata/restricted/application.meta_lic",
"testdata/restricted/lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"restricted"),
},
},
@@ -2734,16 +2734,16 @@
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/bin/bin1.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"notice"),
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/lib/liba.so.meta_lic",
- "restricted_allows_dynamic_linking"),
+ "restricted_if_statically_linked"),
matchResolution(
"testdata/restricted/bin/bin1.meta_lic",
"testdata/restricted/lib/libc.a.meta_lic",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"reciprocal"),
},
},
diff --git a/tools/compliance/cmd/htmlnotice/htmlnotice.go b/tools/compliance/cmd/htmlnotice/htmlnotice.go
index 1a49610..78371ee 100644
--- a/tools/compliance/cmd/htmlnotice/htmlnotice.go
+++ b/tools/compliance/cmd/htmlnotice/htmlnotice.go
@@ -24,6 +24,7 @@
"io/fs"
"os"
"path/filepath"
+ "sort"
"strings"
"android/soong/response"
@@ -275,7 +276,8 @@
}
fmt.Fprintln(ctx.stdout, "</body></html>")
- *ctx.deps = ni.InputNoticeFiles()
+ *ctx.deps = ni.InputFiles()
+ sort.Strings(*ctx.deps)
return nil
}
diff --git a/tools/compliance/cmd/htmlnotice/htmlnotice_test.go b/tools/compliance/cmd/htmlnotice/htmlnotice_test.go
index b927018..8dc1197 100644
--- a/tools/compliance/cmd/htmlnotice/htmlnotice_test.go
+++ b/tools/compliance/cmd/htmlnotice/htmlnotice_test.go
@@ -78,7 +78,16 @@
usedBy{"highest.apex/lib/libb.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/highest.apex.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -106,7 +115,16 @@
usedBy{"highest.apex/lib/libb.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/highest.apex.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -124,7 +142,16 @@
usedBy{"highest.apex/lib/libb.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/highest.apex.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -154,7 +181,16 @@
usedBy{"highest.apex/lib/libb.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/highest.apex.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -170,7 +206,16 @@
usedBy{"container.zip/libb.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/container.zip.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -182,7 +227,13 @@
usedBy{"application"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/application.meta_lic",
+ "testdata/firstparty/bin/bin3.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -194,7 +245,12 @@
usedBy{"bin/bin1"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -206,7 +262,10 @@
usedBy{"lib/libd.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "notice",
@@ -231,6 +290,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/highest.apex.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
},
},
{
@@ -256,6 +322,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/container.zip.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
},
},
{
@@ -275,6 +348,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/application.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
},
},
{
@@ -296,6 +373,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
},
},
{
@@ -308,7 +388,10 @@
usedBy{"lib/libd.so"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
},
{
condition: "reciprocal",
@@ -333,6 +416,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/bin/bin2.meta_lic",
+ "testdata/reciprocal/highest.apex.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
},
},
{
@@ -358,6 +448,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/bin/bin2.meta_lic",
+ "testdata/reciprocal/container.zip.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
},
},
{
@@ -377,6 +474,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/application.meta_lic",
+ "testdata/reciprocal/bin/bin3.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
},
},
{
@@ -398,6 +499,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
},
},
{
@@ -410,7 +514,10 @@
usedBy{"lib/libd.so"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
+ },
},
{
condition: "restricted",
@@ -440,6 +547,13 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/bin/bin2.meta_lic",
+ "testdata/restricted/highest.apex.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ "testdata/restricted/lib/libd.so.meta_lic",
},
},
{
@@ -470,6 +584,13 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/bin/bin2.meta_lic",
+ "testdata/restricted/container.zip.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ "testdata/restricted/lib/libd.so.meta_lic",
},
},
{
@@ -489,6 +610,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/application.meta_lic",
+ "testdata/restricted/bin/bin3.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
},
},
{
@@ -513,6 +638,9 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
},
},
{
@@ -525,7 +653,10 @@
usedBy{"lib/libd.so"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/restricted/lib/libd.so.meta_lic",
+ },
},
{
condition: "proprietary",
@@ -555,6 +686,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/bin/bin2.meta_lic",
+ "testdata/proprietary/highest.apex.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ "testdata/proprietary/lib/libd.so.meta_lic",
"testdata/restricted/RESTRICTED_LICENSE",
},
},
@@ -586,6 +724,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/bin/bin2.meta_lic",
+ "testdata/proprietary/container.zip.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ "testdata/proprietary/lib/libd.so.meta_lic",
"testdata/restricted/RESTRICTED_LICENSE",
},
},
@@ -606,6 +751,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/application.meta_lic",
+ "testdata/proprietary/bin/bin3.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
},
},
{
@@ -627,6 +776,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
},
},
{
@@ -639,7 +791,10 @@
usedBy{"lib/libd.so"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/proprietary/lib/libd.so.meta_lic",
+ },
},
}
for _, tt := range tests {
diff --git a/tools/compliance/cmd/listshare/listshare.go b/tools/compliance/cmd/listshare/listshare.go
index 31bd1b2..4ca6457 100644
--- a/tools/compliance/cmd/listshare/listshare.go
+++ b/tools/compliance/cmd/listshare/listshare.go
@@ -149,6 +149,9 @@
// Group the resolutions by project.
presolution := make(map[string]compliance.LicenseConditionSet)
for _, target := range shareSource.AttachesTo() {
+ if shareSource.IsPureAggregate(target) && !target.LicenseConditions().MatchesAnySet(compliance.ImpliesShared) {
+ continue
+ }
rl := shareSource.Resolutions(target)
sort.Sort(rl)
for _, r := range rl {
diff --git a/tools/compliance/cmd/listshare/listshare_test.go b/tools/compliance/cmd/listshare/listshare_test.go
index c1e38be..16a8b69 100644
--- a/tools/compliance/cmd/listshare/listshare_test.go
+++ b/tools/compliance/cmd/listshare/listshare_test.go
@@ -187,30 +187,23 @@
},
{
project: "device/library",
- conditions: []string{"restricted_allows_dynamic_linking"},
+ conditions: []string{"restricted_if_statically_linked"},
},
{
project: "dynamic/binary",
conditions: []string{"restricted"},
},
{
- project: "highest/apex",
- conditions: []string{
- "restricted",
- "restricted_allows_dynamic_linking",
- },
- },
- {
project: "static/binary",
conditions: []string{
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
{
project: "static/library",
conditions: []string{
"reciprocal",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
},
@@ -225,15 +218,8 @@
conditions: []string{"restricted"},
},
{
- project: "container/zip",
- conditions: []string{
- "restricted",
- "restricted_allows_dynamic_linking",
- },
- },
- {
project: "device/library",
- conditions: []string{"restricted_allows_dynamic_linking"},
+ conditions: []string{"restricted_if_statically_linked"},
},
{
project: "dynamic/binary",
@@ -242,14 +228,14 @@
{
project: "static/binary",
conditions: []string{
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
{
project: "static/library",
conditions: []string{
"reciprocal",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
},
@@ -263,14 +249,14 @@
project: "device/library",
conditions: []string{
"restricted",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
{
project: "distributable/application",
conditions: []string{
"restricted",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
},
@@ -283,20 +269,20 @@
{
project: "device/library",
conditions: []string{
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
{
project: "static/binary",
conditions: []string{
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
{
project: "static/library",
conditions: []string{
"reciprocal",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
},
},
},
@@ -320,10 +306,6 @@
project: "dynamic/binary",
conditions: []string{"restricted"},
},
- {
- project: "highest/apex",
- conditions: []string{"restricted"},
- },
},
},
{
@@ -336,10 +318,6 @@
conditions: []string{"restricted"},
},
{
- project: "container/zip",
- conditions: []string{"restricted"},
- },
- {
project: "dynamic/binary",
conditions: []string{"restricted"},
},
@@ -381,10 +359,6 @@
project: "bin/threelibraries",
conditions: []string{"restricted"},
},
- {
- project: "container/zip",
- conditions: []string{"restricted"},
- },
},
},
{
@@ -397,10 +371,6 @@
conditions: []string{"restricted"},
},
{
- project: "container/zip",
- conditions: []string{"restricted"},
- },
- {
project: "lib/apache",
conditions: []string{"restricted"},
},
@@ -420,10 +390,6 @@
conditions: []string{"restricted"},
},
{
- project: "container/zip",
- conditions: []string{"restricted"},
- },
- {
project: "lib/apache",
conditions: []string{"restricted"},
},
@@ -447,10 +413,6 @@
conditions: []string{"restricted"},
},
{
- project: "container/zip",
- conditions: []string{"restricted"},
- },
- {
project: "lib/apache",
conditions: []string{"restricted"},
},
diff --git a/tools/compliance/cmd/rtrace/rtrace_test.go b/tools/compliance/cmd/rtrace/rtrace_test.go
index cbe9461..d650868 100644
--- a/tools/compliance/cmd/rtrace/rtrace_test.go
+++ b/tools/compliance/cmd/rtrace/rtrace_test.go
@@ -44,9 +44,9 @@
expectedOut []string
}{
{
- condition: "firstparty",
- name: "apex",
- roots: []string{"highest.apex.meta_lic"},
+ condition: "firstparty",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
expectedOut: []string{},
},
{
@@ -60,33 +60,33 @@
expectedOut: []string{},
},
{
- condition: "firstparty",
- name: "container",
- roots: []string{"container.zip.meta_lic"},
+ condition: "firstparty",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
expectedOut: []string{},
},
{
- condition: "firstparty",
- name: "application",
- roots: []string{"application.meta_lic"},
+ condition: "firstparty",
+ name: "application",
+ roots: []string{"application.meta_lic"},
expectedOut: []string{},
},
{
- condition: "firstparty",
- name: "binary",
- roots: []string{"bin/bin1.meta_lic"},
+ condition: "firstparty",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
expectedOut: []string{},
},
{
- condition: "firstparty",
- name: "library",
- roots: []string{"lib/libd.so.meta_lic"},
+ condition: "firstparty",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
expectedOut: []string{},
},
{
- condition: "notice",
- name: "apex",
- roots: []string{"highest.apex.meta_lic"},
+ condition: "notice",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
expectedOut: []string{},
},
{
@@ -100,33 +100,33 @@
expectedOut: []string{},
},
{
- condition: "notice",
- name: "container",
- roots: []string{"container.zip.meta_lic"},
+ condition: "notice",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
expectedOut: []string{},
},
{
- condition: "notice",
- name: "application",
- roots: []string{"application.meta_lic"},
+ condition: "notice",
+ name: "application",
+ roots: []string{"application.meta_lic"},
expectedOut: []string{},
},
{
- condition: "notice",
- name: "binary",
- roots: []string{"bin/bin1.meta_lic"},
+ condition: "notice",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
expectedOut: []string{},
},
{
- condition: "notice",
- name: "library",
- roots: []string{"lib/libd.so.meta_lic"},
+ condition: "notice",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
expectedOut: []string{},
},
{
- condition: "reciprocal",
- name: "apex",
- roots: []string{"highest.apex.meta_lic"},
+ condition: "reciprocal",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
expectedOut: []string{},
},
{
@@ -140,27 +140,27 @@
expectedOut: []string{},
},
{
- condition: "reciprocal",
- name: "container",
- roots: []string{"container.zip.meta_lic"},
+ condition: "reciprocal",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
expectedOut: []string{},
},
{
- condition: "reciprocal",
- name: "application",
- roots: []string{"application.meta_lic"},
+ condition: "reciprocal",
+ name: "application",
+ roots: []string{"application.meta_lic"},
expectedOut: []string{},
},
{
- condition: "reciprocal",
- name: "binary",
- roots: []string{"bin/bin1.meta_lic"},
+ condition: "reciprocal",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
expectedOut: []string{},
},
{
- condition: "reciprocal",
- name: "library",
- roots: []string{"lib/libd.so.meta_lic"},
+ condition: "reciprocal",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
expectedOut: []string{},
},
{
@@ -168,7 +168,7 @@
name: "apex",
roots: []string{"highest.apex.meta_lic"},
expectedOut: []string{
- "testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
"testdata/restricted/lib/libb.so.meta_lic restricted",
},
},
@@ -180,7 +180,7 @@
sources: []string{"testdata/restricted/bin/bin1.meta_lic"},
stripPrefix: []string{"testdata/restricted/"},
},
- expectedOut: []string{"lib/liba.so.meta_lic restricted_allows_dynamic_linking"},
+ expectedOut: []string{"lib/liba.so.meta_lic restricted_if_statically_linked"},
},
{
condition: "restricted",
@@ -197,32 +197,32 @@
name: "container",
roots: []string{"container.zip.meta_lic"},
expectedOut: []string{
- "testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking",
+ "testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked",
"testdata/restricted/lib/libb.so.meta_lic restricted",
},
},
{
- condition: "restricted",
- name: "application",
- roots: []string{"application.meta_lic"},
- expectedOut: []string{"testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking"},
+ condition: "restricted",
+ name: "application",
+ roots: []string{"application.meta_lic"},
+ expectedOut: []string{"testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked"},
},
{
- condition: "restricted",
- name: "binary",
- roots: []string{"bin/bin1.meta_lic"},
- expectedOut: []string{"testdata/restricted/lib/liba.so.meta_lic restricted_allows_dynamic_linking"},
+ condition: "restricted",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
+ expectedOut: []string{"testdata/restricted/lib/liba.so.meta_lic restricted_if_statically_linked"},
},
{
- condition: "restricted",
- name: "library",
- roots: []string{"lib/libd.so.meta_lic"},
+ condition: "restricted",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
expectedOut: []string{},
},
{
- condition: "proprietary",
- name: "apex",
- roots: []string{"highest.apex.meta_lic"},
+ condition: "proprietary",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
expectedOut: []string{"testdata/proprietary/lib/libb.so.meta_lic restricted"},
},
{
@@ -246,27 +246,27 @@
expectedOut: []string{"lib/libb.so.meta_lic restricted"},
},
{
- condition: "proprietary",
- name: "container",
- roots: []string{"container.zip.meta_lic"},
+ condition: "proprietary",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
expectedOut: []string{"testdata/proprietary/lib/libb.so.meta_lic restricted"},
},
{
- condition: "proprietary",
- name: "application",
- roots: []string{"application.meta_lic"},
+ condition: "proprietary",
+ name: "application",
+ roots: []string{"application.meta_lic"},
expectedOut: []string{},
},
{
- condition: "proprietary",
- name: "binary",
- roots: []string{"bin/bin1.meta_lic"},
+ condition: "proprietary",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
expectedOut: []string{},
},
{
- condition: "proprietary",
- name: "library",
- roots: []string{"lib/libd.so.meta_lic"},
+ condition: "proprietary",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
expectedOut: []string{},
},
}
diff --git a/tools/compliance/cmd/sbom/sbom.go b/tools/compliance/cmd/sbom/sbom.go
new file mode 100644
index 0000000..c378e39
--- /dev/null
+++ b/tools/compliance/cmd/sbom/sbom.go
@@ -0,0 +1,538 @@
+// Copyright 2022 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "bytes"
+ "crypto/sha1"
+ "encoding/hex"
+ "flag"
+ "fmt"
+ "io"
+ "io/fs"
+ "os"
+ "path/filepath"
+ "sort"
+ "strings"
+ "time"
+
+ "android/soong/response"
+ "android/soong/tools/compliance"
+ "android/soong/tools/compliance/projectmetadata"
+
+ "github.com/google/blueprint/deptools"
+
+ "github.com/spdx/tools-golang/builder/builder2v2"
+ "github.com/spdx/tools-golang/json"
+ "github.com/spdx/tools-golang/spdx/common"
+ spdx "github.com/spdx/tools-golang/spdx/v2_2"
+ "github.com/spdx/tools-golang/spdxlib"
+)
+
+var (
+ failNoneRequested = fmt.Errorf("\nNo license metadata files requested")
+ failNoLicenses = fmt.Errorf("No licenses found")
+)
+
+const NOASSERTION = "NOASSERTION"
+
+type context struct {
+ stdout io.Writer
+ stderr io.Writer
+ rootFS fs.FS
+ product string
+ stripPrefix []string
+ creationTime creationTimeGetter
+}
+
+func (ctx context) strip(installPath string) string {
+ for _, prefix := range ctx.stripPrefix {
+ if strings.HasPrefix(installPath, prefix) {
+ p := strings.TrimPrefix(installPath, prefix)
+			if len(p) == 0 {
+				p = ctx.product
+			}
+			if len(p) == 0 {
+				continue
+			}
+ return p
+ }
+ }
+ return installPath
+}
+
+// newMultiString creates a flag that may be repeated, accumulating its values into a slice.
+func newMultiString(flags *flag.FlagSet, name, usage string) *multiString {
+ var f multiString
+ flags.Var(&f, name, usage)
+ return &f
+}
+
+// multiString implements the flag `Value` interface for multiple strings.
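+// e.g. repeating "-strip_prefix out/a/ -strip_prefix out/b/" accumulates
+// the value multiString{"out/a/", "out/b/"}.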
+type multiString []string
+
+func (ms *multiString) String() string { return strings.Join(*ms, ", ") }
+func (ms *multiString) Set(s string) error { *ms = append(*ms, s); return nil }
+
+func main() {
+ var expandedArgs []string
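+	// Expand @file response arguments: an argument like "@foo.rsp" is
+	// replaced by the arguments read from foo.rsp.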
+ for _, arg := range os.Args[1:] {
+ if strings.HasPrefix(arg, "@") {
+ f, err := os.Open(strings.TrimPrefix(arg, "@"))
+ if err != nil {
+ fmt.Fprintln(os.Stderr, err.Error())
+ os.Exit(1)
+ }
+
+ respArgs, err := response.ReadRspFile(f)
+ f.Close()
+ if err != nil {
+ fmt.Fprintln(os.Stderr, err.Error())
+ os.Exit(1)
+ }
+ expandedArgs = append(expandedArgs, respArgs...)
+ } else {
+ expandedArgs = append(expandedArgs, arg)
+ }
+ }
+
+ flags := flag.NewFlagSet("flags", flag.ExitOnError)
+
+ flags.Usage = func() {
+ fmt.Fprintf(os.Stderr, `Usage: %s {options} file.meta_lic {file.meta_lic...}
+
+Outputs an SBOM in SPDX format.
+
+Options:
+`, filepath.Base(os.Args[0]))
+ flags.PrintDefaults()
+ }
+
+ outputFile := flags.String("o", "-", "Where to write the SBOM spdx file. (default stdout)")
+ depsFile := flags.String("d", "", "Where to write the deps file")
+	product := flags.String("product", "", "The name of the product for which the SBOM is generated.")
+	stripPrefix := newMultiString(flags, "strip_prefix", "Prefix to remove from paths, i.e. the path to the root (multiple allowed)")
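+	// Illustrative invocation (paths hypothetical):
+	//   sbom -o sbom.spdx.json -product fictional \
+	//       -strip_prefix out/target/product/fictional/ system.meta_lic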
+
+ flags.Parse(expandedArgs)
+
+ // Must specify at least one root target.
+ if flags.NArg() == 0 {
+ flags.Usage()
+ os.Exit(2)
+ }
+
+ if len(*outputFile) == 0 {
+ flags.Usage()
+ fmt.Fprintf(os.Stderr, "must specify file for -o; use - for stdout\n")
+ os.Exit(2)
+ } else {
+ dir, err := filepath.Abs(filepath.Dir(*outputFile))
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "cannot determine path to %q: %s\n", *outputFile, err)
+ os.Exit(1)
+ }
+ fi, err := os.Stat(dir)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "cannot read directory %q of %q: %s\n", dir, *outputFile, err)
+ os.Exit(1)
+ }
+ if !fi.IsDir() {
+ fmt.Fprintf(os.Stderr, "parent %q of %q is not a directory\n", dir, *outputFile)
+ os.Exit(1)
+ }
+ }
+
+ var ofile io.Writer
+ ofile = os.Stdout
+ var obuf *bytes.Buffer
+ if *outputFile != "-" {
+ obuf = &bytes.Buffer{}
+ ofile = obuf
+ }
+
+ ctx := &context{ofile, os.Stderr, compliance.FS, *product, *stripPrefix, actualTime}
+
+ spdxDoc, deps, err := sbomGenerator(ctx, flags.Args()...)
+
+ if err != nil {
+ if err == failNoneRequested {
+ flags.Usage()
+ }
+ fmt.Fprintf(os.Stderr, "%s\n", err.Error())
+ os.Exit(1)
+ }
+
+	// Write the generated SPDX document.
+ if err := spdx_json.Save2_2(spdxDoc, ofile); err != nil {
+ fmt.Fprintf(os.Stderr, "failed to write document to %v: %v", *outputFile, err)
+ os.Exit(1)
+ }
+
+ if *outputFile != "-" {
+ err := os.WriteFile(*outputFile, obuf.Bytes(), 0666)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "could not write output to %q: %s\n", *outputFile, err)
+ os.Exit(1)
+ }
+ }
+
+ if *depsFile != "" {
+ err := deptools.WriteDepFile(*depsFile, *outputFile, deps)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "could not write deps to %q: %s\n", *depsFile, err)
+ os.Exit(1)
+ }
+ }
+ os.Exit(0)
+}
+
+type creationTimeGetter func() string
+
+// actualTime returns the current time in UTC, formatted as SPDX expects.
+func actualTime() string {
+	return time.Now().UTC().Format("2006-01-02T15:04:05Z")
+}
+
+// replaceSlashes replaces "/" with "-" so paths can be used in SPDXID and
+// LicenseRef values, which do not allow slashes.
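+// e.g. replaceSlashes("testdata/firstparty/highest.apex.meta_lic") returns
+// "testdata-firstparty-highest.apex.meta_lic".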
+func replaceSlashes(x string) string {
+ return strings.ReplaceAll(x, "/", "-")
+}
+
+// stripDocName removes the "out/" prefix and the meta_lic suffix from a target name.
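+// e.g. stripDocName("out/testdata/highest.apex.meta_lic") returns
+// "testdata/highest.apex".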
+func stripDocName(name string) string {
+ // remove outdir prefix
+ if strings.HasPrefix(name, "out/") {
+ name = name[4:]
+ }
+
+ // remove suffix
+ if strings.HasSuffix(name, ".meta_lic") {
+ name = name[:len(name)-9]
+ } else if strings.HasSuffix(name, "/meta_lic") {
+ name = name[:len(name)-9] + "/"
+ }
+
+ return name
+}
+
+// getPackageName returns the SPDX package name for a target node.
+func getPackageName(_ *context, tn *compliance.TargetNode) string {
+ return replaceSlashes(tn.Name())
+}
+
+// getDocumentName returns the SPDX document name for a target node, preferring
+// the product name, then the module name, then the stripped target name.
+func getDocumentName(ctx *context, tn *compliance.TargetNode, pm *projectmetadata.ProjectMetadata) string {
+ if len(ctx.product) > 0 {
+ return replaceSlashes(ctx.product)
+ }
+ if len(tn.ModuleName()) > 0 {
+ if pm != nil {
+ return replaceSlashes(pm.Name() + ":" + tn.ModuleName())
+ }
+ return replaceSlashes(tn.ModuleName())
+ }
+
+ return stripDocName(replaceSlashes(tn.Name()))
+}
+
+// getDownloadUrl returns the project's download URL if one is available
+// (Git, SVN, etc.), or NOASSERTION when none can be determined unambiguously.
+func getDownloadUrl(_ *context, pm *projectmetadata.ProjectMetadata) string {
+ if pm == nil {
+ return NOASSERTION
+ }
+
+ urlsByTypeName := pm.UrlsByTypeName()
+ if urlsByTypeName == nil {
+ return NOASSERTION
+ }
+
+ url := urlsByTypeName.DownloadUrl()
+ if url == "" {
+ return NOASSERTION
+ }
+ return url
+}
+
+// getProjectMetadata returns the most complete project metadata for the target node.
+func getProjectMetadata(_ *context, pmix *projectmetadata.Index,
+ tn *compliance.TargetNode) (*projectmetadata.ProjectMetadata, error) {
+ pms, err := pmix.MetadataForProjects(tn.Projects()...)
+ if err != nil {
+ return nil, fmt.Errorf("Unable to read projects for %q: %w\n", tn, err)
+ }
+ if len(pms) == 0 {
+ return nil, nil
+ }
+
+	// Pick the metadata with the most fields populated (name, version,
+	// download URL). Ties go to the lexicographically smallest project path.
+ score := -1
+ index := -1
+ for i := 0; i < len(pms); i++ {
+ tempScore := 0
+ if pms[i].Name() != "" {
+ tempScore += 1
+ }
+ if pms[i].Version() != "" {
+ tempScore += 1
+ }
+ if pms[i].UrlsByTypeName().DownloadUrl() != "" {
+ tempScore += 1
+ }
+
+ if tempScore == score {
+ if pms[i].Project() < pms[index].Project() {
+ index = i
+ }
+ } else if tempScore > score {
+ score = tempScore
+ index = i
+ }
+ }
+ return pms[index], nil
+}
+
+// inputFiles returns the complete list of files read: license texts, license
+// metadata, and project metadata.
+func inputFiles(lg *compliance.LicenseGraph, pmix *projectmetadata.Index, licenseTexts []string) []string {
+ projectMeta := pmix.AllMetadataFiles()
+ targets := lg.TargetNames()
+ files := make([]string, 0, len(licenseTexts)+len(targets)+len(projectMeta))
+ files = append(files, licenseTexts...)
+ files = append(files, targets...)
+ files = append(files, projectMeta...)
+ return files
+}
+
+// generateSPDXNamespace derives a deterministic SPDX document namespace from
+// the CreationInfo.Created timestamp and its SHA-1 checksum.
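+// e.g. generateSPDXNamespace("1970-01-01T00:00:00Z") returns
+// "SPDXRef-DOCUMENT-1970-01-01T00:00:00Z-" followed by the 40-digit
+// hexadecimal SHA-1 of that timestamp string.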
+func generateSPDXNamespace(created string) string {
+ // Compute a SHA1 checksum of the CreationInfo.Created field.
+ hash := sha1.Sum([]byte(created))
+ checksum := hex.EncodeToString(hash[:])
+
+ // Combine the checksum and timestamp to generate the SPDX Namespace.
+ namespace := fmt.Sprintf("SPDXRef-DOCUMENT-%s-%s", created, checksum)
+
+ return namespace
+}
+
+// sbomGenerator implements the SPDX SBOM utility.
+//
+// SBOMs (software bills of materials) are part of recent US government
+// requirements to improve national cybersecurity and software supply-chain
+// transparency; see https://www.cisa.gov/sbom.
+//
+// sbomGenerator emits documents that follow the SPDX specification
+// (https://spdx.github.io/spdx-spec/) and the internal Google SPDX style
+// guide (http://goto.google.com/spdx-style-guide).
+func sbomGenerator(ctx *context, files ...string) (*spdx.Document, []string, error) {
+ // Must be at least one root file.
+ if len(files) < 1 {
+ return nil, nil, failNoneRequested
+ }
+
+ pmix := projectmetadata.NewIndex(ctx.rootFS)
+
+ lg, err := compliance.ReadLicenseGraph(ctx.rootFS, ctx.stderr, files)
+
+ if err != nil {
+ return nil, nil, fmt.Errorf("Unable to read license text file(s) for %q: %v\n", files, err)
+ }
+
+ // creating the packages section
+ pkgs := []*spdx.Package{}
+
+ // creating the relationship section
+ relationships := []*spdx.Relationship{}
+
+ // creating the license section
+ otherLicenses := []*spdx.OtherLicense{}
+
+ // spdx document name
+ var docName string
+
+ // main package name
+ var mainPkgName string
+
+	// licenses maps each license text path to the LicenseRef identifier
+	// used in package license expressions.
+ licenses := make(map[string]string)
+ concludedLicenses := func(licenseTexts []string) string {
+ licenseRefs := make([]string, 0, len(licenseTexts))
+ for _, licenseText := range licenseTexts {
+ license := strings.SplitN(licenseText, ":", 2)[0]
+ if _, ok := licenses[license]; !ok {
+ licenseRef := "LicenseRef-" + replaceSlashes(license)
+ licenses[license] = licenseRef
+ }
+
+ licenseRefs = append(licenseRefs, licenses[license])
+ }
+ if len(licenseRefs) > 1 {
+ return "(" + strings.Join(licenseRefs, " AND ") + ")"
+ } else if len(licenseRefs) == 1 {
+ return licenseRefs[0]
+ }
+ return "NONE"
+ }
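+	// e.g. two license texts yield "(LicenseRef-x AND LicenseRef-y)", a single
+	// text yields its lone LicenseRef, and no texts yield "NONE".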
+
+ isMainPackage := true
+ visitedNodes := make(map[*compliance.TargetNode]struct{})
+
+	// Perform a breadth-first, top-down walk of the license graph, building
+	// package and relationship information.
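+	// Edge annotations map to SPDX relationships as follows:
+	//   runtime dependency -> child RUNTIME_DEPENDENCY_OF parent
+	//   derivation         -> parent CONTAINS child
+	//   build tool         -> child BUILD_TOOL_OF parent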
+ compliance.WalkTopDownBreadthFirst(nil, lg,
+ func(lg *compliance.LicenseGraph, tn *compliance.TargetNode, path compliance.TargetEdgePath) bool {
+ if err != nil {
+ return false
+ }
+ var pm *projectmetadata.ProjectMetadata
+ pm, err = getProjectMetadata(ctx, pmix, tn)
+ if err != nil {
+ return false
+ }
+
+ if isMainPackage {
+ docName = getDocumentName(ctx, tn, pm)
+ mainPkgName = replaceSlashes(getPackageName(ctx, tn))
+ isMainPackage = false
+ }
+
+ if len(path) == 0 {
+ // Add the describe relationship for the main package
+ rln := &spdx.Relationship{
+ RefA: common.MakeDocElementID("" /* this document */, "DOCUMENT"),
+ RefB: common.MakeDocElementID("", mainPkgName),
+ Relationship: "DESCRIBES",
+ }
+ relationships = append(relationships, rln)
+
+ } else {
+				// Classify the edge from the parent to choose the relationship type
+ parent := path[len(path)-1]
+ targetEdge := parent.Edge()
+ if targetEdge.IsRuntimeDependency() {
+					// Record a dynamic link as a RUNTIME_DEPENDENCY_OF relationship
+ rln := &spdx.Relationship{
+ RefA: common.MakeDocElementID("", replaceSlashes(getPackageName(ctx, tn))),
+ RefB: common.MakeDocElementID("", replaceSlashes(getPackageName(ctx, targetEdge.Target()))),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ }
+ relationships = append(relationships, rln)
+
+ } else if targetEdge.IsDerivation() {
+ // Adding the derivation annotation as a CONTAINS relationship
+ rln := &spdx.Relationship{
+ RefA: common.MakeDocElementID("", replaceSlashes(getPackageName(ctx, targetEdge.Target()))),
+ RefB: common.MakeDocElementID("", replaceSlashes(getPackageName(ctx, tn))),
+ Relationship: "CONTAINS",
+ }
+ relationships = append(relationships, rln)
+
+ } else if targetEdge.IsBuildTool() {
+ // Adding the toolchain annotation as a BUILD_TOOL_OF relationship
+ rln := &spdx.Relationship{
+ RefA: common.MakeDocElementID("", replaceSlashes(getPackageName(ctx, tn))),
+ RefB: common.MakeDocElementID("", replaceSlashes(getPackageName(ctx, targetEdge.Target()))),
+ Relationship: "BUILD_TOOL_OF",
+ }
+ relationships = append(relationships, rln)
+
+ } else {
+ panic(fmt.Errorf("Unknown dependency type: %v", targetEdge.Annotations()))
+ }
+ }
+
+ if _, alreadyVisited := visitedNodes[tn]; alreadyVisited {
+ return false
+ }
+ visitedNodes[tn] = struct{}{}
+ pkgName := getPackageName(ctx, tn)
+
+			// Build an SPDX package and add it to pkgs
+ pkg := &spdx.Package{
+ PackageName: replaceSlashes(pkgName),
+ PackageDownloadLocation: getDownloadUrl(ctx, pm),
+ PackageSPDXIdentifier: common.ElementID(replaceSlashes(pkgName)),
+ PackageLicenseConcluded: concludedLicenses(tn.LicenseTexts()),
+ }
+
+ if pm != nil && pm.Version() != "" {
+ pkg.PackageVersion = pm.Version()
+ } else {
+ pkg.PackageVersion = NOASSERTION
+ }
+
+ pkgs = append(pkgs, pkg)
+
+ return true
+ })
+
+	// Add the non-standard (LicenseRef-) licenses to OtherLicenses.
+
+ licenseTexts := make([]string, 0, len(licenses))
+
+ for licenseText := range licenses {
+ licenseTexts = append(licenseTexts, licenseText)
+ }
+
+ sort.Strings(licenseTexts)
+
+ for _, licenseText := range licenseTexts {
+ // open the file
+ f, err := ctx.rootFS.Open(filepath.Clean(licenseText))
+ if err != nil {
+ return nil, nil, fmt.Errorf("error opening license text file %q: %w", licenseText, err)
+ }
+
+ // read the file
+ text, err := io.ReadAll(f)
+ if err != nil {
+ return nil, nil, fmt.Errorf("error reading license text file %q: %w", licenseText, err)
+ }
+		// Build an SPDX OtherLicense entry and add it to otherLicenses.
+		otherLicenses = append(otherLicenses, &spdx.OtherLicense{
+			LicenseName:       strings.Replace(licenses[licenseText], "LicenseRef-", "", -1),
+			LicenseIdentifier: licenses[licenseText],
+ ExtractedText: string(text),
+ })
+ }
+
+ deps := inputFiles(lg, pmix, licenseTexts)
+ sort.Strings(deps)
+
+ // Making the SPDX doc
+ ci, err := builder2v2.BuildCreationInfoSection2_2("Organization", "Google LLC", nil)
+ if err != nil {
+ return nil, nil, fmt.Errorf("Unable to build creation info section for SPDX doc: %v\n", err)
+ }
+
+ ci.Created = ctx.creationTime()
+
+ doc := &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: docName,
+ DocumentNamespace: generateSPDXNamespace(ci.Created),
+ CreationInfo: ci,
+ Packages: pkgs,
+ Relationships: relationships,
+ OtherLicenses: otherLicenses,
+ }
+
+ if err := spdxlib.ValidateDocument2_2(doc); err != nil {
+ return nil, nil, fmt.Errorf("Unable to validate the SPDX doc: %v\n", err)
+ }
+
+ return doc, deps, nil
+}
diff --git a/tools/compliance/cmd/sbom/sbom_test.go b/tools/compliance/cmd/sbom/sbom_test.go
new file mode 100644
index 0000000..6472f51
--- /dev/null
+++ b/tools/compliance/cmd/sbom/sbom_test.go
@@ -0,0 +1,2467 @@
+// Copyright 2022 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "os"
+ "reflect"
+ "strings"
+ "testing"
+ "time"
+
+ "android/soong/tools/compliance"
+ "github.com/spdx/tools-golang/builder/builder2v2"
+ "github.com/spdx/tools-golang/spdx/common"
+ spdx "github.com/spdx/tools-golang/spdx/v2_2"
+)
+
+func TestMain(m *testing.M) {
+ // Change into the parent directory before running the tests
+ // so they can find the testdata directory.
+ if err := os.Chdir(".."); err != nil {
+ fmt.Printf("failed to change to testdata directory: %s\n", err)
+ os.Exit(1)
+ }
+ os.Exit(m.Run())
+}
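+
+// getCreationInfo, used in the expected documents below, is defined further
+// down in this file; it is assumed here to wrap the same constructor that
+// sbomGenerator calls, i.e. roughly:
+//
+//	ci, err := builder2v2.BuildCreationInfoSection2_2("Organization", "Google LLC", nil)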
+
+func Test(t *testing.T) {
+ tests := []struct {
+ condition string
+ name string
+ outDir string
+ roots []string
+ stripPrefix string
+ expectedOut *spdx.Document
+ expectedDeps []string
+ }{
+ {
+ condition: "firstparty",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-firstparty-highest.apex",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-firstparty-highest.apex.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-highest.apex.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-highest.apex.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/highest.apex.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "firstparty",
+ name: "application",
+ roots: []string{"application.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-firstparty-application",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-firstparty-application.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-application.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-bin-bin3.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-bin-bin3.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-application.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-bin-bin3.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-application.meta_lic"),
+ Relationship: "BUILD_TOOL_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-application.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-application.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/application.meta_lic",
+ "testdata/firstparty/bin/bin3.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ },
+ },
+ {
+ condition: "firstparty",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-firstparty-container.zip",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-firstparty-container.zip.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-container.zip.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-container.zip.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/container.zip.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "firstparty",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-firstparty-bin-bin1",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-firstparty-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-firstparty-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-firstparty-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ condition: "firstparty",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-firstparty-lib-libd.so",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-firstparty-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-firstparty-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-firstparty-lib-libd.so.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "notice",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-notice-highest.apex",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-notice-highest.apex.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-highest.apex.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-notice-highest.apex.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/highest.apex.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "notice",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-notice-container.zip",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-notice-container.zip.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-container.zip.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-notice-container.zip.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/container.zip.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "notice",
+ name: "application",
+ roots: []string{"application.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-notice-application",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-notice-application.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-application.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-bin-bin3.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-bin-bin3.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-notice-application.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-bin-bin3.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-application.meta_lic"),
+ Relationship: "BUILD_TOOL_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-application.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-application.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/application.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ },
+ },
+ {
+ condition: "notice",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-notice-bin-bin1",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-notice-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ PackageName: "testdata-notice-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-notice-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ condition: "notice",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-notice-lib-libd.so",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-notice-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-notice-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-notice-lib-libd.so.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "reciprocal",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-reciprocal-highest.apex",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-reciprocal-highest.apex.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-highest.apex.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-highest.apex.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ ExtractedText: "$$$Reciprocal License$$$\n",
+ LicenseName: "testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/bin/bin2.meta_lic",
+ "testdata/reciprocal/highest.apex.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "reciprocal",
+ name: "application",
+ roots: []string{"application.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-reciprocal-application",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-reciprocal-application.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-application.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-bin-bin3.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-bin-bin3.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-application.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-bin-bin3.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-application.meta_lic"),
+ Relationship: "BUILD_TOOL_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-application.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-application.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ ExtractedText: "$$$Reciprocal License$$$\n",
+ LicenseName: "testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/application.meta_lic",
+ "testdata/reciprocal/bin/bin3.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
+ },
+ },
+ {
+ condition: "reciprocal",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-reciprocal-bin-bin1",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-reciprocal-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ PackageName: "testdata-reciprocal-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-bin-bin1.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-reciprocal-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ ExtractedText: "$$$Reciprocal License$$$\n",
+ LicenseName: "testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ condition: "reciprocal",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-reciprocal-lib-libd.so",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-reciprocal-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-reciprocal-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-reciprocal-lib-libd.so.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "restricted",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-restricted-highest.apex",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-restricted-highest.apex.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-highest.apex.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-highest.apex.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ ExtractedText: "$$$Reciprocal License$$$\n",
+ LicenseName: "testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ ExtractedText: "###Restricted License###\n",
+ LicenseName: "testdata-restricted-RESTRICTED_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/bin/bin2.meta_lic",
+ "testdata/restricted/highest.apex.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ "testdata/restricted/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "restricted",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-restricted-container.zip",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-restricted-container.zip.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-container.zip.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-container.zip.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ ExtractedText: "$$$Reciprocal License$$$\n",
+ LicenseName: "testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ ExtractedText: "###Restricted License###\n",
+ LicenseName: "testdata-restricted-RESTRICTED_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/bin/bin2.meta_lic",
+ "testdata/restricted/container.zip.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ "testdata/restricted/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "restricted",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-restricted-bin-bin1",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-restricted-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ {
+ PackageName: "testdata-restricted-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-restricted-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-reciprocal-RECIPROCAL_LICENSE",
+ ExtractedText: "$$$Reciprocal License$$$\n",
+ LicenseName: "testdata-reciprocal-RECIPROCAL_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ ExtractedText: "###Restricted License###\n",
+ LicenseName: "testdata-restricted-RESTRICTED_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ condition: "restricted",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-restricted-lib-libd.so",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-restricted-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-restricted-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-restricted-lib-libd.so.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/restricted/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ condition: "proprietary",
+ name: "apex",
+ roots: []string{"highest.apex.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-proprietary-highest.apex",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-proprietary-highest.apex.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-highest.apex.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-highest.apex.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-highest.apex.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ ExtractedText: "@@@Proprietary License@@@\n",
+ LicenseName: "testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ ExtractedText: "###Restricted License###\n",
+ LicenseName: "testdata-restricted-RESTRICTED_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/bin/bin2.meta_lic",
+ "testdata/proprietary/highest.apex.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ "testdata/proprietary/lib/libd.so.meta_lic",
+ "testdata/restricted/RESTRICTED_LICENSE",
+ },
+ },
+ {
+ condition: "proprietary",
+ name: "container",
+ roots: []string{"container.zip.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-proprietary-container.zip",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-proprietary-container.zip.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-container.zip.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-bin-bin2.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-bin-bin2.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-container.zip.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin2.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-container.zip.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-libb.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-lib-libd.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin2.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ ExtractedText: "@@@Proprietary License@@@\n",
+ LicenseName: "testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ ExtractedText: "###Restricted License###\n",
+ LicenseName: "testdata-restricted-RESTRICTED_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/bin/bin2.meta_lic",
+ "testdata/proprietary/container.zip.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ "testdata/proprietary/lib/libd.so.meta_lic",
+ "testdata/restricted/RESTRICTED_LICENSE",
+ },
+ },
+ {
+ condition: "proprietary",
+ name: "application",
+ roots: []string{"application.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-proprietary-application",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-proprietary-application.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-application.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-bin-bin3.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-bin-bin3.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-libb.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libb.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-application.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-bin-bin3.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-application.meta_lic"),
+ Relationship: "BUILD_TOOL_OF",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-application.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-lib-libb.so.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-application.meta_lic"),
+ Relationship: "RUNTIME_DEPENDENCY_OF",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ ExtractedText: "@@@Proprietary License@@@\n",
+ LicenseName: "testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-restricted-RESTRICTED_LICENSE",
+ ExtractedText: "###Restricted License###\n",
+ LicenseName: "testdata-restricted-RESTRICTED_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/application.meta_lic",
+ "testdata/proprietary/bin/bin3.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/restricted/RESTRICTED_LICENSE",
+ },
+ },
+ {
+ condition: "proprietary",
+ name: "binary",
+ roots: []string{"bin/bin1.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-proprietary-bin-bin1",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-proprietary-bin-bin1.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-bin-bin1.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-liba.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-liba.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ {
+ PackageName: "testdata-proprietary-lib-libc.a.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libc.a.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-liba.so.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ {
+ RefA: common.MakeDocElementID("", "testdata-proprietary-bin-bin1.meta_lic"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-libc.a.meta_lic"),
+ Relationship: "CONTAINS",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-firstparty-FIRST_PARTY_LICENSE",
+ ExtractedText: "&&&First Party License&&&\n",
+ LicenseName: "testdata-firstparty-FIRST_PARTY_LICENSE",
+ },
+ {
+ LicenseIdentifier: "LicenseRef-testdata-proprietary-PROPRIETARY_LICENSE",
+ ExtractedText: "@@@Proprietary License@@@\n",
+ LicenseName: "testdata-proprietary-PROPRIETARY_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ condition: "proprietary",
+ name: "library",
+ roots: []string{"lib/libd.so.meta_lic"},
+ expectedOut: &spdx.Document{
+ SPDXVersion: "SPDX-2.2",
+ DataLicense: "CC0-1.0",
+ SPDXIdentifier: "DOCUMENT",
+ DocumentName: "testdata-proprietary-lib-libd.so",
+ DocumentNamespace: generateSPDXNamespace("1970-01-01T00:00:00Z"),
+ CreationInfo: getCreationInfo(t),
+ Packages: []*spdx.Package{
+ {
+ PackageName: "testdata-proprietary-lib-libd.so.meta_lic",
+ PackageVersion: "NOASSERTION",
+ PackageDownloadLocation: "NOASSERTION",
+ PackageSPDXIdentifier: common.ElementID("testdata-proprietary-lib-libd.so.meta_lic"),
+ PackageLicenseConcluded: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ Relationships: []*spdx.Relationship{
+ {
+ RefA: common.MakeDocElementID("", "DOCUMENT"),
+ RefB: common.MakeDocElementID("", "testdata-proprietary-lib-libd.so.meta_lic"),
+ Relationship: "DESCRIBES",
+ },
+ },
+ OtherLicenses: []*spdx.OtherLicense{
+ {
+ LicenseIdentifier: "LicenseRef-testdata-notice-NOTICE_LICENSE",
+ ExtractedText: "%%%Notice License%%%\n",
+ LicenseName: "testdata-notice-NOTICE_LICENSE",
+ },
+ },
+ },
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/proprietary/lib/libd.so.meta_lic",
+ },
+ },
+ }
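+	// Each case renders an SBOM for the roots under testdata/<condition>/ and
+	// compares both the generated SPDX document and the returned dependency
+	// list against the expectations above.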
+ for _, tt := range tests {
+ t.Run(tt.condition+" "+tt.name, func(t *testing.T) {
+ stdout := &bytes.Buffer{}
+ stderr := &bytes.Buffer{}
+
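+			// Resolve each root relative to its condition's testdata directory.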
+ rootFiles := make([]string, 0, len(tt.roots))
+ for _, r := range tt.roots {
+ rootFiles = append(rootFiles, "testdata/"+tt.condition+"/"+r)
+ }
+
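+			// fakeTime pins the creation timestamp so the document namespace matches
+			// generateSPDXNamespace("1970-01-01T00:00:00Z") in the expectations.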
+ ctx := context{stdout, stderr, compliance.GetFS(tt.outDir), "", []string{tt.stripPrefix}, fakeTime}
+
+ spdxDoc, deps, err := sbomGenerator(&ctx, rootFiles...)
+ if err != nil {
+ t.Fatalf("sbom: error = %v, stderr = %v", err, stderr)
+ }
+ if stderr.Len() > 0 {
+ t.Errorf("sbom: gotStderr = %v, want none", stderr)
+ }
+
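+			// Sanity-check required SPDX fields before the field-by-field comparison.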
+ if err := validate(spdxDoc); err != nil {
+ t.Fatalf("sbom: document fails to validate: %v", err)
+ }
+
+ gotData, err := json.Marshal(spdxDoc)
+ if err != nil {
+ t.Fatalf("sbom: failed to marshal spdx doc: %v", err)
+ }
+
+ t.Logf("Got SPDX Doc: %s", string(gotData))
+
+ expectedData, err := json.Marshal(tt.expectedOut)
+ if err != nil {
+ t.Fatalf("sbom: failed to marshal spdx doc: %v", err)
+ }
+
+ t.Logf("Want SPDX Doc: %s", string(expectedData))
+
+ // compare the spdx Docs
+ compareSpdxDocs(t, spdxDoc, tt.expectedOut)
+
+ // compare deps
+ t.Logf("got deps: %q", deps)
+
+ t.Logf("want deps: %q", tt.expectedDeps)
+
+ if g, w := deps, tt.expectedDeps; !reflect.DeepEqual(g, w) {
+ t.Errorf("unexpected deps, wanted:\n%s\ngot:\n%s\n",
+ strings.Join(w, "\n"), strings.Join(g, "\n"))
+ }
+ })
+ }
+}
+
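+// getCreationInfo builds the creation info the tests expect: an
+// "Organization: Google LLC" creator constructed by the spdx tools-golang builder.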
+func getCreationInfo(t *testing.T) *spdx.CreationInfo {
+ ci, err := builder2v2.BuildCreationInfoSection2_2("Organization", "Google LLC", nil)
+ if err != nil {
+ t.Errorf("Unable to get creation info: %v", err)
+ return nil
+ }
+ return ci
+}
+
+// validate returns an error if the Document is found to be invalid
+func validate(doc *spdx.Document) error {
+ if doc.SPDXVersion == "" {
+		return fmt.Errorf("SPDXVersion: got nothing, want SPDX version")
+ }
+ if doc.DataLicense == "" {
+ return fmt.Errorf("DataLicense: got nothing, want Data License")
+ }
+ if doc.SPDXIdentifier == "" {
+ return fmt.Errorf("SPDXIdentifier: got nothing, want SPDX Identifier")
+ }
+ if doc.DocumentName == "" {
+ return fmt.Errorf("DocumentName: got nothing, want Document Name")
+ }
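+	// Creators[1] is expected to hold the organization creator ("Google LLC")
+	// that getCreationInfo passes to the builder.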
+ if fmt.Sprintf("%v", doc.CreationInfo.Creators[1].Creator) != "Google LLC" {
+		return fmt.Errorf("Creator: got %v, want 'Google LLC'", doc.CreationInfo.Creators[1].Creator)
+ }
+ _, err := time.Parse(time.RFC3339, doc.CreationInfo.Created)
+ if err != nil {
+ return fmt.Errorf("Invalid time spec: %q: got error %q, want no error", doc.CreationInfo.Created, err)
+ }
+
+ for _, license := range doc.OtherLicenses {
+ if license.ExtractedText == "" {
+ return fmt.Errorf("License file: %q: got nothing, want license text", license.LicenseName)
+ }
+ }
+ return nil
+}
+
+// compareSpdxDocs deep-compares two spdx docs by going through the info section, packages, relationships and licenses
+func compareSpdxDocs(t *testing.T, actual, expected *spdx.Document) {
+	if actual == nil || expected == nil {
+		t.Fatalf("SBOM: SPDX Doc is nil! Got %v: Expected %v", actual, expected)
+	}
+
+ if actual.DocumentName != expected.DocumentName {
+ t.Errorf("sbom: unexpected SPDX Document Name got %q, want %q", actual.DocumentName, expected.DocumentName)
+ }
+
+ if actual.SPDXVersion != expected.SPDXVersion {
+ t.Errorf("sbom: unexpected SPDX Version got %s, want %s", actual.SPDXVersion, expected.SPDXVersion)
+ }
+
+ if actual.DataLicense != expected.DataLicense {
+ t.Errorf("sbom: unexpected SPDX DataLicense got %s, want %s", actual.DataLicense, expected.DataLicense)
+ }
+
+ if actual.SPDXIdentifier != expected.SPDXIdentifier {
+		t.Errorf("sbom: unexpected SPDX Identifier got %s, want %s", actual.SPDXIdentifier, expected.SPDXIdentifier)
+ }
+
+ if actual.DocumentNamespace != expected.DocumentNamespace {
+ t.Errorf("sbom: unexpected SPDX Document Namespace got %s, want %s", actual.DocumentNamespace, expected.DocumentNamespace)
+ }
+
+ // compare creation info
+ compareSpdxCreationInfo(t, actual.CreationInfo, expected.CreationInfo)
+
+ // compare packages
+ if len(actual.Packages) != len(expected.Packages) {
+		t.Fatalf("SBOM: Number of Packages is different! Got %d: Expected %d", len(actual.Packages), len(expected.Packages))
+ }
+
+ for i, pkg := range actual.Packages {
+ if !compareSpdxPackages(t, i, pkg, expected.Packages[i]) {
+ break
+ }
+ }
+
+ // compare licenses
+ if len(actual.OtherLicenses) != len(expected.OtherLicenses) {
+		t.Fatalf("SBOM: Number of Licenses is different! Got %d: Expected %d", len(actual.OtherLicenses), len(expected.OtherLicenses))
+ }
+ for i, license := range actual.OtherLicenses {
+ if !compareLicenses(t, i, license, expected.OtherLicenses[i]) {
+ break
+ }
+ }
+
+	// compare relationships
+	if len(actual.Relationships) != len(expected.Relationships) {
+		t.Fatalf("SBOM: Number of Relationships is different! Got %d: Expected %d", len(actual.Relationships), len(expected.Relationships))
+ }
+ for i, rl := range actual.Relationships {
+		if !compareRelationships(t, i, rl, expected.Relationships[i]) {
+ break
+ }
+ }
+}
+
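+// compareSpdxCreationInfo checks the license list version and each creator entry.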
+func compareSpdxCreationInfo(t *testing.T, actual, expected *spdx.CreationInfo) {
+	if actual == nil || expected == nil {
+		t.Fatalf("SBOM: Creation info is nil! Got %v: Expected %v", actual, expected)
+	}
+
+ if actual.LicenseListVersion != expected.LicenseListVersion {
+ t.Errorf("SBOM: Creation info license version Error! Got %s: Expected %s", actual.LicenseListVersion, expected.LicenseListVersion)
+ }
+
+ if len(actual.Creators) != len(expected.Creators) {
+		t.Fatalf("SBOM: Number of Creators is different! Got %d: Expected %d", len(actual.Creators), len(expected.Creators))
+ }
+
+ for i, info := range actual.Creators {
+ if info != expected.Creators[i] {
+ t.Errorf("SBOM: Creation info creators Error! Got %q: Expected %q", info, expected.Creators[i])
+ }
+ }
+}
+
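+// compareSpdxPackages compares the package at index i field by field and
+// returns false on the first mismatch so the caller can stop early.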
+func compareSpdxPackages(t *testing.T, i int, actual, expected *spdx.Package) bool {
+ if actual == nil || expected == nil {
+		t.Errorf("SBOM: Package is nil at index %d! Got %v: Expected %v", i, actual, expected)
+ return false
+ }
+ if actual.PackageName != expected.PackageName {
+ t.Errorf("SBOM: Package name Error at index %d! Got %s: Expected %s", i, actual.PackageName, expected.PackageName)
+ return false
+ }
+
+ if actual.PackageVersion != expected.PackageVersion {
+ t.Errorf("SBOM: Package version Error at index %d! Got %s: Expected %s", i, actual.PackageVersion, expected.PackageVersion)
+ return false
+ }
+
+ if actual.PackageSPDXIdentifier != expected.PackageSPDXIdentifier {
+ t.Errorf("SBOM: Package identifier Error at index %d! Got %s: Expected %s", i, actual.PackageSPDXIdentifier, expected.PackageSPDXIdentifier)
+ return false
+ }
+
+ if actual.PackageDownloadLocation != expected.PackageDownloadLocation {
+ t.Errorf("SBOM: Package download location Error at index %d! Got %s: Expected %s", i, actual.PackageDownloadLocation, expected.PackageDownloadLocation)
+ return false
+ }
+
+ if actual.PackageLicenseConcluded != expected.PackageLicenseConcluded {
+ t.Errorf("SBOM: Package license concluded Error at index %d! Got %s: Expected %s", i, actual.PackageLicenseConcluded, expected.PackageLicenseConcluded)
+ return false
+ }
+ return true
+}
+
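+// compareRelationships compares the relationship at index i field by field and
+// returns false on the first mismatch.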
+func compareRelationships(t *testing.T, i int, actual, expected *spdx.Relationship) bool {
+ if actual == nil || expected == nil {
+		t.Errorf("SBOM: Relationship is nil at index %d! Got %v: Expected %v", i, actual, expected)
+ return false
+ }
+
+ if actual.RefA != expected.RefA {
+ t.Errorf("SBOM: Relationship RefA Error at index %d! Got %s: Expected %s", i, actual.RefA, expected.RefA)
+ return false
+ }
+
+ if actual.RefB != expected.RefB {
+ t.Errorf("SBOM: Relationship RefB Error at index %d! Got %s: Expected %s", i, actual.RefB, expected.RefB)
+ return false
+ }
+
+ if actual.Relationship != expected.Relationship {
+ t.Errorf("SBOM: Relationship type Error at index %d! Got %s: Expected %s", i, actual.Relationship, expected.Relationship)
+ return false
+ }
+ return true
+}
+
+func compareLicenses(t *testing.T, i int, actual, expected *spdx.OtherLicense) bool {
+ if actual == nil || expected == nil {
+ t.Errorf("SBOM: Licenses is nil at index %d! Got %v: Expected %v", i, actual, expected)
+ return false
+ }
+
+ if actual.LicenseName != expected.LicenseName {
+ t.Errorf("SBOM: License Name Error at index %d! Got %s: Expected %s", i, actual.LicenseName, expected.LicenseName)
+ return false
+ }
+
+ if actual.LicenseIdentifier != expected.LicenseIdentifier {
+ t.Errorf("SBOM: License Identifier Error at index %d! Got %s: Expected %s", i, actual.LicenseIdentifier, expected.LicenseIdentifier)
+ return false
+ }
+
+ if actual.ExtractedText != expected.ExtractedText {
+ t.Errorf("SBOM: License Extracted Text Error at index %d! Got: %q want: %q", i, actual.ExtractedText, expected.ExtractedText)
+ return false
+ }
+ return true
+}
+
+func fakeTime() string {
+ t := time.UnixMicro(0)
+ return t.UTC().Format("2006-01-02T15:04:05Z")
+}
diff --git a/tools/compliance/cmd/testdata/firstparty/METADATA b/tools/compliance/cmd/testdata/firstparty/METADATA
new file mode 100644
index 0000000..62b4481
--- /dev/null
+++ b/tools/compliance/cmd/testdata/firstparty/METADATA
@@ -0,0 +1,6 @@
+# Comments are allowed
+name: "1ptd"
+description: "First Party Test Data"
+third_party {
+ version: "1.0"
+}
diff --git a/tools/compliance/cmd/testdata/notice/METADATA b/tools/compliance/cmd/testdata/notice/METADATA
new file mode 100644
index 0000000..302dfeb
--- /dev/null
+++ b/tools/compliance/cmd/testdata/notice/METADATA
@@ -0,0 +1,6 @@
+# Comments are allowed
+name: "noticetd"
+description: "Notice Test Data"
+third_party {
+ version: "1.0"
+}
diff --git a/tools/compliance/cmd/testdata/proprietary/METADATA b/tools/compliance/cmd/testdata/proprietary/METADATA
new file mode 100644
index 0000000..72cc54a
--- /dev/null
+++ b/tools/compliance/cmd/testdata/proprietary/METADATA
@@ -0,0 +1 @@
+# comments are allowed
diff --git a/tools/compliance/cmd/testdata/proprietary/bin/bin3.meta_lic b/tools/compliance/cmd/testdata/proprietary/bin/bin3.meta_lic
index 7ef14e9..859be7f 100644
--- a/tools/compliance/cmd/testdata/proprietary/bin/bin3.meta_lic
+++ b/tools/compliance/cmd/testdata/proprietary/bin/bin3.meta_lic
@@ -2,7 +2,7 @@
module_classes: "EXECUTABLES"
projects: "standalone/binary"
license_kinds: "SPDX-license-identifier-LGPL-2.0"
-license_conditions: "restricted"
+license_conditions: "restricted_if_statically_linked"
license_texts: "testdata/restricted/RESTRICTED_LICENSE"
is_container: false
built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin3"
diff --git a/tools/compliance/cmd/testdata/reciprocal/METADATA b/tools/compliance/cmd/testdata/reciprocal/METADATA
new file mode 100644
index 0000000..50cc2ef
--- /dev/null
+++ b/tools/compliance/cmd/testdata/reciprocal/METADATA
@@ -0,0 +1,5 @@
+# Comments are allowed
+description: "Reciprocal Test Data"
+third_party {
+ version: "1.0"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/README.md b/tools/compliance/cmd/testdata/regressconcur/README.md
new file mode 100644
index 0000000..98ab0ef
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/README.md
@@ -0,0 +1,62 @@
+## Start long walks followed by short walks
+
+Detects a concurrency error where "already started" was treated as
+"already finished" (see the sketch after the graph).
+
+### Testdata build graph structure:
+
+A long chain of notice libraries ending in a restricted library, statically
+linked into nine proprietary binaries shipped in a notice container. The deep
+chain makes the first walks long; later, shorter walks race against them.
+
+```dot
+strict digraph {
+ rankdir=LR;
+ bin1 [label="bin/bin1.meta_lic\nproprietary"];
+ bin2 [label="bin/bin2.meta_lic\nproprietary"];
+ bin3 [label="bin/bin3.meta_lic\nproprietary"];
+ bin4 [label="bin/bin4.meta_lic\nproprietary"];
+ bin5 [label="bin/bin5.meta_lic\nproprietary"];
+ bin6 [label="bin/bin6.meta_lic\nproprietary"];
+ bin7 [label="bin/bin7.meta_lic\nproprietary"];
+ bin8 [label="bin/bin8.meta_lic\nproprietary"];
+ bin9 [label="bin/bin9.meta_lic\nproprietary"];
+ container [label="container.zip.meta_lic\nnotice"];
+ lib1 [label="lib/lib1.so.meta_lic\nnotice"];
+ lib2 [label="lib/lib2.so.meta_lic\nnotice"];
+ lib3 [label="lib/lib3.so.meta_lic\nnotice"];
+ lib4 [label="lib/lib4.so.meta_lic\nnotice"];
+ lib5 [label="lib/lib5.so.meta_lic\nnotice"];
+ lib6 [label="lib/lib6.so.meta_lic\nnotice"];
+ lib7 [label="lib/lib7.so.meta_lic\nnotice"];
+ lib8 [label="lib/lib8.so.meta_lic\nnotice"];
+ lib9 [label="lib/lib9.so.meta_lic\nrestricted"];
+ container -> bin1 [label="static"];
+ container -> bin2 [label="static"];
+ container -> bin3 [label="static"];
+ container -> bin4 [label="static"];
+ container -> bin5 [label="static"];
+ container -> bin6 [label="static"];
+ container -> bin7 [label="static"];
+ container -> bin8 [label="static"];
+ container -> bin9 [label="static"];
+ bin1 -> lib1 [label="static"];
+ bin2 -> lib2 [label="static"];
+ bin3 -> lib3 [label="static"];
+ bin4 -> lib4 [label="static"];
+ bin5 -> lib5 [label="static"];
+ bin6 -> lib6 [label="static"];
+ bin7 -> lib7 [label="static"];
+ bin8 -> lib8 [label="static"];
+ bin9 -> lib9 [label="static"];
+ lib1 -> lib2 [label="static"];
+ lib2 -> lib3 [label="static"];
+ lib3 -> lib4 [label="static"];
+ lib4 -> lib5 [label="static"];
+ lib5 -> lib6 [label="static"];
+ lib6 -> lib7 [label="static"];
+ lib7 -> lib8 [label="static"];
+ lib8 -> lib9 [label="static"];
+ {rank=same; container}
+}
+```
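+
+A minimal sketch of the failure mode this data exercises (a hypothetical
+walker, not the actual compliance resolver): if a walk marks a node as
+visited when it *starts* the node instead of when it *finishes*, a later,
+shorter walk that reaches the same node sees "visited" and reuses a result
+that has not been computed yet.
+
+```go
+package main
+
+import "sync"
+
+type walker struct {
+	mu      sync.Mutex
+	started map[string]bool     // bug: "started" stands in for "finished"
+	result  map[string][]string // cached resolutions per node
+}
+
+// walk returns the transitive dependencies of node, caching results.
+func (w *walker) walk(node string, deps map[string][]string) []string {
+	w.mu.Lock()
+	if w.started[node] {
+		r := w.result[node] // may still be nil: the first walk is in progress
+		w.mu.Unlock()
+		return r
+	}
+	w.started[node] = true
+	w.mu.Unlock()
+
+	var out []string
+	for _, d := range deps[node] {
+		out = append(out, w.walk(d, deps)...)
+		out = append(out, d)
+	}
+
+	w.mu.Lock()
+	w.result[node] = out // only here is the node actually finished
+	w.mu.Unlock()
+	return out
+}
+
+func main() {
+	deps := map[string][]string{"bin1": {"lib1"}, "lib1": {"lib2"}, "lib2": {}}
+	w := &walker{started: map[string]bool{}, result: map[string][]string{}}
+	_ = w.walk("bin1", deps) // single-threaded here; the race needs goroutines
+}
+```
+
+A long walk down the lib1..lib9 chain starts every node early; a short walk
+launched afterwards for a single binary can hit a started-but-unfinished
+node and return an incomplete resolution.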
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin1.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin1.meta_lic
new file mode 100644
index 0000000..e3763f6
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin1.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin1"
+installed: "out/target/product/fictional/system/bin/bin1"
+sources: "out/target/product/fictional/system/lib/lib1.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib1.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin2.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin2.meta_lic
new file mode 100644
index 0000000..e1822bb
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin2.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin2"
+installed: "out/target/product/fictional/system/bin/bin2"
+sources: "out/target/product/fictional/system/lib/lib2.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib2.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin3.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin3.meta_lic
new file mode 100644
index 0000000..35f5d74
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin3.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin3"
+installed: "out/target/product/fictional/system/bin/bin3"
+sources: "out/target/product/fictional/system/lib/lib3.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib3.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin4.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin4.meta_lic
new file mode 100644
index 0000000..3287382
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin4.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin4"
+installed: "out/target/product/fictional/system/bin/bin4"
+sources: "out/target/product/fictional/system/lib/lib4.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib4.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin5.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin5.meta_lic
new file mode 100644
index 0000000..f51bcdb
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin5.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin5"
+installed: "out/target/product/fictional/system/bin/bin5"
+sources: "out/target/product/fictional/system/lib/lib5.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib5.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin6.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin6.meta_lic
new file mode 100644
index 0000000..4c99b01
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin6.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin6"
+installed: "out/target/product/fictional/system/bin/bin6"
+sources: "out/target/product/fictional/system/lib/lib6.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib6.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin7.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin7.meta_lic
new file mode 100644
index 0000000..565a338
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin7.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin7"
+installed: "out/target/product/fictional/system/bin/bin7"
+sources: "out/target/product/fictional/system/lib/lib7.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib7.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin8.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin8.meta_lic
new file mode 100644
index 0000000..f66c2c9
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin8.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin8"
+installed: "out/target/product/fictional/system/bin/bin8"
+sources: "out/target/product/fictional/system/lib/lib8.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib8.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/bin/bin9.meta_lic b/tools/compliance/cmd/testdata/regressconcur/bin/bin9.meta_lic
new file mode 100644
index 0000000..0aac4c6
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/bin/bin9.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+module_classes: "EXECUTABLES"
+projects: "bin/onelibrary"
+license_kinds: "legacy_proprietary"
+license_conditions: "proprietary"
+is_container: false
+built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin9"
+installed: "out/target/product/fictional/system/bin/bin9"
+sources: "out/target/product/fictional/system/lib/lib9.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib9.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/container.zip.meta_lic b/tools/compliance/cmd/testdata/regressconcur/container.zip.meta_lic
new file mode 100644
index 0000000..88683d4
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/container.zip.meta_lic
@@ -0,0 +1,60 @@
+package_name: "Android"
+projects: "container/zip"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: true
+built: "out/target/product/fictional/obj/ETC/container_intermediates/container.zip"
+installed: "out/target/product/fictional/data/container.zip"
+install_map {
+ from_path: "out/target/product/fictional/system/lib/"
+ container_path: "/"
+}
+install_map {
+ from_path: "out/target/product/fictional/system/bin/"
+ container_path: "/"
+}
+sources: "out/target/product/fictional/system/bin/bin1"
+sources: "out/target/product/fictional/system/bin/bin2"
+sources: "out/target/product/fictional/system/bin/bin3"
+sources: "out/target/product/fictional/system/bin/bin4"
+sources: "out/target/product/fictional/system/bin/bin5"
+sources: "out/target/product/fictional/system/bin/bin6"
+sources: "out/target/product/fictional/system/bin/bin7"
+sources: "out/target/product/fictional/system/bin/bin8"
+sources: "out/target/product/fictional/system/bin/bin9"
+deps: {
+ file: "testdata/regressconcur/bin/bin1.meta_lic"
+ annotations: "static"
+}
+deps: {
+ file: "testdata/regressconcur/bin/bin2.meta_lic"
+ annotations: "static"
+}
+deps: {
+ file: "testdata/regressconcur/bin/bin3.meta_lic"
+ annotations: "static"
+}
+deps: {
+ file: "testdata/regressconcur/bin/bin4.meta_lic"
+ annotations: "static"
+}
+deps: {
+ file: "testdata/regressconcur/bin/bin5.meta_lic"
+ annotations: "static"
+}
+deps: {
+ file: "testdata/regressconcur/bin/bin6.meta_lic"
+ annotations: "static"
+}
+deps: {
+ file: "testdata/regressconcur/bin/bin7.meta_lic"
+ annotations: "static"
+}
+deps: {
+ file: "testdata/regressconcur/bin/bin8.meta_lic"
+ annotations: "static"
+}
+deps: {
+ file: "testdata/regressconcur/bin/bin9.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib1.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib1.a.meta_lic
new file mode 100644
index 0000000..89ebd7c
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib1.a.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib1.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib1.a"
+installed: "out/target/product/fictional/system/lib/lib1.so"
+sources: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib2.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib2.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib2.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib2.a.meta_lic
new file mode 100644
index 0000000..ec84287
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib2.a.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib2.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib2.a"
+installed: "out/target/product/fictional/system/lib/lib2.so"
+sources: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib3.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib3.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib3.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib3.a.meta_lic
new file mode 100644
index 0000000..1949c06
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib3.a.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib3.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib3.a"
+installed: "out/target/product/fictional/system/lib/lib3.so"
+sources: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib4.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib4.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib4.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib4.a.meta_lic
new file mode 100644
index 0000000..4dc777b
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib4.a.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib4.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib4.a"
+installed: "out/target/product/fictional/system/lib/lib4.so"
+sources: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib5.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib5.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib5.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib5.a.meta_lic
new file mode 100644
index 0000000..5d005dc
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib5.a.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib5.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib5.a"
+installed: "out/target/product/fictional/system/lib/lib5.so"
+sources: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib6.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib6.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib6.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib6.a.meta_lic
new file mode 100644
index 0000000..f2920c3
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib6.a.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib6.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib6.a"
+installed: "out/target/product/fictional/system/lib/lib6.so"
+sources: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib7.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib7.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib7.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib7.a.meta_lic
new file mode 100644
index 0000000..28c0c5f
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib7.a.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib7.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib7.a"
+installed: "out/target/product/fictional/system/lib/lib7.so"
+sources: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib8.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib8.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib8.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib8.a.meta_lic
new file mode 100644
index 0000000..b887edc
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib8.a.meta_lic
@@ -0,0 +1,13 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-MIT"
+license_conditions: "notice"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib8.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib8.a"
+installed: "out/target/product/fictional/system/lib/lib8.so"
+sources: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib9.a"
+deps: {
+ file: "testdata/regressconcur/lib/lib9.a.meta_lic"
+ annotations: "static"
+}
diff --git a/tools/compliance/cmd/testdata/regressconcur/lib/lib9.a.meta_lic b/tools/compliance/cmd/testdata/regressconcur/lib/lib9.a.meta_lic
new file mode 100644
index 0000000..9bf155f
--- /dev/null
+++ b/tools/compliance/cmd/testdata/regressconcur/lib/lib9.a.meta_lic
@@ -0,0 +1,8 @@
+package_name: "Android"
+projects: "lib/restricted"
+license_kinds: "SPDX-license-identifier-GPL-2.0"
+license_conditions: "restricted"
+is_container: false
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib9.so"
+built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/lib9.a"
+installed: "out/target/product/fictional/system/lib/lib9.so"
diff --git a/tools/compliance/cmd/testdata/restricted/METADATA b/tools/compliance/cmd/testdata/restricted/METADATA
new file mode 100644
index 0000000..6bcf83f
--- /dev/null
+++ b/tools/compliance/cmd/testdata/restricted/METADATA
@@ -0,0 +1,6 @@
+name {
+ id: 1
+}
+third_party {
+ version: 2
+}
diff --git a/tools/compliance/cmd/testdata/restricted/METADATA.android b/tools/compliance/cmd/testdata/restricted/METADATA.android
new file mode 100644
index 0000000..1142499
--- /dev/null
+++ b/tools/compliance/cmd/testdata/restricted/METADATA.android
@@ -0,0 +1,6 @@
+# Comments are allowed
+name: "testdata"
+description: "Restricted Test Data"
+third_party {
+ version: "1.0"
+}
diff --git a/tools/compliance/cmd/testdata/restricted/bin/bin3.meta_lic b/tools/compliance/cmd/testdata/restricted/bin/bin3.meta_lic
index 7ef14e9..859be7f 100644
--- a/tools/compliance/cmd/testdata/restricted/bin/bin3.meta_lic
+++ b/tools/compliance/cmd/testdata/restricted/bin/bin3.meta_lic
@@ -2,7 +2,7 @@
module_classes: "EXECUTABLES"
projects: "standalone/binary"
license_kinds: "SPDX-license-identifier-LGPL-2.0"
-license_conditions: "restricted"
+license_conditions: "restricted_if_statically_linked"
license_texts: "testdata/restricted/RESTRICTED_LICENSE"
is_container: false
built: "out/target/product/fictional/obj/EXECUTABLES/bin_intermediates/bin3"
diff --git a/tools/compliance/cmd/testdata/restricted/lib/liba.so.meta_lic b/tools/compliance/cmd/testdata/restricted/lib/liba.so.meta_lic
index a505d4a..ce5de6e 100644
--- a/tools/compliance/cmd/testdata/restricted/lib/liba.so.meta_lic
+++ b/tools/compliance/cmd/testdata/restricted/lib/liba.so.meta_lic
@@ -1,7 +1,7 @@
package_name: "Device"
projects: "device/library"
license_kinds: "SPDX-license-identifier-LGPL-2.0"
-license_conditions: "restricted"
+license_conditions: "restricted_if_statically_linked"
license_texts: "testdata/restricted/RESTRICTED_LICENSE"
is_container: false
built: "out/target/product/fictional/obj/SHARED_LIBRARIES/lib_intermediates/liba.so"
diff --git a/tools/compliance/cmd/textnotice/textnotice.go b/tools/compliance/cmd/textnotice/textnotice.go
index 9beaf58..450290c 100644
--- a/tools/compliance/cmd/textnotice/textnotice.go
+++ b/tools/compliance/cmd/textnotice/textnotice.go
@@ -23,6 +23,7 @@
"io/fs"
"os"
"path/filepath"
+ "sort"
"strings"
"android/soong/response"
@@ -230,7 +231,8 @@
fmt.Fprintln(ctx.stdout)
}
- *ctx.deps = ni.InputNoticeFiles()
+ *ctx.deps = ni.InputFiles()
+ sort.Strings(*ctx.deps)
return nil
}
diff --git a/tools/compliance/cmd/textnotice/textnotice_test.go b/tools/compliance/cmd/textnotice/textnotice_test.go
index e661a44..a902313 100644
--- a/tools/compliance/cmd/textnotice/textnotice_test.go
+++ b/tools/compliance/cmd/textnotice/textnotice_test.go
@@ -65,7 +65,16 @@
usedBy{"highest.apex/lib/libb.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/highest.apex.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -81,7 +90,16 @@
usedBy{"container.zip/libb.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/container.zip.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -93,7 +111,13 @@
usedBy{"application"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/application.meta_lic",
+ "testdata/firstparty/bin/bin3.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -105,7 +129,12 @@
usedBy{"bin/bin1"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -117,7 +146,10 @@
usedBy{"lib/libd.so"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "notice",
@@ -142,6 +174,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/highest.apex.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
},
},
{
@@ -167,6 +206,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/container.zip.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
},
},
{
@@ -186,6 +232,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/application.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
},
},
{
@@ -207,6 +257,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
},
},
{
@@ -219,7 +272,10 @@
usedBy{"lib/libd.so"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
},
{
condition: "reciprocal",
@@ -244,6 +300,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/bin/bin2.meta_lic",
+ "testdata/reciprocal/highest.apex.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
},
},
{
@@ -269,6 +332,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/bin/bin2.meta_lic",
+ "testdata/reciprocal/container.zip.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
},
},
{
@@ -288,6 +358,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/application.meta_lic",
+ "testdata/reciprocal/bin/bin3.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
},
},
{
@@ -309,6 +383,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
},
},
{
@@ -323,6 +400,7 @@
},
expectedDeps: []string{
"testdata/notice/NOTICE_LICENSE",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
},
},
{
@@ -353,6 +431,13 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/bin/bin2.meta_lic",
+ "testdata/restricted/highest.apex.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ "testdata/restricted/lib/libd.so.meta_lic",
},
},
{
@@ -383,6 +468,13 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/bin/bin2.meta_lic",
+ "testdata/restricted/container.zip.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ "testdata/restricted/lib/libd.so.meta_lic",
},
},
{
@@ -402,6 +494,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/application.meta_lic",
+ "testdata/restricted/bin/bin3.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
},
},
{
@@ -426,6 +522,9 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
},
},
{
@@ -438,7 +537,10 @@
usedBy{"lib/libd.so"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/restricted/lib/libd.so.meta_lic",
+ },
},
{
condition: "proprietary",
@@ -468,6 +570,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/bin/bin2.meta_lic",
+ "testdata/proprietary/highest.apex.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ "testdata/proprietary/lib/libd.so.meta_lic",
"testdata/restricted/RESTRICTED_LICENSE",
},
},
@@ -499,6 +608,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/bin/bin2.meta_lic",
+ "testdata/proprietary/container.zip.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ "testdata/proprietary/lib/libd.so.meta_lic",
"testdata/restricted/RESTRICTED_LICENSE",
},
},
@@ -519,6 +635,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/application.meta_lic",
+ "testdata/proprietary/bin/bin3.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
},
},
{
@@ -540,6 +660,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
},
},
{
@@ -552,7 +675,10 @@
usedBy{"lib/libd.so"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/proprietary/lib/libd.so.meta_lic",
+ },
},
}
for _, tt := range tests {
diff --git a/tools/compliance/cmd/xmlnotice/xmlnotice.go b/tools/compliance/cmd/xmlnotice/xmlnotice.go
index 2097b7c..c3f8e4c 100644
--- a/tools/compliance/cmd/xmlnotice/xmlnotice.go
+++ b/tools/compliance/cmd/xmlnotice/xmlnotice.go
@@ -24,6 +24,7 @@
"io/fs"
"os"
"path/filepath"
+ "sort"
"strings"
"android/soong/response"
@@ -238,7 +239,8 @@
}
fmt.Fprintln(ctx.stdout, "</licenses>")
- *ctx.deps = ni.InputNoticeFiles()
+ *ctx.deps = ni.InputFiles()
+ sort.Strings(*ctx.deps)
return nil
}
diff --git a/tools/compliance/cmd/xmlnotice/xmlnotice_test.go b/tools/compliance/cmd/xmlnotice/xmlnotice_test.go
index 731e783..551006f 100644
--- a/tools/compliance/cmd/xmlnotice/xmlnotice_test.go
+++ b/tools/compliance/cmd/xmlnotice/xmlnotice_test.go
@@ -65,7 +65,16 @@
target{"highest.apex/lib/libb.so", "Android"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/highest.apex.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -79,7 +88,16 @@
target{"container.zip/libb.so", "Android"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/bin/bin2.meta_lic",
+ "testdata/firstparty/container.zip.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -89,7 +107,13 @@
target{"application", "Android"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/application.meta_lic",
+ "testdata/firstparty/bin/bin3.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libb.so.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -99,7 +123,12 @@
target{"bin/bin1", "Android"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/bin/bin1.meta_lic",
+ "testdata/firstparty/lib/liba.so.meta_lic",
+ "testdata/firstparty/lib/libc.a.meta_lic",
+ },
},
{
condition: "firstparty",
@@ -109,7 +138,10 @@
target{"lib/libd.so", "Android"},
firstParty{},
},
- expectedDeps: []string{"testdata/firstparty/FIRST_PARTY_LICENSE"},
+ expectedDeps: []string{
+ "testdata/firstparty/FIRST_PARTY_LICENSE",
+ "testdata/firstparty/lib/libd.so.meta_lic",
+ },
},
{
condition: "notice",
@@ -129,6 +161,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/highest.apex.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
},
},
{
@@ -149,6 +188,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/container.zip.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
},
},
{
@@ -164,6 +210,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/application.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
},
},
{
@@ -180,6 +230,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
},
},
{
@@ -190,7 +243,10 @@
target{"lib/libd.so", "External"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
},
{
condition: "reciprocal",
@@ -210,6 +266,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/bin/bin2.meta_lic",
+ "testdata/reciprocal/highest.apex.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
},
},
{
@@ -230,6 +293,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/bin/bin2.meta_lic",
+ "testdata/reciprocal/container.zip.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
},
},
{
@@ -245,6 +315,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/application.meta_lic",
+ "testdata/reciprocal/bin/bin3.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libb.so.meta_lic",
},
},
{
@@ -261,6 +335,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
+ "testdata/reciprocal/bin/bin1.meta_lic",
+ "testdata/reciprocal/lib/liba.so.meta_lic",
+ "testdata/reciprocal/lib/libc.a.meta_lic",
},
},
{
@@ -271,7 +348,10 @@
target{"lib/libd.so", "External"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/reciprocal/lib/libd.so.meta_lic",
+ },
},
{
condition: "restricted",
@@ -294,6 +374,13 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/bin/bin2.meta_lic",
+ "testdata/restricted/highest.apex.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ "testdata/restricted/lib/libd.so.meta_lic",
},
},
{
@@ -317,6 +404,13 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/bin/bin2.meta_lic",
+ "testdata/restricted/container.zip.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
+ "testdata/restricted/lib/libd.so.meta_lic",
},
},
{
@@ -332,6 +426,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/application.meta_lic",
+ "testdata/restricted/bin/bin3.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libb.so.meta_lic",
},
},
{
@@ -350,6 +448,9 @@
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/reciprocal/RECIPROCAL_LICENSE",
"testdata/restricted/RESTRICTED_LICENSE",
+ "testdata/restricted/bin/bin1.meta_lic",
+ "testdata/restricted/lib/liba.so.meta_lic",
+ "testdata/restricted/lib/libc.a.meta_lic",
},
},
{
@@ -360,7 +461,10 @@
target{"lib/libd.so", "External"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/restricted/lib/libd.so.meta_lic",
+ },
},
{
condition: "proprietary",
@@ -382,6 +486,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/bin/bin2.meta_lic",
+ "testdata/proprietary/highest.apex.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ "testdata/proprietary/lib/libd.so.meta_lic",
"testdata/restricted/RESTRICTED_LICENSE",
},
},
@@ -405,6 +516,13 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/bin/bin2.meta_lic",
+ "testdata/proprietary/container.zip.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
+ "testdata/proprietary/lib/libd.so.meta_lic",
"testdata/restricted/RESTRICTED_LICENSE",
},
},
@@ -421,6 +539,10 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/application.meta_lic",
+ "testdata/proprietary/bin/bin3.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libb.so.meta_lic",
},
},
{
@@ -437,6 +559,9 @@
expectedDeps: []string{
"testdata/firstparty/FIRST_PARTY_LICENSE",
"testdata/proprietary/PROPRIETARY_LICENSE",
+ "testdata/proprietary/bin/bin1.meta_lic",
+ "testdata/proprietary/lib/liba.so.meta_lic",
+ "testdata/proprietary/lib/libc.a.meta_lic",
},
},
{
@@ -447,7 +572,10 @@
target{"lib/libd.so", "External"},
notice{},
},
- expectedDeps: []string{"testdata/notice/NOTICE_LICENSE"},
+ expectedDeps: []string{
+ "testdata/notice/NOTICE_LICENSE",
+ "testdata/proprietary/lib/libd.so.meta_lic",
+ },
},
}
for _, tt := range tests {
diff --git a/tools/compliance/condition.go b/tools/compliance/condition.go
index cfe6f82..2aac78c 100644
--- a/tools/compliance/condition.go
+++ b/tools/compliance/condition.go
@@ -23,7 +23,7 @@
type LicenseCondition uint16
// LicenseConditionMask is a bitmask for the recognized license conditions.
-const LicenseConditionMask = LicenseCondition(0x3ff)
+const LicenseConditionMask = LicenseCondition(0x1ff)
const (
// UnencumberedCondition identifies public domain or public domain-
@@ -41,36 +41,32 @@
// RestrictedCondition identifies a license with requirement to share
// all source code linked to the module's source.
RestrictedCondition = LicenseCondition(0x0010)
- // RestrictedClasspathExceptionCondition identifies RestrictedCondition
- // waived for dynamic linking from independent modules.
- RestrictedClasspathExceptionCondition = LicenseCondition(0x0020)
// WeaklyRestrictedCondition identifies a RestrictedCondition waived
// for dynamic linking.
- WeaklyRestrictedCondition = LicenseCondition(0x0040)
+ WeaklyRestrictedCondition = LicenseCondition(0x0020)
// ProprietaryCondition identifies a license with source privacy
// requirements.
- ProprietaryCondition = LicenseCondition(0x0080)
+ ProprietaryCondition = LicenseCondition(0x0040)
// ByExceptionOnly identifies a license where policy requires product
// counsel review prior to use.
- ByExceptionOnlyCondition = LicenseCondition(0x0100)
+ ByExceptionOnlyCondition = LicenseCondition(0x0080)
// NotAllowedCondition identifies a license with onerous conditions
// where policy prohibits use.
- NotAllowedCondition = LicenseCondition(0x0200)
+ NotAllowedCondition = LicenseCondition(0x0100)
)
var (
// RecognizedConditionNames maps condition strings to LicenseCondition.
RecognizedConditionNames = map[string]LicenseCondition{
- "unencumbered": UnencumberedCondition,
- "permissive": PermissiveCondition,
- "notice": NoticeCondition,
- "reciprocal": ReciprocalCondition,
- "restricted": RestrictedCondition,
- "restricted_with_classpath_exception": RestrictedClasspathExceptionCondition,
- "restricted_allows_dynamic_linking": WeaklyRestrictedCondition,
- "proprietary": ProprietaryCondition,
- "by_exception_only": ByExceptionOnlyCondition,
- "not_allowed": NotAllowedCondition,
+ "unencumbered": UnencumberedCondition,
+ "permissive": PermissiveCondition,
+ "notice": NoticeCondition,
+ "reciprocal": ReciprocalCondition,
+ "restricted": RestrictedCondition,
+ "restricted_if_statically_linked": WeaklyRestrictedCondition,
+ "proprietary": ProprietaryCondition,
+ "by_exception_only": ByExceptionOnlyCondition,
+ "not_allowed": NotAllowedCondition,
}
)
@@ -87,10 +83,8 @@
return "reciprocal"
case RestrictedCondition:
return "restricted"
- case RestrictedClasspathExceptionCondition:
- return "restricted_with_classpath_exception"
case WeaklyRestrictedCondition:
- return "restricted_allows_dynamic_linking"
+ return "restricted_if_statically_linked"
case ProprietaryCondition:
return "proprietary"
case ByExceptionOnlyCondition:
@@ -98,5 +92,5 @@
case NotAllowedCondition:
return "not_allowed"
}
- panic(fmt.Errorf("unrecognized license condition: %04x", lc))
+ panic(fmt.Errorf("unrecognized license condition: %#v", lc))
}
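
The condition scheme above is a plain bitmask: each recognized condition is a
single bit, a set of conditions is the bitwise OR of its members, and the mask
now covers nine bits (0x1ff) after `restricted_with_classpath_exception` was
dropped and `restricted_allows_dynamic_linking` was renamed to
`restricted_if_statically_linked`. A standalone sketch of the arithmetic
(mirroring the constants above, not importing the compliance package):

```go
package main

import "fmt"

type LicenseCondition uint16

const (
	UnencumberedCondition     = LicenseCondition(0x0001)
	PermissiveCondition       = LicenseCondition(0x0002)
	NoticeCondition           = LicenseCondition(0x0004)
	ReciprocalCondition       = LicenseCondition(0x0008)
	RestrictedCondition       = LicenseCondition(0x0010)
	WeaklyRestrictedCondition = LicenseCondition(0x0020)
	ProprietaryCondition      = LicenseCondition(0x0040)
	ByExceptionOnlyCondition  = LicenseCondition(0x0080)
	NotAllowedCondition       = LicenseCondition(0x0100)
)

// LicenseConditionMask covers all nine recognized bits: 0x001|...|0x100 = 0x1ff.
const LicenseConditionMask = LicenseCondition(0x1ff)

func main() {
	set := RestrictedCondition | NoticeCondition // a two-condition set
	fmt.Println(set&RestrictedCondition != 0)    // true: membership test
	fmt.Println(set&ReciprocalCondition != 0)    // false: not in the set
	fmt.Println(set &^ RestrictedCondition)      // removal leaves notice (4)
	fmt.Println(LicenseCondition(0x0200)&LicenseConditionMask == 0) // true: unrecognized bit
}
```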
diff --git a/tools/compliance/condition_test.go b/tools/compliance/condition_test.go
index 778ce4a..16ec72c 100644
--- a/tools/compliance/condition_test.go
+++ b/tools/compliance/condition_test.go
@@ -21,22 +21,22 @@
func TestConditionSetHas(t *testing.T) {
impliesShare := ImpliesShared
- t.Logf("testing with imliesShare=%04x", impliesShare)
+ t.Logf("testing with imliesShare=%#v", impliesShare)
if impliesShare.HasAny(NoticeCondition) {
- t.Errorf("impliesShare.HasAny(\"notice\"=%04x) got true, want false", NoticeCondition)
+ t.Errorf("impliesShare.HasAny(\"notice\"=%#v) got true, want false", NoticeCondition)
}
if !impliesShare.HasAny(RestrictedCondition) {
- t.Errorf("impliesShare.HasAny(\"restricted\"=%04x) got false, want true", RestrictedCondition)
+ t.Errorf("impliesShare.HasAny(\"restricted\"=%#v) got false, want true", RestrictedCondition)
}
if !impliesShare.HasAny(ReciprocalCondition) {
- t.Errorf("impliesShare.HasAny(\"reciprocal\"=%04x) got false, want true", ReciprocalCondition)
+ t.Errorf("impliesShare.HasAny(\"reciprocal\"=%#v) got false, want true", ReciprocalCondition)
}
if impliesShare.HasAny(LicenseCondition(0x0000)) {
- t.Errorf("impliesShare.HasAny(nil=%04x) got true, want false", LicenseCondition(0x0000))
+ t.Errorf("impliesShare.HasAny(nil=%#v) got true, want false", LicenseCondition(0x0000))
}
}
@@ -44,7 +44,7 @@
for expected, condition := range RecognizedConditionNames {
actual := condition.Name()
if expected != actual {
- t.Errorf("unexpected name for condition %04x: got %s, want %s", condition, actual, expected)
+ t.Errorf("unexpected name for condition %#v: got %s, want %s", condition, actual, expected)
}
}
}
@@ -62,6 +62,6 @@
t.Errorf("invalid condition unexpected name: got %s, wanted panic", name)
}()
if !panicked {
- t.Errorf("no expected panic for %04x.Name(): got no panic, wanted panic", lc)
+ t.Errorf("no expected panic for %#v.Name(): got no panic, wanted panic", lc)
}
}
diff --git a/tools/compliance/conditionset_test.go b/tools/compliance/conditionset_test.go
index c91912f..e31360d 100644
--- a/tools/compliance/conditionset_test.go
+++ b/tools/compliance/conditionset_test.go
@@ -96,19 +96,18 @@
{
name: "everything",
conditions: []string{"unencumbered", "permissive", "notice", "reciprocal", "restricted", "proprietary"},
- plus: &[]string{"restricted_with_classpath_exception", "restricted_allows_dynamic_linking", "by_exception_only", "not_allowed"},
+ plus: &[]string{"restricted_if_statically_linked", "by_exception_only", "not_allowed"},
matchingAny: map[string][]string{
- "unencumbered": []string{"unencumbered"},
- "permissive": []string{"permissive"},
- "notice": []string{"notice"},
- "reciprocal": []string{"reciprocal"},
- "restricted": []string{"restricted"},
- "restricted_with_classpath_exception": []string{"restricted_with_classpath_exception"},
- "restricted_allows_dynamic_linking": []string{"restricted_allows_dynamic_linking"},
- "proprietary": []string{"proprietary"},
- "by_exception_only": []string{"by_exception_only"},
- "not_allowed": []string{"not_allowed"},
- "notice|proprietary": []string{"notice", "proprietary"},
+ "unencumbered": []string{"unencumbered"},
+ "permissive": []string{"permissive"},
+ "notice": []string{"notice"},
+ "reciprocal": []string{"reciprocal"},
+ "restricted": []string{"restricted"},
+ "restricted_if_statically_linked": []string{"restricted_if_statically_linked"},
+ "proprietary": []string{"proprietary"},
+ "by_exception_only": []string{"by_exception_only"},
+ "not_allowed": []string{"not_allowed"},
+ "notice|proprietary": []string{"notice", "proprietary"},
},
expected: []string{
"unencumbered",
@@ -116,8 +115,7 @@
"notice",
"reciprocal",
"restricted",
- "restricted_with_classpath_exception",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"proprietary",
"by_exception_only",
"not_allowed",
@@ -131,8 +129,7 @@
"notice",
"reciprocal",
"restricted",
- "restricted_with_classpath_exception",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"proprietary",
"by_exception_only",
"not_allowed",
@@ -151,8 +148,7 @@
"notice",
"reciprocal",
"restricted",
- "restricted_with_classpath_exception",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"proprietary",
"by_exception_only",
"not_allowed",
@@ -161,19 +157,18 @@
{
name: "allbutone",
conditions: []string{"unencumbered", "permissive", "notice", "reciprocal", "restricted", "proprietary"},
- plus: &[]string{"restricted_allows_dynamic_linking", "by_exception_only", "not_allowed"},
+ plus: &[]string{"restricted_if_statically_linked", "by_exception_only", "not_allowed"},
matchingAny: map[string][]string{
- "unencumbered": []string{"unencumbered"},
- "permissive": []string{"permissive"},
- "notice": []string{"notice"},
- "reciprocal": []string{"reciprocal"},
- "restricted": []string{"restricted"},
- "restricted_with_classpath_exception": []string{},
- "restricted_allows_dynamic_linking": []string{"restricted_allows_dynamic_linking"},
- "proprietary": []string{"proprietary"},
- "by_exception_only": []string{"by_exception_only"},
- "not_allowed": []string{"not_allowed"},
- "notice|proprietary": []string{"notice", "proprietary"},
+ "unencumbered": []string{"unencumbered"},
+ "permissive": []string{"permissive"},
+ "notice": []string{"notice"},
+ "reciprocal": []string{"reciprocal"},
+ "restricted": []string{"restricted"},
+ "restricted_if_statically_linked": []string{"restricted_if_statically_linked"},
+ "proprietary": []string{"proprietary"},
+ "by_exception_only": []string{"by_exception_only"},
+ "not_allowed": []string{"not_allowed"},
+ "notice|proprietary": []string{"notice", "proprietary"},
},
expected: []string{
"unencumbered",
@@ -181,7 +176,7 @@
"notice",
"reciprocal",
"restricted",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"proprietary",
"by_exception_only",
"not_allowed",
@@ -195,25 +190,23 @@
"notice",
"reciprocal",
"restricted",
- "restricted_with_classpath_exception",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"proprietary",
"by_exception_only",
"not_allowed",
},
- minus: &[]string{"restricted_allows_dynamic_linking"},
+ minus: &[]string{"restricted_if_statically_linked"},
matchingAny: map[string][]string{
- "unencumbered": []string{"unencumbered"},
- "permissive": []string{"permissive"},
- "notice": []string{"notice"},
- "reciprocal": []string{"reciprocal"},
- "restricted": []string{"restricted"},
- "restricted_with_classpath_exception": []string{"restricted_with_classpath_exception"},
- "restricted_allows_dynamic_linking": []string{},
- "proprietary": []string{"proprietary"},
- "by_exception_only": []string{"by_exception_only"},
- "not_allowed": []string{"not_allowed"},
- "restricted|proprietary": []string{"restricted", "proprietary"},
+ "unencumbered": []string{"unencumbered"},
+ "permissive": []string{"permissive"},
+ "notice": []string{"notice"},
+ "reciprocal": []string{"reciprocal"},
+ "restricted": []string{"restricted"},
+ "restricted_if_statically_linked": []string{},
+ "proprietary": []string{"proprietary"},
+ "by_exception_only": []string{"by_exception_only"},
+ "not_allowed": []string{"not_allowed"},
+ "restricted|proprietary": []string{"restricted", "proprietary"},
},
expected: []string{
"unencumbered",
@@ -221,7 +214,6 @@
"notice",
"reciprocal",
"restricted",
- "restricted_with_classpath_exception",
"proprietary",
"by_exception_only",
"not_allowed",
@@ -235,8 +227,7 @@
"notice",
"reciprocal",
"restricted",
- "restricted_with_classpath_exception",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"proprietary",
"by_exception_only",
"not_allowed",
@@ -247,44 +238,41 @@
"notice",
"reciprocal",
"restricted",
- "restricted_with_classpath_exception",
- "restricted_allows_dynamic_linking",
+ "restricted_if_statically_linked",
"proprietary",
"by_exception_only",
"not_allowed",
},
matchingAny: map[string][]string{
- "unencumbered": []string{},
- "permissive": []string{},
- "notice": []string{},
- "reciprocal": []string{},
- "restricted": []string{},
- "restricted_with_classpath_exception": []string{},
- "restricted_allows_dynamic_linking": []string{},
- "proprietary": []string{},
- "by_exception_only": []string{},
- "not_allowed": []string{},
- "restricted|proprietary": []string{},
+ "unencumbered": []string{},
+ "permissive": []string{},
+ "notice": []string{},
+ "reciprocal": []string{},
+ "restricted": []string{},
+ "restricted_if_statically_linked": []string{},
+ "proprietary": []string{},
+ "by_exception_only": []string{},
+ "not_allowed": []string{},
+ "restricted|proprietary": []string{},
},
expected: []string{},
},
{
name: "restrictedplus",
- conditions: []string{"restricted", "restricted_with_classpath_exception", "restricted_allows_dynamic_linking"},
+ conditions: []string{"restricted", "restricted_if_statically_linked"},
plus: &[]string{"permissive", "notice", "restricted", "proprietary"},
matchingAny: map[string][]string{
- "unencumbered": []string{},
- "permissive": []string{"permissive"},
- "notice": []string{"notice"},
- "restricted": []string{"restricted"},
- "restricted_with_classpath_exception": []string{"restricted_with_classpath_exception"},
- "restricted_allows_dynamic_linking": []string{"restricted_allows_dynamic_linking"},
- "proprietary": []string{"proprietary"},
- "restricted|proprietary": []string{"restricted", "proprietary"},
- "by_exception_only": []string{},
- "proprietary|by_exception_only": []string{"proprietary"},
+ "unencumbered": []string{},
+ "permissive": []string{"permissive"},
+ "notice": []string{"notice"},
+ "restricted": []string{"restricted"},
+ "restricted_if_statically_linked": []string{"restricted_if_statically_linked"},
+ "proprietary": []string{"proprietary"},
+ "restricted|proprietary": []string{"restricted", "proprietary"},
+ "by_exception_only": []string{},
+ "proprietary|by_exception_only": []string{"proprietary"},
},
- expected: []string{"permissive", "notice", "restricted", "restricted_with_classpath_exception", "restricted_allows_dynamic_linking", "proprietary"},
+ expected: []string{"permissive", "notice", "restricted", "restricted_if_statically_linked", "proprietary"},
},
}
for _, tt := range tests {
@@ -342,11 +330,11 @@
actual := cs.MatchingAny(toConditions(strings.Split(data, "|"))...)
actualNames := actual.Names()
- t.Logf("MatchingAny(%s): actual set %04x %s", data, actual, actual.String())
- t.Logf("MatchingAny(%s): expected set %04x %s", data, expected, expected.String())
+ t.Logf("MatchingAny(%s): actual set %#v %s", data, actual, actual.String())
+ t.Logf("MatchingAny(%s): expected set %#v %s", data, expected, expected.String())
if actual != expected {
- t.Errorf("MatchingAny(%s): got %04x, want %04x", data, actual, expected)
+ t.Errorf("MatchingAny(%s): got %#v, want %#v", data, actual, expected)
continue
}
if len(actualNames) != len(expectedNames) {
@@ -382,11 +370,11 @@
actual := cs.MatchingAnySet(NewLicenseConditionSet(toConditions(strings.Split(data, "|"))...))
actualNames := actual.Names()
- t.Logf("MatchingAnySet(%s): actual set %04x %s", data, actual, actual.String())
- t.Logf("MatchingAnySet(%s): expected set %04x %s", data, expected, expected.String())
+ t.Logf("MatchingAnySet(%s): actual set %#v %s", data, actual, actual.String())
+ t.Logf("MatchingAnySet(%s): expected set %#v %s", data, expected, expected.String())
if actual != expected {
- t.Errorf("MatchingAnySet(%s): got %04x, want %04x", data, actual, expected)
+ t.Errorf("MatchingAnySet(%s): got %#v, want %#v", data, actual, expected)
continue
}
if len(actualNames) != len(expectedNames) {
@@ -426,11 +414,11 @@
actualNames := actual.Names()
- t.Logf("actual license condition set: %04x %s", actual, actual.String())
- t.Logf("expected license condition set: %04x %s", expected, expected.String())
+ t.Logf("actual license condition set: %#v %s", actual, actual.String())
+ t.Logf("expected license condition set: %#v %s", expected, expected.String())
if actual != expected {
- t.Errorf("checkExpected: got %04x, want %04x", actual, expected)
+ t.Errorf("checkExpected: got %#v, want %#v", actual, expected)
return false
}
@@ -487,7 +475,7 @@
notExpected := (AllLicenseConditions &^ expected)
notExpectedList := notExpected.AsList()
- t.Logf("not expected license condition set: %04x %s", notExpected, notExpected.String())
+ t.Logf("not expected license condition set: %#v %s", notExpected, notExpected.String())
if len(tt.expected) == 0 {
if actual.HasAny(append(expectedConditions, notExpectedList...)...) {
@@ -526,11 +514,11 @@
actualNames := actual.Names()
- t.Logf("actual license condition set: %04x %s", actual, actual.String())
- t.Logf("expected license condition set: %04x %s", expected, expected.String())
+ t.Logf("actual license condition set: %#v %s", actual, actual.String())
+ t.Logf("expected license condition set: %#v %s", expected, expected.String())
if actual != expected {
- t.Errorf("checkExpectedSet: got %04x, want %04x", actual, expected)
+ t.Errorf("checkExpectedSet: got %#v, want %#v", actual, expected)
return false
}
@@ -581,7 +569,7 @@
}
notExpected := (AllLicenseConditions &^ expected)
- t.Logf("not expected license condition set: %04x %s", notExpected, notExpected.String())
+ t.Logf("not expected license condition set: %#v %s", notExpected, notExpected.String())
if len(tt.expected) == 0 {
if actual.MatchesAnySet(expected, notExpected) {
@@ -606,10 +594,10 @@
t.Errorf("actual.Difference({expected}).IsEmpty(): want true, got false")
}
if expected != actual.Intersection(expected) {
- t.Errorf("expected == actual.Intersection({expected}): want true, got false (%04x != %04x)", expected, actual.Intersection(expected))
+ t.Errorf("expected == actual.Intersection({expected}): want true, got false (%#v != %#v)", expected, actual.Intersection(expected))
}
if actual != actual.Intersection(expected) {
- t.Errorf("actual == actual.Intersection({expected}): want true, got false (%04x != %04x)", actual, actual.Intersection(expected))
+ t.Errorf("actual == actual.Intersection({expected}): want true, got false (%#v != %#v)", actual, actual.Intersection(expected))
}
return true
}
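
The logging changes above swap the fixed-width hex verb `%04x` for `%#v`, which prints a Go-syntax representation and stays readable if the underlying set type ever changes width. A minimal sketch of the difference, using a hypothetical `condSet` bitset standing in for the real LicenseConditionSet:

```go
package main

import "fmt"

// condSet is a hypothetical bitset standing in for LicenseConditionSet.
type condSet uint

func main() {
	cs := condSet(0x1c)
	fmt.Printf("%04x\n", cs) // fixed-width hex, e.g. "001c"
	fmt.Printf("%#v\n", cs)  // Go-syntax representation, e.g. "0x1c"
}
```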
diff --git a/tools/compliance/doc.go b/tools/compliance/doc.go
index a47c1cf..5ced9ee 100644
--- a/tools/compliance/doc.go
+++ b/tools/compliance/doc.go
@@ -11,6 +11,10 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
+
+// Much of this content also appears in README.md.
+// When changing this file, consider whether the change also applies to README.md.
+
/*
Package compliance provides an approved means for reading, consuming, and
@@ -31,6 +35,13 @@
artifacts in a release or distribution. While conceptually immutable, parts of
the graph may be loaded or evaluated lazily.
+Conceptually, the graph itself will always be a directed acyclic graph. One
+representation is a set of directed edges. Another is a set of nodes with
+directed edges to their dependencies.
+
+The edges have annotations, which can distinguish between build tools, runtime
+dependencies, and dependencies like 'contains' that make a derivative work.
+
LicenseCondition
----------------
@@ -51,17 +62,13 @@
`ActsOn` is the target to share, give notice for, hide etc.
-`Resolves` is the license condition that the action resolves.
+`Resolves` is the set of condition types that the action resolves.
-Remember: Each license condition pairs a condition name with an originating
-target so each resolution in a ResolutionSet has two targets it applies to and
-one target from which it originates, all of which may be the same target.
-
-For most condition types, `ActsOn` and `Resolves.Origin` will be the same
-target. For example, a notice condition policy means attribution or notice must
-be given for the target where the condition originates. Likewise, a proprietary
-condition policy means the privacy of the target where the condition originates
-must be respected. i.e. The thing acted on is the origin.
+For most condition types, `ActsOn` will be the target where the condition
+originated. For example, a notice condition policy means attribution or notice
+must be given for the target where the condition originates. Likewise, a
+proprietary condition policy means the privacy of the target where the
+condition originates must be respected. i.e. The thing acted on is the origin.
Restricted conditions are different. The infectious nature of restricted often
means sharing code that is not the target where the restricted condition
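
The new doc.go prose describes the graph as a directed acyclic graph whose annotated edges distinguish build tools, runtime links, and derivation. A minimal sketch of that shape, using hypothetical `node` and `edge` types rather than the package's real TargetNode/TargetEdge; the derivation test mirrors `edgeIsDerivation` as it appears in policy_policy.go below:

```go
package main

import "fmt"

// node and edge are hypothetical stand-ins for TargetNode and TargetEdge.
type node struct {
	name  string
	edges []edge
}

type edge struct {
	dep         *node
	annotations map[string]bool // e.g. "static", "dynamic", "toolchain"
}

// isDerivation mirrors edgeIsDerivation: anything that is neither a
// dynamic link nor a toolchain dependency makes the target a
// derivative work of the dependency.
func isDerivation(e edge) bool {
	return !e.annotations["dynamic"] && !e.annotations["toolchain"]
}

func main() {
	lib := &node{name: "lib.meta_lic"}
	bin := &node{name: "bin.meta_lic", edges: []edge{
		{dep: lib, annotations: map[string]bool{"static": true}},
	}}
	for _, e := range bin.edges {
		fmt.Printf("%s -[derivation=%v]> %s\n", bin.name, isDerivation(e), e.dep.name)
	}
}
```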
diff --git a/tools/compliance/go.mod b/tools/compliance/go.mod
index 61e2158..088915a 100644
--- a/tools/compliance/go.mod
+++ b/tools/compliance/go.mod
@@ -4,9 +4,17 @@
replace google.golang.org/protobuf v0.0.0 => ../../../../external/golang-protobuf
-require android/soong v0.0.0
+require (
+ android/soong v0.0.0
+ github.com/google/blueprint v0.0.0
+)
-replace android/soong v0.0.0 => ../../../soong
+require golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f // indirect
+
+replace android/soong v0.0.0 => ../../../soong
+
+replace github.com/google/blueprint => ../../../blueprint
+
// Indirect deps from golang-protobuf
exclude github.com/golang/protobuf v1.5.0
diff --git a/tools/compliance/go.sum b/tools/compliance/go.sum
new file mode 100644
index 0000000..cbe76d9
--- /dev/null
+++ b/tools/compliance/go.sum
@@ -0,0 +1,2 @@
+golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f h1:uF6paiQQebLeSXkrTqHqz0MXhXXS1KgF41eUdBNvxK0=
+golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
diff --git a/tools/compliance/graph.go b/tools/compliance/graph.go
index e73ab46..9ad319b 100644
--- a/tools/compliance/graph.go
+++ b/tools/compliance/graph.go
@@ -58,13 +58,11 @@
/// (guarded by mu)
targets map[string]*TargetNode
- // wgBU becomes non-nil when the bottom-up resolve begins and reaches 0
- // (i.e. Wait() proceeds) when the bottom-up resolve completes. (guarded by mu)
- wgBU *sync.WaitGroup
+ // onceBottomUp makes sure the bottom-up resolve walk only happens one time.
+ onceBottomUp sync.Once
- // wgTD becomes non-nil when the top-down resolve begins and reaches 0 (i.e. Wait()
- // proceeds) when the top-down resolve completes. (guarded by mu)
- wgTD *sync.WaitGroup
+ // onceTopDown makes sure the top-down resolve walk only happens one time.
+ onceTopDown sync.Once
// shippedNodes caches the results of a full walk of nodes identifying targets
// distributed either directly or as derivative works. (creation guarded by mu)
@@ -90,6 +88,15 @@
return targets
}
+// TargetNames returns the list of target node names in the graph. (unordered)
+func (lg *LicenseGraph) TargetNames() []string {
+ targets := make([]string, 0, len(lg.targets))
+ for target := range lg.targets {
+ targets = append(targets, target)
+ }
+ return targets
+}
+
// compliance-only LicenseGraph methods
// newLicenseGraph constructs a new, empty instance of LicenseGraph.
@@ -139,6 +146,24 @@
return e.annotations
}
+// IsRuntimeDependency returns true for edges representing shared libraries
+// linked dynamically at runtime.
+func (e *TargetEdge) IsRuntimeDependency() bool {
+ return edgeIsDynamicLink(e)
+}
+
+// IsDerivation returns true for edges where the target is a derivative
+// work of the dependency.
+func (e *TargetEdge) IsDerivation() bool {
+ return edgeIsDerivation(e)
+}
+
+// IsBuildTool returns true for edges where the target is built
+// by the dependency.
+func (e *TargetEdge) IsBuildTool() bool {
+ return !edgeIsDerivation(e) && !edgeIsDynamicLink(e)
+}
+
// String returns a human-readable string representation of the edge.
func (e *TargetEdge) String() string {
return fmt.Sprintf("%s -[%s]> %s", e.target.name, strings.Join(e.annotations.AsList(), ", "), e.dependency.name)
@@ -188,6 +213,11 @@
return s.edge.dependency
}
+// Edge describes the target edge.
+func (s TargetEdgePathSegment) Edge() *TargetEdge {
+ return s.edge
+}
+
// Annotations describes the type of edge by the set of annotations attached to
// it.
//
@@ -300,21 +330,9 @@
return tn.proto.GetPackageName()
}
-// ModuleTypes returns the list of module types implementing the target.
-// (unordered)
-//
-// In an ideal world, only 1 module type would implement each target, but the
-// interactions between Soong and Make for host versus product and for a
-// variety of architectures sometimes causes multiple module types per target
-// (often a regular build target and a prebuilt.)
-func (tn *TargetNode) ModuleTypes() []string {
- return append([]string{}, tn.proto.ModuleTypes...)
-}
-
-// ModuleClasses returns the list of module classes implementing the target.
-// (unordered)
-func (tn *TargetNode) ModuleClasses() []string {
- return append([]string{}, tn.proto.ModuleClasses...)
+// ModuleName returns the module name of the target.
+func (tn *TargetNode) ModuleName() string {
+ return tn.proto.GetModuleName()
}
// Projects returns the projects defining the target node. (unordered)
@@ -326,14 +344,6 @@
return append([]string{}, tn.proto.Projects...)
}
-// LicenseKinds returns the list of license kind names for the module or
-// target. (unordered)
-//
-// e.g. SPDX-license-identifier-MIT or legacy_proprietary
-func (tn *TargetNode) LicenseKinds() []string {
- return append([]string{}, tn.proto.LicenseKinds...)
-}
-
// LicenseConditions returns a copy of the set of license conditions
// originating at the target. The values that appear and how each is resolved
// is a matter of policy. (unordered)
@@ -458,36 +468,25 @@
}
// TargetNodeSet describes a set of distinct nodes in a license graph.
-type TargetNodeSet struct {
- nodes map[*TargetNode]struct{}
-}
+type TargetNodeSet map[*TargetNode]struct{}
// Contains returns true when `target` is an element of the set.
-func (ts *TargetNodeSet) Contains(target *TargetNode) bool {
- _, isPresent := ts.nodes[target]
+func (ts TargetNodeSet) Contains(target *TargetNode) bool {
+ _, isPresent := ts[target]
return isPresent
}
-// AsList returns the list of target nodes in the set. (unordered)
-func (ts *TargetNodeSet) AsList() TargetNodeList {
- result := make(TargetNodeList, 0, len(ts.nodes))
- for tn := range ts.nodes {
- result = append(result, tn)
- }
- return result
-}
-
// Names returns the array of target node names in the set. (unordered)
-func (ts *TargetNodeSet) Names() []string {
- result := make([]string, 0, len(ts.nodes))
- for tn := range ts.nodes {
+func (ts TargetNodeSet) Names() []string {
+ result := make([]string, 0, len(ts))
+ for tn := range ts {
result = append(result, tn.name)
}
return result
}
// String returns a human-readable string representation of the set.
-func (ts *TargetNodeSet) String() string {
+func (ts TargetNodeSet) String() string {
return fmt.Sprintf("{%s}", strings.Join(ts.Names(), ", "))
}
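
With the struct wrapper gone, TargetNodeSet is the map type itself: methods attach directly to the map, and callers can range over it instead of calling the removed AsList. A small usage sketch under that definition, with hypothetical lower-case names:

```go
package main

import "fmt"

type targetNode struct{ name string }

// targetNodeSet mirrors the new map-based TargetNodeSet.
type targetNodeSet map[*targetNode]struct{}

// Contains reports set membership, as in the patched graph.go.
func (ts targetNodeSet) Contains(tn *targetNode) bool {
	_, ok := ts[tn]
	return ok
}

func main() {
	a := &targetNode{name: "a.meta_lic"}
	b := &targetNode{name: "b.meta_lic"}
	shipped := targetNodeSet{a: {}}
	fmt.Println(shipped.Contains(a), shipped.Contains(b)) // true false
	// Callers range over the map directly instead of calling AsList.
	for tn := range shipped {
		fmt.Println(tn.name)
	}
}
```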
diff --git a/tools/compliance/noticeindex.go b/tools/compliance/noticeindex.go
index f082383..c91a8df 100644
--- a/tools/compliance/noticeindex.go
+++ b/tools/compliance/noticeindex.go
@@ -15,7 +15,6 @@
package compliance
import (
- "bufio"
"crypto/md5"
"fmt"
"io"
@@ -25,16 +24,11 @@
"regexp"
"sort"
"strings"
-)
-const (
- noProjectName = "\u2205"
+ "android/soong/tools/compliance/projectmetadata"
)
var (
- nameRegexp = regexp.MustCompile(`^\s*name\s*:\s*"(.*)"\s*$`)
- descRegexp = regexp.MustCompile(`^\s*description\s*:\s*"(.*)"\s*$`)
- versionRegexp = regexp.MustCompile(`^\s*version\s*:\s*"(.*)"\s*$`)
licensesPathRegexp = regexp.MustCompile(`licen[cs]es?/`)
)
@@ -43,10 +37,12 @@
type NoticeIndex struct {
// lg identifies the license graph to which the index applies.
lg *LicenseGraph
+ // pmix indexes project metadata
+ pmix *projectmetadata.Index
// rs identifies the set of resolutions upon which the index is based.
rs ResolutionSet
// shipped identifies the set of target nodes shipped directly or as derivative works.
- shipped *TargetNodeSet
+ shipped TargetNodeSet
// rootFS locates the root of the file system from which to read the files.
rootFS fs.FS
// hash maps license text filenames to content hashes
@@ -75,6 +71,7 @@
}
ni := &NoticeIndex{
lg: lg,
+ pmix: projectmetadata.NewIndex(rootFS),
rs: rs,
shipped: ShippedNodes(lg),
rootFS: rootFS,
@@ -110,9 +107,12 @@
return hashes, nil
}
- link := func(tn *TargetNode, hashes map[hash]struct{}, installPaths []string) {
+ link := func(tn *TargetNode, hashes map[hash]struct{}, installPaths []string) error {
for h := range hashes {
- libName := ni.getLibName(tn, h)
+ libName, err := ni.getLibName(tn, h)
+ if err != nil {
+ return err
+ }
if _, ok := ni.libHash[libName]; !ok {
ni.libHash[libName] = make(map[hash]struct{})
}
@@ -145,6 +145,11 @@
}
}
}
+ return nil
+ }
+
+ cacheMetadata := func(tn *TargetNode) {
+ ni.pmix.MetadataForProjects(tn.Projects()...)
}
// returns error from walk below.
@@ -157,13 +162,17 @@
if !ni.shipped.Contains(tn) {
return false
}
+ go cacheMetadata(tn)
installPaths := getInstallPaths(tn, path)
var hashes map[hash]struct{}
hashes, err = index(tn)
if err != nil {
return false
}
- link(tn, hashes, installPaths)
+ err = link(tn, hashes, installPaths)
+ if err != nil {
+ return false
+ }
if tn.IsContainer() {
return true
}
@@ -173,7 +182,10 @@
if err != nil {
return false
}
- link(r.actsOn, hashes, installPaths)
+ err = link(r.actsOn, hashes, installPaths)
+ if err != nil {
+ return false
+ }
}
return false
})
@@ -214,12 +226,18 @@
close(c)
}()
return c
+
}
-// InputNoticeFiles returns the list of files that were hashed during IndexLicenseTexts.
-func (ni *NoticeIndex) InputNoticeFiles() []string {
- files := append([]string(nil), ni.files...)
- sort.Strings(files)
+// InputFiles returns the complete list of files read during indexing.
+func (ni *NoticeIndex) InputFiles() []string {
+ projectMeta := ni.pmix.AllMetadataFiles()
+ files := make([]string, 0, len(ni.files) + len(ni.lg.targets) + len(projectMeta))
+ files = append(files, ni.files...)
+ for f := range ni.lg.targets {
+ files = append(files, f)
+ }
+ files = append(files, projectMeta...)
return files
}
@@ -308,15 +326,18 @@
}
// getLibName returns the name of the library associated with `noticeFor`.
-func (ni *NoticeIndex) getLibName(noticeFor *TargetNode, h hash) string {
+func (ni *NoticeIndex) getLibName(noticeFor *TargetNode, h hash) (string, error) {
for _, text := range noticeFor.LicenseTexts() {
if !strings.Contains(text, ":") {
if ni.hash[text].key != h.key {
continue
}
- ln := ni.checkMetadataForLicenseText(noticeFor, text)
+ ln, err := ni.checkMetadataForLicenseText(noticeFor, text)
+ if err != nil {
+ return "", err
+ }
if len(ln) > 0 {
- return ln
+ return ln, nil
}
continue
}
@@ -331,17 +352,20 @@
if err != nil {
continue
}
- return ln
+ return ln, nil
}
// use name from METADATA if available
- ln := ni.checkMetadata(noticeFor)
+ ln, err := ni.checkMetadata(noticeFor)
+ if err != nil {
+ return "", err
+ }
if len(ln) > 0 {
- return ln
+ return ln, nil
}
// use package_name: from license{} module if available
pn := noticeFor.PackageName()
if len(pn) > 0 {
- return pn
+ return pn, nil
}
for _, p := range noticeFor.Projects() {
if strings.HasPrefix(p, "prebuilts/") {
@@ -360,18 +384,17 @@
continue
}
}
- for r, prefix := range SafePrebuiltPrefixes {
- match := r.FindString(licenseText)
+ for _, safePrebuiltPrefix := range safePrebuiltPrefixes {
+ match := safePrebuiltPrefix.re.FindString(licenseText)
if len(match) == 0 {
continue
}
- strip := SafePathPrefixes[prefix]
- if strip {
+ if safePrebuiltPrefix.strip {
// strip entire prefix
match = licenseText[len(match):]
} else {
// strip from prebuilts/ until safe prefix
- match = licenseText[len(match)-len(prefix):]
+ match = licenseText[len(match)-len(safePrebuiltPrefix.prefix):]
}
// remove LICENSE or NOTICE or other filename
li := strings.LastIndex(match, "/")
@@ -386,17 +409,17 @@
match = match[:li]
}
}
- return match
+ return match, nil
}
break
}
}
- for prefix, strip := range SafePathPrefixes {
- if strings.HasPrefix(p, prefix) {
- if strip {
- return p[len(prefix):]
+ for _, safePathPrefix := range safePathPrefixes {
+ if strings.HasPrefix(p, safePathPrefix.prefix) {
+ if safePathPrefix.strip {
+ return p[len(safePathPrefix.prefix):], nil
} else {
- return p
+ return p, nil
}
}
}
@@ -411,35 +434,26 @@
if fi > 0 {
n = n[:fi]
}
- return n
+ return n, nil
}
// checkMetadata tries to look up a library name from a METADATA file associated with `noticeFor`.
-func (ni *NoticeIndex) checkMetadata(noticeFor *TargetNode) string {
- for _, p := range noticeFor.Projects() {
- if name, ok := ni.projectName[p]; ok {
- if name == noProjectName {
- continue
- }
- return name
- }
- name, err := ni.checkMetadataFile(filepath.Join(p, "METADATA"))
- if err != nil {
- ni.projectName[p] = noProjectName
- continue
- }
- if len(name) == 0 {
- ni.projectName[p] = noProjectName
- continue
- }
- ni.projectName[p] = name
- return name
+func (ni *NoticeIndex) checkMetadata(noticeFor *TargetNode) (string, error) {
+ pms, err := ni.pmix.MetadataForProjects(noticeFor.Projects()...)
+ if err != nil {
+ return "", err
}
- return ""
+ for _, pm := range pms {
+ name := pm.VersionedName()
+ if name != "" {
+ return name, nil
+ }
+ }
+ return "", nil
}
// checkMetadataForLicenseText
-func (ni *NoticeIndex) checkMetadataForLicenseText(noticeFor *TargetNode, licenseText string) string {
+func (ni *NoticeIndex) checkMetadataForLicenseText(noticeFor *TargetNode, licenseText string) (string, error) {
p := ""
for _, proj := range noticeFor.Projects() {
if strings.HasPrefix(licenseText, proj) {
@@ -457,79 +471,17 @@
p = filepath.Dir(p)
continue
}
- return ""
+ return "", nil
}
}
- if name, ok := ni.projectName[p]; ok {
- if name == noProjectName {
- return ""
- }
- return name
- }
- name, err := ni.checkMetadataFile(filepath.Join(p, "METADATA"))
- if err == nil && len(name) > 0 {
- ni.projectName[p] = name
- return name
- }
- ni.projectName[p] = noProjectName
- return ""
-}
-
-// checkMetadataFile tries to look up a library name from a METADATA file at `path`.
-func (ni *NoticeIndex) checkMetadataFile(path string) (string, error) {
- f, err := ni.rootFS.Open(path)
+ pms, err := ni.pmix.MetadataForProjects(p)
if err != nil {
return "", err
}
- name := ""
- description := ""
- version := ""
- s := bufio.NewScanner(f)
- for s.Scan() {
- line := s.Text()
- m := nameRegexp.FindStringSubmatch(line)
- if m != nil {
- if 1 < len(m) && m[1] != "" {
- name = m[1]
- }
- if version != "" {
- break
- }
- continue
- }
- m = versionRegexp.FindStringSubmatch(line)
- if m != nil {
- if 1 < len(m) && m[1] != "" {
- version = m[1]
- }
- if name != "" {
- break
- }
- continue
- }
- m = descRegexp.FindStringSubmatch(line)
- if m != nil {
- if 1 < len(m) && m[1] != "" {
- description = m[1]
- }
- }
+ if pms == nil {
+ return "", nil
}
- _ = s.Err()
- _ = f.Close()
- if name != "" {
- if version != "" {
- if version[0] == 'v' || version[0] == 'V' {
- return name + "_" + version, nil
- } else {
- return name + "_v_" + version, nil
- }
- }
- return name, nil
- }
- if description != "" {
- return description, nil
- }
- return "", nil
+ return pms[0].VersionedName(), nil
}
// addText reads and indexes the content of a license text file.
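
The noticeindex.go changes convert getLibName, checkMetadata, and checkMetadataForLicenseText from returning a bare string to returning (string, error), so project-metadata read failures surface to the indexing walk instead of being swallowed as empty names. A minimal sketch of the pattern, with a hypothetical lookup function:

```go
package main

import (
	"errors"
	"fmt"
)

// lookupName is a hypothetical stand-in for getLibName after the
// refactor: metadata failures now propagate as errors instead of
// silently collapsing to an empty string.
func lookupName(project string) (string, error) {
	if project == "" {
		return "", errors.New("no project metadata")
	}
	return "lib" + project, nil
}

func main() {
	name, err := lookupName("foo")
	if err != nil {
		fmt.Println("indexing aborted:", err)
		return
	}
	fmt.Println(name) // libfoo
}
```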
diff --git a/tools/compliance/policy_policy.go b/tools/compliance/policy_policy.go
index 60bdf48..368a162 100644
--- a/tools/compliance/policy_policy.go
+++ b/tools/compliance/policy_policy.go
@@ -29,30 +29,31 @@
"toolchain": "toolchain",
}
- // SafePathPrefixes maps the path prefixes presumed not to contain any
+ // safePathPrefixes maps the path prefixes presumed not to contain any
// proprietary or confidential pathnames to whether to strip the prefix
// from the path when used as the library name for notices.
- SafePathPrefixes = map[string]bool{
- "external/": true,
- "art/": false,
- "build/": false,
- "cts/": false,
- "dalvik/": false,
- "developers/": false,
- "development/": false,
- "frameworks/": false,
- "packages/": true,
- "prebuilts/": false,
- "sdk/": false,
- "system/": false,
- "test/": false,
- "toolchain/": false,
- "tools/": false,
+ safePathPrefixes = []safePathPrefixesType{
+ {"external/", true},
+ {"art/", false},
+ {"build/", false},
+ {"cts/", false},
+ {"dalvik/", false},
+ {"developers/", false},
+ {"development/", false},
+ {"frameworks/", false},
+ {"packages/", true},
+ {"prebuilts/module_sdk/", true},
+ {"prebuilts/", false},
+ {"sdk/", false},
+ {"system/", false},
+ {"test/", false},
+ {"toolchain/", false},
+ {"tools/", false},
}
- // SafePrebuiltPrefixes maps the regular expression to match a prebuilt
+ // safePrebuiltPrefixes maps the regular expression to match a prebuilt
// containing the path of a safe prefix to the safe prefix.
- SafePrebuiltPrefixes = make(map[*regexp.Regexp]string)
+ safePrebuiltPrefixes []safePrebuiltPrefixesType
// ImpliesUnencumbered lists the condition names representing an author attempt to disclaim copyright.
ImpliesUnencumbered = LicenseConditionSet(UnencumberedCondition)
@@ -62,14 +63,13 @@
// ImpliesNotice lists the condition names implying a notice or attribution policy.
ImpliesNotice = LicenseConditionSet(UnencumberedCondition | PermissiveCondition | NoticeCondition | ReciprocalCondition |
- RestrictedCondition | RestrictedClasspathExceptionCondition | WeaklyRestrictedCondition |
- ProprietaryCondition | ByExceptionOnlyCondition)
+ RestrictedCondition | WeaklyRestrictedCondition | ProprietaryCondition | ByExceptionOnlyCondition)
// ImpliesReciprocal lists the condition names implying a local source-sharing policy.
ImpliesReciprocal = LicenseConditionSet(ReciprocalCondition)
// Restricted lists the condition names implying an infectious source-sharing policy.
- ImpliesRestricted = LicenseConditionSet(RestrictedCondition | RestrictedClasspathExceptionCondition | WeaklyRestrictedCondition)
+ ImpliesRestricted = LicenseConditionSet(RestrictedCondition | WeaklyRestrictedCondition)
// ImpliesProprietary lists the condition names implying a confidentiality policy.
ImpliesProprietary = LicenseConditionSet(ProprietaryCondition)
@@ -81,9 +81,19 @@
ImpliesPrivate = LicenseConditionSet(ProprietaryCondition)
// ImpliesShared lists the condition names implying a source-code sharing policy.
- ImpliesShared = LicenseConditionSet(ReciprocalCondition | RestrictedCondition | RestrictedClasspathExceptionCondition | WeaklyRestrictedCondition)
+ ImpliesShared = LicenseConditionSet(ReciprocalCondition | RestrictedCondition | WeaklyRestrictedCondition)
)
+type safePathPrefixesType struct {
+ prefix string
+ strip bool
+}
+
+type safePrebuiltPrefixesType struct {
+ safePathPrefixesType
+ re *regexp.Regexp
+}
+
var (
anyLgpl = regexp.MustCompile(`^SPDX-license-identifier-LGPL.*`)
versionedGpl = regexp.MustCompile(`^SPDX-license-identifier-GPL-\p{N}.*`)
@@ -92,50 +102,21 @@
)
func init() {
- for prefix := range SafePathPrefixes {
- if prefix == "prebuilts/" {
+ for _, safePathPrefix := range safePathPrefixes {
+ if strings.HasPrefix(safePathPrefix.prefix, "prebuilts/") {
continue
}
- r := regexp.MustCompile("^prebuilts/[^ ]*/" + prefix)
- SafePrebuiltPrefixes[r] = prefix
+ r := regexp.MustCompile("^prebuilts/(?:runtime/mainline/)?" + safePathPrefix.prefix)
+ safePrebuiltPrefixes = append(safePrebuiltPrefixes,
+ safePrebuiltPrefixesType{safePathPrefix, r})
}
}
// LicenseConditionSetFromNames returns a set containing the recognized `names` and
// silently ignoring or discarding the unrecognized `names`.
-func LicenseConditionSetFromNames(tn *TargetNode, names ...string) LicenseConditionSet {
+func LicenseConditionSetFromNames(names ...string) LicenseConditionSet {
cs := NewLicenseConditionSet()
for _, name := range names {
- if name == "restricted" {
- if 0 == len(tn.LicenseKinds()) {
- cs = cs.Plus(RestrictedCondition)
- continue
- }
- hasLgpl := false
- hasClasspath := false
- hasGeneric := false
- for _, kind := range tn.LicenseKinds() {
- if strings.HasSuffix(kind, "-with-classpath-exception") {
- cs = cs.Plus(RestrictedClasspathExceptionCondition)
- hasClasspath = true
- } else if anyLgpl.MatchString(kind) {
- cs = cs.Plus(WeaklyRestrictedCondition)
- hasLgpl = true
- } else if versionedGpl.MatchString(kind) {
- cs = cs.Plus(RestrictedCondition)
- } else if genericGpl.MatchString(kind) {
- hasGeneric = true
- } else if kind == "legacy_restricted" || ccBySa.MatchString(kind) {
- cs = cs.Plus(RestrictedCondition)
- } else {
- cs = cs.Plus(RestrictedCondition)
- }
- }
- if hasGeneric && !hasLgpl && !hasClasspath {
- cs = cs.Plus(RestrictedCondition)
- }
- continue
- }
if lc, ok := RecognizedConditionNames[name]; ok {
cs |= LicenseConditionSet(lc)
}
@@ -202,9 +183,6 @@
}
result |= depConditions & LicenseConditionSet(RestrictedCondition)
- if 0 != (depConditions&LicenseConditionSet(RestrictedClasspathExceptionCondition)) && !edgeNodesAreIndependentModules(e) {
- result |= LicenseConditionSet(RestrictedClasspathExceptionCondition)
- }
return result
}
@@ -241,9 +219,6 @@
return result
}
result = result.Minus(WeaklyRestrictedCondition)
- if edgeNodesAreIndependentModules(e) {
- result = result.Minus(RestrictedClasspathExceptionCondition)
- }
return result
}
@@ -261,10 +236,7 @@
return NewLicenseConditionSet()
}
- result &= LicenseConditionSet(RestrictedCondition | RestrictedClasspathExceptionCondition)
- if 0 != (result&LicenseConditionSet(RestrictedClasspathExceptionCondition)) && edgeNodesAreIndependentModules(e) {
- result &= LicenseConditionSet(RestrictedCondition)
- }
+ result &= LicenseConditionSet(RestrictedCondition)
return result
}
@@ -281,9 +253,3 @@
isToolchain := e.annotations.HasAnnotation("toolchain")
return !isDynamic && !isToolchain
}
-
-// edgeNodesAreIndependentModules returns true for edges where the target and
-// dependency are independent modules.
-func edgeNodesAreIndependentModules(e *TargetEdge) bool {
- return e.target.PackageName() != e.dependency.PackageName()
-}
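
Replacing the exported SafePathPrefixes map with the ordered safePathPrefixes slice makes iteration deterministic, which lets the more specific "prebuilts/module_sdk/" entry take effect before the general "prebuilts/" one. A sketch of why the ordering matters, using a hypothetical trimmed-down rule list:

```go
package main

import (
	"fmt"
	"strings"
)

type prefixRule struct {
	prefix string
	strip  bool
}

// Order matters: the more specific rule must be tried first, which a
// map's iteration order cannot guarantee but a slice can.
var rules = []prefixRule{
	{"prebuilts/module_sdk/", true},
	{"prebuilts/", false},
}

func libName(path string) string {
	for _, r := range rules {
		if strings.HasPrefix(path, r.prefix) {
			if r.strip {
				return path[len(r.prefix):] // strip the safe prefix
			}
			return path // keep the full path
		}
	}
	return path
}

func main() {
	fmt.Println(libName("prebuilts/module_sdk/art/LICENSE")) // art/LICENSE
	fmt.Println(libName("prebuilts/misc/LICENSE"))           // prebuilts/misc/LICENSE
}
```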
diff --git a/tools/compliance/policy_policy_test.go b/tools/compliance/policy_policy_test.go
index 27ce16c..f003314 100644
--- a/tools/compliance/policy_policy_test.go
+++ b/tools/compliance/policy_policy_test.go
@@ -20,6 +20,8 @@
"sort"
"strings"
"testing"
+
+ "android/soong/tools/compliance/testfs"
)
func TestPolicy_edgeConditions(t *testing.T) {
@@ -47,8 +49,8 @@
name: "fponlgpl",
edge: annotated{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"static"}},
expectedDepActions: []string{
- "apacheBin.meta_lic:lgplLib.meta_lic:restricted_allows_dynamic_linking",
- "lgplLib.meta_lic:lgplLib.meta_lic:restricted_allows_dynamic_linking",
+ "apacheBin.meta_lic:lgplLib.meta_lic:restricted_if_statically_linked",
+ "lgplLib.meta_lic:lgplLib.meta_lic:restricted_if_statically_linked",
},
expectedTargetConditions: []string{},
},
@@ -83,21 +85,15 @@
expectedTargetConditions: []string{},
},
{
- name: "independentmodulestatic",
- edge: annotated{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
- expectedDepActions: []string{
- "apacheBin.meta_lic:gplWithClasspathException.meta_lic:restricted_with_classpath_exception",
- "gplWithClasspathException.meta_lic:gplWithClasspathException.meta_lic:restricted_with_classpath_exception",
- },
+ name: "independentmodulestatic",
+ edge: annotated{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
+ expectedDepActions: []string{},
expectedTargetConditions: []string{},
},
{
- name: "dependentmodule",
- edge: annotated{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
- expectedDepActions: []string{
- "dependentModule.meta_lic:gplWithClasspathException.meta_lic:restricted_with_classpath_exception",
- "gplWithClasspathException.meta_lic:gplWithClasspathException.meta_lic:restricted_with_classpath_exception",
- },
+ name: "dependentmodule",
+ edge: annotated{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
+ expectedDepActions: []string{},
expectedTargetConditions: []string{},
},
@@ -105,7 +101,7 @@
name: "lgplonfp",
edge: annotated{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
expectedDepActions: []string{},
- expectedTargetConditions: []string{"lgplBin.meta_lic:restricted_allows_dynamic_linking"},
+ expectedTargetConditions: []string{"lgplBin.meta_lic:restricted_if_statically_linked"},
},
{
name: "lgplonfpdynamic",
@@ -166,13 +162,13 @@
name: "independentmodulereversestatic",
edge: annotated{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"static"}},
expectedDepActions: []string{},
- expectedTargetConditions: []string{"gplWithClasspathException.meta_lic:restricted_with_classpath_exception"},
+ expectedTargetConditions: []string{},
},
{
name: "dependentmodulereverse",
edge: annotated{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
expectedDepActions: []string{},
- expectedTargetConditions: []string{"gplWithClasspathException.meta_lic:restricted_with_classpath_exception"},
+ expectedTargetConditions: []string{},
},
{
name: "ponr",
@@ -216,7 +212,7 @@
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
- fs := make(testFS)
+ fs := make(testfs.TestFS)
stderr := &bytes.Buffer{}
target := meta[tt.edge.target] + fmt.Sprintf("deps: {\n file: \"%s\"\n", tt.edge.dep)
for _, ann := range tt.edge.annotations {
@@ -257,9 +253,9 @@
otherCs := otn.LicenseConditions()
depConditions |= otherCs
}
- t.Logf("calculate target actions for edge=%s, dep conditions=%04x, treatAsAggregate=%v", edge.String(), depConditions, tt.treatAsAggregate)
+ t.Logf("calculate target actions for edge=%s, dep conditions=%#v %s, treatAsAggregate=%v", edge.String(), depConditions, depConditions, tt.treatAsAggregate)
csActual := depConditionsPropagatingToTarget(lg, edge, depConditions, tt.treatAsAggregate)
- t.Logf("calculated target conditions as %04x{%s}", csActual, strings.Join(csActual.Names(), ", "))
+ t.Logf("calculated target conditions as %#v %s", csActual, csActual)
csExpected := NewLicenseConditionSet()
for _, triple := range tt.expectedDepActions {
fields := strings.Split(triple, ":")
@@ -269,9 +265,9 @@
}
csExpected |= expectedConditions
}
- t.Logf("expected target conditions as %04x{%s}", csExpected, strings.Join(csExpected.Names(), ", "))
+ t.Logf("expected target conditions as %#v %s", csExpected, csExpected)
if csActual != csExpected {
- t.Errorf("unexpected license conditions: got %04x, want %04x", csActual, csExpected)
+ t.Errorf("unexpected license conditions: got %#v, want %#v", csActual, csExpected)
}
})
}
diff --git a/tools/compliance/policy_resolve.go b/tools/compliance/policy_resolve.go
index d357aec..0951ba1 100644
--- a/tools/compliance/policy_resolve.go
+++ b/tools/compliance/policy_resolve.go
@@ -14,10 +14,6 @@
package compliance
-import (
- "sync"
-)
-
var (
// AllResolutions is a TraceConditions function that resolves all
// unfiltered license conditions.
@@ -49,89 +45,57 @@
func TraceBottomUpConditions(lg *LicenseGraph, conditionsFn TraceConditions) {
// short-cut if already walked and cached
- lg.mu.Lock()
- wg := lg.wgBU
+ lg.onceBottomUp.Do(func() {
+ // amap identifies targets previously walked.
+ amap := make(map[*TargetNode]struct{})
- if wg != nil {
- lg.mu.Unlock()
- wg.Wait()
- return
- }
- wg = &sync.WaitGroup{}
- wg.Add(1)
- lg.wgBU = wg
- lg.mu.Unlock()
+ var walk func(target *TargetNode, treatAsAggregate bool) LicenseConditionSet
- // amap identifes targets previously walked. (guarded by mu)
- amap := make(map[*TargetNode]struct{})
-
- // cmap identifies targets previously walked as pure aggregates. i.e. as containers
- // (guarded by mu)
- cmap := make(map[*TargetNode]struct{})
- var mu sync.Mutex
-
- var walk func(target *TargetNode, treatAsAggregate bool) LicenseConditionSet
-
- walk = func(target *TargetNode, treatAsAggregate bool) LicenseConditionSet {
- priorWalkResults := func() (LicenseConditionSet, bool) {
- mu.Lock()
- defer mu.Unlock()
-
- if _, alreadyWalked := amap[target]; alreadyWalked {
- if treatAsAggregate {
- return target.resolution, true
+ walk = func(target *TargetNode, treatAsAggregate bool) LicenseConditionSet {
+ priorWalkResults := func() (LicenseConditionSet, bool) {
+ if _, alreadyWalked := amap[target]; alreadyWalked {
+ if treatAsAggregate {
+ return target.resolution, true
+ }
+ if !target.pure {
+ return target.resolution, true
+ }
+ // previously walked in a pure aggregate context,
+ // needs to walk again in non-aggregate context
+ } else {
+ target.resolution |= conditionsFn(target)
+ amap[target] = struct{}{}
}
- if _, asAggregate := cmap[target]; !asAggregate {
- return target.resolution, true
- }
- // previously walked in a pure aggregate context,
- // needs to walk again in non-aggregate context
- delete(cmap, target)
- } else {
- target.resolution |= conditionsFn(target)
- amap[target] = struct{}{}
+ target.pure = treatAsAggregate
+ return target.resolution, false
}
- if treatAsAggregate {
- cmap[target] = struct{}{}
+ cs, alreadyWalked := priorWalkResults()
+ if alreadyWalked {
+ return cs
}
- return target.resolution, false
- }
- cs, alreadyWalked := priorWalkResults()
- if alreadyWalked {
+
+ // add all the conditions from all the dependencies
+ for _, edge := range target.edges {
+ // walk dependency to get its conditions
+ dcs := walk(edge.dependency, treatAsAggregate && edge.dependency.IsContainer())
+
+ // turn those into the conditions that apply to the target
+ dcs = depConditionsPropagatingToTarget(lg, edge, dcs, treatAsAggregate)
+ cs |= dcs
+ }
+ target.resolution |= cs
+ cs = target.resolution
+
+ // return conditions up the tree
return cs
}
- c := make(chan LicenseConditionSet, len(target.edges))
- // add all the conditions from all the dependencies
- for _, edge := range target.edges {
- go func(edge *TargetEdge) {
- // walk dependency to get its conditions
- cs := walk(edge.dependency, treatAsAggregate && edge.dependency.IsContainer())
-
- // turn those into the conditions that apply to the target
- cs = depConditionsPropagatingToTarget(lg, edge, cs, treatAsAggregate)
-
- c <- cs
- }(edge)
+ // walk each of the roots
+ for _, rname := range lg.rootFiles {
+ rnode := lg.targets[rname]
+ _ = walk(rnode, rnode.IsContainer())
}
- for i := 0; i < len(target.edges); i++ {
- cs |= <-c
- }
- mu.Lock()
- target.resolution |= cs
- mu.Unlock()
-
- // return conditions up the tree
- return cs
- }
-
- // walk each of the roots
- for _, rname := range lg.rootFiles {
- rnode := lg.targets[rname]
- _ = walk(rnode, rnode.IsContainer())
- }
-
- wg.Done()
+ })
}
// ResolveTopDownCondtions performs a top-down walk of the LicenseGraph
@@ -150,85 +114,61 @@
func TraceTopDownConditions(lg *LicenseGraph, conditionsFn TraceConditions) {
// short-cut if already walked and cached
- lg.mu.Lock()
- wg := lg.wgTD
+ lg.onceTopDown.Do(func() {
+ // start with the conditions propagated up the graph
+ TraceBottomUpConditions(lg, conditionsFn)
- if wg != nil {
- lg.mu.Unlock()
- wg.Wait()
- return
- }
- wg = &sync.WaitGroup{}
- wg.Add(1)
- lg.wgTD = wg
- lg.mu.Unlock()
+ // amap contains the set of targets already walked.
+ amap := make(map[*TargetNode]struct{})
- // start with the conditions propagated up the graph
- TraceBottomUpConditions(lg, conditionsFn)
+ var walk func(fnode *TargetNode, cs LicenseConditionSet, treatAsAggregate bool)
- // amap contains the set of targets already walked. (guarded by mu)
- amap := make(map[*TargetNode]struct{})
-
- // cmap contains the set of targets walked as pure aggregates. i.e. containers
- // (guarded by mu)
- cmap := make(map[*TargetNode]struct{})
-
- // mu guards concurrent access to cmap
- var mu sync.Mutex
-
- var walk func(fnode *TargetNode, cs LicenseConditionSet, treatAsAggregate bool)
-
- walk = func(fnode *TargetNode, cs LicenseConditionSet, treatAsAggregate bool) {
- defer wg.Done()
- mu.Lock()
- fnode.resolution |= conditionsFn(fnode)
- fnode.resolution |= cs
- amap[fnode] = struct{}{}
- if treatAsAggregate {
- cmap[fnode] = struct{}{}
- }
- cs = fnode.resolution
- mu.Unlock()
- // for each dependency
- for _, edge := range fnode.edges {
- func(edge *TargetEdge) {
- // dcs holds the dpendency conditions inherited from the target
- dcs := targetConditionsPropagatingToDep(lg, edge, cs, treatAsAggregate, conditionsFn)
- dnode := edge.dependency
- mu.Lock()
- defer mu.Unlock()
- depcs := dnode.resolution
- _, alreadyWalked := amap[dnode]
- if !dcs.IsEmpty() && alreadyWalked {
- if dcs.Difference(depcs).IsEmpty() {
+ walk = func(fnode *TargetNode, cs LicenseConditionSet, treatAsAggregate bool) {
+ continueWalk := func() bool {
+ if _, alreadyWalked := amap[fnode]; alreadyWalked {
+ if cs.IsEmpty() {
+ return false
+ }
+ if cs.Difference(fnode.resolution).IsEmpty() {
// no new conditions
// pure aggregates never need walking a 2nd time with same conditions
if treatAsAggregate {
- return
+ return false
}
// non-aggregates don't need walking as non-aggregate a 2nd time
- if _, asAggregate := cmap[dnode]; !asAggregate {
- return
+ if !fnode.pure {
+ return false
}
// previously walked as pure aggregate; need to re-walk as non-aggregate
- delete(cmap, dnode)
}
+ } else {
+ fnode.resolution |= conditionsFn(fnode)
}
+ fnode.resolution |= cs
+ fnode.pure = treatAsAggregate
+ amap[fnode] = struct{}{}
+ cs = fnode.resolution
+ return true
+ }()
+ if !continueWalk {
+ return
+ }
+ // for each dependency
+ for _, edge := range fnode.edges {
+ // dcs holds the dependency conditions inherited from the target
+ dcs := targetConditionsPropagatingToDep(lg, edge, cs, treatAsAggregate, conditionsFn)
+ dnode := edge.dependency
// add the conditions to the dependency
- wg.Add(1)
- go walk(dnode, dcs, treatAsAggregate && dnode.IsContainer())
- }(edge)
+ walk(dnode, dcs, treatAsAggregate && dnode.IsContainer())
+ }
}
- }
- // walk each of the roots
- for _, rname := range lg.rootFiles {
- rnode := lg.targets[rname]
- wg.Add(1)
- // add the conditions to the root and its transitive closure
- go walk(rnode, NewLicenseConditionSet(), rnode.IsContainer())
- }
- wg.Done()
- wg.Wait()
+ // walk each of the roots
+ for _, rname := range lg.rootFiles {
+ rnode := lg.targets[rname]
+ // add the conditions to the root and its transitive closure
+ walk(rnode, NewLicenseConditionSet(), rnode.IsContainer())
+ }
+ })
}
diff --git a/tools/compliance/policy_resolve_test.go b/tools/compliance/policy_resolve_test.go
index f98e4cc..f9ea6a1 100644
--- a/tools/compliance/policy_resolve_test.go
+++ b/tools/compliance/policy_resolve_test.go
@@ -204,8 +204,8 @@
{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"apacheBin.meta_lic", "notice|restricted_allows_dynamic_linking"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"apacheBin.meta_lic", "notice|restricted_if_statically_linked"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -216,7 +216,7 @@
},
expectedActions: []tcond{
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -227,9 +227,9 @@
{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"apacheContainer.meta_lic", "notice|restricted_allows_dynamic_linking"},
- {"apacheBin.meta_lic", "notice|restricted_allows_dynamic_linking"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"apacheContainer.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "notice|restricted_if_statically_linked"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -240,9 +240,9 @@
{"apacheContainer.meta_lic", "lgplLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"apacheContainer.meta_lic", "notice|restricted_allows_dynamic_linking"},
+ {"apacheContainer.meta_lic", "notice|restricted_if_statically_linked"},
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -253,7 +253,7 @@
},
expectedActions: []tcond{
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -266,7 +266,7 @@
expectedActions: []tcond{
{"apacheContainer.meta_lic", "notice"},
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -279,7 +279,7 @@
expectedActions: []tcond{
{"apacheContainer.meta_lic", "notice"},
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -289,8 +289,8 @@
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"apacheBin.meta_lic", "notice|restricted_with_classpath_exception"},
- {"gplWithClasspathException.meta_lic", "restricted_with_classpath_exception"},
+ {"apacheBin.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -300,8 +300,8 @@
{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"dependentModule.meta_lic", "notice|restricted_with_classpath_exception"},
- {"gplWithClasspathException.meta_lic", "restricted_with_classpath_exception"},
+ {"dependentModule.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -312,7 +312,7 @@
},
expectedActions: []tcond{
{"apacheBin.meta_lic", "notice"},
- {"gplWithClasspathException.meta_lic", "restricted_with_classpath_exception"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -322,8 +322,8 @@
{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
expectedActions: []tcond{
- {"dependentModule.meta_lic", "notice|restricted_with_classpath_exception"},
- {"gplWithClasspathException.meta_lic", "restricted_with_classpath_exception"},
+ {"dependentModule.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
},
},
}
@@ -500,9 +500,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"apacheBin.meta_lic", "notice|restricted_allows_dynamic_linking"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
- {"mitLib.meta_lic", "notice|restricted_allows_dynamic_linking"},
+ {"apacheBin.meta_lic", "notice|restricted_if_statically_linked"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
+ {"mitLib.meta_lic", "notice|restricted_if_statically_linked"},
},
},
{
@@ -514,7 +514,7 @@
},
expectedActions: []tcond{
{"apacheBin.meta_lic", "notice"},
- {"lgplBin.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplBin.meta_lic", "restricted_if_statically_linked"},
{"mitLib.meta_lic", "notice"},
},
},
@@ -527,10 +527,10 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"apacheContainer.meta_lic", "notice|restricted_allows_dynamic_linking"},
- {"apacheBin.meta_lic", "notice|restricted_allows_dynamic_linking"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
- {"mitLib.meta_lic", "notice|restricted_allows_dynamic_linking"},
+ {"apacheContainer.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "notice|restricted_if_statically_linked"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
+ {"mitLib.meta_lic", "notice|restricted_if_statically_linked"},
},
},
{
@@ -541,9 +541,9 @@
{"apacheContainer.meta_lic", "lgplLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"apacheContainer.meta_lic", "notice|restricted_allows_dynamic_linking"},
+ {"apacheContainer.meta_lic", "notice|restricted_if_statically_linked"},
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -555,7 +555,7 @@
},
expectedActions: []tcond{
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
{"mitLib.meta_lic", "notice"},
},
},
@@ -569,7 +569,7 @@
expectedActions: []tcond{
{"apacheContainer.meta_lic", "notice"},
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -582,7 +582,7 @@
expectedActions: []tcond{
{"apacheContainer.meta_lic", "notice"},
{"apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "restricted_allows_dynamic_linking"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -593,9 +593,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"apacheBin.meta_lic", "notice|restricted_with_classpath_exception"},
- {"gplWithClasspathException.meta_lic", "restricted_with_classpath_exception"},
- {"mitLib.meta_lic", "notice|restricted_with_classpath_exception"},
+ {"apacheBin.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
+ {"mitLib.meta_lic", "notice"},
},
},
{
@@ -606,9 +606,9 @@
{"dependentModule.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"dependentModule.meta_lic", "notice|restricted_with_classpath_exception"},
- {"gplWithClasspathException.meta_lic", "restricted_with_classpath_exception"},
- {"mitLib.meta_lic", "notice|restricted_with_classpath_exception"},
+ {"dependentModule.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
+ {"mitLib.meta_lic", "notice"},
},
},
{
@@ -620,7 +620,7 @@
},
expectedActions: []tcond{
{"apacheBin.meta_lic", "notice"},
- {"gplWithClasspathException.meta_lic", "restricted_with_classpath_exception"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
{"mitLib.meta_lic", "notice"},
},
},
@@ -632,9 +632,9 @@
{"dependentModule.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []tcond{
- {"dependentModule.meta_lic", "notice|restricted_with_classpath_exception"},
- {"gplWithClasspathException.meta_lic", "restricted_with_classpath_exception"},
- {"mitLib.meta_lic", "notice|restricted_with_classpath_exception"},
+ {"dependentModule.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
+ {"mitLib.meta_lic", "notice"},
},
},
}
diff --git a/tools/compliance/policy_resolvenotices_test.go b/tools/compliance/policy_resolvenotices_test.go
index cd9dd71..b23e587 100644
--- a/tools/compliance/policy_resolvenotices_test.go
+++ b/tools/compliance/policy_resolvenotices_test.go
@@ -33,8 +33,8 @@
{"apacheBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheLib.meta_lic", "notice"},
},
},
{
@@ -44,7 +44,7 @@
{"apacheBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
},
},
{
@@ -54,8 +54,8 @@
{"apacheBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
},
},
{
@@ -66,11 +66,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice|restricted"},
+ {"apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice|restricted"},
},
},
{
@@ -81,8 +79,8 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -96,22 +94,17 @@
{"mitBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "mplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "mplLib.meta_lic", "mplLib.meta_lic", "reciprocal"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "mplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "mplLib.meta_lic", "mplLib.meta_lic", "reciprocal"},
- {"mitBin.meta_lic", "mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"mitBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice|restricted"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice|restricted"},
+ {"apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "mitBin.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "mitLib.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "mplLib.meta_lic", "reciprocal|restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice|restricted"},
+ {"apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "mplLib.meta_lic", "reciprocal|restricted"},
+ {"mitBin.meta_lic", "mitBin.meta_lic", "notice"},
+ {"mitBin.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -122,12 +115,11 @@
{"apacheContainer.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice|restricted"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -138,8 +130,8 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "restricted"},
},
},
{
@@ -150,11 +142,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"mitLib.meta_lic", "mitLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"mitLib.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice|restricted"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "restricted"},
+ {"mitLib.meta_lic", "mitLib.meta_lic", "notice|restricted"},
},
},
{
@@ -168,14 +158,11 @@
{"mitBin.meta_lic", "mitLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"mitBin.meta_lic", "mitBin.meta_lic", "mitBin.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice|restricted"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice|restricted"},
+ {"apacheContainer.meta_lic", "mitBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice|restricted"},
+ {"mitBin.meta_lic", "mitBin.meta_lic", "notice"},
},
},
{
@@ -186,10 +173,9 @@
{"apacheContainer.meta_lic", "gplLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice|restricted"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
},
},
{
@@ -200,12 +186,11 @@
{"apacheContainer.meta_lic", "gplLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice|restricted"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -216,11 +201,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "lgplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice|restricted_if_statically_linked"},
},
},
{
@@ -231,8 +214,8 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -243,9 +226,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice"},
+ {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -257,18 +240,13 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "mitLib.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "lgplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheContainer.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
+ {"apacheContainer.meta_lic", "mitLib.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice|restricted_if_statically_linked"},
},
},
{
@@ -279,12 +257,11 @@
{"apacheContainer.meta_lic", "lgplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"lgplLib.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -295,8 +272,8 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -307,9 +284,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"lgplLib.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice"},
+ {"lgplLib.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -320,9 +297,9 @@
{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
},
},
{
@@ -333,10 +310,10 @@
{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"lgplLib.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -347,9 +324,9 @@
{"apacheContainer.meta_lic", "lgplLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
},
},
{
@@ -360,10 +337,10 @@
{"apacheContainer.meta_lic", "lgplLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"lgplLib.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"lgplLib.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -374,11 +351,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "permissive"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -389,11 +364,9 @@
{"dependentModule.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"dependentModule.meta_lic", "mitLib.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
+ {"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "permissive"},
+ {"dependentModule.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -404,8 +377,8 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -416,9 +389,9 @@
{"apacheBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "mitLib.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -429,10 +402,8 @@
{"dependentModule.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"dependentModule.meta_lic", "mitLib.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
+ {"dependentModule.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -443,12 +414,9 @@
{"dependentModule.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
- {"dependentModule.meta_lic", "mitLib.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
+ {"dependentModule.meta_lic", "mitLib.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "permissive"},
},
},
}
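
A note on the table rewrites in this file: each expected resolution drops its separate origin column, and all conditions resolved for an attachesTo/actsOn pair collapse into one `|`-joined string. A minimal sketch of the new row shape — the field names here are assumptions for illustration, not taken from the patch:

    // res describes one expected resolution in the tables above.
    type res struct {
            attachesTo string // target the resolution attaches to
            actsOn     string // target the resolution acts on
            conditions string // "|"-joined condition names, e.g. "notice|restricted"
    }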
diff --git a/tools/compliance/policy_resolveprivacy_test.go b/tools/compliance/policy_resolveprivacy_test.go
index e8c953a..d4d1967 100644
--- a/tools/compliance/policy_resolveprivacy_test.go
+++ b/tools/compliance/policy_resolveprivacy_test.go
@@ -57,7 +57,7 @@
{"proprietary.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
+ {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
},
},
{
@@ -67,7 +67,7 @@
{"gplBin.meta_lic", "proprietary.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
+ {"gplBin.meta_lic", "proprietary.meta_lic", "proprietary"},
},
},
}
diff --git a/tools/compliance/policy_resolveshare_test.go b/tools/compliance/policy_resolveshare_test.go
index c451b86..4abd960 100644
--- a/tools/compliance/policy_resolveshare_test.go
+++ b/tools/compliance/policy_resolveshare_test.go
@@ -40,9 +40,7 @@
edges: []annotated{
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "independentmodulestaticrestricted",
@@ -50,10 +48,7 @@
edges: []annotated{
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
},
- expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "dependentmodulerestricted",
@@ -61,9 +56,7 @@
edges: []annotated{
{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "dependentmodulerestrictedshipclasspath",
@@ -71,11 +64,7 @@
edges: []annotated{
{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "lgplonfprestricted",
@@ -84,8 +73,8 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
- {"lgplBin.meta_lic", "apacheLib.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted_if_statically_linked"},
+ {"lgplBin.meta_lic", "apacheLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -95,7 +84,7 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -105,7 +94,7 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -115,8 +104,8 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"gplBin.meta_lic", "apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "apacheLib.meta_lic", "restricted"},
},
},
{
@@ -126,9 +115,9 @@
{"gplContainer.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplContainer.meta_lic", "gplContainer.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"gplContainer.meta_lic", "apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
+ {"gplContainer.meta_lic", "gplContainer.meta_lic", "restricted"},
+ {"gplContainer.meta_lic", "apacheLib.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "apacheLib.meta_lic", "restricted"},
},
},
{
@@ -139,9 +128,9 @@
{"apacheContainer.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -152,9 +141,9 @@
{"apacheBin.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "apacheLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheLib.meta_lic", "restricted"},
},
},
{
@@ -164,7 +153,7 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
},
},
{
@@ -174,9 +163,9 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"gplBin.meta_lic", "apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "apacheLib.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "apacheLib.meta_lic", "restricted"},
},
},
{
@@ -185,9 +174,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "independentmodulereversestaticrestricted",
@@ -195,10 +182,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"static"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "dependentmodulereverserestricted",
@@ -206,9 +190,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "dependentmodulereverserestrictedshipdependent",
@@ -216,11 +198,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "ponrrestricted",
@@ -229,8 +207,8 @@
{"proprietary.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"proprietary.meta_lic", "proprietary.meta_lic", "gplLib.meta_lic", "restricted"},
- {"proprietary.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "proprietary.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -240,8 +218,8 @@
{"gplBin.meta_lic", "proprietary.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"gplBin.meta_lic", "proprietary.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "proprietary.meta_lic", "restricted"},
},
},
{
@@ -267,7 +245,7 @@
{"mitBin.meta_lic", "mplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mitBin.meta_lic", "mplLib.meta_lic", "mplLib.meta_lic", "reciprocal"},
+ {"mitBin.meta_lic", "mplLib.meta_lic", "reciprocal"},
},
},
{
@@ -277,7 +255,7 @@
{"mplBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mplBin.meta_lic", "mplBin.meta_lic", "mplBin.meta_lic", "reciprocal"},
+ {"mplBin.meta_lic", "mplBin.meta_lic", "reciprocal"},
},
},
}
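
Two behavioral changes run through the hunks above: resolutions rooted in gplWithClasspathException collapse to empty (the classpath exception is now treated as permissive rather than restricted), and the LGPL-style condition is renamed from restricted to restricted_if_statically_linked. Since expected conditions now arrive as one `|`-joined string, a comparison helper along these lines could back the tests — a hypothetical sketch, not part of the patch:

    package compliance

    import (
            "sort"
            "strings"
    )

    // conditionsMatch reports whether the "|"-joined `expected` string names
    // exactly the condition names in `actual`, ignoring order.
    func conditionsMatch(expected string, actual []string) bool {
            want := strings.Split(expected, "|")
            got := append([]string(nil), actual...)
            sort.Strings(want)
            sort.Strings(got)
            if len(want) != len(got) {
                    return false
            }
            for i := range want {
                    if want[i] != got[i] {
                            return false
                    }
            }
            return true
    }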
diff --git a/tools/compliance/policy_shareprivacyconflicts.go b/tools/compliance/policy_shareprivacyconflicts.go
index 279e179..947bb96 100644
--- a/tools/compliance/policy_shareprivacyconflicts.go
+++ b/tools/compliance/policy_shareprivacyconflicts.go
@@ -49,7 +49,11 @@
// size is the size of the result
size := 0
- for _, cs := range combined {
+ for actsOn, cs := range combined {
+ if actsOn.pure && !actsOn.LicenseConditions().MatchesAnySet(ImpliesShared) {
+ // no need to share code to build "a distribution medium"
+ continue
+ }
size += cs.Intersection(ImpliesShared).Len() * cs.Intersection(ImpliesPrivate).Len()
}
if size == 0 {
@@ -57,6 +61,9 @@
}
result := make([]SourceSharePrivacyConflict, 0, size)
for actsOn, cs := range combined {
+ if actsOn.pure { // no need to share code for "a distribution medium"
+ continue
+ }
pconditions := cs.Intersection(ImpliesPrivate).AsList()
ssconditions := cs.Intersection(ImpliesShared).AsList()
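
The guards added above encode a policy decision: a pure aggregate — a target that only packages other works, i.e. "a distribution medium" — does not itself create a source-sharing obligation, so it is dropped from the conflict computation (in the sizing pass, only when its own license conditions do not imply sharing). Conflicts still surface on the binaries and libraries that actually combine code. A hedged fragment showing the intended effect; the function and field names below are assumed, not shown in this hunk:

    // Pure containers no longer appear as conflict sources; only nodes
    // that actually combine code do.
    for _, c := range ConflictingSharedPrivateSource(lg) {
            fmt.Println(c.SourceNode.Name(), c.ShareCondition.Name(), c.PrivacyCondition.Name())
    }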
diff --git a/tools/compliance/policy_shipped.go b/tools/compliance/policy_shipped.go
index 75c8399..b21a95a 100644
--- a/tools/compliance/policy_shipped.go
+++ b/tools/compliance/policy_shipped.go
@@ -16,15 +16,15 @@
// ShippedNodes returns the set of nodes in a license graph where the target or
// a derivative work gets distributed. (caches result)
-func ShippedNodes(lg *LicenseGraph) *TargetNodeSet {
+func ShippedNodes(lg *LicenseGraph) TargetNodeSet {
lg.mu.Lock()
shipped := lg.shippedNodes
lg.mu.Unlock()
if shipped != nil {
- return shipped
+ return *shipped
}
- tset := make(map[*TargetNode]struct{})
+ tset := make(TargetNodeSet)
WalkTopDown(NoEdgeContext{}, lg, func(lg *LicenseGraph, tn *TargetNode, path TargetEdgePath) bool {
if _, alreadyWalked := tset[tn]; alreadyWalked {
@@ -39,7 +39,7 @@
return true
})
- shipped = &TargetNodeSet{tset}
+ shipped = &tset
lg.mu.Lock()
if lg.shippedNodes == nil {
@@ -50,5 +50,5 @@
}
lg.mu.Unlock()
- return shipped
+ return *shipped
}
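
With ShippedNodes returning the TargetNodeSet by value — and TargetNodeSet now a plain map type built with make and indexed directly — callers can range over or probe the set without accessor methods, as the rewritten WalkActionsForCondition below does. A minimal usage sketch based on the API as it appears in these hunks:

    shipped := ShippedNodes(lg)
    for tn := range shipped {
            // tn (*TargetNode) is distributed, or a derivative of it is.
    }
    if _, ok := shipped[target]; ok {
            // `target` ships.
    }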
diff --git a/tools/compliance/policy_walk.go b/tools/compliance/policy_walk.go
index f4d7bba..e6b94ab 100644
--- a/tools/compliance/policy_walk.go
+++ b/tools/compliance/policy_walk.go
@@ -45,7 +45,7 @@
}
// VisitNode is called for each root and for each walked dependency node by
-// WalkTopDown. When VisitNode returns true, WalkTopDown will proceed to walk
+// WalkTopDown and WalkTopDownBreadthFirst. When VisitNode returns true, WalkTopDown will proceed to walk
// down the dependencies of the node
type VisitNode func(lg *LicenseGraph, target *TargetNode, path TargetEdgePath) bool
@@ -79,6 +79,54 @@
}
}
+// WalkTopDownBreadthFirst performs a breadth-first top-down walk of `lg`,
+// calling `visit` and descending into dependencies when `visit` returns true.
+func WalkTopDownBreadthFirst(ctx EdgeContextProvider, lg *LicenseGraph, visit VisitNode) {
+ path := NewTargetEdgePath(32)
+
+ var walk func(fnode *TargetNode)
+ walk = func(fnode *TargetNode) {
+ edgesToWalk := make(TargetEdgeList, 0, len(fnode.edges))
+ for _, edge := range fnode.edges {
+ var edgeContext interface{}
+ if ctx == nil {
+ edgeContext = nil
+ } else {
+ edgeContext = ctx.Context(lg, *path, edge)
+ }
+ path.Push(edge, edgeContext)
+ if visit(lg, edge.dependency, *path) {
+ edgesToWalk = append(edgesToWalk, edge)
+ }
+ path.Pop()
+ }
+
+ for _, edge := range edgesToWalk {
+ var edgeContext interface{}
+ if ctx == nil {
+ edgeContext = nil
+ } else {
+ edgeContext = ctx.Context(lg, *path, edge)
+ }
+ path.Push(edge, edgeContext)
+ walk(edge.dependency)
+ path.Pop()
+ }
+ }
+
+ path.Clear()
+ rootsToWalk := make([]*TargetNode, 0, len(lg.rootFiles))
+ for _, r := range lg.rootFiles {
+ if visit(lg, lg.targets[r], *path) {
+ rootsToWalk = append(rootsToWalk, lg.targets[r])
+ }
+ }
+
+ for _, rnode := range rootsToWalk {
+ walk(rnode)
+ }
+}
+
// resolutionKey identifies results from walking a specific target for a
// specific set of conditions.
type resolutionKey struct {
@@ -199,40 +247,16 @@
// WalkActionsForCondition resolves all distributed works for `conditions`
// by intersecting each shipped node's resolved conditions with `conditions`.
func WalkActionsForCondition(lg *LicenseGraph, conditions LicenseConditionSet) ActionSet {
- shipped := ShippedNodes(lg)
-
- // cmap identifies previously walked target/condition pairs.
- cmap := make(map[resolutionKey]struct{})
-
// amap maps 'actsOn' targets to the applicable conditions
//
// amap is the resulting ActionSet
amap := make(ActionSet)
- WalkTopDown(ApplicableConditionsContext{conditions}, lg, func(lg *LicenseGraph, tn *TargetNode, path TargetEdgePath) bool {
- universe := conditions
- if len(path) > 0 {
- universe = path[len(path)-1].ctx.(LicenseConditionSet)
+
+ for tn := range ShippedNodes(lg) {
+ if cs := conditions.Intersection(tn.resolution); !cs.IsEmpty() {
+ amap[tn] = cs
}
- if universe.IsEmpty() {
- return false
- }
- key := resolutionKey{tn, universe}
- if _, ok := cmap[key]; ok {
- return false
- }
- if !shipped.Contains(tn) {
- return false
- }
- cs := universe.Intersection(tn.resolution)
- if !cs.IsEmpty() {
- if _, ok := amap[tn]; ok {
- amap[tn] = cs
- } else {
- amap[tn] = amap[tn].Union(cs)
- }
- }
- return true
- })
+ }
return amap
}
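
WalkTopDownBreadthFirst, added above, visits all of a node's direct dependencies (calling `visit` on each) before descending into any of them, starting from the graph roots. A hedged usage sketch reusing the signatures shown in this file:

    visited := make(map[*TargetNode]bool)
    WalkTopDownBreadthFirst(NoEdgeContext{}, lg, func(lg *LicenseGraph, tn *TargetNode, path TargetEdgePath) bool {
            if visited[tn] {
                    // Already reached via another path; do not descend again.
                    return false
            }
            visited[tn] = true
            return true
    })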
diff --git a/tools/compliance/policy_walk_test.go b/tools/compliance/policy_walk_test.go
index 92867f9..53af3be 100644
--- a/tools/compliance/policy_walk_test.go
+++ b/tools/compliance/policy_walk_test.go
@@ -16,9 +16,22 @@
import (
"bytes"
+ "fmt"
+ "os"
+ "strings"
"testing"
)
+func TestMain(m *testing.M) {
+ // Change into the cmd directory before running the tests
+ // so they can find the testdata directory.
+ if err := os.Chdir("cmd"); err != nil {
+ fmt.Printf("failed to change to testdata directory: %s\n", err)
+ os.Exit(1)
+ }
+ os.Exit(m.Run())
+}
+
func TestWalkResolutionsForCondition(t *testing.T) {
tests := []struct {
name string
@@ -35,8 +48,8 @@
{"apacheBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheLib.meta_lic", "notice"},
},
},
{
@@ -47,8 +60,8 @@
{"mitBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mitBin.meta_lic", "mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"mitBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"mitBin.meta_lic", "mitBin.meta_lic", "notice"},
+ {"mitBin.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -59,9 +72,8 @@
{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice|restricted_if_statically_linked"},
+ {"apacheBin.meta_lic", "lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -72,7 +84,7 @@
{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
},
},
{
@@ -83,7 +95,7 @@
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
},
},
{
@@ -103,9 +115,8 @@
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -115,10 +126,7 @@
edges: []annotated{
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
},
- expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "dependentmodulenotice",
@@ -128,8 +136,7 @@
{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
},
},
{
@@ -139,9 +146,7 @@
edges: []annotated{
{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "lgplonfpnotice",
@@ -151,9 +156,8 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
- {"lgplBin.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"lgplBin.meta_lic", "apacheLib.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted_if_statically_linked"},
+ {"lgplBin.meta_lic", "apacheLib.meta_lic", "notice|restricted_if_statically_linked"},
},
},
{
@@ -164,8 +168,8 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
- {"lgplBin.meta_lic", "apacheLib.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted_if_statically_linked"},
+ {"lgplBin.meta_lic", "apacheLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -176,7 +180,7 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -187,7 +191,7 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -198,9 +202,8 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"gplBin.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"gplBin.meta_lic", "apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "apacheLib.meta_lic", "notice|restricted"},
},
},
{
@@ -211,8 +214,8 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"gplBin.meta_lic", "apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "apacheLib.meta_lic", "restricted"},
},
},
{
@@ -223,11 +226,9 @@
{"gplContainer.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplContainer.meta_lic", "gplContainer.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"gplContainer.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"gplContainer.meta_lic", "apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
+ {"gplContainer.meta_lic", "gplContainer.meta_lic", "restricted"},
+ {"gplContainer.meta_lic", "apacheLib.meta_lic", "notice|restricted"},
+ {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice|restricted"},
},
},
{
@@ -238,9 +239,9 @@
{"gplContainer.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplContainer.meta_lic", "gplContainer.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"gplContainer.meta_lic", "apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
+ {"gplContainer.meta_lic", "gplContainer.meta_lic", "restricted"},
+ {"gplContainer.meta_lic", "apacheLib.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "apacheLib.meta_lic", "restricted"},
},
},
{
@@ -252,12 +253,11 @@
{"apacheContainer.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice|restricted"},
+ {"apacheContainer.meta_lic", "apacheLib.meta_lic", "notice"},
+ {"apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
+ {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -269,9 +269,9 @@
{"apacheContainer.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheContainer.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -283,11 +283,9 @@
{"apacheBin.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheBin.meta_lic", "apacheLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice|restricted"},
+ {"apacheBin.meta_lic", "apacheLib.meta_lic", "notice|restricted"},
+ {"apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -299,9 +297,9 @@
{"apacheBin.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "apacheLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheBin.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "apacheLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -312,7 +310,7 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
},
},
{
@@ -323,7 +321,7 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
},
},
{
@@ -334,9 +332,9 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"gplBin.meta_lic", "apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "apacheLib.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "apacheLib.meta_lic", "restricted"},
},
},
{
@@ -347,7 +345,7 @@
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -357,9 +355,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "independentmodulereverserestrictedshipped",
@@ -368,9 +364,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "independentmodulereversestaticnotice",
@@ -380,9 +374,8 @@
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "permissive"},
+ {"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", "notice"},
},
},
{
@@ -392,10 +385,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"static"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "dependentmodulereversenotice",
@@ -405,7 +395,7 @@
{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
},
expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -415,9 +405,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "dependentmodulereverserestrictedshipped",
@@ -426,11 +414,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
},
- expectedResolutions: []res{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedResolutions: []res{},
},
{
name: "ponrnotice",
@@ -440,9 +424,8 @@
{"proprietary.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
- {"proprietary.meta_lic", "proprietary.meta_lic", "gplLib.meta_lic", "restricted"},
- {"proprietary.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "proprietary.meta_lic", "restricted|proprietary"},
+ {"proprietary.meta_lic", "gplLib.meta_lic", "restricted"},
},
},
{
@@ -453,8 +436,8 @@
{"proprietary.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"proprietary.meta_lic", "gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"proprietary.meta_lic", "proprietary.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "proprietary.meta_lic", "restricted"},
},
},
{
@@ -465,7 +448,7 @@
{"proprietary.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
+ {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
},
},
{
@@ -476,9 +459,8 @@
{"gplBin.meta_lic", "proprietary.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"gplBin.meta_lic", "proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
- {"gplBin.meta_lic", "proprietary.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "proprietary.meta_lic", "restricted|proprietary"},
},
},
{
@@ -489,8 +471,8 @@
{"gplBin.meta_lic", "proprietary.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"gplBin.meta_lic", "proprietary.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "proprietary.meta_lic", "restricted"},
},
},
{
@@ -501,7 +483,7 @@
{"gplBin.meta_lic", "proprietary.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"gplBin.meta_lic", "proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
+ {"gplBin.meta_lic", "proprietary.meta_lic", "proprietary"},
},
},
{
@@ -512,8 +494,8 @@
{"mitBin.meta_lic", "by_exception.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mitBin.meta_lic", "mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"mitBin.meta_lic", "by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
+ {"mitBin.meta_lic", "mitBin.meta_lic", "notice"},
+ {"mitBin.meta_lic", "by_exception.meta_lic", "by_exception_only"},
},
},
{
@@ -533,7 +515,7 @@
{"mitBin.meta_lic", "by_exception.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mitBin.meta_lic", "by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
+ {"mitBin.meta_lic", "by_exception.meta_lic", "by_exception_only"},
},
},
{
@@ -544,8 +526,8 @@
{"by_exception.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"by_exception.meta_lic", "by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
- {"by_exception.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
+ {"by_exception.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -565,7 +547,7 @@
{"by_exception.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"by_exception.meta_lic", "by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
+ {"by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
},
},
{
@@ -576,8 +558,8 @@
{"mitBin.meta_lic", "mplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mitBin.meta_lic", "mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"mitBin.meta_lic", "mplLib.meta_lic", "mplLib.meta_lic", "reciprocal"},
+ {"mitBin.meta_lic", "mitBin.meta_lic", "notice"},
+ {"mitBin.meta_lic", "mplLib.meta_lic", "reciprocal"},
},
},
{
@@ -588,7 +570,7 @@
{"mitBin.meta_lic", "mplLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mitBin.meta_lic", "mplLib.meta_lic", "mplLib.meta_lic", "reciprocal"},
+ {"mitBin.meta_lic", "mplLib.meta_lic", "reciprocal"},
},
},
{
@@ -599,8 +581,8 @@
{"mplBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mplBin.meta_lic", "mplBin.meta_lic", "mplBin.meta_lic", "reciprocal"},
- {"mplBin.meta_lic", "mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"mplBin.meta_lic", "mplBin.meta_lic", "reciprocal"},
+ {"mplBin.meta_lic", "mitLib.meta_lic", "notice"},
},
},
{
@@ -611,7 +593,7 @@
{"mplBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedResolutions: []res{
- {"mplBin.meta_lic", "mplBin.meta_lic", "mplBin.meta_lic", "reciprocal"},
+ {"mplBin.meta_lic", "mplBin.meta_lic", "reciprocal"},
},
},
}
@@ -647,8 +629,8 @@
{"apacheBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "notice"},
+ {"apacheLib.meta_lic", "notice"},
},
},
{
@@ -659,8 +641,8 @@
{"mitBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"mitBin.meta_lic", "notice"},
+ {"mitLib.meta_lic", "notice"},
},
},
{
@@ -671,9 +653,8 @@
{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "lgplLib.meta_lic", "restricted"},
- {"lgplLib.meta_lic", "lgplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "notice"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -684,7 +665,7 @@
{"apacheBin.meta_lic", "lgplLib.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "notice"},
},
},
{
@@ -695,7 +676,7 @@
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
+ {"apacheBin.meta_lic", "notice"},
},
},
{
@@ -715,9 +696,8 @@
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "notice"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -727,10 +707,7 @@
edges: []annotated{
{"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", []string{"static"}},
},
- expectedActions: []act{
- {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedActions: []act{},
},
{
name: "dependentmodulenotice",
@@ -740,8 +717,7 @@
{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"dependentModule.meta_lic", "dependentModule.meta_lic", "notice"},
- {"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"dependentModule.meta_lic", "notice"},
},
},
{
@@ -751,9 +727,7 @@
edges: []annotated{
{"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", []string{"dynamic"}},
},
- expectedActions: []act{
- {"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedActions: []act{},
},
{
name: "lgplonfpnotice",
@@ -763,9 +737,8 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheLib.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "restricted_if_statically_linked"},
+ {"apacheLib.meta_lic", "notice|restricted_if_statically_linked"},
},
},
{
@@ -776,8 +749,8 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "restricted_if_statically_linked"},
+ {"apacheLib.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -788,7 +761,7 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -799,7 +772,7 @@
{"lgplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"lgplBin.meta_lic", "lgplBin.meta_lic", "restricted"},
+ {"lgplBin.meta_lic", "restricted_if_statically_linked"},
},
},
{
@@ -810,9 +783,8 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "notice|restricted"},
},
},
{
@@ -823,8 +795,8 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "restricted"},
},
},
{
@@ -835,11 +807,8 @@
{"gplContainer.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"gplContainer.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
+ {"gplContainer.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "notice|restricted"},
},
},
{
@@ -850,9 +819,8 @@
{"gplContainer.meta_lic", "apacheLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"gplContainer.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "gplContainer.meta_lic", "restricted"},
+ {"gplContainer.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "restricted"},
},
},
{
@@ -864,11 +832,9 @@
{"apacheContainer.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"apacheContainer.meta_lic", "apacheContainer.meta_lic", "notice"},
- {"apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "notice|restricted"},
+ {"apacheLib.meta_lic", "notice"},
+ {"gplLib.meta_lic", "restricted"},
},
},
{
@@ -880,8 +846,8 @@
{"apacheContainer.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"apacheContainer.meta_lic", "gplLib.meta_lic", "restricted"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheContainer.meta_lic", "restricted"},
+ {"gplLib.meta_lic", "restricted"},
},
},
{
@@ -893,11 +859,9 @@
{"apacheBin.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "apacheLib.meta_lic", "notice"},
- {"apacheLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "notice|restricted"},
+ {"apacheLib.meta_lic", "notice|restricted"},
+ {"gplLib.meta_lic", "restricted"},
},
},
{
@@ -909,9 +873,9 @@
{"apacheBin.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"apacheBin.meta_lic", "gplLib.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"apacheBin.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "restricted"},
+ {"gplLib.meta_lic", "restricted"},
},
},
{
@@ -922,7 +886,7 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "restricted"},
},
},
{
@@ -933,7 +897,7 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "restricted"},
},
},
{
@@ -944,8 +908,8 @@
{"gplBin.meta_lic", "apacheLib.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"apacheLib.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "restricted"},
+ {"apacheLib.meta_lic", "restricted"},
},
},
{
@@ -956,7 +920,7 @@
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -966,9 +930,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"dynamic"}},
},
- expectedActions: []act{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedActions: []act{},
},
{
name: "independentmodulereverserestrictedshipped",
@@ -977,9 +939,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"dynamic"}},
},
- expectedActions: []act{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedActions: []act{},
},
{
name: "independentmodulereversestaticnotice",
@@ -989,9 +949,8 @@
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "apacheBin.meta_lic", "notice"},
- {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
+ {"apacheBin.meta_lic", "notice"},
},
},
{
@@ -1001,10 +960,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "apacheBin.meta_lic", []string{"static"}},
},
- expectedActions: []act{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"apacheBin.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedActions: []act{},
},
{
name: "dependentmodulereversenotice",
@@ -1014,7 +970,7 @@
{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
},
expectedActions: []act{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
+ {"gplWithClasspathException.meta_lic", "permissive"},
},
},
{
@@ -1024,9 +980,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
},
- expectedActions: []act{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedActions: []act{},
},
{
name: "dependentmodulereverserestrictedshipped",
@@ -1035,10 +989,7 @@
edges: []annotated{
{"gplWithClasspathException.meta_lic", "dependentModule.meta_lic", []string{"dynamic"}},
},
- expectedActions: []act{
- {"gplWithClasspathException.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- {"dependentModule.meta_lic", "gplWithClasspathException.meta_lic", "restricted"},
- },
+ expectedActions: []act{},
},
{
name: "ponrnotice",
@@ -1048,9 +999,8 @@
{"proprietary.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
- {"proprietary.meta_lic", "gplLib.meta_lic", "restricted"},
- {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "restricted|proprietary"},
+ {"gplLib.meta_lic", "restricted"},
},
},
{
@@ -1061,8 +1011,8 @@
{"proprietary.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"gplLib.meta_lic", "gplLib.meta_lic", "restricted"},
- {"proprietary.meta_lic", "gplLib.meta_lic", "restricted"},
+ {"gplLib.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "restricted"},
},
},
{
@@ -1073,7 +1023,7 @@
{"proprietary.meta_lic", "gplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
+ {"proprietary.meta_lic", "proprietary"},
},
},
{
@@ -1084,9 +1034,8 @@
{"gplBin.meta_lic", "proprietary.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
- {"proprietary.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "restricted|proprietary"},
},
},
{
@@ -1097,8 +1046,8 @@
{"gplBin.meta_lic", "proprietary.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"gplBin.meta_lic", "gplBin.meta_lic", "restricted"},
- {"proprietary.meta_lic", "gplBin.meta_lic", "restricted"},
+ {"gplBin.meta_lic", "restricted"},
+ {"proprietary.meta_lic", "restricted"},
},
},
{
@@ -1109,7 +1058,7 @@
{"gplBin.meta_lic", "proprietary.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"proprietary.meta_lic", "proprietary.meta_lic", "proprietary"},
+ {"proprietary.meta_lic", "proprietary"},
},
},
{
@@ -1120,8 +1069,8 @@
{"mitBin.meta_lic", "by_exception.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
+ {"mitBin.meta_lic", "notice"},
+ {"by_exception.meta_lic", "by_exception_only"},
},
},
{
@@ -1141,7 +1090,7 @@
{"mitBin.meta_lic", "by_exception.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
+ {"by_exception.meta_lic", "by_exception_only"},
},
},
{
@@ -1152,8 +1101,8 @@
{"by_exception.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
- {"mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"by_exception.meta_lic", "by_exception_only"},
+ {"mitLib.meta_lic", "notice"},
},
},
{
@@ -1173,7 +1122,7 @@
{"by_exception.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"by_exception.meta_lic", "by_exception.meta_lic", "by_exception_only"},
+ {"by_exception.meta_lic", "by_exception_only"},
},
},
{
@@ -1184,8 +1133,8 @@
{"mitBin.meta_lic", "mplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"mitBin.meta_lic", "mitBin.meta_lic", "notice"},
- {"mplLib.meta_lic", "mplLib.meta_lic", "reciprocal"},
+ {"mitBin.meta_lic", "notice"},
+ {"mplLib.meta_lic", "reciprocal"},
},
},
{
@@ -1196,7 +1145,7 @@
{"mitBin.meta_lic", "mplLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"mplLib.meta_lic", "mplLib.meta_lic", "reciprocal"},
+ {"mplLib.meta_lic", "reciprocal"},
},
},
{
@@ -1207,8 +1156,8 @@
{"mplBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"mplBin.meta_lic", "mplBin.meta_lic", "reciprocal"},
- {"mitLib.meta_lic", "mitLib.meta_lic", "notice"},
+ {"mplBin.meta_lic", "reciprocal"},
+ {"mitLib.meta_lic", "notice"},
},
},
{
@@ -1219,7 +1168,25 @@
{"mplBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
},
expectedActions: []act{
- {"mplBin.meta_lic", "mplBin.meta_lic", "reciprocal"},
+ {"mplBin.meta_lic", "reciprocal"},
+ },
+ },
+ {
+ name: "regress-walk-twice",
+ condition: ImpliesShared,
+ roots: []string{"mitBin.meta_lic", "apacheBin.meta_lic", "gplLib.meta_lic"},
+ edges: []annotated{
+ {"apacheBin.meta_lic", "mitLib.meta_lic", []string{"dynamic"}},
+ {"apacheBin.meta_lic", "gplLib.meta_lic", []string{"dynamic"}},
+ {"mitBin.meta_lic", "mitLib.meta_lic", []string{"static"}},
+ {"mitBin.meta_lic", "lgplLib.meta_lic", []string{"static"}},
+ },
+ expectedActions: []act{
+ {"apacheBin.meta_lic", "restricted"},
+ {"mitLib.meta_lic", "restricted|restricted_if_statically_linked"},
+ {"gplLib.meta_lic", "restricted"},
+ {"mitBin.meta_lic", "restricted_if_statically_linked"},
+ {"lgplLib.meta_lic", "restricted_if_statically_linked"},
},
},
}
@@ -1238,3 +1205,417 @@
})
}
}
+
+func TestWalkTopDownBreadthFirst(t *testing.T) {
+ tests := []struct {
+ name string
+ roots []string
+ edges []annotated
+ expectedResult []string
+ }{
+ {
+ name: "bin/bin1",
+ roots: []string{"bin/bin1.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ name: "bin/bin2",
+ roots: []string{"bin/bin2.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "bin/bin3",
+ roots: []string{"bin/bin3.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin3.meta_lic",
+ },
+ },
+ {
+ name: "lib/liba.so",
+ roots: []string{"lib/liba.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/lib/liba.so.meta_lic",
+ },
+ },
+ {
+ name: "lib/libb.so",
+ roots: []string{"lib/libb.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/lib/libb.so.meta_lic",
+ },
+ },
+ {
+			name: "lib/libc.a",
+ roots: []string{"lib/libc.a.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ name: "lib/libd.so",
+ roots: []string{"lib/libd.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "highest.apex",
+ roots: []string{"highest.apex.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/highest.apex.meta_lic",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "container.zip",
+ roots: []string{"container.zip.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/container.zip.meta_lic",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "application",
+ roots: []string{"application.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/application.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ },
+ },
+ {
+ name: "bin/bin1&lib/liba",
+ roots: []string{"bin/bin1.meta_lic","lib/liba.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ name: "bin/bin2&lib/libd",
+ roots: []string{"bin/bin2.meta_lic", "lib/libd.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "application&bin/bin3",
+ roots: []string{"application.meta_lic", "bin/bin3.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/application.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ },
+ },
+ {
+ name: "highest.apex&container.zip",
+ roots: []string{"highest.apex.meta_lic", "container.zip.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/highest.apex.meta_lic",
+ "testdata/notice/container.zip.meta_lic",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ stderr := &bytes.Buffer{}
+ actualOut := &bytes.Buffer{}
+
+ rootFiles := make([]string, 0, len(tt.roots))
+ for _, r := range tt.roots {
+ rootFiles = append(rootFiles, "testdata/notice/"+r)
+ }
+
+ lg, err := ReadLicenseGraph(GetFS(""), stderr, rootFiles)
+
+ if err != nil {
+ t.Errorf("unexpected test data error: got %s, want no error", err)
+ return
+ }
+
+ expectedRst := tt.expectedResult
+
+ WalkTopDownBreadthFirst(nil, lg, func(lg *LicenseGraph, tn *TargetNode, path TargetEdgePath) bool {
+ fmt.Fprintln(actualOut, tn.Name())
+ return true
+ })
+
+ actualRst := strings.Split(actualOut.String(), "\n")
+
+ if len(actualRst) > 0 {
+ actualRst = actualRst[:len(actualRst)-1]
+ }
+
+ t.Logf("actual nodes visited: %s", actualOut.String())
+ t.Logf("expected nodes visited: %s", strings.Join(expectedRst, "\n"))
+
+ if len(actualRst) != len(expectedRst) {
+ t.Errorf("WalkTopDownBreadthFirst: number of visited nodes is different: got %d, want %d", len(actualRst), len(expectedRst))
+ }
+
+ for i := 0; i < len(actualRst) && i < len(expectedRst); i++ {
+ if actualRst[i] != expectedRst[i] {
+ t.Errorf("WalkTopDownBreadthFirst: lines differ at index %d: got %q, want %q", i, actualRst[i], expectedRst[i])
+ break
+ }
+ }
+
+ if len(actualRst) < len(expectedRst) {
+ t.Errorf("WalkTopDownBreadthFirst: extra lines at %d: got %q, want nothing", len(actualRst), expectedRst[len(actualRst)])
+ }
+
+ if len(expectedRst) < len(actualRst) {
+ t.Errorf("WalkTopDownBreadthFirst: missing lines at %d: got nothing, want %q", len(expectedRst), actualRst[len(expectedRst)])
+ }
+ })
+ }
+}
+
+func TestWalkTopDownBreadthFirstWithoutDuplicates(t *testing.T) {
+ tests := []struct {
+ name string
+ roots []string
+ edges []annotated
+ expectedResult []string
+ }{
+ {
+ name: "bin/bin1",
+ roots: []string{"bin/bin1.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ name: "bin/bin2",
+ roots: []string{"bin/bin2.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "bin/bin3",
+ roots: []string{"bin/bin3.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin3.meta_lic",
+ },
+ },
+ {
+ name: "lib/liba.so",
+ roots: []string{"lib/liba.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/lib/liba.so.meta_lic",
+ },
+ },
+ {
+ name: "lib/libb.so",
+ roots: []string{"lib/libb.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/lib/libb.so.meta_lic",
+ },
+ },
+ {
+			name: "lib/libc.a",
+ roots: []string{"lib/libc.a.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ name: "lib/libd.so",
+ roots: []string{"lib/libd.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "highest.apex",
+ roots: []string{"highest.apex.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/highest.apex.meta_lic",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "container.zip",
+ roots: []string{"container.zip.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/container.zip.meta_lic",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ {
+ name: "application",
+ roots: []string{"application.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/application.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ },
+ },
+ {
+ name: "bin/bin1&lib/liba",
+ roots: []string{"bin/bin1.meta_lic", "lib/liba.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ },
+ },
+ {
+ name: "bin/bin2&lib/libd",
+ roots: []string{"bin/bin2.meta_lic", "lib/libd.so.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ },
+ },
+ {
+ name: "application&bin/bin3",
+ roots: []string{"application.meta_lic", "bin/bin3.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/application.meta_lic",
+ "testdata/notice/bin/bin3.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ },
+ },
+ {
+ name: "highest.apex&container.zip",
+ roots: []string{"highest.apex.meta_lic", "container.zip.meta_lic"},
+ expectedResult: []string{
+ "testdata/notice/highest.apex.meta_lic",
+ "testdata/notice/container.zip.meta_lic",
+ "testdata/notice/bin/bin1.meta_lic",
+ "testdata/notice/bin/bin2.meta_lic",
+ "testdata/notice/lib/liba.so.meta_lic",
+ "testdata/notice/lib/libb.so.meta_lic",
+ "testdata/notice/lib/libc.a.meta_lic",
+ "testdata/notice/lib/libd.so.meta_lic",
+ },
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ stderr := &bytes.Buffer{}
+ actualOut := &bytes.Buffer{}
+
+ rootFiles := make([]string, 0, len(tt.roots))
+ for _, r := range tt.roots {
+ rootFiles = append(rootFiles, "testdata/notice/"+r)
+ }
+
+ lg, err := ReadLicenseGraph(GetFS(""), stderr, rootFiles)
+
+ if err != nil {
+ t.Errorf("unexpected test data error: got %s, want no error", err)
+ return
+ }
+
+ expectedRst := tt.expectedResult
+
+			// Keep track of visited nodes; only add a node to actualOut
+			// the first time the walk reaches it.
+ visitedNodes := make(map[string]struct{})
+ WalkTopDownBreadthFirst(nil, lg, func(lg *LicenseGraph, tn *TargetNode, path TargetEdgePath) bool {
+ if _, alreadyVisited := visitedNodes[tn.Name()]; alreadyVisited {
+ return false
+ }
+ fmt.Fprintln(actualOut, tn.Name())
+ visitedNodes[tn.Name()] = struct{}{}
+ return true
+ })
+
+ actualRst := strings.Split(actualOut.String(), "\n")
+
+ if len(actualRst) > 0 {
+ actualRst = actualRst[:len(actualRst)-1]
+ }
+
+ t.Logf("actual nodes visited: %s", actualOut.String())
+ t.Logf("expected nodes visited: %s", strings.Join(expectedRst, "\n"))
+
+ if len(actualRst) != len(expectedRst) {
+ t.Errorf("WalkTopDownBreadthFirst: number of visited nodes is different: got %d, want %d", len(actualRst), len(expectedRst))
+ }
+
+ for i := 0; i < len(actualRst) && i < len(expectedRst); i++ {
+ if actualRst[i] != expectedRst[i] {
+ t.Errorf("WalkTopDownBreadthFirst: lines differ at index %d: got %q, want %q", i, actualRst[i], expectedRst[i])
+ break
+ }
+ }
+
+ if len(actualRst) < len(expectedRst) {
+ t.Errorf("WalkTopDownBreadthFirst: extra lines at %d: got %q, want nothing", len(actualRst), expectedRst[len(actualRst)])
+ }
+
+ if len(expectedRst) < len(actualRst) {
+ t.Errorf("WalkTopDownBreadthFirst: missing lines at %d: got nothing, want %q", len(expectedRst), actualRst[len(expectedRst)])
+ }
+ })
+ }
+}
diff --git a/tools/compliance/projectmetadata/Android.bp b/tools/compliance/projectmetadata/Android.bp
new file mode 100644
index 0000000..dccff76
--- /dev/null
+++ b/tools/compliance/projectmetadata/Android.bp
@@ -0,0 +1,34 @@
+// Copyright (C) 2022 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package {
+ default_applicable_licenses: ["Android-Apache-2.0"],
+}
+
+bootstrap_go_package {
+ name: "projectmetadata-module",
+ srcs: [
+ "projectmetadata.go",
+ ],
+ deps: [
+ "compliance-test-fs-module",
+ "golang-protobuf-proto",
+ "golang-protobuf-encoding-prototext",
+ "project_metadata_proto",
+ ],
+ testSrcs: [
+ "projectmetadata_test.go",
+ ],
+ pkgPath: "android/soong/tools/compliance/projectmetadata",
+}
diff --git a/tools/compliance/projectmetadata/projectmetadata.go b/tools/compliance/projectmetadata/projectmetadata.go
new file mode 100644
index 0000000..b137a12
--- /dev/null
+++ b/tools/compliance/projectmetadata/projectmetadata.go
@@ -0,0 +1,292 @@
+// Copyright 2022 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package projectmetadata
+
+import (
+ "fmt"
+ "io"
+ "io/fs"
+ "path/filepath"
+	"sort"
+	"strings"
+ "sync"
+
+ "android/soong/compliance/project_metadata_proto"
+
+ "google.golang.org/protobuf/encoding/prototext"
+)
+
+var (
+	// ConcurrentReaders is the size of the task pool for limiting resource usage, e.g. open files.
+ ConcurrentReaders = 5
+)
+
+// ProjectMetadata contains the METADATA for a git project.
+type ProjectMetadata struct {
+ proto project_metadata_proto.Metadata
+
+ // project is the path to the directory containing the METADATA file.
+ project string
+}
+
+// ProjectUrlMap maps url type name to url value.
+type ProjectUrlMap map[string]string
+
+// DownloadUrl returns the address of the preferred download location, trying
+// GIT, SVN, HG, then DARCS in order, or "" if the map names no download location.
+func (m ProjectUrlMap) DownloadUrl() string {
+ for _, urlType := range []string{"GIT", "SVN", "HG", "DARCS"} {
+ if url, ok := m[urlType]; ok {
+ return url
+ }
+ }
+ return ""
+}
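The loop above encodes a fixed preference order: a GIT url wins over SVN, HG, and DARCS when a project lists several download locations, and non-download url types never match. A hedged usage sketch (the urls are the sample values from the tests below; assumes the `projectmetadata` package and `fmt` are imported):

```go
m := projectmetadata.ProjectUrlMap{
	"SVN": "http://example.svn.com/my_lib",
	"GIT": "http://example.github.com/my_lib",
}
// GIT outranks SVN in DownloadUrl's preference order.
fmt.Println(m.DownloadUrl()) // http://example.github.com/my_lib
```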
+
+// String returns a string representation of the metadata for error messages.
+func (pm *ProjectMetadata) String() string {
+ return fmt.Sprintf("project: %q\n%s", pm.project, pm.proto.String())
+}
+
+// Project returns the path to the directory containing the METADATA file.
+func (pm *ProjectMetadata) Project() string {
+	return pm.project
+}
+
+// Name returns the name of the project.
+func (pm *ProjectMetadata) Name() string {
+ return pm.proto.GetName()
+}
+
+// Version returns the version of the project if available.
+func (pm *ProjectMetadata) Version() string {
+ tp := pm.proto.GetThirdParty()
+ if tp != nil {
+ version := tp.GetVersion()
+ return version
+ }
+ return ""
+}
+
+// VersionedName returns the name of the project including the version if any.
+func (pm *ProjectMetadata) VersionedName() string {
+ name := pm.proto.GetName()
+ if name != "" {
+ tp := pm.proto.GetThirdParty()
+ if tp != nil {
+ version := tp.GetVersion()
+ if version != "" {
+ if version[0] == 'v' || version[0] == 'V' {
+ return name + "_" + version
+ } else {
+ return name + "_v_" + version
+ }
+ }
+ }
+ return name
+ }
+ return pm.proto.GetDescription()
+}
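The joining rule in VersionedName is easy to misread, so here is an illustrative standalone copy of just that rule (a sketch, not the real method, which reads the proto fields):

```go
// versionedName mirrors VersionedName's joining rule for illustration only.
func versionedName(name, version string) string {
	if name == "" {
		return "" // the real method falls back to the proto description here
	}
	if version == "" {
		return name
	}
	if version[0] == 'v' || version[0] == 'V' {
		return name + "_" + version // version already carries a 'v' marker
	}
	return name + "_v_" + version // otherwise insert the "_v_" separator
}

// versionedName("mylib", "1.0") == "mylib_v_1.0"
// versionedName("mylib", "v2")  == "mylib_v2"
```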
+
+// UrlsByTypeName returns a map of URLs by Type Name
+func (pm *ProjectMetadata) UrlsByTypeName() ProjectUrlMap {
+ tp := pm.proto.GetThirdParty()
+ if tp == nil {
+ return nil
+ }
+ if len(tp.Url) == 0 {
+ return nil
+ }
+ urls := make(ProjectUrlMap)
+
+ for _, url := range tp.Url {
+ uri := url.GetValue()
+ if uri == "" {
+ continue
+ }
+ urls[project_metadata_proto.URL_Type_name[int32(url.GetType())]] = uri
+ }
+ return urls
+}
+
+// projectIndex describes a project to be read; after `wait()` returns, it
+// holds either a `ProjectMetadata` in `pm` (which can be nil even without an
+// error) or a non-nil `err`.
+type projectIndex struct {
+ project string
+ path string
+ pm *ProjectMetadata
+ err error
+ done chan struct{}
+}
+
+// finish marks the task to read the `projectIndex` completed.
+func (pi *projectIndex) finish() {
+ close(pi.done)
+}
+
+// wait suspends execution until the `projectIndex` task completes.
+func (pi *projectIndex) wait() {
+ <-pi.done
+}
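finish and wait form a one-shot latch: closing `done` releases every current and future `wait()` caller, so readers and consumers never need to agree on ordering. The same pattern in miniature (a sketch, not code from this change):

```go
done := make(chan struct{})

go func() {
	// ... locate, read, and parse the METADATA file ...
	close(done) // finish(): releases all waiters, past and future
}()

<-done // wait(): blocks until done is closed, then always returns immediately
```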
+
+// Index reads and caches ProjectMetadata (thread safe)
+type Index struct {
+ // projecs maps project name to a wait group if read has already started, and
+	// projects maps project name to a *projectIndex once a read has started;
+	// after the read completes, the index holds either a `ProjectMetadata` or an `error`.
+
+ // task provides a fixed-size task pool to limit concurrent open files etc.
+ task chan bool
+
+ // rootFS locates the root of the file system from which to read the files.
+ rootFS fs.FS
+}
+
+// NewIndex constructs a project metadata `Index` for the given file system.
+func NewIndex(rootFS fs.FS) *Index {
+ ix := &Index{task: make(chan bool, ConcurrentReaders), rootFS: rootFS}
+ for i := 0; i < ConcurrentReaders; i++ {
+ ix.task <- true
+ }
+ return ix
+}
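The `task` channel is a counting semaphore: NewIndex deposits ConcurrentReaders tokens up front, and each reader below takes one (`<-ix.task`) before touching the file system and puts it back when done, bounding the number of open files. The pattern in isolation (a standalone sketch, not part of this change):

```go
package main

import "fmt"

func main() {
	const readers = 5
	sem := make(chan bool, readers)
	for i := 0; i < readers; i++ {
		sem <- true // deposit one token per permitted concurrent reader
	}

	done := make(chan struct{})
	for i := 0; i < 20; i++ {
		go func(id int) {
			<-sem                          // acquire: blocks while all tokens are out
			defer func() { sem <- true }() // release on completion
			fmt.Println("reading", id)     // at most `readers` of these run at once
			done <- struct{}{}
		}(i)
	}
	for i := 0; i < 20; i++ {
		<-done
	}
}
```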
+
+// MetadataForProjects returns 0..n ProjectMetadata for n `projects`, or an error.
+// Each project that has a METADATA.android or a METADATA file in the root of the project will have
+// a corresponding ProjectMetadata in the result. Projects with neither file get skipped. A nil
+// result with no error indicates none of the given `projects` has a METADATA file.
+// (thread safe -- can be called concurrently from multiple goroutines)
+func (ix *Index) MetadataForProjects(projects ...string) ([]*ProjectMetadata, error) {
+ if ConcurrentReaders < 1 {
+ return nil, fmt.Errorf("need at least one task in project metadata pool")
+ }
+ if len(projects) == 0 {
+ return nil, nil
+ }
+ // Identify the projects that have never been read
+ projectsToRead := make([]*projectIndex, 0, len(projects))
+ projectIndexes := make([]*projectIndex, 0, len(projects))
+ for _, p := range projects {
+ pi, loaded := ix.projects.LoadOrStore(p, &projectIndex{project: p, done: make(chan struct{})})
+ if !loaded {
+ projectsToRead = append(projectsToRead, pi.(*projectIndex))
+ }
+ projectIndexes = append(projectIndexes, pi.(*projectIndex))
+ }
+ // findMeta locates and reads the appropriate METADATA file, if any.
+ findMeta := func(pi *projectIndex) {
+ <-ix.task
+ defer func() {
+ ix.task <- true
+ pi.finish()
+ }()
+
+ // Support METADATA.android for projects that already have a different sort of METADATA file.
+ path := filepath.Join(pi.project, "METADATA.android")
+ fi, err := fs.Stat(ix.rootFS, path)
+ if err == nil {
+ if fi.Mode().IsRegular() {
+ ix.readMetadataFile(pi, path)
+ return
+ }
+ }
+		// No METADATA.android; fall back to the plain METADATA file.
+ path = filepath.Join(pi.project, "METADATA")
+ fi, err = fs.Stat(ix.rootFS, path)
+ if err == nil {
+ if fi.Mode().IsRegular() {
+ ix.readMetadataFile(pi, path)
+ return
+ }
+ }
+		// No METADATA file exists; leave pi.pm nil and finish.
+ }
+	// Read each not-yet-read project's METADATA file on the task pool.
+ for _, p := range projectsToRead {
+ go findMeta(p)
+ }
+ // Wait until all of the projects have been read.
+ var msg strings.Builder
+ result := make([]*ProjectMetadata, 0, len(projects))
+ for _, pi := range projectIndexes {
+ pi.wait()
+ // Combine any errors into a single error.
+ if pi.err != nil {
+ fmt.Fprintf(&msg, " %v\n", pi.err)
+ } else if pi.pm != nil {
+ result = append(result, pi.pm)
+ }
+ }
+ if msg.Len() > 0 {
+ return nil, fmt.Errorf("error reading project(s):\n%s", msg.String())
+ }
+ if len(result) == 0 {
+ return nil, nil
+ }
+ return result, nil
+}
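A hedged usage sketch, borrowing the in-memory `testfs.TestFS` introduced later in this change (imports of `fmt` and `log` assumed; the paths and file contents are illustrative):

```go
ix := NewIndex(&testfs.TestFS{
	"/a/METADATA": []byte(`name: "mylib" third_party { version: "1.0" }`),
})
// "/b" has no METADATA file, so it is skipped rather than reported as an error.
pms, err := ix.MetadataForProjects("/a", "/b")
if err != nil {
	log.Fatal(err) // failures from all projects arrive combined in one error
}
for _, pm := range pms {
	fmt.Println(pm.VersionedName()) // mylib_v_1.0
}
```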
+
+// AllMetadataFiles returns the sorted list of all METADATA files read thus far.
+func (ix *Index) AllMetadataFiles() []string {
+ var files []string
+ ix.projects.Range(func(key, value any) bool {
+ pi := value.(*projectIndex)
+ if pi.path != "" {
+ files = append(files, pi.path)
+ }
+ return true
+	})
+	sort.Strings(files) // keep the sorted order the doc comment promises
+	return files
+}
+
+// readMetadataFile tries to read and parse a METADATA file at `path` for `project`.
+func (ix *Index) readMetadataFile(pi *projectIndex, path string) {
+	f, err := ix.rootFS.Open(path)
+	if err != nil {
+		pi.err = fmt.Errorf("error opening project %q metadata %q: %w", pi.project, path, err)
+		return
+	}
+	defer f.Close() // close even when the read below fails
+
+	// read the file
+	data, err := io.ReadAll(f)
+	if err != nil {
+		pi.err = fmt.Errorf("error reading project %q metadata %q: %w", pi.project, path, err)
+		return
+	}
+
+ uo := prototext.UnmarshalOptions{DiscardUnknown: true}
+ pm := &ProjectMetadata{project: pi.project}
+ err = uo.Unmarshal(data, &pm.proto)
+ if err != nil {
+ pi.err = fmt.Errorf(`error in project %q METADATA %q: %v
+
+METADATA and METADATA.android files must parse as text protobufs
+defined by
+ build/soong/compliance/project_metadata_proto/project_metadata.proto
+
+* unknown fields don't matter
+* check for invalid ENUM names
+* check quoting
+* check for unescaped nested quotes
+* the comment marker for protobuf is '#', not '//'
+
+if importing a library that uses a different sort of METADATA file, add
+a METADATA.android file beside it to parse instead
+`, pi.project, path, err)
+ return
+ }
+
+ pi.path = path
+ pi.pm = pm
+}
diff --git a/tools/compliance/projectmetadata/projectmetadata_test.go b/tools/compliance/projectmetadata/projectmetadata_test.go
new file mode 100644
index 0000000..0af0cd7
--- /dev/null
+++ b/tools/compliance/projectmetadata/projectmetadata_test.go
@@ -0,0 +1,722 @@
+// Copyright 2022 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package projectmetadata
+
+import (
+ "fmt"
+ "strings"
+ "testing"
+
+ "android/soong/compliance/project_metadata_proto"
+ "android/soong/tools/compliance/testfs"
+)
+
+const (
+ // EMPTY represents a METADATA file with no recognized fields
+ EMPTY = ``
+
+ // INVALID_NAME represents a METADATA file with the wrong type of name
+ INVALID_NAME = `name: a library\n`
+
+ // INVALID_DESCRIPTION represents a METADATA file with the wrong type of description
+ INVALID_DESCRIPTION = `description: unquoted text\n`
+
+ // INVALID_VERSION represents a METADATA file with the wrong type of version
+ INVALID_VERSION = `third_party { version: 1 }`
+
+ // MY_LIB_1_0 represents a METADATA file for version 1.0 of mylib
+ MY_LIB_1_0 = `name: "mylib" description: "my library" third_party { version: "1.0" }`
+
+ // NO_NAME_0_1 represents a METADATA file with a description but no name
+ NO_NAME_0_1 = `description: "my library" third_party { version: "0.1" }`
+
+ // URL values per type
+ GIT_URL = "http://example.github.com/my_lib"
+ SVN_URL = "http://example.svn.com/my_lib"
+ HG_URL = "http://example.hg.com/my_lib"
+ DARCS_URL = "http://example.darcs.com/my_lib"
+ PIPER_URL = "http://google3/third_party/my/package"
+ HOMEPAGE_URL = "http://example.com/homepage"
+ OTHER_URL = "http://google.com/"
+ ARCHIVE_URL = "http://ftp.example.com/"
+ LOCAL_SOURCE_URL = "https://android.googlesource.com/platform/external/apache-http/"
+)
+
+// libWithUrl returns a METADATA file with the right download url
+func libWithUrl(urlTypes ...string) string {
+ var sb strings.Builder
+
+ fmt.Fprintln(&sb, `name: "mylib" description: "my library"
+ third_party {
+ version: "1.0"`)
+
+ for _, urltype := range urlTypes {
+ var urlValue string
+ switch urltype {
+ case "GIT":
+ urlValue = GIT_URL
+ case "SVN":
+ urlValue = SVN_URL
+ case "HG":
+ urlValue = HG_URL
+ case "DARCS":
+ urlValue = DARCS_URL
+ case "PIPER":
+ urlValue = PIPER_URL
+ case "HOMEPAGE":
+ urlValue = HOMEPAGE_URL
+ case "OTHER":
+ urlValue = OTHER_URL
+ case "ARCHIVE":
+ urlValue = ARCHIVE_URL
+ case "LOCAL_SOURCE":
+ urlValue = LOCAL_SOURCE_URL
+ default:
+ panic(fmt.Errorf("unknown url type: %q. Please update libWithUrl() in build/make/tools/compliance/projectmetadata/projectmetadata_test.go", urltype))
+ }
+ fmt.Fprintf(&sb, " url { type: %s value: %q }\n", urltype, urlValue)
+ }
+ fmt.Fprintln(&sb, `}`)
+
+ return sb.String()
+}
+
+func TestVerifyAllUrlTypes(t *testing.T) {
+ t.Run("verifyAllUrlTypes", func(t *testing.T) {
+		types := make([]string, 0, len(project_metadata_proto.URL_Type_value))
+		for urlType := range project_metadata_proto.URL_Type_value {
+			types = append(types, urlType) // avoid shadowing the *testing.T parameter
+		}
+ libWithUrl(types...)
+ })
+}
+
+func TestUnknownPanics(t *testing.T) {
+ t.Run("Unknown panics", func(t *testing.T) {
+ defer func() {
+ if r := recover(); r == nil {
+ t.Errorf("unexpected success: got no error, want panic")
+ }
+ }()
+ libWithUrl("SOME WILD VALUE THAT DOES NOT EXIST")
+ })
+}
+
+func TestReadMetadataForProjects(t *testing.T) {
+ tests := []struct {
+ name string
+ fs *testfs.TestFS
+ projects []string
+ expectedError string
+ expected []pmeta
+ }{
+ {
+ name: "trivial",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte("name: \"Android\"\n"),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "Android",
+ name: "Android",
+ version: "",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "versioned",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(MY_LIB_1_0),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "lib_with_homepage",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("HOMEPAGE")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "lib_with_git",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("GIT")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: GIT_URL,
+ }},
+ },
+ {
+ name: "lib_with_svn",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("SVN")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: SVN_URL,
+ }},
+ },
+ {
+ name: "lib_with_hg",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("HG")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: HG_URL,
+ }},
+ },
+ {
+ name: "lib_with_darcs",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("DARCS")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: DARCS_URL,
+ }},
+ },
+ {
+ name: "lib_with_piper",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("PIPER")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "lib_with_other",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("OTHER")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "lib_with_local_source",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("LOCAL_SOURCE")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "lib_with_archive",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("ARCHIVE")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "lib_with_all_downloads",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("DARCS", "HG", "SVN", "GIT")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: GIT_URL,
+ }},
+ },
+ {
+ name: "lib_with_all_downloads_in_different_order",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("DARCS", "GIT", "SVN", "HG")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: GIT_URL,
+ }},
+ },
+ {
+ name: "lib_with_all_but_git",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("DARCS", "HG", "SVN")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: SVN_URL,
+ }},
+ },
+ {
+ name: "lib_with_all_but_git_and_svn",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("DARCS", "HG")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: HG_URL,
+ }},
+ },
+ {
+ name: "lib_with_all_nondownloads_and_git",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("HOMEPAGE", "LOCAL_SOURCE", "PIPER", "ARCHIVE", "GIT")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: GIT_URL,
+ }},
+ },
+ {
+ name: "lib_with_all_nondownloads",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl("HOMEPAGE", "LOCAL_SOURCE", "PIPER", "ARCHIVE")),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+			name: "lib_with_no_urls",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(libWithUrl()),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "versioneddesc",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(NO_NAME_0_1),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "my library",
+ name: "",
+ version: "0.1",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "unterminated",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte("name: \"Android\n"),
+ },
+ projects: []string{"/a"},
+ expectedError: `invalid character '\n' in string`,
+ },
+ {
+ name: "abc",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(EMPTY),
+ "/b/METADATA": []byte(MY_LIB_1_0),
+ "/c/METADATA": []byte(NO_NAME_0_1),
+ },
+ projects: []string{"/a", "/b", "/c"},
+ expected: []pmeta{
+ {
+ project: "/a",
+ versionedName: "",
+ name: "",
+ version: "",
+ downloadUrl: "",
+ },
+ {
+ project: "/b",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ },
+ {
+ project: "/c",
+ versionedName: "my library",
+ name: "",
+ version: "0.1",
+ downloadUrl: "",
+ },
+ },
+ },
+ {
+ name: "ab",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(EMPTY),
+ "/b/METADATA": []byte(MY_LIB_1_0),
+ },
+ projects: []string{"/a", "/b", "/c"},
+ expected: []pmeta{
+ {
+ project: "/a",
+ versionedName: "",
+ name: "",
+ version: "",
+ downloadUrl: "",
+ },
+ {
+ project: "/b",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ },
+ },
+ },
+ {
+ name: "ac",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(EMPTY),
+ "/c/METADATA": []byte(NO_NAME_0_1),
+ },
+ projects: []string{"/a", "/b", "/c"},
+ expected: []pmeta{
+ {
+ project: "/a",
+ versionedName: "",
+ name: "",
+ version: "",
+ downloadUrl: "",
+ },
+ {
+ project: "/c",
+ versionedName: "my library",
+ name: "",
+ version: "0.1",
+ downloadUrl: "",
+ },
+ },
+ },
+ {
+ name: "bc",
+ fs: &testfs.TestFS{
+ "/b/METADATA": []byte(MY_LIB_1_0),
+ "/c/METADATA": []byte(NO_NAME_0_1),
+ },
+ projects: []string{"/a", "/b", "/c"},
+ expected: []pmeta{
+ {
+ project: "/b",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ },
+ {
+ project: "/c",
+ versionedName: "my library",
+ name: "",
+ version: "0.1",
+ downloadUrl: "",
+ },
+ },
+ },
+ {
+ name: "wrongnametype",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(INVALID_NAME),
+ },
+ projects: []string{"/a"},
+ expectedError: `invalid value for string type`,
+ },
+ {
+ name: "wrongdescriptiontype",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(INVALID_DESCRIPTION),
+ },
+ projects: []string{"/a"},
+ expectedError: `invalid value for string type`,
+ },
+ {
+ name: "wrongversiontype",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(INVALID_VERSION),
+ },
+ projects: []string{"/a"},
+ expectedError: `invalid value for string type`,
+ },
+ {
+ name: "wrongtype",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(INVALID_NAME + INVALID_DESCRIPTION + INVALID_VERSION),
+ },
+ projects: []string{"/a"},
+ expectedError: `invalid value for string type`,
+ },
+ {
+ name: "empty",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(EMPTY),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "",
+ name: "",
+ version: "",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "emptyother",
+ fs: &testfs.TestFS{
+ "/a/METADATA.bp": []byte(EMPTY),
+ },
+ projects: []string{"/a"},
+ },
+ {
+ name: "emptyfs",
+ fs: &testfs.TestFS{},
+ projects: []string{"/a"},
+ },
+ {
+ name: "override",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(INVALID_NAME + INVALID_DESCRIPTION + INVALID_VERSION),
+ "/a/METADATA.android": []byte(MY_LIB_1_0),
+ },
+ projects: []string{"/a"},
+ expected: []pmeta{{
+ project: "/a",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ }},
+ },
+ {
+ name: "enchilada",
+ fs: &testfs.TestFS{
+ "/a/METADATA": []byte(INVALID_NAME + INVALID_DESCRIPTION + INVALID_VERSION),
+ "/a/METADATA.android": []byte(EMPTY),
+ "/b/METADATA": []byte(MY_LIB_1_0),
+ "/c/METADATA": []byte(NO_NAME_0_1),
+ },
+ projects: []string{"/a", "/b", "/c"},
+ expected: []pmeta{
+ {
+ project: "/a",
+ versionedName: "",
+ name: "",
+ version: "",
+ downloadUrl: "",
+ },
+ {
+ project: "/b",
+ versionedName: "mylib_v_1.0",
+ name: "mylib",
+ version: "1.0",
+ downloadUrl: "",
+ },
+ {
+ project: "/c",
+ versionedName: "my library",
+ name: "",
+ version: "0.1",
+ downloadUrl: "",
+ },
+ },
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ ix := NewIndex(tt.fs)
+ pms, err := ix.MetadataForProjects(tt.projects...)
+ if err != nil {
+ if len(tt.expectedError) == 0 {
+ t.Errorf("unexpected error: got %s, want no error", err)
+ } else if !strings.Contains(err.Error(), tt.expectedError) {
+ t.Errorf("unexpected error: got %s, want %q", err, tt.expectedError)
+ }
+ return
+ }
+ t.Logf("actual %d project metadata", len(pms))
+ for _, pm := range pms {
+ t.Logf(" %v", pm.String())
+ }
+ t.Logf("expected %d project metadata", len(tt.expected))
+ for _, pm := range tt.expected {
+ t.Logf(" %s", pm.String())
+ }
+ if len(tt.expectedError) > 0 {
+				t.Errorf("unexpected success: got no error, want error %q", tt.expectedError)
+ return
+ }
+ if len(pms) != len(tt.expected) {
+ t.Errorf("missing project metadata: got %d project metadata, want %d", len(pms), len(tt.expected))
+ }
+ for i := 0; i < len(pms) && i < len(tt.expected); i++ {
+ if msg := tt.expected[i].difference(pms[i]); msg != "" {
+ t.Errorf("unexpected metadata starting at index %d: %s", i, msg)
+ return
+ }
+ }
+ if len(pms) < len(tt.expected) {
+ t.Errorf("missing metadata starting at index %d: got nothing, want %s", len(pms), tt.expected[len(pms)].String())
+ }
+ if len(tt.expected) < len(pms) {
+ t.Errorf("unexpected metadata starting at index %d: got %s, want nothing", len(tt.expected), pms[len(tt.expected)].String())
+ }
+ })
+ }
+}
+
+type pmeta struct {
+ project string
+ versionedName string
+ name string
+ version string
+ downloadUrl string
+}
+
+func (pm pmeta) String() string {
+ return fmt.Sprintf("project: %q versionedName: %q name: %q version: %q downloadUrl: %q\n", pm.project, pm.versionedName, pm.name, pm.version, pm.downloadUrl)
+}
+
+func (pm pmeta) equals(other *ProjectMetadata) bool {
+ if pm.project != other.project {
+ return false
+ }
+ if pm.versionedName != other.VersionedName() {
+ return false
+ }
+ if pm.name != other.Name() {
+ return false
+ }
+ if pm.version != other.Version() {
+ return false
+ }
+ if pm.downloadUrl != other.UrlsByTypeName().DownloadUrl() {
+ return false
+ }
+ return true
+}
+
+func (pm pmeta) difference(other *ProjectMetadata) string {
+ if pm.equals(other) {
+ return ""
+ }
+ var sb strings.Builder
+ fmt.Fprintf(&sb, "got")
+ if pm.project != other.project {
+ fmt.Fprintf(&sb, " project: %q", other.project)
+ }
+ if pm.versionedName != other.VersionedName() {
+ fmt.Fprintf(&sb, " versionedName: %q", other.VersionedName())
+ }
+ if pm.name != other.Name() {
+ fmt.Fprintf(&sb, " name: %q", other.Name())
+ }
+ if pm.version != other.Version() {
+ fmt.Fprintf(&sb, " version: %q", other.Version())
+ }
+ if pm.downloadUrl != other.UrlsByTypeName().DownloadUrl() {
+ fmt.Fprintf(&sb, " downloadUrl: %q", other.UrlsByTypeName().DownloadUrl())
+ }
+ fmt.Fprintf(&sb, ", want")
+ if pm.project != other.project {
+ fmt.Fprintf(&sb, " project: %q", pm.project)
+ }
+ if pm.versionedName != other.VersionedName() {
+ fmt.Fprintf(&sb, " versionedName: %q", pm.versionedName)
+ }
+ if pm.name != other.Name() {
+ fmt.Fprintf(&sb, " name: %q", pm.name)
+ }
+ if pm.version != other.Version() {
+ fmt.Fprintf(&sb, " version: %q", pm.version)
+ }
+ if pm.downloadUrl != other.UrlsByTypeName().DownloadUrl() {
+ fmt.Fprintf(&sb, " downloadUrl: %q", pm.downloadUrl)
+ }
+ return sb.String()
+}
diff --git a/tools/compliance/readgraph.go b/tools/compliance/readgraph.go
index 7516440..a413ebe 100644
--- a/tools/compliance/readgraph.go
+++ b/tools/compliance/readgraph.go
@@ -34,10 +34,17 @@
type globalFS struct{}
+var _ fs.FS = globalFS{}
+var _ fs.StatFS = globalFS{}
+
func (s globalFS) Open(name string) (fs.File, error) {
return os.Open(name)
}
+func (s globalFS) Stat(name string) (fs.FileInfo, error) {
+ return os.Stat(name)
+}
+
var FS globalFS
// GetFS returns a filesystem for accessing files under the OUT_DIR environment variable.
@@ -168,7 +175,7 @@
}
lg.edges = make(TargetEdgeList, 0, esize)
for _, tn := range lg.targets {
- tn.licenseConditions = LicenseConditionSetFromNames(tn, tn.proto.LicenseConditions...)
+ tn.licenseConditions = LicenseConditionSetFromNames(tn.proto.LicenseConditions...)
err = addDependencies(lg, tn)
if err != nil {
return nil, fmt.Errorf("error indexing dependencies for %q: %w", tn.name, err)
@@ -198,6 +205,9 @@
// resolution identifies the set of conditions resolved by acting on the target node.
resolution LicenseConditionSet
+
+ // pure indicates whether to treat the node as a pure aggregate (no internal linkage)
+ pure bool
}
// addDependencies converts the proto AnnotatedDependencies into `edges`
diff --git a/tools/compliance/readgraph_test.go b/tools/compliance/readgraph_test.go
index bcf9f39..a2fb04d 100644
--- a/tools/compliance/readgraph_test.go
+++ b/tools/compliance/readgraph_test.go
@@ -19,12 +19,14 @@
"sort"
"strings"
"testing"
+
+ "android/soong/tools/compliance/testfs"
)
func TestReadLicenseGraph(t *testing.T) {
tests := []struct {
name string
- fs *testFS
+ fs *testfs.TestFS
roots []string
expectedError string
expectedEdges []edge
@@ -32,7 +34,7 @@
}{
{
name: "trivial",
- fs: &testFS{
+ fs: &testfs.TestFS{
"app.meta_lic": []byte("package_name: \"Android\"\n"),
},
roots: []string{"app.meta_lic"},
@@ -41,7 +43,7 @@
},
{
name: "unterminated",
- fs: &testFS{
+ fs: &testfs.TestFS{
"app.meta_lic": []byte("package_name: \"Android\n"),
},
roots: []string{"app.meta_lic"},
@@ -49,7 +51,7 @@
},
{
name: "danglingref",
- fs: &testFS{
+ fs: &testfs.TestFS{
"app.meta_lic": []byte(AOSP + "deps: {\n file: \"lib.meta_lic\"\n}\n"),
},
roots: []string{"app.meta_lic"},
@@ -57,7 +59,7 @@
},
{
name: "singleedge",
- fs: &testFS{
+ fs: &testfs.TestFS{
"app.meta_lic": []byte(AOSP + "deps: {\n file: \"lib.meta_lic\"\n}\n"),
"lib.meta_lic": []byte(AOSP),
},
@@ -67,7 +69,7 @@
},
{
name: "fullgraph",
- fs: &testFS{
+ fs: &testfs.TestFS{
"apex.meta_lic": []byte(AOSP + "deps: {\n file: \"app.meta_lic\"\n}\ndeps: {\n file: \"bin.meta_lic\"\n}\n"),
"app.meta_lic": []byte(AOSP),
"bin.meta_lic": []byte(AOSP + "deps: {\n file: \"lib.meta_lic\"\n}\n"),
diff --git a/tools/compliance/resolutionset.go b/tools/compliance/resolutionset.go
index 7c8f333..1be4a34 100644
--- a/tools/compliance/resolutionset.go
+++ b/tools/compliance/resolutionset.go
@@ -72,6 +72,16 @@
return isPresent
}
+// IsPureAggregate returns true if `target`, which must be in
+// `AttachesTo()`, resolves to a pure aggregate in the resolution.
+func (rs ResolutionSet) IsPureAggregate(target *TargetNode) bool {
+ _, isPresent := rs[target]
+ if !isPresent {
+		panic(fmt.Errorf("ResolutionSet.IsPureAggregate(%s): resolution does not attach to target", target.Name()))
+ }
+ return target.pure
+}
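Because IsPureAggregate panics for targets outside the set, callers should iterate only attached targets. A hedged sketch (assumes `rs.AttachesTo()` returns the attached target nodes, as the doc comment above implies):

```go
for _, tn := range rs.AttachesTo() {
	if rs.IsPureAggregate(tn) {
		// pure aggregate: no internal linkage between its contents
	}
}
```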
+
// Resolutions returns the list of resolutions that `attachedTo`
// target must resolve. Returns empty list if no conditions apply.
func (rs ResolutionSet) Resolutions(attachesTo *TargetNode) ResolutionList {
diff --git a/tools/compliance/resolutionset_test.go b/tools/compliance/resolutionset_test.go
index 89cdfeb..efdff82 100644
--- a/tools/compliance/resolutionset_test.go
+++ b/tools/compliance/resolutionset_test.go
@@ -27,48 +27,44 @@
// binc represents a compiler or other toolchain binary used for
// building the other binaries.
bottomUp = []res{
- {"image", "image", "image", "notice"},
- {"image", "image", "bin2", "restricted"},
- {"image", "bin1", "bin1", "reciprocal"},
- {"image", "bin2", "bin2", "restricted"},
- {"image", "lib1", "lib1", "notice"},
- {"image", "lib2", "lib2", "notice"},
- {"binc", "binc", "binc", "proprietary"},
- {"bin1", "bin1", "bin1", "reciprocal"},
- {"bin1", "lib1", "lib1", "notice"},
- {"bin2", "bin2", "bin2", "restricted"},
- {"bin2", "lib2", "lib2", "notice"},
- {"lib1", "lib1", "lib1", "notice"},
- {"lib2", "lib2", "lib2", "notice"},
+ {"image", "image", "notice|restricted"},
+ {"image", "bin1", "reciprocal"},
+ {"image", "bin2", "restricted"},
+ {"image", "lib1", "notice"},
+ {"image", "lib2", "notice"},
+ {"binc", "binc", "proprietary"},
+ {"bin1", "bin1", "reciprocal"},
+ {"bin1", "lib1", "notice"},
+ {"bin2", "bin2", "restricted"},
+ {"bin2", "lib2", "notice"},
+ {"lib1", "lib1", "notice"},
+ {"lib2", "lib2", "notice"},
}
// notice describes bottomUp after a top-down notice resolve.
notice = []res{
- {"image", "image", "image", "notice"},
- {"image", "image", "bin2", "restricted"},
- {"image", "bin1", "bin1", "reciprocal"},
- {"image", "bin2", "bin2", "restricted"},
- {"image", "lib1", "lib1", "notice"},
- {"image", "lib2", "bin2", "restricted"},
- {"image", "lib2", "lib2", "notice"},
- {"bin1", "bin1", "bin1", "reciprocal"},
- {"bin1", "lib1", "lib1", "notice"},
- {"bin2", "bin2", "bin2", "restricted"},
- {"bin2", "lib2", "bin2", "restricted"},
- {"bin2", "lib2", "lib2", "notice"},
- {"lib1", "lib1", "lib1", "notice"},
- {"lib2", "lib2", "lib2", "notice"},
+ {"image", "image", "notice|restricted"},
+ {"image", "bin1", "reciprocal"},
+ {"image", "bin2", "restricted"},
+ {"image", "lib1", "notice"},
+ {"image", "lib2", "notice|restricted"},
+ {"bin1", "bin1", "reciprocal"},
+ {"bin1", "lib1", "notice"},
+ {"bin2", "bin2", "restricted"},
+ {"bin2", "lib2", "notice|restricted"},
+ {"lib1", "lib1", "notice"},
+ {"lib2", "lib2", "notice"},
}
// share describes bottomUp after a top-down share resolve.
share = []res{
- {"image", "image", "bin2", "restricted"},
- {"image", "bin1", "bin1", "reciprocal"},
- {"image", "bin2", "bin2", "restricted"},
- {"image", "lib2", "bin2", "restricted"},
- {"bin1", "bin1", "bin1", "reciprocal"},
- {"bin2", "bin2", "bin2", "restricted"},
- {"bin2", "lib2", "bin2", "restricted"},
+ {"image", "image", "restricted"},
+ {"image", "bin1", "reciprocal"},
+ {"image", "bin2", "restricted"},
+ {"image", "lib2", "restricted"},
+ {"bin1", "bin1", "reciprocal"},
+ {"bin2", "bin2", "restricted"},
+ {"bin2", "lib2", "restricted"},
}
// proprietary describes bottomUp after a top-down proprietary resolve.
diff --git a/tools/compliance/test_util.go b/tools/compliance/test_util.go
index 26d7461..053b398 100644
--- a/tools/compliance/test_util.go
+++ b/tools/compliance/test_util.go
@@ -17,10 +17,11 @@
import (
"fmt"
"io"
- "io/fs"
"sort"
"strings"
"testing"
+
+ "android/soong/tools/compliance/testfs"
)
const (
@@ -42,7 +43,7 @@
Classpath = `` +
`package_name: "Free Software"
license_kinds: "SPDX-license-identifier-GPL-2.0-with-classpath-exception"
-license_conditions: "restricted"
+license_conditions: "permissive"
`
// DependentModule starts a test metadata file for a module in the same package as `Classpath`.
@@ -56,7 +57,7 @@
LGPL = `` +
`package_name: "Free Library"
license_kinds: "SPDX-license-identifier-LGPL-2.0"
-license_conditions: "restricted"
+license_conditions: "restricted_if_statically_linked"
`
 	// MPL starts a test metadata file for a module with MPL 2.0 reciprocal licensing.
@@ -120,76 +121,27 @@
return tn
}
-// newTestCondition constructs a test license condition in the license graph.
-func newTestCondition(lg *LicenseGraph, targetName string, conditionName string) LicenseCondition {
- tn := newTestNode(lg, targetName)
- cl := LicenseConditionSetFromNames(tn, conditionName).AsList()
+// newTestCondition constructs a test license condition.
+func newTestCondition(conditionName string) LicenseCondition {
+ cl := LicenseConditionSetFromNames(conditionName).AsList()
if len(cl) == 0 {
panic(fmt.Errorf("attempt to create unrecognized condition: %q", conditionName))
} else if len(cl) != 1 {
panic(fmt.Errorf("unexpected multiple conditions from condition name: %q: got %d, want 1", conditionName, len(cl)))
}
lc := cl[0]
- tn.licenseConditions = tn.licenseConditions.Plus(lc)
return lc
}
-// newTestConditionSet constructs a test license condition set in the license graph.
-func newTestConditionSet(lg *LicenseGraph, targetName string, conditionName []string) LicenseConditionSet {
- tn := newTestNode(lg, targetName)
- cs := LicenseConditionSetFromNames(tn, conditionName...)
+// newTestConditionSet constructs a test license condition set.
+func newTestConditionSet(conditionName []string) LicenseConditionSet {
+ cs := LicenseConditionSetFromNames(conditionName...)
if cs.IsEmpty() {
panic(fmt.Errorf("attempt to create unrecognized condition: %q", conditionName))
}
- tn.licenseConditions = tn.licenseConditions.Union(cs)
return cs
}
-// testFS implements a test file system (fs.FS) simulated by a map from filename to []byte content.
-type testFS map[string][]byte
-
-// Open implements fs.FS.Open() to open a file based on the filename.
-func (fs *testFS) Open(name string) (fs.File, error) {
- if _, ok := (*fs)[name]; !ok {
- return nil, fmt.Errorf("unknown file %q", name)
- }
- return &testFile{fs, name, 0}, nil
-}
-
-// testFile implements a test file (fs.File) based on testFS above.
-type testFile struct {
- fs *testFS
- name string
- posn int
-}
-
-// Stat not implemented to obviate implementing fs.FileInfo.
-func (f *testFile) Stat() (fs.FileInfo, error) {
- return nil, fmt.Errorf("unimplemented")
-}
-
-// Read copies bytes from the testFS map.
-func (f *testFile) Read(b []byte) (int, error) {
- if f.posn < 0 {
- return 0, fmt.Errorf("file not open: %q", f.name)
- }
- if f.posn >= len((*f.fs)[f.name]) {
- return 0, io.EOF
- }
- n := copy(b, (*f.fs)[f.name][f.posn:])
- f.posn += n
- return n, nil
-}
-
-// Close marks the testFile as no longer in use.
-func (f *testFile) Close() error {
- if f.posn < 0 {
- return fmt.Errorf("file already closed: %q", f.name)
- }
- f.posn = -1
- return nil
-}
-
// edge describes test data edges to define test graphs.
type edge struct {
target, dep string
@@ -268,7 +220,7 @@
deps[edge.dep] = []annotated{}
}
}
- fs := make(testFS)
+ fs := make(testfs.TestFS)
for file, edges := range deps {
body := meta[file]
for _, edge := range edges {
@@ -327,12 +279,12 @@
// act describes test data resolution actions to define test action sets.
type act struct {
- actsOn, origin, condition string
+ actsOn, condition string
}
// String returns a human-readable string representing the test action.
func (a act) String() string {
- return fmt.Sprintf("%s{%s:%s}", a.actsOn, a.origin, a.condition)
+ return fmt.Sprintf("%s{%s}", a.actsOn, a.condition)
}
// toActionSet converts a list of act test data into a test action set.
@@ -340,7 +292,7 @@
as := make(ActionSet)
for _, a := range data {
actsOn := newTestNode(lg, a.actsOn)
- cs := newTestConditionSet(lg, a.origin, strings.Split(a.condition, "|"))
+ cs := newTestConditionSet(strings.Split(a.condition, "|"))
as[actsOn] = cs
}
return as
@@ -348,7 +300,7 @@
// res describes test data resolutions to define test resolution sets.
type res struct {
- attachesTo, actsOn, origin, condition string
+ attachesTo, actsOn, condition string
}
// toResolutionSet converts a list of res test data into a test resolution set.
@@ -360,7 +312,7 @@
if _, ok := rmap[attachesTo]; !ok {
rmap[attachesTo] = make(ActionSet)
}
- cs := newTestConditionSet(lg, r.origin, strings.Split(r.condition, ":"))
+ cs := newTestConditionSet(strings.Split(r.condition, "|"))
rmap[attachesTo][actsOn] |= cs
}
return rmap
@@ -460,15 +412,13 @@
result := make([]SourceSharePrivacyConflict, 0, len(data))
for _, c := range data {
fields := strings.Split(c.share, ":")
- oshare := fields[0]
cshare := fields[1]
fields = strings.Split(c.privacy, ":")
- oprivacy := fields[0]
cprivacy := fields[1]
result = append(result, SourceSharePrivacyConflict{
newTestNode(lg, c.sourceNode),
- newTestCondition(lg, oshare, cshare),
- newTestCondition(lg, oprivacy, cprivacy),
+ newTestCondition(cshare),
+ newTestCondition(cprivacy),
})
}
return result
@@ -521,7 +471,7 @@
expectedConditions := expectedRl[i].Resolves()
actualConditions := actualRl[i].Resolves()
if expectedConditions != actualConditions {
- t.Errorf("unexpected conditions apply to %q acting on %q: got %04x with names %s, want %04x with names %s",
+ t.Errorf("unexpected conditions apply to %q acting on %q: got %#v with names %s, want %#v with names %s",
target.name, expectedRl[i].actsOn.name,
actualConditions, actualConditions.Names(),
expectedConditions, expectedConditions.Names())
@@ -586,7 +536,7 @@
expectedConditions := expectedRl[i].Resolves()
actualConditions := actualRl[i].Resolves()
if expectedConditions != (expectedConditions & actualConditions) {
- t.Errorf("expected conditions missing from %q acting on %q: got %04x with names %s, want %04x with names %s",
+ t.Errorf("expected conditions missing from %q acting on %q: got %#v with names %s, want %#v with names %s",
target.name, expectedRl[i].actsOn.name,
actualConditions, actualConditions.Names(),
expectedConditions, expectedConditions.Names())
diff --git a/tools/compliance/testfs/Android.bp b/tools/compliance/testfs/Android.bp
new file mode 100644
index 0000000..6baaf18
--- /dev/null
+++ b/tools/compliance/testfs/Android.bp
@@ -0,0 +1,25 @@
+// Copyright (C) 2022 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package {
+ default_applicable_licenses: ["Android-Apache-2.0"],
+}
+
+bootstrap_go_package {
+ name: "compliance-test-fs-module",
+ srcs: [
+ "testfs.go",
+ ],
+ pkgPath: "android/soong/tools/compliance/testfs",
+}
diff --git a/tools/compliance/testfs/testfs.go b/tools/compliance/testfs/testfs.go
new file mode 100644
index 0000000..2c75c5b
--- /dev/null
+++ b/tools/compliance/testfs/testfs.go
@@ -0,0 +1,129 @@
+// Copyright 2022 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package testfs
+
+import (
+ "fmt"
+ "io"
+ "io/fs"
+ "strings"
+ "time"
+)
+
+// TestFS implements a test file system (fs.FS) simulated by a map from filename to []byte content.
+type TestFS map[string][]byte
+
+var _ fs.FS = (*TestFS)(nil)
+var _ fs.StatFS = (*TestFS)(nil)
+
+// Open implements fs.FS.Open() to open a file based on the filename.
+func (tfs *TestFS) Open(name string) (fs.File, error) {
+ if _, ok := (*tfs)[name]; !ok {
+ return nil, fmt.Errorf("unknown file %q", name)
+ }
+ return &TestFile{tfs, name, 0}, nil
+}
+
+// Stat implements fs.StatFS.Stat() to examine a file based on the filename.
+func (tfs *TestFS) Stat(name string) (fs.FileInfo, error) {
+ if content, ok := (*tfs)[name]; ok {
+ return &TestFileInfo{name, len(content), 0666}, nil
+ }
+ dirname := name
+ if !strings.HasSuffix(dirname, "/") {
+ dirname = dirname + "/"
+ }
+ for name := range (*tfs) {
+ if strings.HasPrefix(name, dirname) {
+ return &TestFileInfo{name, 8, fs.ModeDir | fs.ModePerm}, nil
+ }
+ }
+ return nil, fmt.Errorf("file not found: %q", name)
+}
+
+// TestFileInfo implements a file info (fs.FileInfo) based on TestFS above.
+type TestFileInfo struct {
+ name string
+ size int
+ mode fs.FileMode
+}
+
+var _ fs.FileInfo = (*TestFileInfo)(nil)
+
+// Name returns the name of the file.
+func (fi *TestFileInfo) Name() string {
+ return fi.name
+}
+
+// Size returns the size of the file in bytes.
+func (fi *TestFileInfo) Size() int64 {
+ return int64(fi.size)
+}
+
+// Mode returns the fs.FileMode bits.
+func (fi *TestFileInfo) Mode() fs.FileMode {
+ return fi.mode
+}
+
+// ModTime fakes a modification time.
+func (fi *TestFileInfo) ModTime() time.Time {
+ return time.UnixMicro(0xb0bb)
+}
+
+// IsDir is a synonym for Mode().IsDir().
+func (fi *TestFileInfo) IsDir() bool {
+ return fi.mode.IsDir()
+}
+
+// Sys is unused and returns nil.
+func (fi *TestFileInfo) Sys() any {
+ return nil
+}
+
+// TestFile implements a test file (fs.File) based on TestFS above.
+type TestFile struct {
+ fs *TestFS
+ name string
+ posn int
+}
+
+var _ fs.File = (*TestFile)(nil)
+
+// Stat delegates to the underlying TestFS to describe the file.
+func (f *TestFile) Stat() (fs.FileInfo, error) {
+ return f.fs.Stat(f.name)
+}
+
+// Read copies bytes from the TestFS map.
+func (f *TestFile) Read(b []byte) (int, error) {
+ if f.posn < 0 {
+ return 0, fmt.Errorf("file not open: %q", f.name)
+ }
+ if f.posn >= len((*f.fs)[f.name]) {
+ return 0, io.EOF
+ }
+ n := copy(b, (*f.fs)[f.name][f.posn:])
+ f.posn += n
+ return n, nil
+}
+
+// Close marks the TestFile as no longer in use.
+func (f *TestFile) Close() error {
+ if f.posn < 0 {
+ return fmt.Errorf("file already closed: %q", f.name)
+ }
+ f.posn = -1
+ return nil
+}
diff --git a/tools/event_log_tags.bzl b/tools/event_log_tags.bzl
deleted file mode 100644
index 35305ae..0000000
--- a/tools/event_log_tags.bzl
+++ /dev/null
@@ -1,35 +0,0 @@
-"""Event log tags generation rule"""
-
-load("@bazel_skylib//lib:paths.bzl", "paths")
-
-def _event_log_tags_impl(ctx):
- out_files = []
- for logtag_file in ctx.files.srcs:
- out_filename = paths.replace_extension(logtag_file.basename, ".java")
- out_file = ctx.actions.declare_file(out_filename)
- out_files.append(out_file)
- ctx.actions.run(
- inputs = [logtag_file],
- outputs = [out_file],
- arguments = [
- "-o",
- out_file.path,
- logtag_file.path,
- ],
- progress_message = "Generating Java logtag file from %s" % logtag_file.short_path,
- executable = ctx.executable._logtag_to_java_tool,
- )
- return [DefaultInfo(files = depset(out_files))]
-
-event_log_tags = rule(
- implementation = _event_log_tags_impl,
- attrs = {
- "srcs": attr.label_list(allow_files = [".logtags"], mandatory = True),
- "_logtag_to_java_tool": attr.label(
- executable = True,
- cfg = "exec",
- allow_files = True,
- default = Label("//build/make/tools:java-event-log-tags"),
- ),
- },
-)
diff --git a/tools/fileslist_util.py b/tools/fileslist_util.py
index ff40d51..a1b1197 100755
--- a/tools/fileslist_util.py
+++ b/tools/fileslist_util.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
#
# Copyright (C) 2016 The Android Open Source Project
#
@@ -15,7 +15,9 @@
# limitations under the License.
#
-import getopt, json, sys
+import argparse
+import json
+import sys
def PrintFileNames(path):
with open(path) as jf:
@@ -27,42 +29,25 @@
with open(path) as jf:
data = json.load(jf)
for line in data:
- print "{0:12d} {1}".format(line["Size"], line["Name"])
+ print(f"{line['Size']:12d} {line['Name']}")
-def PrintUsage(name):
- print("""
-Usage: %s -[nc] json_files_list
- -n produces list of files only
- -c produces classic installed-files.txt
-""" % (name))
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-n", action="store_true",
+ help="produces list of files only")
+ parser.add_argument("-c", action="store_true",
+ help="produces classic installed-files.txt")
+ parser.add_argument("json_files_list")
+ args = parser.parse_args()
-def main(argv):
- try:
- opts, args = getopt.getopt(argv[1:], "nc", "")
- except getopt.GetoptError, err:
- print(err)
- PrintUsage(argv[0])
- sys.exit(2)
-
- if len(opts) == 0:
- print("No conversion option specified")
- PrintUsage(argv[0])
- sys.exit(2)
-
- if len(args) == 0:
- print("No input file specified")
- PrintUsage(argv[0])
- sys.exit(2)
-
- for o, a in opts:
- if o == ("-n"):
- PrintFileNames(args[0])
- sys.exit()
- elif o == ("-c"):
- PrintCanonicalList(args[0])
- sys.exit()
- else:
- assert False, "Unsupported option"
+ if args.n and args.c:
+ sys.exit("Cannot specify both -n and -c")
+ elif args.n:
+ PrintFileNames(args.json_files_list)
+ elif args.c:
+ PrintCanonicalList(args.json_files_list)
+ else:
+ sys.exit("No conversion option specified")
if __name__ == '__main__':
- main(sys.argv)
+ main()
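
The rewritten `main()` above checks the `-n`/`-c` combination by hand. For reference, `argparse` can enforce the same rule declaratively with a mutually exclusive group; a minimal sketch of that alternative (hypothetical, not part of this change):

```
import argparse

parser = argparse.ArgumentParser()
# Require exactly one of the two conversion modes.
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("-n", action="store_true",
                   help="produces list of files only")
group.add_argument("-c", action="store_true",
                   help="produces classic installed-files.txt")
parser.add_argument("json_files_list")

args = parser.parse_args(["-c", "installed-files.json"])
print(args.n, args.c, args.json_files_list)  # False True installed-files.json
```

With `required=True`, argparse itself rejects both the no-option and both-options cases with a usage error, which would make the manual `sys.exit` branches unnecessary.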
diff --git a/tools/finalization/OWNERS b/tools/finalization/OWNERS
new file mode 100644
index 0000000..518b60d
--- /dev/null
+++ b/tools/finalization/OWNERS
@@ -0,0 +1,5 @@
+include platform/build/soong:/OWNERS
+smoreland@google.com
+alexbuy@google.com
+patb@google.com
+zyy@google.com
diff --git a/tools/finalization/README.md b/tools/finalization/README.md
new file mode 100644
index 0000000..501f260
--- /dev/null
+++ b/tools/finalization/README.md
@@ -0,0 +1,22 @@
+# Finalization tools
+This folder contains automation and CI scripts for [finalizing](https://go/android-finalization) Android before release.
+
+## Automation:
+1. [Environment setup](./environment.sh). Sets values for various finalization constants.
+2. [Finalize SDK](./finalize-aidl-vndk-sdk-resources.sh). Prepares the branch for the SDK release. The SDK contains the Android Java APIs and other stable APIs. Commonly referred to as the 1st step.
+3. [Finalize Android](./finalize-sdk-rel.sh). Marks the branch as "REL", i.e. prepares it for the Android release. Any signed build containing these changes will be considered an official Android release. Referred to as the 2nd finalization step.
+4. [Finalize SDK and submit](./step-1.sh). Does the [Finalize SDK](./finalize-aidl-vndk-sdk-resources.sh) step, creates CLs, organizes them into a topic and sends them to Gerrit.
+ a. [Update SDK and submit](./update-step-1.sh). Same as above, but updates the existing CLs.
+5. [Finalize Android and submit](./step-2.sh). Does the [Finalize Android](./finalize-sdk-rel.sh) step, creates CLs, organizes them into a topic and sends them to Gerrit.
+ a. [Update Android and submit](./update-step-2.sh). Same as above, but updates the existing CLs.
+
+## CI:
+Performed by build targets in the finalization branches.
+1. [Finalization Step 1 for Main, git_main-fina-1-release](https://android-build.googleplex.com/builds/branches/git_main-fina-1-release/grid). Tests the [1st step/Finalize SDK](./finalize-aidl-vndk-sdk-resources.sh).
+2. [Finalization Step 1 for UDC, git_udc-fina-1-release](https://android-build.googleplex.com/builds/branches/git_udc-fina-1-release/grid). Same, but for udc-dev.
+3. [Finalization Step 2 for Main, git_main-fina-2-release](https://android-build.googleplex.com/builds/branches/git_main-fina-2-release/grid). Tests the [1st step/Finalize SDK](./finalize-aidl-vndk-sdk-resources.sh) and the [2nd step/Finalize Android](./finalize-sdk-rel.sh). Uses [local finalization](./localonly-steps.sh) to build and copy presubmits.
+4. [Finalization Step 2 for UDC, git_udc-fina-2-release](https://android-build.googleplex.com/builds/branches/git_udc-fina-2-release/grid). Same, but for udc-dev.
+5. [Local finalization steps](./localonly-steps.sh) are done only during local testing or in the CI lab. Normally these steps use artifacts from other builds.
+
+## Utility:
+[Full cleanup](./cleanup.sh). Removes all local changes and switches each project into a headless (detached HEAD) state. This is the best state from which to sync/rebase/finalize the branch.
diff --git a/tools/finalization/build-step-1-and-2.sh b/tools/finalization/build-step-1-and-2.sh
new file mode 100755
index 0000000..84e2782
--- /dev/null
+++ b/tools/finalization/build-step-1-and-2.sh
@@ -0,0 +1,24 @@
+#!/bin/bash
+
+set -ex
+
+function finalize_main_step12() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+ if [ "$FINAL_STATE" = "unfinalized" ] ; then
+ # SDK codename -> int
+ source $top/build/make/tools/finalization/finalize-aidl-vndk-sdk-resources.sh
+ fi;
+
+ if [ "$FINAL_STATE" = "unfinalized" ] || [ "$FINAL_STATE" = "sdk" ] ; then
+ # ADB, Platform/Mainline SDKs build and move to prebuilts
+ source $top/build/make/tools/finalization/localonly-steps.sh
+
+ # REL
+ source $top/build/make/tools/finalization/finalize-sdk-rel.sh
+ fi;
+}
+
+finalize_main_step12
+
diff --git a/tools/finalization/build-step-1-and-m.sh b/tools/finalization/build-step-1-and-m.sh
new file mode 100755
index 0000000..0e7129f
--- /dev/null
+++ b/tools/finalization/build-step-1-and-m.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+
+set -ex
+
+function finalize_main_step1_and_m() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/build-step-1.sh
+
+ local m="$top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=aosp_arm64 TARGET_BUILD_VARIANT=userdebug"
+
+ # This command tests:
+ # The release state for AIDL.
+ # ABI difference between user and userdebug builds.
+ # Resource/SDK finalization.
+ AIDL_FROZEN_REL=true $m
+}
+
+finalize_main_step1_and_m
+
diff --git a/tools/finalization/build-step-1.sh b/tools/finalization/build-step-1.sh
new file mode 100755
index 0000000..3d5eadb
--- /dev/null
+++ b/tools/finalization/build-step-1.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+set -ex
+
+function finalize_main_step1() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+ if [ "$FINAL_STATE" = "unfinalized" ] ; then
+ # Build finalization artifacts.
+ source $top/build/make/tools/finalization/finalize-aidl-vndk-sdk-resources.sh
+ fi;
+}
+
+finalize_main_step1
+
diff --git a/tools/finalization/build_soong_java_droidstubs.go.apply_hack.diff b/tools/finalization/build_soong_java_droidstubs.go.apply_hack.diff
new file mode 100644
index 0000000..9ced2a9
--- /dev/null
+++ b/tools/finalization/build_soong_java_droidstubs.go.apply_hack.diff
@@ -0,0 +1,30 @@
+From 12eea1512f2612f41b5cf7004ee2e6a189d548d7 Mon Sep 17 00:00:00 2001
+From: Alex Buynytskyy <alexbuy@google.com>
+Date: Thu, 01 Sep 2022 10:44:21 -0700
+Subject: [PATCH] Hacky workaround for half-finalized builds.
+
+Metalava increments the SDK level by one when it's not "REL", so we
+temporarily force the build to be "REL" while we're still in the
+process of finalizing it.
+
+This CL must be reverted as part of actually declaring "REL".
+
+Bug: none
+Test: Build
+Change-Id: I8c24c6dabec0270bc384d8465c582a4ddbe8bd6c
+---
+
+diff --git a/java/droidstubs.go b/java/droidstubs.go
+index 5777b18..ec4a0f4 100644
+--- a/java/droidstubs.go
++++ b/java/droidstubs.go
+@@ -386,7 +386,8 @@
+ }
+ if apiVersions != nil {
+ cmd.FlagWithArg("--current-version ", ctx.Config().PlatformSdkVersion().String())
+- cmd.FlagWithArg("--current-codename ", ctx.Config().PlatformSdkCodename())
++ // STOPSHIP: RESTORE THIS LOGIC WHEN DECLARING "REL" BUILD
++ // cmd.FlagWithArg("--current-codename ", ctx.Config().PlatformSdkCodename())
+ cmd.FlagWithInput("--apply-api-levels ", apiVersions)
+ }
+ }
diff --git a/tools/finalization/build_soong_java_droidstubs.go.revert_hack.diff b/tools/finalization/build_soong_java_droidstubs.go.revert_hack.diff
new file mode 100644
index 0000000..7dec97c
--- /dev/null
+++ b/tools/finalization/build_soong_java_droidstubs.go.revert_hack.diff
@@ -0,0 +1,26 @@
+From c0f6e8fe4c3b6803be97aeea6683631d616412f4 Mon Sep 17 00:00:00 2001
+From: Alex Buynytskyy <alexbuy@google.com>
+Date: Thu, 08 Dec 2022 17:52:52 +0000
+Subject: [PATCH] Revert "Hacky workaround for half-finalized builds."
+
+This reverts commit 12eea1512f2612f41b5cf7004ee2e6a189d548d7.
+
+Reason for revert: finalization-2
+
+Change-Id: Ifc801271628808693b1cb20206f8f81c9a6c694d
+---
+
+diff --git a/java/droidstubs.go b/java/droidstubs.go
+index ec4a0f4..5777b18 100644
+--- a/java/droidstubs.go
++++ b/java/droidstubs.go
+@@ -386,8 +386,7 @@
+ }
+ if apiVersions != nil {
+ cmd.FlagWithArg("--current-version ", ctx.Config().PlatformSdkVersion().String())
+- // STOPSHIP: RESTORE THIS LOGIC WHEN DECLARING "REL" BUILD
+- // cmd.FlagWithArg("--current-codename ", ctx.Config().PlatformSdkCodename())
++ cmd.FlagWithArg("--current-codename ", ctx.Config().PlatformSdkCodename())
+ cmd.FlagWithInput("--apply-api-levels ", apiVersions)
+ }
+ }
diff --git a/tools/finalization/cleanup.sh b/tools/finalization/cleanup.sh
new file mode 100755
index 0000000..cd87b1d
--- /dev/null
+++ b/tools/finalization/cleanup.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+# Brings local repository to a remote head state.
+
+# set -ex
+
+function finalize_revert_local_changes_main() {
+ local top="$(dirname "$0")"/../../../..
+ local m="$top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=aosp_arm64 TARGET_BUILD_VARIANT=userdebug"
+
+ # remove the out folder
+ $m clobber
+
+ repo selfupdate
+
+ repo forall -c '\
+ git checkout . ; git revert --abort ; git clean -fdx ;\
+ git checkout @ ; git branch fina-step1 -D ; git reset --hard; \
+ repo start fina-step1 ; git checkout @ ; git branch fina-step1 -D ;'
+}
+
+finalize_revert_local_changes_main
diff --git a/tools/finalization/environment.sh b/tools/finalization/environment.sh
new file mode 100755
index 0000000..d95bea0
--- /dev/null
+++ b/tools/finalization/environment.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+set -ex
+
+export FINAL_BUG_ID='275409981'
+
+export FINAL_PLATFORM_CODENAME='UpsideDownCake'
+export CURRENT_PLATFORM_CODENAME='UpsideDownCake'
+export FINAL_PLATFORM_CODENAME_JAVA='UPSIDE_DOWN_CAKE'
+export FINAL_PLATFORM_SDK_VERSION='34'
+export FINAL_PLATFORM_VERSION='14'
+
+export FINAL_BUILD_PREFIX='UP1A'
+
+export FINAL_MAINLINE_EXTENSION='7'
+
+# Options:
+# 'unfinalized' - branch is in development state,
+# 'sdk' - SDK/API is finalized
+# 'rel' - branch is finalized, switched to REL
+export FINAL_STATE='rel'
diff --git a/tools/finalization/finalize-aidl-vndk-sdk-resources.sh b/tools/finalization/finalize-aidl-vndk-sdk-resources.sh
new file mode 100755
index 0000000..0491701
--- /dev/null
+++ b/tools/finalization/finalize-aidl-vndk-sdk-resources.sh
@@ -0,0 +1,161 @@
+#!/bin/bash
+
+set -ex
+
+function apply_droidstubs_hack() {
+ if ! grep -q 'STOPSHIP: RESTORE THIS LOGIC WHEN DECLARING "REL" BUILD' "$top/build/soong/java/droidstubs.go" ; then
+ git -C "$top/build/soong" apply --allow-empty ../../build/make/tools/finalization/build_soong_java_droidstubs.go.apply_hack.diff
+ fi
+}
+
+function apply_resources_sdk_int_fix() {
+ if ! grep -q 'public static final int RESOURCES_SDK_INT = SDK_INT;' "$top/frameworks/base/core/java/android/os/Build.java" ; then
+ git -C "$top/frameworks/base" apply --allow-empty ../../build/make/tools/finalization/frameworks_base.apply_resource_sdk_int.diff
+ fi
+}
+
+function finalize_bionic_ndk() {
+ # Adding __ANDROID_API_<>__.
+ # If this hasn't been done yet, it's not used and not really needed. Still, check for it and add it.
+ local api_level="$top/bionic/libc/include/android/api-level.h"
+ if ! grep -q "\__.*$((${FINAL_PLATFORM_SDK_VERSION}))" $api_level ; then
+ local tmpfile=$(mktemp /tmp/finalization.XXXXXX)
+ echo "
+/** Names the \"${FINAL_PLATFORM_CODENAME:0:1}\" API level ($FINAL_PLATFORM_SDK_VERSION), for comparison against \`__ANDROID_API__\`. */
+#define __ANDROID_API_${FINAL_PLATFORM_CODENAME:0:1}__ $FINAL_PLATFORM_SDK_VERSION" > "$tmpfile"
+
+ local api_level="$top/bionic/libc/include/android/api-level.h"
+ sed -i -e "/__.*$((${FINAL_PLATFORM_SDK_VERSION}-1))/r""$tmpfile" $api_level
+
+ rm "$tmpfile"
+ fi
+}
+
+function finalize_modules_utils() {
+ local shortCodename="${FINAL_PLATFORM_CODENAME:0:1}"
+ local methodPlaceholder="INSERT_NEW_AT_LEAST_${shortCodename}_METHOD_HERE"
+
+ local tmpfile=$(mktemp /tmp/finalization.XXXXXX)
+ echo " /** Checks if the device is running on a release version of Android $FINAL_PLATFORM_CODENAME or newer */
+ @ChecksSdkIntAtLeast(api = $FINAL_PLATFORM_SDK_VERSION /* BUILD_VERSION_CODES.$FINAL_PLATFORM_CODENAME */)
+ public static boolean isAtLeast${FINAL_PLATFORM_CODENAME:0:1}() {
+ return SDK_INT >= $FINAL_PLATFORM_SDK_VERSION;
+ }" > "$tmpfile"
+
+ local javaFuncRegex='\/\*\*[^{]*isAtLeast'"${shortCodename}"'() {[^{}]*}'
+ local javaFuncReplace="N;N;N;N;N;N;N;N; s/$javaFuncRegex/$methodPlaceholder/; /$javaFuncRegex/!{P;D};"
+
+ local javaSdkLevel="$top/frameworks/libs/modules-utils/java/com/android/modules/utils/build/SdkLevel.java"
+ sed -i "$javaFuncReplace" $javaSdkLevel
+
+ sed -i "/${methodPlaceholder}"'/{
+ r '"$tmpfile"'
+ d}' $javaSdkLevel
+
+ echo "// Checks if the device is running on release version of Android ${FINAL_PLATFORM_CODENAME:0:1} or newer.
+inline bool IsAtLeast${FINAL_PLATFORM_CODENAME:0:1}() { return android_get_device_api_level() >= $FINAL_PLATFORM_SDK_VERSION; }" > "$tmpfile"
+
+ local cppFuncRegex='\/\/[^{]*IsAtLeast'"${shortCodename}"'() {[^{}]*}'
+ local cppFuncReplace="N;N;N;N;N;N; s/$cppFuncRegex/$methodPlaceholder/; /$cppFuncRegex/!{P;D};"
+
+ local cppSdkLevel="$top/frameworks/libs/modules-utils/build/include/android-modules-utils/sdk_level.h"
+ sed -i "$cppFuncReplace" $cppSdkLevel
+ sed -i "/${methodPlaceholder}"'/{
+ r '"$tmpfile"'
+ d}' $cppSdkLevel
+
+ rm "$tmpfile"
+}
+
+function finalize_aidl_vndk_sdk_resources() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+ local SDK_CODENAME="public static final int $FINAL_PLATFORM_CODENAME_JAVA = CUR_DEVELOPMENT;"
+ local SDK_VERSION="public static final int $FINAL_PLATFORM_CODENAME_JAVA = $FINAL_PLATFORM_SDK_VERSION;"
+
+ # default target to modify tree and build SDK
+ local m="$top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=aosp_arm64 TARGET_BUILD_VARIANT=userdebug DIST_DIR=out/dist"
+
+ # The full process can be found at (INTERNAL) go/android-sdk-finalization.
+
+ # apply droidstubs hack to prevent tools from incrementing an API version
+ apply_droidstubs_hack
+
+ # bionic/NDK
+ finalize_bionic_ndk
+
+ # VNDK definitions for new SDK version
+ cp "$top/development/vndk/tools/definition-tool/datasets/vndk-lib-extra-list-current.txt" \
+ "$top/development/vndk/tools/definition-tool/datasets/vndk-lib-extra-list-$FINAL_PLATFORM_SDK_VERSION.txt"
+
+ AIDL_TRANSITIVE_FREEZE=true $m aidl-freeze-api create_reference_dumps
+
+ # Generate ABI dumps
+ ANDROID_BUILD_TOP="$top" \
+ out/host/linux-x86/bin/create_reference_dumps \
+ -p aosp_arm64 --build-variant user
+
+ echo "NOTE: THIS INTENTIONALLY MAY FAIL AND REPAIR ITSELF (until 'DONE')"
+ # Update new versions of files. See update-vndk-list.sh (which requires envsetup.sh)
+ $m check-vndk-list || \
+ { cp $top/out/soong/vndk/vndk.libraries.txt $top/build/make/target/product/gsi/current.txt; }
+ echo "DONE: THIS INTENTIONALLY MAY FAIL AND REPAIR ITSELF"
+
+ # Finalize SDK
+
+ # frameworks/libs/modules-utils
+ finalize_modules_utils
+
+ # build/make
+ local version_defaults="$top/build/make/core/version_defaults.mk"
+ sed -i -e "s/PLATFORM_SDK_VERSION := .*/PLATFORM_SDK_VERSION := ${FINAL_PLATFORM_SDK_VERSION}/g" $version_defaults
+ sed -i -e "s/PLATFORM_VERSION_LAST_STABLE := .*/PLATFORM_VERSION_LAST_STABLE := ${FINAL_PLATFORM_VERSION}/g" $version_defaults
+ sed -i -e "s/sepolicy_major_vers := .*/sepolicy_major_vers := ${FINAL_PLATFORM_SDK_VERSION}/g" "$top/build/make/core/config.mk"
+ cp "$top/build/make/target/product/gsi/current.txt" "$top/build/make/target/product/gsi/$FINAL_PLATFORM_SDK_VERSION.txt"
+
+ # build/soong
+ local codename_version="\"${FINAL_PLATFORM_CODENAME}\": ${FINAL_PLATFORM_SDK_VERSION}"
+ if ! grep -q "$codename_version" "$top/build/soong/android/api_levels.go" ; then
+ sed -i -e "/:.*$((${FINAL_PLATFORM_SDK_VERSION}-1)),/a \\\t\t$codename_version," "$top/build/soong/android/api_levels.go"
+ fi
+
+ # cts
+ echo ${FINAL_PLATFORM_VERSION} > "$top/cts/tests/tests/os/assets/platform_releases.txt"
+ sed -i -e "s/EXPECTED_SDK = $((${FINAL_PLATFORM_SDK_VERSION}-1))/EXPECTED_SDK = ${FINAL_PLATFORM_SDK_VERSION}/g" "$top/cts/tests/tests/os/src/android/os/cts/BuildVersionTest.java"
+
+ # libcore
+ sed -i "s%$SDK_CODENAME%$SDK_VERSION%g" "$top/libcore/dalvik/src/main/java/dalvik/annotation/compat/VersionCodes.java"
+
+ # platform_testing
+ local version_codes="$top/platform_testing/libraries/compatibility-common-util/src/com/android/compatibility/common/util/VersionCodes.java"
+ sed -i -e "/=.*$((${FINAL_PLATFORM_SDK_VERSION}-1));/a \\ ${SDK_VERSION}" $version_codes
+
+ # Finalize resources
+ "$top/frameworks/base/tools/aapt2/tools/finalize_res.py" \
+ "$top/frameworks/base/core/res/res/values/public-staging.xml" \
+ "$top/frameworks/base/core/res/res/values/public-final.xml"
+
+ # frameworks/base
+ sed -i "s%$SDK_CODENAME%$SDK_VERSION%g" "$top/frameworks/base/core/java/android/os/Build.java"
+ apply_resources_sdk_int_fix
+ sed -i -e "/=.*$((${FINAL_PLATFORM_SDK_VERSION}-1)),/a \\ SDK_${FINAL_PLATFORM_CODENAME_JAVA} = ${FINAL_PLATFORM_SDK_VERSION}," "$top/frameworks/base/tools/aapt/SdkConstants.h"
+ sed -i -e "/=.*$((${FINAL_PLATFORM_SDK_VERSION}-1)),/a \\ SDK_${FINAL_PLATFORM_CODENAME_JAVA} = ${FINAL_PLATFORM_SDK_VERSION}," "$top/frameworks/base/tools/aapt2/SdkConstants.h"
+
+ # Bump Mainline SDK extension version.
+ local SDKEXT="$top/packages/modules/SdkExtensions/"
+ "$top/packages/modules/SdkExtensions/gen_sdk/bump_sdk.sh" ${FINAL_MAINLINE_EXTENSION}
+ # Leave the last commit as a set of modified files.
+ # The code to create a finalization topic will pick it up later.
+ git -C ${SDKEXT} reset HEAD~1
+
+ local version_defaults="$top/build/make/core/version_defaults.mk"
+ sed -i -e "s/PLATFORM_SDK_EXTENSION_VERSION := .*/PLATFORM_SDK_EXTENSION_VERSION := ${FINAL_MAINLINE_EXTENSION}/g" $version_defaults
+
+ # Force update current.txt
+ $m clobber
+ $m update-api
+}
+
+finalize_aidl_vndk_sdk_resources
+
diff --git a/tools/finalization/finalize-sdk-rel.sh b/tools/finalization/finalize-sdk-rel.sh
new file mode 100755
index 0000000..6cf4124
--- /dev/null
+++ b/tools/finalization/finalize-sdk-rel.sh
@@ -0,0 +1,66 @@
+#!/bin/bash
+
+set -ex
+
+function revert_droidstubs_hack() {
+ if grep -q 'STOPSHIP: RESTORE THIS LOGIC WHEN DECLARING "REL" BUILD' "$top/build/soong/java/droidstubs.go" ; then
+ git -C "$top/build/soong" apply --allow-empty ../../build/make/tools/finalization/build_soong_java_droidstubs.go.revert_hack.diff
+ fi
+}
+
+function revert_resources_sdk_int_fix() {
+ if grep -q 'public static final int RESOURCES_SDK_INT = SDK_INT;' "$top/frameworks/base/core/java/android/os/Build.java" ; then
+ git -C "$top/frameworks/base" apply --allow-empty ../../build/make/tools/finalization/frameworks_base.revert_resource_sdk_int.diff
+ fi
+}
+
+function apply_prerelease_sdk_hack() {
+ if ! grep -q 'STOPSHIP: hack for the pre-release SDK' "$top/frameworks/base/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java" ; then
+ git -C "$top/frameworks/base" apply --allow-empty ../../build/make/tools/finalization/frameworks_base.apply_hack.diff
+ fi
+}
+
+function finalize_sdk_rel() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+ # revert droidstubs hack now we are switching to REL
+ revert_droidstubs_hack
+
+ # let the apps built with pre-release SDK parse
+ apply_prerelease_sdk_hack
+
+ # in REL mode, resources would correctly set the resources_sdk_int, no fix required
+ revert_resources_sdk_int_fix
+
+ # build/make/core/version_defaults.mk
+ sed -i -e "s/PLATFORM_VERSION_CODENAME.${FINAL_BUILD_PREFIX} := .*/PLATFORM_VERSION_CODENAME.${FINAL_BUILD_PREFIX} := REL/g" "$top/build/make/core/version_defaults.mk"
+
+ # cts
+ echo "$FINAL_PLATFORM_VERSION" > "$top/cts/tests/tests/os/assets/platform_versions.txt"
+ if [ "$FINAL_PLATFORM_CODENAME" != "$CURRENT_PLATFORM_CODENAME" ]; then
+ echo "$CURRENT_PLATFORM_CODENAME" >> "./cts/tests/tests/os/assets/platform_versions.txt"
+ fi
+ git -C "$top/cts" mv hostsidetests/theme/assets/${FINAL_PLATFORM_CODENAME} hostsidetests/theme/assets/${FINAL_PLATFORM_SDK_VERSION}
+
+ # system/sepolicy
+ mkdir -p "$top/system/sepolicy/prebuilts/api/${FINAL_PLATFORM_SDK_VERSION}.0/"
+ cp -r "$top/system/sepolicy/public/" "$top/system/sepolicy/prebuilts/api/${FINAL_PLATFORM_SDK_VERSION}.0/"
+ cp -r "$top/system/sepolicy/private/" "$top/system/sepolicy/prebuilts/api/${FINAL_PLATFORM_SDK_VERSION}.0/"
+
+ # prebuilts/abi-dumps/ndk
+ mkdir -p "$top/prebuilts/abi-dumps/ndk/$FINAL_PLATFORM_SDK_VERSION"
+ cp -r "$top/prebuilts/abi-dumps/ndk/current/64/" "$top/prebuilts/abi-dumps/ndk/$FINAL_PLATFORM_SDK_VERSION/"
+
+ # prebuilts/abi-dumps/platform
+ mkdir -p "$top/prebuilts/abi-dumps/platform/$FINAL_PLATFORM_SDK_VERSION"
+ cp -r "$top/prebuilts/abi-dumps/platform/current/64/" "$top/prebuilts/abi-dumps/platform/$FINAL_PLATFORM_SDK_VERSION/"
+
+ if [ "$FINAL_STATE" != "sdk" ] ; then
+ # prebuilts/abi-dumps/vndk
+ mv "$top/prebuilts/abi-dumps/vndk/$CURRENT_PLATFORM_CODENAME" "$top/prebuilts/abi-dumps/vndk/$FINAL_PLATFORM_SDK_VERSION"
+ fi;
+}
+
+finalize_sdk_rel
+
diff --git a/tools/finalization/frameworks_base.apply_hack.diff b/tools/finalization/frameworks_base.apply_hack.diff
new file mode 100644
index 0000000..545c230
--- /dev/null
+++ b/tools/finalization/frameworks_base.apply_hack.diff
@@ -0,0 +1,129 @@
+From 3c9a5321dc94124367f2f4363d85a8f488f5d4d1 Mon Sep 17 00:00:00 2001
+From: Yurii Zubrytskyi <zyy@google.com>
+Date: Wed, 04 May 2022 01:05:24 -0700
+Subject: [PATCH] HACK: allow apps with pre-release SDK RESTRICT AUTOMERGE
+
+Revert before releasing
+Let the apps built with pre-release Tiramisu SDK parse
++ fix a test that didn't expect REL builds to throw
+ when checking for lettered versions
+
+Test: build
+Bug: 225745567
+Bug: 231407096
+Change-Id: Ia0de2ab1a99e5f186f0d871e6225d88bf3308df6
+---
+
+diff --git a/core/java/android/content/pm/PackageParser.java b/core/java/android/content/pm/PackageParser.java
+index c15b3e0..3f4df4d 100644
+--- a/core/java/android/content/pm/PackageParser.java
++++ b/core/java/android/content/pm/PackageParser.java
+@@ -2628,6 +2628,15 @@
+ return Build.VERSION_CODES.CUR_DEVELOPMENT;
+ }
+
++ // STOPSHIP: hack for the pre-release SDK
++ if (platformSdkCodenames.length == 0
++ && Build.VERSION.KNOWN_CODENAMES.stream().max(String::compareTo).orElse("").equals(
++ targetCode)) {
++ Slog.w(TAG, "Package requires development platform " + targetCode
++ + ", returning current version " + Build.VERSION.SDK_INT);
++ return Build.VERSION.SDK_INT;
++ }
++
+ // Otherwise, we're looking at an incompatible pre-release SDK.
+ if (platformSdkCodenames.length > 0) {
+ outError[0] = "Requires development platform " + targetCode
+@@ -2699,6 +2708,15 @@
+ return Build.VERSION_CODES.CUR_DEVELOPMENT;
+ }
+
++ // STOPSHIP: hack for the pre-release SDK
++ if (platformSdkCodenames.length == 0
++ && Build.VERSION.KNOWN_CODENAMES.stream().max(String::compareTo).orElse("").equals(
++ minCode)) {
++ Slog.w(TAG, "Package requires min development platform " + minCode
++ + ", returning current version " + Build.VERSION.SDK_INT);
++ return Build.VERSION.SDK_INT;
++ }
++
+ // Otherwise, we're looking at an incompatible pre-release SDK.
+ if (platformSdkCodenames.length > 0) {
+ outError[0] = "Requires development platform " + minCode
+diff --git a/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java b/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java
+index 3e1c5bb..8cc4cdb 100644
+--- a/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java
++++ b/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java
+@@ -316,6 +316,15 @@
+ return input.success(Build.VERSION_CODES.CUR_DEVELOPMENT);
+ }
+
++ // STOPSHIP: hack for the pre-release SDK
++ if (platformSdkCodenames.length == 0
++ && Build.VERSION.KNOWN_CODENAMES.stream().max(String::compareTo).orElse("").equals(
++ minCode)) {
++ Slog.w(TAG, "Parsed package requires min development platform " + minCode
++ + ", returning current version " + Build.VERSION.SDK_INT);
++ return input.success(Build.VERSION.SDK_INT);
++ }
++
+ // Otherwise, we're looking at an incompatible pre-release SDK.
+ if (platformSdkCodenames.length > 0) {
+ return input.error(PackageManager.INSTALL_FAILED_OLDER_SDK,
+@@ -368,19 +377,27 @@
+ return input.success(targetVers);
+ }
+
++ // If it's a pre-release SDK and the codename matches this platform, it
++ // definitely targets this SDK.
++ if (matchTargetCode(platformSdkCodenames, targetCode)) {
++ return input.success(Build.VERSION_CODES.CUR_DEVELOPMENT);
++ }
++
++ // STOPSHIP: hack for the pre-release SDK
++ if (platformSdkCodenames.length == 0
++ && Build.VERSION.KNOWN_CODENAMES.stream().max(String::compareTo).orElse("").equals(
++ targetCode)) {
++ Slog.w(TAG, "Parsed package requires development platform " + targetCode
++ + ", returning current version " + Build.VERSION.SDK_INT);
++ return input.success(Build.VERSION.SDK_INT);
++ }
++
+ try {
+ if (allowUnknownCodenames && UnboundedSdkLevel.isAtMost(targetCode)) {
+ return input.success(Build.VERSION_CODES.CUR_DEVELOPMENT);
+ }
+ } catch (IllegalArgumentException e) {
+- // isAtMost() throws it when encountering an older SDK codename
+- return input.error(PackageManager.INSTALL_FAILED_OLDER_SDK, e.getMessage());
+- }
+-
+- // If it's a pre-release SDK and the codename matches this platform, it
+- // definitely targets this SDK.
+- if (matchTargetCode(platformSdkCodenames, targetCode)) {
+- return input.success(Build.VERSION_CODES.CUR_DEVELOPMENT);
++ return input.error(PackageManager.INSTALL_FAILED_OLDER_SDK, "Bad package SDK");
+ }
+
+ // Otherwise, we're looking at an incompatible pre-release SDK.
+diff --git a/services/tests/servicestests/src/com/android/server/systemconfig/SystemConfigTest.java b/services/tests/servicestests/src/com/android/server/systemconfig/SystemConfigTest.java
+index 92c7871..687e8f7 100644
+--- a/services/tests/servicestests/src/com/android/server/systemconfig/SystemConfigTest.java
++++ b/services/tests/servicestests/src/com/android/server/systemconfig/SystemConfigTest.java
+@@ -446,14 +446,14 @@
+ + " <library \n"
+ + " name=\"foo\"\n"
+ + " file=\"" + mFooJar + "\"\n"
+- + " on-bootclasspath-before=\"Q\"\n"
++ + " on-bootclasspath-before=\"A\"\n"
+ + " on-bootclasspath-since=\"W\"\n"
+ + " />\n\n"
+ + " </permissions>";
+ parseSharedLibraries(contents);
+ assertFooIsOnlySharedLibrary();
+ SystemConfig.SharedLibraryEntry entry = mSysConfig.getSharedLibraries().get("foo");
+- assertThat(entry.onBootclasspathBefore).isEqualTo("Q");
++ assertThat(entry.onBootclasspathBefore).isEqualTo("A");
+ assertThat(entry.onBootclasspathSince).isEqualTo("W");
+ }
+
diff --git a/tools/finalization/frameworks_base.apply_resource_sdk_int.diff b/tools/finalization/frameworks_base.apply_resource_sdk_int.diff
new file mode 100644
index 0000000..f0576d0
--- /dev/null
+++ b/tools/finalization/frameworks_base.apply_resource_sdk_int.diff
@@ -0,0 +1,24 @@
+From cdb47fc90b8d6860ec1dc5efada1f9ccd471618b Mon Sep 17 00:00:00 2001
+From: Alex Buynytskyy <alexbuy@google.com>
+Date: Tue, 11 Apr 2023 22:12:44 +0000
+Subject: [PATCH] Don't force +1 for resource resolution.
+
+Bug: 277674088
+Fixes: 277674088
+Test: boots, no crashes
+Change-Id: I17e743a0f1cf6f98fddd40c358dea5a8b9cc7723
+---
+
+diff --git a/core/java/android/os/Build.java b/core/java/android/os/Build.java
+index eb47170..4d3e92b 100755
+--- a/core/java/android/os/Build.java
++++ b/core/java/android/os/Build.java
+@@ -493,7 +493,7 @@
+ * @hide
+ */
+ @TestApi
+- public static final int RESOURCES_SDK_INT = SDK_INT + ACTIVE_CODENAMES.length;
++ public static final int RESOURCES_SDK_INT = SDK_INT;
+
+ /**
+ * The current lowest supported value of app target SDK. Applications targeting
diff --git a/tools/finalization/frameworks_base.revert_hack.diff b/tools/finalization/frameworks_base.revert_hack.diff
new file mode 100644
index 0000000..1d147b1
--- /dev/null
+++ b/tools/finalization/frameworks_base.revert_hack.diff
@@ -0,0 +1,125 @@
+From b4ae5c71f327d00081bbb0b7b26d48eb88761fbc Mon Sep 17 00:00:00 2001
+From: Alex Buynytskyy <alexbuy@google.com>
+Date: Tue, 21 Feb 2023 01:43:14 +0000
+Subject: [PATCH] Revert "HACK: allow apps with pre-release SDK RESTRICT AUTOMERGE"
+
+This reverts commit 3c9a5321dc94124367f2f4363d85a8f488f5d4d1.
+
+Reason for revert: not needed anymore
+
+Change-Id: I5c5e3af78a41e7bd8cbc99464dccc57c345105f3
+---
+
+diff --git a/core/java/android/content/pm/PackageParser.java b/core/java/android/content/pm/PackageParser.java
+index 3f4df4d..c15b3e0 100644
+--- a/core/java/android/content/pm/PackageParser.java
++++ b/core/java/android/content/pm/PackageParser.java
+@@ -2628,15 +2628,6 @@
+ return Build.VERSION_CODES.CUR_DEVELOPMENT;
+ }
+
+- // STOPSHIP: hack for the pre-release SDK
+- if (platformSdkCodenames.length == 0
+- && Build.VERSION.KNOWN_CODENAMES.stream().max(String::compareTo).orElse("").equals(
+- targetCode)) {
+- Slog.w(TAG, "Package requires development platform " + targetCode
+- + ", returning current version " + Build.VERSION.SDK_INT);
+- return Build.VERSION.SDK_INT;
+- }
+-
+ // Otherwise, we're looking at an incompatible pre-release SDK.
+ if (platformSdkCodenames.length > 0) {
+ outError[0] = "Requires development platform " + targetCode
+@@ -2708,15 +2699,6 @@
+ return Build.VERSION_CODES.CUR_DEVELOPMENT;
+ }
+
+- // STOPSHIP: hack for the pre-release SDK
+- if (platformSdkCodenames.length == 0
+- && Build.VERSION.KNOWN_CODENAMES.stream().max(String::compareTo).orElse("").equals(
+- minCode)) {
+- Slog.w(TAG, "Package requires min development platform " + minCode
+- + ", returning current version " + Build.VERSION.SDK_INT);
+- return Build.VERSION.SDK_INT;
+- }
+-
+ // Otherwise, we're looking at an incompatible pre-release SDK.
+ if (platformSdkCodenames.length > 0) {
+ outError[0] = "Requires development platform " + minCode
+diff --git a/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java b/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java
+index 8cc4cdb..3e1c5bb 100644
+--- a/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java
++++ b/core/java/android/content/pm/parsing/FrameworkParsingPackageUtils.java
+@@ -316,15 +316,6 @@
+ return input.success(Build.VERSION_CODES.CUR_DEVELOPMENT);
+ }
+
+- // STOPSHIP: hack for the pre-release SDK
+- if (platformSdkCodenames.length == 0
+- && Build.VERSION.KNOWN_CODENAMES.stream().max(String::compareTo).orElse("").equals(
+- minCode)) {
+- Slog.w(TAG, "Parsed package requires min development platform " + minCode
+- + ", returning current version " + Build.VERSION.SDK_INT);
+- return input.success(Build.VERSION.SDK_INT);
+- }
+-
+ // Otherwise, we're looking at an incompatible pre-release SDK.
+ if (platformSdkCodenames.length > 0) {
+ return input.error(PackageManager.INSTALL_FAILED_OLDER_SDK,
+@@ -377,27 +368,19 @@
+ return input.success(targetVers);
+ }
+
+- // If it's a pre-release SDK and the codename matches this platform, it
+- // definitely targets this SDK.
+- if (matchTargetCode(platformSdkCodenames, targetCode)) {
+- return input.success(Build.VERSION_CODES.CUR_DEVELOPMENT);
+- }
+-
+- // STOPSHIP: hack for the pre-release SDK
+- if (platformSdkCodenames.length == 0
+- && Build.VERSION.KNOWN_CODENAMES.stream().max(String::compareTo).orElse("").equals(
+- targetCode)) {
+- Slog.w(TAG, "Parsed package requires development platform " + targetCode
+- + ", returning current version " + Build.VERSION.SDK_INT);
+- return input.success(Build.VERSION.SDK_INT);
+- }
+-
+ try {
+ if (allowUnknownCodenames && UnboundedSdkLevel.isAtMost(targetCode)) {
+ return input.success(Build.VERSION_CODES.CUR_DEVELOPMENT);
+ }
+ } catch (IllegalArgumentException e) {
+- return input.error(PackageManager.INSTALL_FAILED_OLDER_SDK, "Bad package SDK");
++ // isAtMost() throws it when encountering an older SDK codename
++ return input.error(PackageManager.INSTALL_FAILED_OLDER_SDK, e.getMessage());
++ }
++
++ // If it's a pre-release SDK and the codename matches this platform, it
++ // definitely targets this SDK.
++ if (matchTargetCode(platformSdkCodenames, targetCode)) {
++ return input.success(Build.VERSION_CODES.CUR_DEVELOPMENT);
+ }
+
+ // Otherwise, we're looking at an incompatible pre-release SDK.
+diff --git a/services/tests/servicestests/src/com/android/server/systemconfig/SystemConfigTest.java b/services/tests/servicestests/src/com/android/server/systemconfig/SystemConfigTest.java
+index 687e8f7..92c7871 100644
+--- a/services/tests/servicestests/src/com/android/server/systemconfig/SystemConfigTest.java
++++ b/services/tests/servicestests/src/com/android/server/systemconfig/SystemConfigTest.java
+@@ -446,14 +446,14 @@
+ + " <library \n"
+ + " name=\"foo\"\n"
+ + " file=\"" + mFooJar + "\"\n"
+- + " on-bootclasspath-before=\"A\"\n"
++ + " on-bootclasspath-before=\"Q\"\n"
+ + " on-bootclasspath-since=\"W\"\n"
+ + " />\n\n"
+ + " </permissions>";
+ parseSharedLibraries(contents);
+ assertFooIsOnlySharedLibrary();
+ SystemConfig.SharedLibraryEntry entry = mSysConfig.getSharedLibraries().get("foo");
+- assertThat(entry.onBootclasspathBefore).isEqualTo("A");
++ assertThat(entry.onBootclasspathBefore).isEqualTo("Q");
+ assertThat(entry.onBootclasspathSince).isEqualTo("W");
+ }
+
diff --git a/tools/finalization/frameworks_base.revert_resource_sdk_int.diff b/tools/finalization/frameworks_base.revert_resource_sdk_int.diff
new file mode 100644
index 0000000..2ade499
--- /dev/null
+++ b/tools/finalization/frameworks_base.revert_resource_sdk_int.diff
@@ -0,0 +1,27 @@
+From c7e460bb19071d867cd7ca04282ce42694f4f358 Mon Sep 17 00:00:00 2001
+From: Alex Buynytskyy <alexbuy@google.com>
+Date: Wed, 12 Apr 2023 01:06:26 +0000
+Subject: [PATCH] Revert "Don't force +1 for resource resolution."
+
+It's not required for master.
+
+This reverts commit f1cb683988f81579a76ddbf9993848a4a06dd28c.
+
+Bug: 277674088
+Test: boots, no crashes
+Change-Id: Ia1692548f26496fdc6f1e4f0557213c7996d6823
+---
+
+diff --git a/core/java/android/os/Build.java b/core/java/android/os/Build.java
+index 4d3e92b..eb47170 100755
+--- a/core/java/android/os/Build.java
++++ b/core/java/android/os/Build.java
+@@ -493,7 +493,7 @@
+ * @hide
+ */
+ @TestApi
+- public static final int RESOURCES_SDK_INT = SDK_INT;
++ public static final int RESOURCES_SDK_INT = SDK_INT + ACTIVE_CODENAMES.length;
+
+ /**
+ * The current lowest supported value of app target SDK. Applications targeting
diff --git a/tools/finalization/localonly-steps.sh b/tools/finalization/localonly-steps.sh
new file mode 100755
index 0000000..6107b3e
--- /dev/null
+++ b/tools/finalization/localonly-steps.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+set -ex
+
+function finalize_locally() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+ # default target to modify tree and build SDK
+ local m="$top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=aosp_arm64 TARGET_BUILD_VARIANT=userdebug DIST_DIR=out/dist"
+
+ # adb keys
+ $m adb
+ LOGNAME=android-eng HOSTNAME=google.com "$top/out/host/linux-x86/bin/adb" keygen "$top/vendor/google/security/adb/${FINAL_PLATFORM_VERSION}.adb_key"
+
+ # Build Platform SDKs.
+ $top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=sdk TARGET_BUILD_VARIANT=userdebug sdk dist sdk_repo DIST_DIR=out/dist
+
+ # Build Modules SDKs.
+ TARGET_BUILD_VARIANT=userdebug UNBUNDLED_BUILD_SDKS_FROM_SOURCE=true DIST_DIR=out/dist "$top/vendor/google/build/mainline_modules_sdks.sh"
+
+ # Update prebuilts.
+ "$top/prebuilts/build-tools/path/linux-x86/python3" -W ignore::DeprecationWarning "$top/prebuilts/sdk/update_prebuilts.py" --local_mode -f ${FINAL_PLATFORM_SDK_VERSION} -e ${FINAL_MAINLINE_EXTENSION} --bug 1 1
+}
+
+finalize_locally
diff --git a/tools/finalization/step-1.sh b/tools/finalization/step-1.sh
new file mode 100755
index 0000000..0dd4b3a
--- /dev/null
+++ b/tools/finalization/step-1.sh
@@ -0,0 +1,36 @@
+#!/bin/bash
+# Script to perform a 1st step of Android Finalization: API/SDK finalization, create CLs and upload to Gerrit.
+
+set -ex
+
+function commit_step_1_changes() {
+ set +e
+ repo forall -c '\
+ if [[ $(git status --short) ]]; then
+ repo start "$FINAL_PLATFORM_CODENAME-SDK-Finalization" ;
+ git add -A . ;
+ git commit -m "$FINAL_PLATFORM_CODENAME is now $FINAL_PLATFORM_SDK_VERSION and extension version $FINAL_MAINLINE_EXTENSION" \
+ -m "Ignore-AOSP-First: $FINAL_PLATFORM_CODENAME Finalization
+Bug: $FINAL_BUG_ID
+Test: build";
+ repo upload --cbr --no-verify -o nokeycheck -t -y . ;
+ fi'
+}
+
+function finalize_step_1_main() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+ local m="$top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=aosp_arm64 TARGET_BUILD_VARIANT=userdebug"
+
+ # vndk etc finalization
+ source $top/build/make/tools/finalization/finalize-aidl-vndk-sdk-resources.sh
+
+ # move all changes to finalization branch/topic and upload to gerrit
+ commit_step_1_changes
+
+ # build to confirm everything is OK
+ AIDL_FROZEN_REL=true $m
+}
+
+finalize_step_1_main
diff --git a/tools/finalization/step-2.sh b/tools/finalization/step-2.sh
new file mode 100755
index 0000000..d0b24ae
--- /dev/null
+++ b/tools/finalization/step-2.sh
@@ -0,0 +1,34 @@
+#!/bin/bash
+# Script to perform a 2nd step of Android Finalization: REL finalization, create CLs and upload to Gerrit.
+
+function commit_step_2_changes() {
+ repo forall -c '\
+ if [[ $(git status --short) ]]; then
+ repo start "$FINAL_PLATFORM_CODENAME-SDK-Finalization-Rel" ;
+ git add -A . ;
+ git commit -m "$FINAL_PLATFORM_CODENAME/$FINAL_PLATFORM_SDK_VERSION is now REL" \
+ -m "Ignore-AOSP-First: $FINAL_PLATFORM_CODENAME Finalization
+Bug: $FINAL_BUG_ID
+Test: build";
+
+ repo upload --cbr --no-verify -o nokeycheck -t -y . ;
+ fi'
+}
+
+function finalize_step_2_main() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+ local m="$top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=aosp_arm64 TARGET_BUILD_VARIANT=userdebug"
+
+ # prebuilts etc
+ source $top/build/make/tools/finalization/finalize-sdk-rel.sh
+
+ # move all changes to finalization branch/topic and upload to gerrit
+ commit_step_2_changes
+
+ # build to confirm everything is OK
+ AIDL_FROZEN_REL=true $m
+}
+
+finalize_step_2_main
diff --git a/tools/finalization/update-step-1.sh b/tools/finalization/update-step-1.sh
new file mode 100755
index 0000000..b469988
--- /dev/null
+++ b/tools/finalization/update-step-1.sh
@@ -0,0 +1,39 @@
+#!/bin/bash
+# Script to perform a 1st step of Android Finalization: API/SDK finalization, update CLs and upload to Gerrit.
+
+# WIP, does not work yet
+exit 10
+
+set -ex
+
+function update_step_1_changes() {
+ set +e
+ repo forall -c '\
+ if [[ $(git status --short) ]]; then
+ git stash -u ;
+ repo start "$FINAL_PLATFORM_CODENAME-SDK-Finalization" ;
+ git stash pop ;
+ git add -A . ;
+ git commit --amend --no-edit ;
+ repo upload --cbr --no-verify -o nokeycheck -t -y . ;
+ fi'
+}
+
+function update_step_1_main() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+
+ local m="$top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=aosp_arm64 TARGET_BUILD_VARIANT=userdebug"
+
+ # vndk etc finalization
+ source $top/build/make/tools/finalization/finalize-aidl-vndk-sdk-resources.sh
+
+ # update existing CLs and upload to gerrit
+ update_step_1_changes
+
+ # build to confirm everything is OK
+ AIDL_FROZEN_REL=true $m
+}
+
+update_step_1_main
diff --git a/tools/finalization/update-step-2.sh b/tools/finalization/update-step-2.sh
new file mode 100755
index 0000000..d2b8592
--- /dev/null
+++ b/tools/finalization/update-step-2.sh
@@ -0,0 +1,38 @@
+#!/bin/bash
+# Script to perform a 2nd step of Android Finalization: REL finalization, create CLs and upload to Gerrit.
+
+# WIP, does not work yet
+exit 10
+
+set -ex
+
+function update_step_2_changes() {
+ set +e
+ repo forall -c '\
+ if [[ $(git status --short) ]]; then
+ git stash -u ;
+ repo start "$FINAL_PLATFORM_CODENAME-SDK-Finalization-Rel" ;
+ git stash pop ;
+ git add -A . ;
+ git commit --amend --no-edit ;
+ repo upload --cbr --no-verify -o nokeycheck -t -y . ;
+ fi'
+}
+
+function update_step_2_main() {
+ local top="$(dirname "$0")"/../../../..
+ source $top/build/make/tools/finalization/environment.sh
+
+ local m="$top/build/soong/soong_ui.bash --make-mode TARGET_PRODUCT=aosp_arm64 TARGET_BUILD_VARIANT=userdebug"
+
+ # prebuilts etc
+ source $top/build/make/tools/finalization/finalize-sdk-rel.sh
+
+ # move all changes to finalization branch/topic and upload to gerrit
+ update_step_2_changes
+
+ # build to confirm everything is OK
+ AIDL_FROZEN_REL=true $m
+}
+
+update_step_2_main
diff --git a/tools/findleaves.py b/tools/findleaves.py
index 97302e9..86f3f3a 100755
--- a/tools/findleaves.py
+++ b/tools/findleaves.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
#
# Copyright (C) 2009 The Android Open Source Project
#
@@ -121,7 +121,7 @@
results = list(set(perform_find(mindepth, prune, dirlist, filenames)))
results.sort()
for r in results:
- print r
+ print(r)
if __name__ == "__main__":
main(sys.argv)
diff --git a/tools/fs_config/Android.bp b/tools/fs_config/Android.bp
index 8891a0a..55fdca4 100644
--- a/tools/fs_config/Android.bp
+++ b/tools/fs_config/Android.bp
@@ -40,14 +40,28 @@
cflags: ["-Werror"],
}
+python_binary_host {
+ name: "fs_config_generator",
+ srcs: ["fs_config_generator.py"],
+}
+
+python_test_host {
+ name: "test_fs_config_generator",
+ main: "test_fs_config_generator.py",
+ srcs: [
+ "test_fs_config_generator.py",
+ "fs_config_generator.py",
+ ],
+}
+
target_fs_config_gen_filegroup {
name: "target_fs_config_gen",
}
genrule {
name: "oemaids_header_gen",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) oemaid --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) oemaid --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -67,8 +81,8 @@
// TARGET_FS_CONFIG_GEN files.
genrule {
name: "passwd_gen_system",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) passwd --partition=system --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) passwd --partition=system --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -84,8 +98,8 @@
genrule {
name: "passwd_gen_vendor",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) passwd --partition=vendor --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) passwd --partition=vendor --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -102,8 +116,8 @@
genrule {
name: "passwd_gen_odm",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) passwd --partition=odm --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) passwd --partition=odm --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -120,8 +134,8 @@
genrule {
name: "passwd_gen_product",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) passwd --partition=product --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) passwd --partition=product --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -138,8 +152,8 @@
genrule {
name: "passwd_gen_system_ext",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) passwd --partition=system_ext --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) passwd --partition=system_ext --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -159,8 +173,8 @@
// TARGET_FS_CONFIG_GEN files.
genrule {
name: "group_gen_system",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) group --partition=system --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) group --partition=system --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -176,8 +190,8 @@
genrule {
name: "group_gen_vendor",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) group --partition=vendor --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) group --partition=vendor --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -194,8 +208,8 @@
genrule {
name: "group_gen_odm",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) group --partition=odm --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) group --partition=odm --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -212,8 +226,8 @@
genrule {
name: "group_gen_product",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) group --partition=product --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) group --partition=product --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
@@ -230,8 +244,8 @@
genrule {
name: "group_gen_system_ext",
- tool_files: ["fs_config_generator.py"],
- cmd: "$(location fs_config_generator.py) group --partition=system_ext --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
+ tools: ["fs_config_generator"],
+ cmd: "$(location fs_config_generator) group --partition=system_ext --aid-header=$(location :android_filesystem_config_header) $(locations :target_fs_config_gen) >$(out)",
srcs: [
":target_fs_config_gen",
":android_filesystem_config_header",
diff --git a/tools/fs_config/README.md b/tools/fs_config/README.md
index bad5e10..62d6d1e 100644
--- a/tools/fs_config/README.md
+++ b/tools/fs_config/README.md
@@ -69,13 +69,13 @@
From within the `fs_config` directory, unit tests can be executed like so:
- $ python -m unittest test_fs_config_generator.Tests
- .............
+ $ python test_fs_config_generator.py
+ ................
----------------------------------------------------------------------
- Ran 13 tests in 0.004s
-
+ Ran 16 tests in 0.004s
OK
+
One could also use nose2 if they would like:
$ nose2
diff --git a/tools/fs_config/fs_config_generator.py b/tools/fs_config/fs_config_generator.py
index 098fde6..44480b8 100755
--- a/tools/fs_config/fs_config_generator.py
+++ b/tools/fs_config/fs_config_generator.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python2
+#!/usr/bin/env python3
"""Generates config files for Android file system properties.
This script is used for generating configuration files for configuring
@@ -11,7 +11,7 @@
"""
import argparse
-import ConfigParser
+import configparser
import ctypes
import re
import sys
@@ -179,6 +179,10 @@
and self.normalized_value == other.normalized_value \
and self.login_shell == other.login_shell
+ def __repr__(self):
+ return "AID { identifier = %s, value = %s, normalized_value = %s, login_shell = %s }" % (
+ self.identifier, self.value, self.normalized_value, self.login_shell)
+
@staticmethod
def is_friendly(name):
"""Determines if an AID is a freindly name or C define.
@@ -312,7 +316,7 @@
]
_AID_DEFINE = re.compile(r'\s*#define\s+%s.*' % AID.PREFIX)
_RESERVED_RANGE = re.compile(
- r'#define AID_(.+)_RESERVED_\d*_*(START|END)\s+(\d+)')
+ r'#define AID_(.+)_RESERVED_(?:(\d+)_)?(START|END)\s+(\d+)')
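+    # Illustrative lines the pattern above matches (hypothetical examples):
+    #   #define AID_OEM_RESERVED_START 2900      (unnamed range)
+    #   #define AID_OEM_RESERVED_2_START 5000    (range named "2")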
    # AID lines cannot end with _START or _END, i.e. AID_FOO is OK
    # but AID_FOO_START is skipped. Note that AID_FOOSTART is NOT skipped.
@@ -345,6 +349,7 @@
aid_file (file): The open AID header file to parse.
"""
+ ranges_by_name = {}
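+        # ranges_by_name: partition -> range name -> [start, end]; both
+        # endpoints must be seen before the validation pass further below.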
for lineno, line in enumerate(aid_file):
def error_message(msg):
@@ -355,20 +360,24 @@
range_match = self._RESERVED_RANGE.match(line)
if range_match:
- partition = range_match.group(1).lower()
- value = int(range_match.group(3), 0)
+ partition, name, start, value = range_match.groups()
+ partition = partition.lower()
+ if name is None:
+ name = "unnamed"
+ start = start == "START"
+ value = int(value, 0)
if partition == 'oem':
partition = 'vendor'
- if partition in self._ranges:
- if isinstance(self._ranges[partition][-1], int):
- self._ranges[partition][-1] = (
- self._ranges[partition][-1], value)
- else:
- self._ranges[partition].append(value)
- else:
- self._ranges[partition] = [value]
+ if partition not in ranges_by_name:
+ ranges_by_name[partition] = {}
+ if name not in ranges_by_name[partition]:
+ ranges_by_name[partition][name] = [None, None]
+ if ranges_by_name[partition][name][0 if start else 1] is not None:
+ sys.exit(error_message("{} of range {} of partition {} was already defined".format(
+ "Start" if start else "End", name, partition)))
+ ranges_by_name[partition][name][0 if start else 1] = value
if AIDHeaderParser._AID_DEFINE.match(line):
chunks = line.split()
@@ -390,6 +399,21 @@
error_message('{} for "{}"'.format(
exception, identifier)))
+ for partition in ranges_by_name:
+ for name in ranges_by_name[partition]:
+ start = ranges_by_name[partition][name][0]
+ end = ranges_by_name[partition][name][1]
+ if start is None:
+ sys.exit("Range '%s' for partition '%s' had undefined start" % (name, partition))
+ if end is None:
+ sys.exit("Range '%s' for partition '%s' had undefined end" % (name, partition))
+ if start > end:
+ sys.exit("Range '%s' for partition '%s' had start after end. Start: %d, end: %d" % (name, partition, start, end))
+
+ if partition not in self._ranges:
+ self._ranges[partition] = []
+ self._ranges[partition].append((start, end))
+
def _handle_aid(self, identifier, value):
"""Handle an AID C #define.
@@ -439,7 +463,7 @@
# No core AIDs should be within any oem range.
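+        # AID values here are strings (decimal or 0x-prefixed hex); int(x, 0)
+        # converts them before the numeric range check below.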
for aid in self._aid_value_to_name:
for ranges in self._ranges.values():
- if Utils.in_any_range(aid, ranges):
+ if Utils.in_any_range(int(aid, 0), ranges):
name = self._aid_value_to_name[aid]
raise ValueError(
'AID "%s" value: %u within reserved OEM Range: "%s"' %
@@ -545,7 +569,7 @@
# override previous
# sections.
- config = ConfigParser.ConfigParser()
+ config = configparser.ConfigParser()
config.read(file_name)
for section in config.sections():
@@ -589,7 +613,7 @@
ranges = None
- partitions = self._ranges.keys()
+ partitions = list(self._ranges.keys())
partitions.sort(key=len, reverse=True)
for partition in partitions:
if aid.friendly.startswith(partition):
@@ -1049,7 +1073,7 @@
user_binary = bytearray(ctypes.c_uint16(int(user, 0)))
group_binary = bytearray(ctypes.c_uint16(int(group, 0)))
caps_binary = bytearray(ctypes.c_uint64(caps_value))
- path_binary = ctypes.create_string_buffer(path,
+ path_binary = ctypes.create_string_buffer(path.encode(),
path_length_aligned_64).raw
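+        # Python 3 note: ctypes.create_string_buffer() requires bytes, hence
+        # path.encode() above.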
out_file.write(length_binary)
@@ -1145,21 +1169,21 @@
hdr = AIDHeaderParser(args['hdrfile'])
max_name_length = max(len(aid.friendly) + 1 for aid in hdr.aids)
- print AIDArrayGen._GENERATED
- print
- print AIDArrayGen._INCLUDE
- print
- print AIDArrayGen._STRUCT_FS_CONFIG % max_name_length
- print
- print AIDArrayGen._OPEN_ID_ARRAY
+ print(AIDArrayGen._GENERATED)
+ print()
+ print(AIDArrayGen._INCLUDE)
+ print()
+ print(AIDArrayGen._STRUCT_FS_CONFIG % max_name_length)
+ print()
+ print(AIDArrayGen._OPEN_ID_ARRAY)
for aid in hdr.aids:
- print AIDArrayGen._ID_ENTRY % (aid.friendly, aid.identifier)
+ print(AIDArrayGen._ID_ENTRY % (aid.friendly, aid.identifier))
- print AIDArrayGen._CLOSE_FILE_STRUCT
- print
- print AIDArrayGen._COUNT
- print
+ print(AIDArrayGen._CLOSE_FILE_STRUCT)
+ print()
+ print(AIDArrayGen._COUNT)
+ print()
@generator('oemaid')
@@ -1201,15 +1225,15 @@
parser = FSConfigFileParser(args['fsconfig'], hdr_parser.ranges)
- print OEMAidGen._GENERATED
+ print(OEMAidGen._GENERATED)
- print OEMAidGen._FILE_IFNDEF_DEFINE
+ print(OEMAidGen._FILE_IFNDEF_DEFINE)
for aid in parser.aids:
self._print_aid(aid)
- print
+ print()
- print OEMAidGen._FILE_ENDIF
+ print(OEMAidGen._FILE_ENDIF)
def _print_aid(self, aid):
"""Prints a valid #define AID identifier to stdout.
@@ -1221,10 +1245,10 @@
# print the source file location of the AID
found_file = aid.found
if found_file != self._old_file:
- print OEMAidGen._FILE_COMMENT % found_file
+ print(OEMAidGen._FILE_COMMENT % found_file)
self._old_file = found_file
- print OEMAidGen._GENERIC_DEFINE % (aid.identifier, aid.value)
+ print(OEMAidGen._GENERIC_DEFINE % (aid.identifier, aid.value))
@generator('passwd')
@@ -1268,7 +1292,7 @@
return
aids_by_partition = {}
- partitions = hdr_parser.ranges.keys()
+ partitions = list(hdr_parser.ranges.keys())
partitions.sort(key=len, reverse=True)
for aid in aids:
@@ -1307,7 +1331,7 @@
except ValueError as exception:
sys.exit(exception)
- print "%s::%s:%s::/:%s" % (logon, uid, uid, aid.login_shell)
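+            # Emits one passwd(5)-style line per AID, e.g.
+            # "name::1234:1234::/:/bin/sh" (illustrative values).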
+ print("%s::%s:%s::/:%s" % (logon, uid, uid, aid.login_shell))
@generator('group')
@@ -1332,7 +1356,7 @@
except ValueError as exception:
sys.exit(exception)
- print "%s::%s:" % (logon, uid)
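+            # Emits one group(5)-style line per AID, e.g. "name::1234:"
+            # (illustrative values).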
+ print("%s::%s:" % (logon, uid))
@generator('print')
@@ -1355,7 +1379,7 @@
aids.sort(key=lambda item: int(item.normalized_value))
for aid in aids:
- print '%s %s' % (aid.identifier, aid.normalized_value)
+ print('%s %s' % (aid.identifier, aid.normalized_value))
def main():
@@ -1369,7 +1393,7 @@
gens = generator.get()
# for each gen, instantiate and add them as an option
- for name, gen in gens.iteritems():
+ for name, gen in gens.items():
generator_option_parser = subparser.add_parser(name, help=gen.__doc__)
generator_option_parser.set_defaults(which=name)
diff --git a/tools/fs_config/test_fs_config_generator.py b/tools/fs_config/test_fs_config_generator.py
index b7f173e..cbf46a1 100755
--- a/tools/fs_config/test_fs_config_generator.py
+++ b/tools/fs_config/test_fs_config_generator.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
"""Unit test suite for the fs_config_genertor.py tool."""
import tempfile
@@ -64,7 +64,7 @@
def test_aid_header_parser_good(self):
"""Test AID Header Parser good input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_FOO 1000
@@ -78,11 +78,11 @@
temp_file.flush()
parser = AIDHeaderParser(temp_file.name)
- oem_ranges = parser.oem_ranges
+ ranges = parser.ranges
aids = parser.aids
- self.assertTrue((2900, 2999) in oem_ranges)
- self.assertFalse((5000, 6000) in oem_ranges)
+ self.assertTrue((2900, 2999) in ranges["vendor"])
+ self.assertFalse((5000, 6000) in ranges["vendor"])
for aid in aids:
self.assertTrue(aid.normalized_value in ['1000', '1001'])
@@ -91,7 +91,7 @@
def test_aid_header_parser_good_unordered(self):
"""Test AID Header Parser good unordered input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_FOO 1000
@@ -105,11 +105,11 @@
temp_file.flush()
parser = AIDHeaderParser(temp_file.name)
- oem_ranges = parser.oem_ranges
+ ranges = parser.ranges
aids = parser.aids
- self.assertTrue((2900, 2999) in oem_ranges)
- self.assertFalse((5000, 6000) in oem_ranges)
+ self.assertTrue((2900, 2999) in ranges["vendor"])
+ self.assertFalse((5000, 6000) in ranges["vendor"])
for aid in aids:
self.assertTrue(aid.normalized_value in ['1000', '1001'])
@@ -118,7 +118,7 @@
def test_aid_header_parser_bad_aid(self):
"""Test AID Header Parser bad aid input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_FOO "bad"
@@ -131,7 +131,7 @@
def test_aid_header_parser_bad_oem_range(self):
"""Test AID Header Parser bad oem range input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_OEM_RESERVED_START 2900
@@ -145,7 +145,7 @@
def test_aid_header_parser_bad_oem_range_no_end(self):
"""Test AID Header Parser bad oem range (no end) input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_OEM_RESERVED_START 2900
@@ -158,7 +158,7 @@
def test_aid_header_parser_bad_oem_range_no_start(self):
"""Test AID Header Parser bad oem range (no start) input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_OEM_RESERVED_END 2900
@@ -168,10 +168,26 @@
with self.assertRaises(SystemExit):
AIDHeaderParser(temp_file.name)
+ def test_aid_header_parser_bad_oem_range_duplicated(self):
+        """Test AID Header Parser bad oem range (duplicated) input file"""
+
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
+ temp_file.write(
+ textwrap.dedent("""
+ #define AID_OEM_RESERVED_START 2000
+ #define AID_OEM_RESERVED_END 2900
+ #define AID_OEM_RESERVED_START 3000
+ #define AID_OEM_RESERVED_END 3900
+ """))
+ temp_file.flush()
+
+ with self.assertRaises(SystemExit):
+ AIDHeaderParser(temp_file.name)
+
def test_aid_header_parser_bad_oem_range_mismatch_start_end(self):
"""Test AID Header Parser bad oem range mismatched input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_OEM_RESERVED_START 2900
@@ -185,7 +201,7 @@
def test_aid_header_parser_bad_duplicate_ranges(self):
"""Test AID Header Parser exits cleanly on duplicate AIDs"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_FOO 100
@@ -206,7 +222,7 @@
- https://android-review.googlesource.com/#/c/313169
"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
#define AID_APP 10000 /* TODO: switch users over to AID_APP_START */
@@ -241,7 +257,7 @@
def test_fs_config_file_parser_good(self):
"""Test FSConfig Parser good input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
[/system/bin/file]
@@ -262,7 +278,7 @@
"""))
temp_file.flush()
- parser = FSConfigFileParser([temp_file.name], [(5000, 5999)])
+ parser = FSConfigFileParser([temp_file.name], {"oem1": [(5000, 5999)]})
files = parser.files
dirs = parser.dirs
aids = parser.aids
@@ -284,12 +300,12 @@
FSConfig('0777', 'AID_FOO', 'AID_SYSTEM', '0',
'/vendor/path/dir/', temp_file.name))
- self.assertEqual(aid, AID('AID_OEM1', '0x1389', temp_file.name, '/vendor/bin/sh'))
+ self.assertEqual(aid, AID('AID_OEM1', '0x1389', temp_file.name, '/bin/sh'))
def test_fs_config_file_parser_bad(self):
"""Test FSConfig Parser bad input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
[/system/bin/file]
@@ -298,12 +314,12 @@
temp_file.flush()
with self.assertRaises(SystemExit):
- FSConfigFileParser([temp_file.name], [(5000, 5999)])
+ FSConfigFileParser([temp_file.name], {})
def test_fs_config_file_parser_bad_aid_range(self):
"""Test FSConfig Parser bad aid range value input file"""
- with tempfile.NamedTemporaryFile() as temp_file:
+ with tempfile.NamedTemporaryFile(mode='w') as temp_file:
temp_file.write(
textwrap.dedent("""
[AID_OEM1]
@@ -312,4 +328,7 @@
temp_file.flush()
with self.assertRaises(SystemExit):
- FSConfigFileParser([temp_file.name], [(5000, 5999)])
+ FSConfigFileParser([temp_file.name], {"oem1": [(5000, 5999)]})
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/tools/generate_gts_shared_report.py b/tools/generate_gts_shared_report.py
new file mode 100644
index 0000000..3067ae1
--- /dev/null
+++ b/tools/generate_gts_shared_report.py
@@ -0,0 +1,103 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+Checks and generates a report for gts modules that should be open-sourced.
+
+Usage:
+  generate_gts_shared_report.py
+ --gts-test-metalic [android-gts meta_lic]
+ --checkshare [COMPLIANCE_CHECKSHARE]
+ --gts-test-dir [directory of android-gts]
+ --output [output file]
+
+Output example:
+ GTS-Modules: PASS/FAIL
+ GtsIncrementalInstallTestCases_BackgroundProcess
+ GtsUnsignedNetworkStackTestCases
+"""
+import sys
+import argparse
+import subprocess
+import re
+
+def _get_args():
+ """Parses input arguments."""
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ '--gts-test-metalic', required=True,
+ help='license meta_lic file path of android-gts.zip')
+ parser.add_argument(
+ '--checkshare', required=True,
+ help='path of the COMPLIANCE_CHECKSHARE tool')
+ parser.add_argument(
+ '--gts-test-dir', required=True,
+ help='directory of android-gts')
+ parser.add_argument(
+ '-o', '--output', required=True,
+ help='file path of the output report')
+ return parser.parse_args()
+
+def _check_gts_test(checkshare: str, gts_test_metalic: str,
+ gts_test_dir: str) -> tuple[str, set[str]]:
+ """Checks android-gts license.
+
+ Args:
+ checkshare: path of the COMPLIANCE_CHECKSHARE tool
+ gts_test_metalic: license meta_lic file path of android-gts.zip
+ gts_test_dir: directory of android-gts
+
+ Returns:
+ Check result (PASS when android-gts doesn't need to be shared,
+ FAIL when some gts modules need to be shared) and gts modules
+ that need to be shared.
+ """
+ cmd = f'{checkshare} {gts_test_metalic}'
+ proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE)
+ _, str_stderr = map(lambda b: b.decode(), proc.communicate())
+ if proc.returncode == 0:
+        return 'PASS', set()
+ open_source_modules = set()
+ for error_line in str_stderr.split('\n'):
+        # Skip empty lines
+ if not error_line:
+ continue
+ module_meta_lic = error_line.strip().split()[0]
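+            # Assumption: each non-empty stderr line from checkshare starts
+            # with the path of an offending module's .meta_lic file.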
+ groups = re.fullmatch(
+ re.compile(f'.*/{gts_test_dir}/(.*)'), module_meta_lic)
+ if groups:
+ open_source_modules.add(
+ groups[1].removesuffix('.meta_lic'))
+ return 'FAIL', open_source_modules
+
+
+def main(argv):
+ args = _get_args()
+
+ gts_test_metalic = args.gts_test_metalic
+ output_file = args.output
+ checkshare = args.checkshare
+ gts_test_dir = args.gts_test_dir
+
+ with open(output_file, 'w') as file:
+ result, open_source_modules = _check_gts_test(
+ checkshare, gts_test_metalic, gts_test_dir)
+ file.write(f'GTS-Modules: {result}\n')
+ for open_source_module in open_source_modules:
+ file.write(f'\t{open_source_module}\n')
+
+if __name__ == "__main__":
+ main(sys.argv)
diff --git a/tools/java-event-log-tags.py b/tools/java-event-log-tags.py
index 4bd6d2b..bbd65fa 100755
--- a/tools/java-event-log-tags.py
+++ b/tools/java-event-log-tags.py
@@ -100,7 +100,8 @@
" * Source file: %s\n"
" */\n\n" % (fn,))
-buffer.write("package %s;\n\n" % (tagfile.options["java_package"][0],))
+# .rstrip(";") to avoid an empty top-level statement errorprone error
+buffer.write("package %s;\n\n" % (tagfile.options["java_package"][0].rstrip(";"),))
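+# (e.g. a tags file declaring java_package as "com.android.foo;" would
+# otherwise emit "package com.android.foo;;", an empty top-level statement)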
basename, _ = os.path.splitext(os.path.basename(fn))
diff --git a/tools/java-layers.py b/tools/java-layers.py
deleted file mode 100755
index b3aec2b..0000000
--- a/tools/java-layers.py
+++ /dev/null
@@ -1,257 +0,0 @@
-#!/usr/bin/env python
-
-import os
-import re
-import sys
-
-def fail_with_usage():
- sys.stderr.write("usage: java-layers.py DEPENDENCY_FILE SOURCE_DIRECTORIES...\n")
- sys.stderr.write("\n")
- sys.stderr.write("Enforces layering between java packages. Scans\n")
- sys.stderr.write("DIRECTORY and prints errors when the packages violate\n")
- sys.stderr.write("the rules defined in the DEPENDENCY_FILE.\n")
- sys.stderr.write("\n")
- sys.stderr.write("Prints a warning when an unknown package is encountered\n")
- sys.stderr.write("on the assumption that it should fit somewhere into the\n")
- sys.stderr.write("layering.\n")
- sys.stderr.write("\n")
- sys.stderr.write("DEPENDENCY_FILE format\n")
- sys.stderr.write(" - # starts comment\n")
- sys.stderr.write(" - Lines consisting of two java package names: The\n")
- sys.stderr.write(" first package listed must not contain any references\n")
- sys.stderr.write(" to any classes present in the second package, or any\n")
- sys.stderr.write(" of its dependencies.\n")
- sys.stderr.write(" - Lines consisting of one java package name: The\n")
- sys.stderr.write(" packge is assumed to be a high level package and\n")
- sys.stderr.write(" nothing may depend on it.\n")
- sys.stderr.write(" - Lines consisting of a dash (+) followed by one java\n")
- sys.stderr.write(" package name: The package is considered a low level\n")
- sys.stderr.write(" package and may not import any of the other packages\n")
- sys.stderr.write(" listed in the dependency file.\n")
- sys.stderr.write(" - Lines consisting of a plus (-) followed by one java\n")
- sys.stderr.write(" package name: The package is considered \'legacy\'\n")
- sys.stderr.write(" and excluded from errors.\n")
- sys.stderr.write("\n")
- sys.exit(1)
-
-class Dependency:
- def __init__(self, filename, lineno, lower, top, lowlevel, legacy):
- self.filename = filename
- self.lineno = lineno
- self.lower = lower
- self.top = top
- self.lowlevel = lowlevel
- self.legacy = legacy
- self.uppers = []
- self.transitive = set()
-
- def matches(self, imp):
- for d in self.transitive:
- if imp.startswith(d):
- return True
- return False
-
-class Dependencies:
- def __init__(self, deps):
- def recurse(obj, dep, visited):
- global err
- if dep in visited:
- sys.stderr.write("%s:%d: Circular dependency found:\n"
- % (dep.filename, dep.lineno))
- for v in visited:
- sys.stderr.write("%s:%d: Dependency: %s\n"
- % (v.filename, v.lineno, v.lower))
- err = True
- return
- visited.append(dep)
- for upper in dep.uppers:
- obj.transitive.add(upper)
- if upper in deps:
- recurse(obj, deps[upper], visited)
- self.deps = deps
- self.parts = [(dep.lower.split('.'),dep) for dep in deps.itervalues()]
- # transitive closure of dependencies
- for dep in deps.itervalues():
- recurse(dep, dep, [])
- # disallow everything from the low level components
- for dep in deps.itervalues():
- if dep.lowlevel:
- for d in deps.itervalues():
- if dep != d and not d.legacy:
- dep.transitive.add(d.lower)
- # disallow the 'top' components everywhere but in their own package
- for dep in deps.itervalues():
- if dep.top and not dep.legacy:
- for d in deps.itervalues():
- if dep != d and not d.legacy:
- d.transitive.add(dep.lower)
- for dep in deps.itervalues():
- dep.transitive = set([x+"." for x in dep.transitive])
- if False:
- for dep in deps.itervalues():
- print "-->", dep.lower, "-->", dep.transitive
-
- # Lookup the dep object for the given package. If pkg is a subpackage
- # of one with a rule, that one will be returned. If no matches are found,
- # None is returned.
- def lookup(self, pkg):
- # Returns the number of parts that match
- def compare_parts(parts, pkg):
- if len(parts) > len(pkg):
- return 0
- n = 0
- for i in range(0, len(parts)):
- if parts[i] != pkg[i]:
- return 0
- n = n + 1
- return n
- pkg = pkg.split(".")
- matched = 0
- result = None
- for (parts,dep) in self.parts:
- x = compare_parts(parts, pkg)
- if x > matched:
- matched = x
- result = dep
- return result
-
-def parse_dependency_file(filename):
- global err
- f = file(filename)
- lines = f.readlines()
- f.close()
- def lineno(s, i):
- i[0] = i[0] + 1
- return (i[0],s)
- n = [0]
- lines = [lineno(x,n) for x in lines]
- lines = [(n,s.split("#")[0].strip()) for (n,s) in lines]
- lines = [(n,s) for (n,s) in lines if len(s) > 0]
- lines = [(n,s.split()) for (n,s) in lines]
- deps = {}
- for n,words in lines:
- if len(words) == 1:
- lower = words[0]
- top = True
- legacy = False
- lowlevel = False
- if lower[0] == '+':
- lower = lower[1:]
- top = False
- lowlevel = True
- elif lower[0] == '-':
- lower = lower[1:]
- legacy = True
- if lower in deps:
- sys.stderr.write(("%s:%d: Package '%s' already defined on"
- + " line %d.\n") % (filename, n, lower, deps[lower].lineno))
- err = True
- else:
- deps[lower] = Dependency(filename, n, lower, top, lowlevel, legacy)
- elif len(words) == 2:
- lower = words[0]
- upper = words[1]
- if lower in deps:
- dep = deps[lower]
- if dep.top:
- sys.stderr.write(("%s:%d: Can't add dependency to top level package "
- + "'%s'\n") % (filename, n, lower))
- err = True
- else:
- dep = Dependency(filename, n, lower, False, False, False)
- deps[lower] = dep
- dep.uppers.append(upper)
- else:
- sys.stderr.write("%s:%d: Too many words on line starting at \'%s\'\n" % (
- filename, n, words[2]))
- err = True
- return Dependencies(deps)
-
-def find_java_files(srcs):
- result = []
- for d in srcs:
- if d[0] == '@':
- f = file(d[1:])
- result.extend([fn for fn in [s.strip() for s in f.readlines()]
- if len(fn) != 0])
- f.close()
- else:
- for root, dirs, files in os.walk(d):
- result.extend([os.sep.join((root,f)) for f in files
- if f.lower().endswith(".java")])
- return result
-
-COMMENTS = re.compile("//.*?\n|/\*.*?\*/", re.S)
-PACKAGE = re.compile("package\s+(.*)")
-IMPORT = re.compile("import\s+(.*)")
-
-def examine_java_file(deps, filename):
- global err
- # Yes, this is a crappy java parser. Write a better one if you want to.
- f = file(filename)
- text = f.read()
- f.close()
- text = COMMENTS.sub("", text)
- index = text.find("{")
- if index < 0:
- sys.stderr.write(("%s: Error: Unable to parse java. Can't find class "
- + "declaration.\n") % filename)
- err = True
- return
- text = text[0:index]
- statements = [s.strip() for s in text.split(";")]
- # First comes the package declaration. Then iterate while we see import
- # statements. Anything else is either bad syntax that we don't care about
- # because the compiler will fail, or the beginning of the class declaration.
- m = PACKAGE.match(statements[0])
- if not m:
- sys.stderr.write(("%s: Error: Unable to parse java. Missing package "
- + "statement.\n") % filename)
- err = True
- return
- pkg = m.group(1)
- imports = []
- for statement in statements[1:]:
- m = IMPORT.match(statement)
- if not m:
- break
- imports.append(m.group(1))
- # Do the checking
- if False:
- print filename
- print "'%s' --> %s" % (pkg, imports)
- dep = deps.lookup(pkg)
- if not dep:
- sys.stderr.write(("%s: Error: Package does not appear in dependency file: "
- + "%s\n") % (filename, pkg))
- err = True
- return
- for imp in imports:
- if dep.matches(imp):
- sys.stderr.write("%s: Illegal import in package '%s' of '%s'\n"
- % (filename, pkg, imp))
- err = True
-
-err = False
-
-def main(argv):
- if len(argv) < 3:
- fail_with_usage()
- deps = parse_dependency_file(argv[1])
-
- if err:
- sys.exit(1)
-
- java = find_java_files(argv[2:])
- for filename in java:
- examine_java_file(deps, filename)
-
- if err:
- sys.stderr.write("%s: Using this file as dependency file.\n" % argv[1])
- sys.exit(1)
-
- sys.exit(0)
-
-if __name__ == "__main__":
- main(sys.argv)
-
diff --git a/tools/list_files.py b/tools/list_files.py
new file mode 100644
index 0000000..3afa81f
--- /dev/null
+++ b/tools/list_files.py
@@ -0,0 +1,102 @@
+#!/usr/bin/env python
+#
+# Copyright (C) 2023 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import List
+from glob import glob
+from pathlib import Path
+from os.path import join, relpath
+import argparse
+
+class FileLister:
+ def __init__(self, args) -> None:
+ self.out_file = args.out_file
+
+ self.folder_dir = args.dir
+ self.extensions = [e if e.startswith(".") else "." + e for e in args.extensions]
+ self.root = args.root
+ self.files_list = list()
+
+ def get_files(self) -> None:
+        """Collects all files under the input directory, recursing into subdirectories.
+
+        Sets self.files_list to a list of file path strings (files only,
+        directories themselves are excluded), sorted alphabetically by path.
+
+        Raises:
+            FileNotFoundError: The input directory does not exist.
+        """
+
+ if not dir_exists(self.folder_dir):
+ raise FileNotFoundError(f"Directory {self.folder_dir} does not exist")
+
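+        # Ensure the glob pattern ends with "**" so glob(..., recursive=True)
+        # walks the entire tree under the input directory.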
+        if self.folder_dir[-2:] != "**":
+ self.folder_dir = join(self.folder_dir, "**")
+
+ self.files_list = list()
+ for file in sorted(glob(self.folder_dir, recursive=True)):
+ if Path(file).is_file():
+ if self.root:
+ file = join(self.root, relpath(file, self.folder_dir[:-2]))
+ self.files_list.append(file)
+
+
+ def list(self) -> None:
+ self.get_files()
+ self.files_list = [f for f in self.files_list if not self.extensions or Path(f).suffix in self.extensions]
+ self.write()
+
+ def write(self) -> None:
+ if self.out_file == "":
+ pprint(self.files_list)
+ else:
+ write_lines(self.out_file, self.files_list)
+
+###
+# Helper functions
+###
+def pprint(l: List[str]) -> None:
+ for line in l:
+ print(line)
+
+def dir_exists(dir: str) -> bool:
+ return Path(dir).exists()
+
+def write_lines(out_file: str, lines: List[str]) -> None:
+ with open(out_file, "w+") as f:
+ f.writelines(line + '\n' for line in lines)
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+    parser.add_argument('dir', action='store', type=str,
+                        help="directory whose files, including those in subdirectories, are listed")
+    parser.add_argument('--out', dest='out_file',
+                        action='store', default="", type=str,
+                        help="optional output file for the file list. If not set, prints to the console")
+    parser.add_argument('--root', dest='root',
+                        action='store', default="", type=str,
+                        help="optional root path that replaces the input directory prefix in each output path")
+ parser.add_argument('--extensions', nargs='*', default=list(), dest='extensions',
+ help="Extensions to include in the output. If not set, all files are included")
+
+ args = parser.parse_args()
+
+ file_lister = FileLister(args)
+ file_lister.list()
diff --git a/tools/normalize_path.py b/tools/normalize_path.py
index 6c4d548..363df1f 100755
--- a/tools/normalize_path.py
+++ b/tools/normalize_path.py
@@ -22,8 +22,8 @@
if len(sys.argv) > 1:
for p in sys.argv[1:]:
- print os.path.normpath(p)
+ print(os.path.normpath(p))
sys.exit(0)
for line in sys.stdin:
- print os.path.normpath(line.strip())
+ print(os.path.normpath(line.strip()))
diff --git a/tools/parsedeps.py b/tools/parsedeps.py
deleted file mode 100755
index 32d8ad7..0000000
--- a/tools/parsedeps.py
+++ /dev/null
@@ -1,151 +0,0 @@
-#!/usr/bin/env python
-# vim: ts=2 sw=2
-
-import optparse
-import re
-import sys
-
-
-class Dependency:
- def __init__(self, tgt):
- self.tgt = tgt
- self.pos = ""
- self.prereqs = set()
- self.visit = 0
-
- def add(self, prereq):
- self.prereqs.add(prereq)
-
-
-class Dependencies:
- def __init__(self):
- self.lines = {}
- self.__visit = 0
- self.count = 0
-
- def add(self, tgt, prereq):
- t = self.lines.get(tgt)
- if not t:
- t = Dependency(tgt)
- self.lines[tgt] = t
- p = self.lines.get(prereq)
- if not p:
- p = Dependency(prereq)
- self.lines[prereq] = p
- t.add(p)
- self.count = self.count + 1
-
- def setPos(self, tgt, pos):
- t = self.lines.get(tgt)
- if not t:
- t = Dependency(tgt)
- self.lines[tgt] = t
- t.pos = pos
-
- def get(self, tgt):
- if self.lines.has_key(tgt):
- return self.lines[tgt]
- else:
- return None
-
- def __iter__(self):
- return self.lines.iteritems()
-
- def trace(self, tgt, prereq):
- self.__visit = self.__visit + 1
- d = self.lines.get(tgt)
- if not d:
- return
- return self.__trace(d, prereq)
-
- def __trace(self, d, prereq):
- if d.visit == self.__visit:
- return d.trace
- if d.tgt == prereq:
- return [ [ d ], ]
- d.visit = self.__visit
- result = []
- for pre in d.prereqs:
- recursed = self.__trace(pre, prereq)
- for r in recursed:
- result.append([ d ] + r)
- d.trace = result
- return result
-
-def help():
- print "Commands:"
- print " dep TARGET Print the prerequisites for TARGET"
- print " trace TARGET PREREQ Print the paths from TARGET to PREREQ"
-
-
-def main(argv):
- opts = optparse.OptionParser()
- opts.add_option("-i", "--interactive", action="store_true", dest="interactive",
- help="Interactive mode")
- (options, args) = opts.parse_args()
-
- deps = Dependencies()
-
- filename = args[0]
- print "Reading %s" % filename
-
- if True:
- f = open(filename)
- for line in f:
- line = line.strip()
- if len(line) > 0:
- if line[0] == '#':
- pos,tgt = line.rsplit(":", 1)
- pos = pos[1:].strip()
- tgt = tgt.strip()
- deps.setPos(tgt, pos)
- else:
- (tgt,prereq) = line.split(':', 1)
- tgt = tgt.strip()
- prereq = prereq.strip()
- deps.add(tgt, prereq)
- f.close()
-
- print "Read %d dependencies. %d targets." % (deps.count, len(deps.lines))
- while True:
- line = raw_input("target> ")
- if not line.strip():
- continue
- split = line.split()
- cmd = split[0]
- if len(split) == 2 and cmd == "dep":
- tgt = split[1]
- d = deps.get(tgt)
- if d:
- for prereq in d.prereqs:
- print prereq.tgt
- elif len(split) == 3 and cmd == "trace":
- tgt = split[1]
- prereq = split[2]
- if False:
- print "from %s to %s" % (tgt, prereq)
- trace = deps.trace(tgt, prereq)
- if trace:
- width = 0
- for g in trace:
- for t in g:
- if len(t.tgt) > width:
- width = len(t.tgt)
- for g in trace:
- for t in g:
- if t.pos:
- print t.tgt, " " * (width-len(t.tgt)), " #", t.pos
- else:
- print t.tgt
- print
- else:
- help()
-
-if __name__ == "__main__":
- try:
- main(sys.argv)
- except KeyboardInterrupt:
- print
- except EOFError:
- print
-
diff --git a/tools/post_process_props.py b/tools/post_process_props.py
index 38d17a8..31a460d 100755
--- a/tools/post_process_props.py
+++ b/tools/post_process_props.py
@@ -43,7 +43,7 @@
"""Validate GRF properties if exist.
If ro.board.first_api_level is defined, check if its value is valid for the
- sdk version.
+ sdk version. This is only for the release version.
Also, validate the value of ro.board.api_level if defined.
Returns:
@@ -51,6 +51,7 @@
"""
grf_api_level = prop_list.get_value("ro.board.first_api_level")
board_api_level = prop_list.get_value("ro.board.api_level")
+ platform_version_codename = prop_list.get_value("ro.build.version.codename")
if not grf_api_level:
if board_api_level:
@@ -61,6 +62,18 @@
return True
grf_api_level = int(grf_api_level)
+ if board_api_level:
+ board_api_level = int(board_api_level)
+ if board_api_level < grf_api_level:
+      sys.stderr.write("error: ro.board.api_level(%d) must be greater than "
+                       "or equal to ro.board.first_api_level(%d)\n"
+ % (board_api_level, grf_api_level))
+ return False
+
+ # skip sdk version validation for dev-stage non-REL devices
+ if platform_version_codename != "REL":
+ return True
+
if grf_api_level > sdk_version:
sys.stderr.write("error: ro.board.first_api_level(%d) must be less than "
"or equal to ro.build.version.sdk(%d)\n"
@@ -68,12 +81,10 @@
return False
if board_api_level:
- board_api_level = int(board_api_level)
- if board_api_level < grf_api_level or board_api_level > sdk_version:
- sys.stderr.write("error: ro.board.api_level(%d) must be neither less "
- "than ro.board.first_api_level(%d) nor greater than "
- "ro.build.version.sdk(%d)\n"
- % (board_api_level, grf_api_level, sdk_version))
+ if board_api_level > sdk_version:
+ sys.stderr.write("error: ro.board.api_level(%d) must be less than or "
+ "equal to ro.build.version.sdk(%d)\n"
+ % (board_api_level, sdk_version))
return False
return True
diff --git a/tools/protos/Android.bp b/tools/protos/Android.bp
new file mode 100644
index 0000000..c6ad19e
--- /dev/null
+++ b/tools/protos/Android.bp
@@ -0,0 +1,32 @@
+// Copyright 2023 Google Inc. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package {
+ default_applicable_licenses: ["Android-Apache-2.0"],
+}
+
+python_library_host {
+ name: "metadata_file_proto_py",
+ version: {
+ py3: {
+ enabled: true,
+ },
+ },
+ srcs: [
+ "metadata_file.proto",
+ ],
+ proto: {
+ canonical_path_from_root: false,
+ },
+}
diff --git a/tools/protos/metadata_file.proto b/tools/protos/metadata_file.proto
new file mode 100644
index 0000000..ac1129a
--- /dev/null
+++ b/tools/protos/metadata_file.proto
@@ -0,0 +1,281 @@
+// Copyright (C) 2023 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+syntax = "proto2";
+
+package metadata_file;
+
+// Proto definition of METADATA files of packages in AOSP codebase.
+message Metadata {
+ // Name of the package.
+ optional string name = 1;
+
+ // A short description (a few lines) of the package.
+ // Example: "Handles location lookups, throttling, batching, etc."
+ optional string description = 2;
+
+ // Specifies additional data about third-party packages.
+ optional ThirdParty third_party = 3;
+}
+
+message ThirdParty {
+ // URL(s) associated with the package.
+ //
+ // At a minimum, all packages must specify a URL which identifies where it
+ // came from, containing a type of: ARCHIVE, GIT or OTHER. Typically,
+ // a package should contain only a single URL from these types. Occasionally,
+ // a package may be broken across multiple archive files for whatever reason,
+ // in which case having multiple ARCHIVE URLs is okay. However, this should
+ // not be used to combine different logical packages that are versioned and
+ // possibly licensed differently.
+ repeated URL url = 1;
+
+ // The package version. In order of preference, this should contain:
+ // - If the package comes from Git or another source control system,
+ // a specific tag or revision in source control, such as "r123" or
+ // "58e27d2". This MUST NOT be a mutable ref such as a branch name.
+ // - a released package version such as "1.0", "2.3-beta", etc.
+ // - the date the package was retrieved, formatted as "As of YYYY-MM-DD".
+ optional string version = 2;
+
+ // The date of the change in which the package was last upgraded from
+ // upstream.
+ // This should only identify package upgrades from upstream, not local
+ // modifications. This may identify the date of either the original or
+ // merged change.
+ //
+ // Note: this is NOT the date that this version of the package was released
+ // externally.
+ optional Date last_upgrade_date = 3;
+
+ // License type that identifies how the package may be used.
+ optional LicenseType license_type = 4;
+
+ // An additional note explaining the licensing of this package. This is most
+ // commonly used with commercial license.
+ optional string license_note = 5;
+
+ // Description of local changes that have been made to the package. This does
+ // not need to (and in most cases should not) attempt to include an exhaustive
+ // list of all changes, but may instead direct readers to review the local
+ // commit history, a collection of patch files, a separate README.md (or
+ // similar) document, etc.
+ // Note: Use of this field to store IDs of advisories fixed with a backported
+ // patch is deprecated, use "security.mitigated_security_patch" instead.
+ optional string local_modifications = 6;
+
+ // Security related metadata including risk category and any special
+ // instructions for using the package, as determined by an ISE-TPS review.
+ optional Security security = 7;
+
+ // The type of directory this metadata represents.
+ optional DirectoryType type = 8 [default = PACKAGE];
+
+ // The homepage for the package. This will eventually replace
+ // `url { type: HOMEPAGE }`
+ optional string homepage = 9;
+
+ // SBOM information of the package. It is mandatory for prebuilt packages.
+ oneof sbom {
+ // Reference to external SBOM document provided as URL.
+ SBOMRef sbom_ref = 10;
+ }
+
+}
+
+// URL associated with a third-party package.
+message URL {
+ enum Type {
+ // The homepage for the package. For example, "https://bazel.io/". This URL
+ // is optional, but encouraged to help disambiguate similarly named packages
+ // or to get more information about the package. This is especially helpful
+ // when no other URLs provide human readable resources (such as git:// or
+ // sso:// URLs).
+ HOMEPAGE = 1;
+
+ // The URL of the archive containing the source code for the package, for
+ // example a zip or tgz file.
+ ARCHIVE = 2;
+
+ // The URL of the upstream git repository this package is retrieved from.
+ // For example:
+ // - https://github.com/git/git.git
+ // - git://git.kernel.org/pub/scm/git/git.git
+ //
+ // Use of a git URL requires that the package "version" value must specify a
+ // specific git tag or revision.
+ GIT = 3;
+
+ // The URL of the upstream SVN repository this package is retrieved from.
+ // For example:
+ // - http://llvm.org/svn/llvm-project/llvm/
+ //
+ // Use of an SVN URL requires that the package "version" value must specify
+ // a specific SVN tag or revision.
+ SVN = 4;
+
+ // The URL of the upstream mercurial repository this package is retrieved
+ // from. For example:
+ // - https://mercurial-scm.org/repo/evolve
+ //
+ // Use of a mercurial URL requires that the package "version" value must
+ // specify a specific tag or revision.
+ HG = 5;
+
+ // The URL of the upstream darcs repository this package is retrieved
+ // from. For example:
+ // - https://hub.darcs.net/hu.dwim/hu.dwim.util
+ //
+ // Use of a DARCS URL requires that the package "version" value must
+ // specify a specific tag or revision.
+ DARCS = 6;
+
+ PIPER = 7;
+
+ // A URL that does not fit any other type. This may also indicate that the
+ // source code was received via email or some other out-of-band way. This is
+ // most commonly used with commercial software received directly from the
+ // vendor. In the case of email, the URL value can be used to provide
+ // additional information about how it was received.
+ OTHER = 8;
+
+ // The URL identifying where the local copy of the package source code can
+ // be found.
+ //
+ // Typically, the metadata files describing a package reside in the same
+ // directory as the source code for the package. In a few rare cases where
+ // they are separate, the LOCAL_SOURCE URL identifies where to find the
+ // source code. This only describes where to find the local copy of the
+ // source; there should always be an additional URL describing where the
+ // package was retrieved from.
+ //
+ // Examples:
+ // - https://android.googlesource.com/platform/external/apache-http/
+ LOCAL_SOURCE = 9;
+ }
+
+ // The type of resource this URL identifies.
+ optional Type type = 1;
+
+ // The actual URL value. URLs should be absolute and start with 'http://' or
+ // 'https://' (or occasionally 'git://' or 'ftp://' where appropriate).
+ optional string value = 2;
+}
+
+// License type that identifies how the packages may be used.
+enum LicenseType {
+ BY_EXCEPTION_ONLY = 1;
+ NOTICE = 2;
+ PERMISSIVE = 3;
+ RECIPROCAL = 4;
+ RESTRICTED_IF_STATICALLY_LINKED = 5;
+ RESTRICTED = 6;
+ UNENCUMBERED = 7;
+}
+
+// Identifies security related metadata including risk category and any special
+// instructions for using the package.
+message Security {
+ // Security risk category for a package, as determined by an ISE-TPS review.
+ enum Category {
+ CATEGORY_UNSPECIFIED = 0;
+
+ // Package should only be used in a sandboxed environment.
+ // Package should have restricted visibility.
+ SANDBOXED_ONLY = 1;
+
+ // Package should not be used to process user content. It is considered
+ // safe to use to process trusted data only. Package should have restricted
+ // visibility.
+ TRUSTED_DATA_ONLY = 2;
+
+ // Package is considered safe to use.
+ REVIEWED_AND_SECURE = 3;
+ }
+
+ // Identifies the security risk category for the package. This will be
+ // provided by the ISE-TPS team as the result of a security review of the
+ // package.
+ optional Category category = 1;
+
+ // An additional security note for the package.
+ optional string note = 2;
+
+ // Text tag to categorize the package. It's currently used by security to:
+ // - to disable OSV (https://osv.dev)
+ // support via the `OSV:disable` tag
+ // - to attach CPE to their corresponding packages, for vulnerability
+ // monitoring:
+ //
+ // Please do document your usecase here should you want to add one.
+ repeated string tag = 3;
+
+ // ID of advisories fixed with a mitigated patch, for example CVE-2018-1111.
+ repeated string mitigated_security_patch = 4;
+}
+
+enum DirectoryType {
+ UNDEFINED = 0;
+
+ // This directory represents a package.
+ PACKAGE = 1;
+
+ // This directory is designed to organize multiple third-party PACKAGE
+ // directories.
+ GROUP = 2;
+
+ // This directory contains several PACKAGE directories representing
+ // different versions of the same third-party project.
+ VERSIONS = 3;
+}
+
+// Represents a whole or partial calendar date, such as a birthday. The time of
+// day and time zone are either specified elsewhere or are insignificant. The
+// date is relative to the Gregorian Calendar. This can represent one of the
+// following:
+//
+// * A full date, with non-zero year, month, and day values.
+// * A month and day, with a zero year (for example, an anniversary).
+// * A year on its own, with a zero month and a zero day.
+// * A year and month, with a zero day (for example, a credit card expiration
+// date).
+message Date {
+ // Year of the date. Must be from 1 to 9999, or 0 to specify a date without
+ // a year.
+ optional int32 year = 1;
+ // Month of a year. Must be from 1 to 12, or 0 to specify a year without a
+ // month and day.
+ optional int32 month = 2;
+ // Day of a month. Must be from 1 to 31 and valid for the year and month, or 0
+ // to specify a year by itself or a year and month where the day isn't
+ // significant.
+ optional int32 day = 3;
+}
+
+// Reference to external SBOM document and element corresponding to the package.
+// See https://spdx.github.io/spdx-spec/v2.3/document-creation-information/#66-external-document-references-field
+message SBOMRef {
+ // The URL that points to the SBOM document of the upstream package of this
+ // third_party package.
+ optional string url = 1;
+ // Checksum of the SBOM document the url field points to.
+ // Format: e.g. SHA1:<checksum>, or any algorithm defined in
+ // https://spdx.github.io/spdx-spec/v2.3/file-information/#8.4
+ optional string checksum = 2;
+ // SPDXID of the upstream package/file defined in the SBOM document the url field points to.
+ // Format: SPDXRef-[a-zA-Z0-9.-]+, see
+ // https://spdx.github.io/spdx-spec/v2.3/package-information/#72-package-spdx-identifier-field or
+ // https://spdx.github.io/spdx-spec/v2.3/file-information/#82-file-spdx-identifier-field
+ optional string element_id = 3;
+}
\ No newline at end of file
diff --git a/tools/rbcrun/Android.bp b/tools/rbcrun/Android.bp
index 90173ac..fcc33ef 100644
--- a/tools/rbcrun/Android.bp
+++ b/tools/rbcrun/Android.bp
@@ -19,7 +19,7 @@
blueprint_go_binary {
name: "rbcrun",
- srcs: ["cmd/rbcrun.go"],
+ srcs: ["rbcrun/rbcrun.go"],
deps: ["rbcrun-module"],
}
diff --git a/tools/rbcrun/go.mod b/tools/rbcrun/go.mod
index a029eb4..5ae2972 100644
--- a/tools/rbcrun/go.mod
+++ b/tools/rbcrun/go.mod
@@ -1,9 +1,6 @@
module rbcrun
-require (
- github.com/nbutton23/zxcvbn-go v0.0.0-20180912185939-ae427f1e4c1d // indirect
- go.starlark.net v0.0.0-20201006213952-227f4aabceb5
-)
+require go.starlark.net v0.0.0-20201006213952-227f4aabceb5
replace go.starlark.net => ../../../../external/starlark-go
diff --git a/tools/rbcrun/go.sum b/tools/rbcrun/go.sum
index db4d51e..10761a8 100644
--- a/tools/rbcrun/go.sum
+++ b/tools/rbcrun/go.sum
@@ -1,11 +1,8 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
-github.com/chzyer/logex v1.1.10 h1:Swpa1K6QvQznwJRcfTfQJmTE72DqScAa40E+fbHEXEE=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
-github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e h1:fY5BOSpyZCqRo5OhCuC+XN+r/bBCmeuuJtjz+bCNIf8=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
-github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1 h1:q763qf9huN11kDQavWsoZXJNW3xEE4JJyHa5Q25/sd8=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
@@ -26,8 +23,6 @@
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/nbutton23/zxcvbn-go v0.0.0-20180912185939-ae427f1e4c1d h1:AREM5mwr4u1ORQBMvzfzBgpsctsbQikCVpvC+tX285E=
-github.com/nbutton23/zxcvbn-go v0.0.0-20180912185939-ae427f1e4c1d/go.mod h1:o96djdrsSGy3AWPyBgZMAGfxZNfgntdJG+11KU4QvbU=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -44,9 +39,6 @@
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae h1:Ih9Yo4hSPImZOpfGuA4bR/ORKTAbhZo2AbWNRCnevdo=
-golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f h1:+Nyd8tzPX9R7BWHguqsrbFdRx3WQ/1ib8I44HXV5yTA=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
diff --git a/tools/rbcrun/host.go b/tools/rbcrun/host.go
index c6e89f0..32afa45 100644
--- a/tools/rbcrun/host.go
+++ b/tools/rbcrun/host.go
@@ -20,6 +20,7 @@
"os"
"os/exec"
"path/filepath"
+ "sort"
"strings"
"go.starlark.net/starlark"
@@ -111,19 +112,6 @@
return e.globals, e.err
}
-// fileExists returns True if file with given name exists.
-func fileExists(_ *starlark.Thread, b *starlark.Builtin, args starlark.Tuple,
- kwargs []starlark.Tuple) (starlark.Value, error) {
- var path string
- if err := starlark.UnpackPositionalArgs(b.Name(), args, kwargs, 1, &path); err != nil {
- return starlark.None, err
- }
- if _, err := os.Stat(path); err != nil {
- return starlark.False, nil
- }
- return starlark.True, nil
-}
-
// wildcard(pattern, top=None) expands shell's glob pattern. If 'top' is present,
// the 'top/pattern' is globbed and then 'top/' prefix is removed.
func wildcard(_ *starlark.Thread, b *starlark.Builtin, args starlark.Tuple,
@@ -150,6 +138,10 @@
files[i] = strings.TrimPrefix(files[i], prefix)
}
}
+ // Kati uses glob(3) with no flags, which means it's sorted
+ // because GLOB_NOSORT is not passed. Go's glob is not
+ // guaranteed to sort the results.
+ sort.Strings(files)
return makeStringList(files), nil
}
@@ -269,8 +261,6 @@
"struct": starlark.NewBuiltin("struct", starlarkstruct.Make),
"rblf_cli": structFromEnv(env),
"rblf_env": structFromEnv(os.Environ()),
- // To convert makefile's $(wildcard foo)
- "rblf_file_exists": starlark.NewBuiltin("rblf_file_exists", fileExists),
// To convert find-copy-subdir and product-copy-files-by pattern
"rblf_find_files": starlark.NewBuiltin("rblf_find_files", find),
// To convert makefile's $(shell cmd)
diff --git a/tools/rbcrun/cmd/rbcrun.go b/tools/rbcrun/rbcrun/rbcrun.go
similarity index 100%
rename from tools/rbcrun/cmd/rbcrun.go
rename to tools/rbcrun/rbcrun/rbcrun.go
diff --git a/tools/rbcrun/testdata/file_ops.star b/tools/rbcrun/testdata/file_ops.star
index 50e39bf..2ee78fc 100644
--- a/tools/rbcrun/testdata/file_ops.star
+++ b/tools/rbcrun/testdata/file_ops.star
@@ -4,9 +4,6 @@
def test():
myname = "file_ops.star"
- assert.true(rblf_file_exists("."), "./ exists ")
- assert.true(rblf_file_exists(myname), "the file %s does exist" % myname)
- assert.true(not rblf_file_exists("no_such_file"), "the file no_such_file does not exist")
files = rblf_wildcard("*.star")
assert.true(myname in files, "expected %s in %s" % (myname, files))
files = rblf_wildcard("*.star", rblf_env.TEST_DATA_DIR)
diff --git a/tools/rbcrun/testdata/module1.star b/tools/rbcrun/testdata/module1.star
index 913fb7d..be04f75 100644
--- a/tools/rbcrun/testdata/module1.star
+++ b/tools/rbcrun/testdata/module1.star
@@ -2,6 +2,6 @@
load("assert.star", "assert")
# Make sure that builtins are defined for the loaded module, too
-assert.true(rblf_file_exists("module1.star"))
-assert.true(not rblf_file_exists("no_such file"))
+assert.true(rblf_wildcard("module1.star"))
+assert.true(not rblf_wildcard("no_such file"))
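+# rblf_wildcard returns the list of matches; a non-empty list is truthy and an
+# empty one is falsy, preserving the old boolean assertions.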
test = "module1"
diff --git a/tools/releasetools/Android.bp b/tools/releasetools/Android.bp
index d8e34b7..d07292a 100644
--- a/tools/releasetools/Android.bp
+++ b/tools/releasetools/Android.bp
@@ -37,6 +37,7 @@
"releasetools_build_image",
"releasetools_build_super_image",
"releasetools_common",
+ "libavbtool",
],
required: [
"care_map_generator",
@@ -62,7 +63,7 @@
"mkuserimg_mke2fs",
"simg2img",
"tune2fs",
- "mkf2fsuserimg.sh",
+ "mkf2fsuserimg",
"fsck.f2fs",
],
}
@@ -94,10 +95,13 @@
"check_target_files_vintf.py",
],
libs: [
+ "apex_manifest",
"releasetools_common",
],
required: [
"checkvintf",
+ "deapexer",
+ "dump_apex_info",
],
}
@@ -150,7 +154,6 @@
"edify_generator.py",
"non_ab_ota.py",
"ota_from_target_files.py",
- "ota_utils.py",
"target_files_diff.py",
],
libs: [
@@ -160,6 +163,7 @@
"releasetools_verity_utils",
"apex_manifest",
"care_map_proto_py",
+ "ota_utils_lib",
],
required: [
"brillo_update_payload",
@@ -232,6 +236,9 @@
"rangelib.py",
"sparse_img.py",
],
+ data: [
+ ":zip2zip",
+ ],
// Only the tools that are referenced directly are listed as required modules. For example,
// `avbtool` is not here, as the script always uses the one from info_dict['avb_avbtool'].
required: [
@@ -324,6 +331,51 @@
],
}
+python_library_host {
+ name: "ota_utils_lib",
+ srcs: [
+ "ota_utils.py",
+ "payload_signer.py",
+ ],
+ libs: [
+ "releasetools_common",
+ ],
+}
+
+python_binary_host {
+ name: "merge_ota",
+ version: {
+ py3: {
+ embedded_launcher: true,
+ },
+ },
+ srcs: [
+ "merge_ota.py",
+ ],
+ libs: [
+ "ota_metadata_proto",
+ "update_payload",
+ "care_map_proto_py",
+ "releasetools_common",
+ "ota_utils_lib",
+ ],
+}
+
+python_binary_host {
+ name: "create_brick_ota",
+ version: {
+ py3: {
+ embedded_launcher: true,
+ },
+ },
+ srcs: [
+ "create_brick_ota.py",
+ ],
+ libs: [
+ "ota_utils_lib",
+ ],
+}
+
python_binary_host {
name: "build_image",
defaults: [
@@ -519,23 +571,6 @@
}
python_binary_host {
- name: "fsverity_manifest_generator",
- defaults: ["releasetools_binary_defaults"],
- srcs: [
- "fsverity_manifest_generator.py",
- ],
- libs: [
- "fsverity_digests_proto_python",
- "releasetools_common",
- ],
- required: [
- "aapt2",
- "apksigner",
- "fsverity",
- ],
-}
-
-python_binary_host {
name: "fsverity_metadata_generator",
defaults: ["releasetools_binary_defaults"],
srcs: [
@@ -561,6 +596,7 @@
"sign_apex.py",
"sign_target_files_apks.py",
"validate_target_files.py",
+ "merge_ota.py",
":releasetools_merge_sources",
":releasetools_merge_tests",
@@ -577,11 +613,13 @@
"releasetools_img_from_target_files",
"releasetools_ota_from_target_files",
"releasetools_verity_utils",
+ "update_payload",
],
data: [
"testdata/**/*",
":com.android.apex.compressed.v1",
":com.android.apex.compressed.v1_original",
+ ":com.android.apex.vendor.foo.with_vintf"
],
target: {
darwin: {
diff --git a/tools/releasetools/add_img_to_target_files b/tools/releasetools/add_img_to_target_files
deleted file mode 120000
index 04323bd..0000000
--- a/tools/releasetools/add_img_to_target_files
+++ /dev/null
@@ -1 +0,0 @@
-add_img_to_target_files.py
\ No newline at end of file
diff --git a/tools/releasetools/add_img_to_target_files.py b/tools/releasetools/add_img_to_target_files.py
index 09f69d0..ac3271b 100644
--- a/tools/releasetools/add_img_to_target_files.py
+++ b/tools/releasetools/add_img_to_target_files.py
@@ -46,6 +46,7 @@
from __future__ import print_function
+import avbtool
import datetime
import logging
import os
@@ -62,9 +63,11 @@
import common
import verity_utils
import ota_metadata_pb2
+import rangelib
+import sparse_img
from apex_utils import GetApexInfoFromTargetFiles
-from common import AddCareMapForAbOta, ZipDelete
+from common import ZipDelete, PARTITIONS_WITH_CARE_MAP, ExternalError, RunAndCheckOutput, IsSparseImage, MakeTempFile, ZipWrite
if sys.hexversion < 0x02070000:
print("Python 2.7 or newer is required.", file=sys.stderr)
@@ -76,8 +79,6 @@
OPTIONS.add_missing = False
OPTIONS.rebuild_recovery = False
OPTIONS.replace_updated_files_list = []
-OPTIONS.replace_verity_public_key = False
-OPTIONS.replace_verity_private_key = False
OPTIONS.is_signing = False
# Use a fixed timestamp (01/01/2009 00:00:00 UTC) for files when packaging
@@ -87,6 +88,159 @@
datetime.datetime.utcfromtimestamp(0)).total_seconds())
+def ParseAvbFooter(img_path) -> avbtool.AvbFooter:
+ with open(img_path, 'rb') as fp:
+ fp.seek(-avbtool.AvbFooter.SIZE, os.SEEK_END)
+ data = fp.read(avbtool.AvbFooter.SIZE)
+ return avbtool.AvbFooter(data)
+
+
+def GetCareMap(which, imgname):
+ """Returns the care_map string for the given partition.
+
+ Args:
+ which: The partition name, must be listed in PARTITIONS_WITH_CARE_MAP.
+ imgname: The filename of the image.
+
+ Returns:
+ [which, care_map_ranges]: care_map_ranges is the raw string of the care_map
+ RangeSet; or None if the AVB footer cannot be parsed.
+ """
+ assert which in PARTITIONS_WITH_CARE_MAP
+
+ is_sparse_img = IsSparseImage(imgname)
+ unsparsed_image_size = os.path.getsize(imgname)
+
+ # A verified image contains the original image + hash tree data + FEC data
+ # + AVB footer, all concatenated together. The care map specifies the range
+ # of blocks that update_verifier should read on top of the dm-verity device
+ # to verify the correctness of OTA updates. When reading from the dm-verity
+ # device, the hash tree and FEC parts of the image aren't available, so the
+ # care map should only contain the original image blocks.
+ try:
+ avbfooter = None
+ if is_sparse_img:
+ with tempfile.NamedTemporaryFile() as tmpfile:
+ img = sparse_img.SparseImage(imgname)
+ unsparsed_image_size = img.total_blocks * img.blocksize
+ for data in img.ReadBlocks(img.total_blocks - 1, 1):
+ tmpfile.write(data)
+ tmpfile.flush()
+ avbfooter = ParseAvbFooter(tmpfile.name)
+ else:
+ avbfooter = ParseAvbFooter(imgname)
+ except LookupError as e:
+ logger.warning(
+ "Failed to parse avbfooter for partition %s image %s, %s", which, imgname, e)
+ return None
+
+ image_size = avbfooter.original_image_size
+ assert image_size < unsparsed_image_size, f"AVB footer's original image size {image_size} is larger than or equal to image size on disk {unsparsed_image_size}, this can't happen because a verified image = original image + hash tree data + FEC data + avbfooter."
+ assert image_size > 0
+
+ image_blocks = int(image_size) // 4096 - 1
+ # It's OK for image_blocks to be 0, because care map ranges are inclusive.
+ # So 0-0 means "just block 0", which is valid.
+ assert image_blocks >= 0, "blocks for {} must be non-negative, image size: {}".format(
+ which, image_size)
+
+ # For sparse images, we will only check the blocks that are listed in the care
+ # map, i.e. the ones with meaningful data.
+ if is_sparse_img:
+ simg = sparse_img.SparseImage(imgname)
+ care_map_ranges = simg.care_map.intersect(
+ rangelib.RangeSet("0-{}".format(image_blocks)))
+
+ # Otherwise for non-sparse images, we read all the blocks in the filesystem
+ # image.
+ else:
+ care_map_ranges = rangelib.RangeSet("0-{}".format(image_blocks))
+
+ return [which, care_map_ranges.to_string_raw()]
+
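
For reference, a minimal sketch (not part of the patch) of the block arithmetic `GetCareMap` performs, assuming the fixed 4096-byte block size used above; `rangelib` is the releasetools module already imported by this file:

```
import rangelib  # releasetools module

def care_map_range_for(image_size, block_size=4096):
    # Ranges are inclusive, so "0-0" means "just block 0".
    image_blocks = int(image_size) // block_size - 1
    assert image_blocks >= 0
    return rangelib.RangeSet("0-{}".format(image_blocks))

# An 8192-byte original image covers blocks 0 and 1 (inclusive).
ranges = care_map_range_for(8192)
```
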
+
+def AddCareMapForAbOta(output_file, ab_partitions, image_paths):
+ """Generates and adds care_map.pb for a/b partition that has care_map.
+
+ Args:
+ output_file: The output zip file (needs to be already open),
+ or file path to write care_map.pb.
+ ab_partitions: The list of A/B partitions.
+ image_paths: A map from the partition name to the image path.
+ """
+ if not output_file:
+ raise ExternalError('Expected output_file for AddCareMapForAbOta')
+
+ care_map_list = []
+ for partition in ab_partitions:
+ partition = partition.strip()
+ if partition not in PARTITIONS_WITH_CARE_MAP:
+ continue
+
+ verity_block_device = "{}_verity_block_device".format(partition)
+ avb_hashtree_enable = "avb_{}_hashtree_enable".format(partition)
+ if (verity_block_device in OPTIONS.info_dict or
+ OPTIONS.info_dict.get(avb_hashtree_enable) == "true"):
+ if partition not in image_paths:
+ logger.warning('Potential partition with care_map missing from images: %s',
+ partition)
+ continue
+ image_path = image_paths[partition]
+ if not os.path.exists(image_path):
+ raise ExternalError('Expected image at path {}'.format(image_path))
+
+ care_map = GetCareMap(partition, image_path)
+ if not care_map:
+ continue
+ care_map_list += care_map
+
+ # adds fingerprint field to the care_map
+ # TODO(xunchang) revisit the fingerprint calculation for care_map.
+ partition_props = OPTIONS.info_dict.get(partition + ".build.prop")
+ prop_name_list = ["ro.{}.build.fingerprint".format(partition),
+ "ro.{}.build.thumbprint".format(partition)]
+
+ present_props = [x for x in prop_name_list if
+ partition_props and partition_props.GetProp(x)]
+ if not present_props:
+ logger.warning(
+ "fingerprint is not present for partition %s", partition)
+ property_id, fingerprint = "unknown", "unknown"
+ else:
+ property_id = present_props[0]
+ fingerprint = partition_props.GetProp(property_id)
+ care_map_list += [property_id, fingerprint]
+
+ if not care_map_list:
+ return
+
+ # Converts the list into a protobuf message by calling care_map_generator, and
+ # writes the result to a temp file.
+ temp_care_map_text = MakeTempFile(prefix="caremap_text-",
+ suffix=".txt")
+ with open(temp_care_map_text, 'w') as text_file:
+ text_file.write('\n'.join(care_map_list))
+
+ temp_care_map = MakeTempFile(prefix="caremap-", suffix=".pb")
+ care_map_gen_cmd = ["care_map_generator", temp_care_map_text, temp_care_map]
+ RunAndCheckOutput(care_map_gen_cmd)
+
+ if not isinstance(output_file, zipfile.ZipFile):
+ shutil.copy(temp_care_map, output_file)
+ return
+ # output_file is a zip file
+ care_map_path = "META/care_map.pb"
+ if care_map_path in output_file.namelist():
+ # Copy the temp file into the OPTIONS.input_tmp dir and update the
+ # replace_updated_files_list used by add_img_to_target_files
+ if not OPTIONS.replace_updated_files_list:
+ OPTIONS.replace_updated_files_list = []
+ shutil.copy(temp_care_map, os.path.join(OPTIONS.input_tmp, care_map_path))
+ OPTIONS.replace_updated_files_list.append(care_map_path)
+ else:
+ ZipWrite(output_file, temp_care_map, arcname=care_map_path)
+
+
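
As a sketch (all values hypothetical), the intermediate text format the function above feeds to `care_map_generator` is four lines per qualifying partition: partition name, raw RangeSet string, fingerprint property id, and fingerprint value:

```
care_map_list = [
    "system", "2,0,2",                     # partition name, raw RangeSet
    "ro.system.build.fingerprint",         # property id ("unknown" if absent)
    "brand/device/...:user/release-keys",  # fingerprint ("unknown" if absent)
]
with open("caremap_text-example.txt", "w") as text_file:
    text_file.write("\n".join(care_map_list))
# care_map_generator then converts this text file into the care_map.pb protobuf.
```
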
class OutputFile(object):
"""A helper class to write a generated file to the given dir or zip.
@@ -279,6 +433,7 @@
block_list=block_list)
return img.name
+
def AddSystemDlkm(output_zip):
"""Turn the contents of SystemDlkm into an system_dlkm image and store it in output_zip."""
@@ -457,8 +612,7 @@
# Set the '_image_size' for given image size.
is_verity_partition = "verity_block_device" in image_props
- verity_supported = (image_props.get("verity") == "true" or
- image_props.get("avb_enable") == "true")
+ verity_supported = (image_props.get("avb_enable") == "true")
is_avb_enable = image_props.get("avb_hashtree_enable") == "true"
if verity_supported and (is_verity_partition or is_avb_enable):
image_size = image_props.get("image_size")
@@ -557,7 +711,7 @@
cmd = [bpttool, "make_table", "--output_json", bpt.name,
"--output_gpt", img.name]
input_files_str = OPTIONS.info_dict["board_bpt_input_files"]
- input_files = input_files_str.split(" ")
+ input_files = input_files_str.split()
for i in input_files:
cmd.extend(["--input", i])
disk_size = OPTIONS.info_dict.get("board_bpt_disk_size")
@@ -664,6 +818,9 @@
"""Create a super_empty.img and store it in output_zip."""
img = OutputFile(output_zip, OPTIONS.input_tmp, "IMAGES", "super_empty.img")
+ if os.path.exists(img.name):
+ logger.info("super_empty.img already exists; no need to rebuild...")
+ return
build_super_image.BuildSuperImage(OPTIONS.info_dict, img.name)
img.Write()
@@ -783,7 +940,8 @@
has_boot = OPTIONS.info_dict.get("no_boot") != "true"
has_init_boot = OPTIONS.info_dict.get("init_boot") == "true"
has_vendor_boot = OPTIONS.info_dict.get("vendor_boot") == "true"
- has_vendor_kernel_boot = OPTIONS.info_dict.get("vendor_kernel_boot") == "true"
+ has_vendor_kernel_boot = OPTIONS.info_dict.get(
+ "vendor_kernel_boot") == "true"
# {vendor,odm,product,system_ext,vendor_dlkm,odm_dlkm, system_dlkm, system, system_other}.img
# can be built from source, or dropped into target_files.zip as a prebuilt blob.
@@ -847,7 +1005,8 @@
if has_init_boot:
banner("init_boot")
init_boot_image = common.GetBootableImage(
- "IMAGES/init_boot.img", "init_boot.img", OPTIONS.input_tmp, "INIT_BOOT")
+ "IMAGES/init_boot.img", "init_boot.img", OPTIONS.input_tmp, "INIT_BOOT",
+ dev_nodes=True)
if init_boot_image:
partitions['init_boot'] = os.path.join(
OPTIONS.input_tmp, "IMAGES", "init_boot.img")
@@ -876,7 +1035,7 @@
"VENDOR_KERNEL_BOOT")
if vendor_kernel_boot_image:
partitions['vendor_kernel_boot'] = os.path.join(OPTIONS.input_tmp, "IMAGES",
- "vendor_kernel_boot.img")
+ "vendor_kernel_boot.img")
if not os.path.exists(partitions['vendor_kernel_boot']):
vendor_kernel_boot_image.WriteToDir(OPTIONS.input_tmp)
if output_zip:
@@ -978,6 +1137,21 @@
item for item in vbmeta_partitions
if item not in vbmeta_vendor.split()]
vbmeta_partitions.append("vbmeta_vendor")
+ custom_avb_partitions = OPTIONS.info_dict.get("avb_custom_vbmeta_images_partition_list", "").strip().split()
+ if custom_avb_partitions:
+ for avb_part in custom_avb_partitions:
+ partition_name = "vbmeta_" + avb_part
+ included_partitions = OPTIONS.info_dict.get("avb_vbmeta_{}".format(avb_part), "").strip().split()
+ assert included_partitions, "Custom vbmeta partition {0} missing avb_vbmeta_{0} prop".format(avb_part)
+ banner(partition_name)
+ logger.info("VBMeta partition {} needs {}".format(partition_name, included_partitions))
+ partitions[partition_name] = AddVBMeta(
+ output_zip, partitions, partition_name, included_partitions)
+ vbmeta_partitions = [
+ item for item in vbmeta_partitions
+ if item not in included_partitions]
+ vbmeta_partitions.append(partition_name)
+
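
A hypothetical product configuration for the custom-vbmeta block above: with the two `info_dict` entries below, the loop builds a chained `vbmeta_google` image from foo.img and bar.img and removes those partitions from the top-level vbmeta's coverage:

```
info_dict = {
    "avb_custom_vbmeta_images_partition_list": "google",
    "avb_vbmeta_google": "foo bar",
}
for avb_part in info_dict["avb_custom_vbmeta_images_partition_list"].split():
    partition_name = "vbmeta_" + avb_part                   # "vbmeta_google"
    included = info_dict["avb_vbmeta_" + avb_part].split()  # ["foo", "bar"]
```
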
if OPTIONS.info_dict.get("avb_building_vbmeta_image") == "true":
banner("vbmeta")
@@ -1054,7 +1228,8 @@
ZipDelete(zipfile_path, [entry.filename for entry in entries_to_store])
with zipfile.ZipFile(zipfile_path, "a", allowZip64=True) as zfp:
for entry in entries_to_store:
- zfp.write(os.path.join(tmpdir, entry.filename), entry.filename, compress_type=zipfile.ZIP_STORED)
+ zfp.write(os.path.join(tmpdir, entry.filename),
+ entry.filename, compress_type=zipfile.ZIP_STORED)
def main(argv):
@@ -1064,9 +1239,11 @@
elif o in ("-r", "--rebuild_recovery",):
OPTIONS.rebuild_recovery = True
elif o == "--replace_verity_private_key":
- OPTIONS.replace_verity_private_key = (True, a)
+ raise ValueError("--replace_verity_private_key is no longer supported,"
+ " please switch to AVB")
elif o == "--replace_verity_public_key":
- OPTIONS.replace_verity_public_key = (True, a)
+ raise ValueError("--replace_verity_public_key is no longer supported,"
+ " please switch to AVB")
elif o == "--is_signing":
OPTIONS.is_signing = True
else:
diff --git a/tools/releasetools/apex_utils.py b/tools/releasetools/apex_utils.py
index 941edc6..59c712e 100644
--- a/tools/releasetools/apex_utils.py
+++ b/tools/releasetools/apex_utils.py
@@ -63,10 +63,14 @@
self.codename_to_api_level_map = codename_to_api_level_map
self.debugfs_path = os.path.join(
OPTIONS.search_path, "bin", "debugfs_static")
+ self.fsckerofs_path = os.path.join(
+ OPTIONS.search_path, "bin", "fsck.erofs")
+ self.blkid_path = os.path.join(
+ OPTIONS.search_path, "bin", "blkid_static")
self.avbtool = avbtool if avbtool else "avbtool"
self.sign_tool = sign_tool
- def ProcessApexFile(self, apk_keys, payload_key, signing_args=None):
+ def ProcessApexFile(self, apk_keys, payload_key, signing_args=None, is_sepolicy=False):
"""Scans and signs the payload files and repack the apex
Args:
@@ -80,13 +84,17 @@
"Couldn't find location of debugfs_static: " +
"Path {} does not exist. ".format(self.debugfs_path) +
"Make sure bin/debugfs_static can be found in -p <path>")
- list_cmd = ['deapexer', '--debugfs_path',
- self.debugfs_path, 'list', self.apex_path]
+ list_cmd = ['deapexer', '--debugfs_path', self.debugfs_path,
+ 'list', self.apex_path]
entries_names = common.RunAndCheckOutput(list_cmd).split()
apk_entries = [name for name in entries_names if name.endswith('.apk')]
+ sepolicy_entries = []
+ if is_sepolicy:
+ sepolicy_entries = [name for name in entries_names if
+ name.startswith('./etc/SEPolicy') and name.endswith('.zip')]
# No need to sign and repack, return the original apex path.
- if not apk_entries and self.sign_tool is None:
+ if not apk_entries and not sepolicy_entries and self.sign_tool is None:
logger.info('No apk file to sign in %s', self.apex_path)
return self.apex_path
@@ -102,29 +110,41 @@
' %s', entry)
payload_dir, has_signed_content = self.ExtractApexPayloadAndSignContents(
- apk_entries, apk_keys, payload_key, signing_args)
+ apk_entries, sepolicy_entries, apk_keys, payload_key, signing_args)
if not has_signed_content:
- logger.info('No contents has been signed in %s', self.apex_path)
+ logger.info('No contents have been signed in %s', self.apex_path)
return self.apex_path
return self.RepackApexPayload(payload_dir, payload_key, signing_args)
- def ExtractApexPayloadAndSignContents(self, apk_entries, apk_keys, payload_key, signing_args):
+ def ExtractApexPayloadAndSignContents(self, apk_entries, sepolicy_entries, apk_keys, payload_key, signing_args):
"""Extracts the payload image and signs the containing apk files."""
if not os.path.exists(self.debugfs_path):
raise ApexSigningError(
"Couldn't find location of debugfs_static: " +
"Path {} does not exist. ".format(self.debugfs_path) +
"Make sure bin/debugfs_static can be found in -p <path>")
+ if not os.path.exists(self.fsckerofs_path):
+ raise ApexSigningError(
+ "Couldn't find location of fsck.erofs: " +
+ "Path {} does not exist. ".format(self.fsckerofs_path) +
+ "Make sure bin/fsck.erofs can be found in -p <path>")
+ if not os.path.exists(self.blkid_path):
+ raise ApexSigningError(
+ "Couldn't find location of blkid: " +
+ "Path {} does not exist. ".format(self.blkid_path) +
+ "Make sure bin/blkid can be found in -p <path>")
payload_dir = common.MakeTempDir()
- extract_cmd = ['deapexer', '--debugfs_path',
- self.debugfs_path, 'extract', self.apex_path, payload_dir]
+ extract_cmd = ['deapexer', '--debugfs_path', self.debugfs_path,
+ '--fsckerofs_path', self.fsckerofs_path,
+ '--blkid_path', self.blkid_path, 'extract',
+ self.apex_path, payload_dir]
common.RunAndCheckOutput(extract_cmd)
+ assert os.path.exists(self.apex_path)
has_signed_content = False
for entry in apk_entries:
apk_path = os.path.join(payload_dir, entry)
- assert os.path.exists(self.apex_path)
key_name = apk_keys.get(os.path.basename(entry))
if key_name in common.SPECIAL_CERT_STRINGS:
@@ -141,6 +161,37 @@
codename_to_api_level_map=self.codename_to_api_level_map)
has_signed_content = True
+ for entry in sepolicy_entries:
+ sepolicy_path = os.path.join(payload_dir, entry)
+
+ if 'etc' not in entry:
+ logger.warning('Sepolicy path does not contain the intended directory name etc:'
+ ' %s', entry)
+
+ key_name = apk_keys.get(os.path.basename(entry))
+ if key_name is None:
+ logger.warning('Failed to find signing keys for {} in'
+ ' apex {}, payload key will be used instead.'
+ ' Use "-e <name>=" to specify a key'
+ .format(entry, self.apex_path))
+ key_name = payload_key
+
+ if key_name in common.SPECIAL_CERT_STRINGS:
+ logger.info('Not signing: %s due to special cert string', sepolicy_path)
+ continue
+
+ if OPTIONS.sign_sepolicy_path is not None:
+ sig_path = os.path.join(payload_dir, sepolicy_path + '.sig')
+ fsv_sig_path = os.path.join(payload_dir, sepolicy_path + '.fsv_sig')
+ old_sig = common.MakeTempFile()
+ old_fsv_sig = common.MakeTempFile()
+ os.rename(sig_path, old_sig)
+ os.rename(fsv_sig_path, old_fsv_sig)
+
+ logger.info('Signing sepolicy file %s in apex %s', sepolicy_path, self.apex_path)
+ if common.SignSePolicy(sepolicy_path, key_name, self.key_passwords.get(key_name)):
+ has_signed_content = True
+
if self.sign_tool:
logger.info('Signing payload contents in apex %s with %s', self.apex_path, self.sign_tool)
# Pass avbtool to the custom signing tool
@@ -324,7 +375,8 @@
def SignUncompressedApex(avbtool, apex_file, payload_key, container_key,
container_pw, apk_keys, codename_to_api_level_map,
- no_hashtree, signing_args=None, sign_tool=None):
+ no_hashtree, signing_args=None, sign_tool=None,
+ is_sepolicy=False):
"""Signs the current uncompressed APEX with the given payload/container keys.
Args:
@@ -337,6 +389,7 @@
no_hashtree: Don't include hashtree in the signed APEX.
signing_args: Additional args to be passed to the payload signer.
sign_tool: A tool to sign the contents of the APEX.
+ is_sepolicy: Indicates whether the apex is sepolicy.apex.
Returns:
The path to the signed APEX file.
@@ -346,7 +399,8 @@
apk_signer = ApexApkSigner(apex_file, container_pw,
codename_to_api_level_map,
avbtool, sign_tool)
- apex_file = apk_signer.ProcessApexFile(apk_keys, payload_key, signing_args)
+ apex_file = apk_signer.ProcessApexFile(
+ apk_keys, payload_key, signing_args, is_sepolicy)
# 2a. Extract and sign the APEX_PAYLOAD_IMAGE entry with the given
# payload_key.
@@ -400,7 +454,8 @@
def SignCompressedApex(avbtool, apex_file, payload_key, container_key,
container_pw, apk_keys, codename_to_api_level_map,
- no_hashtree, signing_args=None, sign_tool=None):
+ no_hashtree, signing_args=None, sign_tool=None,
+ is_sepolicy=False):
"""Signs the current compressed APEX with the given payload/container keys.
Args:
@@ -412,6 +467,7 @@
codename_to_api_level_map: A dict that maps from codename to API level.
no_hashtree: Don't include hashtree in the signed APEX.
signing_args: Additional args to be passed to the payload signer.
+ is_sepolicy: Indicates whether the apex is sepolicy.apex.
Returns:
The path to the signed APEX file.
@@ -438,7 +494,8 @@
codename_to_api_level_map,
no_hashtree,
signing_args,
- sign_tool)
+ sign_tool,
+ is_sepolicy)
# 3. Compress signed original apex.
compressed_apex_file = common.MakeTempFile(prefix='apex-container-',
@@ -465,8 +522,8 @@
def SignApex(avbtool, apex_data, payload_key, container_key, container_pw,
- apk_keys, codename_to_api_level_map,
- no_hashtree, signing_args=None, sign_tool=None):
+ apk_keys, codename_to_api_level_map, no_hashtree,
+ signing_args=None, sign_tool=None, is_sepolicy=False):
"""Signs the current APEX with the given payload/container keys.
Args:
@@ -478,6 +535,7 @@
codename_to_api_level_map: A dict that maps from codename to API level.
no_hashtree: Don't include hashtree in the signed APEX.
signing_args: Additional args to be passed to the payload signer.
+ is_sepolicy: Indicates whether the apex is sepolicy.apex.
Returns:
The path to the signed APEX file.
@@ -498,24 +556,26 @@
apex_file,
payload_key=payload_key,
container_key=container_key,
- container_pw=None,
+ container_pw=container_pw,
codename_to_api_level_map=codename_to_api_level_map,
no_hashtree=no_hashtree,
apk_keys=apk_keys,
signing_args=signing_args,
- sign_tool=sign_tool)
+ sign_tool=sign_tool,
+ is_sepolicy=is_sepolicy)
elif apex_type == 'COMPRESSED':
return SignCompressedApex(
avbtool,
apex_file,
payload_key=payload_key,
container_key=container_key,
- container_pw=None,
+ container_pw=container_pw,
codename_to_api_level_map=codename_to_api_level_map,
no_hashtree=no_hashtree,
apk_keys=apk_keys,
signing_args=signing_args,
- sign_tool=sign_tool)
+ sign_tool=sign_tool,
+ is_sepolicy=is_sepolicy)
else:
# TODO(b/172912232): support signing compressed apex
raise ApexInfoError('Unsupported apex type {}'.format(apex_type))
@@ -559,12 +619,14 @@
debugfs_path = "debugfs"
if OPTIONS.search_path:
debugfs_path = os.path.join(OPTIONS.search_path, "bin", "debugfs_static")
+
deapexer = 'deapexer'
if OPTIONS.search_path:
deapexer_path = os.path.join(OPTIONS.search_path, "bin", "deapexer")
if os.path.isfile(deapexer_path):
deapexer = deapexer_path
- for apex_filename in os.listdir(target_dir):
+
+ for apex_filename in sorted(os.listdir(target_dir)):
apex_filepath = os.path.join(target_dir, apex_filename)
if not os.path.isfile(apex_filepath) or \
not zipfile.is_zipfile(apex_filepath):
diff --git a/tools/releasetools/blockimgdiff.py b/tools/releasetools/blockimgdiff.py
index d33c2f7..8087fcd 100644
--- a/tools/releasetools/blockimgdiff.py
+++ b/tools/releasetools/blockimgdiff.py
@@ -537,14 +537,6 @@
self.touched_src_sha1 = self.src.RangeSha1(self.touched_src_ranges)
- if self.tgt.hashtree_info:
- out.append("compute_hash_tree {} {} {} {} {}\n".format(
- self.tgt.hashtree_info.hashtree_range.to_string_raw(),
- self.tgt.hashtree_info.filesystem_range.to_string_raw(),
- self.tgt.hashtree_info.hash_algorithm,
- self.tgt.hashtree_info.salt,
- self.tgt.hashtree_info.root_hash))
-
# Zero out extended blocks as a workaround for bug 20881595.
if self.tgt.extended:
assert (WriteSplitTransfers(out, "zero", self.tgt.extended) ==
@@ -830,12 +822,6 @@
assert touched[i] == 0
touched[i] = 1
- if self.tgt.hashtree_info:
- for s, e in self.tgt.hashtree_info.hashtree_range:
- for i in range(s, e):
- assert touched[i] == 0
- touched[i] = 1
-
# Check that we've written every target block.
for s, e in self.tgt.care_map:
for i in range(s, e):
@@ -1185,7 +1171,7 @@
try:
# Compresses with the default level
compress_obj = zlib.compressobj(6, zlib.DEFLATED, -zlib.MAX_WBITS)
- compressed_data = (compress_obj.compress("".join(tgt_data))
+ compressed_data = (compress_obj.compress(b"".join(tgt_data))
+ compress_obj.flush())
compressed_size = len(compressed_data)
except zlib.error as e:
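
The hunk above is a Python 3 fix: the block chunks read from the target image are `bytes`, so they must be joined with `b""`. A self-contained sketch of the same raw-DEFLATE round trip:

```
import zlib

compress_obj = zlib.compressobj(6, zlib.DEFLATED, -zlib.MAX_WBITS)  # raw stream
tgt_data = [b"hello ", b"world"]  # block chunks are bytes in Python 3
compressed = compress_obj.compress(b"".join(tgt_data)) + compress_obj.flush()
assert zlib.decompress(compressed, -zlib.MAX_WBITS) == b"hello world"
```
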
diff --git a/tools/releasetools/build_image.py b/tools/releasetools/build_image.py
index 9049622..9064136 100755
--- a/tools/releasetools/build_image.py
+++ b/tools/releasetools/build_image.py
@@ -232,11 +232,13 @@
mount_point, total_blocks, used_blocks, headroom_blocks,
adjusted_blocks))
+
def CalculateSizeAndReserved(prop_dict, size):
fs_type = prop_dict.get("fs_type", "")
partition_headroom = int(prop_dict.get("partition_headroom", 0))
# If not specified, reserve a 16MB margin to absorb GetDiskUsage error ...
- reserved_size = int(prop_dict.get("partition_reserved_size", BYTES_IN_MB * 16))
+ reserved_size = int(prop_dict.get(
+ "partition_reserved_size", BYTES_IN_MB * 16))
if fs_type == "erofs":
reserved_size = int(prop_dict.get("partition_reserved_size", 0))
@@ -249,6 +251,7 @@
return size + reserved_size
+
def BuildImageMkfs(in_dir, prop_dict, out_file, target_out, fs_config):
"""Builds a pure image for the files under in_dir and writes it to out_file.
@@ -328,9 +331,17 @@
compressor = prop_dict["erofs_default_compressor"]
if "erofs_compressor" in prop_dict:
compressor = prop_dict["erofs_compressor"]
- if compressor:
+ if compressor and compressor != "none":
build_command.extend(["-z", compressor])
+ compress_hints = None
+ if "erofs_default_compress_hints" in prop_dict:
+ compress_hints = prop_dict["erofs_default_compress_hints"]
+ if "erofs_compress_hints" in prop_dict:
+ compress_hints = prop_dict["erofs_compress_hints"]
+ if compress_hints:
+ build_command.extend(["--compress-hints", compress_hints])
+
build_command.extend(["--mount-point", prop_dict["mount_point"]])
if target_out:
build_command.extend(["--product-out", target_out])
@@ -357,7 +368,7 @@
run_fsck = RunErofsFsck
elif fs_type.startswith("squash"):
- build_command = ["mksquashfsimage.sh"]
+ build_command = ["mksquashfsimage"]
build_command.extend([in_dir, out_file])
if "squashfs_sparse_flag" in prop_dict and not disable_sparse:
build_command.extend([prop_dict["squashfs_sparse_flag"]])
@@ -379,7 +390,7 @@
if prop_dict.get("squashfs_disable_4k_align") == "true":
build_command.extend(["-a"])
elif fs_type.startswith("f2fs"):
- build_command = ["mkf2fsuserimg.sh"]
+ build_command = ["mkf2fsuserimg"]
build_command.extend([out_file, prop_dict["image_size"]])
if "f2fs_sparse_flag" in prop_dict and not disable_sparse:
build_command.extend([prop_dict["f2fs_sparse_flag"]])
@@ -402,7 +413,7 @@
build_command.append("--casefold")
if (needs_compress or prop_dict.get("f2fs_compress") == "true"):
build_command.append("--compression")
- if (prop_dict.get("mount_point") != "data"):
+ if "ro_mount_point" in prop_dict:
build_command.append("--readonly")
if (prop_dict.get("f2fs_compress") == "true"):
build_command.append("--sldc")
@@ -510,11 +521,12 @@
disable_sparse = "disable_sparse" in prop_dict
mkfs_output = None
if (prop_dict.get("use_dynamic_partition_size") == "true" and
- "partition_size" not in prop_dict):
+ "partition_size" not in prop_dict):
# If partition_size is not defined, use output of `du' + reserved_size.
# For compressed file systems, it's better to use the compressed size to avoid wasting space.
if fs_type.startswith("erofs"):
- mkfs_output = BuildImageMkfs(in_dir, prop_dict, out_file, target_out, fs_config)
+ mkfs_output = BuildImageMkfs(
+ in_dir, prop_dict, out_file, target_out, fs_config)
if "erofs_sparse_flag" in prop_dict and not disable_sparse:
image_path = UnsparseImage(out_file, replace=False)
size = GetDiskUsage(image_path)
@@ -604,7 +616,8 @@
prop_dict["image_size"] = str(max_image_size)
if not mkfs_output:
- mkfs_output = BuildImageMkfs(in_dir, prop_dict, out_file, target_out, fs_config)
+ mkfs_output = BuildImageMkfs(
+ in_dir, prop_dict, out_file, target_out, fs_config)
# Update the image (e.g. filesystem size). This can be different, e.g. if mkfs
# rounds the requested size down due to alignment.
@@ -621,6 +634,7 @@
if verity_image_builder:
verity_image_builder.Build(out_file)
+
def ImagePropFromGlobalDict(glob_dict, mount_point):
"""Build an image property dictionary from the global dictionary.
@@ -652,6 +666,7 @@
common_props = (
"extfs_sparse_flag",
"erofs_default_compressor",
+ "erofs_default_compress_hints",
"erofs_pcluster_size",
"erofs_share_dup_blocks",
"erofs_sparse_flag",
@@ -662,11 +677,6 @@
"f2fs_sparse_flag",
"skip_fsck",
"ext_mkuserimg",
- "verity",
- "verity_key",
- "verity_signer_cmd",
- "verity_fec",
- "verity_disable",
"avb_enable",
"avb_avbtool",
"use_dynamic_partition_size",
@@ -706,6 +716,7 @@
(True, "{}_base_fs_file", "base_fs_file"),
(True, "{}_disable_sparse", "disable_sparse"),
(True, "{}_erofs_compressor", "erofs_compressor"),
+ (True, "{}_erofs_compress_hints", "erofs_compress_hints"),
(True, "{}_erofs_pcluster_size", "erofs_pcluster_size"),
(True, "{}_erofs_share_dup_blocks", "erofs_share_dup_blocks"),
(True, "{}_extfs_inode_count", "extfs_inode_count"),
@@ -733,7 +744,7 @@
# This property is legacy and only used on a few partitions. b/202600377
allowed_partitions = set(["system", "system_other", "data", "oem"])
if mount_point not in allowed_partitions:
- continue
+ continue
if (mount_point == "system_other") and (dest_prop != "partition_size"):
# Propagate system properties to system_other. They'll get overridden
@@ -752,6 +763,8 @@
if not copy_prop(prop, "extfs_rsv_pct"):
d["extfs_rsv_pct"] = "0"
+ d["ro_mount_point"] = "1"
+
# Copy partition-specific properties.
d["mount_point"] = mount_point
if mount_point == "system":
@@ -786,6 +799,7 @@
def GlobalDictFromImageProp(image_prop, mount_point):
d = {}
+
def copy_prop(src_p, dest_p):
if src_p in image_prop:
d[dest_p] = image_prop[src_p]
@@ -813,17 +827,69 @@
return d
+def BuildVBMeta(in_dir, glob_dict, output_path):
+ """Creates a VBMeta image.
+
+ It generates the requested VBMeta image. The requested image can be the
+ top-level vbmeta or a chained VBMeta image, determined from the basename of
+ output_path.
+
+ Args:
+ in_dir: Path to the directory that contains the partition images whose
+ descriptors should be included in the generated VBMeta image.
+ glob_dict: The global dictionary; its avb_vbmeta_system and
+ avb_vbmeta_vendor entries determine which partitions are covered by the
+ chained VBMeta images.
+ output_path: Path to the generated VBMeta image, e.g. vbmeta.img or
+ vbmeta_system.img.
+ """
+ vbmeta_partitions = common.AVB_PARTITIONS[:]
+ # Use splitext rather than rstrip(".img"): rstrip strips a set of
+ # characters, not a suffix, and would mangle names like "vbmeta_system".
+ name = os.path.splitext(os.path.basename(output_path))[0]
+ vbmeta_system = glob_dict.get("avb_vbmeta_system", "").strip()
+ vbmeta_vendor = glob_dict.get("avb_vbmeta_vendor", "").strip()
+ if "vbmeta_system" in name:
+ vbmeta_partitions = vbmeta_system.split()
+ elif "vbmeta_vendor" in name:
+ vbmeta_partitions = vbmeta_vendor.split()
+ else:
+ if vbmeta_system:
+ vbmeta_partitions = [
+ item for item in vbmeta_partitions
+ if item not in vbmeta_system.split()]
+ vbmeta_partitions.append("vbmeta_system")
+
+ if vbmeta_vendor:
+ vbmeta_partitions = [
+ item for item in vbmeta_partitions
+ if item not in vbmeta_vendor.split()]
+ vbmeta_partitions.append("vbmeta_vendor")
+
+
+ partitions = {part: os.path.join(in_dir, part + ".img")
+ for part in vbmeta_partitions}
+ partitions = {part: path for (part, path) in partitions.items() if os.path.exists(path)}
+ common.BuildVBMeta(output_path, partitions, name, vbmeta_partitions)
+
+
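
A condensed sketch of the partition-selection logic in BuildVBMeta above, using a hypothetical glob_dict (vbmeta_vendor is handled analogously and omitted here for brevity):

```
import os

def vbmeta_descriptor_partitions(output_path, glob_dict, avb_partitions):
    name = os.path.splitext(os.path.basename(output_path))[0]
    vbmeta_system = glob_dict.get("avb_vbmeta_system", "").split()
    if "vbmeta_system" in name:
        return vbmeta_system
    parts = [p for p in avb_partitions if p not in vbmeta_system]
    if vbmeta_system:
        parts.append("vbmeta_system")
    return parts

# With glob_dict = {"avb_vbmeta_system": "system product"} and
# avb_partitions = ["boot", "system", "product", "vendor"]:
#   vbmeta_system.img -> ["system", "product"]
#   vbmeta.img        -> ["boot", "vendor", "vbmeta_system"]
```
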
def main(argv):
- if len(argv) != 4:
+ args = common.ParseOptions(argv, __doc__)
+
+ if len(args) != 4:
print(__doc__)
sys.exit(1)
common.InitLogging()
- in_dir = argv[0]
- glob_dict_file = argv[1]
- out_file = argv[2]
- target_out = argv[3]
+ in_dir = args[0]
+ glob_dict_file = args[1]
+ out_file = args[2]
+ target_out = args[3]
glob_dict = LoadGlobalDict(glob_dict_file)
if "mount_point" in glob_dict:
@@ -857,14 +923,21 @@
mount_point = "product"
elif image_filename == "system_ext.img":
mount_point = "system_ext"
+ elif "vbmeta" in image_filename:
+ mount_point = "vbmeta"
else:
logger.error("Unknown image file name %s", image_filename)
sys.exit(1)
- image_properties = ImagePropFromGlobalDict(glob_dict, mount_point)
+ if "vbmeta" != mount_point:
+ image_properties = ImagePropFromGlobalDict(glob_dict, mount_point)
try:
- BuildImage(in_dir, image_properties, out_file, target_out)
+ if "vbmeta" in os.path.basename(out_file):
+ OPTIONS.info_dict = glob_dict
+ BuildVBMeta(in_dir, glob_dict, out_file)
+ else:
+ BuildImage(in_dir, image_properties, out_file, target_out)
except:
logger.error("Failed to build %s from %s", out_file, in_dir)
raise
diff --git a/tools/releasetools/check_target_files_signatures b/tools/releasetools/check_target_files_signatures
deleted file mode 120000
index 9f62aa3..0000000
--- a/tools/releasetools/check_target_files_signatures
+++ /dev/null
@@ -1 +0,0 @@
-check_target_files_signatures.py
\ No newline at end of file
diff --git a/tools/releasetools/check_target_files_signatures.py b/tools/releasetools/check_target_files_signatures.py
index d935607..a7b3523 100755
--- a/tools/releasetools/check_target_files_signatures.py
+++ b/tools/releasetools/check_target_files_signatures.py
@@ -241,7 +241,8 @@
# Signer (minSdkVersion=24, maxSdkVersion=32) certificate SHA-1 digest: 19da94896ce4078c38ca695701f1dec741ec6d67
# ...
certs_info = {}
- certificate_regex = re.compile(r"(Signer (?:#[0-9]+|\(.*\))) (certificate .*):(.*)")
+ certificate_regex = re.compile(
+ r"(Signer (?:#[0-9]+|\(.*\))) (certificate .*):(.*)")
for line in output.splitlines():
m = certificate_regex.match(line)
if not m:
@@ -312,7 +313,7 @@
# This is the list of wildcards of files we extract from |filename|.
apk_extensions = ['*.apk', '*.apex']
- with zipfile.ZipFile(filename) as input_zip:
+ with zipfile.ZipFile(filename, "r") as input_zip:
self.certmap, compressed_extension = common.ReadApkCerts(input_zip)
if compressed_extension:
apk_extensions.append('*.apk' + compressed_extension)
diff --git a/tools/releasetools/check_target_files_vintf.py b/tools/releasetools/check_target_files_vintf.py
index 4a2a905..5b71c72 100755
--- a/tools/releasetools/check_target_files_vintf.py
+++ b/tools/releasetools/check_target_files_vintf.py
@@ -22,13 +22,16 @@
target_files can be a ZIP file or an extracted target files directory.
"""
+import json
import logging
+import os
+import shutil
import subprocess
import sys
-import os
import zipfile
import common
+from apex_manifest import ParseApexManifest
logger = logging.getLogger(__name__)
@@ -123,7 +126,12 @@
logger.warning('PRODUCT_ENFORCE_VINTF_MANIFEST is not set, skipping checks')
return True
+
dirmap = GetDirmap(input_tmp)
+
+ # Simulate apexd from target-files.
+ dirmap['/apex'] = PrepareApexDirectory(input_tmp)
+
args_for_skus = GetArgsForSkus(info_dict)
shipping_api_level_args = GetArgsForShippingApiLevel(info_dict)
kernel_args = GetArgsForKernel(input_tmp)
@@ -132,6 +140,7 @@
'checkvintf',
'--check-compat',
]
+
for device_path, real_path in sorted(dirmap.items()):
common_command += ['--dirmap', '{}:{}'.format(device_path, real_path)]
common_command += kernel_args
@@ -142,9 +151,10 @@
command = common_command + sku_args
proc = common.Run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
+ last_out_line = out.split()[-1] if out != "" else out
if proc.returncode == 0:
logger.info("Command `%s` returns 'compatible'", ' '.join(command))
- elif out.strip() == "INCOMPATIBLE":
+ elif last_out_line.strip() == "INCOMPATIBLE":
logger.info("Command `%s` returns 'incompatible'", ' '.join(command))
success = False
else:
@@ -185,6 +195,113 @@
paths = sum((PathToPatterns(path) for path in paths if path), [])
return paths
+def GetVintfApexUnzipPatterns():
+ """ Build unzip pattern for APEXes. """
+ patterns = []
+ for target_files_rel_paths in DIR_SEARCH_PATHS.values():
+ for target_files_rel_path in target_files_rel_paths:
+ patterns.append(os.path.join(target_files_rel_path, "apex/*"))
+
+ return patterns
+
+def PrepareApexDirectory(inp):
+ """ Prepare /apex directory before running checkvintf
+
+ Apex binaries do not support dirmaps, in order to use these binaries we
+ need to move the APEXes from the extracted target file archives to the
+ expected device locations.
+
+ This simulates how apexd activates APEXes.
+ 1. create {inp}/APEX which is treated as a "/" on device.
+ 2. copy apexes from target-files to {root}/{partition}/apex.
+ 3. mount apexes under {root}/{partition}/apex at {root}/apex.
+ 4. generate info files with dump_apex_info.
+
+ We'll get the following layout
+ {inp}/APEX/apex # Activated APEXes + some info files
+ {inp}/APEX/system/apex # System APEXes
+ {inp}/APEX/vendor/apex # Vendor APEXes
+ ...
+
+ Args:
+ inp: path to the directory that contains the extracted target files archive.
+
+ Returns:
+ directory representing /apex on device
+ """
+
+ deapexer = 'deapexer'
+ debugfs_path = 'debugfs'
+ blkid_path = 'blkid'
+ fsckerofs_path = 'fsck.erofs'
+ if OPTIONS.search_path:
+ debugfs_path = os.path.join(OPTIONS.search_path, 'bin', 'debugfs_static')
+ deapexer_path = os.path.join(OPTIONS.search_path, 'bin', 'deapexer')
+ blkid_path = os.path.join(OPTIONS.search_path, 'bin', 'blkid_static')
+ fsckerofs_path = os.path.join(OPTIONS.search_path, 'bin', 'fsck.erofs')
+ if os.path.isfile(deapexer_path):
+ deapexer = deapexer_path
+
+ def ExtractApexes(path, outp):
+ # Extract all APEXes found in input path.
+ logger.info('Extracting APEXes in %s', path)
+ for f in os.listdir(path):
+ logger.info(' adding APEX %s', os.path.basename(f))
+ apex = os.path.join(path, f)
+ if os.path.isdir(apex) and os.path.isfile(os.path.join(apex, 'apex_manifest.pb')):
+ info = ParseApexManifest(os.path.join(apex, 'apex_manifest.pb'))
+ # Flattened APEXes may have symlinks for libs (linked to /system/lib)
+ # We need to blindly copy them all.
+ shutil.copytree(apex, os.path.join(outp, info.name), symlinks=True)
+ elif os.path.isfile(apex) and apex.endswith(('.apex', '.capex')):
+ cmd = [deapexer,
+ '--debugfs_path', debugfs_path,
+ 'info',
+ apex]
+ info = json.loads(common.RunAndCheckOutput(cmd))
+
+ cmd = [deapexer,
+ '--debugfs_path', debugfs_path,
+ '--fsckerofs_path', fsckerofs_path,
+ '--blkid_path', blkid_path,
+ 'extract',
+ apex,
+ os.path.join(outp, info['name'])]
+ common.RunAndCheckOutput(cmd)
+ else:
+ logger.info(' .. skipping %s (is it an APEX?)', apex)
+
+ root_dir_name = 'APEX'
+ root_dir = os.path.join(inp, root_dir_name)
+ extracted_root = os.path.join(root_dir, 'apex')
+
+ # Always create /apex directory for dirmap
+ os.makedirs(extracted_root)
+
+ create_info_file = False
+
+ # Loop through search path looking for and processing apex/ directories.
+ for device_path, target_files_rel_paths in DIR_SEARCH_PATHS.items():
+ # checkvintf only needs vendor APEXes; skip other partitions for efficiency.
+ if device_path not in ['/vendor', '/odm']:
+ continue
+ # First, copy VENDOR/apex/foo.apex to APEX/vendor/apex/foo.apex
+ # Then, extract the contents to APEX/apex/foo/
+ for target_files_rel_path in target_files_rel_paths:
+ inp_partition = os.path.join(inp, target_files_rel_path,"apex")
+ if os.path.exists(inp_partition):
+ apex_dir = root_dir + os.path.join(device_path + "/apex")
+ os.makedirs(root_dir + device_path)
+ shutil.copytree(inp_partition, apex_dir, symlinks=True)
+ ExtractApexes(apex_dir, extracted_root)
+ create_info_file = True
+
+ if create_info_file:
+ # Dump APEX info files.
+ dump_cmd = ['dump_apex_info', '--root_dir', root_dir]
+ common.RunAndCheckOutput(dump_cmd)
+
+ return extracted_root
def CheckVintfFromTargetFiles(inp, info_dict=None):
"""
@@ -198,7 +315,7 @@
True if VINTF check is skipped or compatible, False if incompatible. Raise
a RuntimeError if any error occurs.
"""
- input_tmp = common.UnzipTemp(inp, GetVintfFileList() + UNZIP_PATTERN)
+ input_tmp = common.UnzipTemp(inp, GetVintfFileList() + GetVintfApexUnzipPatterns() + UNZIP_PATTERN)
return CheckVintfFromExtractedTargetFiles(input_tmp, info_dict)
diff --git a/tools/releasetools/common.py b/tools/releasetools/common.py
index ec49b0d..abedecf 100644
--- a/tools/releasetools/common.py
+++ b/tools/releasetools/common.py
@@ -20,6 +20,7 @@
import datetime
import errno
import fnmatch
+from genericpath import isdir
import getopt
import getpass
import gzip
@@ -34,6 +35,7 @@
import shutil
import subprocess
import sys
+import stat
import tempfile
import threading
import time
@@ -72,18 +74,16 @@
if "ANDROID_HOST_OUT" in os.environ:
self.search_path = os.environ["ANDROID_HOST_OUT"]
self.signapk_shared_library_path = "lib64" # Relative to search_path
+ self.sign_sepolicy_path = None
self.extra_signapk_args = []
+ self.extra_sign_sepolicy_args = []
self.aapt2_path = "aapt2"
self.java_path = "java" # Use the one on the path by default.
- self.java_args = ["-Xmx2048m"] # The default JVM args.
+ self.java_args = ["-Xmx4096m"] # The default JVM args.
self.android_jar_path = None
self.public_key_suffix = ".x509.pem"
self.private_key_suffix = ".pk8"
# use otatools built boot_signer by default
- self.boot_signer_path = "boot_signer"
- self.boot_signer_args = []
- self.verity_signer_path = None
- self.verity_signer_args = []
self.verbose = False
self.tempfiles = []
self.device_specific = None
@@ -97,6 +97,7 @@
self.stash_threshold = 0.8
self.logfile = None
self.host_tools = {}
+ self.sepolicy_name = 'sepolicy.apex'
OPTIONS = Options()
@@ -302,6 +303,8 @@
Raises:
ExternalError: On non-zero exit from the command.
"""
+ if verbose is None:
+ verbose = OPTIONS.verbose
proc = Run(args, verbose=verbose, **kwargs)
output, _ = proc.communicate()
if output is None:
@@ -454,6 +457,30 @@
return vabc_enabled
@property
+ def is_android_r(self):
+ system_prop = self.info_dict.get("system.build.prop")
+ return system_prop and system_prop.GetProp("ro.build.version.release") == "11"
+
+ @property
+ def vendor_api_level(self):
+ vendor_prop = self.info_dict.get("vendor.build.prop")
+ if not vendor_prop:
+ return -1
+
+ props = [
+ "ro.board.api_level",
+ "ro.board.first_api_level",
+ "ro.product.first_api_level",
+ ]
+ for prop in props:
+ value = vendor_prop.GetProp(prop)
+ try:
+ return int(value)
+ except (ValueError, TypeError):
+ pass
+ return -1
+
+ @property
def is_vabc_xor(self):
vendor_prop = self.info_dict.get("vendor.build.prop")
vabc_xor_enabled = vendor_prop and \
@@ -691,20 +718,46 @@
script.AssertOemProperty(prop, values, oem_no_mount)
-def ReadFromInputFile(input_file, fn):
- """Reads the contents of fn from input zipfile or directory."""
+def DoesInputFileContain(input_file, fn):
+ """Check whether the input target_files.zip contain an entry `fn`"""
if isinstance(input_file, zipfile.ZipFile):
- return input_file.read(fn).decode()
+ return fn in input_file.namelist()
+ elif zipfile.is_zipfile(input_file):
+ with zipfile.ZipFile(input_file, "r", allowZip64=True) as zfp:
+ return fn in zfp.namelist()
else:
+ if not os.path.isdir(input_file):
+ raise ValueError(
+ "Invalid input_file, accepted inputs are ZipFile object, path to .zip file on disk, or path to extracted directory. Actual: " + input_file)
+ path = os.path.join(input_file, *fn.split("/"))
+ return os.path.exists(path)
+
+
+def ReadBytesFromInputFile(input_file, fn):
+ """Reads the bytes of fn from input zipfile or directory."""
+ if isinstance(input_file, zipfile.ZipFile):
+ return input_file.read(fn)
+ elif zipfile.is_zipfile(input_file):
+ with zipfile.ZipFile(input_file, "r", allowZip64=True) as zfp:
+ return zfp.read(fn)
+ else:
+ if not os.path.isdir(input_file):
+ raise ValueError(
+ "Invalid input_file, accepted inputs are ZipFile object, path to .zip file on disk, or path to extracted directory. Actual: " + input_file)
path = os.path.join(input_file, *fn.split("/"))
try:
- with open(path) as f:
+ with open(path, "rb") as f:
return f.read()
except IOError as e:
if e.errno == errno.ENOENT:
raise KeyError(fn)
+def ReadFromInputFile(input_file, fn):
+ """Reads the str contents of fn from input zipfile or directory."""
+ return ReadBytesFromInputFile(input_file, fn).decode()
+
+
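
Usage sketch (paths hypothetical): each of these helpers accepts an open ZipFile object, a path to a .zip on disk, or a path to an extracted target-files directory.

```
import zipfile

if DoesInputFileContain("target_files.zip", "META/misc_info.txt"):
    misc_info = ReadFromInputFile("target_files.zip", "META/misc_info.txt")
with zipfile.ZipFile("target_files.zip") as zfp:
    boot_img = ReadBytesFromInputFile(zfp, "IMAGES/boot.img")
```
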
def ExtractFromInputFile(input_file, fn):
"""Extracts the contents of fn from input zipfile or directory into a file."""
if isinstance(input_file, zipfile.ZipFile):
@@ -712,7 +765,16 @@
with open(tmp_file, 'wb') as f:
f.write(input_file.read(fn))
return tmp_file
+ elif zipfile.is_zipfile(input_file):
+ with zipfile.ZipFile(input_file, "r", allowZip64=True) as zfp:
+ tmp_file = MakeTempFile(os.path.basename(fn))
+ with open(tmp_file, "wb") as fp:
+ fp.write(zfp.read(fn))
+ return tmp_file
else:
+ if not os.path.isdir(input_file):
+ raise ValueError(
+ "Invalid input_file, accepted inputs are ZipFile object, path to .zip file on disk, or path to extracted directory. Actual: " + input_file)
file = os.path.join(input_file, *fn.split("/"))
if not os.path.exists(file):
raise KeyError(fn)
@@ -724,7 +786,7 @@
GZ = 2
-def _GetRamdiskFormat(info_dict):
+def GetRamdiskFormat(info_dict):
if info_dict.get('lz4_ramdisks') == 'true':
ramdisk_format = RamdiskFormat.LZ4
else:
@@ -833,7 +895,7 @@
# Load recovery fstab if applicable.
d["fstab"] = _FindAndLoadRecoveryFstab(d, input_file, read_helper)
- ramdisk_format = _GetRamdiskFormat(d)
+ ramdisk_format = GetRamdiskFormat(d)
# Tries to load the build props for all partitions with care_map, including
# system and vendor.
@@ -853,6 +915,10 @@
d["avb_{}_salt".format(partition)] = sha256(
fingerprint.encode()).hexdigest()
+ # Set up the salt for partitions without build.prop
+ if build_info.fingerprint:
+ d["avb_salt"] = sha256(build_info.fingerprint.encode()).hexdigest()
+
# Set the vbmeta digest if exists
try:
d["vbmeta_digest"] = read_helper("META/vbmeta_digest.txt").rstrip()
@@ -1047,6 +1113,13 @@
return {key: val for key, val in d.items()
if key in self.props_allow_override}
+ def __getstate__(self):
+ state = self.__dict__.copy()
+ # zipfile.ZipFile holds an open file handle and can't be pickled;
+ # keep just the filename so the object stays picklable.
+ if "input_file" in state and isinstance(state["input_file"], zipfile.ZipFile):
+ state["input_file"] = state["input_file"].filename
+ return state
+
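
A sketch of why `__getstate__` helps here: `zipfile.ZipFile` holds an open file handle and cannot be pickled, so swapping it for its filename lets instances cross process boundaries (e.g. via multiprocessing). The class below is illustrative only:

```
import pickle
import zipfile

class Example:
    def __init__(self, zf):
        self.input_file = zf

    def __getstate__(self):
        state = self.__dict__.copy()
        if isinstance(state.get("input_file"), zipfile.ZipFile):
            state["input_file"] = state["input_file"].filename
        return state

# pickle.dumps(Example(zipfile.ZipFile("target_files.zip")))  # no longer raises
```
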
def GetProp(self, prop):
return self.build_props.get(prop)
@@ -1181,13 +1254,13 @@
"""
def uniq_concat(a, b):
- combined = set(a.split(" "))
- combined.update(set(b.split(" ")))
+ combined = set(a.split())
+ combined.update(set(b.split()))
combined = [item.strip() for item in combined if item.strip()]
return " ".join(sorted(combined))
if (framework_dict.get("use_dynamic_partitions") !=
- "true") or (vendor_dict.get("use_dynamic_partitions") != "true"):
+ "true") or (vendor_dict.get("use_dynamic_partitions") != "true"):
raise ValueError("Both dictionaries must have use_dynamic_partitions=true")
merged_dict = {"use_dynamic_partitions": "true"}
@@ -1203,7 +1276,7 @@
# Super block devices are defined by the vendor dict.
if "super_block_devices" in vendor_dict:
merged_dict["super_block_devices"] = vendor_dict["super_block_devices"]
- for block_device in merged_dict["super_block_devices"].split(" "):
+ for block_device in merged_dict["super_block_devices"].split():
key = "super_%s_device_size" % block_device
if key not in vendor_dict:
raise ValueError("Vendor dict does not contain required key %s." % key)
@@ -1212,7 +1285,7 @@
# Partition groups and group sizes are defined by the vendor dict because
# these values may vary for each board that uses a shared system image.
merged_dict["super_partition_groups"] = vendor_dict["super_partition_groups"]
- for partition_group in merged_dict["super_partition_groups"].split(" "):
+ for partition_group in merged_dict["super_partition_groups"].split():
# Set the partition group's size using the value from the vendor dict.
key = "super_%s_group_size" % partition_group
if key not in vendor_dict:
@@ -1324,11 +1397,7 @@
def AppendAVBSigningArgs(cmd, partition):
"""Append signing arguments for avbtool."""
# e.g., "--key path/to/signing_key --algorithm SHA256_RSA4096"
- key_path = OPTIONS.info_dict.get("avb_" + partition + "_key_path")
- if key_path and not os.path.exists(key_path) and OPTIONS.search_path:
- new_key_path = os.path.join(OPTIONS.search_path, key_path)
- if os.path.exists(new_key_path):
- key_path = new_key_path
+ key_path = ResolveAVBSigningPathArgs(OPTIONS.info_dict.get("avb_" + partition + "_key_path"))
algorithm = OPTIONS.info_dict.get("avb_" + partition + "_algorithm")
if key_path and algorithm:
cmd.extend(["--key", key_path, "--algorithm", algorithm])
@@ -1338,6 +1407,32 @@
cmd.extend(["--salt", avb_salt])
+def ResolveAVBSigningPathArgs(split_args):
+
+ def ResolveBinaryPath(path):
+ if os.path.exists(path):
+ return path
+ new_path = os.path.join(OPTIONS.search_path, path)
+ if os.path.exists(new_path):
+ return new_path
+ raise ExternalError(
+ "Failed to find {}".format(new_path))
+
+ if not split_args:
+ return split_args
+
+ if isinstance(split_args, list):
+ for index, arg in enumerate(split_args[:-1]):
+ if arg == '--signing_helper':
+ signing_helper_path = split_args[index + 1]
+ split_args[index + 1] = ResolveBinaryPath(signing_helper_path)
+ break
+ elif isinstance(split_args, str):
+ split_args = ResolveBinaryPath(split_args)
+
+ return split_args
+
+
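
Usage sketch (paths hypothetical): the helper rewrites the value following `--signing_helper` in a split argument list, or resolves a bare key-path string, against OPTIONS.search_path when the path does not exist as given.

```
args = ["--signing_helper", "bin/my_signer", "--rollback_index", "3"]
args = ResolveAVBSigningPathArgs(args)
# -> ["--signing_helper", "<search_path>/bin/my_signer", "--rollback_index", "3"]

key_path = ResolveAVBSigningPathArgs("path/to/testkey_rsa4096.pem")
```
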
def GetAvbPartitionArg(partition, image, info_dict=None):
"""Returns the VBMeta arguments for partition.
@@ -1390,10 +1485,7 @@
"""
if key is None:
key = info_dict["avb_" + partition + "_key_path"]
- if key and not os.path.exists(key) and OPTIONS.search_path:
- new_key_path = os.path.join(OPTIONS.search_path, key)
- if os.path.exists(new_key_path):
- key = new_key_path
+ key = ResolveAVBSigningPathArgs(key)
pubkey_path = ExtractAvbPublicKey(info_dict["avb_avbtool"], key)
rollback_index_location = info_dict[
"avb_" + partition + "_rollback_index_location"]
@@ -1409,10 +1501,7 @@
key_path = OPTIONS.info_dict.get("gki_signing_key_path")
algorithm = OPTIONS.info_dict.get("gki_signing_algorithm")
- if not os.path.exists(key_path) and OPTIONS.search_path:
- new_key_path = os.path.join(OPTIONS.search_path, key_path)
- if os.path.exists(new_key_path):
- key_path = new_key_path
+ key_path = ResolveAVBSigningPathArgs(key_path)
# Checks key_path exists, before processing --gki_signing_* args.
if not os.path.exists(key_path):
@@ -1472,12 +1561,15 @@
custom_partitions = OPTIONS.info_dict.get(
"avb_custom_images_partition_list", "").strip().split()
+ custom_avb_partitions = ["vbmeta_" + part for part in OPTIONS.info_dict.get(
+ "avb_custom_vbmeta_images_partition_list", "").strip().split()]
for partition, path in partitions.items():
if partition not in needed_partitions:
continue
assert (partition in AVB_PARTITIONS or
partition in AVB_VBMETA_PARTITIONS or
+ partition in custom_avb_partitions or
partition in custom_partitions), \
'Unknown partition: {}'.format(partition)
assert os.path.exists(path), \
@@ -1506,20 +1598,28 @@
found = True
break
assert found, 'Failed to find {}'.format(chained_image)
+
+ split_args = ResolveAVBSigningPathArgs(split_args)
cmd.extend(split_args)
RunAndCheckOutput(cmd)
def _MakeRamdisk(sourcedir, fs_config_file=None,
+ dev_node_file=None,
ramdisk_format=RamdiskFormat.GZ):
ramdisk_img = tempfile.NamedTemporaryFile()
- if fs_config_file is not None and os.access(fs_config_file, os.F_OK):
- cmd = ["mkbootfs", "-f", fs_config_file,
- os.path.join(sourcedir, "RAMDISK")]
- else:
- cmd = ["mkbootfs", os.path.join(sourcedir, "RAMDISK")]
+ cmd = ["mkbootfs"]
+
+ if fs_config_file and os.access(fs_config_file, os.F_OK):
+ cmd.extend(["-f", fs_config_file])
+
+ if dev_node_file and os.access(dev_node_file, os.F_OK):
+ cmd.extend(["-n", dev_node_file])
+
+ cmd.append(os.path.join(sourcedir, "RAMDISK"))
+
p1 = Run(cmd, stdout=subprocess.PIPE)
if ramdisk_format == RamdiskFormat.LZ4:
p2 = Run(["lz4", "-l", "-12", "--favor-decSpeed"], stdin=p1.stdout,
@@ -1537,7 +1637,8 @@
return ramdisk_img
-def _BuildBootableImage(image_name, sourcedir, fs_config_file, info_dict=None,
+def _BuildBootableImage(image_name, sourcedir, fs_config_file,
+ dev_node_file=None, info_dict=None,
has_ramdisk=False, two_step_image=False):
"""Build a bootable image from the specified sourcedir.
@@ -1578,8 +1679,8 @@
img = tempfile.NamedTemporaryFile()
if has_ramdisk:
- ramdisk_format = _GetRamdiskFormat(info_dict)
- ramdisk_img = _MakeRamdisk(sourcedir, fs_config_file,
+ ramdisk_format = GetRamdiskFormat(info_dict)
+ ramdisk_img = _MakeRamdisk(sourcedir, fs_config_file, dev_node_file,
ramdisk_format=ramdisk_format)
# use MKBOOTIMG from environ, or "mkbootimg" if empty or not set
@@ -1674,23 +1775,8 @@
with open(img.name, 'ab') as f:
f.write(boot_signature_bytes)
- if (info_dict.get("boot_signer") == "true" and
- info_dict.get("verity_key")):
- # Hard-code the path as "/boot" for two-step special recovery image (which
- # will be loaded into /boot during the two-step OTA).
- if two_step_image:
- path = "/boot"
- else:
- path = "/" + partition_name
- cmd = [OPTIONS.boot_signer_path]
- cmd.extend(OPTIONS.boot_signer_args)
- cmd.extend([path, img.name,
- info_dict["verity_key"] + ".pk8",
- info_dict["verity_key"] + ".x509.pem", img.name])
- RunAndCheckOutput(cmd)
-
# Sign the image if vboot is non-empty.
- elif info_dict.get("vboot"):
+ if info_dict.get("vboot"):
path = "/" + partition_name
img_keyblock = tempfile.NamedTemporaryFile()
# We have switched from the prebuilt futility binary to using the tool
@@ -1724,7 +1810,8 @@
AppendAVBSigningArgs(cmd, partition_name)
args = info_dict.get("avb_" + partition_name + "_add_hash_footer_args")
if args and args.strip():
- cmd.extend(shlex.split(args))
+ split_args = ResolveAVBSigningPathArgs(shlex.split(args))
+ cmd.extend(split_args)
RunAndCheckOutput(cmd)
img.seek(os.SEEK_SET, 0)
@@ -1765,7 +1852,8 @@
AppendAVBSigningArgs(cmd, partition_name)
args = info_dict.get("avb_" + partition_name + "_add_hash_footer_args")
if args and args.strip():
- cmd.extend(shlex.split(args))
+ split_args = ResolveAVBSigningPathArgs(shlex.split(args))
+ cmd.extend(split_args)
RunAndCheckOutput(cmd)
@@ -1802,7 +1890,8 @@
def GetBootableImage(name, prebuilt_name, unpack_dir, tree_subdir,
- info_dict=None, two_step_image=False):
+ info_dict=None, two_step_image=False,
+ dev_nodes=False):
"""Return a File object with the desired bootable image.
Look for it in 'unpack_dir'/BOOTABLE_IMAGES under the name 'prebuilt_name',
@@ -1838,6 +1927,8 @@
fs_config = "META/" + tree_subdir.lower() + "_filesystem_config.txt"
data = _BuildBootableImage(prebuilt_name, os.path.join(unpack_dir, tree_subdir),
os.path.join(unpack_dir, fs_config),
+ os.path.join(unpack_dir, 'META/ramdisk_node_list')
+ if dev_nodes else None,
info_dict, has_ramdisk, two_step_image)
if data:
return File(name, data)
@@ -1859,7 +1950,7 @@
img = tempfile.NamedTemporaryFile()
- ramdisk_format = _GetRamdiskFormat(info_dict)
+ ramdisk_format = GetRamdiskFormat(info_dict)
ramdisk_img = _MakeRamdisk(sourcedir, ramdisk_format=ramdisk_format)
# use MKBOOTIMG from environ, or "mkbootimg" if empty or not set
@@ -1869,7 +1960,8 @@
fn = os.path.join(sourcedir, "dtb")
if os.access(fn, os.F_OK):
- has_vendor_kernel_boot = (info_dict.get("vendor_kernel_boot", "").lower() == "true")
+ has_vendor_kernel_boot = (info_dict.get(
+ "vendor_kernel_boot", "").lower() == "true")
# Pack dtb into vendor_kernel_boot if building vendor_kernel_boot.
# Otherwise pack dtb into vendor_boot.
@@ -1941,7 +2033,8 @@
AppendAVBSigningArgs(cmd, partition_name)
args = info_dict.get(f'avb_{partition_name}_add_hash_footer_args')
if args and args.strip():
- cmd.extend(shlex.split(args))
+ split_args = ResolveAVBSigningPathArgs(shlex.split(args))
+ cmd.extend(split_args)
RunAndCheckOutput(cmd)
img.seek(os.SEEK_SET, 0)
@@ -1980,7 +2073,7 @@
def GetVendorKernelBootImage(name, prebuilt_name, unpack_dir, tree_subdir,
- info_dict=None):
+ info_dict=None):
"""Return a File object with the desired vendor kernel boot image.
Look for it under 'unpack_dir'/IMAGES, otherwise construct it from
@@ -2010,6 +2103,26 @@
shutil.copyfileobj(in_file, out_file)
+def UnzipSingleFile(input_zip: zipfile.ZipFile, info: zipfile.ZipInfo, dirname: str):
+ # According to https://stackoverflow.com/questions/434641/how-do-i-set-permissions-attributes-on-a-file-in-a-zip-file-using-pythons-zip/6297838#6297838
+ # higher bits of |external_attr| are unix file permission and types
+ unix_filetype = info.external_attr >> 16
+
+ def CheckMask(a, mask):
+ return (a & mask) == mask
+
+ def IsSymlink(a):
+ return CheckMask(a, stat.S_IFLNK)
+ # python3.11's zipfile implementation doesn't handle symlinks correctly.
+ if not IsSymlink(unix_filetype):
+ return input_zip.extract(info, dirname)
+ if dirname is None:
+ dirname = os.getcwd()
+ target = os.path.join(dirname, info.filename)
+ os.makedirs(os.path.dirname(target), exist_ok=True)
+ os.symlink(input_zip.read(info).decode(), target)
+
+
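
A minimal sketch of the detection the helper above performs: the upper 16 bits of a ZipInfo's external_attr carry the Unix mode, from which the entry type can be read.

```
import stat
import zipfile

def is_symlink_entry(info: zipfile.ZipInfo) -> bool:
    # The upper 16 bits of external_attr hold the Unix st_mode.
    return stat.S_ISLNK(info.external_attr >> 16)
```
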
def UnzipToDir(filename, dirname, patterns=None):
"""Unzips the archive to the given directory.
@@ -2020,20 +2133,46 @@
archive. Non-matching patterns will be filtered out. If there's no match
after the filtering, no file will be unzipped.
"""
- cmd = ["unzip", "-o", "-q", filename, "-d", dirname]
- if patterns is not None:
+ with zipfile.ZipFile(filename, allowZip64=True, mode="r") as input_zip:
# Filter out non-matching patterns. unzip will complain otherwise.
- with zipfile.ZipFile(filename, allowZip64=True) as input_zip:
- names = input_zip.namelist()
- filtered = [
- pattern for pattern in patterns if fnmatch.filter(names, pattern)]
+ entries = input_zip.infolist()
+ # b/283033491
+ # Per https://en.wikipedia.org/wiki/ZIP_(file_format)#Central_directory_file_header
+ # In zip64 mode, central directory record's header_offset field might be
+ # set to 0xFFFFFFFF if header offset is > 2^32. In this case, the extra
+ # fields will contain an 8 byte little endian integer at offset 20
+ # to indicate the actual local header offset.
+ # As of python3.11, python does not handle zip64 central directories
+ # correctly, so we will manually do the parsing here.
- # There isn't any matching files. Don't unzip anything.
- if not filtered:
- return
- cmd.extend(filtered)
+ # ZIP64 central directory extra field has two required fields:
+ # 2 bytes header ID and 2 bytes size field. These two required fields have
+ # a total size of 4 bytes. Then it has three other 8-byte fields, followed
+ # by a 4-byte disk number field. The trailing disk number field is not
+ # required to be present, but if it is, the total size of the extra field
+ # will be divisible by 8 (because 2+2+4+8*n is always a multiple of 8).
+ # Most extra fields are optional, but when they appear, they must appear
+ # in the order defined by the zip64 spec. Since the file header offset is
+ # the second-to-last field in the zip64 spec, it will be either in the last
+ # 8 bytes, or in bytes -12 to -4, depending on whether the disk number
+ # field is present.
+ for entry in entries:
+ if entry.header_offset == 0xFFFFFFFF:
+ if len(entry.extra) % 8 == 0:
+ entry.header_offset = int.from_bytes(entry.extra[-12:-4], "little")
+ else:
+ entry.header_offset = int.from_bytes(entry.extra[-8:], "little")
+ if patterns is not None:
+ filtered = [info for info in entries if any(
+ [fnmatch.fnmatch(info.filename, p) for p in patterns])]
- RunAndCheckOutput(cmd)
+ # There aren't any matching files. Don't unzip anything.
+ if not filtered:
+ return
+ for info in filtered:
+ UnzipSingleFile(input_zip, info, dirname)
+ else:
+ for info in entries:
+ UnzipSingleFile(input_zip, info, dirname)
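The offset math above can be checked against a hand-built ZIP64 extra field; a standalone sketch (all field values are made up):

```
import struct

# Hypothetical zip64 extra field: header ID 0x0001, payload size 28, then
# uncompressed size, compressed size, local header offset, disk number.
extra = struct.pack("<HHQQQL", 0x0001, 28, 10, 10, 0x123456789, 0)
assert len(extra) % 8 == 0                        # disk number is present
header_offset = int.from_bytes(extra[-12:-4], "little")
assert header_offset == 0x123456789
```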
def UnzipTemp(filename, patterns=None):
@@ -2065,7 +2204,6 @@
def GetUserImage(which, tmpdir, input_zip,
info_dict=None,
allow_shared_blocks=None,
- hashtree_info_generator=None,
reset_file_map=False):
"""Returns an Image object suitable for passing to BlockImageDiff.
@@ -2082,8 +2220,6 @@
info_dict: The dict to be looked up for relevant info.
allow_shared_blocks: If image is sparse, whether having shared blocks is
allowed. If none, it is looked up from info_dict.
- hashtree_info_generator: If present and image is sparse, generates the
- hashtree_info for this sparse image.
reset_file_map: If true and image is sparse, reset file map before returning
the image.
Returns:
@@ -2093,9 +2229,7 @@
if info_dict is None:
info_dict = LoadInfoDict(input_zip)
- is_sparse = info_dict.get("extfs_sparse_flag")
- if info_dict.get(which + "_disable_sparse"):
- is_sparse = False
+ is_sparse = IsSparseImage(os.path.join(tmpdir, "IMAGES", which + ".img"))
# When target uses 'BOARD_EXT4_SHARE_DUP_BLOCKS := true', images may contain
# shared blocks (i.e. some blocks will show up in multiple files' block
@@ -2105,15 +2239,14 @@
allow_shared_blocks = info_dict.get("ext4_share_dup_blocks") == "true"
if is_sparse:
- img = GetSparseImage(which, tmpdir, input_zip, allow_shared_blocks,
- hashtree_info_generator)
+ img = GetSparseImage(which, tmpdir, input_zip, allow_shared_blocks)
if reset_file_map:
img.ResetFileMap()
return img
- return GetNonSparseImage(which, tmpdir, hashtree_info_generator)
+ return GetNonSparseImage(which, tmpdir)
-def GetNonSparseImage(which, tmpdir, hashtree_info_generator=None):
+def GetNonSparseImage(which, tmpdir):
"""Returns a Image object suitable for passing to BlockImageDiff.
This function loads the specified non-sparse image from the given path.
@@ -2131,11 +2264,10 @@
# ota_from_target_files.py (since LMP).
assert os.path.exists(path) and os.path.exists(mappath)
- return images.FileImage(path, hashtree_info_generator=hashtree_info_generator)
+ return images.FileImage(path)
-def GetSparseImage(which, tmpdir, input_zip, allow_shared_blocks,
- hashtree_info_generator=None):
+def GetSparseImage(which, tmpdir, input_zip, allow_shared_blocks):
"""Returns a SparseImage object suitable for passing to BlockImageDiff.
This function loads the specified sparse image from the given path, and
@@ -2148,8 +2280,6 @@
tmpdir: The directory that contains the prebuilt image and block map file.
input_zip: The target-files ZIP archive.
allow_shared_blocks: Whether having shared blocks is allowed.
- hashtree_info_generator: If present, generates the hashtree_info for this
- sparse image.
Returns:
A SparseImage object, with file_map info loaded.
"""
@@ -2166,8 +2296,7 @@
clobbered_blocks = "0"
image = sparse_img.SparseImage(
- path, mappath, clobbered_blocks, allow_shared_blocks=allow_shared_blocks,
- hashtree_info_generator=hashtree_info_generator)
+ path, mappath, clobbered_blocks, allow_shared_blocks=allow_shared_blocks)
# block.map may contain less blocks, because mke2fs may skip allocating blocks
# if they contain all zeros. We can't reconstruct such a file from its block
@@ -2381,8 +2510,40 @@
stdoutdata, _ = proc.communicate(password)
if proc.returncode != 0:
raise ExternalError(
- "Failed to run signapk.jar: return code {}:\n{}".format(
+ "Failed to run {}: return code {}:\n{}".format(cmd,
+ proc.returncode, stdoutdata))
+
+
+def SignSePolicy(sepolicy, key, password):
+ """Sign the sepolicy zip, producing an fsverity .fsv_sig and
+ an RSA .sig signature files.
+ """
+
+ if OPTIONS.sign_sepolicy_path is None:
+ logger.info("No sign_sepolicy_path specified, %s was not signed", sepolicy)
+ return False
+
+ java_library_path = os.path.join(
+ OPTIONS.search_path, OPTIONS.signapk_shared_library_path)
+
+ cmd = ([OPTIONS.java_path] + OPTIONS.java_args +
+ ["-Djava.library.path=" + java_library_path,
+ "-jar", os.path.join(OPTIONS.search_path, OPTIONS.sign_sepolicy_path)] +
+ OPTIONS.extra_sign_sepolicy_args)
+
+ cmd.extend([key + OPTIONS.public_key_suffix,
+ key + OPTIONS.private_key_suffix,
+ sepolicy, os.path.dirname(sepolicy)])
+
+ proc = Run(cmd, stdin=subprocess.PIPE)
+ if password is not None:
+ password += "\n"
+ stdoutdata, _ = proc.communicate(password)
+ if proc.returncode != 0:
+ raise ExternalError(
+ "Failed to run sign sepolicy: return code {}:\n{}".format(
proc.returncode, stdoutdata))
+ return True
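For orientation, a hedged sketch of how the new helper gets used; the jar name and key path are placeholders, and the real values come from `--sign_sepolicy_path` and the release keys:

```
# Hypothetical invocation; SignSePolicy returns False (and only logs) when
# no signer is configured, so callers can treat signing as optional.
common.OPTIONS.sign_sepolicy_path = "framework/sign_sepolicy.jar"  # placeholder
signed = SignSePolicy("out/sepolicy.zip", "path/to/releasekey", password=None)
# Roughly equivalent to running:
#   java -Djava.library.path=<search_path>/<lib dir> \
#     -jar <search_path>/framework/sign_sepolicy.jar \
#     path/to/releasekey.x509.pem path/to/releasekey.pk8 out/sepolicy.zip out
```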
def CheckSize(data, target, info_dict):
@@ -2560,7 +2721,8 @@
opts, args = getopt.getopt(
argv, "hvp:s:x:" + extra_opts,
["help", "verbose", "path=", "signapk_path=",
- "signapk_shared_library_path=", "extra_signapk_args=", "aapt2_path=",
+ "signapk_shared_library_path=", "extra_signapk_args=",
+ "sign_sepolicy_path=", "extra_sign_sepolicy_args=", "aapt2_path=",
"java_path=", "java_args=", "android_jar_path=", "public_key_suffix=",
"private_key_suffix=", "boot_signer_path=", "boot_signer_args=",
"verity_signer_path=", "verity_signer_args=", "device_specific=",
@@ -2584,6 +2746,10 @@
OPTIONS.signapk_shared_library_path = a
elif o in ("--extra_signapk_args",):
OPTIONS.extra_signapk_args = shlex.split(a)
+ elif o in ("--sign_sepolicy_path",):
+ OPTIONS.sign_sepolicy_path = a
+ elif o in ("--extra_sign_sepolicy_args",):
+ OPTIONS.extra_sign_sepolicy_args = shlex.split(a)
elif o in ("--aapt2_path",):
OPTIONS.aapt2_path = a
elif o in ("--java_path",):
@@ -2597,13 +2763,17 @@
elif o in ("--private_key_suffix",):
OPTIONS.private_key_suffix = a
elif o in ("--boot_signer_path",):
- OPTIONS.boot_signer_path = a
+ raise ValueError(
+ "--boot_signer_path is no longer supported, please switch to AVB")
elif o in ("--boot_signer_args",):
- OPTIONS.boot_signer_args = shlex.split(a)
+ raise ValueError(
+ "--boot_signer_args is no longer supported, please switch to AVB")
elif o in ("--verity_signer_path",):
- OPTIONS.verity_signer_path = a
+ raise ValueError(
+ "--verity_signer_path is no longer supported, please switch to AVB")
elif o in ("--verity_signer_args",):
- OPTIONS.verity_signer_args = shlex.split(a)
+ raise ValueError(
+ "--verity_signer_args is no longer supported, please switch to AVB")
elif o in ("-s", "--device_specific"):
OPTIONS.device_specific = a
elif o in ("-x", "--extra"):
@@ -2848,26 +3018,33 @@
zipfile.ZIP64_LIMIT = saved_zip64_limit
-def ZipDelete(zip_filename, entries):
+def ZipDelete(zip_filename, entries, force=False):
"""Deletes entries from a ZIP file.
- Since deleting entries from a ZIP file is not supported, it shells out to
- 'zip -d'.
-
Args:
zip_filename: The name of the ZIP file.
entries: The name of the entry, or the list of names to be deleted.
-
- Raises:
- AssertionError: In case of non-zero return from 'zip'.
"""
if isinstance(entries, str):
entries = [entries]
# If list is empty, nothing to do
if not entries:
return
- cmd = ["zip", "-d", zip_filename] + entries
- RunAndCheckOutput(cmd)
+
+ with zipfile.ZipFile(zip_filename, 'r') as zin:
+ if not force and len(set(zin.namelist()).intersection(entries)) == 0:
+ raise ExternalError(
+ "Failed to delete zip entries, name not matched: %s" % entries)
+
+ fd, new_zipfile = tempfile.mkstemp(dir=os.path.dirname(zip_filename))
+ os.close(fd)
+ cmd = ["zip2zip", "-i", zip_filename, "-o", new_zipfile]
+ for entry in entries:
+ cmd.append("-x")
+ cmd.append(entry)
+ RunAndCheckOutput(cmd)
+
+ os.replace(new_zipfile, zip_filename)
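Call sites are unchanged; a quick usage sketch under the new semantics (file names hypothetical):

```
# With the default force=False, a completely unmatched entry list now
# raises ExternalError instead of silently succeeding.
ZipDelete("target-files.zip", ["IMAGES/system.map", "IMAGES/vendor.map"])
# force=True skips the name-match check, e.g. for best-effort cleanup.
ZipDelete("target-files.zip", "META/care_map.pb", force=True)
```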
def ZipClose(zip_file):
@@ -3425,7 +3602,8 @@
"ext4": "EMMC",
"emmc": "EMMC",
"f2fs": "EMMC",
- "squashfs": "EMMC"
+ "squashfs": "EMMC",
+ "erofs": "EMMC"
}
@@ -3573,11 +3751,13 @@
else:
system_root_image = info_dict.get("system_root_image") == "true"
+ include_recovery_dtbo = info_dict.get("include_recovery_dtbo") == "true"
+ include_recovery_acpio = info_dict.get("include_recovery_acpio") == "true"
path = os.path.join(input_dir, recovery_resource_dat_path)
# With system-root-image, boot and recovery images will have mismatching
# entries (only recovery has the ramdisk entry) (Bug: 72731506). Use bsdiff
# to handle such a case.
- if system_root_image:
+ if system_root_image or include_recovery_dtbo or include_recovery_acpio:
diff_program = ["bsdiff"]
bonus_args = ""
assert not os.path.exists(path)
@@ -3960,134 +4140,34 @@
return None
-def GetCareMap(which, imgname):
- """Returns the care_map string for the given partition.
-
- Args:
- which: The partition name, must be listed in PARTITIONS_WITH_CARE_MAP.
- imgname: The filename of the image.
-
- Returns:
- (which, care_map_ranges): care_map_ranges is the raw string of the care_map
- RangeSet; or None.
- """
- assert which in PARTITIONS_WITH_CARE_MAP
-
- # which + "_image_size" contains the size that the actual filesystem image
- # resides in, which is all that needs to be verified. The additional blocks in
- # the image file contain verity metadata, by reading which would trigger
- # invalid reads.
- image_size = OPTIONS.info_dict.get(which + "_image_size")
- if not image_size:
- return None
-
- disable_sparse = OPTIONS.info_dict.get(which + "_disable_sparse")
-
- image_blocks = int(image_size) // 4096 - 1
- # It's OK for image_blocks to be 0, because care map ranges are inclusive.
- # So 0-0 means "just block 0", which is valid.
- assert image_blocks >= 0, "blocks for {} must be non-negative, image size: {}".format(
- which, image_size)
-
- # For sparse images, we will only check the blocks that are listed in the care
- # map, i.e. the ones with meaningful data.
- if "extfs_sparse_flag" in OPTIONS.info_dict and not disable_sparse:
- simg = sparse_img.SparseImage(imgname)
- care_map_ranges = simg.care_map.intersect(
- rangelib.RangeSet("0-{}".format(image_blocks)))
-
- # Otherwise for non-sparse images, we read all the blocks in the filesystem
- # image.
- else:
- care_map_ranges = rangelib.RangeSet("0-{}".format(image_blocks))
-
- return [which, care_map_ranges.to_string_raw()]
-
-
-def AddCareMapForAbOta(output_file, ab_partitions, image_paths):
- """Generates and adds care_map.pb for a/b partition that has care_map.
-
- Args:
- output_file: The output zip file (needs to be already open),
- or file path to write care_map.pb.
- ab_partitions: The list of A/B partitions.
- image_paths: A map from the partition name to the image path.
- """
- if not output_file:
- raise ExternalError('Expected output_file for AddCareMapForAbOta')
-
- care_map_list = []
- for partition in ab_partitions:
- partition = partition.strip()
- if partition not in PARTITIONS_WITH_CARE_MAP:
- continue
-
- verity_block_device = "{}_verity_block_device".format(partition)
- avb_hashtree_enable = "avb_{}_hashtree_enable".format(partition)
- if (verity_block_device in OPTIONS.info_dict or
- OPTIONS.info_dict.get(avb_hashtree_enable) == "true"):
- if partition not in image_paths:
- logger.warning('Potential partition with care_map missing from images: %s',
- partition)
- continue
- image_path = image_paths[partition]
- if not os.path.exists(image_path):
- raise ExternalError('Expected image at path {}'.format(image_path))
-
- care_map = GetCareMap(partition, image_path)
- if not care_map:
- continue
- care_map_list += care_map
-
- # adds fingerprint field to the care_map
- # TODO(xunchang) revisit the fingerprint calculation for care_map.
- partition_props = OPTIONS.info_dict.get(partition + ".build.prop")
- prop_name_list = ["ro.{}.build.fingerprint".format(partition),
- "ro.{}.build.thumbprint".format(partition)]
-
- present_props = [x for x in prop_name_list if
- partition_props and partition_props.GetProp(x)]
- if not present_props:
- logger.warning(
- "fingerprint is not present for partition %s", partition)
- property_id, fingerprint = "unknown", "unknown"
- else:
- property_id = present_props[0]
- fingerprint = partition_props.GetProp(property_id)
- care_map_list += [property_id, fingerprint]
-
- if not care_map_list:
- return
-
- # Converts the list into proto buf message by calling care_map_generator; and
- # writes the result to a temp file.
- temp_care_map_text = MakeTempFile(prefix="caremap_text-",
- suffix=".txt")
- with open(temp_care_map_text, 'w') as text_file:
- text_file.write('\n'.join(care_map_list))
-
- temp_care_map = MakeTempFile(prefix="caremap-", suffix=".pb")
- care_map_gen_cmd = ["care_map_generator", temp_care_map_text, temp_care_map]
- RunAndCheckOutput(care_map_gen_cmd)
-
- if not isinstance(output_file, zipfile.ZipFile):
- shutil.copy(temp_care_map, output_file)
- return
- # output_file is a zip file
- care_map_path = "META/care_map.pb"
- if care_map_path in output_file.namelist():
- # Copy the temp file into the OPTIONS.input_tmp dir and update the
- # replace_updated_files_list used by add_img_to_target_files
- if not OPTIONS.replace_updated_files_list:
- OPTIONS.replace_updated_files_list = []
- shutil.copy(temp_care_map, os.path.join(OPTIONS.input_tmp, care_map_path))
- OPTIONS.replace_updated_files_list.append(care_map_path)
- else:
- ZipWrite(output_file, temp_care_map, arcname=care_map_path)
-
-
def IsSparseImage(filepath):
+ if not os.path.exists(filepath):
+ return False
with open(filepath, 'rb') as fp:
# Magic for android sparse image format
# https://source.android.com/devices/bootloader/images
return fp.read(4) == b'\x3A\xFF\x26\xED'
+
+
+def ParseUpdateEngineConfig(path: str):
+ """Parse the update_engine config stored in file `path`
+ Args
+ path: Path to update_engine_config.txt file in target_files
+
+ Returns
+ A tuple of (major, minor) version number . E.g. (2, 8)
+ """
+ with open(path, "r") as fp:
+ # update_engine_config.txt is only supposed to contain two lines,
+ # PAYLOAD_MAJOR_VERSION and PAYLOAD_MINOR_VERSION. 1024 should be more than
+ # sufficient. If the length is more than that, something is wrong.
+ data = fp.read(1024)
+ major = re.search(r"PAYLOAD_MAJOR_VERSION=(\d+)", data)
+ if not major:
+ raise ValueError(
+ f"{path} is an invalid update_engine config, missing PAYLOAD_MAJOR_VERSION {data}")
+ minor = re.search(r"PAYLOAD_MINOR_VERSION=(\d+)", data)
+ if not minor:
+ raise ValueError(
+ f"{path} is an invalid update_engine config, missing PAYLOAD_MINOR_VERSION {data}")
+ return (int(major.group(1)), int(minor.group(1)))
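A config that parses as (2, 8) looks like the following; this sketch fabricates the file, whereas real ones are emitted by the build:

```
import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("PAYLOAD_MAJOR_VERSION=2\nPAYLOAD_MINOR_VERSION=8\n")
assert ParseUpdateEngineConfig(f.name) == (2, 8)
```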
diff --git a/tools/releasetools/create_brick_ota.py b/tools/releasetools/create_brick_ota.py
new file mode 100644
index 0000000..44f0a95
--- /dev/null
+++ b/tools/releasetools/create_brick_ota.py
@@ -0,0 +1,92 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2023 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import argparse
+from pathlib import Path
+import zipfile
+from typing import List
+import common
+import tempfile
+import shutil
+
+PARTITIONS_TO_WIPE = ["/dev/block/by-name/vbmeta",
+ "/dev/block/by-name/vbmeta_a",
+ "/dev/block/by-name/vbmeta_b",
+ "/dev/block/by-name/vbmeta_system_a",
+ "/dev/block/by-name/vbmeta_system_b",
+ "/dev/block/by-name/boot",
+ "/dev/block/by-name/boot_a",
+ "/dev/block/by-name/boot_b",
+ "/dev/block/by-name/vendor_boot",
+ "/dev/block/by-name/vendor_boot_a",
+ "/dev/block/by-name/vendor_boot_b",
+ "/dev/block/by-name/init_boot_a",
+ "/dev/block/by-name/init_boot_b",
+ "/dev/block/by-name/metadata",
+ "/dev/block/by-name/super",
+ "/dev/block/by-name/userdata"]
+
+
+def CreateBrickOta(product_name: str, output_path: Path, extra_wipe_partitions: str, serialno: str):
+ partitions_to_wipe = PARTITIONS_TO_WIPE
+ if extra_wipe_partitions is not None:
+ partitions_to_wipe = PARTITIONS_TO_WIPE + extra_wipe_partitions.split(",")
+ # recovery requires the product name to be a |-separated list
+ product_name = product_name.replace(",", "|")
+ with zipfile.ZipFile(output_path, "w") as zfp:
+ zfp.writestr("recovery.wipe", "\n".join(partitions_to_wipe))
+ zfp.writestr("payload.bin", "")
+ zfp.writestr("META-INF/com/android/metadata", "\n".join(
+ ["ota-type=BRICK", "post-timestamp=9999999999", "pre-device=" + product_name, "serialno=" + serialno]))
+
+
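A hedged example of driving the helper directly (device names and serial number are made up); the resulting zip holds `recovery.wipe`, an empty `payload.bin`, and the BRICK metadata entry:

```
# Wipe the default partition set plus one extra partition, restricted to a
# single device. Comma-separated product names become a |-separated list.
CreateBrickOta("bramble,redfin", "brick-ota.zip",
               extra_wipe_partitions="/dev/block/by-name/oem",
               serialno="0A1B2C3D")
```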
+def main(argv):
+ parser = argparse.ArgumentParser(description='Android Brick OTA generator')
+ parser.add_argument('otafile', metavar='PAYLOAD', type=str,
+ help='The output OTA package file.')
+ parser.add_argument('--product', type=str,
+ help='The product name of the device, for example, bramble, redfin. This can be a comma separated list.', required=True)
+ parser.add_argument('--serialno', type=str,
+ help='The serial number of devices that are allowed to install this OTA package. This can be a comma separated list.')
+ parser.add_argument('--extra_wipe_partitions', type=str,
+ help='Additional partitions on device which should be wiped.')
+ parser.add_argument('-v', action="store_true",
+ help="Enable verbose logging", dest="verbose")
+ parser.add_argument('--package_key', type=str,
+ help='Paths to private key for signing payload')
+ parser.add_argument('--search_path', type=str,
+ help='Search path for framework/signapk.jar')
+ parser.add_argument('--private_key_suffix', type=str,
+ help='Suffix to be appended to package_key path', default=".pk8")
+ args = parser.parse_args(argv[1:])
+ if args.search_path:
+ common.OPTIONS.search_path = args.search_path
+ if args.verbose:
+ common.OPTIONS.verbose = args.verbose
+ CreateBrickOta(args.product, args.otafile,
+ args.extra_wipe_partitions, args.serialno)
+ if args.package_key:
+ common.OPTIONS.private_key_suffix = args.private_key_suffix
+ with tempfile.NamedTemporaryFile() as tmpfile:
+ common.SignFile(args.otafile, tmpfile.name,
+ args.package_key, None, whole_file=True)
+ shutil.copy(tmpfile.name, args.otafile)
+
+
+if __name__ == "__main__":
+ import sys
+ main(sys.argv)
diff --git a/tools/releasetools/fsverity_manifest_generator.py b/tools/releasetools/fsverity_manifest_generator.py
deleted file mode 100644
index b8184bc..0000000
--- a/tools/releasetools/fsverity_manifest_generator.py
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/usr/bin/env python3
-#
-# Copyright 2022 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-`fsverity_manifest_generator` generates build manifest APK file containing
-digests of target files. The APK file is signed so the manifest inside the APK
-can be trusted.
-"""
-
-import argparse
-import common
-import os
-import subprocess
-import sys
-from fsverity_digests_pb2 import FSVerityDigests
-
-HASH_ALGORITHM = 'sha256'
-
-def _digest(fsverity_path, input_file):
- cmd = [fsverity_path, 'digest', input_file]
- cmd.extend(['--compact'])
- cmd.extend(['--hash-alg', HASH_ALGORITHM])
- out = subprocess.check_output(cmd, universal_newlines=True).strip()
- return bytes(bytearray.fromhex(out))
-
-if __name__ == '__main__':
- p = argparse.ArgumentParser()
- p.add_argument(
- '--output',
- help='Path to the output manifest APK',
- required=True)
- p.add_argument(
- '--fsverity-path',
- help='path to the fsverity program',
- required=True)
- p.add_argument(
- '--aapt2-path',
- help='path to the aapt2 program',
- required=True)
- p.add_argument(
- '--min-sdk-version',
- help='minimum supported sdk version of the generated manifest apk',
- required=True)
- p.add_argument(
- '--version-code',
- help='version code for the generated manifest apk',
- required=True)
- p.add_argument(
- '--version-name',
- help='version name for the generated manifest apk',
- required=True)
- p.add_argument(
- '--framework-res',
- help='path to framework-res.apk',
- required=True)
- p.add_argument(
- '--apksigner-path',
- help='path to the apksigner program',
- required=True)
- p.add_argument(
- '--apk-key-path',
- help='path to the apk key',
- required=True)
- p.add_argument(
- '--apk-manifest-path',
- help='path to AndroidManifest.xml',
- required=True)
- p.add_argument(
- '--base-dir',
- help='directory to use as a relative root for the inputs',
- required=True)
- p.add_argument(
- 'inputs',
- nargs='+',
- help='input file for the build manifest')
- args = p.parse_args(sys.argv[1:])
-
- digests = FSVerityDigests()
- for f in sorted(args.inputs):
- # f is a full path for now; make it relative so it starts with {mount_point}/
- digest = digests.digests[os.path.relpath(f, args.base_dir)]
- digest.digest = _digest(args.fsverity_path, f)
- digest.hash_alg = HASH_ALGORITHM
-
- temp_dir = common.MakeTempDir()
-
- os.mkdir(os.path.join(temp_dir, "assets"))
- metadata_path = os.path.join(temp_dir, "assets", "build_manifest.pb")
- with open(metadata_path, "wb") as f:
- f.write(digests.SerializeToString())
-
- common.RunAndCheckOutput([args.aapt2_path, "link",
- "-A", os.path.join(temp_dir, "assets"),
- "-o", args.output,
- "--min-sdk-version", args.min_sdk_version,
- "--version-code", args.version_code,
- "--version-name", args.version_name,
- "-I", args.framework_res,
- "--manifest", args.apk_manifest_path])
- common.RunAndCheckOutput([args.apksigner_path, "sign", "--in", args.output,
- "--cert", args.apk_key_path + ".x509.pem",
- "--key", args.apk_key_path + ".pk8"])
diff --git a/tools/releasetools/images.py b/tools/releasetools/images.py
index a24148a..d06b979 100644
--- a/tools/releasetools/images.py
+++ b/tools/releasetools/images.py
@@ -149,7 +149,7 @@
class FileImage(Image):
"""An image wrapped around a raw image file."""
- def __init__(self, path, hashtree_info_generator=None):
+ def __init__(self, path):
self.path = path
self.blocksize = 4096
self._file_size = os.path.getsize(self.path)
@@ -166,10 +166,6 @@
self.generator_lock = threading.Lock()
- self.hashtree_info = None
- if hashtree_info_generator:
- self.hashtree_info = hashtree_info_generator.Generate(self)
-
zero_blocks = []
nonzero_blocks = []
reference = '\0' * self.blocksize
@@ -190,8 +186,6 @@
self.file_map["__ZERO"] = RangeSet(data=zero_blocks)
if nonzero_blocks:
self.file_map["__NONZERO"] = RangeSet(data=nonzero_blocks)
- if self.hashtree_info:
- self.file_map["__HASHTREE"] = self.hashtree_info.hashtree_range
def __del__(self):
self._file.close()
diff --git a/tools/releasetools/img_from_target_files b/tools/releasetools/img_from_target_files
deleted file mode 120000
index afaf24b..0000000
--- a/tools/releasetools/img_from_target_files
+++ /dev/null
@@ -1 +0,0 @@
-img_from_target_files.py
\ No newline at end of file
diff --git a/tools/releasetools/img_from_target_files.py b/tools/releasetools/img_from_target_files.py
index 76da89c..f8bdd81 100755
--- a/tools/releasetools/img_from_target_files.py
+++ b/tools/releasetools/img_from_target_files.py
@@ -173,7 +173,7 @@
logger.info('Writing super.img to archive...')
with zipfile.ZipFile(
output_file, 'a', compression=zipfile.ZIP_DEFLATED,
- allowZip64=not OPTIONS.sparse_userimages) as output_zip:
+ allowZip64=True) as output_zip:
common.ZipWrite(output_zip, super_file, 'super.img')
diff --git a/tools/releasetools/merge/OWNERS b/tools/releasetools/merge/OWNERS
index 9012e3a..0eddee2 100644
--- a/tools/releasetools/merge/OWNERS
+++ b/tools/releasetools/merge/OWNERS
@@ -1,3 +1,4 @@
-danielnorman@google.com
+deyaoren@google.com
+haamed@google.com
jgalmes@google.com
rseymour@google.com
diff --git a/tools/releasetools/merge/merge_dexopt.py b/tools/releasetools/merge/merge_dexopt.py
index 7bf9bd4..16182b5 100644
--- a/tools/releasetools/merge/merge_dexopt.py
+++ b/tools/releasetools/merge/merge_dexopt.py
@@ -164,6 +164,10 @@
'deapexer',
'--debugfs_path',
'debugfs_static',
+ '--blkid_path',
+ 'blkid',
+ '--fsckerofs_path',
+ 'fsck.erofs',
'extract',
apex,
apex_extract_dir,
diff --git a/tools/releasetools/merge/merge_meta.py b/tools/releasetools/merge/merge_meta.py
index 580b3ce..b61f039 100644
--- a/tools/releasetools/merge/merge_meta.py
+++ b/tools/releasetools/merge/merge_meta.py
@@ -29,6 +29,7 @@
import merge_utils
import sparse_img
import verity_utils
+from ota_utils import ParseUpdateEngineConfig
from common import ExternalError
@@ -52,20 +53,41 @@
MODULE_KEY_PATTERN = re.compile(r'name="(.+)\.(apex|apk)"')
+
+
+def MergeUpdateEngineConfig(input_metadir1, input_metadir2, merged_meta_dir):
+ UPDATE_ENGINE_CONFIG_NAME = "update_engine_config.txt"
+ config1_path = os.path.join(
+ input_metadir1, UPDATE_ENGINE_CONFIG_NAME)
+ config2_path = os.path.join(
+ input_metadir2, UPDATE_ENGINE_CONFIG_NAME)
+ config1 = ParseUpdateEngineConfig(config1_path)
+ config2 = ParseUpdateEngineConfig(config2_path)
+ # Copy the older config to the merged target files for maximum
+ # compatibility: update_engine in the system partition comes from the
+ # system side, but update_engine_sideload in recovery comes from the
+ # vendor side.
+ if config1 < config2:
+ shutil.copy(config1_path, os.path.join(
+ merged_meta_dir, UPDATE_ENGINE_CONFIG_NAME))
+ else:
+ shutil.copy(config2_path, os.path.join(
+ merged_meta_dir, UPDATE_ENGINE_CONFIG_NAME))
+
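The `config1 < config2` comparison is plain tuple ordering, so the major version dominates and the minor version breaks ties:

```
# (major, minor) tuples compare lexicographically; the older config wins.
assert (2, 8) < (2, 9) < (3, 0)
```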
+
def MergeMetaFiles(temp_dir, merged_dir):
"""Merges various files in META/*."""
framework_meta_dir = os.path.join(temp_dir, 'framework_meta', 'META')
- merge_utils.ExtractItems(
- input_zip=OPTIONS.framework_target_files,
+ merge_utils.CollectTargetFiles(
+ input_zipfile_or_dir=OPTIONS.framework_target_files,
output_dir=os.path.dirname(framework_meta_dir),
- extract_item_list=('META/*',))
+ item_list=('META/*',))
vendor_meta_dir = os.path.join(temp_dir, 'vendor_meta', 'META')
- merge_utils.ExtractItems(
- input_zip=OPTIONS.vendor_target_files,
+ merge_utils.CollectTargetFiles(
+ input_zipfile_or_dir=OPTIONS.vendor_target_files,
output_dir=os.path.dirname(vendor_meta_dir),
- extract_item_list=('META/*',))
+ item_list=('META/*',))
merged_meta_dir = os.path.join(merged_dir, 'META')
@@ -102,6 +124,11 @@
merged_meta_dir=merged_meta_dir,
file_name=file_name)
+ MergeUpdateEngineConfig(
+ framework_meta_dir,
+ vendor_meta_dir, merged_meta_dir,
+ )
+
# Write the now-finalized OPTIONS.merged_misc_info.
merge_utils.WriteSortedData(
data=OPTIONS.merged_misc_info,
diff --git a/tools/releasetools/merge/merge_target_files.py b/tools/releasetools/merge/merge_target_files.py
index c06fd4c..ba2b14f 100755
--- a/tools/releasetools/merge/merge_target_files.py
+++ b/tools/releasetools/merge/merge_target_files.py
@@ -26,9 +26,9 @@
Usage: merge_target_files [args]
- --framework-target-files framework-target-files-zip-archive
+ --framework-target-files framework-target-files-package
The input target files package containing framework bits. This is a zip
- archive.
+ archive or a directory.
--framework-item-list framework-item-list-file
The optional path to a newline-separated config file of items that
@@ -38,9 +38,9 @@
The optional path to a newline-separated config file of keys to
extract from the framework META/misc_info.txt file.
- --vendor-target-files vendor-target-files-zip-archive
+ --vendor-target-files vendor-target-files-package
The input target files package containing vendor bits. This is a zip
- archive.
+ archive or a directory.
--vendor-item-list vendor-item-list-file
The optional path to a newline-separated config file of items that
@@ -149,6 +149,35 @@
OPTIONS.vendor_dexpreopt_config = None
+def move_only_exists(source, destination):
+ """Judge whether the file exists and then move the file."""
+
+ if os.path.exists(source):
+ shutil.move(source, destination)
+
+
+def remove_file_if_exists(file_name):
+ """Remove the file if it exists and skip otherwise."""
+
+ try:
+ os.remove(file_name)
+ except FileNotFoundError:
+ pass
+
+
+def include_meta_in_list(item_list):
+ """Include all `META/*` files in the item list.
+
+ To ensure that `AddImagesToTargetFiles` can still be used with vendor item
+ lists that do not specify all of the required META/ files, those files should
+ be included by default. This preserves the backward compatibility of
+ `rebuild_image_with_sepolicy`.
+ """
+ if not item_list:
+ return None
+ return list(item_list) + ['META/*']
+
+
def create_merged_package(temp_dir):
"""Merges two target files packages into one target files structure.
@@ -156,18 +185,18 @@
Path to merged package under temp directory.
"""
# Extract "as is" items from the input framework and vendor partial target
- # files packages directly into the output temporary directory, since these items
- # do not need special case processing.
+ # files packages directly into the output temporary directory, since these
+ # items do not need special case processing.
output_target_files_temp_dir = os.path.join(temp_dir, 'output')
- merge_utils.ExtractItems(
- input_zip=OPTIONS.framework_target_files,
+ merge_utils.CollectTargetFiles(
+ input_zipfile_or_dir=OPTIONS.framework_target_files,
output_dir=output_target_files_temp_dir,
- extract_item_list=OPTIONS.framework_item_list)
- merge_utils.ExtractItems(
- input_zip=OPTIONS.vendor_target_files,
+ item_list=OPTIONS.framework_item_list)
+ merge_utils.CollectTargetFiles(
+ input_zipfile_or_dir=OPTIONS.vendor_target_files,
output_dir=output_target_files_temp_dir,
- extract_item_list=OPTIONS.vendor_item_list)
+ item_list=OPTIONS.vendor_item_list)
# Perform special case processing on META/* items.
# After this function completes successfully, all the files we need to create
@@ -203,8 +232,7 @@
If odm is present then odm is preferred -- otherwise vendor is used.
"""
partition = 'vendor'
- if os.path.exists(os.path.join(target_files_dir, 'ODM')) or os.path.exists(
- os.path.join(target_files_dir, 'IMAGES/odm.img')):
+ if os.path.exists(os.path.join(target_files_dir, 'ODM')):
partition = 'odm'
partition_img = '{}.img'.format(partition)
partition_map = '{}.map'.format(partition)
@@ -216,7 +244,8 @@
def copy_selinux_file(input_path, output_filename):
input_filename = os.path.join(target_files_dir, input_path)
if not os.path.exists(input_filename):
- input_filename = input_filename.replace('SYSTEM_EXT/', 'SYSTEM/system_ext/') \
+ input_filename = input_filename.replace('SYSTEM_EXT/',
+ 'SYSTEM/system_ext/') \
.replace('PRODUCT/', 'SYSTEM/product/')
if not os.path.exists(input_filename):
logger.info('Skipping copy_selinux_file for %s', input_filename)
@@ -238,7 +267,8 @@
if not OPTIONS.vendor_otatools:
# Remove the partition from the merged target-files archive. It will be
# rebuilt later automatically by generate_missing_images().
- os.remove(os.path.join(target_files_dir, 'IMAGES', partition_img))
+ remove_file_if_exists(
+ os.path.join(target_files_dir, 'IMAGES', partition_img))
return
# TODO(b/192253131): Remove the need for vendor_otatools by fixing
@@ -256,7 +286,10 @@
vendor_target_files_dir = common.MakeTempDir(
prefix='merge_target_files_vendor_target_files_')
common.UnzipToDir(OPTIONS.vendor_otatools, vendor_otatools_dir)
- common.UnzipToDir(OPTIONS.vendor_target_files, vendor_target_files_dir)
+ merge_utils.CollectTargetFiles(
+ input_zipfile_or_dir=OPTIONS.vendor_target_files,
+ output_dir=vendor_target_files_dir,
+ item_list=include_meta_in_list(OPTIONS.vendor_item_list))
# Copy the partition contents from the merged target-files archive to the
# vendor target-files archive.
@@ -267,7 +300,8 @@
symlinks=True)
# Delete then rebuild the partition.
- os.remove(os.path.join(vendor_target_files_dir, 'IMAGES', partition_img))
+ remove_file_if_exists(
+ os.path.join(vendor_target_files_dir, 'IMAGES', partition_img))
rebuild_partition_command = [
os.path.join(vendor_otatools_dir, 'bin', 'add_img_to_target_files'),
'--verbose',
@@ -286,7 +320,7 @@
shutil.move(
os.path.join(vendor_target_files_dir, 'IMAGES', partition_img),
os.path.join(target_files_dir, 'IMAGES', partition_img))
- shutil.move(
+ move_only_exists(
os.path.join(vendor_target_files_dir, 'IMAGES', partition_map),
os.path.join(target_files_dir, 'IMAGES', partition_map))
@@ -562,10 +596,10 @@
common.Usage(__doc__)
sys.exit(1)
- with zipfile.ZipFile(OPTIONS.framework_target_files, allowZip64=True) as fz:
- framework_namelist = fz.namelist()
- with zipfile.ZipFile(OPTIONS.vendor_target_files, allowZip64=True) as vz:
- vendor_namelist = vz.namelist()
+ framework_namelist = merge_utils.GetTargetFilesItems(
+ OPTIONS.framework_target_files)
+ vendor_namelist = merge_utils.GetTargetFilesItems(
+ OPTIONS.vendor_target_files)
if OPTIONS.framework_item_list:
OPTIONS.framework_item_list = common.LoadListFromFile(
diff --git a/tools/releasetools/merge/merge_utils.py b/tools/releasetools/merge/merge_utils.py
index f623ad2..b5683a8 100644
--- a/tools/releasetools/merge/merge_utils.py
+++ b/tools/releasetools/merge/merge_utils.py
@@ -49,28 +49,80 @@
common.UnzipToDir(input_zip, output_dir, filtered_extract_item_list)
-def CopyItems(from_dir, to_dir, patterns):
- """Similar to ExtractItems() except uses an input dir instead of zip."""
- file_paths = []
- for dirpath, _, filenames in os.walk(from_dir):
- file_paths.extend(
- os.path.relpath(path=os.path.join(dirpath, filename), start=from_dir)
- for filename in filenames)
+def CopyItems(from_dir, to_dir, copy_item_list):
+ """Copies the items in copy_item_list from source to destination directory.
- filtered_file_paths = set()
- for pattern in patterns:
- filtered_file_paths.update(fnmatch.filter(file_paths, pattern))
+ copy_item_list may include files and directories. Matched files are
+ copied and matched directories are created.
- for file_path in filtered_file_paths:
- original_file_path = os.path.join(from_dir, file_path)
- copied_file_path = os.path.join(to_dir, file_path)
- copied_file_dir = os.path.dirname(copied_file_path)
- if not os.path.exists(copied_file_dir):
- os.makedirs(copied_file_dir)
- if os.path.islink(original_file_path):
- os.symlink(os.readlink(original_file_path), copied_file_path)
+ Args:
+ from_dir: The source directory.
+ to_dir: The destination directory.
+ copy_item_list: Items to be copied.
+ """
+ item_paths = []
+ for root, dirs, files in os.walk(from_dir):
+ item_paths.extend(
+ os.path.relpath(path=os.path.join(root, item_name), start=from_dir)
+ for item_name in files + dirs)
+
+ filtered = set()
+ for pattern in copy_item_list:
+ filtered.update(fnmatch.filter(item_paths, pattern))
+
+ for item in filtered:
+ original_path = os.path.join(from_dir, item)
+ copied_path = os.path.join(to_dir, item)
+ copied_parent_path = os.path.dirname(copied_path)
+ if not os.path.exists(copied_parent_path):
+ os.makedirs(copied_parent_path)
+ if os.path.islink(original_path):
+ os.symlink(os.readlink(original_path), copied_path)
+ elif os.path.isdir(original_path):
+ if not os.path.exists(copied_path):
+ os.makedirs(copied_path)
else:
- shutil.copyfile(original_file_path, copied_file_path)
+ shutil.copyfile(original_path, copied_path)
+
+
+def GetTargetFilesItems(target_files_zipfile_or_dir):
+ """Gets a list of target files items."""
+ if zipfile.is_zipfile(target_files_zipfile_or_dir):
+ with zipfile.ZipFile(target_files_zipfile_or_dir, allowZip64=True) as fz:
+ return fz.namelist()
+ elif os.path.isdir(target_files_zipfile_or_dir):
+ item_list = []
+ for root, dirs, files in os.walk(target_files_zipfile_or_dir):
+ item_list.extend(
+ os.path.relpath(path=os.path.join(root, item),
+ start=target_files_zipfile_or_dir)
+ for item in dirs + files)
+ return item_list
+ else:
+ raise ValueError('Target files should be either a zipfile or a directory.')
+
+
+def CollectTargetFiles(input_zipfile_or_dir, output_dir, item_list=None):
+ """Extracts input zipfile or copy input directory to output directory.
+
+ Extracts the input zipfile if `input_zipfile_or_dir` is a zip archive, or
+ copies the items if `input_zipfile_or_dir` is a directory.
+
+ Args:
+ input_zipfile_or_dir: The input target files, could be either a zipfile to
+ extract or a directory to copy.
+ output_dir: The output directory into which the input files are
+ extracted or copied.
+ item_list: Files to be extracted or copied. Will extract or copy all files
+ if omitted.
+ """
+ patterns = item_list if item_list else ('*',)
+ if zipfile.is_zipfile(input_zipfile_or_dir):
+ ExtractItems(input_zipfile_or_dir, output_dir, patterns)
+ elif os.path.isdir(input_zipfile_or_dir):
+ CopyItems(input_zipfile_or_dir, output_dir, patterns)
+ else:
+ raise ValueError('Target files should be either a zipfile or a directory.')
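A usage sketch (paths are placeholders); both input forms land the same tree under `output_dir`:

```
# Zip archive input: entries matching the patterns are extracted.
CollectTargetFiles("framework-target_files.zip", "/tmp/framework",
                   item_list=('META/*',))
# Directory input: the same fnmatch patterns are copied instead.
CollectTargetFiles("path/to/extracted/target_files", "/tmp/framework",
                   item_list=('META/*',))
```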
def WriteSortedData(data, path):
@@ -100,20 +152,16 @@
has_error = False
# Check that partitions only come from one input.
- for partition in _FRAMEWORK_PARTITIONS.union(_VENDOR_PARTITIONS):
- image_path = 'IMAGES/{}.img'.format(partition.lower().replace('/', ''))
- in_framework = (
- any(item.startswith(partition) for item in OPTIONS.framework_item_list)
- or image_path in OPTIONS.framework_item_list)
- in_vendor = (
- any(item.startswith(partition) for item in OPTIONS.vendor_item_list) or
- image_path in OPTIONS.vendor_item_list)
- if in_framework and in_vendor:
- logger.error(
- 'Cannot extract items from %s for both the framework and vendor'
- ' builds. Please ensure only one merge config item list'
- ' includes %s.', partition, partition)
- has_error = True
+ framework_partitions = ItemListToPartitionSet(OPTIONS.framework_item_list)
+ vendor_partitions = ItemListToPartitionSet(OPTIONS.vendor_item_list)
+ from_both = framework_partitions.intersection(vendor_partitions)
+ if from_both:
+ logger.error(
+ 'Cannot extract items from the same partition in both the '
+ 'framework and vendor builds. Please ensure only one merge config '
+ 'item list (or inferred list) includes each partition: %s' %
+ ','.join(from_both))
+ has_error = True
if any([
key in OPTIONS.framework_misc_info_keys
@@ -131,7 +179,9 @@
# system partition). The following regex matches this and extracts the
# partition name.
-_PARTITION_ITEM_PATTERN = re.compile(r'^([A-Z_]+)/\*$')
+_PARTITION_ITEM_PATTERN = re.compile(r'^([A-Z_]+)/.*$')
+_IMAGE_PARTITION_PATTERN = re.compile(r'^IMAGES/(.*)\.img$')
+_PREBUILT_IMAGE_PARTITION_PATTERN = re.compile(r'^PREBUILT_IMAGES/(.*)\.img$')
def ItemListToPartitionSet(item_list):
@@ -154,62 +204,88 @@
partition_set = set()
for item in item_list:
- partition_match = _PARTITION_ITEM_PATTERN.search(item.strip())
- partition_tag = partition_match.group(
- 1).lower() if partition_match else None
-
- if partition_tag:
- partition_set.add(partition_tag)
+ for pattern in (_PARTITION_ITEM_PATTERN, _IMAGE_PARTITION_PATTERN, _PREBUILT_IMAGE_PARTITION_PATTERN):
+ partition_match = pattern.search(item.strip())
+ if partition_match:
+ partition = partition_match.group(1).lower()
+ # These directories in target-files are not actual partitions.
+ if partition not in ('meta', 'images', 'prebuilt_images'):
+ partition_set.add(partition)
return partition_set
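With the widened patterns, image entries now map to partitions as well; a small check of the expected behavior:

```
# 'IMAGES/odm.img' resolves to 'odm' via _IMAGE_PARTITION_PATTERN, while
# 'META/...' is filtered out as a non-partition directory.
items = ['IMAGES/odm.img', 'META/otakeys.txt', 'VENDOR/*']
assert ItemListToPartitionSet(items) == {'odm', 'vendor'}
```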
# Partitions that are grabbed from the framework partial build by default.
_FRAMEWORK_PARTITIONS = {
- 'system', 'product', 'system_ext', 'system_other', 'root', 'system_dlkm'
-}
-# Partitions that are grabbed from the vendor partial build by default.
-_VENDOR_PARTITIONS = {
- 'vendor', 'odm', 'oem', 'boot', 'vendor_boot', 'recovery',
- 'prebuilt_images', 'radio', 'data', 'vendor_dlkm', 'odm_dlkm'
+ 'system', 'product', 'system_ext', 'system_other', 'root', 'system_dlkm',
+ 'vbmeta_system', 'pvmfw'
}
def InferItemList(input_namelist, framework):
- item_list = []
+ item_set = set()
- # Some META items are grabbed from partial builds directly.
+ # Some META items are always grabbed from partial builds directly.
# Others are combined in merge_meta.py.
if framework:
- item_list.extend([
+ item_set.update([
'META/liblz4.so',
'META/postinstall_config.txt',
- 'META/update_engine_config.txt',
'META/zucchini_config.txt',
])
else: # vendor
- item_list.extend([
+ item_set.update([
'META/kernel_configs.txt',
'META/kernel_version.txt',
'META/otakeys.txt',
+ 'META/pack_radioimages.txt',
'META/releasetools.py',
- 'OTA/android-info.txt',
])
# Grab a set of items for the expected partitions in the partial build.
- for partition in (_FRAMEWORK_PARTITIONS if framework else _VENDOR_PARTITIONS):
- for namelist in input_namelist:
- if namelist.startswith('%s/' % partition.upper()):
- fs_config_prefix = '' if partition == 'system' else '%s_' % partition
- item_list.extend([
- '%s/*' % partition.upper(),
- 'IMAGES/%s.img' % partition,
- 'IMAGES/%s.map' % partition,
- 'META/%sfilesystem_config.txt' % fs_config_prefix,
- ])
- break
+ seen_partitions = []
+ for namelist in input_namelist:
+ if namelist.endswith('/'):
+ continue
- return sorted(item_list)
+ partition = namelist.split('/')[0].lower()
+
+ # META items are grabbed above, or merged later.
+ if partition == 'meta':
+ continue
+
+ if partition in ('images', 'prebuilt_images'):
+ image_partition, extension = os.path.splitext(os.path.basename(namelist))
+ if image_partition == 'vbmeta':
+ # Always regenerate vbmeta.img since it depends on hash information
+ # from both builds.
+ continue
+ if extension in ('.img', '.map'):
+ # Include image files in IMAGES/* if the partition comes from
+ # the expected set.
+ if (framework and image_partition in _FRAMEWORK_PARTITIONS) or (
+ not framework and image_partition not in _FRAMEWORK_PARTITIONS):
+ item_set.add(namelist)
+ elif not framework:
+ # Include all miscellaneous non-image files in IMAGES/* from
+ # the vendor build.
+ item_set.add(namelist)
+ continue
+
+ # Skip already-visited partitions.
+ if partition in seen_partitions:
+ continue
+ seen_partitions.append(partition)
+
+ if (framework and partition in _FRAMEWORK_PARTITIONS) or (
+ not framework and partition not in _FRAMEWORK_PARTITIONS):
+ fs_config_prefix = '' if partition == 'system' else '%s_' % partition
+ item_set.update([
+ '%s/*' % partition.upper(),
+ 'META/%sfilesystem_config.txt' % fs_config_prefix,
+ ])
+
+ return sorted(item_set)
def InferFrameworkMiscInfoKeys(input_namelist):
@@ -223,8 +299,8 @@
]
for partition in _FRAMEWORK_PARTITIONS:
- for namelist in input_namelist:
- if namelist.startswith('%s/' % partition.upper()):
+ for partition_dir in ('%s/' % partition.upper(), 'SYSTEM/%s/' % partition):
+ if partition_dir in input_namelist:
fs_type_prefix = '' if partition == 'system' else '%s_' % partition
keys.extend([
'avb_%s_hashtree_enable' % partition,
diff --git a/tools/releasetools/merge/test_merge_utils.py b/tools/releasetools/merge/test_merge_utils.py
index 1949050..b4c47ae 100644
--- a/tools/releasetools/merge/test_merge_utils.py
+++ b/tools/releasetools/merge/test_merge_utils.py
@@ -35,22 +35,27 @@
open(path, 'a').close()
return path
+ def createEmptyFolder(path):
+ os.makedirs(path)
+ return path
+
def createSymLink(source, dest):
os.symlink(source, dest)
return dest
def getRelPaths(start, filepaths):
return set(
- os.path.relpath(path=filepath, start=start) for filepath in filepaths)
+ os.path.relpath(path=filepath, start=start)
+ for filepath in filepaths)
input_dir = common.MakeTempDir()
output_dir = common.MakeTempDir()
expected_copied_items = []
actual_copied_items = []
- patterns = ['*.cpp', 'subdir/*.txt']
+ patterns = ['*.cpp', 'subdir/*.txt', 'subdir/empty_dir']
- # Create various files that we expect to get copied because they
- # match one of the patterns.
+ # Create various files and empty directories that we expect to get copied
+ # because they match one of the patterns.
expected_copied_items.extend([
createEmptyFile(os.path.join(input_dir, 'a.cpp')),
createEmptyFile(os.path.join(input_dir, 'b.cpp')),
@@ -58,6 +63,7 @@
createEmptyFile(os.path.join(input_dir, 'subdir', 'd.txt')),
createEmptyFile(
os.path.join(input_dir, 'subdir', 'subsubdir', 'e.txt')),
+ createEmptyFolder(os.path.join(input_dir, 'subdir', 'empty_dir')),
createSymLink('a.cpp', os.path.join(input_dir, 'a_link.cpp')),
])
# Create some more files that we expect to not get copied.
@@ -70,9 +76,13 @@
merge_utils.CopyItems(input_dir, output_dir, patterns)
# Assert the actual copied items match the ones we expected.
- for dirpath, _, filenames in os.walk(output_dir):
+ for root_dir, dirs, files in os.walk(output_dir):
actual_copied_items.extend(
- os.path.join(dirpath, filename) for filename in filenames)
+ os.path.join(root_dir, filename) for filename in files)
+ for dirname in dirs:
+ dir_path = os.path.join(root_dir, dirname)
+ if not os.listdir(dir_path):
+ actual_copied_items.append(dir_path)
self.assertEqual(
getRelPaths(output_dir, actual_copied_items),
getRelPaths(input_dir, expected_copied_items))
@@ -108,20 +118,27 @@
def test_ItemListToPartitionSet(self):
item_list = [
+ 'IMAGES/system_ext.img',
'META/apexkeys.txt',
'META/apkcerts.txt',
'META/filesystem_config.txt',
'PRODUCT/*',
'SYSTEM/*',
- 'SYSTEM_EXT/*',
+ 'SYSTEM/system_ext/*',
]
partition_set = merge_utils.ItemListToPartitionSet(item_list)
self.assertEqual(set(['product', 'system', 'system_ext']), partition_set)
def test_InferItemList_Framework(self):
zip_namelist = [
+ 'IMAGES/product.img',
+ 'IMAGES/product.map',
+ 'IMAGES/system.img',
+ 'IMAGES/system.map',
'SYSTEM/my_system_file',
'PRODUCT/my_product_file',
+ # Device does not use a separate system_ext partition.
+ 'SYSTEM/system_ext/system_ext_file',
]
item_list = merge_utils.InferItemList(zip_namelist, framework=True)
@@ -135,7 +152,6 @@
'META/liblz4.so',
'META/postinstall_config.txt',
'META/product_filesystem_config.txt',
- 'META/update_engine_config.txt',
'META/zucchini_config.txt',
'PRODUCT/*',
'SYSTEM/*',
@@ -147,37 +163,55 @@
zip_namelist = [
'VENDOR/my_vendor_file',
'ODM/my_odm_file',
+ 'IMAGES/odm.img',
+ 'IMAGES/odm.map',
+ 'IMAGES/vendor.img',
+ 'IMAGES/vendor.map',
+ 'IMAGES/my_custom_image.img',
+ 'IMAGES/my_custom_file.txt',
+ 'IMAGES/vbmeta.img',
+ 'CUSTOM_PARTITION/my_custom_file',
+ # Leftover framework pieces that shouldn't be grabbed.
+ 'IMAGES/system.img',
+ 'SYSTEM/system_file',
]
item_list = merge_utils.InferItemList(zip_namelist, framework=False)
expected_vendor_item_list = [
+ 'CUSTOM_PARTITION/*',
+ 'IMAGES/my_custom_file.txt',
+ 'IMAGES/my_custom_image.img',
'IMAGES/odm.img',
'IMAGES/odm.map',
'IMAGES/vendor.img',
'IMAGES/vendor.map',
+ 'META/custom_partition_filesystem_config.txt',
'META/kernel_configs.txt',
'META/kernel_version.txt',
'META/odm_filesystem_config.txt',
'META/otakeys.txt',
+ 'META/pack_radioimages.txt',
'META/releasetools.py',
'META/vendor_filesystem_config.txt',
'ODM/*',
- 'OTA/android-info.txt',
'VENDOR/*',
]
self.assertEqual(item_list, expected_vendor_item_list)
def test_InferFrameworkMiscInfoKeys(self):
zip_namelist = [
- 'SYSTEM/my_system_file',
- 'SYSTEM_EXT/my_system_ext_file',
+ 'PRODUCT/',
+ 'SYSTEM/',
+ 'SYSTEM/system_ext/',
]
keys = merge_utils.InferFrameworkMiscInfoKeys(zip_namelist)
expected_keys = [
'ab_update',
+ 'avb_product_add_hashtree_footer_args',
+ 'avb_product_hashtree_enable',
'avb_system_add_hashtree_footer_args',
'avb_system_ext_add_hashtree_footer_args',
'avb_system_ext_hashtree_enable',
@@ -186,10 +220,13 @@
'avb_vbmeta_system_algorithm',
'avb_vbmeta_system_key_path',
'avb_vbmeta_system_rollback_index_location',
+ 'building_product_image',
'building_system_ext_image',
'building_system_image',
'default_system_dev_certificate',
'fs_type',
+ 'product_disable_sparse',
+ 'product_fs_type',
'system_disable_sparse',
'system_ext_disable_sparse',
'system_ext_fs_type',
diff --git a/tools/releasetools/merge_ota.py b/tools/releasetools/merge_ota.py
new file mode 100644
index 0000000..441312c
--- /dev/null
+++ b/tools/releasetools/merge_ota.py
@@ -0,0 +1,308 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import logging
+import shlex
+import struct
+import sys
+import update_payload
+import tempfile
+import zipfile
+import os
+import care_map_pb2
+
+import common
+from typing import BinaryIO, List
+from update_metadata_pb2 import DeltaArchiveManifest, DynamicPartitionMetadata, DynamicPartitionGroup
+from ota_metadata_pb2 import OtaMetadata
+from update_payload import Payload
+
+from payload_signer import PayloadSigner
+from ota_utils import PayloadGenerator, METADATA_PROTO_NAME, FinalizeMetadata
+
+logger = logging.getLogger(__name__)
+
+CARE_MAP_ENTRY = "care_map.pb"
+APEX_INFO_ENTRY = "apex_info.pb"
+
+
+def WriteDataBlob(payload: Payload, outfp: BinaryIO, read_size=1024*64):
+ for i in range(0, payload.total_data_length, read_size):
+ blob = payload.ReadDataBlob(
+ i, min(i+read_size, payload.total_data_length)-i)
+ outfp.write(blob)
+
+
+def ConcatBlobs(payloads: List[Payload], outfp: BinaryIO):
+ for payload in payloads:
+ WriteDataBlob(payload, outfp)
+
+
+def TotalDataLength(partitions):
+ for partition in reversed(partitions):
+ for op in reversed(partition.operations):
+ if op.data_length > 0:
+ return op.data_offset + op.data_length
+ return 0
+
+
+def ExtendPartitionUpdates(partitions, new_partitions):
+ prefix_blob_length = TotalDataLength(partitions)
+ partitions.extend(new_partitions)
+ for part in partitions[-len(new_partitions):]:
+ for op in part.operations:
+ if op.HasField("data_length") and op.data_length != 0:
+ op.data_offset += prefix_blob_length
+
+
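The rebasing logic in numbers: if the blobs merged so far end at byte 100, every data-carrying op appended from the next payload shifts by 100 so it indexes into the concatenated blob. A sketch with bare objects standing in for the update_metadata_pb2 messages:

```
from types import SimpleNamespace

def _op(offset, length):
    # Stand-in for an InstallOperation; HasField always reports True here.
    return SimpleNamespace(data_offset=offset, data_length=length,
                           HasField=lambda name: True)

first = [SimpleNamespace(operations=[_op(0, 60), _op(60, 40)])]  # blob ends at 100
second = [SimpleNamespace(operations=[_op(0, 30)])]
assert TotalDataLength(first) == 100
ExtendPartitionUpdates(first, second)
assert first[-1].operations[0].data_offset == 100                # rebased
```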
+class DuplicatePartitionError(ValueError):
+ pass
+
+
+def MergeDynamicPartitionGroups(groups: List[DynamicPartitionGroup], new_groups: List[DynamicPartitionGroup]):
+ new_groups = {new_group.name: new_group for new_group in new_groups}
+ for group in groups:
+ if group.name not in new_groups:
+ continue
+ new_group = new_groups[group.name]
+ common_partitions = set(group.partition_names).intersection(
+ set(new_group.partition_names))
+ if len(common_partitions) != 0:
+ raise DuplicatePartitionError(
+ f"Old group and new group should not have any intersections, {group.partition_names}, {new_group.partition_names}, common partitions: {common_partitions}")
+ group.partition_names.extend(new_group.partition_names)
+ group.size = max(new_group.size, group.size)
+ del new_groups[group.name]
+ for new_group in new_groups.values():
+ groups.append(new_group)
+
+
+def MergeDynamicPartitionMetadata(metadata: DynamicPartitionMetadata, new_metadata: DynamicPartitionMetadata):
+ MergeDynamicPartitionGroups(metadata.groups, new_metadata.groups)
+ metadata.snapshot_enabled &= new_metadata.snapshot_enabled
+ metadata.vabc_enabled &= new_metadata.vabc_enabled
+ assert metadata.vabc_compression_param == new_metadata.vabc_compression_param, f"{metadata.vabc_compression_param} vs. {new_metadata.vabc_compression_param}"
+ metadata.cow_version = max(metadata.cow_version, new_metadata.cow_version)
+
+
+def MergeManifests(payloads: List[Payload]) -> DeltaArchiveManifest:
+ if len(payloads) == 0:
+ return None
+ if len(payloads) == 1:
+ return payloads[0].manifest
+
+ output_manifest = DeltaArchiveManifest()
+ output_manifest.block_size = payloads[0].manifest.block_size
+ output_manifest.partial_update = True
+ output_manifest.dynamic_partition_metadata.snapshot_enabled = payloads[
+ 0].manifest.dynamic_partition_metadata.snapshot_enabled
+ output_manifest.dynamic_partition_metadata.vabc_enabled = payloads[
+ 0].manifest.dynamic_partition_metadata.vabc_enabled
+ output_manifest.dynamic_partition_metadata.vabc_compression_param = payloads[
+ 0].manifest.dynamic_partition_metadata.vabc_compression_param
+ apex_info = {}
+ for payload in payloads:
+ manifest = payload.manifest
+ assert manifest.block_size == output_manifest.block_size
+ output_manifest.minor_version = max(
+ output_manifest.minor_version, manifest.minor_version)
+ output_manifest.max_timestamp = max(
+ output_manifest.max_timestamp, manifest.max_timestamp)
+ output_manifest.apex_info.extend(manifest.apex_info)
+ for apex in manifest.apex_info:
+ apex_info[apex.package_name] = apex
+ ExtendPartitionUpdates(output_manifest.partitions, manifest.partitions)
+ try:
+ MergeDynamicPartitionMetadata(
+ output_manifest.dynamic_partition_metadata, manifest.dynamic_partition_metadata)
+ except DuplicatePartitionError:
+ logger.error(
+ "OTA %s has duplicate partition with some of the previous OTAs", payload.name)
+ raise
+
+ for apex_name in sorted(apex_info.keys()):
+ output_manifest.apex_info.extend(apex_info[apex_name])
+
+ return output_manifest
+
+
+def MergePayloads(payloads: List[Payload]):
+ with tempfile.NamedTemporaryFile(prefix="payload_blob") as tmpfile:
+ ConcatBlobs(payloads, tmpfile)
+
+
+def MergeCareMap(paths: List[str]):
+ care_map = care_map_pb2.CareMap()
+ for path in paths:
+ with zipfile.ZipFile(path, "r", allowZip64=True) as zfp:
+ if CARE_MAP_ENTRY in zfp.namelist():
+ care_map_bytes = zfp.read(CARE_MAP_ENTRY)
+ partial_care_map = care_map_pb2.CareMap()
+ partial_care_map.ParseFromString(care_map_bytes)
+ care_map.partitions.extend(partial_care_map.partitions)
+ if len(care_map.partitions) == 0:
+ return b""
+ return care_map.SerializeToString()
+
+
+def WriteHeaderAndManifest(manifest: DeltaArchiveManifest, fp: BinaryIO):
+ __MAGIC = b"CrAU"
+ __MAJOR_VERSION = 2
+ manifest_bytes = manifest.SerializeToString()
+ fp.write(struct.pack(f">4sQQL", __MAGIC,
+ __MAJOR_VERSION, len(manifest_bytes), 0))
+ fp.write(manifest_bytes)
+
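The `>4sQQL` pack is the update_engine payload header: magic, 64-bit major version, 64-bit manifest size, and a 32-bit metadata signature size (zero here, filled in at signing). A round-trip check with a stand-in manifest:

```
import io
import struct

buf = io.BytesIO()
manifest_bytes = b"\x08\x01"  # placeholder for a serialized manifest
buf.write(struct.pack(">4sQQL", b"CrAU", 2, len(manifest_bytes), 0))
buf.write(manifest_bytes)

magic, major, size, sig_size = struct.unpack_from(">4sQQL", buf.getvalue())
assert (magic, major, size, sig_size) == (b"CrAU", 2, 2, 0)
```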
+
+def AddOtaMetadata(input_ota, metadata_ota, output_ota, package_key, pw):
+ with zipfile.ZipFile(metadata_ota, 'r') as zfp:
+ metadata = OtaMetadata()
+ metadata.ParseFromString(zfp.read(METADATA_PROTO_NAME))
+ FinalizeMetadata(metadata, input_ota, output_ota,
+ package_key=package_key, pw=pw)
+ return output_ota
+
+
+def CheckOutput(output_ota):
+ payload = update_payload.Payload(output_ota)
+ payload.CheckOpDataHash()
+
+
+def CheckDuplicatePartitions(payloads: List[Payload]):
+ partition_to_ota = {}
+ for payload in payloads:
+ for group in payload.manifest.dynamic_partition_metadata.groups:
+ for part in group.partition_names:
+ if part in partition_to_ota:
+ raise DuplicatePartitionError(
+ f"OTA {partition_to_ota[part].name} and {payload.name} have duplicating partition {part}")
+ partition_to_ota[part] = payload
+
+def ApexInfo(file_paths):
+ if len(file_paths) > 1:
+ logger.info("More than one target file specified, will ignore "
+ "apex_info.pb (if any)")
+ return None
+ with zipfile.ZipFile(file_paths[0], "r", allowZip64=True) as zfp:
+ if APEX_INFO_ENTRY in zfp.namelist():
+ apex_info_bytes = zfp.read(APEX_INFO_ENTRY)
+ return apex_info_bytes
+ return None
+
+def ParseSignerArgs(args):
+ if args is None:
+ return None
+ return shlex.split(args)
+
+def main(argv):
+ parser = argparse.ArgumentParser(description='Merge multiple partial OTAs')
+ parser.add_argument('packages', type=str, nargs='+',
+ help='Paths to OTA packages to merge')
+ parser.add_argument('--package_key', type=str,
+ help='Paths to private key for signing payload')
+ parser.add_argument('--search_path', type=str,
+ help='Search path for framework/signapk.jar')
+ parser.add_argument('--payload_signer', type=str,
+ help='Path to custom payload signer')
+ parser.add_argument('--payload_signer_args', type=ParseSignerArgs,
+ help='Arguments for payload signer if necessary')
+ parser.add_argument('--payload_signer_maximum_signature_size', type=str,
+ help='Maximum signature size (in bytes) that would be '
+ 'generated by the given payload signer')
+ parser.add_argument('--output', type=str,
+ help='Paths to output merged ota', required=True)
+ parser.add_argument('--metadata_ota', type=str,
+ help='Output zip will use build metadata from this OTA package; if unspecified, the last OTA package in the merge list is used')
+ parser.add_argument('--private_key_suffix', type=str,
+ help='Suffix to be appended to package_key path', default=".pk8")
+ parser.add_argument('-v', action="store_true", help="Enable verbose logging", dest="verbose")
+ parser.epilog = ('This tool can also be used to resign a regular OTA. For a single regular OTA, '
+ 'apex_info.pb will be written to output. When merging multiple OTAs, '
+ 'apex_info.pb will not be written.')
+ args = parser.parse_args(argv[1:])
+ file_paths = args.packages
+
+ common.OPTIONS.verbose = args.verbose
+ if args.verbose:
+ logger.setLevel(logging.INFO)
+
+ logger.info(args)
+ if args.search_path:
+ common.OPTIONS.search_path = args.search_path
+
+ metadata_ota = args.packages[-1]
+ if args.metadata_ota is not None:
+ metadata_ota = args.metadata_ota
+ assert os.path.exists(metadata_ota)
+
+ payloads = [Payload(path) for path in file_paths]
+
+ CheckDuplicatePartitions(payloads)
+
+ merged_manifest = MergeManifests(payloads)
+
+ # Get signing keys
+ key_passwords = common.GetKeyPasswords([args.package_key])
+
+ generator = PayloadGenerator()
+
+ apex_info_bytes = ApexInfo(file_paths)
+
+ with tempfile.NamedTemporaryFile() as unsigned_payload:
+ WriteHeaderAndManifest(merged_manifest, unsigned_payload)
+ ConcatBlobs(payloads, unsigned_payload)
+ unsigned_payload.flush()
+
+ generator = PayloadGenerator()
+ generator.payload_file = unsigned_payload.name
+ logger.info("Payload size: %d", os.path.getsize(generator.payload_file))
+
+ if args.package_key:
+ logger.info("Signing payload...")
+ # TODO: remove OPTIONS when no longer used as fallback in payload_signer
+ common.OPTIONS.payload_signer_args = None
+ common.OPTIONS.payload_signer_maximum_signature_size = None
+ signer = PayloadSigner(args.package_key, args.private_key_suffix,
+ key_passwords[args.package_key],
+ payload_signer=args.payload_signer,
+ payload_signer_args=args.payload_signer_args,
+ payload_signer_maximum_signature_size=args.payload_signer_maximum_signature_size)
+ generator.payload_file = unsigned_payload.name
+ generator.Sign(signer)
+
+ logger.info("Payload size: %d", os.path.getsize(generator.payload_file))
+
+ logger.info("Writing to %s", args.output)
+
+ key_passwords = common.GetKeyPasswords([args.package_key])
+ with tempfile.NamedTemporaryFile(prefix="signed_ota", suffix=".zip") as signed_ota:
+ with zipfile.ZipFile(signed_ota, "w") as zfp:
+ generator.WriteToZip(zfp)
+ care_map_bytes = MergeCareMap(args.packages)
+ if care_map_bytes:
+ common.ZipWriteStr(zfp, CARE_MAP_ENTRY, care_map_bytes)
+ if apex_info_bytes:
+ logger.info("Writing %s", APEX_INFO_ENTRY)
+ common.ZipWriteStr(zfp, APEX_INFO_ENTRY, apex_info_bytes)
+ AddOtaMetadata(signed_ota.name, metadata_ota,
+ args.output, args.package_key, key_passwords[args.package_key])
+ return 0
+
+
+if __name__ == '__main__':
+ logging.basicConfig()
+ sys.exit(main(sys.argv))
diff --git a/tools/releasetools/non_ab_ota.py b/tools/releasetools/non_ab_ota.py
index 9732cda..c4fd809 100644
--- a/tools/releasetools/non_ab_ota.py
+++ b/tools/releasetools/non_ab_ota.py
@@ -40,28 +40,20 @@
info_dict=source_info,
allow_shared_blocks=allow_shared_blocks)
- hashtree_info_generator = verity_utils.CreateHashtreeInfoGenerator(
- name, 4096, target_info)
partition_tgt = common.GetUserImage(name, OPTIONS.target_tmp, target_zip,
info_dict=target_info,
- allow_shared_blocks=allow_shared_blocks,
- hashtree_info_generator=hashtree_info_generator)
+ allow_shared_blocks=allow_shared_blocks)
# Check the first block of the source system partition for remount R/W only
# if the filesystem is ext4.
partition_source_info = source_info["fstab"]["/" + name]
check_first_block = partition_source_info.fs_type == "ext4"
- # Disable using imgdiff for squashfs. 'imgdiff -z' expects input files to be
- # in zip formats. However with squashfs, a) all files are compressed in LZ4;
- # b) the blocks listed in block map may not contain all the bytes for a
- # given file (because they're rounded to be 4K-aligned).
- partition_target_info = target_info["fstab"]["/" + name]
- disable_imgdiff = (partition_source_info.fs_type == "squashfs" or
- partition_target_info.fs_type == "squashfs")
+    # Disable imgdiff because it relies on zlib producing stable output
+    # across different versions, which is often not the case.
return common.BlockDifference(name, partition_tgt, partition_src,
check_first_block,
version=blockimgdiff_version,
- disable_imgdiff=disable_imgdiff)
+ disable_imgdiff=True)
if source_zip:
# See notes in common.GetUserImage()
@@ -285,7 +277,7 @@
needed_property_files = (
NonAbOtaPropertyFiles(),
)
- FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
+ FinalizeMetadata(metadata, staging_file, output_file, needed_property_files, package_key=OPTIONS.package_key)
def WriteBlockIncrementalOTAPackage(target_zip, source_zip, output_file):
@@ -412,7 +404,7 @@
if updating_boot:
boot_type, boot_device_expr = common.GetTypeAndDeviceExpr("/boot",
source_info)
- d = common.Difference(target_boot, source_boot)
+ d = common.Difference(target_boot, source_boot, "bsdiff")
_, _, d = d.ComputePatch()
if d is None:
include_full_boot = True
@@ -540,7 +532,7 @@
needed_property_files = (
NonAbOtaPropertyFiles(),
)
- FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
+ FinalizeMetadata(metadata, staging_file, output_file, needed_property_files, package_key=OPTIONS.package_key)
def GenerateNonAbOtaPackage(target_file, output_file, source_file=None):
diff --git a/tools/releasetools/ota_from_target_files b/tools/releasetools/ota_from_target_files
deleted file mode 120000
index 6755a90..0000000
--- a/tools/releasetools/ota_from_target_files
+++ /dev/null
@@ -1 +0,0 @@
-ota_from_target_files.py
\ No newline at end of file
diff --git a/tools/releasetools/ota_from_target_files.py b/tools/releasetools/ota_from_target_files.py
index d1b9358..e40256c 100755
--- a/tools/releasetools/ota_from_target_files.py
+++ b/tools/releasetools/ota_from_target_files.py
@@ -205,7 +205,8 @@
--partial "<PARTITION> [<PARTITION>[...]]"
Generate partial updates, overriding ab_partitions list with the given
- list.
+   list. Specify --partial= with an empty partition list to let the tooling
+   auto-detect the partial partition list.
--custom_image <custom_partition=custom_image>
Use the specified custom_image to update custom_partition when generating
@@ -244,6 +245,12 @@
--vabc_compression_param
Compression algorithm to be used for VABC. Available options: gz, brotli, none
+
+ --security_patch_level
+ Override the security patch level in target files
+
+ --max_threads
+    Specify the maximum number of threads allowed when generating an A/B OTA
"""
from __future__ import print_function
@@ -255,7 +262,6 @@
import re
import shlex
import shutil
-import struct
import subprocess
import sys
import zipfile
@@ -264,11 +270,12 @@
import common
import ota_utils
from ota_utils import (UNZIP_PATTERN, FinalizeMetadata, GetPackageMetadata,
- PropertyFiles, SECURITY_PATCH_LEVEL_PROP_NAME, GetZipEntryOffset)
-from common import IsSparseImage
+ PayloadGenerator, SECURITY_PATCH_LEVEL_PROP_NAME, CopyTargetFilesDir)
+from common import DoesInputFileContain, IsSparseImage
import target_files_diff
from check_target_files_vintf import CheckVintfIfTrebleEnabled
from non_ab_ota import GenerateNonAbOtaPackage
+from payload_signer import PayloadSigner
if sys.hexversion < 0x02070000:
print("Python 2.7 or newer is required.", file=sys.stderr)
@@ -316,6 +323,9 @@
OPTIONS.enable_zucchini = True
OPTIONS.enable_lz4diff = False
OPTIONS.vabc_compression_param = None
+OPTIONS.security_patch_level = None
+OPTIONS.max_threads = None
+
POSTINSTALL_CONFIG = 'META/postinstall_config.txt'
DYNAMIC_PARTITION_INFO = 'META/dynamic_partitions_info.txt'
@@ -335,207 +345,6 @@
'vendor', 'vendor_boot']
-class PayloadSigner(object):
- """A class that wraps the payload signing works.
-
- When generating a Payload, hashes of the payload and metadata files will be
- signed with the device key, either by calling an external payload signer or
- by calling openssl with the package key. This class provides a unified
- interface, so that callers can just call PayloadSigner.Sign().
-
- If an external payload signer has been specified (OPTIONS.payload_signer), it
- calls the signer with the provided args (OPTIONS.payload_signer_args). Note
- that the signing key should be provided as part of the payload_signer_args.
- Otherwise without an external signer, it uses the package key
- (OPTIONS.package_key) and calls openssl for the signing works.
- """
-
- def __init__(self):
- if OPTIONS.payload_signer is None:
- # Prepare the payload signing key.
- private_key = OPTIONS.package_key + OPTIONS.private_key_suffix
- pw = OPTIONS.key_passwords[OPTIONS.package_key]
-
- cmd = ["openssl", "pkcs8", "-in", private_key, "-inform", "DER"]
- cmd.extend(["-passin", "pass:" + pw] if pw else ["-nocrypt"])
- signing_key = common.MakeTempFile(prefix="key-", suffix=".key")
- cmd.extend(["-out", signing_key])
- common.RunAndCheckOutput(cmd, verbose=False)
-
- self.signer = "openssl"
- self.signer_args = ["pkeyutl", "-sign", "-inkey", signing_key,
- "-pkeyopt", "digest:sha256"]
- self.maximum_signature_size = self._GetMaximumSignatureSizeInBytes(
- signing_key)
- else:
- self.signer = OPTIONS.payload_signer
- self.signer_args = OPTIONS.payload_signer_args
- if OPTIONS.payload_signer_maximum_signature_size:
- self.maximum_signature_size = int(
- OPTIONS.payload_signer_maximum_signature_size)
- else:
- # The legacy config uses RSA2048 keys.
- logger.warning("The maximum signature size for payload signer is not"
- " set, default to 256 bytes.")
- self.maximum_signature_size = 256
-
- @staticmethod
- def _GetMaximumSignatureSizeInBytes(signing_key):
- out_signature_size_file = common.MakeTempFile("signature_size")
- cmd = ["delta_generator", "--out_maximum_signature_size_file={}".format(
- out_signature_size_file), "--private_key={}".format(signing_key)]
- common.RunAndCheckOutput(cmd)
- with open(out_signature_size_file) as f:
- signature_size = f.read().rstrip()
- logger.info("%s outputs the maximum signature size: %s", cmd[0],
- signature_size)
- return int(signature_size)
-
- def Sign(self, in_file):
- """Signs the given input file. Returns the output filename."""
- out_file = common.MakeTempFile(prefix="signed-", suffix=".bin")
- cmd = [self.signer] + self.signer_args + ['-in', in_file, '-out', out_file]
- common.RunAndCheckOutput(cmd)
- return out_file
-
-
-class Payload(object):
- """Manages the creation and the signing of an A/B OTA Payload."""
-
- PAYLOAD_BIN = 'payload.bin'
- PAYLOAD_PROPERTIES_TXT = 'payload_properties.txt'
- SECONDARY_PAYLOAD_BIN = 'secondary/payload.bin'
- SECONDARY_PAYLOAD_PROPERTIES_TXT = 'secondary/payload_properties.txt'
-
- def __init__(self, secondary=False):
- """Initializes a Payload instance.
-
- Args:
- secondary: Whether it's generating a secondary payload (default: False).
- """
- self.payload_file = None
- self.payload_properties = None
- self.secondary = secondary
-
- def _Run(self, cmd): # pylint: disable=no-self-use
- # Don't pipe (buffer) the output if verbose is set. Let
- # brillo_update_payload write to stdout/stderr directly, so its progress can
- # be monitored.
- if OPTIONS.verbose:
- common.RunAndCheckOutput(cmd, stdout=None, stderr=None)
- else:
- common.RunAndCheckOutput(cmd)
-
- def Generate(self, target_file, source_file=None, additional_args=None):
- """Generates a payload from the given target-files zip(s).
-
- Args:
- target_file: The filename of the target build target-files zip.
- source_file: The filename of the source build target-files zip; or None if
- generating a full OTA.
- additional_args: A list of additional args that should be passed to
- brillo_update_payload script; or None.
- """
- if additional_args is None:
- additional_args = []
-
- payload_file = common.MakeTempFile(prefix="payload-", suffix=".bin")
- cmd = ["brillo_update_payload", "generate",
- "--payload", payload_file,
- "--target_image", target_file]
- if source_file is not None:
- cmd.extend(["--source_image", source_file])
- if OPTIONS.disable_fec_computation:
- cmd.extend(["--disable_fec_computation", "true"])
- if OPTIONS.disable_verity_computation:
- cmd.extend(["--disable_verity_computation", "true"])
- cmd.extend(additional_args)
- self._Run(cmd)
-
- self.payload_file = payload_file
- self.payload_properties = None
-
- def Sign(self, payload_signer):
- """Generates and signs the hashes of the payload and metadata.
-
- Args:
- payload_signer: A PayloadSigner() instance that serves the signing work.
-
- Raises:
- AssertionError: On any failure when calling brillo_update_payload script.
- """
- assert isinstance(payload_signer, PayloadSigner)
-
- # 1. Generate hashes of the payload and metadata files.
- payload_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
- metadata_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
- cmd = ["brillo_update_payload", "hash",
- "--unsigned_payload", self.payload_file,
- "--signature_size", str(payload_signer.maximum_signature_size),
- "--metadata_hash_file", metadata_sig_file,
- "--payload_hash_file", payload_sig_file]
- self._Run(cmd)
-
- # 2. Sign the hashes.
- signed_payload_sig_file = payload_signer.Sign(payload_sig_file)
- signed_metadata_sig_file = payload_signer.Sign(metadata_sig_file)
-
- # 3. Insert the signatures back into the payload file.
- signed_payload_file = common.MakeTempFile(prefix="signed-payload-",
- suffix=".bin")
- cmd = ["brillo_update_payload", "sign",
- "--unsigned_payload", self.payload_file,
- "--payload", signed_payload_file,
- "--signature_size", str(payload_signer.maximum_signature_size),
- "--metadata_signature_file", signed_metadata_sig_file,
- "--payload_signature_file", signed_payload_sig_file]
- self._Run(cmd)
-
- # 4. Dump the signed payload properties.
- properties_file = common.MakeTempFile(prefix="payload-properties-",
- suffix=".txt")
- cmd = ["brillo_update_payload", "properties",
- "--payload", signed_payload_file,
- "--properties_file", properties_file]
- self._Run(cmd)
-
- if self.secondary:
- with open(properties_file, "a") as f:
- f.write("SWITCH_SLOT_ON_REBOOT=0\n")
-
- if OPTIONS.wipe_user_data:
- with open(properties_file, "a") as f:
- f.write("POWERWASH=1\n")
-
- self.payload_file = signed_payload_file
- self.payload_properties = properties_file
-
- def WriteToZip(self, output_zip):
- """Writes the payload to the given zip.
-
- Args:
- output_zip: The output ZipFile instance.
- """
- assert self.payload_file is not None
- assert self.payload_properties is not None
-
- if self.secondary:
- payload_arcname = Payload.SECONDARY_PAYLOAD_BIN
- payload_properties_arcname = Payload.SECONDARY_PAYLOAD_PROPERTIES_TXT
- else:
- payload_arcname = Payload.PAYLOAD_BIN
- payload_properties_arcname = Payload.PAYLOAD_PROPERTIES_TXT
-
- # Add the signed payload file and properties into the zip. In order to
- # support streaming, we pack them as ZIP_STORED. So these entries can be
- # read directly with the offset and length pairs.
- common.ZipWrite(output_zip, self.payload_file, arcname=payload_arcname,
- compress_type=zipfile.ZIP_STORED)
- common.ZipWrite(output_zip, self.payload_properties,
- arcname=payload_properties_arcname,
- compress_type=zipfile.ZIP_STORED)
-
-
def _LoadOemDicts(oem_source):
"""Returns the list of loaded OEM properties dict."""
if not oem_source:
@@ -547,113 +356,6 @@
return oem_dicts
-class StreamingPropertyFiles(PropertyFiles):
- """A subclass for computing the property-files for streaming A/B OTAs."""
-
- def __init__(self):
- super(StreamingPropertyFiles, self).__init__()
- self.name = 'ota-streaming-property-files'
- self.required = (
- # payload.bin and payload_properties.txt must exist.
- 'payload.bin',
- 'payload_properties.txt',
- )
- self.optional = (
- # apex_info.pb isn't directly used in the update flow
- 'apex_info.pb',
- # care_map is available only if dm-verity is enabled.
- 'care_map.pb',
- 'care_map.txt',
- # compatibility.zip is available only if target supports Treble.
- 'compatibility.zip',
- )
-
-
-class AbOtaPropertyFiles(StreamingPropertyFiles):
- """The property-files for A/B OTA that includes payload_metadata.bin info.
-
- Since P, we expose one more token (aka property-file), in addition to the ones
- for streaming A/B OTA, for a virtual entry of 'payload_metadata.bin'.
- 'payload_metadata.bin' is the header part of a payload ('payload.bin'), which
- doesn't exist as a separate ZIP entry, but can be used to verify if the
- payload can be applied on the given device.
-
- For backward compatibility, we keep both of the 'ota-streaming-property-files'
- and the newly added 'ota-property-files' in P. The new token will only be
- available in 'ota-property-files'.
- """
-
- def __init__(self):
- super(AbOtaPropertyFiles, self).__init__()
- self.name = 'ota-property-files'
-
- def _GetPrecomputed(self, input_zip):
- offset, size = self._GetPayloadMetadataOffsetAndSize(input_zip)
- return ['payload_metadata.bin:{}:{}'.format(offset, size)]
-
- @staticmethod
- def _GetPayloadMetadataOffsetAndSize(input_zip):
- """Computes the offset and size of the payload metadata for a given package.
-
- (From system/update_engine/update_metadata.proto)
- A delta update file contains all the deltas needed to update a system from
- one specific version to another specific version. The update format is
- represented by this struct pseudocode:
-
- struct delta_update_file {
- char magic[4] = "CrAU";
- uint64 file_format_version;
- uint64 manifest_size; // Size of protobuf DeltaArchiveManifest
-
- // Only present if format_version > 1:
- uint32 metadata_signature_size;
-
- // The Bzip2 compressed DeltaArchiveManifest
- char manifest[metadata_signature_size];
-
- // The signature of the metadata (from the beginning of the payload up to
- // this location, not including the signature itself). This is a
- // serialized Signatures message.
- char medatada_signature_message[metadata_signature_size];
-
- // Data blobs for files, no specific format. The specific offset
- // and length of each data blob is recorded in the DeltaArchiveManifest.
- struct {
- char data[];
- } blobs[];
-
- // These two are not signed:
- uint64 payload_signatures_message_size;
- char payload_signatures_message[];
- };
-
- 'payload-metadata.bin' contains all the bytes from the beginning of the
- payload, till the end of 'medatada_signature_message'.
- """
- payload_info = input_zip.getinfo('payload.bin')
- (payload_offset, payload_size) = GetZipEntryOffset(input_zip, payload_info)
-
- # Read the underlying raw zipfile at specified offset
- payload_fp = input_zip.fp
- payload_fp.seek(payload_offset)
- header_bin = payload_fp.read(24)
-
- # network byte order (big-endian)
- header = struct.unpack("!IQQL", header_bin)
-
- # 'CrAU'
- magic = header[0]
- assert magic == 0x43724155, "Invalid magic: {:x}, computed offset {}" \
- .format(magic, payload_offset)
-
- manifest_size = header[2]
- metadata_signature_size = header[3]
- metadata_total = 24 + manifest_size + metadata_signature_size
- assert metadata_total < payload_size
-
- return (payload_offset, metadata_total)
-
-
def ModifyVABCCompressionParam(content, algo):
""" Update update VABC Compression Param in dynamic_partitions_info.txt
Args:
@@ -726,6 +428,13 @@
slot will be used. This is to ensure that we always have valid boot, vbmeta,
bootloader images in the inactive slot.
+  After writing system_other to the inactive slot's system partition,
+  PackageManagerService will read `ro.cp_system_other_odex`, and set
+  `sys.cppreopt` to "requested". Then, according to
+  system/extras/cppreopts/cppreopts.rc, init will mount system_other at
+  /postinstall, and execute `cppreopts` to copy optimized APKs from
+  /postinstall to /data.
+
Args:
input_file: The input target-files.zip file.
skip_postinstall: Whether to skip copying the postinstall config file.
@@ -1030,30 +739,34 @@
Returns:
The filename of a target-files.zip which has renamed the custom images in
- the IMAGS/ to their partition names.
+ the IMAGES/ to their partition names.
"""
- # Use zip2zip to avoid extracting the zipfile.
+
+ # First pass: use zip2zip to copy the target files contents, excluding
+ # the "custom" images that will be replaced.
target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
cmd = ['zip2zip', '-i', input_file, '-o', target_file]
- with zipfile.ZipFile(input_file, allowZip64=True) as input_zip:
- namelist = input_zip.namelist()
-
- # Write {custom_image}.img as {custom_partition}.img.
+ images = {}
for custom_partition, custom_image in custom_images.items():
default_custom_image = '{}.img'.format(custom_partition)
if default_custom_image != custom_image:
- logger.info("Update custom partition '%s' with '%s'",
- custom_partition, custom_image)
- # Default custom image need to be deleted first.
- namelist.remove('IMAGES/{}'.format(default_custom_image))
- # IMAGES/{custom_image}.img:IMAGES/{custom_partition}.img.
- cmd.extend(['IMAGES/{}:IMAGES/{}'.format(custom_image,
- default_custom_image)])
+ src = 'IMAGES/' + custom_image
+ dst = 'IMAGES/' + default_custom_image
+ cmd.extend(['-x', dst])
+ images[dst] = src
- cmd.extend(['{}:{}'.format(name, name) for name in namelist])
common.RunAndCheckOutput(cmd)
+ # Second pass: write {custom_image}.img as {custom_partition}.img.
+ with zipfile.ZipFile(input_file, allowZip64=True) as input_zip:
+ with zipfile.ZipFile(target_file, 'a', allowZip64=True) as output_zip:
+ for dst, src in images.items():
+ data = input_zip.read(src)
+ logger.info("Update custom partition '%s'", dst)
+ common.ZipWriteStr(output_zip, dst, data)
+ output_zip.close()
+
return target_file
@@ -1073,7 +786,7 @@
for part in pre_partition_state:
if part.partition_name in partition_timestamps:
partition_timestamps[part.partition_name] = \
- max(part.version, partition_timestamps[part.partition_name])
+ max(part.version, partition_timestamps[part.partition_name])
return [
"--partition_timestamps",
",".join([key + ":" + val for (key, val)
@@ -1122,6 +835,12 @@
def GenerateAbOtaPackage(target_file, output_file, source_file=None):
"""Generates an Android OTA package that has A/B update payload."""
+ # If input target_files are directories, create a copy so that we can modify
+ # them directly
+ if os.path.isdir(target_file):
+ target_file = CopyTargetFilesDir(target_file)
+ if source_file is not None and os.path.isdir(source_file):
+ source_file = CopyTargetFilesDir(source_file)
# Stage the output zip package for package signing.
if not OPTIONS.no_signing:
staging_file = common.MakeTempFile(suffix='.zip')
@@ -1132,6 +851,7 @@
allowZip64=True)
if source_file is not None:
+ source_file = ota_utils.ExtractTargetFiles(source_file)
assert "ab_partitions" in OPTIONS.source_info_dict, \
"META/ab_partitions.txt is required for ab_update."
assert "ab_partitions" in OPTIONS.target_info_dict, \
@@ -1146,12 +866,45 @@
logger.info("Either source or target does not support VABC, disabling.")
OPTIONS.disable_vabc = True
+  # Virtual AB Compression was introduced in Android S.
+ # Later, we backported VABC to Android R. But verity support was not
+ # backported, so if VABC is used and we are on Android R, disable
+ # verity computation.
+ if not OPTIONS.disable_vabc and source_info.is_android_r:
+ OPTIONS.disable_verity_computation = True
+ OPTIONS.disable_fec_computation = True
+
else:
assert "ab_partitions" in OPTIONS.info_dict, \
"META/ab_partitions.txt is required for ab_update."
target_info = common.BuildInfo(OPTIONS.info_dict, OPTIONS.oem_dicts)
source_info = None
+ if OPTIONS.partial == []:
+ logger.info(
+ "Automatically detecting partial partition list from input target files.")
+ OPTIONS.partial = target_info.get(
+ "partial_ota_update_partitions_list").split()
+ assert OPTIONS.partial, "Input target_file does not have"
+ " partial_ota_update_partitions_list defined, failed to auto detect partial"
+ " partition list. Please specify list of partitions to update manually via"
+ " --partial=a,b,c , or generate a complete OTA by removing the --partial"
+ " option"
+ OPTIONS.partial.sort()
+ if source_info:
+ source_partial_list = source_info.get(
+ "partial_ota_update_partitions_list").split()
+ if source_partial_list:
+ source_partial_list.sort()
+ if source_partial_list != OPTIONS.partial:
+ logger.warning("Source build and target build have different partial partition lists. Source: %s, target: %s, taking the intersection.",
+ source_partial_list, OPTIONS.partial)
+      OPTIONS.partial = list(
+          set(OPTIONS.partial) & set(source_partial_list))
+ OPTIONS.partial.sort()
+ logger.info("Automatically deduced partial partition list: %s",
+ OPTIONS.partial)
+
if target_info.vendor_suppressed_vabc:
logger.info("Vendor suppressed VABC. Disabling")
OPTIONS.disable_vabc = True
@@ -1163,6 +916,24 @@
(source_info is not None and not source_info.is_vabc_xor):
logger.info("VABC XOR Not supported, disabling")
OPTIONS.enable_vabc_xor = False
+
+ if OPTIONS.vabc_compression_param == "none":
+ logger.info(
+ "VABC Compression algorithm is set to 'none', disabling VABC xor")
+ OPTIONS.enable_vabc_xor = False
+
+ if OPTIONS.enable_vabc_xor:
+ api_level = -1
+ if source_info is not None:
+ api_level = source_info.vendor_api_level
+ if api_level == -1:
+ api_level = target_info.vendor_api_level
+
+ # XOR is only supported on T and higher.
+ if api_level < 33:
+ logger.error("VABC XOR not supported on this vendor, disabling")
+ OPTIONS.enable_vabc_xor = False
+
additional_args = []
# Prepare custom images.
@@ -1177,23 +948,22 @@
elif OPTIONS.partial:
target_file = GetTargetFilesZipForPartialUpdates(target_file,
OPTIONS.partial)
- additional_args += ["--is_partial_update", "true"]
elif OPTIONS.vabc_compression_param:
target_file = GetTargetFilesZipForCustomVABCCompression(
target_file, OPTIONS.vabc_compression_param)
elif OPTIONS.skip_postinstall:
target_file = GetTargetFilesZipWithoutPostinstallConfig(target_file)
# Target_file may have been modified, reparse ab_partitions
- with zipfile.ZipFile(target_file, allowZip64=True) as zfp:
- target_info.info_dict['ab_partitions'] = zfp.read(
- AB_PARTITIONS).decode().strip().split("\n")
+ target_info.info_dict['ab_partitions'] = common.ReadFromInputFile(target_file,
+ AB_PARTITIONS).strip().split("\n")
CheckVintfIfTrebleEnabled(target_file, target_info)
# Metadata to comply with Android OTA package format.
metadata = GetPackageMetadata(target_info, source_info)
# Generate payload.
- payload = Payload()
+ payload = PayloadGenerator(
+ wipe_user_data=OPTIONS.wipe_user_data, minor_version=OPTIONS.force_minor_version, is_partial_update=OPTIONS.partial)
partition_timestamps_flags = []
# Enforce a max timestamp this payload can be applied on top of.
@@ -1209,9 +979,21 @@
metadata.postcondition.partition_state)
if not ota_utils.IsZucchiniCompatible(source_file, target_file):
+ logger.warning(
+ "Builds doesn't support zucchini, or source/target don't have compatible zucchini versions. Disabling zucchini.")
OPTIONS.enable_zucchini = False
- additional_args += ["--enable_zucchini",
+ security_patch_level = target_info.GetBuildProp(
+ "ro.build.version.security_patch")
+ if OPTIONS.security_patch_level is not None:
+ security_patch_level = OPTIONS.security_patch_level
+
+ additional_args += ["--security_patch_level", security_patch_level]
+
+ if OPTIONS.max_threads:
+ additional_args += ["--max_threads", OPTIONS.max_threads]
+
+ additional_args += ["--enable_zucchini=" +
str(OPTIONS.enable_zucchini).lower()]
if not ota_utils.IsLz4diffCompatible(source_file, target_file):
@@ -1219,7 +1001,7 @@
"Source build doesn't support lz4diff, or source/target don't have compatible lz4diff versions. Disabling lz4diff.")
OPTIONS.enable_lz4diff = False
- additional_args += ["--enable_lz4diff",
+ additional_args += ["--enable_lz4diff=" +
str(OPTIONS.enable_lz4diff).lower()]
if source_file and OPTIONS.enable_lz4diff:
@@ -1235,20 +1017,13 @@
additional_args += ["--erofs_compression_param", erofs_compression_param]
if OPTIONS.disable_vabc:
- additional_args += ["--disable_vabc", "true"]
+ additional_args += ["--disable_vabc=true"]
if OPTIONS.enable_vabc_xor:
- additional_args += ["--enable_vabc_xor", "true"]
- if OPTIONS.force_minor_version:
- additional_args += ["--force_minor_version", OPTIONS.force_minor_version]
+ additional_args += ["--enable_vabc_xor=true"]
if OPTIONS.compressor_types:
additional_args += ["--compressor_types", OPTIONS.compressor_types]
additional_args += ["--max_timestamp", max_timestamp]
- if SupportsMainlineGkiUpdates(source_file):
- logger.warning(
- "Detected build with mainline GKI, include full boot image.")
- additional_args.extend(["--full_boot", "true"])
-
payload.Generate(
target_file,
source_file,
@@ -1256,7 +1031,10 @@
)
# Sign the payload.
- payload_signer = PayloadSigner()
+ pw = OPTIONS.key_passwords[OPTIONS.package_key]
+ payload_signer = PayloadSigner(
+ OPTIONS.package_key, OPTIONS.private_key_suffix,
+ pw, OPTIONS.payload_signer)
payload.Sign(payload_signer)
# Write the payload into output zip.
@@ -1269,7 +1047,7 @@
# building an incremental OTA. See the comments for "--include_secondary".
secondary_target_file = GetTargetFilesZipForSecondaryImages(
target_file, OPTIONS.skip_postinstall)
- secondary_payload = Payload(secondary=True)
+ secondary_payload = PayloadGenerator(secondary=True)
secondary_payload.Generate(secondary_target_file,
additional_args=["--max_timestamp",
max_timestamp])
@@ -1278,16 +1056,13 @@
# If dm-verity is supported for the device, copy contents of care_map
# into A/B OTA package.
- target_zip = zipfile.ZipFile(target_file, "r", allowZip64=True)
- if (target_info.get("verity") == "true" or
- target_info.get("avb_enable") == "true"):
- care_map_list = [x for x in ["care_map.pb", "care_map.txt"] if
- "META/" + x in target_zip.namelist()]
-
+ if target_info.get("avb_enable") == "true":
# Adds care_map if either the protobuf format or the plain text one exists.
- if care_map_list:
- care_map_name = care_map_list[0]
- care_map_data = target_zip.read("META/" + care_map_name)
+ for care_map_name in ["care_map.pb", "care_map.txt"]:
+ if not DoesInputFileContain(target_file, "META/" + care_map_name):
+ continue
+ care_map_data = common.ReadBytesFromInputFile(
+ target_file, "META/" + care_map_name)
# In order to support streaming, care_map needs to be packed as
# ZIP_STORED.
common.ZipWriteStr(output_zip, care_map_name, care_map_data,
@@ -1297,26 +1072,17 @@
# Add the source apex version for incremental ota updates, and write the
# result apex info to the ota package.
- ota_apex_info = ota_utils.ConstructOtaApexInfo(target_zip, source_file)
+ ota_apex_info = ota_utils.ConstructOtaApexInfo(target_file, source_file)
if ota_apex_info is not None:
common.ZipWriteStr(output_zip, "apex_info.pb", ota_apex_info,
compress_type=zipfile.ZIP_STORED)
- common.ZipClose(target_zip)
-
# We haven't written the metadata entry yet, which will be handled in
# FinalizeMetadata().
common.ZipClose(output_zip)
- # AbOtaPropertyFiles intends to replace StreamingPropertyFiles, as it covers
- # all the info of the latter. However, system updaters and OTA servers need to
- # take time to switch to the new flag. We keep both of the flags for
- # P-timeframe, and will remove StreamingPropertyFiles in later release.
- needed_property_files = (
- AbOtaPropertyFiles(),
- StreamingPropertyFiles(),
- )
- FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
+ FinalizeMetadata(metadata, staging_file, output_file,
+ package_key=OPTIONS.package_key)
def main(argv):
@@ -1399,9 +1165,12 @@
elif o == "--boot_variable_file":
OPTIONS.boot_variable_file = a
elif o == "--partial":
- partitions = a.split()
- if not partitions:
- raise ValueError("Cannot parse partitions in {}".format(a))
+ if a:
+ partitions = a.split()
+ if not partitions:
+ raise ValueError("Cannot parse partitions in {}".format(a))
+ else:
+ partitions = []
OPTIONS.partial = partitions
elif o == "--custom_image":
custom_partition, custom_image = a.split("=")
@@ -1428,6 +1197,14 @@
OPTIONS.enable_lz4diff = a.lower() != "false"
elif o == "--vabc_compression_param":
OPTIONS.vabc_compression_param = a.lower()
+ elif o == "--security_patch_level":
+ OPTIONS.security_patch_level = a
+ elif o in ("--max_threads"):
+ if a.isdigit():
+ OPTIONS.max_threads = a
+ else:
+ raise ValueError("Cannot parse value %r for option %r - only "
+ "integers are allowed." % (a, o))
else:
return False
return True
@@ -1478,14 +1255,15 @@
"enable_zucchini=",
"enable_lz4diff=",
"vabc_compression_param=",
+ "security_patch_level=",
+ "max_threads=",
], extra_option_handler=option_handler)
+ common.InitLogging()
if len(args) != 2:
common.Usage(__doc__)
sys.exit(1)
- common.InitLogging()
-
# Load the build info dicts from the zip directly or the extracted input
# directory. We don't need to unzip the entire target-files zips, because they
# won't be needed for A/B OTAs (brillo_update_payload does that on its own).
@@ -1496,7 +1274,7 @@
if OPTIONS.extracted_input is not None:
OPTIONS.info_dict = common.LoadInfoDict(OPTIONS.extracted_input)
else:
- OPTIONS.info_dict = ParseInfoDict(args[0])
+ OPTIONS.info_dict = common.LoadInfoDict(args[0])
if OPTIONS.wipe_user_data:
if not OPTIONS.vabc_downgrade:
@@ -1602,6 +1380,15 @@
source_spl = source_build_prop.GetProp(SECURITY_PATCH_LEVEL_PROP_NAME)
target_spl = target_build_prop.GetProp(SECURITY_PATCH_LEVEL_PROP_NAME)
is_spl_downgrade = target_spl < source_spl
+ if is_spl_downgrade and target_build_prop.GetProp("ro.build.tags") == "release-keys":
+ raise common.ExternalError(
+ "Target security patch level {} is older than source SPL {} "
+ "A locked bootloader will reject SPL downgrade no matter "
+ "what(even if data wipe is done), so SPL downgrade on any "
+ "release-keys build is not allowed.".format(target_spl, source_spl))
+
+ logger.info("SPL downgrade on %s",
+ target_build_prop.GetProp("ro.build.tags"))
if is_spl_downgrade and not OPTIONS.spl_downgrade and not OPTIONS.downgrade:
raise common.ExternalError(
"Target security patch level {} is older than source SPL {} applying "
diff --git a/tools/releasetools/ota_metadata_pb2.py b/tools/releasetools/ota_metadata_pb2.py
index 2552464..012d9ab 100644
--- a/tools/releasetools/ota_metadata_pb2.py
+++ b/tools/releasetools/ota_metadata_pb2.py
@@ -19,8 +19,8 @@
name='ota_metadata.proto',
package='build.tools.releasetools',
syntax='proto3',
- serialized_options=_b('H\003'),
- serialized_pb=_b('\n\x12ota_metadata.proto\x12\x18\x62uild.tools.releasetools\"X\n\x0ePartitionState\x12\x16\n\x0epartition_name\x18\x01 \x01(\t\x12\x0e\n\x06\x64\x65vice\x18\x02 \x03(\t\x12\r\n\x05\x62uild\x18\x03 \x03(\t\x12\x0f\n\x07version\x18\x04 \x01(\t\"\xce\x01\n\x0b\x44\x65viceState\x12\x0e\n\x06\x64\x65vice\x18\x01 \x03(\t\x12\r\n\x05\x62uild\x18\x02 \x03(\t\x12\x19\n\x11\x62uild_incremental\x18\x03 \x01(\t\x12\x11\n\ttimestamp\x18\x04 \x01(\x03\x12\x11\n\tsdk_level\x18\x05 \x01(\t\x12\x1c\n\x14security_patch_level\x18\x06 \x01(\t\x12\x41\n\x0fpartition_state\x18\x07 \x03(\x0b\x32(.build.tools.releasetools.PartitionState\"c\n\x08\x41pexInfo\x12\x14\n\x0cpackage_name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\x03\x12\x15\n\ris_compressed\x18\x03 \x01(\x08\x12\x19\n\x11\x64\x65\x63ompressed_size\x18\x04 \x01(\x03\"E\n\x0c\x41pexMetadata\x12\x35\n\tapex_info\x18\x01 \x03(\x0b\x32\".build.tools.releasetools.ApexInfo\"\xf8\x03\n\x0bOtaMetadata\x12;\n\x04type\x18\x01 \x01(\x0e\x32-.build.tools.releasetools.OtaMetadata.OtaType\x12\x0c\n\x04wipe\x18\x02 \x01(\x08\x12\x11\n\tdowngrade\x18\x03 \x01(\x08\x12P\n\x0eproperty_files\x18\x04 \x03(\x0b\x32\x38.build.tools.releasetools.OtaMetadata.PropertyFilesEntry\x12;\n\x0cprecondition\x18\x05 \x01(\x0b\x32%.build.tools.releasetools.DeviceState\x12<\n\rpostcondition\x18\x06 \x01(\x0b\x32%.build.tools.releasetools.DeviceState\x12#\n\x1bretrofit_dynamic_partitions\x18\x07 \x01(\x08\x12\x16\n\x0erequired_cache\x18\x08 \x01(\x03\x12\x15\n\rspl_downgrade\x18\t \x01(\x08\x1a\x34\n\x12PropertyFilesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"4\n\x07OtaType\x12\x0b\n\x07UNKNOWN\x10\x00\x12\x06\n\x02\x41\x42\x10\x01\x12\t\n\x05\x42LOCK\x10\x02\x12\t\n\x05\x42RICK\x10\x03\x42\x02H\x03\x62\x06proto3')
+ serialized_options=_b('\n\013android.otaB\022OtaPackageMetadataH\003'),
+ serialized_pb=_b('\n\x12ota_metadata.proto\x12\x18\x62uild.tools.releasetools\"X\n\x0ePartitionState\x12\x16\n\x0epartition_name\x18\x01 \x01(\t\x12\x0e\n\x06\x64\x65vice\x18\x02 \x03(\t\x12\r\n\x05\x62uild\x18\x03 \x03(\t\x12\x0f\n\x07version\x18\x04 \x01(\t\"\xce\x01\n\x0b\x44\x65viceState\x12\x0e\n\x06\x64\x65vice\x18\x01 \x03(\t\x12\r\n\x05\x62uild\x18\x02 \x03(\t\x12\x19\n\x11\x62uild_incremental\x18\x03 \x01(\t\x12\x11\n\ttimestamp\x18\x04 \x01(\x03\x12\x11\n\tsdk_level\x18\x05 \x01(\t\x12\x1c\n\x14security_patch_level\x18\x06 \x01(\t\x12\x41\n\x0fpartition_state\x18\x07 \x03(\x0b\x32(.build.tools.releasetools.PartitionState\"{\n\x08\x41pexInfo\x12\x14\n\x0cpackage_name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\x03\x12\x15\n\ris_compressed\x18\x03 \x01(\x08\x12\x19\n\x11\x64\x65\x63ompressed_size\x18\x04 \x01(\x03\x12\x16\n\x0esource_version\x18\x05 \x01(\x03\"E\n\x0c\x41pexMetadata\x12\x35\n\tapex_info\x18\x01 \x03(\x0b\x32\".build.tools.releasetools.ApexInfo\"\xf8\x03\n\x0bOtaMetadata\x12;\n\x04type\x18\x01 \x01(\x0e\x32-.build.tools.releasetools.OtaMetadata.OtaType\x12\x0c\n\x04wipe\x18\x02 \x01(\x08\x12\x11\n\tdowngrade\x18\x03 \x01(\x08\x12P\n\x0eproperty_files\x18\x04 \x03(\x0b\x32\x38.build.tools.releasetools.OtaMetadata.PropertyFilesEntry\x12;\n\x0cprecondition\x18\x05 \x01(\x0b\x32%.build.tools.releasetools.DeviceState\x12<\n\rpostcondition\x18\x06 \x01(\x0b\x32%.build.tools.releasetools.DeviceState\x12#\n\x1bretrofit_dynamic_partitions\x18\x07 \x01(\x08\x12\x16\n\x0erequired_cache\x18\x08 \x01(\x03\x12\x15\n\rspl_downgrade\x18\t \x01(\x08\x1a\x34\n\x12PropertyFilesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"4\n\x07OtaType\x12\x0b\n\x07UNKNOWN\x10\x00\x12\x06\n\x02\x41\x42\x10\x01\x12\t\n\x05\x42LOCK\x10\x02\x12\t\n\x05\x42RICK\x10\x03\x42#\n\x0b\x61ndroid.otaB\x12OtaPackageMetadataH\x03\x62\x06proto3')
)
@@ -50,8 +50,8 @@
],
containing_type=None,
serialized_options=None,
- serialized_start=972,
- serialized_end=1024,
+ serialized_start=996,
+ serialized_end=1048,
)
_sym_db.RegisterEnumDescriptor(_OTAMETADATA_OTATYPE)
@@ -216,6 +216,13 @@
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
+ _descriptor.FieldDescriptor(
+ name='source_version', full_name='build.tools.releasetools.ApexInfo.source_version', index=4,
+ number=5, type=3, cpp_type=2, label=1,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
@@ -229,7 +236,7 @@
oneofs=[
],
serialized_start=347,
- serialized_end=446,
+ serialized_end=470,
)
@@ -259,8 +266,8 @@
extension_ranges=[],
oneofs=[
],
- serialized_start=448,
- serialized_end=517,
+ serialized_start=472,
+ serialized_end=541,
)
@@ -297,8 +304,8 @@
extension_ranges=[],
oneofs=[
],
- serialized_start=918,
- serialized_end=970,
+ serialized_start=942,
+ serialized_end=994,
)
_OTAMETADATA = _descriptor.Descriptor(
@@ -384,8 +391,8 @@
extension_ranges=[],
oneofs=[
],
- serialized_start=520,
- serialized_end=1024,
+ serialized_start=544,
+ serialized_end=1048,
)
_DEVICESTATE.fields_by_name['partition_state'].message_type = _PARTITIONSTATE
diff --git a/tools/releasetools/ota_utils.py b/tools/releasetools/ota_utils.py
index 5d403dc..3291d56 100644
--- a/tools/releasetools/ota_utils.py
+++ b/tools/releasetools/ota_utils.py
@@ -16,13 +16,19 @@
import itertools
import logging
import os
+import shutil
import struct
import zipfile
import ota_metadata_pb2
-from common import (ZipDelete, ZipClose, OPTIONS, MakeTempFile,
+import common
+import fnmatch
+from common import (ZipDelete, DoesInputFileContain, ReadBytesFromInputFile, OPTIONS, MakeTempFile,
ZipWriteStr, BuildInfo, LoadDictionaryFromFile,
- SignFile, PARTITIONS_WITH_BUILD_PROP, PartitionBuildProps)
+ SignFile, PARTITIONS_WITH_BUILD_PROP, PartitionBuildProps,
+ GetRamdiskFormat, ParseUpdateEngineConfig)
+from payload_signer import PayloadSigner
+
logger = logging.getLogger(__name__)
@@ -39,11 +45,12 @@
METADATA_NAME = 'META-INF/com/android/metadata'
METADATA_PROTO_NAME = 'META-INF/com/android/metadata.pb'
-UNZIP_PATTERN = ['IMAGES/*', 'META/*', 'OTA/*', 'RADIO/*']
+UNZIP_PATTERN = ['IMAGES/*', 'META/*', 'OTA/*',
+ 'RADIO/*', '*/build.prop', '*/default.prop', '*/build.default', "*/etc/vintf/*"]
SECURITY_PATCH_LEVEL_PROP_NAME = "ro.build.version.security_patch"
-def FinalizeMetadata(metadata, input_file, output_file, needed_property_files):
+def FinalizeMetadata(metadata, input_file, output_file, needed_property_files=None, package_key=None, pw=None):
"""Finalizes the metadata and signs an A/B OTA package.
In order to stream an A/B OTA package, we need 'ota-streaming-property-files'
@@ -61,32 +68,42 @@
input_file: The input ZIP filename that doesn't contain the package METADATA
entry yet.
output_file: The final output ZIP filename.
- needed_property_files: The list of PropertyFiles' to be generated.
+    needed_property_files: The list of PropertyFiles to be generated. Defaults
+        to [AbOtaPropertyFiles(), StreamingPropertyFiles()].
+ package_key: The key used to sign this OTA package
+ pw: Password for the package_key
"""
+ no_signing = package_key is None
+
+ if needed_property_files is None:
+ # AbOtaPropertyFiles intends to replace StreamingPropertyFiles, as it covers
+ # all the info of the latter. However, system updaters and OTA servers need to
+ # take time to switch to the new flag. We keep both of the flags for
+ # P-timeframe, and will remove StreamingPropertyFiles in later release.
+ needed_property_files = (
+ AbOtaPropertyFiles(),
+ StreamingPropertyFiles(),
+ )
def ComputeAllPropertyFiles(input_file, needed_property_files):
# Write the current metadata entry with placeholders.
- with zipfile.ZipFile(input_file, allowZip64=True) as input_zip:
+ with zipfile.ZipFile(input_file, 'r', allowZip64=True) as input_zip:
for property_files in needed_property_files:
metadata.property_files[property_files.name] = property_files.Compute(
input_zip)
- namelist = input_zip.namelist()
- if METADATA_NAME in namelist or METADATA_PROTO_NAME in namelist:
- ZipDelete(input_file, [METADATA_NAME, METADATA_PROTO_NAME])
- output_zip = zipfile.ZipFile(input_file, 'a', allowZip64=True)
- WriteMetadata(metadata, output_zip)
- ZipClose(output_zip)
+ ZipDelete(input_file, [METADATA_NAME, METADATA_PROTO_NAME], True)
+ with zipfile.ZipFile(input_file, 'a', allowZip64=True) as output_zip:
+ WriteMetadata(metadata, output_zip)
- if OPTIONS.no_signing:
+ if no_signing:
return input_file
prelim_signing = MakeTempFile(suffix='.zip')
- SignOutput(input_file, prelim_signing)
+ SignOutput(input_file, prelim_signing, package_key, pw)
return prelim_signing
def FinalizeAllPropertyFiles(prelim_signing, needed_property_files):
- with zipfile.ZipFile(prelim_signing, allowZip64=True) as prelim_signing_zip:
+ with zipfile.ZipFile(prelim_signing, 'r', allowZip64=True) as prelim_signing_zip:
for property_files in needed_property_files:
metadata.property_files[property_files.name] = property_files.Finalize(
prelim_signing_zip,
@@ -112,15 +129,17 @@
# Replace the METADATA entry.
ZipDelete(prelim_signing, [METADATA_NAME, METADATA_PROTO_NAME])
- output_zip = zipfile.ZipFile(prelim_signing, 'a', allowZip64=True)
- WriteMetadata(metadata, output_zip)
- ZipClose(output_zip)
+ with zipfile.ZipFile(prelim_signing, 'a', allowZip64=True) as output_zip:
+ WriteMetadata(metadata, output_zip)
# Re-sign the package after updating the metadata entry.
- if OPTIONS.no_signing:
- output_file = prelim_signing
+ if no_signing:
+ logger.info(f"Signing disabled for output file {output_file}")
+ shutil.copy(prelim_signing, output_file)
else:
- SignOutput(prelim_signing, output_file)
+ logger.info(
+ f"Signing the output file {output_file} with key {package_key}")
+ SignOutput(prelim_signing, output_file, package_key, pw)
# Reopen the final signed zip to double check the streaming metadata.
with zipfile.ZipFile(output_file, allowZip64=True) as output_zip:
@@ -371,15 +390,18 @@
for partition in PARTITIONS_WITH_BUILD_PROP:
partition_prop_key = "{}.build.prop".format(partition)
input_file = info_dict[partition_prop_key].input_file
+ ramdisk = GetRamdiskFormat(info_dict)
if isinstance(input_file, zipfile.ZipFile):
with zipfile.ZipFile(input_file.filename, allowZip64=True) as input_zip:
info_dict[partition_prop_key] = \
PartitionBuildProps.FromInputFile(input_zip, partition,
- placeholder_values)
+ placeholder_values,
+ ramdisk)
else:
info_dict[partition_prop_key] = \
PartitionBuildProps.FromInputFile(input_file, partition,
- placeholder_values)
+ placeholder_values,
+ ramdisk)
info_dict["build.prop"] = info_dict["system.build.prop"]
build_info_set.add(BuildInfo(info_dict, default_build_info.oem_dicts))
@@ -570,7 +592,7 @@
else:
tokens.append(ComputeEntryOffsetSize(METADATA_NAME))
if METADATA_PROTO_NAME in zip_file.namelist():
- tokens.append(ComputeEntryOffsetSize(METADATA_PROTO_NAME))
+ tokens.append(ComputeEntryOffsetSize(METADATA_PROTO_NAME))
return ','.join(tokens)
@@ -592,10 +614,13 @@
return []
-def SignOutput(temp_zip_name, output_zip_name):
- pw = OPTIONS.key_passwords[OPTIONS.package_key]
+def SignOutput(temp_zip_name, output_zip_name, package_key=None, pw=None):
+ if package_key is None:
+ package_key = OPTIONS.package_key
+ if pw is None and OPTIONS.key_passwords:
+ pw = OPTIONS.key_passwords[package_key]
- SignFile(temp_zip_name, output_zip_name, OPTIONS.package_key, pw,
+ SignFile(temp_zip_name, output_zip_name, package_key, pw,
whole_file=True)
@@ -603,12 +628,10 @@
"""If applicable, add the source version to the apex info."""
def _ReadApexInfo(input_zip):
- if "META/apex_info.pb" not in input_zip.namelist():
+ if not DoesInputFileContain(input_zip, "META/apex_info.pb"):
logger.warning("target_file doesn't contain apex_info.pb %s", input_zip)
return None
-
- with input_zip.open("META/apex_info.pb", "r") as zfp:
- return zfp.read()
+ return ReadBytesFromInputFile(input_zip, "META/apex_info.pb")
target_apex_string = _ReadApexInfo(target_zip)
# Return early if the target apex info doesn't exist or is empty.
@@ -619,8 +642,7 @@
if not source_file:
return target_apex_string
- with zipfile.ZipFile(source_file, "r", allowZip64=True) as source_zip:
- source_apex_string = _ReadApexInfo(source_zip)
+ source_apex_string = _ReadApexInfo(source_file)
if not source_apex_string:
return target_apex_string
@@ -689,10 +711,355 @@
if entry in zfp.namelist():
return zfp.read(entry).decode()
else:
- entry_path = os.path.join(entry, path)
+ entry_path = os.path.join(path, entry)
if os.path.exists(entry_path):
with open(entry_path, "r") as fp:
return fp.read()
- else:
- return ""
- return ReadEntry(source_file, _ZUCCHINI_CONFIG_ENTRY_NAME) == ReadEntry(target_file, _ZUCCHINI_CONFIG_ENTRY_NAME)
+ return False
+ sourceEntry = ReadEntry(source_file, _ZUCCHINI_CONFIG_ENTRY_NAME)
+ targetEntry = ReadEntry(target_file, _ZUCCHINI_CONFIG_ENTRY_NAME)
+ return sourceEntry and targetEntry and sourceEntry == targetEntry
+
+
+def ExtractTargetFiles(path: str):
+ if os.path.isdir(path):
+ logger.info("target files %s is already extracted", path)
+ return path
+ extracted_dir = common.MakeTempDir("target_files")
+ common.UnzipToDir(path, extracted_dir, UNZIP_PATTERN + [""])
+ return extracted_dir
+
+
+def LocatePartitionPath(target_files_dir: str, partition: str, allow_empty):
+ path = os.path.join(target_files_dir, "RADIO", partition + ".img")
+ if os.path.exists(path):
+ return path
+ path = os.path.join(target_files_dir, "IMAGES", partition + ".img")
+ if os.path.exists(path):
+ return path
+ if allow_empty:
+ return ""
+ raise common.ExternalError(
+ "Partition {} not found in target files {}".format(partition, target_files_dir))
+
+
+def GetPartitionImages(target_files_dir: str, ab_partitions, allow_empty=True):
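+  """Returns a colon-separated list of image paths, one per A/B partition."""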
+ assert os.path.isdir(target_files_dir)
+ return ":".join([LocatePartitionPath(target_files_dir, partition, allow_empty) for partition in ab_partitions])
+
+
+def LocatePartitionMap(target_files_dir: str, partition: str):
+ path = os.path.join(target_files_dir, "RADIO", partition + ".map")
+ if os.path.exists(path):
+ return path
+ return ""
+
+
+def GetPartitionMaps(target_files_dir: str, ab_partitions):
+ assert os.path.isdir(target_files_dir)
+ return ":".join([LocatePartitionMap(target_files_dir, partition) for partition in ab_partitions])
+
+
+class PayloadGenerator(object):
+ """Manages the creation and the signing of an A/B OTA Payload."""
+
+ PAYLOAD_BIN = 'payload.bin'
+ PAYLOAD_PROPERTIES_TXT = 'payload_properties.txt'
+ SECONDARY_PAYLOAD_BIN = 'secondary/payload.bin'
+ SECONDARY_PAYLOAD_PROPERTIES_TXT = 'secondary/payload_properties.txt'
+
+ def __init__(self, secondary=False, wipe_user_data=False, minor_version=None, is_partial_update=False):
+ """Initializes a Payload instance.
+
+ Args:
+ secondary: Whether it's generating a secondary payload (default: False).
+ """
+ self.payload_file = None
+ self.payload_properties = None
+ self.secondary = secondary
+ self.wipe_user_data = wipe_user_data
+ self.minor_version = minor_version
+ self.is_partial_update = is_partial_update
+
+ def _Run(self, cmd): # pylint: disable=no-self-use
+ # Don't pipe (buffer) the output if verbose is set. Let
+ # brillo_update_payload write to stdout/stderr directly, so its progress can
+ # be monitored.
+ if OPTIONS.verbose:
+ common.RunAndCheckOutput(cmd, stdout=None, stderr=None)
+ else:
+ common.RunAndCheckOutput(cmd)
+
+ def Generate(self, target_file, source_file=None, additional_args=None):
+ """Generates a payload from the given target-files zip(s).
+
+ Args:
+ target_file: The filename of the target build target-files zip.
+ source_file: The filename of the source build target-files zip; or None if
+ generating a full OTA.
+ additional_args: A list of additional args that should be passed to
+ delta_generator binary; or None.
+ """
+ if additional_args is None:
+ additional_args = []
+
+ payload_file = common.MakeTempFile(prefix="payload-", suffix=".bin")
+ target_dir = ExtractTargetFiles(target_file)
+ cmd = ["delta_generator",
+ "--out_file", payload_file]
+ with open(os.path.join(target_dir, "META", "ab_partitions.txt")) as fp:
+ ab_partitions = fp.read().strip().split("\n")
+ cmd.extend(["--partition_names", ":".join(ab_partitions)])
+ cmd.extend(
+ ["--new_partitions", GetPartitionImages(target_dir, ab_partitions, False)])
+ cmd.extend(
+ ["--new_mapfiles", GetPartitionMaps(target_dir, ab_partitions)])
+ if source_file is not None:
+ source_dir = ExtractTargetFiles(source_file)
+ cmd.extend(
+ ["--old_partitions", GetPartitionImages(source_dir, ab_partitions, True)])
+ cmd.extend(
+ ["--old_mapfiles", GetPartitionMaps(source_dir, ab_partitions)])
+
+ if OPTIONS.disable_fec_computation:
+ cmd.extend(["--disable_fec_computation=true"])
+ if OPTIONS.disable_verity_computation:
+ cmd.extend(["--disable_verity_computation=true"])
+ postinstall_config = os.path.join(
+ target_dir, "META", "postinstall_config.txt")
+
+ if os.path.exists(postinstall_config):
+ cmd.extend(["--new_postinstall_config_file", postinstall_config])
+ dynamic_partition_info = os.path.join(
+ target_dir, "META", "dynamic_partitions_info.txt")
+
+ if os.path.exists(dynamic_partition_info):
+ cmd.extend(["--dynamic_partition_info_file", dynamic_partition_info])
+
+ major_version, minor_version = ParseUpdateEngineConfig(
+ os.path.join(target_dir, "META", "update_engine_config.txt"))
+ if source_file:
+ major_version, minor_version = ParseUpdateEngineConfig(
+ os.path.join(source_dir, "META", "update_engine_config.txt"))
+ if self.minor_version:
+ minor_version = self.minor_version
+ cmd.extend(["--major_version", str(major_version)])
+ if source_file is not None or self.is_partial_update:
+ cmd.extend(["--minor_version", str(minor_version)])
+ if self.is_partial_update:
+ cmd.extend(["--is_partial_update=true"])
+ cmd.extend(additional_args)
+ self._Run(cmd)
+
+ self.payload_file = payload_file
+ self.payload_properties = None
+
+ def Sign(self, payload_signer):
+ """Generates and signs the hashes of the payload and metadata.
+
+ Args:
+ payload_signer: A PayloadSigner() instance that serves the signing work.
+
+ Raises:
+ AssertionError: On any failure when calling brillo_update_payload script.
+ """
+ assert isinstance(payload_signer, PayloadSigner)
+
+ # 1. Generate hashes of the payload and metadata files.
+ payload_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
+ metadata_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
+ cmd = ["brillo_update_payload", "hash",
+ "--unsigned_payload", self.payload_file,
+ "--signature_size", str(payload_signer.maximum_signature_size),
+ "--metadata_hash_file", metadata_sig_file,
+ "--payload_hash_file", payload_sig_file]
+ self._Run(cmd)
+
+ # 2. Sign the hashes.
+ signed_payload_sig_file = payload_signer.SignHashFile(payload_sig_file)
+ signed_metadata_sig_file = payload_signer.SignHashFile(metadata_sig_file)
+
+ # 3. Insert the signatures back into the payload file.
+ signed_payload_file = common.MakeTempFile(prefix="signed-payload-",
+ suffix=".bin")
+ cmd = ["brillo_update_payload", "sign",
+ "--unsigned_payload", self.payload_file,
+ "--payload", signed_payload_file,
+ "--signature_size", str(payload_signer.maximum_signature_size),
+ "--metadata_signature_file", signed_metadata_sig_file,
+ "--payload_signature_file", signed_payload_sig_file]
+ self._Run(cmd)
+
+ self.payload_file = signed_payload_file
+
+ def WriteToZip(self, output_zip):
+ """Writes the payload to the given zip.
+
+ Args:
+ output_zip: The output ZipFile instance.
+ """
+ assert self.payload_file is not None
+ # 4. Dump the signed payload properties.
+ properties_file = common.MakeTempFile(prefix="payload-properties-",
+ suffix=".txt")
+ cmd = ["brillo_update_payload", "properties",
+ "--payload", self.payload_file,
+ "--properties_file", properties_file]
+ self._Run(cmd)
+
+ if self.secondary:
+ with open(properties_file, "a") as f:
+ f.write("SWITCH_SLOT_ON_REBOOT=0\n")
+
+ if self.wipe_user_data:
+ with open(properties_file, "a") as f:
+ f.write("POWERWASH=1\n")
+
+ self.payload_properties = properties_file
+
+ if self.secondary:
+ payload_arcname = PayloadGenerator.SECONDARY_PAYLOAD_BIN
+ payload_properties_arcname = PayloadGenerator.SECONDARY_PAYLOAD_PROPERTIES_TXT
+ else:
+ payload_arcname = PayloadGenerator.PAYLOAD_BIN
+ payload_properties_arcname = PayloadGenerator.PAYLOAD_PROPERTIES_TXT
+
+ # Add the signed payload file and properties into the zip. In order to
+ # support streaming, we pack them as ZIP_STORED. So these entries can be
+ # read directly with the offset and length pairs.
+ common.ZipWrite(output_zip, self.payload_file, arcname=payload_arcname,
+ compress_type=zipfile.ZIP_STORED)
+ common.ZipWrite(output_zip, self.payload_properties,
+ arcname=payload_properties_arcname,
+ compress_type=zipfile.ZIP_STORED)
+
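+
+# Hedged usage sketch (illustration only): the typical flow, mirroring
+# GenerateAbOtaPackage, assuming PayloadSigner's positional signature
+# (package_key, private_key_suffix, pw) as used elsewhere in this change.
+def _ExamplePayloadFlow(target_file, output_path, package_key, password):
+  generator = PayloadGenerator()
+  generator.Generate(target_file)
+  generator.Sign(PayloadSigner(package_key, ".pk8", password))
+  with zipfile.ZipFile(output_path, "w", allowZip64=True) as zfp:
+    generator.WriteToZip(zfp)
+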
+
+class StreamingPropertyFiles(PropertyFiles):
+ """A subclass for computing the property-files for streaming A/B OTAs."""
+
+ def __init__(self):
+ super(StreamingPropertyFiles, self).__init__()
+ self.name = 'ota-streaming-property-files'
+ self.required = (
+ # payload.bin and payload_properties.txt must exist.
+ 'payload.bin',
+ 'payload_properties.txt',
+ )
+ self.optional = (
+ # apex_info.pb isn't directly used in the update flow
+ 'apex_info.pb',
+ # care_map is available only if dm-verity is enabled.
+ 'care_map.pb',
+ 'care_map.txt',
+ # compatibility.zip is available only if target supports Treble.
+ 'compatibility.zip',
+ )
+
+
+class AbOtaPropertyFiles(StreamingPropertyFiles):
+ """The property-files for A/B OTA that includes payload_metadata.bin info.
+
+ Since P, we expose one more token (aka property-file), in addition to the ones
+ for streaming A/B OTA, for a virtual entry of 'payload_metadata.bin'.
+ 'payload_metadata.bin' is the header part of a payload ('payload.bin'), which
+ doesn't exist as a separate ZIP entry, but can be used to verify if the
+ payload can be applied on the given device.
+
+ For backward compatibility, we keep both of the 'ota-streaming-property-files'
+ and the newly added 'ota-property-files' in P. The new token will only be
+ available in 'ota-property-files'.
+ """
+
+ def __init__(self):
+ super(AbOtaPropertyFiles, self).__init__()
+ self.name = 'ota-property-files'
+
+ def _GetPrecomputed(self, input_zip):
+ offset, size = self._GetPayloadMetadataOffsetAndSize(input_zip)
+ return ['payload_metadata.bin:{}:{}'.format(offset, size)]
+
+ @staticmethod
+ def _GetPayloadMetadataOffsetAndSize(input_zip):
+ """Computes the offset and size of the payload metadata for a given package.
+
+ (From system/update_engine/update_metadata.proto)
+ A delta update file contains all the deltas needed to update a system from
+ one specific version to another specific version. The update format is
+ represented by this struct pseudocode:
+
+ struct delta_update_file {
+ char magic[4] = "CrAU";
+ uint64 file_format_version;
+ uint64 manifest_size; // Size of protobuf DeltaArchiveManifest
+
+ // Only present if format_version > 1:
+ uint32 metadata_signature_size;
+
+ // The Bzip2 compressed DeltaArchiveManifest
+        char manifest[manifest_size];
+
+ // The signature of the metadata (from the beginning of the payload up to
+ // this location, not including the signature itself). This is a
+ // serialized Signatures message.
+        char metadata_signature_message[metadata_signature_size];
+
+ // Data blobs for files, no specific format. The specific offset
+ // and length of each data blob is recorded in the DeltaArchiveManifest.
+ struct {
+ char data[];
+ } blobs[];
+
+ // These two are not signed:
+ uint64 payload_signatures_message_size;
+ char payload_signatures_message[];
+ };
+
+    'payload_metadata.bin' contains all the bytes from the beginning of the
+    payload to the end of 'metadata_signature_message'.
+ """
+ payload_info = input_zip.getinfo('payload.bin')
+ (payload_offset, payload_size) = GetZipEntryOffset(input_zip, payload_info)
+
+    # Read the payload header from the underlying raw file at the computed
+    # offset.
+ payload_fp = input_zip.fp
+ payload_fp.seek(payload_offset)
+ header_bin = payload_fp.read(24)
+
+    # Network byte order (big-endian): magic (4 bytes), file_format_version
+    # (8 bytes), manifest_size (8 bytes), metadata_signature_size (4 bytes).
+ header = struct.unpack("!IQQL", header_bin)
+
+ # 'CrAU'
+ magic = header[0]
+ assert magic == 0x43724155, "Invalid magic: {:x}, computed offset {}" \
+ .format(magic, payload_offset)
+
+ manifest_size = header[2]
+ metadata_signature_size = header[3]
+ metadata_total = 24 + manifest_size + metadata_signature_size
+ assert metadata_total <= payload_size
+
+ return (payload_offset, metadata_total)
+
+
+def Fnmatch(filename, patterns):
+  """Returns True if filename matches any of the given fnmatch patterns."""
+  return any(fnmatch.fnmatch(filename, pat) for pat in patterns)
+
+
+def CopyTargetFilesDir(input_dir):
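+  """Copies a subset of a target_files dir into a new temp dir.
+
+  IMAGES/ and META/ are copied wholesale. In addition, build prop files and
+  VINTF metadata that match UNZIP_PATTERN are copied with their relative
+  paths preserved. Returns the path to the new temp dir.
+  """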
+ output_dir = common.MakeTempDir("target_files")
+ shutil.copytree(os.path.join(input_dir, "IMAGES"), os.path.join(
+ output_dir, "IMAGES"), dirs_exist_ok=True)
+ shutil.copytree(os.path.join(input_dir, "META"), os.path.join(
+ output_dir, "META"), dirs_exist_ok=True)
+ for (dirpath, _, filenames) in os.walk(input_dir):
+ for filename in filenames:
+ path = os.path.join(dirpath, filename)
+ relative_path = path.removeprefix(input_dir).removeprefix("/")
+ if not Fnmatch(relative_path, UNZIP_PATTERN):
+ continue
+ if filename.endswith(".prop") or filename == "prop.default" or "/etc/vintf/" in relative_path:
+ target_path = os.path.join(
+ output_dir, relative_path)
+ os.makedirs(os.path.dirname(target_path), exist_ok=True)
+ shutil.copy(path, target_path)
+ return output_dir
diff --git a/tools/releasetools/payload_signer.py b/tools/releasetools/payload_signer.py
new file mode 100644
index 0000000..9933aef
--- /dev/null
+++ b/tools/releasetools/payload_signer.py
@@ -0,0 +1,127 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+
+import common
+from common import OPTIONS
+
+logger = logging.getLogger(__name__)
+
+
+class PayloadSigner(object):
+ """A class that wraps the payload signing works.
+
+ When generating a Payload, hashes of the payload and metadata files will be
+ signed with the device key, either by calling an external payload signer or
+ by calling openssl with the package key. This class provides a unified
+ interface, so that callers can just call PayloadSigner.Sign().
+
+ If an external payload signer has been specified (OPTIONS.payload_signer), it
+ calls the signer with the provided args (OPTIONS.payload_signer_args). Note
+ that the signing key should be provided as part of the payload_signer_args.
+ Otherwise without an external signer, it uses the package key
+ (OPTIONS.package_key) and calls openssl for the signing works.
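+
+  A minimal usage sketch (the key name below is hypothetical):
+    signer = PayloadSigner(package_key="testkey", private_key_suffix=".pk8")
+    signed_payload = signer.SignPayload(unsigned_payload_file)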
+ """
+
+ def __init__(self, package_key=None, private_key_suffix=None, pw=None, payload_signer=None,
+ payload_signer_args=None, payload_signer_maximum_signature_size=None):
+ if package_key is None:
+ package_key = OPTIONS.package_key
+ if private_key_suffix is None:
+ private_key_suffix = OPTIONS.private_key_suffix
+ if payload_signer_args is None:
+ payload_signer_args = OPTIONS.payload_signer_args
+ if payload_signer_maximum_signature_size is None:
+ payload_signer_maximum_signature_size = OPTIONS.payload_signer_maximum_signature_size
+
+ if payload_signer is None:
+ # Prepare the payload signing key.
+ private_key = package_key + private_key_suffix
+
+ cmd = ["openssl", "pkcs8", "-in", private_key, "-inform", "DER"]
+ cmd.extend(["-passin", "pass:" + pw] if pw else ["-nocrypt"])
+ signing_key = common.MakeTempFile(prefix="key-", suffix=".key")
+ cmd.extend(["-out", signing_key])
+ common.RunAndCheckOutput(cmd, verbose=True)
+
+ self.signer = "openssl"
+ self.signer_args = ["pkeyutl", "-sign", "-inkey", signing_key,
+ "-pkeyopt", "digest:sha256"]
+ self.maximum_signature_size = self._GetMaximumSignatureSizeInBytes(
+ signing_key)
+ else:
+ self.signer = payload_signer
+ self.signer_args = payload_signer_args
+ if payload_signer_maximum_signature_size:
+ self.maximum_signature_size = int(
+ payload_signer_maximum_signature_size)
+ else:
+ # The legacy config uses RSA2048 keys.
+ logger.warning("The maximum signature size for payload signer is not"
+ " set, default to 256 bytes.")
+ self.maximum_signature_size = 256
+
+ @staticmethod
+ def _GetMaximumSignatureSizeInBytes(signing_key):
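+    """Queries delta_generator for the maximum signature size of signing_key."""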
+ out_signature_size_file = common.MakeTempFile("signature_size")
+ cmd = ["delta_generator", "--out_maximum_signature_size_file={}".format(
+ out_signature_size_file), "--private_key={}".format(signing_key)]
+ common.RunAndCheckOutput(cmd, verbose=True)
+ with open(out_signature_size_file) as f:
+ signature_size = f.read().rstrip()
+ logger.info("%s outputs the maximum signature size: %s", cmd[0],
+ signature_size)
+ return int(signature_size)
+
+ @staticmethod
+ def _Run(cmd):
+ common.RunAndCheckOutput(cmd, stdout=None, stderr=None)
+
+ def SignPayload(self, unsigned_payload):
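+    """Signs the given unsigned payload and returns the signed payload path.
+
+    The work happens in three steps: hash the payload and its metadata, sign
+    the two hashes, then insert the signatures back into the payload file.
+    """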
+
+ # 1. Generate hashes of the payload and metadata files.
+ payload_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
+ metadata_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
+ cmd = ["brillo_update_payload", "hash",
+ "--unsigned_payload", unsigned_payload,
+ "--signature_size", str(self.maximum_signature_size),
+ "--metadata_hash_file", metadata_sig_file,
+ "--payload_hash_file", payload_sig_file]
+ self._Run(cmd)
+
+ # 2. Sign the hashes.
+ signed_payload_sig_file = self.SignHashFile(payload_sig_file)
+ signed_metadata_sig_file = self.SignHashFile(metadata_sig_file)
+
+ # 3. Insert the signatures back into the payload file.
+ signed_payload_file = common.MakeTempFile(prefix="signed-payload-",
+ suffix=".bin")
+ cmd = ["brillo_update_payload", "sign",
+ "--unsigned_payload", unsigned_payload,
+ "--payload", signed_payload_file,
+ "--signature_size", str(self.maximum_signature_size),
+ "--metadata_signature_file", signed_metadata_sig_file,
+ "--payload_signature_file", signed_payload_sig_file]
+ self._Run(cmd)
+ return signed_payload_file
+
+ def SignHashFile(self, in_file):
+ """Signs the given input file. Returns the output filename."""
+ out_file = common.MakeTempFile(prefix="signed-", suffix=".bin")
+ cmd = [self.signer] + self.signer_args + ['-in', in_file, '-out', out_file]
+ common.RunAndCheckOutput(cmd)
+ return out_file
diff --git a/tools/releasetools/sign_apex.py b/tools/releasetools/sign_apex.py
index 6926467..d739982 100755
--- a/tools/releasetools/sign_apex.py
+++ b/tools/releasetools/sign_apex.py
@@ -42,20 +42,25 @@
--sign_tool <sign_tool>
Optional flag that specifies a custom signing tool for the contents of the apex.
+
+ --container_pw <name1=passwd,name2=passwd>
+ A mapping of key_name to password
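+          For example: --container_pw name1=password1,name2=password2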
"""
import logging
import shutil
+import re
import sys
import apex_utils
import common
logger = logging.getLogger(__name__)
+OPTIONS = common.OPTIONS
def SignApexFile(avbtool, apex_file, payload_key, container_key, no_hashtree,
- apk_keys=None, signing_args=None, codename_to_api_level_map=None, sign_tool=None):
+ apk_keys=None, signing_args=None, codename_to_api_level_map=None, sign_tool=None, container_pw=None):
"""Signs the given apex file."""
with open(apex_file, 'rb') as input_fp:
apex_data = input_fp.read()
@@ -65,12 +70,13 @@
apex_data,
payload_key=payload_key,
container_key=container_key,
- container_pw=None,
+ container_pw=container_pw,
codename_to_api_level_map=codename_to_api_level_map,
no_hashtree=no_hashtree,
apk_keys=apk_keys,
signing_args=signing_args,
- sign_tool=sign_tool)
+ sign_tool=sign_tool,
+ is_sepolicy=apex_file.endswith(OPTIONS.sepolicy_name))
def main(argv):
@@ -106,6 +112,15 @@
options['extra_apks'].update({n: key})
elif o == '--sign_tool':
options['sign_tool'] = a
+ elif o == '--container_pw':
+ passwords = {}
+      pairs = a.split(",")
+ for pair in pairs:
+ if "=" not in pair:
+ continue
+ tokens = pair.split("=", maxsplit=1)
+ passwords[tokens[0].strip()] = tokens[1].strip()
+ options['container_pw'] = passwords
else:
return False
return True
@@ -121,6 +136,7 @@
'payload_key=',
'extra_apks=',
'sign_tool=',
+ 'container_pw=',
],
extra_option_handler=option_handler)
@@ -141,7 +157,9 @@
signing_args=options.get('payload_extra_args'),
codename_to_api_level_map=options.get(
'codename_to_api_level_map', {}),
- sign_tool=options.get('sign_tool', None))
+ sign_tool=options.get('sign_tool', None),
+ container_pw=options.get('container_pw'),
+ )
shutil.copyfile(signed_apex, args[1])
logger.info("done.")
diff --git a/tools/releasetools/sign_target_files_apks b/tools/releasetools/sign_target_files_apks
deleted file mode 120000
index b5ec59a..0000000
--- a/tools/releasetools/sign_target_files_apks
+++ /dev/null
@@ -1 +0,0 @@
-sign_target_files_apks.py
\ No newline at end of file
diff --git a/tools/releasetools/sign_target_files_apks.py b/tools/releasetools/sign_target_files_apks.py
index 09d0b10..8291448 100755
--- a/tools/releasetools/sign_target_files_apks.py
+++ b/tools/releasetools/sign_target_files_apks.py
@@ -27,7 +27,7 @@
apkcerts.txt file, or the container key for an APEX. Option may be
repeated to give multiple extra packages.
- --extra_apex_payload_key <name=key>
+ --extra_apex_payload_key <name,name,...=key>
Add a mapping for APEX package name to payload signing key, which will
override the default payload signing key in apexkeys.txt. Note that the
container key should be overridden via the `--extra_apks` flag above.
@@ -141,6 +141,12 @@
Allow the existence of the file 'userdebug_plat_sepolicy.cil' under
(/system/system_ext|/system_ext)/etc/selinux.
If not set, error out when the file exists.
+
+ --override_apk_keys <path>
+ Replace all APK keys with this private key
+
+ --override_apex_keys <path>
+ Replace all APEX keys with this private key
"""
from __future__ import print_function
@@ -182,9 +188,6 @@
OPTIONS.key_map = {}
OPTIONS.rebuild_recovery = False
OPTIONS.replace_ota_keys = False
-OPTIONS.replace_verity_public_key = False
-OPTIONS.replace_verity_private_key = False
-OPTIONS.replace_verity_keyid = False
OPTIONS.remove_avb_public_keys = None
OPTIONS.tag_changes = ("-test-keys", "-dev-keys", "+release-keys")
OPTIONS.avb_keys = {}
@@ -197,6 +200,8 @@
OPTIONS.vendor_partitions = set()
OPTIONS.vendor_otatools = None
OPTIONS.allow_gsi_debug_sepolicy = False
+OPTIONS.override_apk_keys = None
+OPTIONS.override_apex_keys = None
AVB_FOOTER_ARGS_BY_PARTITION = {
@@ -245,6 +250,10 @@
def GetApkCerts(certmap):
+ if OPTIONS.override_apk_keys is not None:
+ for apk in certmap.keys():
+ certmap[apk] = OPTIONS.override_apk_keys
+
# apply the key remapping to the contents of the file
for apk, cert in certmap.items():
certmap[apk] = OPTIONS.key_map.get(cert, cert)
@@ -275,6 +284,15 @@
Raises:
AssertionError: On invalid container / payload key overrides.
"""
+ if OPTIONS.override_apex_keys is not None:
+ for apex in keys_info.keys():
+ keys_info[apex] = (OPTIONS.override_apex_keys, keys_info[apex][1], keys_info[apex][2])
+
+ if OPTIONS.override_apk_keys is not None:
+ key = key_map.get(OPTIONS.override_apk_keys, OPTIONS.override_apk_keys)
+ for apex in keys_info.keys():
+ keys_info[apex] = (keys_info[apex][0], key, keys_info[apex][2])
+
# Apply all the --extra_apex_payload_key options to override the payload
# signing keys in the given keys_info.
for apex, key in OPTIONS.extra_apex_payload_keys.items():
@@ -513,7 +531,14 @@
# RECOVERY/RAMDISK/default.prop is a legacy path, but will always exist
# as a symlink in the current code. So it's a no-op here. Keeping the
# path here for clarity.
- "RECOVERY/RAMDISK/default.prop") or filename.endswith("build.prop")
+      # Some build props might be stored under the path
+      # VENDOR_BOOT/RAMDISK_FRAGMENTS/recovery/RAMDISK/default.prop, and
+      # default.prop can be a symbolic link to prop.default, so overwrite all
+      # files that end with build.prop, default.prop or prop.default.
+ "RECOVERY/RAMDISK/default.prop") or \
+ filename.endswith("build.prop") or \
+ filename.endswith("/default.prop") or \
+ filename.endswith("/prop.default")
def ProcessTargetFiles(input_tf_zip, output_tf_zip, misc_info,
@@ -642,11 +667,6 @@
elif filename == "META/misc_info.txt":
pass
- # Skip verity public key if we will replace it.
- elif (OPTIONS.replace_verity_public_key and
- filename in ("BOOT/RAMDISK/verity_key",
- "ROOT/verity_key")):
- pass
elif (OPTIONS.remove_avb_public_keys and
(filename.startswith("BOOT/RAMDISK/avb/") or
filename.startswith("BOOT/RAMDISK/first_stage_ramdisk/avb/"))):
@@ -660,10 +680,6 @@
# Copy it verbatim if we don't want to remove it.
common.ZipWriteStr(output_tf_zip, out_info, data)
- # Skip verity keyid (for system_root_image use) if we will replace it.
- elif OPTIONS.replace_verity_keyid and filename == "BOOT/cmdline":
- pass
-
# Skip the vbmeta digest as we will recalculate it.
elif filename == "META/vbmeta_digest.txt":
pass
@@ -745,27 +761,6 @@
if OPTIONS.replace_ota_keys:
ReplaceOtaKeys(input_tf_zip, output_tf_zip, misc_info)
- # Replace the keyid string in misc_info dict.
- if OPTIONS.replace_verity_private_key:
- ReplaceVerityPrivateKey(misc_info, OPTIONS.replace_verity_private_key[1])
-
- if OPTIONS.replace_verity_public_key:
- # Replace the one in root dir in system.img.
- ReplaceVerityPublicKey(
- output_tf_zip, 'ROOT/verity_key', OPTIONS.replace_verity_public_key[1])
-
- if not system_root_image:
- # Additionally replace the copy in ramdisk if not using system-as-root.
- ReplaceVerityPublicKey(
- output_tf_zip,
- 'BOOT/RAMDISK/verity_key',
- OPTIONS.replace_verity_public_key[1])
-
- # Replace the keyid string in BOOT/cmdline.
- if OPTIONS.replace_verity_keyid:
- ReplaceVerityKeyId(input_tf_zip, output_tf_zip,
- OPTIONS.replace_verity_keyid[1])
-
# Replace the AVB signing keys, if any.
ReplaceAvbSigningKeys(misc_info)
@@ -881,7 +876,7 @@
pieces[-1] = EditTags(pieces[-1])
value = "/".join(pieces)
elif key == "ro.build.description":
- pieces = value.split(" ")
+ pieces = value.split()
assert pieces[-1].endswith("-keys")
pieces[-1] = EditTags(pieces[-1])
value = " ".join(pieces)
@@ -982,64 +977,6 @@
WriteOtacerts(output_tf_zip, info.filename, mapped_keys + extra_keys)
-def ReplaceVerityPublicKey(output_zip, filename, key_path):
- """Replaces the verity public key at the given path in the given zip.
-
- Args:
- output_zip: The output target_files zip.
- filename: The archive name in the output zip.
- key_path: The path to the public key.
- """
- print("Replacing verity public key with %s" % (key_path,))
- common.ZipWrite(output_zip, key_path, arcname=filename)
-
-
-def ReplaceVerityPrivateKey(misc_info, key_path):
- """Replaces the verity private key in misc_info dict.
-
- Args:
- misc_info: The info dict.
- key_path: The path to the private key in PKCS#8 format.
- """
- print("Replacing verity private key with %s" % (key_path,))
- misc_info["verity_key"] = key_path
-
-
-def ReplaceVerityKeyId(input_zip, output_zip, key_path):
- """Replaces the veritykeyid parameter in BOOT/cmdline.
-
- Args:
- input_zip: The input target_files zip, which should be already open.
- output_zip: The output target_files zip, which should be already open and
- writable.
- key_path: The path to the PEM encoded X.509 certificate.
- """
- in_cmdline = input_zip.read("BOOT/cmdline").decode()
- # Copy in_cmdline to output_zip if veritykeyid is not present.
- if "veritykeyid" not in in_cmdline:
- common.ZipWriteStr(output_zip, "BOOT/cmdline", in_cmdline)
- return
-
- out_buffer = []
- for param in in_cmdline.split():
- if "veritykeyid" not in param:
- out_buffer.append(param)
- continue
-
- # Extract keyid using openssl command.
- p = common.Run(["openssl", "x509", "-in", key_path, "-text"],
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- keyid, stderr = p.communicate()
- assert p.returncode == 0, "Failed to dump certificate: {}".format(stderr)
- keyid = re.search(
- r'Authority Key Identifier:\s*(?:keyid:)?([0-9a-fA-F:]*)', keyid).group(1).replace(':', '').lower()
- print("Replacing verity keyid with {}".format(keyid))
- out_buffer.append("veritykeyid=id:%s" % (keyid,))
-
- out_cmdline = ' '.join(out_buffer).strip() + '\n'
- common.ZipWriteStr(output_zip, "BOOT/cmdline", out_cmdline)
-
-
def ReplaceMiscInfoTxt(input_zip, output_zip, misc_info):
"""Replaces META/misc_info.txt.
@@ -1098,7 +1035,7 @@
tokens = []
changed = False
- for token in args.split(' '):
+ for token in args.split():
fingerprint_key = 'com.android.build.{}.fingerprint'.format(partition)
if not token.startswith(fingerprint_key):
tokens.append(token)
@@ -1304,6 +1241,7 @@
vendor_misc_info["avb_building_vbmeta_image"] = "false" # skip building vbmeta
vendor_misc_info["use_dynamic_partitions"] = "false" # super_empty
vendor_misc_info["build_super_partition"] = "false" # super split
+ vendor_misc_info["avb_vbmeta_system"] = "" # skip building vbmeta_system
with open(vendor_misc_info_path, "w") as output:
for key in sorted(vendor_misc_info):
output.write("{}={}\n".format(key, vendor_misc_info[key]))
@@ -1355,7 +1293,8 @@
img_file_path = "IMAGES/{}.img".format(p)
map_file_path = "IMAGES/{}.map".format(p)
common.ZipWrite(output_zip, os.path.join(vendor_tempdir, img_file_path), img_file_path)
- common.ZipWrite(output_zip, os.path.join(vendor_tempdir, map_file_path), map_file_path)
+ if os.path.exists(os.path.join(vendor_tempdir, map_file_path)):
+ common.ZipWrite(output_zip, os.path.join(vendor_tempdir, map_file_path), map_file_path)
# copy recovery.img, boot.img, recovery patch & install.sh
if OPTIONS.rebuild_recovery:
recovery_img = "IMAGES/recovery.img"
@@ -1379,8 +1318,9 @@
for n in names:
OPTIONS.extra_apks[n] = key
elif o == "--extra_apex_payload_key":
- apex_name, key = a.split("=")
- OPTIONS.extra_apex_payload_keys[apex_name] = key
+ apex_names, key = a.split("=")
+ for name in apex_names.split(","):
+ OPTIONS.extra_apex_payload_keys[name] = key
elif o == "--skip_apks_with_path_prefix":
# Check the prefix, which must be in all upper case.
prefix = a.split('/')[0]
@@ -1402,11 +1342,14 @@
new.append(i[0] + i[1:].strip())
OPTIONS.tag_changes = tuple(new)
elif o == "--replace_verity_public_key":
- OPTIONS.replace_verity_public_key = (True, a)
+ raise ValueError("--replace_verity_public_key is no longer supported,"
+ " please switch to AVB")
elif o == "--replace_verity_private_key":
- OPTIONS.replace_verity_private_key = (True, a)
+ raise ValueError("--replace_verity_private_key is no longer supported,"
+ " please switch to AVB")
elif o == "--replace_verity_keyid":
- OPTIONS.replace_verity_keyid = (True, a)
+ raise ValueError("--replace_verity_keyid is no longer supported, please"
+ " switch to AVB")
elif o == "--remove_avb_public_keys":
OPTIONS.remove_avb_public_keys = a.split(",")
elif o == "--avb_vbmeta_key":
@@ -1495,6 +1438,10 @@
OPTIONS.vendor_partitions = set(a.split(","))
elif o == "--allow_gsi_debug_sepolicy":
OPTIONS.allow_gsi_debug_sepolicy = True
+ elif o == "--override_apk_keys":
+ OPTIONS.override_apk_keys = a
+ elif o == "--override_apex_keys":
+ OPTIONS.override_apex_keys = a
else:
return False
return True
@@ -1554,6 +1501,8 @@
"vendor_partitions=",
"vendor_otatools=",
"allow_gsi_debug_sepolicy",
+ "override_apk_keys=",
+ "override_apex_keys=",
],
extra_option_handler=option_handler)
diff --git a/tools/releasetools/sparse_img.py b/tools/releasetools/sparse_img.py
index 524c0f2..a2f7e9e 100644
--- a/tools/releasetools/sparse_img.py
+++ b/tools/releasetools/sparse_img.py
@@ -41,8 +41,7 @@
"""
def __init__(self, simg_fn, file_map_fn=None, clobbered_blocks=None,
- mode="rb", build_map=True, allow_shared_blocks=False,
- hashtree_info_generator=None):
+ mode="rb", build_map=True, allow_shared_blocks=False):
self.simg_f = f = open(simg_fn, mode)
header_bin = f.read(28)
@@ -74,8 +73,6 @@
blk_sz, total_chunks)
if not build_map:
- assert not hashtree_info_generator, \
- "Cannot generate the hashtree info without building the offset map."
return
pos = 0 # in blocks
@@ -83,7 +80,7 @@
self.offset_map = offset_map = []
self.clobbered_blocks = rangelib.RangeSet(data=clobbered_blocks)
- for i in range(total_chunks):
+ for _ in range(total_chunks):
header_bin = f.read(12)
header = struct.unpack("<2H2I", header_bin)
chunk_type = header[0]
@@ -114,16 +111,6 @@
if data_sz != 0:
raise ValueError("Don't care chunk input size is non-zero (%u)" %
(data_sz))
- # Fills the don't care data ranges with zeros.
- # TODO(xunchang) pass the care_map to hashtree info generator.
- if hashtree_info_generator:
- fill_data = '\x00' * 4
- # In order to compute verity hashtree on device, we need to write
- # zeros explicitly to the don't care ranges. Because these ranges may
- # contain non-zero data from the previous build.
- care_data.append(pos)
- care_data.append(pos + chunk_sz)
- offset_map.append((pos, chunk_sz, None, fill_data))
pos += chunk_sz
@@ -150,10 +137,6 @@
extended = extended.intersect(all_blocks).subtract(self.care_map)
self.extended = extended
- self.hashtree_info = None
- if hashtree_info_generator:
- self.hashtree_info = hashtree_info_generator.Generate(self)
-
if file_map_fn:
self.LoadFileBlockMap(file_map_fn, self.clobbered_blocks,
allow_shared_blocks)
@@ -183,6 +166,11 @@
def ReadRangeSet(self, ranges):
return [d for d in self._GetRangeData(ranges)]
+ def ReadBlocks(self, start=0, num_blocks=None):
+ if num_blocks is None:
+ num_blocks = self.total_blocks
+ return self._GetRangeData([(start, start + num_blocks)])
+
def TotalSha1(self, include_clobbered_blocks=False):
"""Return the SHA-1 hash of all data in the 'care' regions.
@@ -286,8 +274,6 @@
remaining = remaining.subtract(ranges)
remaining = remaining.subtract(clobbered_blocks)
- if self.hashtree_info:
- remaining = remaining.subtract(self.hashtree_info.hashtree_range)
# For all the remaining blocks in the care_map (ie, those that
# aren't part of the data for any file nor part of the clobbered_blocks),
@@ -350,8 +336,6 @@
out["__NONZERO-%d" % i] = rangelib.RangeSet(data=blocks)
if clobbered_blocks:
out["__COPY"] = clobbered_blocks
- if self.hashtree_info:
- out["__HASHTREE"] = self.hashtree_info.hashtree_range
def ResetFileMap(self):
"""Throw away the file map and treat the entire image as
diff --git a/tools/releasetools/test_add_img_to_target_files.py b/tools/releasetools/test_add_img_to_target_files.py
index a5850d3..7b5476d 100644
--- a/tools/releasetools/test_add_img_to_target_files.py
+++ b/tools/releasetools/test_add_img_to_target_files.py
@@ -16,15 +16,16 @@
import os
import os.path
+import tempfile
import zipfile
import common
import test_utils
from add_img_to_target_files import (
AddPackRadioImages,
+ AddCareMapForAbOta, GetCareMap,
CheckAbOtaImages)
from rangelib import RangeSet
-from common import AddCareMapForAbOta, GetCareMap
OPTIONS = common.OPTIONS
@@ -124,9 +125,6 @@
def _test_AddCareMapForAbOta():
"""Helper function to set up the test for test_AddCareMapForAbOta()."""
OPTIONS.info_dict = {
- 'extfs_sparse_flag' : '-s',
- 'system_image_size' : 65536,
- 'vendor_image_size' : 40960,
'system_verity_block_device': '/dev/block/system',
'vendor_verity_block_device': '/dev/block/vendor',
'system.build.prop': common.PartitionBuildProps.FromDictionary(
@@ -149,13 +147,13 @@
system_image = test_utils.construct_sparse_image([
(0xCAC1, 6),
(0xCAC3, 4),
- (0xCAC1, 8)])
+ (0xCAC1, 6)], "system")
vendor_image = test_utils.construct_sparse_image([
- (0xCAC2, 12)])
+ (0xCAC2, 10)], "vendor")
image_paths = {
- 'system' : system_image,
- 'vendor' : vendor_image,
+ 'system': system_image,
+ 'vendor': vendor_image,
}
return image_paths
@@ -210,9 +208,6 @@
"""Tests the case for device using AVB."""
image_paths = self._test_AddCareMapForAbOta()
OPTIONS.info_dict = {
- 'extfs_sparse_flag': '-s',
- 'system_image_size': 65536,
- 'vendor_image_size': 40960,
'avb_system_hashtree_enable': 'true',
'avb_vendor_hashtree_enable': 'true',
'system.build.prop': common.PartitionBuildProps.FromDictionary(
@@ -244,9 +239,6 @@
"""Tests the case for partitions without fingerprint."""
image_paths = self._test_AddCareMapForAbOta()
OPTIONS.info_dict = {
- 'extfs_sparse_flag' : '-s',
- 'system_image_size' : 65536,
- 'vendor_image_size' : 40960,
'system_verity_block_device': '/dev/block/system',
'vendor_verity_block_device': '/dev/block/vendor',
}
@@ -255,8 +247,9 @@
AddCareMapForAbOta(care_map_file, ['system', 'vendor'], image_paths)
expected = ['system', RangeSet("0-5 10-15").to_string_raw(), "unknown",
- "unknown", 'vendor', RangeSet("0-9").to_string_raw(), "unknown",
- "unknown"]
+ "unknown", 'vendor', RangeSet(
+ "0-9").to_string_raw(), "unknown",
+ "unknown"]
self._verifyCareMap(expected, care_map_file)
@@ -265,9 +258,6 @@
"""Tests the case for partitions with thumbprint."""
image_paths = self._test_AddCareMapForAbOta()
OPTIONS.info_dict = {
- 'extfs_sparse_flag': '-s',
- 'system_image_size': 65536,
- 'vendor_image_size': 40960,
'system_verity_block_device': '/dev/block/system',
'vendor_verity_block_device': '/dev/block/vendor',
'system.build.prop': common.PartitionBuildProps.FromDictionary(
@@ -297,9 +287,7 @@
@test_utils.SkipIfExternalToolsUnavailable()
def test_AddCareMapForAbOta_skipPartition(self):
image_paths = self._test_AddCareMapForAbOta()
-
- # Remove vendor_image_size to invalidate the care_map for vendor.img.
- del OPTIONS.info_dict['vendor_image_size']
+ test_utils.erase_avb_footer(image_paths["vendor"])
care_map_file = os.path.join(OPTIONS.input_tmp, 'META', 'care_map.pb')
AddCareMapForAbOta(care_map_file, ['system', 'vendor'], image_paths)
@@ -313,10 +301,8 @@
@test_utils.SkipIfExternalToolsUnavailable()
def test_AddCareMapForAbOta_skipAllPartitions(self):
image_paths = self._test_AddCareMapForAbOta()
-
- # Remove the image_size properties for all the partitions.
- del OPTIONS.info_dict['system_image_size']
- del OPTIONS.info_dict['vendor_image_size']
+ test_utils.erase_avb_footer(image_paths["system"])
+ test_utils.erase_avb_footer(image_paths["vendor"])
care_map_file = os.path.join(OPTIONS.input_tmp, 'META', 'care_map.pb')
AddCareMapForAbOta(care_map_file, ['system', 'vendor'], image_paths)
@@ -395,35 +381,18 @@
sparse_image = test_utils.construct_sparse_image([
(0xCAC1, 6),
(0xCAC3, 4),
- (0xCAC1, 6)])
- OPTIONS.info_dict = {
- 'extfs_sparse_flag' : '-s',
- 'system_image_size' : 53248,
- }
+ (0xCAC1, 6)], "system")
name, care_map = GetCareMap('system', sparse_image)
self.assertEqual('system', name)
- self.assertEqual(RangeSet("0-5 10-12").to_string_raw(), care_map)
+ self.assertEqual(RangeSet("0-5 10-15").to_string_raw(), care_map)
def test_GetCareMap_invalidPartition(self):
self.assertRaises(AssertionError, GetCareMap, 'oem', None)
- def test_GetCareMap_invalidAdjustedPartitionSize(self):
- sparse_image = test_utils.construct_sparse_image([
- (0xCAC1, 6),
- (0xCAC3, 4),
- (0xCAC1, 6)])
- OPTIONS.info_dict = {
- 'extfs_sparse_flag' : '-s',
- 'system_image_size' : -45056,
- }
- self.assertRaises(AssertionError, GetCareMap, 'system', sparse_image)
-
def test_GetCareMap_nonSparseImage(self):
- OPTIONS.info_dict = {
- 'system_image_size' : 53248,
- }
- # 'foo' is the image filename, which is expected to be not used by
- # GetCareMap().
- name, care_map = GetCareMap('system', 'foo')
- self.assertEqual('system', name)
- self.assertEqual(RangeSet("0-12").to_string_raw(), care_map)
+ with tempfile.NamedTemporaryFile() as tmpfile:
+ tmpfile.truncate(4096 * 13)
+ test_utils.append_avb_footer(tmpfile.name, "system")
+ name, care_map = GetCareMap('system', tmpfile.name)
+ self.assertEqual('system', name)
+ self.assertEqual(RangeSet("0-12").to_string_raw(), care_map)
diff --git a/tools/releasetools/test_check_target_files_vintf.py b/tools/releasetools/test_check_target_files_vintf.py
index 8725dd6..7c154d7 100644
--- a/tools/releasetools/test_check_target_files_vintf.py
+++ b/tools/releasetools/test_check_target_files_vintf.py
@@ -15,6 +15,7 @@
#
import os.path
+import shutil
import common
import test_utils
@@ -86,6 +87,28 @@
return test_dir
+ # Prepare test dir with required HAL for APEX testing
+ def prepare_apex_test_dir(self, test_delta_rel_path):
+ test_dir = self.prepare_test_dir(test_delta_rel_path)
+ write_string_to_file(
+ """<compatibility-matrix version="1.0" level="1" type="framework">
+ <hal format="aidl" optional="false" updatable-via-apex="true">
+ <name>android.apex.foo</name>
+ <version>1</version>
+ <interface>
+ <name>IApex</name>
+ <instance>default</instance>
+ </interface>
+ </hal>
+ <sepolicy>
+ <sepolicy-version>0.0</sepolicy-version>
+ <kernel-sepolicy-version>0</kernel-sepolicy-version>
+ </sepolicy>
+ </compatibility-matrix>""",
+ os.path.join(test_dir, 'SYSTEM/etc/vintf/compatibility_matrix.1.xml'))
+
+ return test_dir
+
@test_utils.SkipIfExternalToolsUnavailable()
def test_CheckVintf_skeleton(self):
msg = 'vintf check with skeleton target files failed.'
@@ -143,3 +166,25 @@
os.path.join(test_dir, 'VENDOR/etc/vintf/manifest.xml'))
# Should raise an error because a file has invalid format.
self.assertRaises(common.ExternalError, CheckVintf, test_dir)
+
+ @test_utils.SkipIfExternalToolsUnavailable()
+ def test_CheckVintf_apex_compat(self):
+ apex_file_name = 'com.android.apex.vendor.foo.with_vintf.apex'
+ msg = 'vintf/apex_compat should be compatible because ' \
+ 'APEX %s has the required HALs' % (apex_file_name)
+ test_dir = self.prepare_apex_test_dir('vintf/apex_compat')
+ # Copy APEX under VENDOR/apex
+ apex_file = os.path.join(test_utils.get_current_dir(), apex_file_name)
+ apex_dir = os.path.join(test_dir, 'VENDOR/apex')
+ os.makedirs(apex_dir)
+ shutil.copy(apex_file, apex_dir)
+ # Should find required HAL via APEX
+ self.assertTrue(CheckVintf(test_dir), msg=msg)
+
+ @test_utils.SkipIfExternalToolsUnavailable()
+ def test_CheckVintf_apex_incompat(self):
+ msg = 'vintf/apex_incompat should be incompatible because ' \
+ 'no APEX data'
+ test_dir = self.prepare_apex_test_dir('vintf/apex_incompat')
+ # Should not find required HAL
+ self.assertFalse(CheckVintf(test_dir), msg=msg)
diff --git a/tools/releasetools/test_common.py b/tools/releasetools/test_common.py
index f973263..2dfd8c7 100644
--- a/tools/releasetools/test_common.py
+++ b/tools/releasetools/test_common.py
@@ -452,12 +452,14 @@
test_file.write(bytes(data))
test_file.close()
- expected_stat = os.stat(test_file_name)
expected_mode = extra_zipwrite_args.get("perms", 0o644)
expected_compress_type = extra_zipwrite_args.get("compress_type",
zipfile.ZIP_STORED)
- time.sleep(5) # Make sure the atime/mtime will change measurably.
+ # Arbitrary timestamp, just to make sure common.ZipWrite() restores
+ # the timestamp after writing.
+ os.utime(test_file_name, (1234567, 1234567))
+ expected_stat = os.stat(test_file_name)
common.ZipWrite(zip_file, test_file_name, **extra_zipwrite_args)
common.ZipClose(zip_file)
@@ -480,8 +482,6 @@
try:
expected_compress_type = extra_args.get("compress_type",
zipfile.ZIP_STORED)
- time.sleep(5) # Make sure the atime/mtime will change measurably.
-
if not isinstance(zinfo_or_arcname, zipfile.ZipInfo):
arcname = zinfo_or_arcname
expected_mode = extra_args.get("perms", 0o644)
@@ -528,11 +528,13 @@
test_file.write(data)
test_file.close()
+ # Arbitrary timestamp, just to make sure common.ZipWrite() restores
+ # the timestamp after writing.
+ os.utime(test_file_name, (1234567, 1234567))
expected_stat = os.stat(test_file_name)
expected_mode = 0o644
expected_compress_type = extra_args.get("compress_type",
zipfile.ZIP_STORED)
- time.sleep(5) # Make sure the atime/mtime will change measurably.
common.ZipWrite(zip_file, test_file_name, **extra_args)
common.ZipWriteStr(zip_file, arcname_small, small, **extra_args)
@@ -2186,3 +2188,29 @@
}
self.assertRaises(ValueError, common.PartitionBuildProps.FromInputFile,
input_zip, 'odm', placeholder_values)
+
+ def test_partitionBuildProps_fromInputFile_deepcopy(self):
+ build_prop = [
+ 'ro.odm.build.date.utc=1578430045',
+ 'ro.odm.build.fingerprint='
+ 'google/coral/coral:10/RP1A.200325.001/6337676:user/dev-keys',
+ 'ro.product.odm.device=coral',
+ ]
+ input_file = self._BuildZipFile({
+ 'ODM/etc/build.prop': '\n'.join(build_prop),
+ })
+
+ with zipfile.ZipFile(input_file, 'r', allowZip64=True) as input_zip:
+ placeholder_values = {
+ 'ro.boot.product.device_name': ['std', 'pro']
+ }
+ partition_props = common.PartitionBuildProps.FromInputFile(
+ input_zip, 'odm', placeholder_values)
+
+ copied_props = copy.deepcopy(partition_props)
+ self.assertEqual({
+ 'ro.odm.build.date.utc': '1578430045',
+ 'ro.odm.build.fingerprint':
+ 'google/coral/coral:10/RP1A.200325.001/6337676:user/dev-keys',
+ 'ro.product.odm.device': 'coral',
+ }, copied_props.build_props)
diff --git a/tools/releasetools/test_merge_ota.py b/tools/releasetools/test_merge_ota.py
new file mode 100644
index 0000000..4fa7c02
--- /dev/null
+++ b/tools/releasetools/test_merge_ota.py
@@ -0,0 +1,86 @@
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import tempfile
+import test_utils
+import merge_ota
+import update_payload
+from update_metadata_pb2 import DynamicPartitionGroup
+from update_metadata_pb2 import DynamicPartitionMetadata
+from test_utils import SkipIfExternalToolsUnavailable, ReleaseToolsTestCase
+
+
+class MergeOtaTest(ReleaseToolsTestCase):
+ def setUp(self) -> None:
+ self.testdata_dir = test_utils.get_testdata_dir()
+ return super().setUp()
+
+ @SkipIfExternalToolsUnavailable()
+ def test_MergeThreeOtas(self):
+ ota1 = os.path.join(self.testdata_dir, "tuna_vbmeta.zip")
+ ota2 = os.path.join(self.testdata_dir, "tuna_vbmeta_system.zip")
+ ota3 = os.path.join(self.testdata_dir, "tuna_vbmeta_vendor.zip")
+ payloads = [update_payload.Payload(ota) for ota in [ota1, ota2, ota3]]
+ with tempfile.NamedTemporaryFile() as output_file:
+ merge_ota.main(["merge_ota", "-v", ota1, ota2, ota3,
+ "--output", output_file.name])
+ payload = update_payload.Payload(output_file.name)
+ partition_names = [
+ part.partition_name for part in payload.manifest.partitions]
+ self.assertEqual(partition_names, [
+ "vbmeta", "vbmeta_system", "vbmeta_vendor"])
+ payload.CheckDataHash()
+ for i in range(3):
+ self.assertEqual(payload.manifest.partitions[i].old_partition_info,
+ payloads[i].manifest.partitions[0].old_partition_info)
+ self.assertEqual(payload.manifest.partitions[i].new_partition_info,
+ payloads[i].manifest.partitions[0].new_partition_info)
+
+ def test_MergeDAPSnapshotDisabled(self):
+ dap1 = DynamicPartitionMetadata()
+ dap2 = DynamicPartitionMetadata()
+ merged_dap = DynamicPartitionMetadata()
+ dap1.snapshot_enabled = True
+ dap2.snapshot_enabled = False
+ merge_ota.MergeDynamicPartitionMetadata(merged_dap, dap1)
+ merge_ota.MergeDynamicPartitionMetadata(merged_dap, dap2)
+ self.assertFalse(merged_dap.snapshot_enabled)
+
+ def test_MergeDAPSnapshotEnabled(self):
+ dap1 = DynamicPartitionMetadata()
+ dap2 = DynamicPartitionMetadata()
+ merged_dap = DynamicPartitionMetadata()
+ merged_dap.snapshot_enabled = True
+ dap1.snapshot_enabled = True
+ dap2.snapshot_enabled = True
+ merge_ota.MergeDynamicPartitionMetadata(merged_dap, dap1)
+ merge_ota.MergeDynamicPartitionMetadata(merged_dap, dap2)
+ self.assertTrue(merged_dap.snapshot_enabled)
+
+ def test_MergeDAPGroups(self):
+ dap1 = DynamicPartitionMetadata()
+ dap1.groups.append(DynamicPartitionGroup(
+ name="abc", partition_names=["a", "b", "c"]))
+ dap2 = DynamicPartitionMetadata()
+ dap2.groups.append(DynamicPartitionGroup(
+ name="abc", partition_names=["d", "e", "f"]))
+ merged_dap = DynamicPartitionMetadata()
+ merge_ota.MergeDynamicPartitionMetadata(merged_dap, dap1)
+ merge_ota.MergeDynamicPartitionMetadata(merged_dap, dap2)
+ self.assertEqual(len(merged_dap.groups), 1)
+ self.assertEqual(merged_dap.groups[0].name, "abc")
+ self.assertEqual(merged_dap.groups[0].partition_names, [
+ "a", "b", "c", "d", "e", "f"])
diff --git a/tools/releasetools/test_ota_from_target_files.py b/tools/releasetools/test_ota_from_target_files.py
index 11cfee1..ad0f7a8 100644
--- a/tools/releasetools/test_ota_from_target_files.py
+++ b/tools/releasetools/test_ota_from_target_files.py
@@ -17,6 +17,7 @@
import copy
import os
import os.path
+import tempfile
import zipfile
import common
@@ -24,17 +25,18 @@
import test_utils
from ota_utils import (
BuildLegacyOtaMetadata, CalculateRuntimeDevicesAndFingerprints,
- ConstructOtaApexInfo, FinalizeMetadata, GetPackageMetadata, PropertyFiles)
+ ConstructOtaApexInfo, FinalizeMetadata, GetPackageMetadata, PropertyFiles, AbOtaPropertyFiles, PayloadGenerator, StreamingPropertyFiles)
from ota_from_target_files import (
- _LoadOemDicts, AbOtaPropertyFiles,
+ _LoadOemDicts,
GetTargetFilesZipForCustomImagesUpdates,
GetTargetFilesZipForPartialUpdates,
GetTargetFilesZipForSecondaryImages,
GetTargetFilesZipWithoutPostinstallConfig,
- Payload, PayloadSigner, POSTINSTALL_CONFIG,
- StreamingPropertyFiles, AB_PARTITIONS)
+ POSTINSTALL_CONFIG, AB_PARTITIONS)
from apex_utils import GetApexInfoFromTargetFiles
from test_utils import PropertyFilesTestCase
+from common import OPTIONS
+from payload_signer import PayloadSigner
def construct_target_files(secondary=False, compressedApex=False):
@@ -973,7 +975,7 @@
@test_utils.SkipIfExternalToolsUnavailable()
def test_GetPayloadMetadataOffsetAndSize(self):
target_file = construct_target_files()
- payload = Payload()
+ payload = PayloadGenerator()
payload.Generate(target_file)
payload_signer = PayloadSigner()
@@ -1028,7 +1030,7 @@
0, proc.returncode,
'Failed to run brillo_update_payload:\n{}'.format(stdoutdata))
- signed_metadata_sig_file = payload_signer.Sign(metadata_sig_file)
+ signed_metadata_sig_file = payload_signer.SignHashFile(metadata_sig_file)
# Finally we can compare the two signatures.
with open(signed_metadata_sig_file, 'rb') as verify_fp:
@@ -1038,7 +1040,7 @@
def construct_zip_package_withValidPayload(with_metadata=False):
# Cannot use construct_zip_package() since we need a "valid" payload.bin.
target_file = construct_target_files()
- payload = Payload()
+ payload = PayloadGenerator()
payload.Generate(target_file)
payload_signer = PayloadSigner()
@@ -1142,10 +1144,10 @@
self.assertEqual('openssl', payload_signer.signer)
def test_init_withExternalSigner(self):
- common.OPTIONS.payload_signer = 'abc'
common.OPTIONS.payload_signer_args = ['arg1', 'arg2']
common.OPTIONS.payload_signer_maximum_signature_size = '512'
- payload_signer = PayloadSigner()
+ payload_signer = PayloadSigner(
+ OPTIONS.package_key, OPTIONS.private_key_suffix, payload_signer='abc')
self.assertEqual('abc', payload_signer.signer)
self.assertEqual(['arg1', 'arg2'], payload_signer.signer_args)
self.assertEqual(512, payload_signer.maximum_signature_size)
@@ -1168,35 +1170,36 @@
def test_Sign(self):
payload_signer = PayloadSigner()
input_file = os.path.join(self.testdata_dir, self.SIGFILE)
- signed_file = payload_signer.Sign(input_file)
+ signed_file = payload_signer.SignHashFile(input_file)
verify_file = os.path.join(self.testdata_dir, self.SIGNED_SIGFILE)
self._assertFilesEqual(verify_file, signed_file)
def test_Sign_withExternalSigner_openssl(self):
"""Uses openssl as the external payload signer."""
- common.OPTIONS.payload_signer = 'openssl'
common.OPTIONS.payload_signer_args = [
'pkeyutl', '-sign', '-keyform', 'DER', '-inkey',
os.path.join(self.testdata_dir, 'testkey.pk8'),
'-pkeyopt', 'digest:sha256']
- payload_signer = PayloadSigner()
+ payload_signer = PayloadSigner(
+ OPTIONS.package_key, OPTIONS.private_key_suffix, payload_signer="openssl")
input_file = os.path.join(self.testdata_dir, self.SIGFILE)
- signed_file = payload_signer.Sign(input_file)
+ signed_file = payload_signer.SignHashFile(input_file)
verify_file = os.path.join(self.testdata_dir, self.SIGNED_SIGFILE)
self._assertFilesEqual(verify_file, signed_file)
def test_Sign_withExternalSigner_script(self):
"""Uses testdata/payload_signer.sh as the external payload signer."""
- common.OPTIONS.payload_signer = os.path.join(
+ external_signer = os.path.join(
self.testdata_dir, 'payload_signer.sh')
- os.chmod(common.OPTIONS.payload_signer, 0o700)
+ os.chmod(external_signer, 0o700)
common.OPTIONS.payload_signer_args = [
os.path.join(self.testdata_dir, 'testkey.pk8')]
- payload_signer = PayloadSigner()
+ payload_signer = PayloadSigner(
+ OPTIONS.package_key, OPTIONS.private_key_suffix, payload_signer=external_signer)
input_file = os.path.join(self.testdata_dir, self.SIGFILE)
- signed_file = payload_signer.Sign(input_file)
+ signed_file = payload_signer.SignHashFile(input_file)
verify_file = os.path.join(self.testdata_dir, self.SIGNED_SIGFILE)
self._assertFilesEqual(verify_file, signed_file)
@@ -1219,7 +1222,7 @@
@staticmethod
def _create_payload_full(secondary=False):
target_file = construct_target_files(secondary)
- payload = Payload(secondary)
+ payload = PayloadGenerator(secondary, OPTIONS.wipe_user_data)
payload.Generate(target_file)
return payload
@@ -1227,7 +1230,7 @@
def _create_payload_incremental():
target_file = construct_target_files()
source_file = construct_target_files()
- payload = Payload()
+ payload = PayloadGenerator()
payload.Generate(target_file, source_file)
return payload
@@ -1245,7 +1248,7 @@
def test_Generate_additionalArgs(self):
target_file = construct_target_files()
source_file = construct_target_files()
- payload = Payload()
+ payload = PayloadGenerator()
# This should work the same as calling payload.Generate(target_file,
# source_file).
payload.Generate(
@@ -1256,7 +1259,7 @@
def test_Generate_invalidInput(self):
target_file = construct_target_files()
common.ZipDelete(target_file, 'IMAGES/vendor.img')
- payload = Payload()
+ payload = PayloadGenerator()
self.assertRaises(common.ExternalError, payload.Generate, target_file)
@test_utils.SkipIfExternalToolsUnavailable()
@@ -1292,6 +1295,9 @@
common.OPTIONS.wipe_user_data = True
payload = self._create_payload_full()
payload.Sign(PayloadSigner())
+ with tempfile.NamedTemporaryFile() as fp:
+ with zipfile.ZipFile(fp, "w") as zfp:
+ payload.WriteToZip(zfp)
with open(payload.payload_properties) as properties_fp:
self.assertIn("POWERWASH=1", properties_fp.read())
@@ -1300,6 +1306,9 @@
def test_Sign_secondary(self):
payload = self._create_payload_full(secondary=True)
payload.Sign(PayloadSigner())
+ with tempfile.NamedTemporaryFile() as fp:
+ with zipfile.ZipFile(fp, "w") as zfp:
+ payload.WriteToZip(zfp)
with open(payload.payload_properties) as properties_fp:
self.assertIn("SWITCH_SLOT_ON_REBOOT=0", properties_fp.read())
@@ -1324,33 +1333,17 @@
with zipfile.ZipFile(output_file) as verify_zip:
# First make sure we have the essential entries.
namelist = verify_zip.namelist()
- self.assertIn(Payload.PAYLOAD_BIN, namelist)
- self.assertIn(Payload.PAYLOAD_PROPERTIES_TXT, namelist)
+ self.assertIn(PayloadGenerator.PAYLOAD_BIN, namelist)
+ self.assertIn(PayloadGenerator.PAYLOAD_PROPERTIES_TXT, namelist)
# Then assert these entries are stored.
for entry_info in verify_zip.infolist():
- if entry_info.filename not in (Payload.PAYLOAD_BIN,
- Payload.PAYLOAD_PROPERTIES_TXT):
+ if entry_info.filename not in (PayloadGenerator.PAYLOAD_BIN,
+ PayloadGenerator.PAYLOAD_PROPERTIES_TXT):
continue
self.assertEqual(zipfile.ZIP_STORED, entry_info.compress_type)
@test_utils.SkipIfExternalToolsUnavailable()
- def test_WriteToZip_unsignedPayload(self):
- """Unsigned payloads should not be allowed to be written to zip."""
- payload = self._create_payload_full()
-
- output_file = common.MakeTempFile(suffix='.zip')
- with zipfile.ZipFile(output_file, 'w', allowZip64=True) as output_zip:
- self.assertRaises(AssertionError, payload.WriteToZip, output_zip)
-
- # Also test with incremental payload.
- payload = self._create_payload_incremental()
-
- output_file = common.MakeTempFile(suffix='.zip')
- with zipfile.ZipFile(output_file, 'w', allowZip64=True) as output_zip:
- self.assertRaises(AssertionError, payload.WriteToZip, output_zip)
-
- @test_utils.SkipIfExternalToolsUnavailable()
def test_WriteToZip_secondary(self):
payload = self._create_payload_full(secondary=True)
payload.Sign(PayloadSigner())
@@ -1362,14 +1355,14 @@
with zipfile.ZipFile(output_file) as verify_zip:
# First make sure we have the essential entries.
namelist = verify_zip.namelist()
- self.assertIn(Payload.SECONDARY_PAYLOAD_BIN, namelist)
- self.assertIn(Payload.SECONDARY_PAYLOAD_PROPERTIES_TXT, namelist)
+ self.assertIn(PayloadGenerator.SECONDARY_PAYLOAD_BIN, namelist)
+ self.assertIn(PayloadGenerator.SECONDARY_PAYLOAD_PROPERTIES_TXT, namelist)
# Then assert these entries are stored.
for entry_info in verify_zip.infolist():
if entry_info.filename not in (
- Payload.SECONDARY_PAYLOAD_BIN,
- Payload.SECONDARY_PAYLOAD_PROPERTIES_TXT):
+ PayloadGenerator.SECONDARY_PAYLOAD_BIN,
+ PayloadGenerator.SECONDARY_PAYLOAD_PROPERTIES_TXT):
continue
self.assertEqual(zipfile.ZIP_STORED, entry_info.compress_type)
diff --git a/tools/releasetools/test_sign_apex.py b/tools/releasetools/test_sign_apex.py
index 8470f20..7723de7 100644
--- a/tools/releasetools/test_sign_apex.py
+++ b/tools/releasetools/test_sign_apex.py
@@ -59,6 +59,21 @@
self.assertTrue(os.path.exists(signed_test_apex))
@test_utils.SkipIfExternalToolsUnavailable()
+ def test_SignSepolicyApex(self):
+ test_apex = os.path.join(self.testdata_dir, 'sepolicy.apex')
+ payload_key = os.path.join(self.testdata_dir, 'testkey_RSA4096.key')
+ container_key = os.path.join(self.testdata_dir, 'testkey')
+ apk_keys = {'SEPolicy-33.zip': os.path.join(self.testdata_dir, 'testkey')}
+ signed_test_apex = sign_apex.SignApexFile(
+ 'avbtool',
+ test_apex,
+ payload_key,
+ container_key,
+ False,
+        apk_keys)
+ self.assertTrue(os.path.exists(signed_test_apex))
+
+ @test_utils.SkipIfExternalToolsUnavailable()
def test_SignCompressedApexFile(self):
apex = os.path.join(test_utils.get_current_dir(), 'com.android.apex.compressed.v1.capex')
payload_key = os.path.join(self.testdata_dir, 'testkey_RSA4096.key')
diff --git a/tools/releasetools/test_sign_target_files_apks.py b/tools/releasetools/test_sign_target_files_apks.py
index 0f13add..0cd7dac 100644
--- a/tools/releasetools/test_sign_target_files_apks.py
+++ b/tools/releasetools/test_sign_target_files_apks.py
@@ -23,8 +23,8 @@
import test_utils
from sign_target_files_apks import (
CheckApkAndApexKeysAvailable, EditTags, GetApkFileInfo, ReadApexKeysInfo,
- ReplaceCerts, ReplaceGkiSigningKey, ReplaceVerityKeyId, RewriteAvbProps,
- RewriteProps, WriteOtacerts)
+ ReplaceCerts, ReplaceGkiSigningKey, RewriteAvbProps, RewriteProps,
+ WriteOtacerts)
class SignTargetFilesApksTest(test_utils.ReleaseToolsTestCase):
@@ -154,64 +154,6 @@
'\n'.join([prop[1] for prop in props]) + '\n',
RewriteProps('\n'.join([prop[0] for prop in props])))
- def test_ReplaceVerityKeyId(self):
- BOOT_CMDLINE1 = (
- "console=ttyHSL0,115200,n8 androidboot.console=ttyHSL0 "
- "androidboot.hardware=marlin user_debug=31 ehci-hcd.park=3 "
- "lpm_levels.sleep_disabled=1 cma=32M@0-0xffffffff loop.max_part=7 "
- "buildvariant=userdebug "
- "veritykeyid=id:7e4333f9bba00adfe0ede979e28ed1920492b40f\n")
-
- BOOT_CMDLINE2 = (
- "console=ttyHSL0,115200,n8 androidboot.console=ttyHSL0 "
- "androidboot.hardware=marlin user_debug=31 ehci-hcd.park=3 "
- "lpm_levels.sleep_disabled=1 cma=32M@0-0xffffffff loop.max_part=7 "
- "buildvariant=userdebug "
- "veritykeyid=id:d24f2590e9abab5cff5f59da4c4f0366e3f43e94\n")
-
- input_file = common.MakeTempFile(suffix='.zip')
- with zipfile.ZipFile(input_file, 'w', allowZip64=True) as input_zip:
- input_zip.writestr('BOOT/cmdline', BOOT_CMDLINE1)
-
- # Test with the first certificate.
- cert_file = os.path.join(self.testdata_dir, 'verity.x509.pem')
-
- output_file = common.MakeTempFile(suffix='.zip')
- with zipfile.ZipFile(input_file, 'r', allowZip64=True) as input_zip, \
- zipfile.ZipFile(output_file, 'w', allowZip64=True) as output_zip:
- ReplaceVerityKeyId(input_zip, output_zip, cert_file)
-
- with zipfile.ZipFile(output_file) as output_zip:
- self.assertEqual(BOOT_CMDLINE1, output_zip.read('BOOT/cmdline').decode())
-
- # Test with the second certificate.
- cert_file = os.path.join(self.testdata_dir, 'testkey.x509.pem')
-
- with zipfile.ZipFile(input_file, 'r', allowZip64=True) as input_zip, \
- zipfile.ZipFile(output_file, 'w', allowZip64=True) as output_zip:
- ReplaceVerityKeyId(input_zip, output_zip, cert_file)
-
- with zipfile.ZipFile(output_file) as output_zip:
- self.assertEqual(BOOT_CMDLINE2, output_zip.read('BOOT/cmdline').decode())
-
- def test_ReplaceVerityKeyId_no_veritykeyid(self):
- BOOT_CMDLINE = (
- "console=ttyHSL0,115200,n8 androidboot.hardware=bullhead boot_cpus=0-5 "
- "lpm_levels.sleep_disabled=1 msm_poweroff.download_mode=0 "
- "loop.max_part=7\n")
-
- input_file = common.MakeTempFile(suffix='.zip')
- with zipfile.ZipFile(input_file, 'w', allowZip64=True) as input_zip:
- input_zip.writestr('BOOT/cmdline', BOOT_CMDLINE)
-
- output_file = common.MakeTempFile(suffix='.zip')
- with zipfile.ZipFile(input_file, 'r', allowZip64=True) as input_zip, \
- zipfile.ZipFile(output_file, 'w', allowZip64=True) as output_zip:
- ReplaceVerityKeyId(input_zip, output_zip, None)
-
- with zipfile.ZipFile(output_file) as output_zip:
- self.assertEqual(BOOT_CMDLINE, output_zip.read('BOOT/cmdline').decode())
-
def test_ReplaceCerts(self):
cert1_path = os.path.join(self.testdata_dir, 'platform.x509.pem')
with open(cert1_path) as cert1_fp:
diff --git a/tools/releasetools/test_utils.py b/tools/releasetools/test_utils.py
index e30d2b9..5bbcf7f 100755
--- a/tools/releasetools/test_utils.py
+++ b/tools/releasetools/test_utils.py
@@ -19,6 +19,7 @@
Utils for running unittests.
"""
+import avbtool
import logging
import os
import os.path
@@ -57,12 +58,14 @@
current_dir = os.path.dirname(os.path.realpath(__file__))
return os.path.join(current_dir, 'testdata')
+
def get_current_dir():
"""Returns the current dir, relative to the script dir."""
# The script dir is the one we want, which could be different from pwd.
current_dir = os.path.dirname(os.path.realpath(__file__))
return current_dir
+
def get_search_path():
"""Returns the search path that has 'framework/signapk.jar' under."""
@@ -83,14 +86,33 @@
# In relative to 'build/make/tools/releasetools' in the Android source.
['..'] * 4 + ['out', 'host', 'linux-x86'],
# Or running the script unpacked from otatools.zip.
- ['..']):
+ ['..']):
full_path = os.path.realpath(os.path.join(current_dir, *path))
if signapk_exists(full_path):
return full_path
return None
-def construct_sparse_image(chunks):
+def append_avb_footer(file_path: str, partition_name: str = ""):
+ avb = avbtool.AvbTool()
+ try:
+ args = ["avbtool", "add_hashtree_footer", "--image", file_path,
+ "--partition_name", partition_name, "--do_not_generate_fec"]
+ avb.run(args)
+ except SystemExit:
+ raise ValueError(f"Failed to append hashtree footer {args}")
+
+
+def erase_avb_footer(file_path: str):
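+  """Erases any AVB footer from the image at file_path."""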
+ avb = avbtool.AvbTool()
+ try:
+ args = ["avbtool", "erase_footer", "--image", file_path]
+ avb.run(args)
+ except SystemExit:
+ raise ValueError(f"Failed to erase hashtree footer {args}")
+
+
+def construct_sparse_image(chunks, partition_name: str = ""):
"""Returns a sparse image file constructed from the given chunks.
From system/core/libsparse/sparse_format.h.
@@ -151,6 +173,7 @@
if data_size != 0:
fp.write(os.urandom(data_size))
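+  # Tests now derive care maps from the AVB footer rather than from
+  # image_size properties, so stamp every constructed image with one.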
+ append_avb_footer(sparse_image, partition_name)
return sparse_image
@@ -201,6 +224,7 @@
def tearDown(self):
common.Cleanup()
+
class PropertyFilesTestCase(ReleaseToolsTestCase):
@staticmethod
diff --git a/tools/releasetools/test_verity_utils.py b/tools/releasetools/test_verity_utils.py
index e2a022a..4a0ff09 100644
--- a/tools/releasetools/test_verity_utils.py
+++ b/tools/releasetools/test_verity_utils.py
@@ -27,249 +27,11 @@
from test_utils import (
get_testdata_dir, ReleaseToolsTestCase, SkipIfExternalToolsUnavailable)
from verity_utils import (
- CalculateVbmetaDigest, CreateHashtreeInfoGenerator,
- CreateVerityImageBuilder, HashtreeInfo,
- VerifiedBootVersion1HashtreeInfoGenerator)
+ CalculateVbmetaDigest, CreateVerityImageBuilder)
BLOCK_SIZE = common.BLOCK_SIZE
-class VerifiedBootVersion1HashtreeInfoGeneratorTest(ReleaseToolsTestCase):
-
- def setUp(self):
- self.testdata_dir = get_testdata_dir()
-
- self.partition_size = 1024 * 1024
- self.prop_dict = {
- 'verity': 'true',
- 'verity_fec': 'true',
- 'system_verity_block_device': '/dev/block/system',
- 'system_size': self.partition_size
- }
-
- self.hash_algorithm = "sha256"
- self.fixed_salt = (
- "aee087a5be3b982978c923f566a94613496b417f2af592639bc80d141e34dfe7")
- self.expected_root_hash = (
- "0b7c4565e87b1026e11fbab91c0bc29e185c847a5b44d40e6e86e461e8adf80d")
-
- def _CreateSimg(self, raw_data): # pylint: disable=no-self-use
- output_file = common.MakeTempFile()
- raw_image = common.MakeTempFile()
- with open(raw_image, 'wb') as f:
- f.write(raw_data)
-
- cmd = ["img2simg", raw_image, output_file, '4096']
- common.RunAndCheckOutput(cmd)
- return output_file
-
- def _GenerateImage(self):
- partition_size = 1024 * 1024
- prop_dict = {
- 'partition_size': str(partition_size),
- 'verity': 'true',
- 'verity_block_device': '/dev/block/system',
- 'verity_key': os.path.join(self.testdata_dir, 'testkey'),
- 'verity_fec': 'true',
- 'verity_signer_cmd': 'verity_signer',
- }
- verity_image_builder = CreateVerityImageBuilder(prop_dict)
- self.assertIsNotNone(verity_image_builder)
- adjusted_size = verity_image_builder.CalculateMaxImageSize()
-
- raw_image = bytearray(adjusted_size)
- for i in range(adjusted_size):
- raw_image[i] = ord('0') + i % 10
-
- output_file = self._CreateSimg(raw_image)
-
- # Append the verity metadata.
- verity_image_builder.Build(output_file)
-
- return output_file
-
- @SkipIfExternalToolsUnavailable()
- def test_CreateHashtreeInfoGenerator(self):
- image_file = sparse_img.SparseImage(self._GenerateImage())
-
- generator = CreateHashtreeInfoGenerator(
- 'system', image_file, self.prop_dict)
- self.assertEqual(
- VerifiedBootVersion1HashtreeInfoGenerator, type(generator))
- self.assertEqual(self.partition_size, generator.partition_size)
- self.assertTrue(generator.fec_supported)
-
- @SkipIfExternalToolsUnavailable()
- def test_DecomposeSparseImage(self):
- image_file = sparse_img.SparseImage(self._GenerateImage())
-
- generator = VerifiedBootVersion1HashtreeInfoGenerator(
- self.partition_size, 4096, True)
- generator.DecomposeSparseImage(image_file)
- self.assertEqual(991232, generator.filesystem_size)
- self.assertEqual(12288, generator.hashtree_size)
- self.assertEqual(32768, generator.metadata_size)
-
- @SkipIfExternalToolsUnavailable()
- def test_ParseHashtreeMetadata(self):
- image_file = sparse_img.SparseImage(self._GenerateImage())
- generator = VerifiedBootVersion1HashtreeInfoGenerator(
- self.partition_size, 4096, True)
- generator.DecomposeSparseImage(image_file)
-
- # pylint: disable=protected-access
- generator._ParseHashtreeMetadata()
-
- self.assertEqual(
- self.hash_algorithm, generator.hashtree_info.hash_algorithm)
- self.assertEqual(self.fixed_salt, generator.hashtree_info.salt)
- self.assertEqual(self.expected_root_hash, generator.hashtree_info.root_hash)
-
- @SkipIfExternalToolsUnavailable()
- def test_ValidateHashtree_smoke(self):
- generator = VerifiedBootVersion1HashtreeInfoGenerator(
- self.partition_size, 4096, True)
- generator.image = sparse_img.SparseImage(self._GenerateImage())
-
- generator.hashtree_info = info = HashtreeInfo()
- info.filesystem_range = RangeSet(data=[0, 991232 // 4096])
- info.hashtree_range = RangeSet(
- data=[991232 // 4096, (991232 + 12288) // 4096])
- info.hash_algorithm = self.hash_algorithm
- info.salt = self.fixed_salt
- info.root_hash = self.expected_root_hash
-
- self.assertTrue(generator.ValidateHashtree())
-
- @SkipIfExternalToolsUnavailable()
- def test_ValidateHashtree_failure(self):
- generator = VerifiedBootVersion1HashtreeInfoGenerator(
- self.partition_size, 4096, True)
- generator.image = sparse_img.SparseImage(self._GenerateImage())
-
- generator.hashtree_info = info = HashtreeInfo()
- info.filesystem_range = RangeSet(data=[0, 991232 // 4096])
- info.hashtree_range = RangeSet(
- data=[991232 // 4096, (991232 + 12288) // 4096])
- info.hash_algorithm = self.hash_algorithm
- info.salt = self.fixed_salt
- info.root_hash = "a" + self.expected_root_hash[1:]
-
- self.assertFalse(generator.ValidateHashtree())
-
- @SkipIfExternalToolsUnavailable()
- def test_Generate(self):
- image_file = sparse_img.SparseImage(self._GenerateImage())
- generator = CreateHashtreeInfoGenerator('system', 4096, self.prop_dict)
- info = generator.Generate(image_file)
-
- self.assertEqual(RangeSet(data=[0, 991232 // 4096]), info.filesystem_range)
- self.assertEqual(RangeSet(data=[991232 // 4096, (991232 + 12288) // 4096]),
- info.hashtree_range)
- self.assertEqual(self.hash_algorithm, info.hash_algorithm)
- self.assertEqual(self.fixed_salt, info.salt)
- self.assertEqual(self.expected_root_hash, info.root_hash)
-
-
-class VerifiedBootVersion1VerityImageBuilderTest(ReleaseToolsTestCase):
-
- DEFAULT_PARTITION_SIZE = 4096 * 1024
- DEFAULT_PROP_DICT = {
- 'partition_size': str(DEFAULT_PARTITION_SIZE),
- 'verity': 'true',
- 'verity_block_device': '/dev/block/system',
- 'verity_key': os.path.join(get_testdata_dir(), 'testkey'),
- 'verity_fec': 'true',
- 'verity_signer_cmd': 'verity_signer',
- }
-
- def test_init(self):
- prop_dict = copy.deepcopy(self.DEFAULT_PROP_DICT)
- verity_image_builder = CreateVerityImageBuilder(prop_dict)
- self.assertIsNotNone(verity_image_builder)
- self.assertEqual(1, verity_image_builder.version)
-
- def test_init_MissingProps(self):
- prop_dict = copy.deepcopy(self.DEFAULT_PROP_DICT)
- del prop_dict['verity']
- self.assertIsNone(CreateVerityImageBuilder(prop_dict))
-
- prop_dict = copy.deepcopy(self.DEFAULT_PROP_DICT)
- del prop_dict['verity_block_device']
- self.assertIsNone(CreateVerityImageBuilder(prop_dict))
-
- @SkipIfExternalToolsUnavailable()
- def test_CalculateMaxImageSize(self):
- verity_image_builder = CreateVerityImageBuilder(self.DEFAULT_PROP_DICT)
- size = verity_image_builder.CalculateMaxImageSize()
- self.assertLess(size, self.DEFAULT_PARTITION_SIZE)
-
- # Same result by explicitly passing the partition size.
- self.assertEqual(
- verity_image_builder.CalculateMaxImageSize(),
- verity_image_builder.CalculateMaxImageSize(
- self.DEFAULT_PARTITION_SIZE))
-
- @staticmethod
- def _BuildAndVerify(prop, verify_key):
- verity_image_builder = CreateVerityImageBuilder(prop)
- image_size = verity_image_builder.CalculateMaxImageSize()
-
- # Build the sparse image with verity metadata.
- input_dir = common.MakeTempDir()
- image = common.MakeTempFile(suffix='.img')
- cmd = ['mkuserimg_mke2fs', input_dir, image, 'ext4', '/system',
- str(image_size), '-j', '0', '-s']
- common.RunAndCheckOutput(cmd)
- verity_image_builder.Build(image)
-
- # Verify the verity metadata.
- cmd = ['verity_verifier', image, '-mincrypt', verify_key]
- common.RunAndCheckOutput(cmd)
-
- @SkipIfExternalToolsUnavailable()
- def test_Build(self):
- self._BuildAndVerify(
- self.DEFAULT_PROP_DICT,
- os.path.join(get_testdata_dir(), 'testkey_mincrypt'))
-
- @SkipIfExternalToolsUnavailable()
- def test_Build_ValidationCheck(self):
- # A validity check for the test itself: the image shouldn't be verifiable
- # with wrong key.
- self.assertRaises(
- common.ExternalError,
- self._BuildAndVerify,
- self.DEFAULT_PROP_DICT,
- os.path.join(get_testdata_dir(), 'verity_mincrypt'))
-
- @SkipIfExternalToolsUnavailable()
- def test_Build_FecDisabled(self):
- prop_dict = copy.deepcopy(self.DEFAULT_PROP_DICT)
- del prop_dict['verity_fec']
- self._BuildAndVerify(
- prop_dict,
- os.path.join(get_testdata_dir(), 'testkey_mincrypt'))
-
- @SkipIfExternalToolsUnavailable()
- def test_Build_SquashFs(self):
- verity_image_builder = CreateVerityImageBuilder(self.DEFAULT_PROP_DICT)
- verity_image_builder.CalculateMaxImageSize()
-
- # Build the sparse image with verity metadata.
- input_dir = common.MakeTempDir()
- image = common.MakeTempFile(suffix='.img')
- cmd = ['mksquashfsimage.sh', input_dir, image, '-s']
- common.RunAndCheckOutput(cmd)
- verity_image_builder.PadSparseImage(image)
- verity_image_builder.Build(image)
-
- # Verify the verity metadata.
- cmd = ["verity_verifier", image, '-mincrypt',
- os.path.join(get_testdata_dir(), 'testkey_mincrypt')]
- common.RunAndCheckOutput(cmd)
-
-
class VerifiedBootVersion2VerityImageBuilderTest(ReleaseToolsTestCase):
DEFAULT_PROP_DICT = {
diff --git a/tools/releasetools/testdata/sepolicy.apex b/tools/releasetools/testdata/sepolicy.apex
new file mode 100644
index 0000000..2c646cd
--- /dev/null
+++ b/tools/releasetools/testdata/sepolicy.apex
Binary files differ
diff --git a/tools/releasetools/testdata/tuna_vbmeta.zip b/tools/releasetools/testdata/tuna_vbmeta.zip
new file mode 100644
index 0000000..64e7bb3
--- /dev/null
+++ b/tools/releasetools/testdata/tuna_vbmeta.zip
Binary files differ
diff --git a/tools/releasetools/testdata/tuna_vbmeta_system.zip b/tools/releasetools/testdata/tuna_vbmeta_system.zip
new file mode 100644
index 0000000..3d76ef0
--- /dev/null
+++ b/tools/releasetools/testdata/tuna_vbmeta_system.zip
Binary files differ
diff --git a/tools/releasetools/testdata/tuna_vbmeta_vendor.zip b/tools/releasetools/testdata/tuna_vbmeta_vendor.zip
new file mode 100644
index 0000000..6994c59
--- /dev/null
+++ b/tools/releasetools/testdata/tuna_vbmeta_vendor.zip
Binary files differ
diff --git a/tools/releasetools/verity_utils.py b/tools/releasetools/verity_utils.py
index d55ad88..dddb7f4 100644
--- a/tools/releasetools/verity_utils.py
+++ b/tools/releasetools/verity_utils.py
@@ -49,107 +49,6 @@
Exception.__init__(self, message)
-def GetVerityFECSize(image_size):
- cmd = ["fec", "-s", str(image_size)]
- output = common.RunAndCheckOutput(cmd, verbose=False)
- return int(output)
-
-
-def GetVerityTreeSize(image_size):
- cmd = ["build_verity_tree", "-s", str(image_size)]
- output = common.RunAndCheckOutput(cmd, verbose=False)
- return int(output)
-
-
-def GetVerityMetadataSize(image_size):
- cmd = ["build_verity_metadata", "size", str(image_size)]
- output = common.RunAndCheckOutput(cmd, verbose=False)
- return int(output)
-
-
-def GetVeritySize(image_size, fec_supported):
- verity_tree_size = GetVerityTreeSize(image_size)
- verity_metadata_size = GetVerityMetadataSize(image_size)
- verity_size = verity_tree_size + verity_metadata_size
- if fec_supported:
- fec_size = GetVerityFECSize(image_size + verity_size)
- return verity_size + fec_size
- return verity_size
-
-
-def GetSimgSize(image_file):
- simg = sparse_img.SparseImage(image_file, build_map=False)
- return simg.blocksize * simg.total_blocks
-
-
-def ZeroPadSimg(image_file, pad_size):
- blocks = pad_size // BLOCK_SIZE
- logger.info("Padding %d blocks (%d bytes)", blocks, pad_size)
- simg = sparse_img.SparseImage(image_file, mode="r+b", build_map=False)
- simg.AppendFillChunk(0, blocks)
-
-
-def BuildVerityFEC(sparse_image_path, verity_path, verity_fec_path,
- padding_size):
- cmd = ["fec", "-e", "-p", str(padding_size), sparse_image_path,
- verity_path, verity_fec_path]
- common.RunAndCheckOutput(cmd)
-
-
-def BuildVerityTree(sparse_image_path, verity_image_path):
- cmd = ["build_verity_tree", "-A", FIXED_SALT, sparse_image_path,
- verity_image_path]
- output = common.RunAndCheckOutput(cmd)
- root, salt = output.split()
- return root, salt
-
-
-def BuildVerityMetadata(image_size, verity_metadata_path, root_hash, salt,
- block_device, signer_path, key, signer_args,
- verity_disable):
- cmd = ["build_verity_metadata", "build", str(image_size),
- verity_metadata_path, root_hash, salt, block_device, signer_path, key]
- if signer_args:
- cmd.append("--signer_args=\"%s\"" % (' '.join(signer_args),))
- if verity_disable:
- cmd.append("--verity_disable")
- common.RunAndCheckOutput(cmd)
-
-
-def Append2Simg(sparse_image_path, unsparse_image_path, error_message):
- """Appends the unsparse image to the given sparse image.
-
- Args:
- sparse_image_path: the path to the (sparse) image
- unsparse_image_path: the path to the (unsparse) image
-
- Raises:
- BuildVerityImageError: On error.
- """
- cmd = ["append2simg", sparse_image_path, unsparse_image_path]
- try:
- common.RunAndCheckOutput(cmd)
- except:
- logger.exception(error_message)
- raise BuildVerityImageError(error_message)
-
-
-def Append(target, file_to_append, error_message):
- """Appends file_to_append to target.
-
- Raises:
- BuildVerityImageError: On error.
- """
- try:
- with open(target, 'ab') as out_file, \
- open(file_to_append, 'rb') as input_file:
- for line in input_file:
- out_file.write(line)
- except IOError:
- logger.exception(error_message)
- raise BuildVerityImageError(error_message)
-
-
def CreateVerityImageBuilder(prop_dict):
"""Returns a verity image builder based on the given build properties.
@@ -166,23 +65,6 @@
if partition_size:
partition_size = int(partition_size)
- # Verified Boot 1.0
- verity_supported = prop_dict.get("verity") == "true"
- is_verity_partition = "verity_block_device" in prop_dict
- if verity_supported and is_verity_partition:
- if OPTIONS.verity_signer_path is not None:
- signer_path = OPTIONS.verity_signer_path
- else:
- signer_path = prop_dict["verity_signer_cmd"]
- return Version1VerityImageBuilder(
- partition_size,
- prop_dict["verity_block_device"],
- prop_dict.get("verity_fec") == "true",
- signer_path,
- prop_dict["verity_key"] + ".pk8",
- OPTIONS.verity_signer_args,
- "verity_disable" in prop_dict)
-
# Verified Boot 2.0
if (prop_dict.get("avb_hash_enable") == "true" or
prop_dict.get("avb_hashtree_enable") == "true"):
@@ -245,125 +127,6 @@
raise NotImplementedError
-class Version1VerityImageBuilder(VerityImageBuilder):
- """A VerityImageBuilder for Verified Boot 1.0."""
-
- def __init__(self, partition_size, block_dev, fec_supported, signer_path,
- signer_key, signer_args, verity_disable):
- self.version = 1
- self.partition_size = partition_size
- self.block_device = block_dev
- self.fec_supported = fec_supported
- self.signer_path = signer_path
- self.signer_key = signer_key
- self.signer_args = signer_args
- self.verity_disable = verity_disable
- self.image_size = None
- self.verity_size = None
-
- def CalculateDynamicPartitionSize(self, image_size):
- # This needs to be implemented. Note that returning the given image size as
- # the partition size doesn't make sense, as it will fail later.
- raise NotImplementedError
-
- def CalculateMaxImageSize(self, partition_size=None):
- """Calculates the max image size by accounting for the verity metadata.
-
- Args:
- partition_size: The partition size, which defaults to self.partition_size
- if unspecified.
-
- Returns:
- The size of the image adjusted for verity metadata.
- """
- if partition_size is None:
- partition_size = self.partition_size
- assert partition_size > 0, \
- "Invalid partition size: {}".format(partition_size)
-
- hi = partition_size
- if hi % BLOCK_SIZE != 0:
- hi = (hi // BLOCK_SIZE) * BLOCK_SIZE
-
- # verity tree and fec sizes depend on the partition size, which
- # means this estimate is always going to be unnecessarily small
- verity_size = GetVeritySize(hi, self.fec_supported)
- lo = partition_size - verity_size
- result = lo
-
- # do a binary search for the optimal size
- while lo < hi:
- i = ((lo + hi) // (2 * BLOCK_SIZE)) * BLOCK_SIZE
- v = GetVeritySize(i, self.fec_supported)
- if i + v <= partition_size:
- if result < i:
- result = i
- verity_size = v
- lo = i + BLOCK_SIZE
- else:
- hi = i
-
- self.image_size = result
- self.verity_size = verity_size
-
- logger.info(
- "Calculated image size for verity: partition_size %d, image_size %d, "
- "verity_size %d", partition_size, result, verity_size)
- return result
-
- def Build(self, out_file):
- """Creates an image that is verifiable using dm-verity.
-
- Args:
- out_file: the output image.
-
- Returns:
- AssertionError: On invalid partition sizes.
- BuildVerityImageError: On other errors.
- """
- image_size = int(self.image_size)
- tempdir_name = common.MakeTempDir(suffix="_verity_images")
-
- # Get partial image paths.
- verity_image_path = os.path.join(tempdir_name, "verity.img")
- verity_metadata_path = os.path.join(tempdir_name, "verity_metadata.img")
-
- # Build the verity tree and get the root hash and salt.
- root_hash, salt = BuildVerityTree(out_file, verity_image_path)
-
- # Build the metadata blocks.
- BuildVerityMetadata(
- image_size, verity_metadata_path, root_hash, salt, self.block_device,
- self.signer_path, self.signer_key, self.signer_args,
- self.verity_disable)
-
- padding_size = self.partition_size - self.image_size - self.verity_size
- assert padding_size >= 0
-
- # Build the full verified image.
- Append(
- verity_image_path, verity_metadata_path,
- "Failed to append verity metadata")
-
- if self.fec_supported:
- # Build FEC for the entire partition, including metadata.
- verity_fec_path = os.path.join(tempdir_name, "verity_fec.img")
- BuildVerityFEC(
- out_file, verity_image_path, verity_fec_path, padding_size)
- Append(verity_image_path, verity_fec_path, "Failed to append FEC")
-
- Append2Simg(
- out_file, verity_image_path, "Failed to append verity data")
-
- def PadSparseImage(self, out_file):
- sparse_image_size = GetSimgSize(out_file)
- if sparse_image_size > self.image_size:
- raise BuildVerityImageError(
- "Error: image size of {} is larger than partition size of "
- "{}".format(sparse_image_size, self.image_size))
- ZeroPadSimg(out_file, self.image_size - sparse_image_size)
-
-
class VerifiedBootVersion2VerityImageBuilder(VerityImageBuilder):
"""A VerityImageBuilder for Verified Boot 2.0."""
@@ -378,11 +141,7 @@
self.footer_type = footer_type
self.avbtool = avbtool
self.algorithm = algorithm
- self.key_path = key_path
- if key_path and not os.path.exists(key_path) and OPTIONS.search_path:
- new_key_path = os.path.join(OPTIONS.search_path, key_path)
- if os.path.exists(new_key_path):
- self.key_path = new_key_path
+ self.key_path = common.ResolveAVBSigningPathArgs(key_path)
self.salt = salt
self.signing_args = signing_args
@@ -519,199 +278,6 @@
raise BuildVerityImageError("Failed to add AVB footer: {}".format(output))
-class HashtreeInfoGenerationError(Exception):
- """An Exception raised during hashtree info generation."""
-
- def __init__(self, message):
- Exception.__init__(self, message)
-
-
-class HashtreeInfo(object):
- def __init__(self):
- self.hashtree_range = None
- self.filesystem_range = None
- self.hash_algorithm = None
- self.salt = None
- self.root_hash = None
-
-
-def CreateHashtreeInfoGenerator(partition_name, block_size, info_dict):
- generator = None
- if (info_dict.get("verity") == "true" and
- info_dict.get("{}_verity_block_device".format(partition_name))):
- partition_size = info_dict["{}_size".format(partition_name)]
- fec_supported = info_dict.get("verity_fec") == "true"
- generator = VerifiedBootVersion1HashtreeInfoGenerator(
- partition_size, block_size, fec_supported)
-
- return generator
-
-
-class HashtreeInfoGenerator(object):
- def Generate(self, image):
- raise NotImplementedError
-
- def DecomposeSparseImage(self, image):
- raise NotImplementedError
-
- def ValidateHashtree(self):
- raise NotImplementedError
-
-
-class VerifiedBootVersion1HashtreeInfoGenerator(HashtreeInfoGenerator):
- """A class that parses the metadata of hashtree for a given partition."""
-
- def __init__(self, partition_size, block_size, fec_supported):
- """Initialize VerityTreeInfo with the sparse image and input property.
-
- Arguments:
- partition_size: The whole size in bytes of a partition, including the
- filesystem size, padding size, and verity size.
- block_size: Expected size in bytes of each block for the sparse image.
- fec_supported: True if the verity section contains fec data.
- """
-
- self.block_size = block_size
- self.partition_size = partition_size
- self.fec_supported = fec_supported
-
- self.image = None
- self.filesystem_size = None
- self.hashtree_size = None
- self.metadata_size = None
-
- prop_dict = {
- 'partition_size': str(partition_size),
- 'verity': 'true',
- 'verity_fec': 'true' if fec_supported else None,
- # 'verity_block_device' needs to be present to indicate a verity-enabled
- # partition.
- 'verity_block_device': '',
- # We don't need the following properties that are needed for signing the
- # verity metadata.
- 'verity_key': '',
- 'verity_signer_cmd': None,
- }
- self.verity_image_builder = CreateVerityImageBuilder(prop_dict)
-
- self.hashtree_info = HashtreeInfo()
-
- def DecomposeSparseImage(self, image):
- """Calculate the verity size based on the size of the input image.
-
- Since we already know the structure of a verity enabled image to be:
- [filesystem, verity_hashtree, verity_metadata, fec_data]. We can then
- calculate the size and offset of each section.
- """
-
- self.image = image
- assert self.block_size == image.blocksize
- assert self.partition_size == image.total_blocks * self.block_size, \
- "partition size {} doesn't match with the calculated image size." \
- " total_blocks: {}".format(self.partition_size, image.total_blocks)
-
- adjusted_size = self.verity_image_builder.CalculateMaxImageSize()
- assert adjusted_size % self.block_size == 0
-
- verity_tree_size = GetVerityTreeSize(adjusted_size)
- assert verity_tree_size % self.block_size == 0
-
- metadata_size = GetVerityMetadataSize(adjusted_size)
- assert metadata_size % self.block_size == 0
-
- self.filesystem_size = adjusted_size
- self.hashtree_size = verity_tree_size
- self.metadata_size = metadata_size
-
- self.hashtree_info.filesystem_range = RangeSet(
- data=[0, adjusted_size // self.block_size])
- self.hashtree_info.hashtree_range = RangeSet(
- data=[adjusted_size // self.block_size,
- (adjusted_size + verity_tree_size) // self.block_size])
-
- def _ParseHashtreeMetadata(self):
- """Parses the hash_algorithm, root_hash, salt from the metadata block."""
-
- metadata_start = self.filesystem_size + self.hashtree_size
- metadata_range = RangeSet(
- data=[metadata_start // self.block_size,
- (metadata_start + self.metadata_size) // self.block_size])
- meta_data = b''.join(self.image.ReadRangeSet(metadata_range))
-
- # More info about the metadata structure available in:
- # system/extras/verity/build_verity_metadata.py
- META_HEADER_SIZE = 268
- header_bin = meta_data[0:META_HEADER_SIZE]
- header = struct.unpack("II256sI", header_bin)
-
- # header: magic_number, version, signature, table_len
- assert header[0] == 0xb001b001, header[0]
- table_len = header[3]
- verity_table = meta_data[META_HEADER_SIZE: META_HEADER_SIZE + table_len]
- table_entries = verity_table.rstrip().split()
-
- # Expected verity table format: "1 block_device block_device block_size
- # block_size data_blocks data_blocks hash_algorithm root_hash salt"
- assert len(table_entries) == 10, "Unexpected verity table size {}".format(
- len(table_entries))
- assert (int(table_entries[3]) == self.block_size and
- int(table_entries[4]) == self.block_size)
- assert (int(table_entries[5]) * self.block_size == self.filesystem_size and
- int(table_entries[6]) * self.block_size == self.filesystem_size)
-
- self.hashtree_info.hash_algorithm = table_entries[7].decode()
- self.hashtree_info.root_hash = table_entries[8].decode()
- self.hashtree_info.salt = table_entries[9].decode()
-
- def ValidateHashtree(self):
- """Checks that we can reconstruct the verity hash tree."""
-
- # Writes the filesystem section to a temp file; and calls the executable
- # build_verity_tree to construct the hash tree.
- adjusted_partition = common.MakeTempFile(prefix="adjusted_partition")
- with open(adjusted_partition, "wb") as fd:
- self.image.WriteRangeDataToFd(self.hashtree_info.filesystem_range, fd)
-
- generated_verity_tree = common.MakeTempFile(prefix="verity")
- root_hash, salt = BuildVerityTree(adjusted_partition, generated_verity_tree)
-
- # The salt should be always identical, as we use fixed value.
- assert salt == self.hashtree_info.salt, \
- "Calculated salt {} doesn't match the one in metadata {}".format(
- salt, self.hashtree_info.salt)
-
- if root_hash != self.hashtree_info.root_hash:
- logger.warning(
- "Calculated root hash %s doesn't match the one in metadata %s",
- root_hash, self.hashtree_info.root_hash)
- return False
-
- # Reads the generated hash tree and checks if it has the exact same bytes
- # as the one in the sparse image.
- with open(generated_verity_tree, 'rb') as fd:
- return fd.read() == b''.join(self.image.ReadRangeSet(
- self.hashtree_info.hashtree_range))
-
- def Generate(self, image):
- """Parses and validates the hashtree info in a sparse image.
-
- Returns:
- hashtree_info: The information needed to reconstruct the hashtree.
-
- Raises:
- HashtreeInfoGenerationError: If we fail to generate the exact bytes of
- the hashtree.
- """
-
- self.DecomposeSparseImage(image)
- self._ParseHashtreeMetadata()
-
- if not self.ValidateHashtree():
- raise HashtreeInfoGenerationError("Failed to reconstruct the verity tree")
-
- return self.hashtree_info
-
-
def CreateCustomImageBuilder(info_dict, partition_name, partition_size,
key_path, algorithm, signing_args):
builder = None
diff --git a/tools/sbom/Android.bp b/tools/sbom/Android.bp
new file mode 100644
index 0000000..4837dde
--- /dev/null
+++ b/tools/sbom/Android.bp
@@ -0,0 +1,57 @@
+// Copyright (C) 2023 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package {
+ default_applicable_licenses: ["Android-Apache-2.0"],
+}
+
+python_binary_host {
+ name: "generate-sbom",
+ srcs: [
+ "generate-sbom.py",
+ ],
+ version: {
+ py3: {
+ embedded_launcher: true,
+ },
+ },
+ libs: [
+ "metadata_file_proto_py",
+ "libprotobuf-python",
+ "sbom_lib",
+ ],
+}
+
+python_library_host {
+ name: "sbom_lib",
+ srcs: [
+ "sbom_data.py",
+ "sbom_writers.py",
+ ],
+}
+
+python_test_host {
+ name: "sbom_writers_test",
+ main: "sbom_writers_test.py",
+ srcs: [
+ "sbom_writers_test.py",
+ ],
+ data: [
+ "testdata/*",
+ ],
+ libs: [
+ "sbom_lib",
+ ],
+ test_suites: ["general-tests"],
+}
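+
+// The test module above can be run with atest (assuming a standard AOSP checkout
+// and a lunch'd build environment):
+//   atest sbom_writers_test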
diff --git a/tools/sbom/generate-sbom.py b/tools/sbom/generate-sbom.py
new file mode 100755
index 0000000..b19be87
--- /dev/null
+++ b/tools/sbom/generate-sbom.py
@@ -0,0 +1,586 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2023 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Generate the SBOM of the current target product in SPDX format.
+Usage example:
+ generate-sbom.py --output_file out/target/product/vsoc_x86_64/sbom.spdx \
+ --metadata out/target/product/vsoc_x86_64/sbom-metadata.csv \
+ --build_version $(cat out/target/product/vsoc_x86_64/build_fingerprint.txt) \
+ --product_mfr=Google
+"""
+
+import argparse
+import csv
+import datetime
+import google.protobuf.text_format as text_format
+import hashlib
+import os
+import metadata_file_pb2
+import sbom_data
+import sbom_writers
+
+
+# Package type
+PKG_SOURCE = 'SOURCE'
+PKG_UPSTREAM = 'UPSTREAM'
+PKG_PREBUILT = 'PREBUILT'
+
+# Security tag
+NVD_CPE23 = 'NVD-CPE2.3:'
+
+# Report
+ISSUE_NO_METADATA = 'No metadata generated in Make for installed files:'
+ISSUE_NO_METADATA_FILE = 'No METADATA file found for installed file:'
+ISSUE_METADATA_FILE_INCOMPLETE = 'METADATA file incomplete:'
+ISSUE_UNKNOWN_SECURITY_TAG_TYPE = 'Unknown security tag type:'
+ISSUE_INSTALLED_FILE_NOT_EXIST = 'Non-existent installed files:'
+INFO_METADATA_FOUND_FOR_PACKAGE = 'METADATA file found for packages:'
+
+SOONG_PREBUILT_MODULE_TYPES = [
+ 'android_app_import',
+ 'android_library_import',
+ 'cc_prebuilt_binary',
+ 'cc_prebuilt_library',
+ 'cc_prebuilt_library_headers',
+ 'cc_prebuilt_library_shared',
+ 'cc_prebuilt_library_static',
+ 'cc_prebuilt_object',
+ 'dex_import',
+ 'java_import',
+ 'java_sdk_library_import',
+ 'java_system_modules_import',
+ 'libclang_rt_prebuilt_library_static',
+ 'libclang_rt_prebuilt_library_shared',
+ 'llvm_prebuilt_library_static',
+ 'ndk_prebuilt_object',
+ 'ndk_prebuilt_shared_stl',
+ 'ndk_prebuilt_static_stl',
+ 'prebuilt_apex',
+ 'prebuilt_bootclasspath_fragment',
+ 'prebuilt_dsp',
+ 'prebuilt_firmware',
+ 'prebuilt_kernel_modules',
+ 'prebuilt_rfsa',
+ 'prebuilt_root',
+ 'rust_prebuilt_dylib',
+ 'rust_prebuilt_library',
+ 'rust_prebuilt_rlib',
+ 'vndk_prebuilt_shared',
+]
+
+
+def get_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-v', '--verbose', action='store_true', default=False, help='Print more information.')
+ parser.add_argument('--output_file', required=True, help='The generated SBOM file in SPDX format.')
+ parser.add_argument('--metadata', required=True, help='The SBOM metadata file path.')
+ parser.add_argument('--build_version', required=True, help='The build version.')
+ parser.add_argument('--product_mfr', required=True, help='The product manufacturer.')
+ parser.add_argument('--json', action='store_true', default=False, help='Generate the SBOM file in SPDX JSON format.')
+ parser.add_argument('--unbundled_apk', action='store_true', default=False, help='Generate SBOM for unbundled APKs.')
+ parser.add_argument('--unbundled_apex', action='store_true', default=False, help='Generate SBOM for unbundled APEXes.')
+
+ return parser.parse_args()
+
+
+def log(*info):
+ if args.verbose:
+ for i in info:
+ print(i)
+
+
+def encode_for_spdxid(s):
+ """Simple encode for string values used in SPDXID which uses the charset of A-Za-Z0-9.-"""
+ result = ''
+ for c in s:
+ if c.isalnum() or c in '.-':
+ result += c
+ elif c in '_@/':
+ result += '-'
+ else:
+ result += '0x' + c.encode('utf-8').hex()
+
+ return result.lstrip('-')
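+
+# Illustrative examples of the encoding above (hypothetical inputs):
+#   encode_for_spdxid('external/zlib') -> 'external-zlib' ('/' maps to '-')
+#   encode_for_spdxid('libc++.so') -> 'libc0x2b0x2b.so' ('+' is hex-escaped)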
+
+
+def new_package_id(package_name, type):
+ return f'SPDXRef-{type}-{encode_for_spdxid(package_name)}'
+
+
+def new_file_id(file_path):
+ return f'SPDXRef-{encode_for_spdxid(file_path)}'
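+
+# For example (hypothetical inputs), new_package_id('external/zlib', PKG_SOURCE) yields
+# 'SPDXRef-SOURCE-external-zlib', and new_file_id('/system/lib/libz.so') yields
+# 'SPDXRef-system-lib-libz.so' (encode_for_spdxid strips the leading '-').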
+
+
+def checksum(file_path):
+ h = hashlib.sha1()
+ if os.path.islink(file_path):
+ h.update(os.readlink(file_path).encode('utf-8'))
+ else:
+ with open(file_path, 'rb') as f:
+ h.update(f.read())
+ return f'SHA1: {h.hexdigest()}'
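+
+# Note: the returned string is formatted as 'SHA1: <40-hex-digit digest>', and a symlink
+# is hashed by its target path rather than by the content of the linked file.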
+
+
+def is_soong_prebuilt_module(file_metadata):
+ return (file_metadata['soong_module_type'] and
+ file_metadata['soong_module_type'] in SOONG_PREBUILT_MODULE_TYPES)
+
+
+def is_source_package(file_metadata):
+ module_path = file_metadata['module_path']
+ return module_path.startswith('external/') and not is_prebuilt_package(file_metadata)
+
+
+def is_prebuilt_package(file_metadata):
+ module_path = file_metadata['module_path']
+ if module_path:
+ return (module_path.startswith('prebuilts/') or
+ is_soong_prebuilt_module(file_metadata) or
+ file_metadata['is_prebuilt_make_module'])
+
+ kernel_module_copy_files = file_metadata['kernel_module_copy_files']
+ if kernel_module_copy_files and not kernel_module_copy_files.startswith('ANDROID-GEN:'):
+ return True
+
+ return False
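+
+# For example (hypothetical paths), a file built from module_path 'prebuilts/sdk/foo' is
+# classified as a prebuilt package, while a file from 'external/zlib' that is not built by
+# a Soong prebuilt module is classified as a source package.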
+
+
+def get_source_package_info(file_metadata, metadata_file_path):
+ """Return source package info exists in its METADATA file, currently including name, security tag
+ and external SBOM reference.
+
+ See go/android-spdx and go/android-sbom-gen for more details.
+ """
+ if not metadata_file_path:
+ return file_metadata['module_path'], []
+
+ metadata_proto = metadata_file_protos[metadata_file_path]
+ external_refs = []
+ for tag in metadata_proto.third_party.security.tag:
+ if tag.lower().startswith((NVD_CPE23 + 'cpe:2.3:').lower()):
+ external_refs.append(
+ sbom_data.PackageExternalRef(category=sbom_data.PackageExternalRefCategory.SECURITY,
+ type=sbom_data.PackageExternalRefType.cpe23Type,
+ locator=tag.removeprefix(NVD_CPE23)))
+ elif tag.lower().startswith((NVD_CPE23 + 'cpe:/').lower()):
+ external_refs.append(
+ sbom_data.PackageExternalRef(category=sbom_data.PackageExternalRefCategory.SECURITY,
+ type=sbom_data.PackageExternalRefType.cpe22Type,
+ locator=tag.removeprefix(NVD_CPE23)))
+
+ if metadata_proto.name:
+ return metadata_proto.name, external_refs
+ else:
+ return os.path.basename(metadata_file_path), external_refs # use the directory name as the package name
+
+
+def get_prebuilt_package_name(file_metadata, metadata_file_path):
+ """Return name of a prebuilt package, which can be from the METADATA file, metadata file path,
+ module path or kernel module's source path if the installed file is a kernel module.
+
+ See go/android-spdx and go/android-sbom-gen for more details.
+ """
+ name = None
+ if metadata_file_path:
+ metadata_proto = metadata_file_protos[metadata_file_path]
+ if metadata_proto.name:
+ name = metadata_proto.name
+ else:
+ name = metadata_file_path
+ elif file_metadata['module_path']:
+ name = file_metadata['module_path']
+ elif file_metadata['kernel_module_copy_files']:
+ src_path = file_metadata['kernel_module_copy_files'].split(':')[0]
+ name = os.path.dirname(src_path)
+
+ return name.removeprefix('prebuilts/').replace('/', '-')
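+
+# For example (hypothetical path), a prebuilt at 'prebuilts/module_sdk/art' becomes the
+# package name 'module_sdk-art'.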
+
+
+def get_metadata_file_path(file_metadata):
+ """Search for METADATA file of a package and return its path."""
+ metadata_path = ''
+ if file_metadata['module_path']:
+ metadata_path = file_metadata['module_path']
+ elif file_metadata['kernel_module_copy_files']:
+ metadata_path = os.path.dirname(file_metadata['kernel_module_copy_files'].split(':')[0])
+
+ while metadata_path and not os.path.exists(metadata_path + '/METADATA'):
+ metadata_path = os.path.dirname(metadata_path)
+
+ return metadata_path
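+
+# For example (hypothetical path), for module_path 'external/foo/bar' this walks up through
+# 'external/foo/bar', 'external/foo', 'external', ... and returns the first directory that
+# contains a METADATA file, or '' if none is found.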
+
+
+def get_package_version(metadata_file_path):
+ """Return a package's version in its METADATA file."""
+ if not metadata_file_path:
+ return None
+ metadata_proto = metadata_file_protos[metadata_file_path]
+ return metadata_proto.third_party.version
+
+
+def get_package_homepage(metadata_file_path):
+ """Return a package's homepage URL in its METADATA file."""
+ if not metadata_file_path:
+ return None
+ metadata_proto = metadata_file_protos[metadata_file_path]
+ if metadata_proto.third_party.homepage:
+ return metadata_proto.third_party.homepage
+ for url in metadata_proto.third_party.url:
+ if url.type == metadata_file_pb2.URL.Type.HOMEPAGE:
+ return url.value
+
+ return None
+
+
+def get_package_download_location(metadata_file_path):
+ """Return a package's code repository URL in its METADATA file."""
+ if not metadata_file_path:
+ return None
+ metadata_proto = metadata_file_protos[metadata_file_path]
+ if metadata_proto.third_party.url:
+ urls = sorted(metadata_proto.third_party.url, key=lambda url: url.type)
+ if urls[0].type != metadata_file_pb2.URL.Type.HOMEPAGE:
+ return urls[0].value
+ elif len(urls) > 1:
+ return urls[1].value
+
+ return None
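+
+# The URLs are sorted by their type enum so that, if a HOMEPAGE URL sorts first, it is
+# skipped in favor of the next URL (e.g. a repository or archive URL); the exact preference
+# order depends on the metadata_file_pb2.URL.Type enum values.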
+
+
+def get_sbom_fragments(installed_file_metadata, metadata_file_path):
+ """Return SPDX fragment of source/prebuilt packages, which usually contains a SOURCE/PREBUILT
+ package, a UPSTREAM package and an external SBOM document reference if sbom_ref defined in its
+ METADATA file.
+
+ See go/android-spdx and go/android-sbom-gen for more details.
+ """
+ external_doc_ref = None
+ packages = []
+ relationships = []
+
+ # Info from METADATA file
+ homepage = get_package_homepage(metadata_file_path)
+ version = get_package_version(metadata_file_path)
+ download_location = get_package_download_location(metadata_file_path)
+
+ if is_source_package(installed_file_metadata):
+ # Source fork packages
+ name, external_refs = get_source_package_info(installed_file_metadata, metadata_file_path)
+ source_package_id = new_package_id(name, PKG_SOURCE)
+ source_package = sbom_data.Package(id=source_package_id, name=name, version=args.build_version,
+ download_location=sbom_data.VALUE_NONE,
+ supplier='Organization: ' + args.product_mfr,
+ external_refs=external_refs)
+
+ upstream_package_id = new_package_id(name, PKG_UPSTREAM)
+ upstream_package = sbom_data.Package(id=upstream_package_id, name=name, version=version,
+ supplier=('Organization: ' + homepage) if homepage else sbom_data.VALUE_NOASSERTION,
+ download_location=download_location)
+ packages += [source_package, upstream_package]
+ relationships.append(sbom_data.Relationship(id1=source_package_id,
+ relationship=sbom_data.RelationshipType.VARIANT_OF,
+ id2=upstream_package_id))
+ elif is_prebuilt_package(installed_file_metadata):
+ # Prebuilt fork packages
+ name = get_prebuilt_package_name(installed_file_metadata, metadata_file_path)
+ prebuilt_package_id = new_package_id(name, PKG_PREBUILT)
+ prebuilt_package = sbom_data.Package(id=prebuilt_package_id,
+ name=name,
+ download_location=sbom_data.VALUE_NONE,
+ version=version if version else args.build_version,
+ supplier='Organization: ' + args.product_mfr)
+
+ upstream_package_id = new_package_id(name, PKG_UPSTREAM)
+ upstream_package = sbom_data.Package(id=upstream_package_id, name=name, version = version,
+ supplier=('Organization: ' + homepage) if homepage else sbom_data.VALUE_NOASSERTION,
+ download_location=download_location)
+ packages += [prebuilt_package, upstream_package]
+ relationships.append(sbom_data.Relationship(id1=prebuilt_package_id,
+ relationship=sbom_data.RelationshipType.VARIANT_OF,
+ id2=upstream_package_id))
+
+ if metadata_file_path:
+ metadata_proto = metadata_file_protos[metadata_file_path]
+ if metadata_proto.third_party.WhichOneof('sbom') == 'sbom_ref':
+ sbom_url = metadata_proto.third_party.sbom_ref.url
+ sbom_checksum = metadata_proto.third_party.sbom_ref.checksum
+ upstream_element_id = metadata_proto.third_party.sbom_ref.element_id
+ if sbom_url and sbom_checksum and upstream_element_id:
+ doc_ref_id = f'DocumentRef-{PKG_UPSTREAM}-{encode_for_spdxid(name)}'
+ external_doc_ref = sbom_data.DocumentExternalReference(id=doc_ref_id,
+ uri=sbom_url,
+ checksum=sbom_checksum)
+ relationships.append(
+ sbom_data.Relationship(id1=upstream_package_id,
+ relationship=sbom_data.RelationshipType.VARIANT_OF,
+ id2=doc_ref_id + ':' + upstream_element_id))
+
+ return external_doc_ref, packages, relationships
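+
+# Sketch of a resulting fragment (hypothetical names): a SOURCE package
+# 'SPDXRef-SOURCE-zlib' that is VARIANT_OF an UPSTREAM package 'SPDXRef-UPSTREAM-zlib',
+# which may in turn be VARIANT_OF an element in an external SBOM document referenced
+# via a 'DocumentRef-UPSTREAM-zlib' external document reference.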
+
+
+def save_report(report_file_path, report):
+ with open(report_file_path, 'w', encoding='utf-8') as report_file:
+ for type, issues in report.items():
+ report_file.write(type + '\n')
+ for issue in issues:
+ report_file.write('\t' + issue + '\n')
+ report_file.write('\n')
+
+
+# Validate the metadata generated by Make for installed files and report if there is no metadata.
+def installed_file_has_metadata(installed_file_metadata, report):
+ installed_file = installed_file_metadata['installed_file']
+ module_path = installed_file_metadata['module_path']
+ product_copy_files = installed_file_metadata['product_copy_files']
+ kernel_module_copy_files = installed_file_metadata['kernel_module_copy_files']
+ is_platform_generated = installed_file_metadata['is_platform_generated']
+
+ if (not module_path and
+ not product_copy_files and
+ not kernel_module_copy_files and
+ not is_platform_generated and
+ not installed_file.endswith('.fsv_meta')):
+ report[ISSUE_NO_METADATA].append(installed_file)
+ return False
+
+ return True
+
+
+def report_metadata_file(metadata_file_path, installed_file_metadata, report):
+ if metadata_file_path:
+ report[INFO_METADATA_FOUND_FOR_PACKAGE].append(
+ 'installed_file: {}, module_path: {}, METADATA file: {}'.format(
+ installed_file_metadata['installed_file'],
+ installed_file_metadata['module_path'],
+ metadata_file_path + '/METADATA'))
+
+ package_metadata = metadata_file_pb2.Metadata()
+ with open(metadata_file_path + '/METADATA', 'rt') as f:
+ text_format.Parse(f.read(), package_metadata)
+
+ if metadata_file_path not in metadata_file_protos:
+ metadata_file_protos[metadata_file_path] = package_metadata
+ if not package_metadata.name:
+ report[ISSUE_METADATA_FILE_INCOMPLETE].append(f'{metadata_file_path}/METADATA does not have "name"')
+
+ if not package_metadata.third_party.version:
+ report[ISSUE_METADATA_FILE_INCOMPLETE].append(
+ f'{metadata_file_path}/METADATA does not have "third_party.version"')
+
+ for tag in package_metadata.third_party.security.tag:
+ if not tag.startswith(NVD_CPE23):
+ report[ISSUE_UNKNOWN_SECURITY_TAG_TYPE].append(
+ f'Unknown security tag type: {tag} in {metadata_file_path}/METADATA')
+ else:
+ report[ISSUE_NO_METADATA_FILE].append(
+ "installed_file: {}, module_path: {}".format(
+ installed_file_metadata['installed_file'], installed_file_metadata['module_path']))
+
+
+def generate_sbom_for_unbundled_apk():
+ with open(args.metadata, newline='') as sbom_metadata_file:
+ reader = csv.DictReader(sbom_metadata_file)
+ doc = sbom_data.Document(name=args.build_version,
+ namespace=f'https://www.google.com/sbom/spdx/android/{args.build_version}',
+ creators=['Organization: ' + args.product_mfr])
+ for installed_file_metadata in reader:
+ installed_file = installed_file_metadata['installed_file']
+ if args.output_file != installed_file_metadata['build_output_path'] + '.spdx.json':
+ continue
+
+ module_path = installed_file_metadata['module_path']
+ package_id = new_package_id(module_path, PKG_PREBUILT)
+ package = sbom_data.Package(id=package_id,
+ name=module_path,
+ version=args.build_version,
+ supplier='Organization: ' + args.product_mfr)
+ file_id = new_file_id(installed_file)
+ file = sbom_data.File(id=file_id,
+ name=installed_file,
+ checksum=checksum(installed_file_metadata['build_output_path']))
+ relationship = sbom_data.Relationship(id1=file_id,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=package_id)
+ doc.add_package(package)
+ doc.files.append(file)
+ doc.describes = file_id
+ doc.add_relationship(relationship)
+ doc.created = datetime.datetime.now(tz=datetime.timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
+ break
+
+ with open(args.output_file, 'w', encoding='utf-8') as file:
+ sbom_writers.JSONWriter.write(doc, file)
+ fragment_file = args.output_file.removesuffix('.spdx.json') + '-fragment.spdx'
+ with open(fragment_file, 'w', encoding='utf-8') as file:
+ sbom_writers.TagValueWriter.write(doc, file, fragment=True)
+
+
+def main():
+ global args
+ args = get_args()
+ log('Args:', vars(args))
+
+ if args.unbundled_apk:
+ generate_sbom_for_unbundled_apk()
+ return
+
+ global metadata_file_protos
+ metadata_file_protos = {}
+
+ product_package = sbom_data.Package(id=sbom_data.SPDXID_PRODUCT,
+ name=sbom_data.PACKAGE_NAME_PRODUCT,
+ download_location=sbom_data.VALUE_NONE,
+ version=args.build_version,
+ supplier='Organization: ' + args.product_mfr,
+ files_analyzed=True)
+
+ doc = sbom_data.Document(name=args.build_version,
+ namespace=f'https://www.google.com/sbom/spdx/android/{args.build_version}',
+ creators=['Organization: ' + args.product_mfr])
+ if not args.unbundled_apex:
+ doc.packages.append(product_package)
+
+ doc.packages.append(sbom_data.Package(id=sbom_data.SPDXID_PLATFORM,
+ name=sbom_data.PACKAGE_NAME_PLATFORM,
+ download_location=sbom_data.VALUE_NONE,
+ version=args.build_version,
+ supplier='Organization: ' + args.product_mfr))
+
+ # Report on some issues and information
+ report = {
+ ISSUE_NO_METADATA: [],
+ ISSUE_NO_METADATA_FILE: [],
+ ISSUE_METADATA_FILE_INCOMPLETE: [],
+ ISSUE_UNKNOWN_SECURITY_TAG_TYPE: [],
+ ISSUE_INSTALLED_FILE_NOT_EXIST: [],
+ INFO_METADATA_FOUND_FOR_PACKAGE: [],
+ }
+
+ # Scan the metadata in CSV file and create the corresponding package and file records in SPDX
+ with open(args.metadata, newline='') as sbom_metadata_file:
+ reader = csv.DictReader(sbom_metadata_file)
+ for installed_file_metadata in reader:
+ installed_file = installed_file_metadata['installed_file']
+ module_path = installed_file_metadata['module_path']
+ product_copy_files = installed_file_metadata['product_copy_files']
+ kernel_module_copy_files = installed_file_metadata['kernel_module_copy_files']
+ build_output_path = installed_file_metadata['build_output_path']
+ is_static_lib = installed_file_metadata['is_static_lib']
+
+ if not installed_file_has_metadata(installed_file_metadata, report):
+ continue
+ if not is_static_lib and not (os.path.islink(build_output_path) or os.path.isfile(build_output_path)):
+ # Static libraries are exempt from this existence check for now since they are not shipped on devices.
+ report[ISSUE_INSTALLED_FILE_NOT_EXIST].append(installed_file)
+ continue
+
+ file_id = new_file_id(installed_file)
+ # TODO(b/285453664): Soong should report the information of statically linked libraries to Make.
+ # This happens when a different sanitized version of a static library is used in linking.
+ # As a workaround, fall back to the following SHA1 checksum for static libraries created by Soong
+ # whose .a files cannot be located, since Soong doesn't report the information to Make.
+ sha1 = 'SHA1: da39a3ee5e6b4b0d3255bfef95601890afd80709' # SHA1 of empty string
+ if os.path.islink(build_output_path) or os.path.isfile(build_output_path):
+ sha1 = checksum(build_output_path)
+ doc.files.append(sbom_data.File(id=file_id,
+ name=installed_file,
+ checksum=sha1))
+
+ if not is_static_lib:
+ if not args.unbundled_apex:
+ product_package.file_ids.append(file_id)
+ elif len(doc.files) > 1:
+ doc.add_relationship(sbom_data.Relationship(doc.files[0].id, sbom_data.RelationshipType.CONTAINS, file_id))
+
+ if is_source_package(installed_file_metadata) or is_prebuilt_package(installed_file_metadata):
+ metadata_file_path = get_metadata_file_path(installed_file_metadata)
+ report_metadata_file(metadata_file_path, installed_file_metadata, report)
+
+ # File from source fork packages or prebuilt fork packages
+ external_doc_ref, pkgs, rels = get_sbom_fragments(installed_file_metadata, metadata_file_path)
+ if len(pkgs) > 0:
+ if external_doc_ref:
+ doc.add_external_ref(external_doc_ref)
+ for p in pkgs:
+ doc.add_package(p)
+ for rel in rels:
+ doc.add_relationship(rel)
+ fork_package_id = pkgs[0].id # The first package should be the source/prebuilt fork package
+ doc.add_relationship(sbom_data.Relationship(id1=file_id,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=fork_package_id))
+ elif module_path or installed_file_metadata['is_platform_generated']:
+ # File from PLATFORM package
+ doc.add_relationship(sbom_data.Relationship(id1=file_id,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=sbom_data.SPDXID_PLATFORM))
+ elif product_copy_files:
+ # Format of product_copy_files: <source path>:<dest path>
+ src_path = product_copy_files.split(':')[0]
+ # So far product_copy_files are copied from the system, kernel, hardware, frameworks and
+ # device directories, so process them as files from the PLATFORM package.
+ doc.add_relationship(sbom_data.Relationship(id1=file_id,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=sbom_data.SPDXID_PLATFORM))
+ elif installed_file.endswith('.fsv_meta'):
+ # See build/make/core/Makefile:2988
+ doc.add_relationship(sbom_data.Relationship(id1=file_id,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=sbom_data.SPDXID_PLATFORM))
+ elif kernel_module_copy_files.startswith('ANDROID-GEN'):
+ # For the four files generated for _dlkm, _ramdisk partitions
+ # See build/make/core/Makefile:323
+ doc.add_relationship(sbom_data.Relationship(id1=file_id,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=sbom_data.SPDXID_PLATFORM))
+
+ # Process static libraries and whole static libraries the installed file links to
+ static_libs = installed_file_metadata['static_libraries']
+ whole_static_libs = installed_file_metadata['whole_static_libraries']
+ all_static_libs = (static_libs + ' ' + whole_static_libs).strip()
+ if all_static_libs:
+ for lib in all_static_libs.split(' '):
+ doc.add_relationship(sbom_data.Relationship(id1=file_id,
+ relationship=sbom_data.RelationshipType.STATIC_LINK,
+ id2=new_file_id(lib + '.a')))
+
+ if args.unbundled_apex:
+ doc.describes = doc.files[0].id
+
+ # Save SBOM records to output file
+ doc.generate_packages_verification_code()
+ doc.created = datetime.datetime.now(tz=datetime.timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
+ prefix = args.output_file
+ if prefix.endswith('.spdx'):
+ prefix = prefix.removesuffix('.spdx')
+ elif prefix.endswith('.spdx.json'):
+ prefix = prefix.removesuffix('.spdx.json')
+
+ output_file = prefix + '.spdx'
+ if args.unbundled_apex:
+ output_file = prefix + '-fragment.spdx'
+ with open(output_file, 'w', encoding="utf-8") as file:
+ sbom_writers.TagValueWriter.write(doc, file, fragment=args.unbundled_apex)
+ if args.json:
+ with open(prefix + '.spdx.json', 'w', encoding="utf-8") as file:
+ sbom_writers.JSONWriter.write(doc, file)
+
+ save_report(prefix + '-gen-report.txt', report)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/tools/sbom/sbom_data.py b/tools/sbom/sbom_data.py
new file mode 100644
index 0000000..ea38e36
--- /dev/null
+++ b/tools/sbom/sbom_data.py
@@ -0,0 +1,140 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2023 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Define data classes that model SBOMs defined by SPDX. The data classes can be
+written out to different SPDX formats (tag-value, JSON, etc.) with the corresponding
+writer utilities.
+
+Refer to the SPDX 2.3 spec: https://spdx.github.io/spdx-spec/v2.3/ and go/android-spdx for details
+of the fields in each data class.
+"""
+
+from dataclasses import dataclass, field
+from typing import List
+import hashlib
+
+SPDXID_DOC = 'SPDXRef-DOCUMENT'
+SPDXID_PRODUCT = 'SPDXRef-PRODUCT'
+SPDXID_PLATFORM = 'SPDXRef-PLATFORM'
+
+PACKAGE_NAME_PRODUCT = 'PRODUCT'
+PACKAGE_NAME_PLATFORM = 'PLATFORM'
+
+VALUE_NOASSERTION = 'NOASSERTION'
+VALUE_NONE = 'NONE'
+
+
+class PackageExternalRefCategory:
+ SECURITY = 'SECURITY'
+ PACKAGE_MANAGER = 'PACKAGE-MANAGER'
+ PERSISTENT_ID = 'PERSISTENT-ID'
+ OTHER = 'OTHER'
+
+
+class PackageExternalRefType:
+ cpe22Type = 'cpe22Type'
+ cpe23Type = 'cpe23Type'
+
+
+@dataclass
+class PackageExternalRef:
+ category: PackageExternalRefCategory
+ type: PackageExternalRefType
+ locator: str
+
+
+@dataclass
+class Package:
+ name: str
+ id: str
+ version: str = None
+ supplier: str = None
+ download_location: str = None
+ files_analyzed: bool = False
+ verification_code: str = None
+ file_ids: List[str] = field(default_factory=list)
+ external_refs: List[PackageExternalRef] = field(default_factory=list)
+
+
+@dataclass
+class File:
+ id: str
+ name: str
+ checksum: str
+
+
+class RelationshipType:
+ DESCRIBES = 'DESCRIBES'
+ VARIANT_OF = 'VARIANT_OF'
+ GENERATED_FROM = 'GENERATED_FROM'
+ CONTAINS = 'CONTAINS'
+ STATIC_LINK = 'STATIC_LINK'
+
+
+@dataclass
+class Relationship:
+ id1: str
+ relationship: RelationshipType
+ id2: str
+
+
+@dataclass
+class DocumentExternalReference:
+ id: str
+ uri: str
+ checksum: str
+
+
+@dataclass
+class Document:
+ name: str
+ namespace: str
+ id: str = SPDXID_DOC
+ describes: str = SPDXID_PRODUCT
+ creators: List[str] = field(default_factory=list)
+ created: str = None
+ external_refs: List[DocumentExternalReference] = field(default_factory=list)
+ packages: List[Package] = field(default_factory=list)
+ files: List[File] = field(default_factory=list)
+ relationships: List[Relationship] = field(default_factory=list)
+
+ def add_external_ref(self, external_ref):
+ if not any(external_ref.uri == ref.uri for ref in self.external_refs):
+ self.external_refs.append(external_ref)
+
+ def add_package(self, package):
+ if not any(package.id == p.id for p in self.packages):
+ self.packages.append(package)
+
+ def add_relationship(self, rel):
+ if not any(rel.id1 == r.id1 and rel.id2 == r.id2 and rel.relationship == r.relationship
+ for r in self.relationships):
+ self.relationships.append(rel)
+
+ def generate_packages_verification_code(self):
+ for package in self.packages:
+ if not package.file_ids:
+ continue
+
+ checksums = []
+ for file in self.files:
+ if file.id in package.file_ids:
+ checksums.append(file.checksum)
+ checksums.sort()
+ h = hashlib.sha1()
+ h.update(''.join(checksums).encode(encoding='utf-8'))
+ package.verification_code = h.hexdigest()
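+
+# Note: the method above roughly mirrors the SPDX package verification code, computed here
+# as the SHA-1 of the concatenation of the package's sorted file checksum strings (the
+# 'SHA1: ' prefixes are included in the hashed input).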
diff --git a/tools/sbom/sbom_writers.py b/tools/sbom/sbom_writers.py
new file mode 100644
index 0000000..1cb864d
--- /dev/null
+++ b/tools/sbom/sbom_writers.py
@@ -0,0 +1,356 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2023 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Serialize objects defined in the sbom_data package to SPDX formats: tag-value and JSON.
+"""
+
+import json
+import sbom_data
+
+SPDX_VER = 'SPDX-2.3'
+DATA_LIC = 'CC0-1.0'
+
+
+class Tags:
+ # Common
+ SPDXID = 'SPDXID'
+ SPDX_VERSION = 'SPDXVersion'
+ DATA_LICENSE = 'DataLicense'
+ DOCUMENT_NAME = 'DocumentName'
+ DOCUMENT_NAMESPACE = 'DocumentNamespace'
+ CREATED = 'Created'
+ CREATOR = 'Creator'
+ EXTERNAL_DOCUMENT_REF = 'ExternalDocumentRef'
+
+ # Package
+ PACKAGE_NAME = 'PackageName'
+ PACKAGE_DOWNLOAD_LOCATION = 'PackageDownloadLocation'
+ PACKAGE_VERSION = 'PackageVersion'
+ PACKAGE_SUPPLIER = 'PackageSupplier'
+ FILES_ANALYZED = 'FilesAnalyzed'
+ PACKAGE_VERIFICATION_CODE = 'PackageVerificationCode'
+ PACKAGE_EXTERNAL_REF = 'ExternalRef'
+ # Package license
+ PACKAGE_LICENSE_CONCLUDED = 'PackageLicenseConcluded'
+ PACKAGE_LICENSE_INFO_FROM_FILES = 'PackageLicenseInfoFromFiles'
+ PACKAGE_LICENSE_DECLARED = 'PackageLicenseDeclared'
+ PACKAGE_LICENSE_COMMENTS = 'PackageLicenseComments'
+
+ # File
+ FILE_NAME = 'FileName'
+ FILE_CHECKSUM = 'FileChecksum'
+ # File license
+ FILE_LICENSE_CONCLUDED = 'LicenseConcluded'
+ FILE_LICENSE_INFO_IN_FILE = 'LicenseInfoInFile'
+ FILE_LICENSE_COMMENTS = 'LicenseComments'
+ FILE_COPYRIGHT_TEXT = 'FileCopyrightText'
+ FILE_NOTICE = 'FileNotice'
+ FILE_ATTRIBUTION_TEXT = 'FileAttributionText'
+
+ # Relationship
+ RELATIONSHIP = 'Relationship'
+
+
+class TagValueWriter:
+ @staticmethod
+ def marshal_doc_headers(sbom_doc):
+ headers = [
+ f'{Tags.SPDX_VERSION}: {SPDX_VER}',
+ f'{Tags.DATA_LICENSE}: {DATA_LIC}',
+ f'{Tags.SPDXID}: {sbom_doc.id}',
+ f'{Tags.DOCUMENT_NAME}: {sbom_doc.name}',
+ f'{Tags.DOCUMENT_NAMESPACE}: {sbom_doc.namespace}',
+ ]
+ for creator in sbom_doc.creators:
+ headers.append(f'{Tags.CREATOR}: {creator}')
+ headers.append(f'{Tags.CREATED}: {sbom_doc.created}')
+ for doc_ref in sbom_doc.external_refs:
+ headers.append(
+ f'{Tags.EXTERNAL_DOCUMENT_REF}: {doc_ref.id} {doc_ref.uri} {doc_ref.checksum}')
+ headers.append('')
+ return headers
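+
+ # marshal_doc_headers produces a header block like (illustrative values):
+ #   SPDXVersion: SPDX-2.3
+ #   DataLicense: CC0-1.0
+ #   SPDXID: SPDXRef-DOCUMENT
+ #   DocumentName: <build_version>
+ #   DocumentNamespace: https://www.google.com/sbom/spdx/android/<build_version>
+ #   Creator: Organization: <product_mfr>
+ #   Created: <ISO-8601 UTC timestamp>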
+
+ @staticmethod
+ def marshal_package(sbom_doc, package, fragment):
+ download_location = sbom_data.VALUE_NOASSERTION
+ if package.download_location:
+ download_location = package.download_location
+ tagvalues = [
+ f'{Tags.PACKAGE_NAME}: {package.name}',
+ f'{Tags.SPDXID}: {package.id}',
+ f'{Tags.PACKAGE_DOWNLOAD_LOCATION}: {download_location}',
+ f'{Tags.FILES_ANALYZED}: {str(package.files_analyzed).lower()}',
+ ]
+ if package.version:
+ tagvalues.append(f'{Tags.PACKAGE_VERSION}: {package.version}')
+ if package.supplier:
+ tagvalues.append(f'{Tags.PACKAGE_SUPPLIER}: {package.supplier}')
+ if package.verification_code:
+ tagvalues.append(f'{Tags.PACKAGE_VERIFICATION_CODE}: {package.verification_code}')
+ if package.external_refs:
+ for external_ref in package.external_refs:
+ tagvalues.append(
+ f'{Tags.PACKAGE_EXTERNAL_REF}: {external_ref.category} {external_ref.type} {external_ref.locator}')
+
+ tagvalues.append('')
+
+ if package.id == sbom_doc.describes and not fragment:
+ tagvalues.append(
+ f'{Tags.RELATIONSHIP}: {sbom_doc.id} {sbom_data.RelationshipType.DESCRIBES} {sbom_doc.describes}')
+ tagvalues.append('')
+
+ for file in sbom_doc.files:
+ if file.id in package.file_ids:
+ tagvalues += TagValueWriter.marshal_file(file)
+
+ return tagvalues
+
+ @staticmethod
+ def marshal_packages(sbom_doc, fragment):
+ tagvalues = []
+ marshaled_relationships = []
+ i = 0
+ packages = sbom_doc.packages
+ while i < len(packages):
+ if (i + 1 < len(packages)
+ and packages[i].id.startswith('SPDXRef-SOURCE-')
+ and packages[i + 1].id.startswith('SPDXRef-UPSTREAM-')):
+ # Output SOURCE and UPSTREAM packages and their VARIANT_OF relationship together, so that
+ # they are close to each other in tag-value SBOMs.
+ tagvalues += TagValueWriter.marshal_package(sbom_doc, packages[i], fragment)
+ tagvalues += TagValueWriter.marshal_package(sbom_doc, packages[i + 1], fragment)
+ rel = next((r for r in sbom_doc.relationships if
+ r.id1 == packages[i].id and
+ r.id2 == packages[i + 1].id and
+ r.relationship == sbom_data.RelationshipType.VARIANT_OF), None)
+ if rel:
+ marshaled_relationships.append(rel)
+ tagvalues.append(TagValueWriter.marshal_relationship(rel))
+ tagvalues.append('')
+
+ i += 2
+ else:
+ tagvalues += TagValueWriter.marshal_package(sbom_doc, packages[i], fragment)
+ i += 1
+
+ return tagvalues, marshaled_relationships
+
+ @staticmethod
+ def marshal_file(file):
+ tagvalues = [
+ f'{Tags.FILE_NAME}: {file.name}',
+ f'{Tags.SPDXID}: {file.id}',
+ f'{Tags.FILE_CHECKSUM}: {file.checksum}',
+ '',
+ ]
+
+ return tagvalues
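+
+ # A marshaled file renders as tag-value lines like (illustrative values):
+ #   FileName: /system/lib64/libz.so
+ #   SPDXID: SPDXRef-system-lib64-libz.so
+ #   FileChecksum: SHA1: <40-hex-digit digest>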
+
+ @staticmethod
+ def marshal_files(sbom_doc, fragment):
+ tagvalues = []
+ files_in_packages = []
+ for package in sbom_doc.packages:
+ files_in_packages += package.file_ids
+ for file in sbom_doc.files:
+ if file.id in files_in_packages:
+ continue
+ tagvalues += TagValueWriter.marshal_file(file)
+ if file.id == sbom_doc.describes and not fragment:
+ # Fragment is not a full SBOM document so the relationship DESCRIBES is not applicable.
+ tagvalues.append(
+ f'{Tags.RELATIONSHIP}: {sbom_doc.id} {sbom_data.RelationshipType.DESCRIBES} {sbom_doc.describes}')
+ tagvalues.append('')
+ return tagvalues
+
+ @staticmethod
+ def marshal_relationship(rel):
+ return f'{Tags.RELATIONSHIP}: {rel.id1} {rel.relationship} {rel.id2}'
+
+ @staticmethod
+ def marshal_relationships(sbom_doc, marshaled_rels):
+ tagvalues = []
+ sorted_rels = sorted(sbom_doc.relationships, key=lambda r: r.id2 + r.id1)
+ for rel in sorted_rels:
+ if any(r.id1 == rel.id1 and r.id2 == rel.id2 and r.relationship == rel.relationship
+ for r in marshaled_rels):
+ continue
+ tagvalues.append(TagValueWriter.marshal_relationship(rel))
+ tagvalues.append('')
+ return tagvalues
+
+ @staticmethod
+ def write(sbom_doc, file, fragment=False):
+ content = []
+ if not fragment:
+ content += TagValueWriter.marshal_doc_headers(sbom_doc)
+ content += TagValueWriter.marshal_files(sbom_doc, fragment)
+ tagvalues, marshaled_relationships = TagValueWriter.marshal_packages(sbom_doc, fragment)
+ content += tagvalues
+ content += TagValueWriter.marshal_relationships(sbom_doc, marshaled_relationships)
+ file.write('\n'.join(content))
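+ # A minimal usage sketch (assuming `sbom_doc` is an sbom_data.Document):
+ # with open('sbom.spdx', 'w') as f:
+ # TagValueWriter.write(sbom_doc, f)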
+
+
+class PropNames:
+ # Common
+ SPDXID = 'SPDXID'
+ SPDX_VERSION = 'spdxVersion'
+ DATA_LICENSE = 'dataLicense'
+ NAME = 'name'
+ DOCUMENT_NAMESPACE = 'documentNamespace'
+ CREATION_INFO = 'creationInfo'
+ CREATORS = 'creators'
+ CREATED = 'created'
+ EXTERNAL_DOCUMENT_REF = 'externalDocumentRefs'
+ DOCUMENT_DESCRIBES = 'documentDescribes'
+ EXTERNAL_DOCUMENT_ID = 'externalDocumentId'
+ EXTERNAL_DOCUMENT_URI = 'spdxDocument'
+ EXTERNAL_DOCUMENT_CHECKSUM = 'checksum'
+ ALGORITHM = 'algorithm'
+ CHECKSUM_VALUE = 'checksumValue'
+
+ # Package
+ PACKAGES = 'packages'
+ PACKAGE_DOWNLOAD_LOCATION = 'downloadLocation'
+ PACKAGE_VERSION = 'versionInfo'
+ PACKAGE_SUPPLIER = 'supplier'
+ FILES_ANALYZED = 'filesAnalyzed'
+ PACKAGE_VERIFICATION_CODE = 'packageVerificationCode'
+ PACKAGE_VERIFICATION_CODE_VALUE = 'packageVerificationCodeValue'
+ PACKAGE_EXTERNAL_REFS = 'externalRefs'
+ PACKAGE_EXTERNAL_REF_CATEGORY = 'referenceCategory'
+ PACKAGE_EXTERNAL_REF_TYPE = 'referenceType'
+ PACKAGE_EXTERNAL_REF_LOCATOR = 'referenceLocator'
+ PACKAGE_HAS_FILES = 'hasFiles'
+
+ # File
+ FILES = 'files'
+ FILE_NAME = 'fileName'
+ FILE_CHECKSUMS = 'checksums'
+
+ # Relationship
+ RELATIONSHIPS = 'relationships'
+ REL_ELEMENT_ID = 'spdxElementId'
+ REL_RELATED_ELEMENT_ID = 'relatedSpdxElement'
+ REL_TYPE = 'relationshipType'
+
+
+class JSONWriter:
+ @staticmethod
+ def marshal_doc_headers(sbom_doc):
+ headers = {
+ PropNames.SPDX_VERSION: SPDX_VER,
+ PropNames.DATA_LICENSE: DATA_LIC,
+ PropNames.SPDXID: sbom_doc.id,
+ PropNames.NAME: sbom_doc.name,
+ PropNames.DOCUMENT_NAMESPACE: sbom_doc.namespace,
+ PropNames.CREATION_INFO: {}
+ }
+ creators = list(sbom_doc.creators)
+ headers[PropNames.CREATION_INFO][PropNames.CREATORS] = creators
+ headers[PropNames.CREATION_INFO][PropNames.CREATED] = sbom_doc.created
+ external_refs = []
+ for doc_ref in sbom_doc.external_refs:
+ checksum = doc_ref.checksum.split(': ')
+ external_refs.append({
+ PropNames.EXTERNAL_DOCUMENT_ID: doc_ref.id,
+ PropNames.EXTERNAL_DOCUMENT_URI: doc_ref.uri,
+ PropNames.EXTERNAL_DOCUMENT_CHECKSUM: {
+ PropNames.ALGORITHM: checksum[0],
+ PropNames.CHECKSUM_VALUE: checksum[1]
+ }
+ })
+ if external_refs:
+ headers[PropNames.EXTERNAL_DOCUMENT_REF] = external_refs
+ headers[PropNames.DOCUMENT_DESCRIBES] = [sbom_doc.describes]
+
+ return headers
+
+ @staticmethod
+ def marshal_packages(sbom_doc):
+ packages = []
+ for p in sbom_doc.packages:
+ package = {
+ PropNames.NAME: p.name,
+ PropNames.SPDXID: p.id,
+ PropNames.PACKAGE_DOWNLOAD_LOCATION: p.download_location if p.download_location else sbom_data.VALUE_NOASSERTION,
+ PropNames.FILES_ANALYZED: p.files_analyzed
+ }
+ if p.version:
+ package[PropNames.PACKAGE_VERSION] = p.version
+ if p.supplier:
+ package[PropNames.PACKAGE_SUPPLIER] = p.supplier
+ if p.verification_code:
+ package[PropNames.PACKAGE_VERIFICATION_CODE] = {
+ PropNames.PACKAGE_VERIFICATION_CODE_VALUE: p.verification_code
+ }
+ if p.external_refs:
+ package[PropNames.PACKAGE_EXTERNAL_REFS] = []
+ for ref in p.external_refs:
+ ext_ref = {
+ PropNames.PACKAGE_EXTERNAL_REF_CATEGORY: ref.category,
+ PropNames.PACKAGE_EXTERNAL_REF_TYPE: ref.type,
+ PropNames.PACKAGE_EXTERNAL_REF_LOCATOR: ref.locator,
+ }
+ package[PropNames.PACKAGE_EXTERNAL_REFS].append(ext_ref)
+ if p.file_ids:
+ package[PropNames.PACKAGE_HAS_FILES] = []
+ for file_id in p.file_ids:
+ package[PropNames.PACKAGE_HAS_FILES].append(file_id)
+
+ packages.append(package)
+
+ return {PropNames.PACKAGES: packages}
+
+ @staticmethod
+ def marshal_files(sbom_doc):
+ files = []
+ for f in sbom_doc.files:
+ file = {
+ PropNames.FILE_NAME: f.name,
+ PropNames.SPDXID: f.id
+ }
+ checksum = f.checksum.split(': ')
+ file[PropNames.FILE_CHECKSUMS] = [{
+ PropNames.ALGORITHM: checksum[0],
+ PropNames.CHECKSUM_VALUE: checksum[1],
+ }]
+ files.append(file)
+ return {PropNames.FILES: files}
+
+ @staticmethod
+ def marshal_relationships(sbom_doc):
+ relationships = []
+ sorted_rels = sorted(sbom_doc.relationships, key=lambda r: r.relationship + r.id2 + r.id1)
+ for r in sorted_rels:
+ rel = {
+ PropNames.REL_ELEMENT_ID: r.id1,
+ PropNames.REL_RELATED_ELEMENT_ID: r.id2,
+ PropNames.REL_TYPE: r.relationship,
+ }
+ relationships.append(rel)
+
+ return {PropNames.RELATIONSHIPS: relationships}
+
+ @staticmethod
+ def write(sbom_doc, file):
+ doc = {}
+ doc.update(JSONWriter.marshal_doc_headers(sbom_doc))
+ doc.update(JSONWriter.marshal_packages(sbom_doc))
+ doc.update(JSONWriter.marshal_files(sbom_doc))
+ doc.update(JSONWriter.marshal_relationships(sbom_doc))
+ file.write(json.dumps(doc, indent=4))
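+ # A minimal usage sketch (assuming `sbom_doc` is an sbom_data.Document):
+ # with open('sbom.spdx.json', 'w') as f:
+ # JSONWriter.write(sbom_doc, f)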
diff --git a/tools/sbom/sbom_writers_test.py b/tools/sbom/sbom_writers_test.py
new file mode 100644
index 0000000..cf85e01
--- /dev/null
+++ b/tools/sbom/sbom_writers_test.py
@@ -0,0 +1,173 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2023 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import io
+import pathlib
+import unittest
+import sbom_data
+import sbom_writers
+
+BUILD_FINGER_PRINT = 'build_finger_print'
+SUPPLIER_GOOGLE = 'Organization: Google'
+SUPPLIER_UPSTREAM = 'Organization: upstream'
+
+SPDXID_PREBUILT_PACKAGE1 = 'SPDXRef-PREBUILT-package1'
+SPDXID_SOURCE_PACKAGE1 = 'SPDXRef-SOURCE-package1'
+SPDXID_UPSTREAM_PACKAGE1 = 'SPDXRef-UPSTREAM-package1'
+
+SPDXID_FILE1 = 'SPDXRef-file1'
+SPDXID_FILE2 = 'SPDXRef-file2'
+SPDXID_FILE3 = 'SPDXRef-file3'
+SPDXID_FILE4 = 'SPDXRef-file4'
+
+
+class SBOMWritersTest(unittest.TestCase):
+
+ def setUp(self):
+ # SBOM of a product
+ self.sbom_doc = sbom_data.Document(name='test doc',
+ namespace='http://www.google.com/sbom/spdx/android',
+ creators=[SUPPLIER_GOOGLE],
+ created='2023-03-31T22:17:58Z',
+ describes=sbom_data.SPDXID_PRODUCT)
+ self.sbom_doc.add_external_ref(
+ sbom_data.DocumentExternalReference(id='DocumentRef-external_doc_ref',
+ uri='external_doc_uri',
+ checksum='SHA1: 1234567890'))
+ self.sbom_doc.add_package(
+ sbom_data.Package(id=sbom_data.SPDXID_PRODUCT,
+ name=sbom_data.PACKAGE_NAME_PRODUCT,
+ download_location=sbom_data.VALUE_NONE,
+ supplier=SUPPLIER_GOOGLE,
+ version=BUILD_FINGER_PRINT,
+ files_analyzed=True,
+ verification_code='123456',
+ file_ids=[SPDXID_FILE1, SPDXID_FILE2, SPDXID_FILE3]))
+
+ self.sbom_doc.add_package(
+ sbom_data.Package(id=sbom_data.SPDXID_PLATFORM,
+ name=sbom_data.PACKAGE_NAME_PLATFORM,
+ download_location=sbom_data.VALUE_NONE,
+ supplier=SUPPLIER_GOOGLE,
+ version=BUILD_FINGER_PRINT,
+ ))
+
+ self.sbom_doc.add_package(
+ sbom_data.Package(id=SPDXID_PREBUILT_PACKAGE1,
+ name='Prebuilt package1',
+ download_location=sbom_data.VALUE_NONE,
+ supplier=SUPPLIER_GOOGLE,
+ version=BUILD_FINGER_PRINT,
+ ))
+
+ self.sbom_doc.add_package(
+ sbom_data.Package(id=SPDXID_SOURCE_PACKAGE1,
+ name='Source package1',
+ download_location=sbom_data.VALUE_NONE,
+ supplier=SUPPLIER_GOOGLE,
+ version=BUILD_FINGER_PRINT,
+ external_refs=[sbom_data.PackageExternalRef(
+ category=sbom_data.PackageExternalRefCategory.SECURITY,
+ type=sbom_data.PackageExternalRefType.cpe22Type,
+ locator='cpe:/a:jsoncpp_project:jsoncpp:1.9.4')]
+ ))
+
+ self.sbom_doc.add_package(
+ sbom_data.Package(id=SPDXID_UPSTREAM_PACKAGE1,
+ name='Upstream package1',
+ supplier=SUPPLIER_UPSTREAM,
+ version='1.1',
+ ))
+
+ self.sbom_doc.add_relationship(sbom_data.Relationship(id1=SPDXID_SOURCE_PACKAGE1,
+ relationship=sbom_data.RelationshipType.VARIANT_OF,
+ id2=SPDXID_UPSTREAM_PACKAGE1))
+
+ self.sbom_doc.files.append(
+ sbom_data.File(id=SPDXID_FILE1, name='/bin/file1', checksum='SHA1: 11111'))
+ self.sbom_doc.files.append(
+ sbom_data.File(id=SPDXID_FILE2, name='/bin/file2', checksum='SHA1: 22222'))
+ self.sbom_doc.files.append(
+ sbom_data.File(id=SPDXID_FILE3, name='/bin/file3', checksum='SHA1: 33333'))
+ self.sbom_doc.files.append(
+ sbom_data.File(id=SPDXID_FILE4, name='file4.a', checksum='SHA1: 44444'))
+
+ self.sbom_doc.add_relationship(sbom_data.Relationship(id1=SPDXID_FILE1,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=sbom_data.SPDXID_PLATFORM))
+ self.sbom_doc.add_relationship(sbom_data.Relationship(id1=SPDXID_FILE2,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=SPDXID_PREBUILT_PACKAGE1))
+ self.sbom_doc.add_relationship(sbom_data.Relationship(id1=SPDXID_FILE3,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=SPDXID_SOURCE_PACKAGE1
+ ))
+ self.sbom_doc.add_relationship(sbom_data.Relationship(id1=SPDXID_FILE1,
+ relationship=sbom_data.RelationshipType.STATIC_LINK,
+ id2=SPDXID_FILE4
+ ))
+
+ # SBOM fragment of an APK
+ self.unbundled_sbom_doc = sbom_data.Document(name='test doc',
+ namespace='http://www.google.com/sbom/spdx/android',
+ creators=[SUPPLIER_GOOGLE],
+ created='2023-03-31T22:17:58Z',
+ describes=SPDXID_FILE1)
+
+ self.unbundled_sbom_doc.files.append(
+ sbom_data.File(id=SPDXID_FILE1, name='/bin/file1.apk', checksum='SHA1: 11111'))
+ self.unbundled_sbom_doc.add_package(
+ sbom_data.Package(id=SPDXID_SOURCE_PACKAGE1,
+ name='Unbundled apk package',
+ download_location=sbom_data.VALUE_NONE,
+ supplier=SUPPLIER_GOOGLE,
+ version=BUILD_FINGER_PRINT))
+ self.unbundled_sbom_doc.add_relationship(sbom_data.Relationship(id1=SPDXID_FILE1,
+ relationship=sbom_data.RelationshipType.GENERATED_FROM,
+ id2=SPDXID_SOURCE_PACKAGE1))
+
+ def test_tagvalue_writer(self):
+ with io.StringIO() as output:
+ sbom_writers.TagValueWriter.write(self.sbom_doc, output)
+ expected_output = pathlib.Path('testdata/expected_tagvalue_sbom.spdx').read_text()
+ self.maxDiff = None
+ self.assertEqual(expected_output, output.getvalue())
+
+ def test_tagvalue_writer_doc_describes_file(self):
+ with io.StringIO() as output:
+ self.sbom_doc.describes = SPDXID_FILE4
+ sbom_writers.TagValueWriter.write(self.sbom_doc, output)
+ expected_output = pathlib.Path('testdata/expected_tagvalue_sbom_doc_describes_file.spdx').read_text()
+ self.maxDiff = None
+ self.assertEqual(expected_output, output.getvalue())
+
+ def test_tagvalue_writer_unbundled(self):
+ with io.StringIO() as output:
+ sbom_writers.TagValueWriter.write(self.unbundled_sbom_doc, output, fragment=True)
+ expected_output = pathlib.Path('testdata/expected_tagvalue_sbom_unbundled.spdx').read_text()
+ self.maxDiff = None
+ self.assertEqual(expected_output, output.getvalue())
+
+ def test_json_writer(self):
+ with io.StringIO() as output:
+ sbom_writers.JSONWriter.write(self.sbom_doc, output)
+ expected_output = pathlib.Path('testdata/expected_json_sbom.spdx.json').read_text()
+ self.maxDiff = None
+ self.assertEqual(expected_output, output.getvalue())
+
+
+if __name__ == '__main__':
+ unittest.main(verbosity=2)
diff --git a/tools/sbom/testdata/expected_json_sbom.spdx.json b/tools/sbom/testdata/expected_json_sbom.spdx.json
new file mode 100644
index 0000000..53936c5
--- /dev/null
+++ b/tools/sbom/testdata/expected_json_sbom.spdx.json
@@ -0,0 +1,152 @@
+{
+ "spdxVersion": "SPDX-2.3",
+ "dataLicense": "CC0-1.0",
+ "SPDXID": "SPDXRef-DOCUMENT",
+ "name": "test doc",
+ "documentNamespace": "http://www.google.com/sbom/spdx/android",
+ "creationInfo": {
+ "creators": [
+ "Organization: Google"
+ ],
+ "created": "2023-03-31T22:17:58Z"
+ },
+ "externalDocumentRefs": [
+ {
+ "externalDocumentId": "DocumentRef-external_doc_ref",
+ "spdxDocument": "external_doc_uri",
+ "checksum": {
+ "algorithm": "SHA1",
+ "checksumValue": "1234567890"
+ }
+ }
+ ],
+ "documentDescribes": [
+ "SPDXRef-PRODUCT"
+ ],
+ "packages": [
+ {
+ "name": "PRODUCT",
+ "SPDXID": "SPDXRef-PRODUCT",
+ "downloadLocation": "NONE",
+ "filesAnalyzed": true,
+ "versionInfo": "build_finger_print",
+ "supplier": "Organization: Google",
+ "packageVerificationCode": {
+ "packageVerificationCodeValue": "123456"
+ },
+ "hasFiles": [
+ "SPDXRef-file1",
+ "SPDXRef-file2",
+ "SPDXRef-file3"
+ ]
+ },
+ {
+ "name": "PLATFORM",
+ "SPDXID": "SPDXRef-PLATFORM",
+ "downloadLocation": "NONE",
+ "filesAnalyzed": false,
+ "versionInfo": "build_finger_print",
+ "supplier": "Organization: Google"
+ },
+ {
+ "name": "Prebuilt package1",
+ "SPDXID": "SPDXRef-PREBUILT-package1",
+ "downloadLocation": "NONE",
+ "filesAnalyzed": false,
+ "versionInfo": "build_finger_print",
+ "supplier": "Organization: Google"
+ },
+ {
+ "name": "Source package1",
+ "SPDXID": "SPDXRef-SOURCE-package1",
+ "downloadLocation": "NONE",
+ "filesAnalyzed": false,
+ "versionInfo": "build_finger_print",
+ "supplier": "Organization: Google",
+ "externalRefs": [
+ {
+ "referenceCategory": "SECURITY",
+ "referenceType": "cpe22Type",
+ "referenceLocator": "cpe:/a:jsoncpp_project:jsoncpp:1.9.4"
+ }
+ ]
+ },
+ {
+ "name": "Upstream package1",
+ "SPDXID": "SPDXRef-UPSTREAM-package1",
+ "downloadLocation": "NOASSERTION",
+ "filesAnalyzed": false,
+ "versionInfo": "1.1",
+ "supplier": "Organization: upstream"
+ }
+ ],
+ "files": [
+ {
+ "fileName": "/bin/file1",
+ "SPDXID": "SPDXRef-file1",
+ "checksums": [
+ {
+ "algorithm": "SHA1",
+ "checksumValue": "11111"
+ }
+ ]
+ },
+ {
+ "fileName": "/bin/file2",
+ "SPDXID": "SPDXRef-file2",
+ "checksums": [
+ {
+ "algorithm": "SHA1",
+ "checksumValue": "22222"
+ }
+ ]
+ },
+ {
+ "fileName": "/bin/file3",
+ "SPDXID": "SPDXRef-file3",
+ "checksums": [
+ {
+ "algorithm": "SHA1",
+ "checksumValue": "33333"
+ }
+ ]
+ },
+ {
+ "fileName": "file4.a",
+ "SPDXID": "SPDXRef-file4",
+ "checksums": [
+ {
+ "algorithm": "SHA1",
+ "checksumValue": "44444"
+ }
+ ]
+ }
+ ],
+ "relationships": [
+ {
+ "spdxElementId": "SPDXRef-file1",
+ "relatedSpdxElement": "SPDXRef-PLATFORM",
+ "relationshipType": "GENERATED_FROM"
+ },
+ {
+ "spdxElementId": "SPDXRef-file2",
+ "relatedSpdxElement": "SPDXRef-PREBUILT-package1",
+ "relationshipType": "GENERATED_FROM"
+ },
+ {
+ "spdxElementId": "SPDXRef-file3",
+ "relatedSpdxElement": "SPDXRef-SOURCE-package1",
+ "relationshipType": "GENERATED_FROM"
+ },
+ {
+ "spdxElementId": "SPDXRef-file1",
+ "relatedSpdxElement": "SPDXRef-file4",
+ "relationshipType": "STATIC_LINK"
+ },
+ {
+ "spdxElementId": "SPDXRef-SOURCE-package1",
+ "relatedSpdxElement": "SPDXRef-UPSTREAM-package1",
+ "relationshipType": "VARIANT_OF"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/tools/sbom/testdata/expected_tagvalue_sbom.spdx b/tools/sbom/testdata/expected_tagvalue_sbom.spdx
new file mode 100644
index 0000000..e6fd17e
--- /dev/null
+++ b/tools/sbom/testdata/expected_tagvalue_sbom.spdx
@@ -0,0 +1,70 @@
+SPDXVersion: SPDX-2.3
+DataLicense: CC0-1.0
+SPDXID: SPDXRef-DOCUMENT
+DocumentName: test doc
+DocumentNamespace: http://www.google.com/sbom/spdx/android
+Creator: Organization: Google
+Created: 2023-03-31T22:17:58Z
+ExternalDocumentRef: DocumentRef-external_doc_ref external_doc_uri SHA1: 1234567890
+
+FileName: file4.a
+SPDXID: SPDXRef-file4
+FileChecksum: SHA1: 44444
+
+PackageName: PRODUCT
+SPDXID: SPDXRef-PRODUCT
+PackageDownloadLocation: NONE
+FilesAnalyzed: true
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+PackageVerificationCode: 123456
+
+Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-PRODUCT
+
+FileName: /bin/file1
+SPDXID: SPDXRef-file1
+FileChecksum: SHA1: 11111
+
+FileName: /bin/file2
+SPDXID: SPDXRef-file2
+FileChecksum: SHA1: 22222
+
+FileName: /bin/file3
+SPDXID: SPDXRef-file3
+FileChecksum: SHA1: 33333
+
+PackageName: PLATFORM
+SPDXID: SPDXRef-PLATFORM
+PackageDownloadLocation: NONE
+FilesAnalyzed: false
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+
+PackageName: Prebuilt package1
+SPDXID: SPDXRef-PREBUILT-package1
+PackageDownloadLocation: NONE
+FilesAnalyzed: false
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+
+PackageName: Source package1
+SPDXID: SPDXRef-SOURCE-package1
+PackageDownloadLocation: NONE
+FilesAnalyzed: false
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+ExternalRef: SECURITY cpe22Type cpe:/a:jsoncpp_project:jsoncpp:1.9.4
+
+PackageName: Upstream package1
+SPDXID: SPDXRef-UPSTREAM-package1
+PackageDownloadLocation: NOASSERTION
+FilesAnalyzed: false
+PackageVersion: 1.1
+PackageSupplier: Organization: upstream
+
+Relationship: SPDXRef-SOURCE-package1 VARIANT_OF SPDXRef-UPSTREAM-package1
+
+Relationship: SPDXRef-file1 GENERATED_FROM SPDXRef-PLATFORM
+Relationship: SPDXRef-file2 GENERATED_FROM SPDXRef-PREBUILT-package1
+Relationship: SPDXRef-file3 GENERATED_FROM SPDXRef-SOURCE-package1
+Relationship: SPDXRef-file1 STATIC_LINK SPDXRef-file4
diff --git a/tools/sbom/testdata/expected_tagvalue_sbom_doc_describes_file.spdx b/tools/sbom/testdata/expected_tagvalue_sbom_doc_describes_file.spdx
new file mode 100644
index 0000000..428d7e3
--- /dev/null
+++ b/tools/sbom/testdata/expected_tagvalue_sbom_doc_describes_file.spdx
@@ -0,0 +1,70 @@
+SPDXVersion: SPDX-2.3
+DataLicense: CC0-1.0
+SPDXID: SPDXRef-DOCUMENT
+DocumentName: test doc
+DocumentNamespace: http://www.google.com/sbom/spdx/android
+Creator: Organization: Google
+Created: 2023-03-31T22:17:58Z
+ExternalDocumentRef: DocumentRef-external_doc_ref external_doc_uri SHA1: 1234567890
+
+FileName: file4.a
+SPDXID: SPDXRef-file4
+FileChecksum: SHA1: 44444
+
+Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-file4
+
+PackageName: PRODUCT
+SPDXID: SPDXRef-PRODUCT
+PackageDownloadLocation: NONE
+FilesAnalyzed: true
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+PackageVerificationCode: 123456
+
+FileName: /bin/file1
+SPDXID: SPDXRef-file1
+FileChecksum: SHA1: 11111
+
+FileName: /bin/file2
+SPDXID: SPDXRef-file2
+FileChecksum: SHA1: 22222
+
+FileName: /bin/file3
+SPDXID: SPDXRef-file3
+FileChecksum: SHA1: 33333
+
+PackageName: PLATFORM
+SPDXID: SPDXRef-PLATFORM
+PackageDownloadLocation: NONE
+FilesAnalyzed: false
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+
+PackageName: Prebuilt package1
+SPDXID: SPDXRef-PREBUILT-package1
+PackageDownloadLocation: NONE
+FilesAnalyzed: false
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+
+PackageName: Source package1
+SPDXID: SPDXRef-SOURCE-package1
+PackageDownloadLocation: NONE
+FilesAnalyzed: false
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+ExternalRef: SECURITY cpe22Type cpe:/a:jsoncpp_project:jsoncpp:1.9.4
+
+PackageName: Upstream package1
+SPDXID: SPDXRef-UPSTREAM-package1
+PackageDownloadLocation: NOASSERTION
+FilesAnalyzed: false
+PackageVersion: 1.1
+PackageSupplier: Organization: upstream
+
+Relationship: SPDXRef-SOURCE-package1 VARIANT_OF SPDXRef-UPSTREAM-package1
+
+Relationship: SPDXRef-file1 GENERATED_FROM SPDXRef-PLATFORM
+Relationship: SPDXRef-file2 GENERATED_FROM SPDXRef-PREBUILT-package1
+Relationship: SPDXRef-file3 GENERATED_FROM SPDXRef-SOURCE-package1
+Relationship: SPDXRef-file1 STATIC_LINK SPDXRef-file4
diff --git a/tools/sbom/testdata/expected_tagvalue_sbom_unbundled.spdx b/tools/sbom/testdata/expected_tagvalue_sbom_unbundled.spdx
new file mode 100644
index 0000000..a00c291
--- /dev/null
+++ b/tools/sbom/testdata/expected_tagvalue_sbom_unbundled.spdx
@@ -0,0 +1,12 @@
+FileName: /bin/file1.apk
+SPDXID: SPDXRef-file1
+FileChecksum: SHA1: 11111
+
+PackageName: Unbundled apk package
+SPDXID: SPDXRef-SOURCE-package1
+PackageDownloadLocation: NONE
+FilesAnalyzed: false
+PackageVersion: build_finger_print
+PackageSupplier: Organization: Google
+
+Relationship: SPDXRef-file1 GENERATED_FROM SPDXRef-SOURCE-package1
diff --git a/tools/signapk/Android.bp b/tools/signapk/Android.bp
index bee6a6f..c4f25f8 100644
--- a/tools/signapk/Android.bp
+++ b/tools/signapk/Android.bp
@@ -31,6 +31,10 @@
"conscrypt-unbundled",
],
+ // b/267608166: Don't target Java 17, so the host-side tool can run in an
+ // environment where only JDK 11 is available.
+ java_version: "11",
+
jni_libs: ["libconscrypt_openjdk_jni"],
// The post-build signing tools need signapk.jar (and its shared libraries,
diff --git a/tools/signapk/src/com/android/signapk/SignApk.java b/tools/signapk/src/com/android/signapk/SignApk.java
index b0c792c..25c53d3 100644
--- a/tools/signapk/src/com/android/signapk/SignApk.java
+++ b/tools/signapk/src/com/android/signapk/SignApk.java
@@ -901,7 +901,7 @@
* Tries to load a JSE Provider by class name. This is for custom PrivateKey
* types that might be stored in PKCS#11-like storage.
*/
- private static void loadProviderIfNecessary(String providerClassName) {
+ private static void loadProviderIfNecessary(String providerClassName, String providerArg) {
if (providerClassName == null) {
return;
}
@@ -920,27 +920,41 @@
return;
}
- Constructor<?> constructor = null;
- for (Constructor<?> c : klass.getConstructors()) {
- if (c.getParameterTypes().length == 0) {
- constructor = c;
- break;
+ Constructor<?> constructor;
+ Object o = null;
+ if (providerArg == null) {
+ try {
+ constructor = klass.getConstructor();
+ o = constructor.newInstance();
+ } catch (ReflectiveOperationException e) {
+ e.printStackTrace();
+ System.err.println("Unable to instantiate " + providerClassName
+ + " with a zero-arg constructor");
+ System.exit(1);
+ }
+ } else {
+ try {
+ constructor = klass.getConstructor(String.class);
+ o = constructor.newInstance(providerArg);
+ } catch (ReflectiveOperationException e) {
+ // This is expected from JDK 9+; the single-arg constructor accepting the
+ // configuration has been replaced with a configure(String) method to be invoked
+ // after instantiating the Provider with the zero-arg constructor.
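+ // For example, sun.security.pkcs11.SunPKCS11 follows this pattern on
+ // JDK 9+: it is instantiated with no arguments and then configured via
+ // configure(providerArg).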
+ try {
+ constructor = klass.getConstructor();
+ o = constructor.newInstance();
+ // The configure method will return either the modified Provider or a new
+ // Provider if this one cannot be configured in-place.
+ o = klass.getMethod("configure", String.class).invoke(o, providerArg);
+ } catch (ReflectiveOperationException roe) {
+ roe.printStackTrace();
+ System.err.println("Unable to instantiate " + providerClassName
+ + " with the provided argument " + providerArg);
+ System.exit(1);
+ }
}
}
- if (constructor == null) {
- System.err.println("No zero-arg constructor found for " + providerClassName);
- System.exit(1);
- return;
- }
- final Object o;
- try {
- o = constructor.newInstance();
- } catch (Exception e) {
- e.printStackTrace();
- System.exit(1);
- return;
- }
if (!(o instanceof Provider)) {
System.err.println("Not a Provider class: " + providerClassName);
System.exit(1);
@@ -1049,6 +1063,7 @@
"[-a <alignment>] " +
"[--align-file-size] " +
"[-providerClass <className>] " +
+ "[-providerArg <configureArg>] " +
"[-loadPrivateKeysFromKeyStore <keyStoreName>]" +
"[-keyStorePin <pin>]" +
"[--min-sdk-version <n>] " +
@@ -1073,6 +1088,7 @@
boolean signWholeFile = false;
String providerClass = null;
+ String providerArg = null;
String keyStoreName = null;
String keyStorePin = null;
int alignment = 4;
@@ -1094,6 +1110,12 @@
}
providerClass = args[++argstart];
++argstart;
+ } else if("-providerArg".equals(args[argstart])) {
+ if (argstart + 1 >= args.length) {
+ usage();
+ }
+ providerArg = args[++argstart];
+ ++argstart;
} else if ("-loadPrivateKeysFromKeyStore".equals(args[argstart])) {
if (argstart + 1 >= args.length) {
usage();
@@ -1163,7 +1185,7 @@
System.exit(2);
}
- loadProviderIfNecessary(providerClass);
+ loadProviderIfNecessary(providerClass, providerArg);
String inputFilename = args[numArgsExcludeV4FilePath - 2];
String outputFilename = args[numArgsExcludeV4FilePath - 1];
diff --git a/tools/soong_to_convert.py b/tools/soong_to_convert.py
index 949131b..649829f 100755
--- a/tools/soong_to_convert.py
+++ b/tools/soong_to_convert.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
#
# Copyright (C) 2016 The Android Open Source Project
#
@@ -50,9 +50,6 @@
Not all problems can be discovered, but this is a starting point.
"""
-
-from __future__ import print_function
-
import csv
import sys
@@ -113,7 +110,7 @@
def main(filename):
"""Read the CSV file, print the results"""
- with open(filename, 'rb') as csvfile:
+ with open(filename, 'r') as csvfile:
results = process(csv.reader(csvfile))
native_results = filter(results, "native")
diff --git a/tools/stub_diff_analyzer.py b/tools/stub_diff_analyzer.py
new file mode 100644
index 0000000..e49d092
--- /dev/null
+++ b/tools/stub_diff_analyzer.py
@@ -0,0 +1,328 @@
+#!/usr/bin/env python3
+#
+# Copyright (C) 2022 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from sys import exit
+from typing import List
+from glob import glob
+from pathlib import Path
+from collections import defaultdict
+from difflib import Differ
+from re import split
+from tqdm import tqdm
+import argparse
+
+
+DIFFER_CODE_LEN = 2
+
+class DifferCodes:
+ COMMON = ' '
+ UNIQUE_FIRST = '- '
+ UNIQUE_SECOND = '+ '
+ DIFF_IDENT = '? '
+
+class FilesDiffAnalyzer:
+ def __init__(self, args) -> None:
+ self.out_dir = args.out_dir
+ self.show_diff = args.show_diff
+ self.skip_words = args.skip_words
+ self.first_dir = args.first_dir
+ self.second_dir = args.second_dir
+ self.include_common = args.include_common
+
+ self.first_dir_files = self.get_files(self.first_dir)
+ self.second_dir_files = self.get_files(self.second_dir)
+ self.common_file_map = defaultdict(set)
+
+ self.map_common_files(self.first_dir_files, self.first_dir)
+ self.map_common_files(self.second_dir_files, self.second_dir)
+
+ def get_files(self, dir: str) -> List[str]:
+ """Get all files directory in the input directory including the files in the subdirectories
+
+ Recursively finds all files in the input directory.
+ Returns a list of file directory strings, which do not include directories but only files.
+ List is sorted in alphabetical order of the file directories.
+
+ Args:
+ dir: Directory to get the files. String.
+
+ Returns:
+ A list of file directory strings within the input directory.
+ Sorted in Alphabetical order.
+
+ Raises:
+ FileNotFoundError: An error occurred accessing the non-existing directory
+ """
+
+ if not dir_exists(dir):
+ raise FileNotFoundError("Directory does not exist")
+
+ if dir[-2:] != "**":
+ if dir[-1:] != "/":
+ dir += "/"
+ dir += "**"
+
+ return [file for file in sorted(glob(dir, recursive=True)) if Path(file).is_file()]
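+ # e.g. get_files("out/stubs") ("out/stubs" is a hypothetical path) globs
+ # "out/stubs/**" recursively and keeps only regular files, sorted by path.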
+
+ def map_common_files(self, files: List[str], dir: str) -> None:
+ for file in files:
+ file_name = file.split(dir, 1)[-1]
+ self.common_file_map[file_name].add(dir)
+ return
+
+ def compare_file_contents(self, first_file: str, second_file: str) -> List[str]:
+ """Compare the contents of the files and return different lines
+
+ Given two file directory strings, compare the contents of the two files
+ and return the list of file contents string prepended with unique identifier codes.
+ The identifier codes include:
+ - ' '(two empty space characters): Line common to two files
+ - '- '(minus followed by a space) : Line unique to first file
+ - '+ '(plus followed by a space) : Line unique to second file
+
+ Args:
+ first_file: First file directory string to compare the content
+ second_file: Second file directory string to compare the content
+
+ Returns:
+ A list of the file content strings. For example:
+
+ [
+ " Foo",
+ "- Bar",
+ "+ Baz"
+ ]
+ """
+
+ d = Differ()
+ first_file_contents = sort_methods(get_file_contents(first_file))
+ second_file_contents = sort_methods(get_file_contents(second_file))
+ diff = list(d.compare(first_file_contents, second_file_contents))
+ ret = [f"diff {first_file} {second_file}"]
+
+ idx = 0
+ while idx < len(diff):
+ line = diff[idx]
+ line_code = line[:DIFFER_CODE_LEN]
+
+ match line_code:
+ case DifferCodes.COMMON:
+ if self.include_common:
+ ret.append(line)
+
+ case DifferCodes.UNIQUE_FIRST:
+ # This '- ' line may pair with a changed '+ ' line; compare them word by word.
+ if (idx < len(diff) - 1 and
+ (next_line_code := diff[idx + 1][:DIFFER_CODE_LEN])
+ not in (DifferCodes.UNIQUE_FIRST, DifferCodes.COMMON)):
+ delta = 1 if next_line_code == DifferCodes.UNIQUE_SECOND else 2
+ line_to_compare = diff[idx + delta]
+ if self.lines_differ(line, line_to_compare):
+ ret.extend([line, line_to_compare])
+ else:
+ if self.include_common:
+ ret.append(DifferCodes.COMMON +
+ line[DIFFER_CODE_LEN:])
+ idx += delta
+ else:
+ ret.append(line)
+
+ case DifferCodes.UNIQUE_SECOND:
+ ret.append(line)
+
+ case DifferCodes.DIFF_IDENT:
+ pass
+ idx += 1
+ return ret
+
+ def lines_differ(self, line1: str, line2: str) -> bool:
+ """Check if the input lines are different or not
+
+ Compare the two lines word by word and check if the two lines are different or not.
+ If every pair of differing words contains one of skip_words as a substring,
+ the lines are not considered different.
+
+ Args:
+ line1: first line to compare
+ line2: second line to compare
+
+ Returns:
+ Boolean value indicating if the two lines are different or not
+
+ """
+ # Split by '.' or whitespace
+ def split_words(line: str) -> List[str]:
+ return split('\\s|\\.', line[DIFFER_CODE_LEN:])
+
+ line1_words, line2_words = split_words(line1), split_words(line2)
+ if len(line1_words) != len(line2_words):
+ return True
+
+ for word1, word2 in zip(line1_words, line2_words):
+ if word1 != word2:
+ # Do not check whether words equal a skip word;
+ # check whether either word contains a skip word as a substring
+ if all(sw not in word1 and sw not in word2 for sw in self.skip_words):
+ return True
+
+ return False
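+ # e.g. with skip_words=["Nullable"] (hypothetical), a differing word pair is
+ # ignored when either word contains "Nullable" as a substring.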
+
+ def analyze(self) -> None:
+ """Analyze file contents in both directories and write to output or console.
+ """
+ for file in tqdm(sorted(self.common_file_map.keys())):
+ val = self.common_file_map[file]
+
+ # When file exists in both directories
+ lines = list()
+ if val == {self.first_dir, self.second_dir}:
+ lines = self.compare_file_contents(
+ self.first_dir + file, self.second_dir + file)
+ else:
+ existing_dir, not_existing_dir = (
+ (self.first_dir, self.second_dir) if self.first_dir in val
+ else (self.second_dir, self.first_dir))
+
+ lines = [f"{not_existing_dir}{file} does not exist."]
+
+ if self.show_diff:
+ lines.append(f"Content of {existing_dir}{file}: \n")
+ lines.extend(get_file_contents(existing_dir + file))
+
+ self.write(lines)
+
+ def write(self, lines: List[str]) -> None:
+ if self.out_dir == "":
+ pprint(lines)
+ else:
+ write_lines(self.out_dir, lines)
+
+###
+# Helper functions
+###
+
+def sort_methods(lines: List[str]) -> List[str]:
+ """Sort class methods in the file contents by alphabetical order
+
+ Given lines of Java file contents, return lines with class methods sorted in alphabetical order.
+ Also omit empty lines or lines with spaces.
+ For example:
+ l = [
+ "package android.test;",
+ "",
+ "public static final int ORANGE = 1;",
+ "",
+ "public class TestClass {",
+ "public TestClass() { throw new RuntimeException("Stub!"); }",
+ "public void foo() { throw new RuntimeException("Stub!"); }",
+ "public void bar() { throw new RuntimeException("Stub!"); }",
+ "}"
+ ]
+ sort_methods(l) returns
+ [
+ "package android.test;",
+ "public static final int ORANGE = 1;",
+ "public class TestClass {",
+ "public TestClass() { throw new RuntimeException("Stub!"); }",
+ "public void bar() { throw new RuntimeException("Stub!"); }",
+ "public void foo() { throw new RuntimeException("Stub!"); }",
+ "}"
+ ]
+
+ Args:
+ lines: List of strings consisted of Java file contents.
+
+ Returns:
+ A list of string with sorted class methods.
+
+ """
+ def is_not_blank(l: str) -> bool:
+ return bool(l) and not l.isspace()
+
+ ret = list()
+
+ in_class = False
+ buffer = list()
+ for line in lines:
+ if not in_class:
+ if "class" in line:
+ in_class = True
+ ret.append(line)
+ else:
+ # Adding static variables, package info, etc.
+ # Skipping empty or space lines.
+ if is_not_blank(line):
+ ret.append(line)
+ else:
+ # End of class
+ if line and line[0] == "}":
+ in_class = False
+ ret.extend(sorted(buffer))
+ buffer = list()
+ ret.append(line)
+ else:
+ if is_not_blank(line):
+ buffer.append(line)
+
+ return ret
+
+def get_file_contents(file_path: str) -> List[str]:
+ with open(file_path) as f:
+ return [line.rstrip('\n') for line in f]
+
+def pprint(l: List[str]) -> None:
+ for line in l:
+ print(line)
+
+def write_lines(out_dir: str, lines: List[str]) -> None:
+ with open(out_dir, "a") as f:
+ f.writelines(line + '\n' for line in lines)
+ f.write("\n")
+
+def dir_exists(dir: str) -> bool:
+ return Path(dir).exists()
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument('first_dir', action='store', type=str,
+ help="first path to compare file directory and contents")
+ parser.add_argument('second_dir', action='store', type=str,
+ help="second path to compare file directory and contents")
+ parser.add_argument('--out', dest='out_dir',
+ action='store', default="", type=str,
+ help="optional directory to write log. If not set, will print to console")
+ parser.add_argument('--show-diff-file', dest='show_diff',
+ action=argparse.BooleanOptionalAction,
+ help="optional flag. If passed, will print out the content of the file unique to each directories")
+ parser.add_argument('--include-common', dest='include_common',
+ action=argparse.BooleanOptionalAction,
+ help="optional flag. If passed, will print out the contents common to both files as well,\
+ instead of printing only diff lines.")
+ parser.add_argument('--skip-words', nargs='+',
+ dest='skip_words', default=[], help="optional words to skip in comparison")
+
+ args = parser.parse_args()
+
+ if not args.first_dir or not args.second_dir:
+ parser.print_usage()
+ exit(1)
+
+ analyzer = FilesDiffAnalyzer(args)
+ analyzer.analyze()
diff --git a/tools/test_post_process_props.py b/tools/test_post_process_props.py
index 236f9ed..439fc9f 100644
--- a/tools/test_post_process_props.py
+++ b/tools/test_post_process_props.py
@@ -256,6 +256,7 @@
with contextlib.redirect_stderr(stderr_redirect):
props = PropList("hello")
props.put("ro.board.first_api_level","25")
+ props.put("ro.build.version.codename", "REL")
# ro.board.first_api_level must be less than or equal to the sdk version
self.assertFalse(validate_grf_props(props, 20))
@@ -273,5 +274,10 @@
# ro.board.api_level must be less than or equal to the sdk version
self.assertFalse(validate_grf_props(props, 25))
+ # allow setting future api_level before release
+ props.get_all_props()[-2].make_as_comment()
+ props.put("ro.build.version.codename", "NonRel")
+ self.assertTrue(validate_grf_props(props, 24))
+
if __name__ == '__main__':
unittest.main(verbosity=2)
diff --git a/tools/warn/html_writer.py b/tools/warn/html_writer.py
index 3fa822a..46ba253 100644
--- a/tools/warn/html_writer.py
+++ b/tools/warn/html_writer.py
@@ -56,6 +56,7 @@
from __future__ import print_function
import csv
+import datetime
import html
import sys
@@ -258,7 +259,7 @@
def dump_stats(writer, warn_patterns):
- """Dump some stats about total number of warnings and such."""
+ """Dump some stats about total number of warnings and date."""
known = 0
skipped = 0
@@ -279,6 +280,8 @@
if total < 1000:
extra_msg = ' (low count may indicate incremental build)'
writer('Total number of warnings: <b>' + str(total) + '</b>' + extra_msg)
+ date_time_str = datetime.datetime.now().strftime('%Y/%m/%d %H:%M:%S')
+ writer('<p>(generated on ' + date_time_str + ')')
# New base table of warnings, [severity, warn_id, project, warning_message]
@@ -662,15 +665,26 @@
var warningsOfFiles = {};
var warningsOfDirs = {};
var subDirs = {};
- function addOneWarning(map, key) {
- map[key] = 1 + ((key in map) ? map[key] : 0);
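+ // Counts are kept under three kinds of keys: the plain key, "<type> <key>",
+ // and "<type> *" (a per-type total, bumped only when `unique` is set).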
+ function addOneWarning(map, key, type, unique) {
+ function increaseCounter(idx) {
+ map[idx] = 1 + ((idx in map) ? map[idx] : 0);
+ }
+ increaseCounter(key)
+ if (type != "") {
+ increaseCounter(type + " " + key)
+ if (unique) {
+ increaseCounter(type + " *")
+ }
+ }
}
for (var i = 0; i < numWarnings; i++) {
- var file = WarningMessages[i].replace(/:.*/, "");
- addOneWarning(warningsOfFiles, file);
+ var message = WarningMessages[i]
+ var file = message.replace(/:.*/, "");
+ var warningType = message.endsWith("]") ? message.replace(/.*\[/, "[") : "";
+ addOneWarning(warningsOfFiles, file, warningType, true);
var dirs = file.split("/");
var dir = dirs[0];
- addOneWarning(warningsOfDirs, dir);
+ addOneWarning(warningsOfDirs, dir, warningType, true);
for (var d = 1; d < dirs.length - 1; d++) {
var subDir = dir + "/" + dirs[d];
if (!(dir in subDirs)) {
@@ -678,7 +692,7 @@
}
subDirs[dir][subDir] = 1;
dir = subDir;
- addOneWarning(warningsOfDirs, dir);
+ addOneWarning(warningsOfDirs, dir, warningType, false);
}
}
var minDirWarnings = numWarnings*(LimitPercentWarnings/100);
@@ -725,27 +739,33 @@
document.getElementById(divName));
table.draw(view, {allowHtml: true, alternatingRowStyle: true});
}
- addTable("Directory", "top_dirs_table", TopDirs, "selectDir");
- addTable("File", "top_files_table", TopFiles, "selectFile");
+ addTable("[Warning Type] Directory", "top_dirs_table", TopDirs, "selectDir");
+ addTable("[Warning Type] File", "top_files_table", TopFiles, "selectFile");
}
function selectDirFile(idx, rows, dirFile) {
if (rows.length <= idx) {
return;
}
var name = rows[idx][2];
+ var type = "";
+ if (name.startsWith("[")) {
+ type = " " + name.replace(/ .*/, "");
+ name = name.replace(/.* /, "");
+ }
var spanName = "selected_" + dirFile + "_name";
- document.getElementById(spanName).innerHTML = name;
+ document.getElementById(spanName).innerHTML = name + type;
var divName = "selected_" + dirFile + "_warnings";
var numWarnings = rows[idx][1].v;
var prefix = name.replace(/\\.\\.\\.$/, "");
var data = new google.visualization.DataTable();
- data.addColumn('string', numWarnings + ' warnings in ' + name);
+ data.addColumn('string', numWarnings + type + ' warnings in ' + name);
var getWarningMessage = (FlagPlatform == "chrome")
? ((x) => addURLToLine(WarningMessages[Warnings[x][2]],
WarningLinks[Warnings[x][3]]))
: ((x) => addURL(WarningMessages[Warnings[x][2]]));
for (var i = 0; i < Warnings.length; i++) {
- if (WarningMessages[Warnings[i][2]].startsWith(prefix)) {
+ if ((prefix.startsWith("*") || WarningMessages[Warnings[i][2]].startsWith(prefix)) &&
+ (type == "" || WarningMessages[Warnings[i][2]].endsWith(type))) {
data.addRow([getWarningMessage(i)]);
}
}
@@ -827,14 +847,14 @@
def section2():
dump_dir_file_section(
writer, 'directory', 'top_dirs_table',
- 'Directories with at least ' +
- str(LIMIT_PERCENT_WARNINGS) + '% warnings')
+ 'Directories/Warnings with at least ' +
+ str(LIMIT_PERCENT_WARNINGS) + '% of all cases')
def section3():
dump_dir_file_section(
writer, 'file', 'top_files_table',
- 'Files with at least ' +
- str(LIMIT_PERCENT_WARNINGS) + '% or ' +
- str(LIMIT_WARNINGS_PER_FILE) + ' warnings')
+ 'Files/Warnings with at least ' +
+ str(LIMIT_PERCENT_WARNINGS) + '% of all or ' +
+ str(LIMIT_WARNINGS_PER_FILE) + ' cases')
def section4():
writer('<script>')
emit_js_data(writer, flags, warning_messages, warning_links,
diff --git a/tools/warn/tidy_warn_patterns.py b/tools/warn/tidy_warn_patterns.py
index c138f1c..5ee66c0 100644
--- a/tools/warn/tidy_warn_patterns.py
+++ b/tools/warn/tidy_warn_patterns.py
@@ -224,6 +224,7 @@
analyzer_warn_check('clang-analyzer-valist.Unterminated'),
analyzer_group_check('clang-analyzer-core.uninitialized'),
analyzer_group_check('clang-analyzer-deadcode'),
+ analyzer_warn_check('clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling'),
analyzer_warn_check('clang-analyzer-security.insecureAPI.bcmp'),
analyzer_warn_check('clang-analyzer-security.insecureAPI.bcopy'),
analyzer_warn_check('clang-analyzer-security.insecureAPI.bzero'),
diff --git a/tools/warn/warn_common.py b/tools/warn/warn_common.py
index 61c8676..aa68313 100755
--- a/tools/warn/warn_common.py
+++ b/tools/warn/warn_common.py
@@ -64,6 +64,10 @@
from . import tidy_warn_patterns as tidy_patterns
+# Location of this file is used to guess the root of the Android source tree.
+THIS_FILE_PATH = 'build/make/tools/warn/warn_common.py'
+
+
def parse_args(use_google3):
"""Define and parse the args. Return the parse_args() result."""
parser = argparse.ArgumentParser(
@@ -217,17 +221,27 @@
return link
-def find_warn_py_and_android_root(path):
- """Return android source root path if warn.py is found."""
+def find_this_file_and_android_root(path):
+ """Return android source root path if this file is found."""
parts = path.split('/')
for idx in reversed(range(2, len(parts))):
root_path = '/'.join(parts[:idx])
# Android root directory should contain this script.
- if os.path.exists(root_path + '/build/make/tools/warn.py'):
+ if os.path.exists(root_path + '/' + THIS_FILE_PATH):
return root_path
return ''
+def find_android_root_top_dirs(root_dir):
+ """Return a list of directories under the root_dir, if it exists."""
+ # Root directory should contain at least build/make and build/soong.
+ if (not os.path.isdir(root_dir + '/build/make') or
+ not os.path.isdir(root_dir + '/build/soong')):
+ return None
+ return list(filter(lambda d: os.path.isdir(root_dir + '/' + d),
+ os.listdir(root_dir)))
+
+
def find_android_root(buildlog):
"""Guess android source root from common prefix of file paths."""
# Use the longest common prefix of the absolute file paths
@@ -239,8 +253,8 @@
# We want to find android_root of a local build machine.
# Do not use RBE warning lines, which has '/b/f/w/' path prefix.
# Do not use /tmp/ file warnings.
- if warning_pattern.match(line) and (
- '/b/f/w' not in line and not line.startswith('/tmp/')):
+ if ('/b/f/w' not in line and not line.startswith('/tmp/') and
+ warning_pattern.match(line)):
warning_lines.append(line)
count += 1
if count > 9999:
@@ -249,17 +263,26 @@
# the source tree root.
if count < 100:
path = os.path.normpath(re.sub(':.*$', '', line))
- android_root = find_warn_py_and_android_root(path)
+ android_root = find_this_file_and_android_root(path)
if android_root:
- return android_root
+ return android_root, find_android_root_top_dirs(android_root)
# Do not use common prefix of a small number of paths.
+ android_root = ''
if count > 10:
# pytype: disable=wrong-arg-types
root_path = os.path.commonprefix(warning_lines)
# pytype: enable=wrong-arg-types
if len(root_path) > 2 and root_path[len(root_path) - 1] == '/':
- return root_path[:-1]
- return ''
+ android_root = root_path[:-1]
+ if android_root and os.path.isdir(android_root):
+ return android_root, find_android_root_top_dirs(android_root)
+ # When the build.log file is moved to a different machine where
+ # android_root is not found, use the location of this script
+ # to find the android source tree sub directories.
+ if __file__.endswith('/' + THIS_FILE_PATH):
+ script_root = __file__.replace('/' + THIS_FILE_PATH, '')
+ return android_root, find_android_root_top_dirs(script_root)
+ return android_root, None
def remove_android_root_prefix(path, android_root):
@@ -310,8 +333,6 @@
warning_pattern = re.compile(chrome_warning_pattern)
# Collect all unique warning lines
- # Remove the duplicated warnings save ~8% of time when parsing
- # one typical build log than before
unique_warnings = dict()
for line in infile:
if warning_pattern.match(line):
@@ -353,8 +374,7 @@
target_product = 'unknown'
target_variant = 'unknown'
build_id = 'unknown'
- use_rbe = False
- android_root = find_android_root(infile)
+ android_root, root_top_dirs = find_android_root(infile)
infile.seek(0)
# rustc warning messages have two lines that should be combined:
@@ -367,24 +387,39 @@
# C/C++ compiler warning messages have line and column numbers:
# some/path/file.c:line_number:column_number: warning: description
warning_pattern = re.compile('(^[^ ]*/[^ ]*: warning: .*)|(^warning: .*)')
- warning_without_file = re.compile('^warning: .*')
rustc_file_position = re.compile('^[ ]+--> [^ ]*/[^ ]*:[0-9]+:[0-9]+')
- # If RBE was used, try to reclaim some warning lines mixed with some
- # leading chars from other concurrent job's stderr output .
+ # If RBE was used, try to reclaim some warning lines (from stdout)
+ # that contain leading characters from stderr.
# The leading characters can be any character, including digits and spaces.
- # It's impossible to correctly identify the starting point of the source
- # file path without the file directory name knowledge.
- # Here we can only be sure to recover lines containing "/b/f/w/".
- rbe_warning_pattern = re.compile('.*/b/f/w/[^ ]*: warning: .*')
- # Collect all unique warning lines
- # Remove the duplicated warnings save ~8% of time when parsing
- # one typical build log than before
+ # If a warning line's source file path contains the special RBE prefix
+ # /b/f/w/, we can remove all leading chars up to and including the "/b/f/w/".
+ bfw_warning_pattern = re.compile('.*/b/f/w/([^ ]*: warning: .*)')
+
+ # When android_root is known and available, we find its top directories
+ # and remove all leading chars before a top directory name.
+ # We assume that the leading chars from stderr do not contain "/".
+ # For example,
+ # 10external/...
+ # 12 warningsexternal/...
+ # 413 warningexternal/...
+ # 5 warnings generatedexternal/...
+ # Suppressed 1000 warnings (packages/modules/...
+ if root_top_dirs:
+ extra_warning_pattern = re.compile(
+ '^.[^/]*((' + '|'.join(root_top_dirs) +
+ ')/[^ ]*: warning: .*)')
+ else:
+ extra_warning_pattern = re.compile('^[^/]* ([^ /]*/[^ ]*: warning: .*)')
+
+ # Collect all unique warning lines
unique_warnings = dict()
+ checked_warning_lines = dict()
line_counter = 0
prev_warning = ''
for line in infile:
+ line_counter += 1
if prev_warning:
if rustc_file_position.match(line):
# must be a rustc warning, combine 2 lines into one warning
@@ -399,14 +434,31 @@
prev_warning, flags, android_root, unique_warnings)
prev_warning = ''
- if use_rbe and rbe_warning_pattern.match(line):
- cleaned_up_line = re.sub('.*/b/f/w/', '', line)
- unique_warnings = add_normalized_line_to_warnings(
- cleaned_up_line, flags, android_root, unique_warnings)
+ # re.match is slow, with several warning line patterns and
+ # long input lines like "TIMEOUT: ...".
+ # We save significant time by skipping non-warning lines.
+ # But do not skip the first 100 lines, because we want to
+ # catch build variables.
+ if line_counter > 100 and line.find('warning: ') < 0:
continue
+ # A large clean build output can contain up to 90% of duplicated
+ # "warning:" lines. If we can skip them quickly, we can
+ # speed up this for-loop 3X to 5X.
+ if line in checked_warning_lines:
+ continue
+ checked_warning_lines[line] = True
+
+ # Clean up extra prefix that could be introduced when RBE was used.
+ if '/b/f/w/' in line:
+ result = bfw_warning_pattern.search(line)
+ else:
+ result = extra_warning_pattern.search(line)
+ if result is not None:
+ line = result.group(1)
+
if warning_pattern.match(line):
- if warning_without_file.match(line):
+ if line.startswith('warning: '):
# save this line and combine it with the next line
prev_warning = line
else:
@@ -416,7 +468,6 @@
if line_counter < 100:
# save a little bit of time by only doing this for the first few lines
- line_counter += 1
result = re.search('(?<=^PLATFORM_VERSION=).*', line)
if result is not None:
platform_version = result.group(0)
@@ -433,13 +484,6 @@
if result is not None:
build_id = result.group(0)
continue
- result = re.search('(?<=^TOP=).*', line)
- if result is not None:
- android_root = result.group(1)
- continue
- if re.search('USE_RBE=', line) is not None:
- use_rbe = True
- continue
if android_root:
new_unique_warnings = dict()
diff --git a/tools/whichgit b/tools/whichgit
new file mode 100755
index 0000000..b0bf2e4
--- /dev/null
+++ b/tools/whichgit
@@ -0,0 +1,110 @@
+#!/usr/bin/env python3
+
+import argparse
+import os
+import subprocess
+import sys
+
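+# Example invocation (product and variant are placeholders):
+# tools/whichgit --products aosp_arm64 --variants eng --modules droid
+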
+def get_build_var(var):
+ return subprocess.run(["build/soong/soong_ui.bash","--dumpvar-mode", var],
+ check=True, capture_output=True, text=True).stdout.strip()
+
+
+def get_sources(modules):
+ result = subprocess.run(["./prebuilts/build-tools/linux-x86/bin/ninja", "-f",
+ "out/combined-" + os.environ["TARGET_PRODUCT"] + ".ninja",
+ "-t", "inputs", "-d", ] + modules,
+ stderr=subprocess.STDOUT, stdout=subprocess.PIPE, check=False, text=True)
+ if result.returncode != 0:
+ sys.stderr.write(result.stdout)
+ sys.exit(1)
+ return set([f for f in result.stdout.split("\n") if not f.startswith("out/")])
+
+
+def m_nothing():
+ result = subprocess.run(["build/soong/soong_ui.bash", "--build-mode", "--all-modules",
+ "--dir=" + os.getcwd(), "nothing"],
+ check=False, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, text=True)
+ if result.returncode != 0:
+ sys.stderr.write(result.stdout)
+ sys.exit(1)
+
+
+def get_git_dirs():
+ text = subprocess.run(["repo","list"], check=True, capture_output=True, text=True).stdout
+ return [line.split(" : ")[0] + "/" for line in text.split("\n")]
+
+
+def get_referenced_projects(git_dirs, files):
+ # files must be sorted
+ referenced_dirs = set()
+ prev_dir = None
+ for f in files:
+ # Optimization is ~5x speedup for large sets of files
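+ # Because `files` is sorted, inputs from the same project are adjacent, so
+ # the project matched for the previous file usually matches the next file.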
+ if prev_dir:
+ if f.startswith(prev_dir):
+ referenced_dirs.add(prev_dir)
+ continue
+ for d in git_dirs:
+ if f.startswith(d):
+ referenced_dirs.add(d)
+ prev_dir = d
+ break
+ return [d[0:-1] for d in referenced_dirs]
+
+
+def main(argv):
+ # Argument parsing
+ ap = argparse.ArgumentParser(description="List the required git projects for the given modules")
+ ap.add_argument("--products", nargs="*",
+ help="The TARGET_PRODUCT to check. If not provided just uses whatever has"
+ + " already been built")
+ ap.add_argument("--variants", nargs="*",
+ help="The TARGET_BUILD_VARIANTS to check. If not provided just uses whatever has"
+ + " already been built, or eng if --products is supplied")
+ ap.add_argument("--modules", nargs="*",
+ help="The build modules to check, or droid it not supplied")
+ ap.add_argument("--why", nargs="*",
+ help="Also print the input files used in these projects, or \"*\" for all")
+ args = ap.parse_args(argv[1:])
+
+ modules = args.modules if args.modules else ["droid"]
+
+ # Get the list of sources for all of the requested build combos
+ if not args.products and not args.variants:
+ sources = get_sources(modules)
+ else:
+ if not args.products:
+ sys.stderr.write("Error: --products must be supplied if --variants is supplied")
+ sys.exit(1)
+ sources = set()
+ build_num = 1
+ for product in args.products:
+ os.environ["TARGET_PRODUCT"] = product
+ variants = args.variants if args.variants else ["user", "userdebug", "eng"]
+ for variant in variants:
+ sys.stderr.write(f"Analyzing build {build_num} of {len(args.products)*len(variants)}\r")
+ os.environ["TARGET_BUILD_VARIANT"] = variant
+ m_nothing()
+ sources.update(get_sources(modules))
+ build_num += 1
+ sys.stderr.write("\n\n")
+
+ sources = sorted(sources)
+
+ # Print the list of git directories that has one or more of the sources in it
+ for project in sorted(get_referenced_projects(get_git_dirs(), sources)):
+ print(project)
+ if args.why:
+ if "*" in args.why or project in args.why:
+ prefix = project + "/"
+ for f in sources:
+ if f.startswith(prefix):
+ print(" " + f)
+
+
+if __name__ == "__main__":
+ sys.exit(main(sys.argv))
+
+
+# vim: set ts=2 sw=2 sts=2 expandtab nocindent tw=100:
diff --git a/tools/zipalign/Android.bp b/tools/zipalign/Android.bp
index 8cab04c..0e1d58e 100644
--- a/tools/zipalign/Android.bp
+++ b/tools/zipalign/Android.bp
@@ -70,6 +70,7 @@
"libgmock",
],
data: [
+ "tests/data/archiveWithOneDirectoryEntry.zip",
"tests/data/diffOrders.zip",
"tests/data/holes.zip",
"tests/data/unaligned.zip",
diff --git a/tools/zipalign/ZipAlign.cpp b/tools/zipalign/ZipAlign.cpp
index 08f67ff..23840e3 100644
--- a/tools/zipalign/ZipAlign.cpp
+++ b/tools/zipalign/ZipAlign.cpp
@@ -22,6 +22,19 @@
namespace android {
+// An entry is considered a directory if it has a stored size of zero
+// and it ends with '/' or '\' character.
+static bool isDirectory(ZipEntry* entry) {
+ if (entry->getUncompressedLen() != 0) {
+ return false;
+ }
+
+ const char* name = entry->getFileName();
+ size_t nameLength = strlen(name);
+ // Guard against an empty name before reading its last character.
+ if (nameLength == 0) {
+ return false;
+ }
+ char lastChar = name[nameLength-1];
+ return lastChar == '/' || lastChar == '\\';
+}
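+// e.g. an entry named "assets/" stored with uncompressed length 0 is treated
+// as a directory entry ("assets/" is an illustrative name).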
+
static int getAlignment(bool pageAlignSharedLibs, int defaultAlignment,
ZipEntry* pEntry) {
@@ -59,7 +72,7 @@
return 1;
}
- if (pEntry->isCompressed()) {
+ if (pEntry->isCompressed() || isDirectory(pEntry)) {
/* copy the entry without padding */
//printf("--- %s: orig at %ld len=%ld (compressed)\n",
// pEntry->getFileName(), (long) pEntry->getFileOffset(),
@@ -160,7 +173,13 @@
printf("%8jd %s (OK - compressed)\n",
(intmax_t) pEntry->getFileOffset(), pEntry->getFileName());
}
- } else {
+ } else if (isDirectory(pEntry)) {
+ // Directory entries do not need to be aligned.
+ if (verbose)
+ printf("%8jd %s (OK - directory)\n",
+ (intmax_t) pEntry->getFileOffset(), pEntry->getFileName());
+ continue;
+ } else {
off_t offset = pEntry->getFileOffset();
const int alignTo = getAlignment(pageAlignSharedLibs, alignment, pEntry);
if ((offset % alignTo) != 0) {
diff --git a/tools/zipalign/ZipEntry.cpp b/tools/zipalign/ZipEntry.cpp
index fcad96c..689999e 100644
--- a/tools/zipalign/ZipEntry.cpp
+++ b/tools/zipalign/ZipEntry.cpp
@@ -40,14 +40,10 @@
*/
status_t ZipEntry::initFromCDE(FILE* fp)
{
- status_t result;
- long posn; // NOLINT(google-runtime-int), for ftell/fseek
- bool hasDD;
-
//ALOGV("initFromCDE ---\n");
/* read the CDE */
- result = mCDE.read(fp);
+ status_t result = mCDE.read(fp);
if (result != OK) {
ALOGD("mCDE.read failed\n");
return result;
@@ -56,8 +52,8 @@
//mCDE.dump();
/* using the info in the CDE, go load up the LFH */
- posn = ftell(fp);
- if (fseek(fp, mCDE.mLocalHeaderRelOffset, SEEK_SET) != 0) {
+ off_t posn = ftello(fp);
+ if (fseeko(fp, mCDE.mLocalHeaderRelOffset, SEEK_SET) != 0) {
ALOGD("local header seek failed (%" PRIu32 ")\n",
mCDE.mLocalHeaderRelOffset);
return UNKNOWN_ERROR;
@@ -69,7 +65,7 @@
return result;
}
- if (fseek(fp, posn, SEEK_SET) != 0)
+ if (fseeko(fp, posn, SEEK_SET) != 0)
return UNKNOWN_ERROR;
//mLFH.dump();
@@ -80,7 +76,7 @@
* compressed size, and uncompressed size will be zero. In practice
* these seem to be rare.
*/
- hasDD = (mLFH.mGPBitFlag & kUsesDataDescr) != 0;
+ bool hasDD = (mLFH.mGPBitFlag & kUsesDataDescr) != 0;
if (hasDD) {
// do something clever
//ALOGD("+++ has data descriptor\n");
diff --git a/tools/zipalign/ZipFile.cpp b/tools/zipalign/ZipFile.cpp
index f2f65a6..42cc349 100644
--- a/tools/zipalign/ZipFile.cpp
+++ b/tools/zipalign/ZipFile.cpp
@@ -35,6 +35,8 @@
#include <assert.h>
#include <inttypes.h>
+static_assert(sizeof(off_t) == 8, "off_t too small");
+
namespace android {
/*
@@ -205,56 +207,43 @@
*/
status_t ZipFile::readCentralDir(void)
{
- status_t result = OK;
- uint8_t* buf = NULL;
- off_t fileLength, seekStart;
- long readAmount;
- int i;
-
- fseek(mZipFp, 0, SEEK_END);
- fileLength = ftell(mZipFp);
+ fseeko(mZipFp, 0, SEEK_END);
+ off_t fileLength = ftello(mZipFp);
rewind(mZipFp);
/* too small to be a ZIP archive? */
if (fileLength < EndOfCentralDir::kEOCDLen) {
- ALOGD("Length is %ld -- too small\n", (long)fileLength);
- result = INVALID_OPERATION;
- goto bail;
+ ALOGD("Length is %lld -- too small\n", (long long) fileLength);
+ return INVALID_OPERATION;
}
- buf = new uint8_t[EndOfCentralDir::kMaxEOCDSearch];
- if (buf == NULL) {
- ALOGD("Failure allocating %d bytes for EOCD search",
- EndOfCentralDir::kMaxEOCDSearch);
- result = NO_MEMORY;
- goto bail;
- }
-
+ off_t seekStart;
+ size_t readAmount;
if (fileLength > EndOfCentralDir::kMaxEOCDSearch) {
seekStart = fileLength - EndOfCentralDir::kMaxEOCDSearch;
readAmount = EndOfCentralDir::kMaxEOCDSearch;
} else {
seekStart = 0;
- readAmount = (long) fileLength;
+ readAmount = fileLength;
}
- if (fseek(mZipFp, seekStart, SEEK_SET) != 0) {
- ALOGD("Failure seeking to end of zip at %ld", (long) seekStart);
- result = UNKNOWN_ERROR;
- goto bail;
+ if (fseeko(mZipFp, seekStart, SEEK_SET) != 0) {
+ ALOGD("Failure seeking to end of zip at %lld", (long long) seekStart);
+ return UNKNOWN_ERROR;
}
/* read the last part of the file into the buffer */
- if (fread(buf, 1, readAmount, mZipFp) != (size_t) readAmount) {
+ uint8_t buf[EndOfCentralDir::kMaxEOCDSearch];
+ if (fread(buf, 1, readAmount, mZipFp) != readAmount) {
if (feof(mZipFp)) {
- ALOGW("fread %ld bytes failed, unexpected EOF", readAmount);
+ ALOGW("fread %zu bytes failed, unexpected EOF", readAmount);
} else {
- ALOGW("fread %ld bytes failed, %s", readAmount, strerror(errno));
+ ALOGW("fread %zu bytes failed, %s", readAmount, strerror(errno));
}
- result = UNKNOWN_ERROR;
- goto bail;
+ return UNKNOWN_ERROR;
}
/* find the end-of-central-dir magic */
+ int i;
for (i = readAmount - 4; i >= 0; i--) {
if (buf[i] == 0x50 &&
ZipEntry::getLongLE(&buf[i]) == EndOfCentralDir::kSignature)
@@ -265,15 +254,14 @@
}
if (i < 0) {
ALOGD("EOCD not found, not Zip\n");
- result = INVALID_OPERATION;
- goto bail;
+ return INVALID_OPERATION;
}
/* extract eocd values */
- result = mEOCD.readBuf(buf + i, readAmount - i);
+ status_t result = mEOCD.readBuf(buf + i, readAmount - i);
if (result != OK) {
- ALOGD("Failure reading %ld bytes of EOCD values", readAmount - i);
- goto bail;
+ ALOGD("Failure reading %zu bytes of EOCD values", readAmount - i);
+ return result;
}
//mEOCD.dump();
@@ -281,8 +269,7 @@
mEOCD.mNumEntries != mEOCD.mTotalNumEntries)
{
ALOGD("Archive spanning not supported\n");
- result = INVALID_OPERATION;
- goto bail;
+ return INVALID_OPERATION;
}
/*
@@ -299,11 +286,10 @@
* The only thing we really need right now is the file comment, which
* we're hoping to preserve.
*/
- if (fseek(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
+ if (fseeko(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
ALOGD("Failure seeking to central dir offset %" PRIu32 "\n",
mEOCD.mCentralDirOffset);
- result = UNKNOWN_ERROR;
- goto bail;
+ return UNKNOWN_ERROR;
}
/*
@@ -318,7 +304,7 @@
if (result != OK) {
ALOGD("initFromCDE failed\n");
delete pEntry;
- goto bail;
+ return result;
}
mEntries.add(pEntry);
@@ -336,20 +322,16 @@
} else {
ALOGW("fread EOCD failed, %s", strerror(errno));
}
- result = INVALID_OPERATION;
- goto bail;
+ return INVALID_OPERATION;
}
if (ZipEntry::getLongLE(checkBuf) != EndOfCentralDir::kSignature) {
ALOGD("EOCD read check failed\n");
- result = UNKNOWN_ERROR;
- goto bail;
+ return UNKNOWN_ERROR;
}
ALOGV("+++ EOCD read check passed\n");
}
-bail:
- delete[] buf;
- return result;
+ return OK;
}
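
`readCentralDir` locates the End Of Central Directory record by reading the last `kMaxEOCDSearch` bytes of the file and scanning backwards for the 4-byte signature `0x06054b50` ("PK\x05\x06"). A hypothetical standalone version of that scan, with the little-endian word assembled by hand:

```
#include <cstdint>
#include <cstddef>

static int findEOCD(const uint8_t* buf, size_t len) {
    if (len < 4) return -1;
    for (int i = (int) len - 4; i >= 0; i--) {
        // Cheap first-byte check before assembling the full word.
        if (buf[i] == 0x50 &&
            (buf[i] | (buf[i + 1] << 8) | (buf[i + 2] << 16) |
             ((uint32_t) buf[i + 3] << 24)) == 0x06054b50u) {
            return i;  // offset of the EOCD record within buf
        }
    }
    return -1;  // no signature: not a ZIP, or the comment is too long
}
```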
@@ -370,9 +352,8 @@
{
ZipEntry* pEntry = NULL;
status_t result = OK;
- long lfhPosn, startPosn, endPosn, uncompressedLen;
- FILE* inputFp = NULL;
- uint32_t crc;
+ off_t lfhPosn, startPosn, endPosn, uncompressedLen;
+ uint32_t crc = 0;
time_t modWhen;
if (mReadOnly)
@@ -389,13 +370,14 @@
if (getEntryByName(storageName) != NULL)
return ALREADY_EXISTS;
+ FILE* inputFp = NULL;
if (!data) {
inputFp = fopen(fileName, FILE_OPEN_RO);
if (inputFp == NULL)
return errnoToStatus(errno);
}
- if (fseek(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
+ if (fseeko(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
result = UNKNOWN_ERROR;
goto bail;
}
@@ -413,9 +395,9 @@
* as a place-holder. In theory the LFH isn't necessary, but in
* practice some utilities demand it.
*/
- lfhPosn = ftell(mZipFp);
+ lfhPosn = ftello(mZipFp);
pEntry->mLFH.write(mZipFp);
- startPosn = ftell(mZipFp);
+ startPosn = ftello(mZipFp);
/*
* Copy the data in, possibly compressing it as we go.
@@ -432,11 +414,11 @@
* to be set through an API call, but I don't expect our
* criteria to change over time.
*/
- long src = inputFp ? ftell(inputFp) : size;
- long dst = ftell(mZipFp) - startPosn;
+ off_t src = inputFp ? ftello(inputFp) : size;
+ off_t dst = ftello(mZipFp) - startPosn;
if (dst + (dst / 10) > src) {
- ALOGD("insufficient compression (src=%ld dst=%ld), storing\n",
- src, dst);
+ ALOGD("insufficient compression (src=%lld dst=%lld), storing\n",
+ (long long) src, (long long) dst);
failed = true;
}
}
@@ -444,7 +426,7 @@
if (failed) {
compressionMethod = ZipEntry::kCompressStored;
if (inputFp) rewind(inputFp);
- fseek(mZipFp, startPosn, SEEK_SET);
+ fseeko(mZipFp, startPosn, SEEK_SET);
/* fall through to kCompressStored case */
}
}
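
The "insufficient compression" check above stores an entry rather than deflating it when compression saves less than roughly 9%: `dst + dst/10 > src` rejects the deflated copy unless it is about 10% smaller than the input. The heuristic in isolation, with two worked cases:

```
#include <sys/types.h>  // off_t

// Keep the deflated copy only when dst plus a 10% margin still fits
// inside the original size.
static bool worthCompressing(off_t src, off_t dst) {
    return dst + (dst / 10) <= src;
}
// worthCompressing(1000, 920) -> false: 920 + 92 = 1012 > 1000, store.
// worthCompressing(1000, 900) -> true:  900 + 90 =  990 <= 1000, deflate.
```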
@@ -463,7 +445,7 @@
}
// currently seeked to end of file
- uncompressedLen = inputFp ? ftell(inputFp) : size;
+ uncompressedLen = inputFp ? ftello(inputFp) : size;
/*
* We could write the "Data Descriptor", but there doesn't seem to
@@ -471,7 +453,7 @@
*
* Update file offsets.
*/
- endPosn = ftell(mZipFp); // seeked to end of compressed data
+ endPosn = ftello(mZipFp); // seeked to end of compressed data
/*
* Success! Fill out new values.
@@ -489,7 +471,7 @@
/*
* Go back and write the LFH.
*/
- if (fseek(mZipFp, lfhPosn, SEEK_SET) != 0) {
+ if (fseeko(mZipFp, lfhPosn, SEEK_SET) != 0) {
result = UNKNOWN_ERROR;
goto bail;
}
@@ -522,7 +504,7 @@
// Calculate where the entry payload offset will end up if we were to write
// it as-is.
- uint64_t expectedPayloadOffset = ftell(mZipFp) +
+ uint64_t expectedPayloadOffset = ftello(mZipFp) +
android::ZipEntry::LocalFileHeader::kLFHLen +
pEntry->mLFH.mFileNameLength +
pEntry->mLFH.mExtraFieldLength;
@@ -548,7 +530,7 @@
{
ZipEntry* pEntry = NULL;
status_t result;
- long lfhPosn, endPosn;
+ off_t lfhPosn, endPosn;
if (mReadOnly)
return INVALID_OPERATION;
@@ -557,7 +539,7 @@
assert(mZipFp != NULL);
assert(mEntries.size() == mEOCD.mTotalNumEntries);
- if (fseek(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
+ if (fseeko(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
result = UNKNOWN_ERROR;
goto bail;
}
@@ -585,7 +567,7 @@
* Write the LFH. Since we're not recompressing the data, we already
* have all of the fields filled out.
*/
- lfhPosn = ftell(mZipFp);
+ lfhPosn = ftello(mZipFp);
pEntry->mLFH.write(mZipFp);
/*
@@ -595,8 +577,7 @@
* fields as well. This is a fixed-size area immediately following
* the data.
*/
- if (fseek(pSourceZip->mZipFp, pSourceEntry->getFileOffset(), SEEK_SET) != 0)
- {
+ if (fseeko(pSourceZip->mZipFp, pSourceEntry->getFileOffset(), SEEK_SET) != 0) {
result = UNKNOWN_ERROR;
goto bail;
}
@@ -617,7 +598,7 @@
/*
* Update file offsets.
*/
- endPosn = ftell(mZipFp);
+ endPosn = ftello(mZipFp);
/*
* Success! Fill out new values.
@@ -654,7 +635,7 @@
{
ZipEntry* pEntry = NULL;
status_t result;
- long lfhPosn, uncompressedLen;
+ off_t lfhPosn, uncompressedLen;
if (mReadOnly)
return INVALID_OPERATION;
@@ -663,7 +644,7 @@
assert(mZipFp != NULL);
assert(mEntries.size() == mEOCD.mTotalNumEntries);
- if (fseek(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
+ if (fseeko(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
result = UNKNOWN_ERROR;
goto bail;
}
@@ -688,7 +669,7 @@
* as a place-holder. In theory the LFH isn't necessary, but in
* practice some utilities demand it.
*/
- lfhPosn = ftell(mZipFp);
+ lfhPosn = ftello(mZipFp);
pEntry->mLFH.write(mZipFp);
/*
@@ -698,8 +679,7 @@
* fields as well. This is a fixed-size area immediately following
* the data.
*/
- if (fseek(pSourceZip->mZipFp, pSourceEntry->getFileOffset(), SEEK_SET) != 0)
- {
+ if (fseeko(pSourceZip->mZipFp, pSourceEntry->getFileOffset(), SEEK_SET) != 0) {
result = UNKNOWN_ERROR;
goto bail;
}
@@ -712,7 +692,7 @@
result = NO_MEMORY;
goto bail;
}
- long startPosn = ftell(mZipFp);
+ off_t startPosn = ftello(mZipFp);
uint32_t crc;
if (compressFpToFp(mZipFp, NULL, buf, uncompressedLen, &crc) != OK) {
ALOGW("recompress of '%s' failed\n", pEntry->mCDE.mFileName);
@@ -720,13 +700,12 @@
free(buf);
goto bail;
}
- long endPosn = ftell(mZipFp);
+ off_t endPosn = ftello(mZipFp);
pEntry->setDataInfo(uncompressedLen, endPosn - startPosn,
pSourceEntry->getCRC32(), ZipEntry::kCompressDeflated);
free(buf);
} else {
- off_t copyLen;
- copyLen = pSourceEntry->getCompressedLen();
+ off_t copyLen = pSourceEntry->getCompressedLen();
if ((pSourceEntry->mLFH.mGPBitFlag & ZipEntry::kUsesDataDescr) != 0)
copyLen += ZipEntry::kDataDescriptorLen;
@@ -746,12 +725,12 @@
mEOCD.mNumEntries++;
mEOCD.mTotalNumEntries++;
mEOCD.mCentralDirSize = 0; // mark invalid; set by flush()
- mEOCD.mCentralDirOffset = ftell(mZipFp);
+ mEOCD.mCentralDirOffset = ftello(mZipFp);
/*
* Go back and write the LFH.
*/
- if (fseek(mZipFp, lfhPosn, SEEK_SET) != 0) {
+ if (fseeko(mZipFp, lfhPosn, SEEK_SET) != 0) {
result = UNKNOWN_ERROR;
goto bail;
}
@@ -978,7 +957,7 @@
status_t ZipFile::flush(void)
{
status_t result = OK;
- long eocdPosn;
+ off_t eocdPosn;
int i, count;
if (mReadOnly)
@@ -992,8 +971,7 @@
if (result != OK)
return result;
- if (fseek(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0)
- return UNKNOWN_ERROR;
+ if (fseeko(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0)
+ return UNKNOWN_ERROR;
count = mEntries.size();
for (i = 0; i < count; i++) {
@@ -1001,7 +979,7 @@
pEntry->mCDE.write(mZipFp);
}
- eocdPosn = ftell(mZipFp);
+ eocdPosn = ftello(mZipFp);
mEOCD.mCentralDirSize = eocdPosn - mEOCD.mCentralDirOffset;
mEOCD.write(mZipFp);
@@ -1011,8 +989,8 @@
* with plain files, or if we deleted some entries, there's a lot
* of wasted space at the end of the file. Remove it now.
*/
- if (ftruncate(fileno(mZipFp), ftell(mZipFp)) != 0) {
- ALOGW("ftruncate failed %ld: %s\n", ftell(mZipFp), strerror(errno));
+ if (ftruncate(fileno(mZipFp), ftello(mZipFp)) != 0) {
+ ALOGW("ftruncate failed %lld: %s\n", (long long) ftello(mZipFp), strerror(errno));
// not fatal
}
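
After rewriting the central directory in place, `flush` truncates the archive at the current write position so that bytes left over from a previously longer layout cannot trail the EOCD. A sketch of that tail-trimming step, assuming the stream was just written through:

```
#include <cstdio>
#include <unistd.h>  // ftruncate, fileno

static bool trimAfterCurrentPos(FILE* fp) {
    if (fflush(fp) != 0) return false;  // push buffered writes to the fd
    off_t end = ftello(fp);
    if (end == -1) return false;
    return ftruncate(fileno(fp), end) == 0;
}
```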
@@ -1141,32 +1119,32 @@
if (getSize > n)
getSize = n;
- if (fseek(fp, (long) src, SEEK_SET) != 0) {
- ALOGW("filemove src seek %ld failed, %s",
- (long) src, strerror(errno));
+ if (fseeko(fp, src, SEEK_SET) != 0) {
+ ALOGW("filemove src seek %lld failed, %s",
+ (long long) src, strerror(errno));
return UNKNOWN_ERROR;
}
if (fread(readBuf, 1, getSize, fp) != getSize) {
if (feof(fp)) {
- ALOGW("fread %zu bytes off=%ld failed, unexpected EOF",
- getSize, (long) src);
+ ALOGW("fread %zu bytes off=%lld failed, unexpected EOF",
+ getSize, (long long) src);
} else {
- ALOGW("fread %zu bytes off=%ld failed, %s",
- getSize, (long) src, strerror(errno));
+ ALOGW("fread %zu bytes off=%lld failed, %s",
+ getSize, (long long) src, strerror(errno));
}
return UNKNOWN_ERROR;
}
- if (fseek(fp, (long) dst, SEEK_SET) != 0) {
- ALOGW("filemove dst seek %ld failed, %s",
- (long) dst, strerror(errno));
+ if (fseeko(fp, dst, SEEK_SET) != 0) {
+ ALOGW("filemove dst seek %lld failed, %s",
+ (long long) dst, strerror(errno));
return UNKNOWN_ERROR;
}
if (fwrite(readBuf, 1, getSize, fp) != getSize) {
- ALOGW("filemove write %zu off=%ld failed, %s",
- getSize, (long) dst, strerror(errno));
+ ALOGW("filemove write %zu off=%lld failed, %s",
+ getSize, (long long) dst, strerror(errno));
return UNKNOWN_ERROR;
}
@@ -1263,10 +1241,10 @@
bool ReadAtOffset(uint8_t* buf, size_t len, off64_t offset) const {
// Data is usually requested sequentially, so this helps avoid pointless
- // fseeks every time we perform a read. There's an impedence mismatch
+ // seeks every time we perform a read. There's an impedance mismatch
// here because the original API was designed around pread and pwrite.
if (offset != current_offset_) {
- if (fseek(fp_, offset, SEEK_SET) != 0) {
+ if (fseeko(fp_, offset, SEEK_SET) != 0) {
return false;
}
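
`ReadAtOffset` adapts a seek-based `FILE*` to a pread-style interface; remembering where the last read ended makes sequential reads free of redundant seeks. A minimal stand-in, where `fp_` and `current_offset_` mirror the fields in the diff and everything else is illustrative:

```
#include <cstdio>
#include <cstdint>
#include <sys/types.h>  // off_t

class SeekingReader {
  public:
    explicit SeekingReader(FILE* fp) : fp_(fp), current_offset_(0) {}

    bool ReadAtOffset(uint8_t* buf, size_t len, off_t offset) {
        // Seek only when the request is not sequential.
        if (offset != current_offset_ &&
            fseeko(fp_, offset, SEEK_SET) != 0) {
            return false;
        }
        if (fread(buf, 1, len, fp_) != len) return false;
        current_offset_ = offset + len;  // next sequential read skips the seek
        return true;
    }

  private:
    FILE* fp_;
    off_t current_offset_;
};
```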
@@ -1298,10 +1276,10 @@
return NULL;
}
- fseek(mZipFp, 0, SEEK_SET);
+ fseeko(mZipFp, 0, SEEK_SET);
off_t offset = entry->getFileOffset();
- if (fseek(mZipFp, offset, SEEK_SET) != 0) {
+ if (fseeko(mZipFp, offset, SEEK_SET) != 0) {
goto bail;
}
diff --git a/tools/zipalign/tests/data/archiveWithOneDirectoryEntry.zip b/tools/zipalign/tests/data/archiveWithOneDirectoryEntry.zip
new file mode 100644
index 0000000..00be0ce
--- /dev/null
+++ b/tools/zipalign/tests/data/archiveWithOneDirectoryEntry.zip
Binary files differ
diff --git a/tools/zipalign/tests/src/align_test.cpp b/tools/zipalign/tests/src/align_test.cpp
index ff45187..a8433fa 100644
--- a/tools/zipalign/tests/src/align_test.cpp
+++ b/tools/zipalign/tests/src/align_test.cpp
@@ -12,6 +12,28 @@
using namespace android;
using namespace base;
+// This loads the whole file into memory, so be careful!
+static bool sameContent(const std::string& path1, const std::string& path2) {
+ std::string f1;
+ if (!ReadFileToString(path1, &f1)) {
+ printf("Unable to read '%s' content: %m\n", path1.c_str());
+ return false;
+ }
+
+ std::string f2;
+ if (!ReadFileToString(path2, &f2)) {
+ printf("Unable to read '%s' content %m\n", path1.c_str());
+ return false;
+ }
+
+ if (f1.size() != f2.size()) {
+ printf("File '%s' and '%s' are not the same\n", path1.c_str(), path2.c_str());
+ return false;
+ }
+
+ return f1.compare(f2) == 0;
+}
+
static std::string GetTestPath(const std::string& filename) {
static std::string test_data_dir = android::base::GetExecutableDirectory() + "/tests/data/";
return test_data_dir + filename;
@@ -87,3 +109,21 @@
int verified = verify(dst.c_str(), 4, false, true);
ASSERT_EQ(0, verified);
}
+
+TEST(Align, DirectoryEntryDoNotRequireAlignment) {
+ const std::string src = GetTestPath("archiveWithOneDirectoryEntry.zip");
+ int verified = verify(src.c_str(), 4, false, true);
+ ASSERT_EQ(0, verified);
+}
+
+TEST(Align, DirectoryEntry) {
+ const std::string src = GetTestPath("archiveWithOneDirectoryEntry.zip");
+ const std::string dst = GetTempPath("archiveWithOneDirectoryEntry_out.zip");
+
+ int processed = process(src.c_str(), dst.c_str(), 4, true, false, 4096);
+ ASSERT_EQ(0, processed);
+ ASSERT_TRUE(sameContent(src, dst));
+
+ int verified = verify(dst.c_str(), 4, false, true);
+ ASSERT_EQ(0, verified);
+}
diff --git a/tools/ziptime/ZipEntry.cpp b/tools/ziptime/ZipEntry.cpp
index e7b52ed..c8eb377 100644
--- a/tools/ziptime/ZipEntry.cpp
+++ b/tools/ziptime/ZipEntry.cpp
@@ -43,19 +43,16 @@
*/
status_t ZipEntry::initAndRewriteFromCDE(FILE* fp)
{
- status_t result;
- long posn;
-
/* read the CDE */
- result = mCDE.rewrite(fp);
+ status_t result = mCDE.rewrite(fp);
if (result != 0) {
LOG("mCDE.rewrite failed\n");
return result;
}
/* using the info in the CDE, go load up the LFH */
- posn = ftell(fp);
- if (fseek(fp, mCDE.mLocalHeaderRelOffset, SEEK_SET) != 0) {
+ off_t posn = ftello(fp);
+ if (fseeko(fp, mCDE.mLocalHeaderRelOffset, SEEK_SET) != 0) {
LOG("local header seek failed (%" PRIu32 ")\n",
mCDE.mLocalHeaderRelOffset);
return -1;
@@ -67,7 +64,7 @@
return result;
}
- if (fseek(fp, posn, SEEK_SET) != 0)
+ if (fseeko(fp, posn, SEEK_SET) != 0)
return -1;
return 0;
diff --git a/tools/ziptime/ZipFile.cpp b/tools/ziptime/ZipFile.cpp
index 1d111af..3002a65 100644
--- a/tools/ziptime/ZipFile.cpp
+++ b/tools/ziptime/ZipFile.cpp
@@ -40,8 +40,7 @@
/* open the file */
mZipFp = fopen(zipFileName, "r+b");
if (mZipFp == NULL) {
- int err = errno;
- LOG("fopen failed: %d\n", err);
+ LOG("fopen \"%s\" failed: %s\n", zipFileName, strerror(errno));
return -1;
}
@@ -72,52 +71,39 @@
*/
status_t ZipFile::rewriteCentralDir(void)
{
- status_t result = 0;
- uint8_t* buf = NULL;
- off_t fileLength, seekStart;
- long readAmount;
- int i;
-
- fseek(mZipFp, 0, SEEK_END);
- fileLength = ftell(mZipFp);
+ fseeko(mZipFp, 0, SEEK_END);
+ off_t fileLength = ftello(mZipFp);
rewind(mZipFp);
/* too small to be a ZIP archive? */
if (fileLength < EndOfCentralDir::kEOCDLen) {
- LOG("Length is %ld -- too small\n", (long)fileLength);
- result = -1;
- goto bail;
+ LOG("Length is %lld -- too small\n", (long long) fileLength);
+ return -1;
}
- buf = new uint8_t[EndOfCentralDir::kMaxEOCDSearch];
- if (buf == NULL) {
- LOG("Failure allocating %d bytes for EOCD search",
- EndOfCentralDir::kMaxEOCDSearch);
- result = -1;
- goto bail;
- }
-
+ off_t seekStart;
+ size_t readAmount;
if (fileLength > EndOfCentralDir::kMaxEOCDSearch) {
seekStart = fileLength - EndOfCentralDir::kMaxEOCDSearch;
readAmount = EndOfCentralDir::kMaxEOCDSearch;
} else {
seekStart = 0;
- readAmount = (long) fileLength;
+ readAmount = fileLength;
}
- if (fseek(mZipFp, seekStart, SEEK_SET) != 0) {
- LOG("Failure seeking to end of zip at %ld", (long) seekStart);
- result = -1;
- goto bail;
+ if (fseeko(mZipFp, seekStart, SEEK_SET) != 0) {
+ LOG("Failure seeking to end of zip at %lld", (long long) seekStart);
+ return -1;
}
/* read the last part of the file into the buffer */
- if (fread(buf, 1, readAmount, mZipFp) != (size_t) readAmount) {
- LOG("short file? wanted %ld\n", readAmount);
- result = -1;
- goto bail;
+ uint8_t buf[EndOfCentralDir::kMaxEOCDSearch];
+ if (fread(buf, 1, readAmount, mZipFp) != readAmount) {
+ LOG("short file? wanted %zu\n", readAmount);
+ return -1;
}
/* find the end-of-central-dir magic */
+ int i;
for (i = readAmount - 4; i >= 0; i--) {
if (buf[i] == 0x50 &&
ZipEntry::getLongLE(&buf[i]) == EndOfCentralDir::kSignature)
@@ -127,15 +113,14 @@
}
if (i < 0) {
LOG("EOCD not found, not Zip\n");
- result = -1;
- goto bail;
+ return -1;
}
/* extract eocd values */
- result = mEOCD.readBuf(buf + i, readAmount - i);
+ status_t result = mEOCD.readBuf(buf + i, readAmount - i);
if (result != 0) {
- LOG("Failure reading %ld bytes of EOCD values", readAmount - i);
- goto bail;
+ LOG("Failure reading %zu bytes of EOCD values", readAmount - i);
+ return result;
}
/*
@@ -152,49 +137,39 @@
* The only thing we really need right now is the file comment, which
* we're hoping to preserve.
*/
- if (fseek(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
+ if (fseeko(mZipFp, mEOCD.mCentralDirOffset, SEEK_SET) != 0) {
LOG("Failure seeking to central dir offset %" PRIu32 "\n",
mEOCD.mCentralDirOffset);
- result = -1;
- goto bail;
+ return -1;
}
/*
* Loop through and read the central dir entries.
*/
- int entry;
- for (entry = 0; entry < mEOCD.mTotalNumEntries; entry++) {
+ for (int entry = 0; entry < mEOCD.mTotalNumEntries; entry++) {
ZipEntry* pEntry = new ZipEntry;
-
result = pEntry->initAndRewriteFromCDE(mZipFp);
+ delete pEntry;
if (result != 0) {
LOG("initFromCDE failed\n");
- delete pEntry;
- goto bail;
+ return -1;
}
-
- delete pEntry;
}
-
/*
* If all went well, we should now be back at the EOCD.
*/
uint8_t checkBuf[4];
if (fread(checkBuf, 1, 4, mZipFp) != 4) {
LOG("EOCD check read failed\n");
- result = -1;
- goto bail;
+ return -1;
}
if (ZipEntry::getLongLE(checkBuf) != EndOfCentralDir::kSignature) {
LOG("EOCD read check failed\n");
- result = -1;
- goto bail;
+ return -1;
}
-bail:
- delete[] buf;
- return result;
+ return 0;
}
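
Both `readCentralDir` and `rewriteCentralDir` lose their `goto bail` cleanup label the same way: the EOCD search buffer moves from the heap to the stack, so every failure path can simply return. When a heap allocation really is required, the same early-return shape falls out of RAII; a sketch with `kMaxEOCDSearch` standing in for `EndOfCentralDir::kMaxEOCDSearch`:

```
#include <cstdio>
#include <cstdint>
#include <memory>

static int parseTail(FILE* fp, size_t kMaxEOCDSearch) {
    auto buf = std::make_unique<uint8_t[]>(kMaxEOCDSearch);
    if (fread(buf.get(), 1, kMaxEOCDSearch, fp) != kMaxEOCDSearch)
        return -1;   // buf is released here ...
    // ... scan buf for the EOCD signature ...
    return 0;        // ... and here, with no cleanup label needed
}
```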
/*