NNAPI narrow evaluation for P -- HAL
We have determined that for Android P it is sufficient to have a
mechanism for a developer to specify, on a per-model basis, that it is
acceptable for FLOAT32 operations to be carried out as if they were
FLOAT16 operations. This CL extends the HAL Model structure to carry
this precision information.
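
For reference, a minimal sketch of how an application would opt in from
the NDK side. This assumes the companion framework change that exposes
ANeuralNetworksModel_relaxComputationFloat32toFloat16, the NDK
counterpart of the HAL field added below; it is not part of this CL.

    #include <android/NeuralNetworks.h>

    // Sketch: opt a model into relaxed FP32 computation before
    // finishing it. Drivers may then carry out TENSOR_FLOAT32
    // operations with FP16 range and/or precision.
    ANeuralNetworksModel* model = nullptr;
    ANeuralNetworksModel_create(&model);
    // ... add operands and operations here ...
    ANeuralNetworksModel_relaxComputationFloat32toFloat16(model,
                                                          /*allow=*/true);
    ANeuralNetworksModel_finish(model);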
Bug: 63911257
Test: mm
Test: NeuralNetworksTest
Test: VtsHalNeuralnetworksV1_0TargetTest
Merged-In: I5331e69c05c20c588035897322a6c7dceca374a0
Change-Id: I5331e69c05c20c588035897322a6c7dceca374a0
(cherry picked from commit 7ffe78ba76c679724a7d3deed1963196fde0e3ce)
diff --git a/neuralnetworks/1.1/types.hal b/neuralnetworks/1.1/types.hal
index 18863d3..fae5dd0 100644
--- a/neuralnetworks/1.1/types.hal
+++ b/neuralnetworks/1.1/types.hal
@@ -330,4 +330,12 @@
* equals OperandLifeTime::CONSTANT_REFERENCE.
*/
vec<memory> pools;
+
+ /**
+ * 'true' indicates TENSOR_FLOAT32 may be calculated with range and/or
+ * precision as low as that of the IEEE 754 16-bit floating-point format.
+ * 'false' indicates TENSOR_FLOAT32 must be calculated using at least the
+ * range and precision of the IEEE 754 32-bit floating-point format.
+ */
+ bool relaxComputationFloat32toFloat16;
};
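
On the driver side, a V1_1 implementation consults the new field when
deciding how to execute a model. A hypothetical sketch, using the
generated HIDL types header; the helper name below is a placeholder,
not driver API:

    #include <android/hardware/neuralnetworks/1.1/types.h>

    using ::android::hardware::neuralnetworks::V1_1::Model;

    // Hypothetical helper: returns true when the app has accepted
    // FP16 range/precision for TENSOR_FLOAT32 operations, letting a
    // driver substitute half-precision kernels.
    bool mayUseFp16Kernels(const Model& model) {
        return model.relaxComputationFloat32toFloat16;
    }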