Fix the spec for TENSOR_QUANT8_ASYMM to match our validation.
- A scale of 0.0 is invalid for a quantized tensor.
Bug: 77236592
Test: mma
Merged-In: I3a53d6303d8c964d451e17a3b1671de82d0ff335
Change-Id: I3a53d6303d8c964d451e17a3b1671de82d0ff335
(cherry picked from commit a82d39102af38725f72a415a69330f9304637f96)
diff --git a/current.txt b/current.txt
index 8f9d701..af44905 100644
--- a/current.txt
+++ b/current.txt
@@ -248,5 +248,5 @@
# Future changes to HALs
5804ca86611d72e5481f022b3a0c1b334217f2e4988dad25730c42af2d1f4d1c android.hardware.neuralnetworks@1.0::IDevice
-08ae9fc24f21f809e9b8501dfbc803662fcd6a8d8e1fb71d9dd7c0c4c6f5d556 android.hardware.neuralnetworks@1.0::types
+1488db5ffb8a7979488d1084761aab8bca2f59bc9a02d75cdefc296afeaf591b android.hardware.neuralnetworks@1.0::types
diff --git a/neuralnetworks/1.0/types.hal b/neuralnetworks/1.0/types.hal
index a951d60..5b8f22c 100644
--- a/neuralnetworks/1.0/types.hal
+++ b/neuralnetworks/1.0/types.hal
@@ -44,7 +44,7 @@
*
* Attached to this tensor are two numbers that can be used to convert the
* 8 bit integer to the real value and vice versa. These two numbers are:
- * - scale: a 32 bit floating point value
+ * - scale: a 32 bit floating point value greater than zero
* - zero_value: a 32 bit integer
*
* The formula is:
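
The spec change above makes the dequantization formula, real_value = (integer_value - zero_value) * scale, well-defined in both directions: with scale == 0.0 the quantize direction would divide by zero. A minimal sketch of the conversion with the tightened validation (the helper names are illustrative, not part of the HAL):

```cpp
#include <cassert>
#include <cstdint>

// Dequantize an 8-bit asymmetric quantized value per the TENSOR_QUANT8_ASYMM
// spec: real_value = (integer_value - zero_value) * scale.
// Per the updated spec, scale must be strictly greater than zero.
float dequantize(uint8_t q, float scale, int32_t zero_value) {
    assert(scale > 0.0f);  // scale of 0.0 is invalid for a quantized tensor
    return static_cast<float>(static_cast<int32_t>(q) - zero_value) * scale;
}
```

A driver validating an operand would reject scale <= 0.0 up front, e.g. `dequantize(130, 0.5f, 128)` yields 1.0f, while a scale of 0.0f trips the assertion.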