Fix the NNAPI HAL documentation about ADD and MUL
am: 9237ae8889

Document TENSOR_QUANT8_ASYMM as a supported tensor type for the ADD and
MUL operations, and update the types.hal hash in current.txt to match.

Change-Id: I82477495696172b456305ac7dc739d93cdcd9e2e
diff --git a/current.txt b/current.txt
index 83479f3..8f9d701 100644
--- a/current.txt
+++ b/current.txt
@@ -248,5 +248,5 @@
# Future changes to HALs
5804ca86611d72e5481f022b3a0c1b334217f2e4988dad25730c42af2d1f4d1c android.hardware.neuralnetworks@1.0::IDevice
-088b30a9c9ce27bc955b08a03c38c208f8f65b51133053c7656c875479801b99 android.hardware.neuralnetworks@1.0::types
+08ae9fc24f21f809e9b8501dfbc803662fcd6a8d8e1fb71d9dd7c0c4c6f5d556 android.hardware.neuralnetworks@1.0::types
diff --git a/neuralnetworks/1.0/types.hal b/neuralnetworks/1.0/types.hal
index 12461e9..a951d60 100644
--- a/neuralnetworks/1.0/types.hal
+++ b/neuralnetworks/1.0/types.hal
@@ -84,6 +84,7 @@
* output.dimension = {5, 4, 3, 2}
*
* Supported tensor types: {@link OperandType::TENSOR_FLOAT32}
+ * {@link OperandType::TENSOR_QUANT8_ASYMM}
* Supported tensor rank: up to 4
*
* Inputs:
@@ -645,6 +646,7 @@
* input operands. It starts with the trailing dimensions, and works its way forward.
*
* Supported tensor types: {@link OperandType::TENSOR_FLOAT32}
+ * {@link OperandType::TENSOR_QUANT8_ASYMM}
* Supported tensor rank: up to 4
*
* Inputs:
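---

The two hunks above both describe the implicit broadcasting rule for ADD and MUL: output dimensions are compared starting from the trailing dimensions, working forward. As an illustration only (not part of this patch), the sketch below computes a broadcast output shape under the usual NumPy-style rule that a dimension of 1 stretches to match the other operand; the input shapes {4, 1, 2} and {5, 4, 3, 1} are chosen so that the result reproduces the documented output.dimension = {5, 4, 3, 2}.

```cpp
// Sketch only: broadcast-shape computation matching the rule described in the
// HAL comment (trailing dimensions first, a dimension of 1 stretches to match).
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

std::vector<uint32_t> broadcastShape(const std::vector<uint32_t>& a,
                                     const std::vector<uint32_t>& b) {
    const size_t rank = std::max(a.size(), b.size());
    std::vector<uint32_t> out(rank);
    // Walk from the trailing dimension forward, as the documentation states.
    for (size_t i = 0; i < rank; ++i) {
        const uint32_t da = i < a.size() ? a[a.size() - 1 - i] : 1;
        const uint32_t db = i < b.size() ? b[b.size() - 1 - i] : 1;
        if (da != db && da != 1 && db != 1) {
            throw std::invalid_argument("shapes are not broadcast-compatible");
        }
        out[rank - 1 - i] = std::max(da, db);
    }
    return out;
}

int main() {
    // Illustrative inputs that yield the documented example shape {5, 4, 3, 2}.
    const auto out = broadcastShape({4, 1, 2}, {5, 4, 3, 1});
    for (uint32_t d : out) std::cout << d << ' ';  // prints: 5 4 3 2
    std::cout << '\n';
    return 0;
}
```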
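The substance of the change is that TENSOR_QUANT8_ASYMM is now documented as a supported tensor type for ADD and MUL. Below is a minimal sketch of how an application could declare a quantized ADD through the NDK NeuralNetworks C API, which mirrors these HAL operand types; the shapes, scale, zeroPoint, and operand ordering shown are illustrative assumptions, not values taken from this patch, and error handling is elided for brevity.

```cpp
// Sketch only: a quantized ADD built with the NDK NeuralNetworks C API.
#include <android/NeuralNetworks.h>
#include <cstdint>

bool buildQuantizedAdd(ANeuralNetworksModel** outModel) {
    ANeuralNetworksModel* model = nullptr;
    if (ANeuralNetworksModel_create(&model) != ANEURALNETWORKS_NO_ERROR) return false;

    uint32_t dims[4] = {5, 4, 3, 2};
    ANeuralNetworksOperandType quant8 = {
            .type = ANEURALNETWORKS_TENSOR_QUANT8_ASYMM,
            .dimensionCount = 4,
            .dimensions = dims,
            .scale = 0.5f,   // real_value = (quantized_value - zeroPoint) * scale
            .zeroPoint = 128,
    };
    ANeuralNetworksOperandType int32Scalar = {
            .type = ANEURALNETWORKS_INT32,
            .dimensionCount = 0,
            .dimensions = nullptr,
            .scale = 0.0f,
            .zeroPoint = 0,
    };

    // Operands 0 and 1: quantized inputs; 2: fused activation code; 3: output.
    ANeuralNetworksModel_addOperand(model, &quant8);       // 0
    ANeuralNetworksModel_addOperand(model, &quant8);       // 1
    ANeuralNetworksModel_addOperand(model, &int32Scalar);  // 2
    ANeuralNetworksModel_addOperand(model, &quant8);       // 3

    // The activation operand is a constant; no fused activation here.
    int32_t fuseNone = ANEURALNETWORKS_FUSED_NONE;
    ANeuralNetworksModel_setOperandValue(model, 2, &fuseNone, sizeof(fuseNone));

    uint32_t addInputs[3] = {0, 1, 2};
    uint32_t addOutputs[1] = {3};
    ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD,
                                      3, addInputs, 1, addOutputs);

    // Operands 0 and 1 are model inputs; operand 3 is the model output.
    ANeuralNetworksModel_identifyInputsAndOutputs(model, 2, addInputs, 1, addOutputs);
    ANeuralNetworksModel_finish(model);

    *outModel = model;
    return true;
}
```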