Update TRANSPOSE_CONV_2D docs
* Add info about per-channel quantization
* Update current.txt
Test: mma
Change-Id: I197d984c8b65b4c46bf526eb137f212ad8844926
Merged-In: I197d984c8b65b4c46bf526eb137f212ad8844926
(cherry picked from commit 44015c090ae4d7df4b723b1928c2a446d49be6bf)
diff --git a/current.txt b/current.txt
index 8f93d8e..1d765f2 100644
--- a/current.txt
+++ b/current.txt
@@ -420,7 +420,7 @@
92714960d1a53fc2ec557302b41c7cc93d2636d8364a44bd0f85be0c92927ff8 android.hardware.neuralnetworks@1.2::IExecutionCallback
83885d366f22ada42c00d8854f0b7e7ba4cf73ddf80bb0d8e168ce132cec57ea android.hardware.neuralnetworks@1.2::IPreparedModel
e1c734d1545e1a4ae749ff1dd9704a8e594c59aea7c8363159dc258e93e0df3b android.hardware.neuralnetworks@1.2::IPreparedModelCallback
-313b341f1f6196a48cf304eaf067f67510c1ebc04df8c7cd536db5611df5c5c2 android.hardware.neuralnetworks@1.2::types
+769f8650631eef7a3ceedc8cf130f4b99eb52fe698a11609d55de32985a3dddf android.hardware.neuralnetworks@1.2::types
cf7a4ba516a638f9b82a249c91fb603042c2d9ca43fd5aad9cf6c0401ed2a5d7 android.hardware.nfc@1.2::INfc
abf98c2ae08bf765db54edc8068e36d52eb558cff6706b6fd7c18c65a1f3fc18 android.hardware.nfc@1.2::types
4cb252dc6372a874aef666b92a6e9529915aa187521a700f0789065c3c702ead android.hardware.power.stats@1.0::IPowerStats
diff --git a/neuralnetworks/1.2/types.hal b/neuralnetworks/1.2/types.hal
index 06bdc6a..ab17598 100644
--- a/neuralnetworks/1.2/types.hal
+++ b/neuralnetworks/1.2/types.hal
@@ -342,7 +342,7 @@
* * * input.scale * filter.scale).
*
* Available since API level 29:
- * * Quantized with symetric per channel quantization for the filter:
+ * * Quantized with symmetric per channel quantization for the filter:
* * * {@link OperandType::TENSOR_QUANT8_ASYMM} for input, and output.
* * * {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
* * * {@link OperandType::TENSOR_INT32} for bias (scale set to 0.0,
@@ -491,7 +491,7 @@
* * * input.scale * filter.scale).
*
* Available since API level 29:
- * * Quantized with symetric per channel quantization for the filter:
+ * * Quantized with symmetric per channel quantization for the filter:
* * * {@link OperandType::TENSOR_QUANT8_ASYMM} for input, and output.
* * * {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
* * * {@link OperandType::TENSOR_INT32} for bias (scale set to 0.0,
@@ -3018,7 +3018,7 @@
* * * {@link OperandType::TENSOR_INT32} for bias (with scale set to
* * * input.scale * filter.scale).
*
- * * Quantized with symetric per channel quantization for the filter:
+ * * Quantized with symmetric per channel quantization for the filter:
* * * {@link OperandType::TENSOR_QUANT8_ASYMM} for input, and output.
* * * {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
* * * {@link OperandType::TENSOR_INT32} for bias (scale set to 0.0,
@@ -4204,10 +4204,24 @@
* The output dimensions are functions of the filter dimensions, stride, and
* padding.
*
- * Supported tensor {@link OperandType}:
- * * {@link OperandType::TENSOR_FLOAT16}
- * * {@link OperandType::TENSOR_FLOAT32}
- * * {@link OperandType::TENSOR_QUANT8_ASYMM}
+ * Supported tensor {@link OperandType} configurations:
+ * * 16 bit floating point:
+ * * * {@link OperandType::TENSOR_FLOAT16} for input, filter, output, and bias.
+ *
+ * * 32 bit floating point:
+ * * * {@link OperandType::TENSOR_FLOAT32} for input, filter, output, and bias.
+ *
+ * * Quantized:
+ * * * {@link OperandType::TENSOR_QUANT8_ASYMM} for input, filter, and output.
+ * * * {@link OperandType::TENSOR_INT32} for bias (with scale set to
+ * * * input.scale * filter.scale).
+ *
+ * Available since API level 29:
+ * * Quantized with symmetric per channel quantization for the filter:
+ * * * {@link OperandType::TENSOR_QUANT8_ASYMM} for input, and output.
+ * * * {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
+ * * * {@link OperandType::TENSOR_INT32} for bias (scale set to 0.0,
+ * * * each value scaling is separate and equal to input.scale * filter.scales[channel]).
*
* Supported tensor rank: 4, with "NHWC" or "NCHW" data layout.
* With the default data layout NHWC, the data is stored in the order of:
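
For reference, the explicit-padding variant of this operation leaves the output size to be computed by the implementation. A minimal C++ sketch follows, assuming the standard transposed-convolution relation out = (in - 1) * stride + filter - pad_begin - pad_end; the HAL text only states that the output dimensions are functions of the filter dimensions, stride, and padding, so this exact formula and the helper name are assumptions, not taken from types.hal:

    // Sketch only: standard transposed-convolution output-size relation.
    #include <cstdint>

    static uint32_t transposeConvOutputDim(uint32_t inDim, uint32_t filterDim,
                                           uint32_t stride, uint32_t padBegin,
                                           uint32_t padEnd) {
        return (inDim - 1) * stride + filterDim - padBegin - padEnd;
    }

    // Example: a 4x4 input, 3x3 filter, stride 2, no padding gives a 9x9 output:
    // transposeConvOutputDim(4, 3, 2, 0, 0) == 9.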
@@ -4221,14 +4235,20 @@
* specifying the input.
* * 1: A 4-D tensor, of shape
* [depth_out, filter_height, filter_width, depth_in], specifying the
- * filter.
+ * filter. For tensor of type
+ * {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL} the channel
+ * dimension (extraParams.channelQuant.channelDim) must be set to 0.
* * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input
* tensor of type {@link OperandType::TENSOR_FLOAT32} or
* {@link OperandType::TENSOR_FLOAT16}, the bias should be of the
* same type. For input tensor of type
* {@link OperandType::TENSOR_QUANT8_ASYMM}, the bias should be
* of {@link OperandType::TENSOR_INT32}, with zeroPoint of 0 and
- * bias_scale == input_scale * filter_scale.
+ * bias_scale == input_scale * filter_scale. For filter tensor of
+ * {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL}, the bias
+ * must be of {@link OperandType::TENSOR_INT32}, with zeroPoint of
+ * 0 and bias_scale of 0. The actual scale of each value 'i' is equal
+ * to bias_scale[i] = input_scale * filter_scale[i].
* * 3: An {@link OperandType::INT32} scalar, specifying the padding on
* the left, in the ‘width’ dimension.
* * 4: An {@link OperandType::INT32} scalar, specifying the padding on
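
To make the bias quantization rule above concrete, here is a minimal C++ sketch of the bias_scale[i] = input_scale * filter_scale[i] relationship for a per-channel quantized filter; the helper name is hypothetical and not part of the HAL:

    // Sketch only: for a TENSOR_QUANT8_SYMM_PER_CHANNEL filter, the bias operand
    // is declared with scale 0.0 and zeroPoint 0, and the scale actually applied
    // to bias value 'i' is input_scale * filter_scales[i].
    #include <cstddef>
    #include <vector>

    std::vector<float> effectiveBiasScales(float inputScale,
                                           const std::vector<float>& filterScales) {
        std::vector<float> biasScales(filterScales.size());
        for (std::size_t i = 0; i < filterScales.size(); ++i) {
            biasScales[i] = inputScale * filterScales[i];  // bias_scale[i]
        }
        return biasScales;
    }

    // A real-valued bias entry is then recovered as bias_int32[i] * biasScales[i].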
@@ -4252,14 +4272,20 @@
* specifying the input.
* * 1: A 4-D tensor, of shape
* [depth_out, filter_height, filter_width, depth_in], specifying the
- * filter.
+ * filter. For tensor of type
+ * {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL} the channel
+ * dimension (extraParams.channelQuant.channelDim) must be set to 0.
* * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input
* tensor of type {@link OperandType::TENSOR_FLOAT32} or
* {@link OperandType::TENSOR_FLOAT16}, the bias should be of the
* same type. For input tensor of type
* {@link OperandType::TENSOR_QUANT8_ASYMM}, the bias should be
* of {@link OperandType::TENSOR_INT32}, with zeroPoint of 0 and
- * bias_scale == input_scale * filter_scale.
+ * bias_scale == input_scale * filter_scale. For filter tensor of
+ * {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL}, the bias
+ * must be of {@link OperandType::TENSOR_INT32}, with zeroPoint of
+ * 0 and bias_scale of 0. The actual scale of each value 'i' is equal
+ * to bias_scale[i] = input_scale * filter_scale[i].
* * 3: An {@link OperandType::TENSOR_INT32} tensor, specifying the output
* tensor shape.
* * 4: An {@link OperandType::INT32} scalar, specifying the implicit
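
A hedged sketch of how the filter and bias operands described above could be declared through the NNAPI NDK API at API level 29, with channelDim set to 0 as required; the header path, operand indices, shapes, and scale values are illustrative assumptions:

    // Sketch only: declaring a per-channel quantized filter and its INT32 bias.
    // Assumes the NNAPI NDK header location and API level 29 availability.
    #include <android/NeuralNetworks.h>
    #include <cstdint>
    #include <vector>

    void addTransposeConvFilterAndBias(ANeuralNetworksModel* model) {
        // Filter: [depth_out, filter_height, filter_width, depth_in], e.g. 8x3x3x4
        // (the shape and scales below are made-up example values).
        const uint32_t filterDims[4] = {8, 3, 3, 4};
        ANeuralNetworksOperandType filterType = {};
        filterType.type = ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL;
        filterType.dimensionCount = 4;
        filterType.dimensions = filterDims;
        filterType.scale = 0.0f;   // per-channel scales are supplied separately
        filterType.zeroPoint = 0;
        ANeuralNetworksModel_addOperand(model, &filterType);

        const std::vector<float> filterScales(8, 0.05f);  // one scale per output channel
        ANeuralNetworksSymmPerChannelQuantParams channelQuant = {};
        channelQuant.channelDim = 0;  // must be 0 for TRANSPOSE_CONV_2D
        channelQuant.scaleCount = static_cast<uint32_t>(filterScales.size());
        channelQuant.scales = filterScales.data();
        // Assumes the filter was added at operand index 1 in this model.
        ANeuralNetworksModel_setOperandSymmPerChannelQuantParams(model, 1, &channelQuant);

        // Bias: [depth_out], TENSOR_INT32 with scale 0.0 and zeroPoint 0; the scale
        // applied to each bias value is input_scale * filterScales[channel].
        const uint32_t biasDims[1] = {8};
        ANeuralNetworksOperandType biasType = {};
        biasType.type = ANEURALNETWORKS_TENSOR_INT32;
        biasType.dimensionCount = 1;
        biasType.dimensions = biasDims;
        biasType.scale = 0.0f;
        biasType.zeroPoint = 0;
        ANeuralNetworksModel_addOperand(model, &biasType);
    }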
@@ -4279,7 +4305,9 @@
* * 0: The output 4-D tensor, of shape
* [batches, out_height, out_width, depth_out]. For output tensor of
* {@link OperandType::TENSOR_QUANT8_ASYMM}, the following condition
- * must be satisfied: output_scale > input_scale * filter_scale.
+ * must be satisfied: output_scale > input_scale * filter_scale (for
+ * filter tensor of {@link OperandType::TENSOR_QUANT8_SYMM_PER_CHANNEL}
+ * this condition must be true for all filter scales).
*
* Available since API level 29.
*/
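
The output-scale condition in the Outputs section lends itself to a simple validity check. A minimal C++ sketch, assuming that in the per-channel case the inequality just has to hold for every filter scale, as stated above; the helper name is hypothetical:

    // Sketch only: output_scale must be strictly greater than
    // input_scale * filter_scale; with a per-channel quantized filter the check
    // is repeated for every channel's scale.
    #include <vector>

    bool outputScaleIsValid(float inputScale, float outputScale,
                            const std::vector<float>& filterScales) {
        for (float filterScale : filterScales) {
            if (outputScale <= inputScale * filterScale) {
                return false;
            }
        }
        return true;
    }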