Add @1.2::IPreparedModel::executeSynchronously() and corresponding VTS tests.

Bug: 119274127

Test: all of the following, with the appropriate android.hardware.neuralnetworks@1.${X}::IDevice/sample-all
    VtsHalNeuralnetworksV1_0TargetTest
    VtsHalNeuralnetworksV1_1CompatV1_0TargetTest
    VtsHalNeuralnetworksV1_1TargetTest
    VtsHalNeuralnetworksV1_2CompatV1_0TargetTest
    VtsHalNeuralnetworksV1_2CompatV1_1TargetTest
    VtsHalNeuralnetworksV1_2TargetTest

Change-Id: Iedfa485b4008d9cec3b81ff4c0ce3ebc0b83c823
(cherry picked from commit 49e41678f5781230b9f7bf02cc4886fee9891b71)
diff --git a/neuralnetworks/1.2/IPreparedModel.hal b/neuralnetworks/1.2/IPreparedModel.hal
index 5590487..4e91c67 100644
--- a/neuralnetworks/1.2/IPreparedModel.hal
+++ b/neuralnetworks/1.2/IPreparedModel.hal
@@ -51,8 +51,9 @@
      * and complete successfully (ErrorStatus::NONE). There must be
      * no failure unless the device itself is in a bad state.
      *
-     * Multiple threads can call the execute_1_2 function on the same IPreparedModel
-     * object concurrently with different requests.
+     * Any number of calls to the execute, execute_1_2, and executeSynchronously
+     * functions, in any combination, may be made concurrently, even on the same
+     * IPreparedModel object.
      *
      * @param request The input and output information on which the prepared
      *                model is to be executed.
@@ -71,4 +72,39 @@
      */
     execute_1_2(Request request, IExecutionCallback callback)
         generates (ErrorStatus status);
+
+    /**
+     * Performs a synchronous execution on a prepared model.
+     *
+     * The execution is performed synchronously with respect to the caller.
+     * executeSynchronously must verify the inputs to the function are
+     * correct. If there is an error, executeSynchronously must immediately
+     * return with the appropriate ErrorStatus value. If the inputs to the
+     * function are valid and there is no error, executeSynchronously must
+     * perform the execution, and must not return until the execution is
+     * complete.
+     *
+     * If the prepared model was prepared from a model wherein all tensor
+     * operands have fully specified dimensions, and the inputs to the function
+     * are valid, then the execution should complete successfully
+     * (ErrorStatus::NONE). There must be no failure unless the device itself is
+     * in a bad state.
+     *
+     * Any number of calls to the execute, execute_1_2, and executeSynchronously
+     * functions, in any combination, may be made concurrently, even on the same
+     * IPreparedModel object.
+     *
+     * @param request The input and output information on which the prepared
+     *                model is to be executed.
+     * @return status Error status of the execution, must be:
+     *                - NONE if execution is performed successfully
+     *                - DEVICE_UNAVAILABLE if driver is offline or busy
+     *                - GENERAL_FAILURE if there is an unspecified error
+     *                - OUTPUT_INSUFFICIENT_SIZE if provided output buffer is
+     *                  not large enough to store the resultant values
+     *                - INVALID_ARGUMENT if one of the input arguments is
+     *                  invalid
+     */
+    executeSynchronously(Request request)
+        generates (ErrorStatus status);
 };
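
For illustration only (not part of the patch): a minimal client-side sketch, assuming the HIDL-generated C++ bindings for android.hardware.neuralnetworks@1.2 and a caller that already holds a prepared model and a populated Request. Because executeSynchronously generates a single ErrorStatus, the generated proxy returns it directly and the call blocks until execution completes. The helper name runSynchronously is hypothetical.

#include <android/hardware/neuralnetworks/1.0/types.h>
#include <android/hardware/neuralnetworks/1.2/IPreparedModel.h>

using ::android::sp;
using ::android::hardware::Return;
using ::android::hardware::neuralnetworks::V1_0::ErrorStatus;
using ::android::hardware::neuralnetworks::V1_0::Request;
using ::android::hardware::neuralnetworks::V1_2::IPreparedModel;

// Hypothetical helper: runs one execution synchronously and maps a transport
// failure to GENERAL_FAILURE, leaving driver-reported statuses untouched.
ErrorStatus runSynchronously(const sp<IPreparedModel>& preparedModel,
                             const Request& request) {
    // The call does not return until the execution is complete (or an error
    // is detected), unlike execute_1_2, which reports completion through an
    // IExecutionCallback.
    Return<ErrorStatus> ret = preparedModel->executeSynchronously(request);
    if (!ret.isOk()) {
        // Binder/transport-level error (e.g. the service died), not a status
        // reported by the driver itself.
        return ErrorStatus::GENERAL_FAILURE;
    }
    return static_cast<ErrorStatus>(ret);
}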