The HAL implementations are declared in:

- `VirtualCameraDevice`
- `VirtualCameraProvider`
- `VirtualCameraSession`
Virtual cameras report the `EXTERNAL` hardware level, but some features of the `EXTERNAL` hardware level are not fully supported.
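For example, a camera2 client can check the advertised hardware level before relying on features that `EXTERNAL` devices are not required to support. This is plain client-side `CameraCharacteristics` usage, not code from the Virtual Camera stack:

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CameraMetadata;

/** Standard camera2 usage: checks whether a camera reports the EXTERNAL hardware level. */
static boolean isExternalLevel(Context context, String cameraId) throws CameraAccessException {
    CameraManager manager = context.getSystemService(CameraManager.class);
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
    Integer level = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
    return level != null
            && level == CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_EXTERNAL;
}
```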
Here is a list of supported features:

- Single input, multiple output stream and capture

Notable missing features:

- Auto 3A (AWB, AE, AF): the virtual camera announces convergence of the 3A algorithms even though it cannot receive any 3A information from the owner.
- No flash/torch support.
Graphic data is exchanged using the Surface infrastructure. As in any other Camera HAL, the Surfaces to write data into are received from the client. Virtual Camera exposes a different Surface onto which the owner can write data. In the middle, we use an EGL Texture that adapts (if needed) the producer data to the required consumer format (scaling only for now, but support for rotation and cropping might be added in the future).

When the client application requires multiple resolutions, the closest one among the supported resolutions is used for the input data, and the image data is downscaled for the lower resolutions, as sketched below.
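The actual selection happens in the native service; the following Java sketch only illustrates a plausible "closest resolution" heuristic. The helper name and the pixel-count metric are assumptions of this example, not the real implementation:

```java
import android.util.Size;
import java.util.List;

/**
 * Illustrative only: picks the supported size whose pixel count is closest
 * to the requested size. Lower-resolution outputs are then downscaled from
 * that input. Assumes the supported list is non-empty.
 */
static Size closestSupportedSize(List<Size> supported, Size requested) {
    long requestedPixels = (long) requested.getWidth() * requested.getHeight();
    Size best = supported.get(0);
    long bestDelta = Long.MAX_VALUE;
    for (Size candidate : supported) {
        long pixels = (long) candidate.getWidth() * candidate.getHeight();
        long delta = Math.abs(pixels - requestedPixels);
        if (delta < bestDelta) {
            bestDelta = delta;
            best = candidate;
        }
    }
    return best;
}
```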
Depending on the type of output, the rendering pipelines change. Here is an overview of the YUV and JPEG pipelines.
YUV Rendering:

```
Virtual Device Owner Surface[1] (Producer)
    --{binds to}--> EGL Texture[1]
    --{renders into}--> Client Surface[1-n] (Consumer)
```

JPEG Rendering:

```
Virtual Device Owner Surface[1] (Producer)
    --{binds to}--> EGL Texture[1]
    --{compresses data into}--> temporary buffer
    --{renders into}--> Client Surface[1-n] (Consumer)
```
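On the client side, which pipeline serves a stream follows from the format of the output Surface the client provides. As a sketch using standard camera2 APIs (ordinary client code, not part of the Virtual Camera service; the sizes are arbitrary):

```java
import android.graphics.ImageFormat;
import android.media.ImageReader;
import android.view.Surface;

/**
 * Standard camera2 client setup: the format of each output Surface
 * determines which Virtual Camera pipeline (YUV or JPEG/BLOB) serves it.
 */
static Surface[] createOutputSurfaces() {
    ImageReader yuvReader =
            ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888, 2); // YUV pipeline
    ImageReader jpegReader =
            ImageReader.newInstance(640, 480, ImageFormat.JPEG, 2);        // JPEG pipeline
    // These Surfaces are then passed to the capture session,
    // e.g. via OutputConfiguration.
    return new Surface[] {yuvReader.getSurface(), jpegReader.getSurface()};
}
```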
Before reading the following, you must understand the concepts of `CaptureRequest` and `OutputConfiguration`.
1. The consumer creates a session with one or more Surfaces.

2. The VirtualCamera owner receives a call to `VirtualCameraCallback#onStreamConfigured` with a reference to another Surface where it can write into.

3. The consumer then starts sending `CaptureRequest`s. The owner receives a call to `VirtualCameraCallback#onProcessCaptureRequest`, at which point it should write the required data into the Surface it previously received (see the owner-side sketch after this list). At the same time, a new task is enqueued on the render thread.

4. The `VirtualCameraRenderThread` consumes the enqueued tasks as they come. It waits for the producer to write into the input Surface (using `Surface::waitForNextFrame`).

   > Note: Since the Surface API allows us to wait for the next frame, there is no need for the producer to notify when the frame is ready by calling a `processCaptureResult()` equivalent.

5. The EGL Texture is updated with the content of the Surface.

6. The EGL Texture renders into the output Surfaces.

7. The camera client is notified of the "shutter" event, and the `CaptureResult` is sent to the consumer.
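On the owner side, this flow maps onto a `VirtualCameraCallback` implementation along the following lines. The callback names come from this document; the exact parameter lists are an assumption based on the public SDK, so double-check them against the current API docs:

```java
import android.companion.virtual.camera.VirtualCameraCallback;
import android.view.Surface;

/**
 * Minimal owner-side sketch. The callback names appear in this document;
 * the parameter lists are assumptions and may differ from the SDK.
 */
class MyCameraFeed implements VirtualCameraCallback {
    private Surface inputSurface; // the Surface the owner writes frames into

    @Override
    public void onStreamConfigured(int streamId, Surface surface,
            int width, int height, int format) {
        // Step 2: keep the Surface that the Virtual Camera hands us.
        inputSurface = surface;
    }

    @Override
    public void onProcessCaptureRequest(int streamId, long frameId) {
        // Step 3: draw the next frame into the input Surface. The render
        // thread picks it up via Surface::waitForNextFrame; no explicit
        // processCaptureResult() equivalent is needed.
        drawFrame(inputSurface); // hypothetical helper that renders one frame
    }

    @Override
    public void onStreamClosed(int streamId) {
        // Stream torn down; stop producing frames.
        inputSurface = null;
    }

    private void drawFrame(Surface surface) {
        // Application-specific rendering (e.g. Canvas or OpenGL) goes here.
    }
}
```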
The `VirtualCameraRenderThread` module takes care of rendering the input from the owner to the output via the EGL Texture. The rendering is done either to a JPEG buffer (the BLOB rendering used to create a JPEG) or to a YUV buffer (used mainly for preview Surfaces or video). Two EGL programs (shaders), defined in `EglProgram`, handle the rendering of the data.
`EglDisplayContext` initializes the whole EGL environment (Display, Surface, Context, and Config).

The EGL rendering is backed by an `ANativeWindow`, which is just the native counterpart of the Surface; the Surface itself is the producer side of a buffer queue, the consumer being either the display (camera preview) or an encoder (to save the data or send it across the network).
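The service's `EglDisplayContext` is C++; the following Java sketch uses the platform `EGL14` API only to illustrate the same Display/Config/Context/Surface setup. The function name and the choice of a GLES2 RGBA8888 config are assumptions of this example:

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;

/** Illustrative EGL setup mirroring what EglDisplayContext does natively. */
static EGLSurface setUpEgl(Surface consumerSurface) {
    // Display: connect to the default EGL display and initialize it.
    EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
    int[] version = new int[2];
    EGL14.eglInitialize(display, version, 0, version, 1);

    // Config: pick an RGBA8888 config usable with GLES2 window surfaces.
    int[] configAttribs = {
            EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
            EGL14.EGL_SURFACE_TYPE, EGL14.EGL_WINDOW_BIT,
            EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8,
            EGL14.EGL_BLUE_SIZE, 8, EGL14.EGL_ALPHA_SIZE, 8,
            EGL14.EGL_NONE};
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfigs = new int[1];
    EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

    // Context: create a GLES2 context.
    int[] contextAttribs = {EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE};
    EGLContext context = EGL14.eglCreateContext(
            display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

    // Surface: wrap the consumer Surface (the ANativeWindow on the native side).
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
            display, configs[0], consumerSurface, new int[] {EGL14.EGL_NONE}, 0);
    EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);
    return eglSurface;
}
```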
To better understand how the EGL rendering works, the following resources can be used:

- Introduction to OpenGL: https://learnopengl.com/
- The official documentation of the EGL API: https://www.khronos.org/registry/egl/sdk/docs/man/xhtml/
- A Google search with the following query: `[function name] site:https://registry.khronos.org/EGL/sdk/docs/man/html/` (example: `eglSwapBuffers site:https://registry.khronos.org/EGL/sdk/docs/man/html/`)