
The NVIDIA Holoscan Sensor Bridge has become an alternative for low-latency capture, connecting a MIPI camera to a Lattice CrossLink NX FPGA. The data is transmitted to a Lattice CertusPro NX FPGA, which acts as an intermediate FPGA for data packaging and transmission over 10-Gigabit Ethernet. The purpose of the intermediate FPGA is to integrate image signal processing algorithms before the image leaves the Holoscan Sensor Bridge, so that it is ready to be consumed by platforms like the NVIDIA AGX Orin, shortening the capture latency. This will be explored in the future. For more information, please visit our blog post Leveraging Low Latency to the Next Level with the Holoscan Sensor Bridge.
The Holoscan Sensor Bridge can also be connected to platforms compatible with the Data Plane Development Kit (DPDK), such as the NVIDIA IGX Orin, simplifying the network stack and reducing kernel overhead during packet exchange.
In this blog, we measure the glass-to-glass latency of the Holoscan Sensor Bridge with an IMX274 camera connected to an NVIDIA AGX Orin. No optimisations on the FPGA or the network stack were performed during these measurements. Moreover, we present rough comparisons against the latency of the Argus MIPI capture integrated into the NVIDIA AGX Orin.
What Is Glass-to-Glass Latency?
Glass-to-glass latency is a common metric for live video display that quantifies the delay between camera capture and video display. Low latency is crucial in applications like medical endoscopy, where delays can lead to medical misinterpretations.
The test consists of a camera connected to a system that displays the captured image on a monitor. The setup has two monitors: one showing a chronometer and another displaying the live video capture. The camera must point at both monitors. Then, with a separate camera (e.g. a smartphone), the setup is photographed multiple times until a picture clearly shows the clock on both monitors. The glass-to-glass latency is the difference between the two readings (a small worked example follows the illustration). The following picture illustrates the aforementioned scenario:

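To make the arithmetic concrete, the sketch below computes the latency from the two readings taken off one photograph; the timestamp values are hypothetical:

```python
# Glass-to-glass latency from a single photograph showing both monitors.
# Both readings (in seconds) are hypothetical values taken off the photo.
chronometer_reading = 72.415   # clock shown on the chronometer monitor
capture_reading = 72.379       # clock visible inside the live video feed

latency_ms = (chronometer_reading - capture_reading) * 1000.0
print(f"Glass-to-glass latency: {latency_ms:.2f} ms")  # -> 36.00 ms
```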
Setup of the Holoscan Sensor Bridge and Jetson AGX Orin
For the glass-to-glass measurement, the following hardware components are needed:
An NVIDIA Jetson AGX Orin developer kit
A USB-C power supply for the NVIDIA Jetson AGX Orin
A USB-C power supply for the Holoscan Sensor Bridge (2 A is sufficient)
An Ethernet cable: Category 6 or better
A micro-USB data cable
A DisplayPort cable (optionally with an HDMI adapter for the monitor)
A monitor or display
A USB keyboard and mouse
A smartphone with a camera (or a digital camera) to take pictures

For the setup and instructions on getting started, please visit our blog post Getting Started with the NVIDIA Jetson and the Holoscan Sensor Bridge and our developer wiki.
Measurements
This section shows the results and the tools used to get the measurements.
Tools
For the live capture, we followed the instructions from our blog Getting Started with the NVIDIA Jetson and the Holoscan Sensor Bridge. It details the instructions for installing JetPack and running the Holoscan demo.
For the chronometer, any terminal chronometer with millisecond resolution can be used, running on a computer whose monitor has a refresh rate of at least 60 Hz. The critical requirement is that the chronometer never misses the refresh-rate deadlines, as dropped updates would invalidate the readings.
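As a hypothetical illustration, the following minimal Python chronometer prints the elapsed time with millisecond resolution in a terminal:

```python
import time

# Minimal terminal chronometer with millisecond resolution.
# The update loop must keep up with the monitor's refresh rate;
# missed redraws would invalidate the glass-to-glass readings.
start = time.monotonic()
try:
    while True:
        elapsed = time.monotonic() - start
        # Carriage return redraws the same terminal line each iteration.
        print(f"\r{elapsed:10.3f} s", end="", flush=True)
        time.sleep(0.001)  # ~1 ms update period
except KeyboardInterrupt:
    print()  # leave the last reading visible on exit
```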
Image Signal Processing Pipeline
To understand where the latency originates and how to decompose it, it is necessary to look at the ISP pipeline, which is illustrated below:

The process begins with the capture on the Holoscan Sensor Bridge, which comprises the camera PHY and packetisation using UDP over Ethernet. The NVIDIA AGX Orin then receives the packets through UDP sockets; at this point the image is a pure Bayer pattern. An image signal processing stage follows, applying black-level and white-balance corrections. Demosaicing then converts the corrected Bayer image into a 64-bit RGBA image (16 bits per channel), which is gamma-corrected and visualised through Vulkan.
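To make the demosaic step concrete, the following minimal NumPy sketch expands each 2x2 RGGB Bayer cell into a single RGBA pixel with 16 bits per channel, matching the 64-bit RGBA format mentioned above. It is a didactic, half-resolution stand-in, not the interpolating algorithm the actual pipeline runs:

```python
import numpy as np

def demosaic_rggb(bayer: np.ndarray) -> np.ndarray:
    """Toy nearest-neighbour demosaic: one RGBA pixel per 2x2 RGGB cell."""
    r = bayer[0::2, 0::2].astype(np.uint32)          # red samples
    g = (bayer[0::2, 1::2].astype(np.uint32)
         + bayer[1::2, 0::2]) // 2                   # average the two greens
    b = bayer[1::2, 1::2].astype(np.uint32)          # blue samples

    rgba = np.empty(r.shape + (4,), dtype=np.uint16)
    # Scale 8-bit samples to 16 bits per channel (4 x 16 = 64-bit RGBA).
    rgba[..., 0] = r * 257
    rgba[..., 1] = g * 257
    rgba[..., 2] = b * 257
    rgba[..., 3] = 65535                             # opaque alpha
    return rgba

bayer = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
print(demosaic_rggb(bayer).shape)  # -> (540, 960, 4)
```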
Results
We performed multiple experiments to determine how the latency behaves under different conditions, mainly:
Power Profile (Mode)
Jetson Clocks: enabled or disabled
Exclusive Display: no display manager running
Simplified Image Signal Processing Pipeline: running the full IMX274 capture example versus only the debayering element.
Using either a DisplayPort or an HDMI monitor.
Configuring the Jetson
To configure the Jetson according to the aforementioned conditions, please check the following resources:
Power Mode and Jetson Clocks: use the jetson-stats tool, or the command-line utilities shown in the sketch after this list.
Exclusive Display: follow this guide for more information.
Simplified Image Signal Processing: remove all operators except the CSI operator, the debayer operator, and Holoviz.
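For reference, the power profile and clocks can also be set from the command line with the standard nvpmodel and jetson_clocks utilities, which the following sketch wraps; the MAXN mode index (0 on the AGX Orin developer kit) is an assumption worth verifying with nvpmodel -q on your unit:

```python
import subprocess

# Select the MAXN power profile. Mode 0 corresponds to MAXN on the
# AGX Orin developer kit; verify the index with `sudo nvpmodel -q`.
subprocess.run(["sudo", "nvpmodel", "-m", "0"], check=True)

# Pin all clocks to their maximum for the active power profile.
subprocess.run(["sudo", "jetson_clocks"], check=True)

# Query the active mode to confirm the change took effect.
subprocess.run(["sudo", "nvpmodel", "-q"], check=True)
```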
Final Results
The results under these configurations are as follows:
| Monitor and Display Mode | Power Profile / Jetson Clocks | Image Signal Processing Pipeline | Glass-to-Glass Latency |
|---|---|---|---|
| DisplayPort / Exclusive Display | MAXN / Enabled | Debayer only | 35.95 ms |
| DisplayPort / Exclusive Display | MAXN / Enabled | IMX274 capture example | 41.61 ms |
| DisplayPort / Non-Exclusive Display | MAXN / Enabled | IMX274 capture example | 49.62 ms |
| HDMI / Exclusive Display | MAXN / Enabled | IMX274 capture example | 49.71 ms |
| DisplayPort / Exclusive Display | MAXN / Disabled | IMX274 capture example | 49.51 ms |
| DisplayPort / Exclusive Display | 15W | IMX274 capture example | 68.37 ms |
According to the table above, the latency varies with the operating conditions and the hardware: it is sensitive to the power profile, the Jetson Clocks setting, and the display manager overhead. The best configuration comes in just under 36 ms glass-to-glass, which at 60 fps (16.7 ms per frame) corresponds to roughly two frames of latency, an impressive figure that compares favourably with other capture mechanisms in the NVIDIA ecosystem.
Comparison Against Other Capture Methods on NVIDIA Jetson
The camera can also be connected directly to the MIPI CSI PHY of an NVIDIA Jetson (e.g. the AGX Orin). This path uses the Jetson's integrated ISP and the LibArgus capture stack to deliver the image. To reproduce it, we use GStreamer with the nvarguscamerasrc element to create a glass-to-glass live display capture, as sketched below.
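Such a pipeline can be launched from Python through GStreamer. The sensor-id and the caps below (1080p at 60 fps) are assumptions to adapt to your setup, and the sink element may need to change depending on the display stack:

```python
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Argus-based capture from the MIPI CSI camera, rendered to the screen.
# sensor-id and the 1080p60 caps are assumptions; adapt them to your setup.
pipeline = Gst.parse_launch(
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1 ! "
    "nvvidconv ! autovideosink sync=false"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error is posted on the bus.
bus = pipeline.get_bus()
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
pipeline.set_state(Gst.State.NULL)
```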
The results are:
| Sensor and Platform | Image Dimensions (px) | CPU Usage (%) | RAM Usage (MiB) | GPU Usage (%) | Glass-to-Glass Latency (ms) |
|---|---|---|---|---|---|
| IMX477 - Xavier AGX JP5 | 3840x2160 (4K 30fps) | - | - | 0 | 115 |
| IMX477 - Xavier NX JP4 | 1920x1080 (1080p 30fps) | - | - | - | 77.8 |
| IMX477 - Xavier NX JP4 | 1920x1080 (1080p 60fps) | - | - | - | 110.6 |
The latencies in this table are well above the 36 ms measured with the Holoscan Sensor Bridge, highlighting the HSB's advantage over direct MIPI capture. Note, however, that these figures were obtained on different Jetson modules, sensors, and JetPack versions, so the comparison is indicative rather than exact.
Important Remarks
For stereo capture, the processing platform (i.e. the NVIDIA Jetson) must have two separate network interfaces, given that each camera stream is delivered on a separate port.
The NVIDIA Jetson AGX Orin developer kit does not include a DPDK-compatible network card, so packet reception falls back to the Linux socket stack, which increases the glass-to-glass latency. To use DPDK, it is necessary to connect a DPDK-compatible card or use a custom carrier board with a compatible NIC.
Expect more information from us
The next blog post will show how to lower the latency by using our CUDA ISP.
If you want to know more about how to leverage this technology in your project: Contact Us.