CMOS Sensor or V4L2 Driver
Custom Linux Drivers for Embedded Systems
Before any purchase:
Please check availability with RidgeRun. Every driver needs customization for your hardware, so contact RidgeRun first to find out how soon this can be done.
All RidgeRun drivers are developed on a time-and-materials (T&M) basis. Contact RidgeRun first to get a quote.
RidgeRun started working on driver creation back in 2006, including custom Linux V4L2 drivers for embedded systems. The customer selects the camera sensor or chip, and RidgeRun creates the V4L2 driver for it. This product covers the work required to create the V4L2 capture driver for your system, assuming no hardware issues. It always includes support for one camera and one resolution; others can be added as part of the engineering services.
V4L2 Driver: Official Linux Kernel API
V4L2 is the official Linux kernel API for handling capture devices such as camera sensors, video decoders, or FPGAs feeding video frames to the SoC. The video can arrive over component, composite, HDMI, SDI, or other video interface standards.
The V4L2 framework defines the API a Linux camera driver must implement in order to be V4L2 compliant. The Linux kernel uses this camera driver to initialize the hardware and produce video frames. Each API function has a specific effect on the camera sensor. Often, the driver interacts with the camera sensor, receiver chip, or FPGA by reading and writing I2C or SPI registers.
Creating a Linux Camera Driver
Creating a Linux camera driver consists of the following, included in the price of the product:
Camera sensor configuration via I2C, SPI, or other low-level communication to initialize the sensor and support different resolutions. RidgeRun custom drivers include support for one resolution; others can be added as needed.
Device Tree Modification
Capture subsystem configuration and video node creation (/dev/video):
In Jetson, this involves the code needed to configure the Video Input (VI) so it receives the video coming from the camera. Support for capture through V4L2, libargus, and nvcamerasrc (YUV) is included.
In UltraScale+, this involves adding the code to configure the VPSS to receive the video coming from the sensor. It might require some work on the programmable logic (PL).
In DM8168 and DM8148, this is the VPSS configuration.
In i.MX6, this is the IPU configuration; in i.MX8, the equivalent capture hardware (CSI/ISI) is configured.
In DM368, this is the VPFE configuration.
Add support for one application, such as GStreamer or Yavta, to grab the frames available from the video node (/dev/video). Sometimes this involves creating software patches to support custom color spaces.
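As an illustration of the device tree step above, registering an I2C-attached MIPI CSI-2 sensor usually amounts to a fragment along these lines. The node names, compatible string, I2C address, and lane count here are hypothetical placeholders; the actual bindings depend on the SoC and sensor.

```dts
/* Hypothetical device tree fragment: an image sensor at I2C address
 * 0x10 whose MIPI CSI-2 output is linked to the SoC capture port. */
&i2c0 {
    camera@10 {
        compatible = "vendor,sensor-model"; /* placeholder */
        reg = <0x10>;                       /* I2C slave address */
        port {
            sensor_out: endpoint {
                remote-endpoint = <&csi_in>; /* SoC-side CSI input */
                data-lanes = <1 2>;          /* two MIPI data lanes */
            };
        };
    };
};
```

The SoC-side capture node (VI, VPSS, IPU, or VPFE, depending on the platform) carries the matching endpoint pointing back at sensor_out, which is how the media graph is wired up.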
Note: RidgeRun assumes that there are no hardware issues that would delay the development process (and increase costs). In case of problems with your hardware, RidgeRun will bill up to 20 hours of engineering services for the time needed to inform you of what is wrong. If the driver has to be created from scratch, delivery time is 3 to 4 weeks from receipt of the hardware.
Platforms Supported by Linux V4L2 Drivers
Jetson AGX Xavier and Xavier NX