DM365 Based Solutions
RidgeRun provides a fully featured Embedded Linux Software Development Kit for the Texas Instruments DM365 architecture, a member of TI's DaVinci™ product family.
The DM365 is tuned for applications such as digital cameras, IP video cameras, digital photo frames and video baby monitors.
The DM365 processor consists of an integrated video processing subsystem, an MPEG-4/JPEG co-processor (MJCP), an ARM926EJ-S core and peripherals. The corresponding development tool, the DM365 Digital Video Evaluation Module (DVEVM), helps developers quickly and easily create low-cost, portable digital video devices with HD video capability.
RidgeRun has extensive support for the TI DM365 and TI DM365 based hardware platforms. We are specialists in GStreamer, an open-source multimedia framework for Linux. We have developed and contributed GStreamer plug-ins for these TI chip based platforms that leverage the hardware accelerators and DVSDK components from TI.
We support encoding and decoding with all TI-provided codecs for the TI DM365 chipset. In addition, RidgeRun has made special plug-in enhancements to the TI-provided DMAI framework to support:
- Zero memory copy pipelines
- Fast forward and reverse processing
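As a sketch of how these accelerated plug-ins fit together, the pipeline below captures NV12 video, routes it through the DMAI accelerated H.264 encoder, and records it to an MP4 file. The element and property names (dmaiaccel, dmaienc_h264, encodingpreset, targetbitrate) follow typical RidgeRun DM365 pipelines but should be verified against the plug-in documentation for your SDK release.

```shell
# Illustrative DM365 capture + H.264 encode + MP4 record pipeline.
# Element/property names are assumptions; check your SDK release notes.
gst-launch v4l2src always-copy=false \
  ! 'video/x-raw-yuv, format=(fourcc)NV12, width=1280, height=720' \
  ! dmaiaccel \
  ! dmaienc_h264 encodingpreset=2 ratecontrol=2 targetbitrate=4000000 \
  ! qtmux ! filesink location=/tmp/recording.mp4
```

Because the buffers stay in contiguous DMA-able memory, no CPU memory copies are needed between capture, encode and muxing.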
In addition, RidgeRun has developed significant add-ons which allow for image capture inside a video stream, video stabilization, auto-exposure / auto white balance, video overlays and a number of additional capabilities.
- U-boot 2010.06 bootloader
- Linux kernel 2.6.32 + Real Time patches
- GCC 4.2.4, EABI, glibc
- DVSDK 3.1 integration
- GStreamer version 0.10
- GStreamer accelerated plugins based on the RidgeRun provided DDompe branch
- Open Source drivers:
- UART, Ethernet, MTD (NAND and NOR flash), GPIO, DMA, V4L2, watchdog timer,
- Real time clock, USB host, USB gadget, VPBE framebuffer, ALSA audio, I2C, SPI, MMC/SD
- Example program to set IPIPE exposure and white balance values
- ARM® side audio codecs available
- MJCP codecs available
- Many open source applications available
SDK ADDITIONAL FEATURES
Auto exposure / auto white balance
Only needed if your video sensor doesn't support this functionality.
Camera Engine is a re-usable component for embedded systems that leverages open-source technologies like GStreamer and D-Bus to control media streaming, recording and video preview from remote user interfaces (Qt, web services, etc.). Camera Engine is written in Vala, since both GStreamer and D-Bus use GNOME's object system. You can interact with the Camera Engine daemon from any programming language that has a D-Bus library (e.g. C, C++, Python). The server uses GStreamer to perform RTSP streaming, record and preview video simultaneously, and take snapshots, while allowing customization of certain parameters. Camera Engine exposes a D-Bus interface so multiple clients can perform actions on the server, regardless of the programming language used. Camera Engine is designed to be portable, but is currently tested on the DM368 and DM365 platforms.
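Because the daemon is driven over D-Bus, it can be exercised from a shell without writing any client code. The bus name, object path and method name below are placeholders to show the shape of such a call; the actual interface is defined by the Camera Engine release you are using.

```shell
# Hypothetical Camera Engine call via the standard dbus-send tool.
# com.ridgerun.CameraEngine and StartRecording are placeholder names.
dbus-send --system --print-reply \
  --dest=com.ridgerun.CameraEngine \
  /com/ridgerun/CameraEngine \
  com.ridgerun.CameraEngine.StartRecording string:/tmp/clip.mp4
```

The same method could equally be invoked from C, C++ or Python using that language's D-Bus bindings.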
The DM365 and DM368 include an IPIPE dual output hardware resizer as part of the Video Processing Front End (VPFE). This means each input video frame can be resized to two different output buffers. The resizer can also perform a color space conversion. TI describes the functionality as:
Programmable down or up-sampling filter for both horizontal and vertical directions with range from 1/16x to 16x, in which the filter outputs two images with different magnification simultaneously.
Video stabilization is useful if your IP camera will be mounted in a shaky location, such as on an outdoor pole or a moving vehicle.
Motion detection with pluggable detection algorithm
We provide a general-purpose motion detection algorithm which you can use, or you can create one for your custom needs.
Face detection support
DM36x parts with a part number ending in "F" (e.g. DM368ZCEF) include a face detection hardware module. Hardware-based face detection in the DM368 and DM365 processors is easy to use when enabled with the RidgeRun GStreamer face detect element. You simply put the dm365facedetect element in your GStreamer pipeline and specify whether you want frames drawn around the detected faces to easily verify face detect is working as expected. An example utility shows how you can get face locations into your application so you can count the number of faces in a picture, extract the faces from the picture, or whatever else you plan to do with the detected face location information. If you don't need GStreamer and would rather talk to the face detect driver directly, that is possible as well: a few ioctl() calls configure the driver and return the face location data.
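A minimal face detect pipeline might look like the one below. The dm365facedetect element name comes from this page; the property used to draw boxes around detected faces and the display sink name are assumptions to be checked against the element's gst-inspect output.

```shell
# Illustrative face detection preview pipeline on a DM368 "F" part.
# draw-square and TIDmaiVideoSink are assumed names; verify with gst-inspect.
gst-launch v4l2src \
  ! 'video/x-raw-yuv, format=(fourcc)NV12, width=640, height=480' \
  ! dm365facedetect draw-square=true \
  ! TIDmaiVideoSink
```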
GStreamer Fast Text/Graphics Overlay
A GStreamer element that can be used to overlay images, text and/or time and date over video streams or photos without using floating-point arithmetic. This is necessary for good performance when the processor doesn't contain an FPU.
USB standard UVC / UAC driver
Works on Windows, Mac and Linux at one resolution using the VLC application. RidgeRun offers customization services to support other requirements. Customer is responsible for host PC integration and testing.
A customized set of busybox shell scripts that run on the target device to exercise all the I/Os used in the product. These tests are useful for bringing up new hardware and on the assembly line for verifying the functionality of each board. Customers will need to tune the tests to match their manufacturing needs. Typically takes 4 to 6 weeks to generate the manufacturing tests and includes up to 40 hours of customization.
The GStreamer Video Segmenter (GstVS) is an extension of the conventional GstQTMux that allows splitting recordings into multiple files constrained by a size and/or a duration. These recordings can be audio, video or audio+video, and in every case each segmented file can be viewed independently. If the recording contains an encoded video stream, the file is guaranteed to start with a reference frame. All this extra functionality is added without interfering with normal GstQTMux operation and without losing buffers between files.
GStreamer multi-stream / multi-channel RTSP server element
The RTSP Sink is a GStreamer sink element which permits high-performance streaming to multiple computers using the RTSP protocol. This element leverages previous logic from RidgeRun's RTSP server while providing the benefits of a GStreamer sink element, such as great flexibility to integrate into applications and easy gst-launch based testing. With RTSP Sink, multiple streams can be served simultaneously using any desired combination. This means that within a single pipeline you can stream multiple video, audio, or audio+video streams, each one to a different client and a different mapping. Different streaming possibilities are shown in the examples section.
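A simple single-stream case can be tested directly with gst-launch. The service and mapping property names below follow typical RTSP Sink usage but should be confirmed with gst-inspect on your plug-in version; on hardware, the test source and software encoder would be replaced by v4l2src and the DM365 accelerated encoder.

```shell
# Illustrative RTSP serving pipeline; property names are assumptions.
# Clients would connect to rtsp://<device-ip>:554/stream1
gst-launch videotestsrc \
  ! x264enc ! mpegtsmux \
  ! rtspsink service=554 mapping=/stream1
```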
The GStreamer pre-record element
The GStreamer pre-record element can be placed in the pipeline to continually record data into a FIFO, where you can set the FIFO size based on the amount of pre-recorded data you want kept. The FIFO size is specified in milliseconds. While pre-recording, the GStreamer pre-record element doesn't pass any buffers downstream. After the FIFO is filled, the oldest data is released as new data is added. When you want to start recording, you trigger the GStreamer pre-record element. Once triggered, the element passes the data in the FIFO downstream (to be saved to a file, for example) while adding any new data to the back of the FIFO so no data is lost. Eventually the downstream elements drain the FIFO, at which point the GStreamer pre-record element simply passes received buffers downstream as they arrive. Once the GStreamer pipeline is taken out of the PLAYING state, the GStreamer pre-record element resets and again goes into its pre-record mode of operation.
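The idea can be sketched as a pipeline that always keeps the last five seconds of encoded video buffered, ready to be flushed to a file when an event (motion, face detection, an operator command) triggers the recording. The element name "prerecord" and its "buffering-time" property below are placeholders, since the page does not name them; check your SDK release for the real identifiers.

```shell
# Hypothetical pre-record pipeline keeping a 5000 ms FIFO of H.264 data.
# "prerecord" and "buffering-time" are placeholder names for illustration.
gst-launch v4l2src \
  ! dmaienc_h264 \
  ! prerecord buffering-time=5000 \
  ! qtmux ! filesink location=/tmp/event.mp4
```

With this arrangement, the saved file begins roughly five seconds before the trigger, so the lead-up to the event is captured as well.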
Other technologies that are being considered for development
Metadata inclusion. Information such as GPS location, device serial number, etc., could be included as metadata in the video stream.
Watermarking. Each video frame is marked with non-visible data so that if the frame is modified, the watermark can be used to detect that a change to the frame occurred. This technology only works on processors with a DSP.
For more technical details on the DM365, check TI's website. Get your Spectrum Digital DM365 EVM board from our partner Spectrum Digital.