RidgeRun has added a new backend to GstInference, its GStreamer plugin for easily developing deep learning inference applications with GStreamer pipelines: NVIDIA's TensorRT SDK. TensorRT is built on CUDA, enabling high-performance inference on Jetson embedded platforms and NVIDIA GPU-based workstations. Models trained in other frameworks can also be imported and optimized for TensorRT.
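As a rough illustration, a GstInference pipeline selects the backend through an element property. The sketch below is hypothetical: the element name (`tinyyolov2`), the `backend` and `model-location` properties, and the model file path are assumptions based on GstInference's usual conventions and may differ in your version.

```shell
# Hypothetical sketch: run TinyYOLOv2 detection on a camera feed using the
# TensorRT backend. Element/property names and the model path are assumptions;
# consult the GstInference documentation for the exact pipeline syntax.
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! \
    tinyyolov2 name=net backend=tensorrt model-location=graph_tinyyolov2.trt ! \
    videoconvert ! autovideosink
```

Switching backends (for example, between TensorFlow and TensorRT) would then be a matter of changing the `backend` property rather than rewriting the pipeline.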
Figures 1 & 2. Classification result for the label 'coffee mug' (left) and detection results for the labels 'bottle' and 'chair' (right) on the Jetson TX2 platform.
Check out the following guides from RidgeRun:
Any questions? Contact us at email@example.com