
Exploring the Data Plane Development Kit on NVIDIA Jetson

Writer: Luis G. Leon-Vega

Network illustration taken from Freepik

The Data Plane Development Kit (DPDK) is an open-source set of data plane libraries and poll-mode drivers for network interface controllers (NICs), used to offload packet processing from the operating system kernel to processes running in user space. This lowers the overhead that each packet transition adds between the Network Interface Card (NIC), the kernel, and the user application.


The NVIDIA Jetson integrated NICs are not DPDK-compatible. At RidgeRun, we researched how to leverage DPDK on these NVIDIA platforms with external DPDK-compatible cards attached through the PCIe expansion ports.


To install and leverage an external PCIe NIC on the Jetson, the following steps are necessary:


  • Install a DPDK-compatible card.

  • Modify the PCIe node in the device tree to remove the iommus property.

  • Allow the IOMMU to operate in bypass mode by default.

  • Install DPDK and run the examples.


You can find all the details in this wiki.


Data Plane Development Kit Compatible Cards


At RidgeRun, we have tested the following DPDK-compatible cards:


  • Intel X520 10GbE: 10 Gbps Ethernet card

  • Intel I210 1GbE: 1 Gbps Ethernet card


Nevertheless, more cards, like the NVIDIA ConnectX family, are likely to be supported. Please refer to the list in this wiki for more information.



NVIDIA Jetson Support


Most of the NVIDIA Jetson development kit (devkit) carriers do not support DPDK because the integrated NIC lacks a poll-mode driver. Instead, DPDK support can be added by installing an external NIC on the NVIDIA Jetson.


The installation involves inserting the hardware into a PCIe port: on the NVIDIA AGX Orin, this can be the PCIe slot, while on the NVIDIA Orin NX (and similar modules) the card can be placed in an M.2 port. Additionally, the DPDK drivers require modifying the IOMMU configuration in the device tree.
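Once the card is seated, it is worth confirming that Linux enumerates it before touching the device tree. A quick sketch (assuming lspci from pciutils is available; the sysfs listing needs no extra packages):

```shell
# List PCI devices and filter for network controllers; the Intel I210
# used later in this post shows up with vendor:device ID 8086:1533.
lspci -nn | grep -i 'ethernet\|network' || true

# The same information is available through sysfs without extra tools:
ls /sys/bus/pci/devices
```

If the card does not appear here, the PCIe link itself is the problem and no device-tree change will help.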


In the case of an NVIDIA AGX Orin with a devkit carrier, this change in the sources is as follows:


pcie@141a0000 {
	compatible = "nvidia,tegra234-pcie";
	...
	interconnect-names = "dma-mem", "write";
	//iommu-map = <0x0 &smmu_niso0 TEGRA234_SID_PCIE5 0x1000>;
	//iommu-map-mask = <0x0>;
	//dma-coherent;
	...
	status = "disabled";
};
pcie@141a0000 {
	//iommus = <&smmu_niso0 TEGRA234_SID_PCIE5>;
};
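After flashing the modified device tree, one way to confirm the change took effect is to inspect the IOMMU groups exposed through sysfs: a device that is no longer behind the SMMU has no iommu_group link. A small sketch (the PCIe address 0005:01:00.0 used elsewhere in this post is an example; yours may differ):

```shell
# Print each PCI device together with its IOMMU group, if any.
# After the device-tree change, the external NIC should report
# no IOMMU group.
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev" ] || continue
    if [ -e "$dev/iommu_group" ]; then
        echo "$(basename "$dev") -> group $(basename "$(readlink -f "$dev/iommu_group")")"
    else
        echo "$(basename "$dev") -> no IOMMU group"
    fi
done
```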

Moreover, the SMMU must be allowed to operate in bypass mode by default. This is done by turning the following kernel configuration option off:


CONFIG_ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT=n

Last but not least, DPDK uses the UIO PCI Generic module, which needs to be enabled in the kernel configuration:


CONFIG_UIO_PCI_GENERIC=m

Once the changes have been made, the device tree, kernel and modules must be recompiled and reinstalled.
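Once the rebuilt kernel is running, both options can be double-checked from the live system. Depending on how the kernel was built, the configuration is exposed in /proc/config.gz or under /boot (a sketch, assuming one of the two locations exists):

```shell
# Look up the two options in whichever kernel config is exposed.
{ zcat /proc/config.gz 2>/dev/null \
    || cat /boot/config-"$(uname -r)" 2>/dev/null; } \
    | grep -E 'CONFIG_UIO_PCI_GENERIC|CONFIG_ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT' \
    || echo "kernel config not exposed on this system"
```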


Finally, the installation of DPDK is as usual. For more information, please read the complete post on our developer wiki.
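For reference, a typical from-source installation follows DPDK's standard meson/ninja flow. The sketch below uses the 23.11.2 release that appears in the test output later in this post; the download URL and the example selection are assumptions, so consult the official DPDK getting-started guide for the authoritative steps:

```shell
# Fetch and build DPDK 23.11.2, including the ethtool example used below.
wget https://fast.dpdk.org/rel/dpdk-23.11.2.tar.xz
tar xf dpdk-23.11.2.tar.xz
cd dpdk-23.11.2
meson setup build -Dexamples=ethtool
ninja -C build
sudo ninja -C build install
sudo ldconfig
```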


Testing the DPDK


To test the DPDK support, you can use the examples provided with the DPDK sources, such as dpdk-ethtool. The steps are as follows:


  1. Disable the ethernet interface:


sudo ifconfig eth0 down

Here, eth0 is assumed to be the interface backed by the external NIC.


  2. Query the PCIe identifier using:


sudo dpdk-devbind.py --status

In our case, it leads to:


Network devices using kernel driver

===================================

0001:01:00.0 'RTL8822CE 802.11ac PCIe Wireless Network Adapter c822' if=wlan0 drv=rtl88x2ce unused=rtl8822ce,vfio-pci,uio_pci_generic 

0005:01:00.0 'I210 Gigabit Network Connection 1533' if=eth0 drv=igb unused=vfio-pci,uio_pci_generic

  3. Bind the interface to the DPDK-compatible driver:


sudo dpdk-devbind.py --bind=uio_pci_generic 0005:01:00.0

  4. Allocate some hugepages:


sudo dpdk-hugepages.py --pagesize 2M --setup 256M --node 0
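The reservation can be verified from /proc/meminfo without special privileges. With a 2 MB page size and a 256 MB pool, HugePages_Total should read 128 on the target board (a quick sanity check, not part of the official steps):

```shell
# Show the hugepage counters; after the 256 MB / 2 MB allocation above,
# HugePages_Total should report 128 on the board.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```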

Finally, use the tool:


sudo ./dpdk-ethtool

This binary comes from the DPDK examples. The result should look like this:


EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_e1000_igb (8086:1533) device: 0005:01:00.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Number of NICs: 1
Init port 0..
EthApp> drvinfo
Port 0 driver: net_e1000_igb (ver: DPDK 23.11.2)
firmware-version: 3.16, 0x800004ff, 1.304.0
bus-info: 0005:01:00.0
EthApp> Closing port 0... Done
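When the experiment is over, the NIC can be handed back to the kernel by rebinding it to its original driver. A sketch using the address 0005:01:00.0 and the igb driver from the example output above (adjust both for your card):

```shell
# Rebind the NIC to the kernel's igb driver and bring the link back up.
if command -v dpdk-devbind.py >/dev/null 2>&1; then
    sudo dpdk-devbind.py --bind=igb 0005:01:00.0
    sudo ifconfig eth0 up
else
    echo "dpdk-devbind.py not found; is DPDK installed?"
fi
```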

For more information, please read the complete post on our developer wiki.


Contact us for more information


If you want to know more about how to leverage this technology in your project: Contact Us.


We want to hear from you! Let us know what you're looking for by sharing your feedback. It takes less than a minute—just fill out this quick survey!
