A current trend in system architecture is to separate the different processes of a system into isolated containers in the form of microservices. This approach has several advantages over traditional monolithic implementations that are worth mentioning:
Facilitates simultaneous development: different teams can focus on building different modules of the system without worrying about breaking something in the other teams' environments.
Eases system integration: the different modules can be integrated independently, each with its own environment, dependencies, and versions.
Simplifies deployment and upgrades: each module can be upgraded independently by deploying a new version of its container.
On the other hand, disadvantages can also be mentioned:
Increased resource usage: each service runs with its own copy of its dependencies and runtime, and therefore consumes more disk, memory, and CPU than a single monolith.
Distributed system complexity: with independent containers, the communication and synchronization mechanisms are much more complex.
Expensive data sharing: unlike monolithic codebases, where modules can pass pointers to large amounts of information, microservices are independent processes that normally need to copy any shared data.
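To make the contrast concrete, here is a minimal sketch of the zero-copy idea using Python's standard `multiprocessing.shared_memory` module (this is an illustration of the general technique, not the BIPS API): two endpoints view the same physical memory through NumPy arrays instead of copying data between them.

```python
import numpy as np
from multiprocessing import shared_memory

# "Producer" side: allocate a named shared block and fill it in place.
shm = shared_memory.SharedMemory(create=True, size=1024 * 1024)
frame = np.ndarray((512, 512), dtype=np.uint32, buffer=shm.buf)
frame[:] = 42  # written directly into shared memory

# "Consumer" side: attach to the same block by name; the NumPy view
# aliases the producer's memory instead of copying it.
peer = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray((512, 512), dtype=np.uint32, buffer=peer.buf)
first_pixel = int(view[0, 0])  # reads 42 with zero copies

# Clean up: NumPy views must be released before closing the block.
del frame, view
peer.close()
shm.close()
shm.unlink()
```

In a monolith, the pointer hand-off is free; here, the shared block plays that role across process boundaries. What the standard module does not provide is the synchronization and buffer lifecycle management that a library like BIPS layers on top.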
Along with microservices, Deep Learning has also gained a lot of traction in the last few years, and Python has become the language of choice for training and executing Deep Learning models. Modern embedded devices with hardware accelerators now allow running inference directly on the device, where performing predictions on a live camera feed is a common use case. Recently, there has been a tendency to combine microservices with Deep Learning, which runs straight into the data-sharing bottleneck described above.
What is Buffer Interprocess Sharing?
RidgeRun Buffer Interprocess Sharing (BIPS) addresses the problem of exchanging large amounts of data between microservices. It is an optimized inter-process communication (IPC) library, fully compatible with C++ and Python, that allows sharing data buffers between two or more processes with zero copies, even in Deep Learning environments.
Clients can be classified as Producers or Consumers according to their role in the system. The Producer is responsible for generating and filling in the information on the buffers that the Consumers will read. The synchronization between these entities is handled by the Signaler, which ensures that all operations are concurrent-safe. This means that Consumers can only read buffers that have been fully written by Producers, and that Producers can only write buffers that have already been read by Consumers. These buffers are created and managed by a shared structure known as the Buffer Pool, which has a fixed capacity. The Signaler handles the synchronization between the Buffer Pool and the Consumers/Producers.
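The roles above can be modeled with a short conceptual sketch (hypothetical names, not the BIPS API): a fixed-capacity pool of buffers cycles between a "free" queue, which only the Producer may write, and a "ready" queue, which only Consumers may read; the blocking queues play the Signaler's role.

```python
import queue
import threading

POOL_CAPACITY = 4
free_buffers = queue.Queue()   # buffers safe for the Producer to overwrite
ready_buffers = queue.Queue()  # buffers fully written, safe to read

# Pre-allocate the fixed-capacity pool, as the Buffer Pool would.
for _ in range(POOL_CAPACITY):
    free_buffers.put(bytearray(8))

def producer(n_frames):
    for i in range(n_frames):
        buf = free_buffers.get()   # blocks until a Consumer returns a buffer
        buf[0] = i                 # fill the buffer in place
        ready_buffers.put(buf)     # signal: fully written, ready to read
    ready_buffers.put(None)        # end-of-stream marker

def consumer(results):
    while (buf := ready_buffers.get()) is not None:
        results.append(buf[0])     # read only fully written buffers
        free_buffers.put(buf)      # signal: read, safe to overwrite

results = []
t1 = threading.Thread(target=producer, args=(8,))
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The sketch uses threads in one process for brevity; BIPS applies the same producer/consumer discipline to buffers living in shared memory across separate processes.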
Buffer exchange between C++ and Python:
With Buffer Interprocess Sharing, C++ and Python applications can exchange buffers in both directions. For example, you can feed your Python Deep Learning application from a live GStreamer capture pipeline. From C++, you define an agnostic memory buffer and submit it through BIPS; a Python application receives it through the Buffer Protocol as a convenient NumPy array, without any memory copy.
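On the Python side, the zero-copy hand-off rests on the Buffer Protocol: any object exposing it can be wrapped as a NumPy array that aliases, rather than copies, the underlying memory. A minimal sketch (the `bytearray` stands in for a buffer received over IPC; this is not the BIPS API):

```python
import numpy as np

# Stand-in for a raw buffer received from a C++ producer over IPC.
raw = bytearray(640 * 480 * 3)

# np.frombuffer wraps the existing memory via the Buffer Protocol;
# no bytes are copied, and reshape returns another view.
frame = np.frombuffer(raw, dtype=np.uint8).reshape(480, 640, 3)

# Because the array is a view, writing through NumPy mutates the
# original buffer directly.
frame[0, 0, 0] = 255
print(raw[0])  # 255
```

The same mechanism works in the other direction: NumPy arrays themselves export the Buffer Protocol, so a C++ consumer can read their memory without an intermediate copy.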
Buffer exchange between applications in containers:
BIPS allows zero-copy communication between applications isolated in separate containers. This makes BIPS a good fit for microservice-based applications, with no additional container or broker process required for the data exchange. You can even consider moving your Deep Learning application to microservices!
Learn more in our developer wiki.
Want to give it a try? Email: email@example.com for an evaluation version!