Memory Footprint Reduction for Dataflow Process Networks using Virtual Channels

Conference: MBMV 2024 - 27th Workshop
02/14/2024 - 02/15/2024 in Kaiserslautern

Proceedings: ITG-Fb. 314: MBMV 2024

Pages: 8
Language: English
Type: PDF

Authors:
Krebs, Florian; Schneider, Klaus (Department of Computer Science, RPTU Kaiserslautern-Landau, Germany)

Abstract:
Synthesizing code from dataflow process networks requires implementing the communication channels as queues in software. Common approaches use fixed-size queues whose memory is statically allocated at initialization. While the number of tokens flowing through the channels may be known in advance, the specific route they will take through the network is typically unknown. Therefore, each buffer must be sized for the maximum number of tokens it could possibly hold. This results in a high memory overhead, since much of this memory is never actually used at run time. In this paper, we reduce the memory footprint of code synthesized for such scenarios by introducing multiple virtual channels on top of a physical channel. Virtual channels act as point-to-point connections between producer and consumer nodes as defined in the model. Instead of implementing them directly as fixed-size FIFO buffers, virtual channels store their data in an underlying physical channel that is shared with other virtual channels. In this way, the dynamic nature of the data flow is accommodated by dynamically shrinking and growing the memory claimed by particular virtual channels, while the memory of their physical channel is statically allocated. In our experiments, this approach shows a reduction in memory footprint of up to 50% with no significant change in performance. However, sharing memory between different virtual channels introduces additional deadlock scenarios, where a virtual channel cannot be written to even though it currently holds no data, because another virtual channel is consuming all of the physical channel's memory.