Design and Prototyping Distributed CNN Inference Acceleration in Edge Computing
Conference: European Wireless 2022 - 27th European Wireless Conference
19.09.2022 - 21.09.2022 in Dresden, Germany
Proceedings: European Wireless 2022
Pages: 6 | Language: English | Type: PDF
Authors:
Dong, Zhongtian; Li, Nan; Iosifidis, Alexandros; Zhang, Qi (DIGIT, Department of Electrical and Computer Engineering, Aarhus University, Denmark)
Abstract:
For time-critical IoT applications using deep learning, inference acceleration through distributed computing is a promising approach to meet stringent deadlines. In this paper, we implement a working prototype of HALP, a new distributed inference acceleration method, on three Raspberry Pi 4 devices. HALP accelerates inference by designing a seamless collaboration among edge devices (EDs) in edge computing. We maximize the parallelization between communication and computation among the collaborating EDs by optimizing the task partitioning ratio under segment-based partitioning. Experimental results show that distributed inference with HALP achieves a 1.7× inference acceleration for VGG-16. We then combine distributed inference with conventional neural network model compression by setting different shrinking hyperparameters for MobileNet-V1. In this way, we can further accelerate inference, but at the cost of inference accuracy. To strike a balance between latency and accuracy, we propose dynamic model selection, which selects the model that provides the highest accuracy within the latency constraint. It is shown that model selection combined with distributed inference HALP can significantly improve service reliability compared to conventional stand-alone computation.
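The segment-based partitioning described above can be illustrated with a minimal sketch: the input rows of a convolutional layer are split among collaborating devices according to a task partitioning ratio, with each segment padded by the kernel halo so the devices can convolve independently. The function name, the halo handling, and the structure are illustrative assumptions, not the paper's actual HALP implementation.

```python
# Hypothetical sketch of segment-based partitioning (not the authors' code):
# split the input rows of a conv layer among edge devices according to a
# task partitioning ratio, padding each segment by the kernel halo.

def partition_rows(height, ratios, kernel=3):
    """Return (start, end) row ranges per device; each range includes a
    halo of kernel // 2 rows so segments can be convolved independently."""
    halo = kernel // 2
    total = sum(ratios)
    segments, row = [], 0
    for i, r in enumerate(ratios):
        # The last device takes the remainder to avoid rounding gaps.
        rows = height - row if i == len(ratios) - 1 else round(height * r / total)
        start = max(0, row - halo)
        end = min(height, row + rows + halo)
        segments.append((start, end))
        row += rows
    return segments

# Example: a 224-row feature map split 2:1:1 across three devices.
print(partition_rows(224, [2, 1, 1]))
```

In a real deployment the ratios would be chosen to overlap each device's communication with its neighbours' computation, which is the optimization the paper performs.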
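The dynamic model selection idea can likewise be sketched: among candidate compressed models (e.g. MobileNet-V1 variants with different shrinking hyperparameters), pick the one with the highest accuracy whose estimated latency still meets the deadline. The accuracy and latency numbers below are placeholders, not measured results from the paper.

```python
# Hypothetical sketch of dynamic model selection (placeholder numbers):
# choose the highest-accuracy model whose estimated latency fits the deadline.

def select_model(candidates, deadline_ms):
    """candidates: list of (name, accuracy, latency_ms) tuples.
    Returns the feasible candidate with the highest accuracy, or None."""
    feasible = [c for c in candidates if c[2] <= deadline_ms]
    if not feasible:
        return None  # no model meets the latency constraint
    return max(feasible, key=lambda c: c[1])

# Illustrative MobileNet-V1 variants (width multiplier in the name).
models = [
    ("mobilenet_1.0", 0.70, 120.0),   # placeholder accuracy / latency
    ("mobilenet_0.75", 0.67, 80.0),
    ("mobilenet_0.5", 0.62, 45.0),
]
print(select_model(models, deadline_ms=100.0))
```

With a 100 ms deadline, the full-width model is infeasible and the selector falls back to the 0.75-width variant, trading a little accuracy for meeting the latency constraint.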