Publication Details
A Reality Check on Inference at Mobile Networks Edge
Cartas Alejandro
Kocour Martin, Ing. (FIT BUT)
Raman Aravindh (KCL)
Leontiadis Ilias (TID)
Luque Jordi (Telefónica)
Sastry Nishanth (KCL)
Nunez-Martinez Leon (TID)
Perino Diego (TID)
Perales Carlos Segura (Telefónica)
Edge computing, Artificial Intelligence
Edge computing is considered a key enabler for deploying Artificial Intelligence platforms that provide real-time applications such as AR/VR or cognitive assistance. Previous works show that computing capabilities deployed very close to the user can actually reduce the end-to-end latency of such interactive applications. Nonetheless, the main performance bottleneck remains the machine learning inference operation. In this paper, we question some assumptions of these works, such as the network location where edge computing is deployed and the software architectures considered, within the framework of two popular machine learning tasks. Our experimental evaluation shows that after performance tuning that leverages recent advances in deep learning algorithms and hardware, network latency is now the main bottleneck in end-to-end application performance. We also report that deploying computing capabilities at the first network node still provides a latency reduction but, overall, it is not required by all applications. Based on our findings, we overview the requirements and sketch the design of an adaptive architecture for general machine learning inference across edge locations.
@INPROCEEDINGS{FITPUB11956,
  author    = "Alejandro Cartas and Martin Kocour and Aravindh Raman and Ilias Leontiadis and Jordi Luque and Nishanth Sastry and Leon Nunez-Martinez and Diego Perino and Carlos Segura Perales",
  title     = "A Reality Check on Inference at Mobile Networks Edge",
  pages     = "54--59",
  booktitle = "Proceedings of the 2nd ACM International Workshop on Edge Systems, Analytics and Networking (EDGESYS '19)",
  year      = 2019,
  location  = "Dresden, DE",
  publisher = "Association for Computing Machinery",
  ISBN      = "978-1-4503-6275-7",
  doi       = "10.1145/3301418.3313946",
  language  = "english",
  url       = "https://www.fit.vut.cz/research/publication/11956"
}