Publications from Cake Lab
-
MODI: Mobile Deep Inference Made Efficient by Edge Computing
Authors: Samuel S. Ogden and Tian Guo
The USENIX Workshop on Hot Topics in Edge Computing (HotEdge '18)
In this paper, we propose MODI, a novel mobile deep inference platform that delivers good inference performance. MODI improves the performance of deep learning powered mobile applications with optimizations in three complementary aspects. First, MODI provides a number of models and dynamically selects the best one at runtime. Second, MODI extends the set of models each mobile application can use by storing high-quality models at edge servers. Third, MODI manages a centralized model repository and periodically updates models at edge locations, ensuring up-to-date models for mobile applications without incurring high network latency. Our evaluation demonstrates the feasibility of trading off inference accuracy for improved inference speed, as well as the acceptable performance of edge-based inference.
BibTeX
@inproceedings{216771,
  author    = {Samuel S. Ogden and Tian Guo},
  title     = {{MODI}: Mobile Deep Inference Made Efficient by Edge Computing},
  booktitle = {{USENIX} Workshop on Hot Topics in Edge Computing (HotEdge 18)},
  year      = {2018},
  address   = {Boston, MA},
  url       = {https://www.usenix.org/conference/hotedge18/presentation/ogden},
  publisher = {{USENIX} Association}
}
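To make the runtime model selection described in the MODI abstract more concrete, the minimal Python sketch below picks the most accurate locally installed model that fits a latency budget and otherwise falls back to a model stored at an edge server. The abstract does not specify MODI's actual selection policy, so the ModelInfo fields, the latency-budget rule, and the fetch_from_edge callback are hypothetical illustrations of the first two optimizations, not the system's implementation.

# Hypothetical sketch of latency-aware model selection; not MODI's actual code.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelInfo:
    name: str
    accuracy: float        # validation accuracy of the model
    est_latency_ms: float  # estimated inference latency on this device
    on_device: bool        # True if the model is already installed locally

def select_model(candidates: List[ModelInfo],
                 latency_budget_ms: float) -> Optional[ModelInfo]:
    """Pick the most accurate model whose estimated latency fits the budget."""
    feasible = [m for m in candidates if m.est_latency_ms <= latency_budget_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m.accuracy)

def choose_for_inference(candidates, latency_budget_ms, fetch_from_edge):
    """fetch_from_edge is a hypothetical callback that retrieves a higher
    quality model stored at an edge server when no local model qualifies."""
    chosen = select_model([m for m in candidates if m.on_device],
                          latency_budget_ms)
    if chosen is None:
        chosen = fetch_from_edge(latency_budget_ms)
    return chosen

The point of the sketch is only the trade-off the abstract evaluates: accepting a less accurate local model when the latency budget is tight, and paying the network cost for a higher-quality edge-hosted model otherwise.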
-
Cloud-based or On-device: An Empirical Study of Mobile Deep Inference
Authors: Tian Guo
2018 IEEE International Conference on Cloud Engineering (IC2E'18)
Modern mobile applications benefit significantly from advances in deep learning, e.g., implementing real-time image recognition and conversational systems. Given a trained deep learning model, applications usually need to perform a series of matrix operations on the input data in order to infer possible output values. Because of computation complexity and size constraints, these trained models are often hosted in the cloud. When utilizing these cloud-based models, mobile apps have to send input data over the network. While cloud-based deep learning can provide reasonable response times for mobile apps, it also restricts the use case scenarios, e.g., mobile apps need network access. With mobile-specific deep learning optimizations, it is now possible to employ on-device inference. However, because mobile hardware, e.g., GPU and memory size, can be very limited compared to its desktop counterpart, it is important to understand the feasibility of this new on-device deep learning inference architecture. In this paper, we empirically evaluate the inference efficiency of three Convolutional Neural Networks using a benchmark Android application we developed. Our measurement and analysis suggest that on-device inference can cost up to two orders of magnitude more in response time and energy than cloud-based inference, and that model loading and probability computation are the two performance bottlenecks for on-device deep inference.
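As a rough illustration of the measurement methodology (timing model loading separately from the probability computation), the following Python sketch uses the TensorFlow Lite interpreter. The model path, dummy input, and choice of TensorFlow Lite are assumptions made for illustration; the paper's benchmark is an Android application whose code is not reproduced here.

# Minimal sketch: time model loading and the forward pass separately.
# Assumes a TensorFlow Lite model file named "model.tflite" (hypothetical).
import time
import numpy as np
import tensorflow as tf

t0 = time.perf_counter()
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
load_ms = (time.perf_counter() - t0) * 1000  # model loading time

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
dummy_input = np.random.random_sample(
    tuple(input_details[0]["shape"])).astype(input_details[0]["dtype"])

t1 = time.perf_counter()
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()                      # compute class probabilities
probs = interpreter.get_tensor(output_details[0]["index"])
infer_ms = (time.perf_counter() - t1) * 1000  # probability computation time

print(f"load: {load_ms:.1f} ms, inference: {infer_ms:.1f} ms")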