Distributing Deep Neural Networks with Containerized Partitions at the Edge


Deploying machine learning on edge devices is becoming increasingly important, driven by new applications such as smart homes, smart cities, and autonomous vehicles. Unfortunately, deploying deep neural networks (DNNs) on resource-constrained devices is challenging: these workloads are computationally intensive and often require cloud-like resources. Prior solutions address these challenges by either sacrificing accuracy or relying on cloud resources for assistance.

In this paper, we propose a containerized, partition-based, runtime-adaptive convolutional neural network (CNN) acceleration framework for Internet of Things (IoT) environments. The framework leverages spatial partitioning through convolution layer fusion to dynamically select the optimal partition according to the availability of computational resources and network conditions. By containerizing each partition, we simplify model updates and deployment with Docker and Kubernetes, which efficiently handle runtime resource management and container scheduling.
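The abstract does not include implementation details, but the core idea of spatial partitioning with convolution layer fusion can be sketched in a few lines. The following is a minimal, hypothetical numpy illustration (not the authors' code): the input feature map is split row-wise into tiles, each tile carries the extra "halo" rows its receptive field needs, and a fused stack of valid convolutions then runs on each tile independently, so the tiles could be dispatched to separate containers. The function names `conv2d` and `fused_partition_rows` are invented for this sketch.

```python
import numpy as np

def conv2d(x, k):
    # 'valid' 2D cross-correlation, stride 1 (single channel, for illustration)
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def fused_partition_rows(x, kernels, n_parts):
    """Split the input into row-wise spatial partitions so that a fused
    stack of valid convolutions runs on each partition independently.
    Adjacent partitions overlap by 'halo' rows covering the stack's
    combined receptive field, so no inter-partition communication is
    needed between the fused layers."""
    shrink = sum(k.shape[0] - 1 for k in kernels)   # rows lost by the stack
    out_rows = x.shape[0] - shrink
    bounds = np.linspace(0, out_rows, n_parts + 1, dtype=int)
    outs = []
    for s, e in zip(bounds[:-1], bounds[1:]):
        tile = x[s : e + shrink, :]                 # input slab incl. halo
        for k in kernels:                           # fused layer stack
            tile = conv2d(tile, k)
        outs.append(tile)
    return np.vstack(outs)                          # stitch partition outputs
```

Stitching the partition outputs reproduces the unpartitioned result exactly, which is what makes the partition count a free parameter the runtime can adapt to available resources.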


Li Zhou (The Ohio State University)
Hao Wen (University of Minnesota, Twin Cities)
Radu Teodorescu (The Ohio State University)
David H.C. Du (University of Minnesota, Twin Cities)

Tuesday July 9, 2019 11:05am - 11:20am PDT
HotEdge: Grand Ballroom VII–IX