The Case for Unifying Data Loading in Machine Learning Clusters

Training machine learning models involves iteratively fetching and pre-processing batches of data. Conventionally, popular ML frameworks implement data loading within each job and focus on improving the performance of a single job. However, this approach is inefficient in shared clusters, where multiple training jobs are likely to access the same data and duplicate operations. To illustrate this, we present a case study which reveals that in hyper-parameter tuning experiments, up to 89% of I/O and 97% of pre-processing is redundant.
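
To make the redundancy concrete, consider the following minimal Python sketch (hypothetical code, not from the paper): three hyper-parameter tuning jobs each run an isolated input pipeline over the same dataset, so every sample is read and pre-processed once per job.

    from pathlib import Path

    def read_sample(path: Path) -> bytes:
        return path.read_bytes()              # per-job I/O, repeated by every job

    def preprocess(raw: bytes) -> bytes:
        return raw[::-1]                      # stand-in for decoding/augmentation

    def run_job(learning_rate: float, data_dir: Path) -> None:
        for path in sorted(data_dir.glob("*.bin")):
            sample = preprocess(read_sample(path))   # duplicated across jobs
            # ... training step using sample and learning_rate ...

    for lr in (0.1, 0.01, 0.001):             # same data, three isolated pipelines
        run_job(lr, Path("dataset"))

With N tuning jobs, the cluster performs N times the I/O and pre-processing that a single shared pipeline would need.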

Based on this observation, we make the case for unifying data loading in machine learning clusters by bringing the isolated per-job data loading systems together into a single system. Such an architecture can remove the aforementioned redundancies that arise from isolating data loading within each job. We introduce OneAccess, a unified data access layer, and present a prototype implementation that shows a 47.3% improvement in I/O cost when sharing data across jobs. Finally, we discuss open research challenges in designing and developing a unified data loading layer that can run across frameworks on shared multi-tenant clusters, including how to handle distributed data access, support diverse sampling schemes, and exploit new storage media.
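
As a rough illustration of what a unified data access layer could look like, here is a minimal Python sketch built around a shared in-process cache keyed by (sample, transform); the class name, interface, and caching policy are assumptions for illustration and do not describe OneAccess's actual design.

    import threading
    from pathlib import Path
    from typing import Callable, Dict, Tuple

    class UnifiedAccessLayer:
        """Serves pre-processed samples to many jobs, performing I/O and
        pre-processing at most once per (sample, transform) pair."""

        def __init__(self) -> None:
            self._cache: Dict[Tuple[Path, str], bytes] = {}
            self._lock = threading.Lock()

        def get(self, path: Path, transform_name: str,
                transform: Callable[[bytes], bytes]) -> bytes:
            key = (path, transform_name)
            with self._lock:
                if key not in self._cache:        # first requester pays the cost
                    self._cache[key] = transform(path.read_bytes())
                return self._cache[key]           # later jobs reuse the result

    # Example (assumes dataset/0001.bin exists): all jobs share one decoded copy
    layer = UnifiedAccessLayer()
    sample = layer.get(Path("dataset/0001.bin"), "decode", lambda b: b[::-1])

A real system would also need cache eviction, cross-process sharing, and coordination with each job's sampling order, which is where the open challenges discussed above come in.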

Speakers
Aarati Kakaraparthy
University of Wisconsin, Madison & Microsoft Gray Systems Lab, Madison

Abhay Venkatesh
University of Wisconsin, Madison

Amar Phanishayee
Microsoft Research

Shivaram Venkataraman
University of Wisconsin and Microsoft Research

Monday July 8, 2019 2:15pm - 2:30pm PDT
HotCloud: Grand Ballroom VII–IX