Predictive Modeling Supported Collective I/O Auto-tuning
Time: Monday, June 22nd, 10:44pm - 11:21pm
Description: Achieving effective parallel I/O is a nontrivial task due to the complex interdependencies between the layers of the I/O stack. These layers expose a number of tunable parameters through which the best possible I/O performance can be reached. Unfortunately, the correct combination of parameters depends on the application and the HPC platform. As the configuration space grows, it becomes difficult for humans to track the interactions between configuration options. Engineers have neither the time nor the experience to explore good configuration parameters for each problem, because of the long benchmarking phase this requires. In most cases the default settings are used, often leading to poor I/O efficiency. An auto-tuning solution that optimizes I/O requests and provides system administrators or engineers with statistical information is therefore strongly needed.
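To illustrate why exhaustive benchmarking becomes infeasible, the following minimal sketch enumerates a hypothetical tuning space; the parameter names and value ranges are illustrative only (the paper notes its actual parameters are system dependent):

```python
from itertools import product

# Hypothetical tunables: collective-buffering aggregator count and
# buffer size, plus file-system stripe count and stripe size.
cb_nodes = [1, 2, 4, 8, 16]
cb_buffer_size = ["1M", "4M", "16M", "64M"]
stripe_count = [1, 4, 8, 16, 32, 64]
stripe_size = ["1M", "4M", "16M"]

# Every combination would need its own benchmark run if tuned by hand.
configs = list(product(cb_nodes, cb_buffer_size, stripe_count, stripe_size))
print(len(configs))  # 5 * 4 * 6 * 3 = 360 combinations
```

Even this toy space of four parameters yields hundreds of runs; adding one more parameter multiplies the cost again, which is the motivation for a predictive model.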
In this study, a predictive-modeling-supported collective I/O auto-tuning solution for engineering applications is presented that can be used by engineers or scientists with little knowledge of parallel I/O and without any post-processing step. The auto-tuning solution is implemented on top of the MPI-IO library, making it compatible with MPI-based engineering applications and portable across HPC platforms.
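The abstract does not show how tuned parameters reach MPI-IO, but one common mechanism is passing them as MPI-IO hints. The hint names below (`romio_cb_write`, `cb_nodes`, `cb_buffer_size`) are standard ROMIO hints; the selection and mapping logic is purely a sketch of what such a tuner might do:

```python
def hints_for(config):
    """Map a candidate configuration to MPI-IO hint strings.

    MPI_Info stores only string key/value pairs, so numeric
    parameters must be converted. `config` is an illustrative dict
    produced by a (hypothetical) tuner.
    """
    return {
        "romio_cb_write": "enable",               # force collective buffering on writes
        "cb_nodes": str(config["cb_nodes"]),      # number of aggregator nodes
        "cb_buffer_size": str(config["cb_buffer_size"]),  # bytes per aggregator
    }

candidate = {"cb_nodes": 8, "cb_buffer_size": 16 * 1024 * 1024}
print(hints_for(candidate))
# With mpi4py, these hints would typically be applied via
#   info = MPI.Info.Create(); info.Set(key, value)
# and passed to MPI.File.Open(comm, path, amode, info).
```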
The main contributions of the study are the following. First, it identifies several challenges that the run-time factors of the different layers pose for the optimization process. Second, it points out collective I/O performance variability and develops an auto-tuning system that tunes collective I/O performance transparently across layers. Third, it evaluates statistical performance models and presents an I/O performance predictor.
This study shows several challenges faced when optimizing collective I/O with collective buffering, and uses random forest regression to build a predictor model within the auto-tuning solution that estimates I/O performance from the results of previous runs. After evaluating the predictor model under various conditions, random forest regression is found to be an accurate indicator of the expected collective I/O performance. By finding parametrizations that outperform the default configurations, the auto-tuning approach achieves I/O bandwidth gains of 50-130%. Incorporating the findings of the predictive model into the auto-tuning system has the potential to further reduce the training time and the size of the training set.
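The paper's model code is not included here, but the core idea of training a random forest on past runs and ranking candidate configurations by predicted bandwidth can be sketched as follows. The feature names, the synthetic data, and the use of scikit-learn are all assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "previous runs": each row is a (cb_nodes, cb_buffer_size_MB,
# stripe_count) configuration; the target is a fabricated write
# bandwidth in MB/s with some measurement noise.
X = rng.integers(1, 64, size=(200, 3)).astype(float)
y = 50.0 + 4.0 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(0.0, 5.0, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Rank unseen candidate configurations by predicted bandwidth instead
# of benchmarking every one of them.
candidates = np.array([[4, 16, 8], [16, 16, 32], [32, 64, 64]], dtype=float)
preds = model.predict(candidates)
best = candidates[np.argmax(preds)]
print(best)
```

This is how a predictor can shrink the benchmarking budget: only the top-ranked candidates need to be run on the real system, which matches the paper's point about reducing training time and training-set size.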
The parameters discussed in this paper are system dependent, but new parameters can easily be integrated into the auto-tuning configuration files. Future efforts will explore more accurate representations and characterizations of the configuration parameters and of predictive modeling techniques. As future work, the auto-tuning solution will be tested on engineering applications in different professional areas to demonstrate its usability. These engineering use cases can provide a guideline for potential users to analyze and optimize their own applications.