ComPar: Optimized Multi-Compiler for Automatic OpenMP Source-to-Source Parallelization
Time: Monday, June 22nd, 8:14pm - 8:52pm
Description: Parallelization schemes are essential to exploit the full benefits of multicore architectures, which have become widespread in recent years, especially for scientific applications. In shared-memory architectures, the most common parallelization API is OpenMP. However, introducing correct and optimal OpenMP parallelization into applications is not always a simple task, due to common shared-memory management pitfalls and architecture heterogeneity. To ease this process, many automatic parallelization compilers have been created over the last decade. In [Harel et al., IJPP'20] we first analyzed and compared source-to-source (S2S) compilers with parallelization capabilities, pointed out their strengths and weaknesses, and evaluated their performance on representative code benchmarks under different hyperparameters and hardware setups. We concluded that each compiler has its advantages and disadvantages: each achieved the best performance in at least one of the tests, but no compiler outperformed all the others across every test. This indicated that fusing the compilers' best outputs under the best hyperparameters and hardware setups could yield much greater speedups.

Therefore, we created a novel parallelizing S2S compiler called ComPar. ComPar takes S2S compilers as input and fuses their results to create a superior parallelization scheme. This is achieved by applying the S2S compilers to each loop with different hyperparameters, producing several different parallel versions of the code. Finally, the loop runtimes are compared, and the optimal parallelization scheme for each loop is chosen and pieced together, while unnecessary parallelization is removed, to create ComPar's final output.
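The per-loop selection described above can be sketched as follows. This is a minimal illustrative model, not ComPar's actual implementation: the function names (`fuse_best_versions`, `measure_runtime`) and the data layout are hypothetical, and real candidate generation would invoke the S2S compilers and time compiled code.

```python
import itertools

# Hypothetical sketch of ComPar's fusion strategy: every
# (compiler, hyperparameter) combination yields a candidate parallel
# version of each loop; the fastest measured candidate wins.

def fuse_best_versions(loops, compilers, hyperparams, measure_runtime):
    """For each loop, pick the candidate with the lowest measured runtime.

    measure_runtime(loop, compiler, params) -> (runtime_seconds, code);
    calling it with compiler=None measures the original serial loop.
    Returns a mapping: loop -> chosen code version.
    """
    chosen = {}
    for loop in loops:
        # Start from the serial baseline, so loops where no parallel
        # candidate wins keep their serial form -- this is how
        # "unnecessary parallelization" is removed.
        best_time, best_code = measure_runtime(loop, None, None)
        for compiler, params in itertools.product(compilers, hyperparams):
            runtime, code = measure_runtime(loop, compiler, params)
            if runtime < best_time:
                best_time, best_code = runtime, code
        chosen[loop] = best_code
    return chosen
```

The chosen versions are then pieced together into a single output source file, one loop at a time.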