Foreign literature translation: original text and translated text

Title: ENHANCING APPLICATION PERFORMANCE USING MINI-APPS: COMPARISON OF HYBRID PARALLEL PROGRAMMING PARADIGMS
Authors: Gary Lawson, Michael Poteat, Masha Sosonkina, Robert Baurle
Journal: Computer Science
Year: 2016

Original text

ENHANCING APPLICATION PERFORMANCE USING MINI-APPS: COMPARISON OF HYBRID PARALLEL PROGRAMMING PARADIGMS

Gary Lawson, Michael Poteat, Masha Sosonkina, Robert Baurle

ABSTRACT

In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up to date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 10 was measured for MPI+OpenMP.

Keywords: Mini-apps, Performance, VULCAN, Shared Memory, MPI, OpenMP

1 INTRODUCTION
In many fields, real-world applications have already been developed. For established applications to stay up to date, new parallel strategies must be explored to determine which may yield the best performance, especially with advances in computing hardware. However, restructuring or modifying a real-world application incurs increased cost depending on the size of the code and the changes to be made. A mini-app may be created to quickly explore such options without modifying the entire code. Mini-apps reduce the overhead of applying new strategies, thus various strategies may be implemented and compared. This work presents the authors' experiences when following this strategy for a real-world application developed by NASA.

VULCAN (Viscous Upwind Algorithm for Complex Flow Analysis) is a turbulent, non-equilibrium, finite-rate chemical kinetics, Navier-Stokes flow solver for structured, cell-centered, multiblock grids that is maintained and distributed by the Hypersonic Air Breathing Propulsion Branch of the NASA Langley Research Center (NASA 2016). The mini-app developed in this work uses the Householder Reflector kernel for solving systems of linear equations. This kernel is used often by different workloads, and is a good candidate for deciding what strategy type to apply to VULCAN. VULCAN is built on a single layer of MPI and the code has been optimized to obtain perfect vectorization, therefore two levels of parallelism are currently used. This work investigates two flavors of shared-memory parallelism, OpenMP and Shared MPI, which will provide the third level of parallelism for the application. A third level of parallelism increases performance, which decreases the time-to-solution.

MPI has extended its standard to version 3.0, which includes the Shared Memory (SHM) model (Mikhail B. (Intel) 2015, Message Passing Interface Forum 2012), known in this work as Shared MPI (SMPI). This extension allows MPI to create memory windows that are shared between MPI tasks on the same physical node. In this way, MPI tasks are equivalent to threads, except Shared MPI is more difficult for a programmer to implement.
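As an illustration of the extra effort SMPI demands, the sketch below shows the typical MPI-3 SHM sequence: split the world communicator into per-node communicators, allocate a shared window, and query the base pointer of another rank's slice. It is a minimal example written in C for this discussion, not code taken from the mini-apps; the slice size and variable names are assumptions.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Group the MPI tasks that share a physical node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* Each task contributes a slice of the shared window (1024 doubles here). */
    const MPI_Aint local_count = 1024;
    double *local;
    MPI_Win win;
    MPI_Win_allocate_shared(local_count * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node_comm, &local, &win);

    /* Query the base address of rank 0's slice; with the default contiguous
       allocation, the whole node-local array is addressable from it. */
    MPI_Aint size0;
    int disp0;
    double *base;
    MPI_Win_shared_query(win, 0, &size0, &disp0, &base);

    local[0] = (double)node_rank;   /* write own slice            */
    MPI_Win_fence(0, win);          /* synchronize before reading */
    if (node_rank == 0)
        printf("slice 0 holds %ld bytes at %p\n", (long)size0, (void *)base);

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```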
OpenMP is the most common shared-memory library used to date because of its ease of use (OpenMP 2016). In most cases, only a few OpenMP pragmas are required to parallelize a loop; however, OpenMP is subject to increased overhead, which may decrease performance if not properly tuned.
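For contrast, the same style of shared-memory parallelism in OpenMP typically needs only a single pragma on the loop to be parallelized. The fragment below is a generic C illustration; the loop body and names are placeholders, not the VULCAN kernel.

```c
#include <stddef.h>

/* Apply some per-system work to m independent systems; one pragma
   distributes the outer loop across the available threads. */
void process_systems(int m, int n, double *A, double *b)
{
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < m; i++) {
        double *Ai = A + (size_t)i * n * n;   /* i-th matrix */
        double *bi = b + (size_t)i * n;       /* i-th vector */
        /* ... transform or solve system i using Ai and bi ... */
        (void)Ai; (void)bi;
    }
}
```

The simplicity comes at the cost of fork/join and scheduling overhead each time the parallel region is entered, which is the overhead referred to above.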
As early as the year 2000, the authors in (Cappello and Etiemble 2000) found that latency-sensitive codes seem to benefit from pure MPI implementations, whereas bandwidth-sensitive codes benefit from hybrid MPI+OpenMP. The authors also found that faster processors will benefit hybrid MPI+OpenMP codes if data movement is not an overwhelming bottleneck (Cappello and Etiemble 2000). Since that time, hybrid MPI+OpenMP implementations have improved, but not without difficulties. In (Drosinos and Koziris 2004, Chorley and Walker 2010), it was found that OpenMP incurs many performance reductions, including overhead (fork/join, atomics, etc.), false sharing, imbalanced message passing, and a sensitivity to processor mapping. However, OpenMP overhead may be hidden when using more threads. In (Rabenseifner, Hager, and Jost 2009), the authors found that simply using OpenMP could incur performance penalties because the compiler avoids optimizing OpenMP loops - verified up to version 10.1. Although compilers have advanced considerably since then, application users who still compile with older versions may be at risk when using OpenMP. In (Drosinos and Koziris 2004, Chorley and Walker 2010), the authors found that the hybrid MPI+OpenMP approach outperforms the pure MPI approach because the hybrid strategy diversifies the path to parallel execution. More recently, MPI extended its standard to include the SHM model (Mikhail B. (Intel) 2015). The authors in (Hoefler, Dinan, Thakur, Barrett, Balaji, Gropp, and Underwood 2015) present MPI RMA theory and examples, which are the basis of the SHM model. In (Gerstenberger, Besta, and Hoefler 2013), the authors conduct a thorough performance evaluation of MPI RMA, including an investigation of different synchronization techniques for memory windows. In (Hoefler, Dinan, Buntinas, Balaji, Barrett, Brightwell, Gropp, Kale, and Thakur 2013), the authors investigate the viability of MPI+SMPI execution, as well as compare it to MPI+OpenMP execution. It was found that an underlying limitation of OpenMP is its shared-by-default memory model, which does not couple well with MPI since the MPI memory model is private-by-default. For this reason, MPI+SMPI codes are expected to perform better, since shared memory is explicit and the memory model for the entire code is private-by-default. Most recently, a new MPI communication model has been introduced in (Gropp, Olson, and Samfass 2016), which better captures multinode communication performance and offers an open-source benchmarking tool to capture the model parameters for a given system. Independent of the shared memory layer, MPI is the de facto standard for data movement between nodes, and such a model can help any MPI program.

The remainder of this paper is organized into the following sections: Section 2 introduces the Householder mini-apps, Section 3 presents the performance testing results for the mini-apps considered, and Section 4 concludes this paper.

2 HOUSEHOLDER MINI-APP
The mini-apps use the Householder computation kernel from VULCAN, which is used in solving systems of linear equations. The Householder routine is an algorithm used to transform a square matrix into triangular form without increasing the magnitude of each element significantly (Hansen 1992). The Householder routine is numerically stable, in that it does not lose a significant amount of accuracy due to very small or very large intermediate values used in the computation.
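For reference, a textbook Householder triangularization of a dense n-by-n system A x = b looks roughly like the C sketch below. This is a generic serial illustration of the algorithm being discussed, not the VULCAN kernel; the function name and data layout (row-major A, right-hand side b reduced in place, followed by back substitution) are assumptions made for the example.

```c
#include <math.h>

/* Reduce A (n x n, row-major) to upper-triangular form with Householder
   reflectors, apply the same reflectors to b, then back-substitute for x. */
void householder_solve(int n, double *A, double *b, double *x)
{
    for (int k = 0; k < n - 1; k++) {
        /* Norm of the column segment A[k..n-1][k]. */
        double norm = 0.0;
        for (int i = k; i < n; i++) norm += A[i*n + k] * A[i*n + k];
        norm = sqrt(norm);
        if (norm == 0.0) continue;

        /* v = x_col - alpha*e1 with alpha = -sign(x_col[0])*||x_col||;
           v is stored temporarily in column k. */
        double alpha = (A[k*n + k] >= 0.0) ? -norm : norm;
        double v0 = A[k*n + k] - alpha;
        A[k*n + k] = v0;
        double beta = 0.0;                      /* beta = v^T v */
        for (int i = k; i < n; i++) beta += A[i*n + k] * A[i*n + k];

        /* Apply H = I - 2 v v^T / (v^T v) to the remaining columns and to b. */
        for (int j = k + 1; j < n; j++) {
            double dot = 0.0;
            for (int i = k; i < n; i++) dot += A[i*n + k] * A[i*n + j];
            double s = 2.0 * dot / beta;
            for (int i = k; i < n; i++) A[i*n + j] -= s * A[i*n + k];
        }
        double dotb = 0.0;
        for (int i = k; i < n; i++) dotb += A[i*n + k] * b[i];
        double sb = 2.0 * dotb / beta;
        for (int i = k; i < n; i++) b[i] -= sb * A[i*n + k];

        A[k*n + k] = alpha;                     /* diagonal entry of R       */
        for (int i = k + 1; i < n; i++) A[i*n + k] = 0.0;  /* eliminated column */
    }

    /* Back substitution on the resulting upper-triangular system. */
    for (int i = n - 1; i >= 0; i--) {
        double s = b[i];
        for (int j = i + 1; j < n; j++) s -= A[i*n + j] * x[j];
        x[i] = s / A[i*n + i];
    }
}
```

Choosing alpha with the sign opposite to A[k][k] avoids the cancellation that would otherwise cause the loss of accuracy mentioned above.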
Mini-apps are designed to perform specific functions. In this work, the important features are as follows: accept generic input; validate the numerical result of the optimized routine; measure the performance of the original and optimized routines; and tune optimizations.

The generic input is read in from a file, where the file must contain at least one matrix A and resulting vector b. Should only one matrix and vector be supplied, the input will be duplicated for all instances of m. Validation of the optimized routine is performed by taking the difference of the output from the original and optimized routines. The mini-app will first compute the solution of the input using the original routine, and then the optimized routine. This way the output may be compared directly, and relative performance may also be measured using execution time. Should the optimized routine feature one or more parameters that may be varied, they are to be investigated such that the optimization may be tuned to the hardware. In this work, there is always at least one tunable parameter.
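The validate-and-time workflow just described can be captured in a few lines. The following C fragment is only a sketch of such a harness under assumed interfaces (a solver signature taking all m systems at once); it is not taken from the mini-app sources.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Assumed solver interface: solve all m systems of size n, writing x. */
typedef void (*solver_fn)(int m, int n, const double *A, const double *b, double *x);

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + 1e-9 * ts.tv_nsec;
}

/* Run both routines on the same input, report speedup and the largest
   element-wise difference between their solutions. */
void validate_and_time(int m, int n, const double *A, const double *b,
                       solver_fn original, solver_fn optimized)
{
    double *x_ref = malloc((size_t)m * n * sizeof(double));
    double *x_opt = malloc((size_t)m * n * sizeof(double));

    double t0 = now_sec();
    original(m, n, A, b, x_ref);
    double t1 = now_sec();
    optimized(m, n, A, b, x_opt);
    double t2 = now_sec();

    double max_diff = 0.0;
    for (size_t i = 0; i < (size_t)m * n; i++) {
        double d = fabs(x_ref[i] - x_opt[i]);
        if (d > max_diff) max_diff = d;
    }

    printf("original %.3fs, optimized %.3fs, speedup %.2fx, max |diff| %.3e\n",
           t1 - t0, t2 - t1, (t1 - t0) / (t2 - t1), max_diff);

    free(x_ref);
    free(x_opt);
}
```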
One feature that should have been factored into the mini-app design was modularizing the different versions of the Householder routine. In this work, two mini-apps were designed because each implements a different version of the parallel Householder routine; however, it would have been better to design a single mini-app that uses modules to include other versions of the parallel Householder kernel. With this functionality, it would be less cumbersome to work on each version of the kernel.

To parallelize the Householder routine, m is decomposed into separate but equal chunks that are then solved by each thread (shared MPI tasks are referred to as threads in this work for brevity). However, the original routine varies over m inside the inner-most computational loop (an optimization that benefits vectorization and caching), whereas the parallel loop must be the outer-most loop for best performance. Therefore, loop blocking has been invoked for the parallel sections of the code, as sketched below. Loop blocking is a technique commonly used to reduce the memory footprint of a computation such that it fits inside the cache of a given hardware. Therefore, the parallel Householder routine has at least one tunable parameter, the block size.

In this work, two flavors of the shared memory model are investigated: OpenMP and SMPI. The difference between OpenMP and SMPI lies in how memory is managed. OpenMP uses a public-memory model where all data is available to all threads.
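The blocked decomposition of m referred to above can be illustrated as follows. This is a schematic C/OpenMP version written for this discussion, with an assumed kernel householder_block and an assumed block_size parameter; it is not the mini-app code itself.

```c
#include <stddef.h>

/* Assumed kernel: processes `count` systems at once, keeping the loop over
   those systems as its inner-most loop so it vectorizes like the original. */
void householder_block(int count, int n, double *A, double *b, double *x);

/* Parallel driver: the outer loop over blocks of systems is the parallel
   loop; block_size is the tunable parameter chosen so that one block's
   working set fits in cache. */
void householder_blocked(int m, int n, double *A, double *b, double *x,
                         int block_size)
{
    #pragma omp parallel for schedule(static)
    for (int start = 0; start < m; start += block_size) {
        int count = (start + block_size < m) ? block_size : m - start;
        householder_block(count, n,
                          A + (size_t)start * n * n,
                          b + (size_t)start * n,
                          x + (size_t)start * n);
    }
}
```

In the SMPI variant, the same decomposition would be expressed by assigning block ranges to the MPI tasks that share the memory window rather than by the pragma.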