Foreign Literature Translation: Original Text and Translation

Title: ENHANCING APPLICATION PERFORMANCE USING MINI-APPS: COMPARISON OF HYBRID PARALLEL PROGRAMMING PARADIGMS
Authors: Gary Lawson, Michael Poteat, Masha Sosonkina, Robert Baurle
Journal: Computer Science
Year: 2016

Original Text

ENHANCING APPLICATION PERFORMANCE USING MINI-APPS: COMPARISON OF HYBRID PARALLEL PROGRAMMING PARADIGMS
Gary Lawson, Michael Poteat, Masha Sosonkina, Robert Baurle

ABSTRACT

In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 10 was measured for MPI+OpenMP.

Keywords: Mini-apps, Performance, VULCAN, Shared Memory, MPI, OpenMP

1 INTRODUCTION

In many fields, real-world applications have already been developed. For established applications to stay up-to-date, new parallel strategies must be explored to determine which may yield the best performance, especially with advances in computing hardware. However, restructuring or modifying a real-world application incurs increased cost depending on the size of the code and the changes to be made. A mini-app may be created to quickly explore such options without modifying the entire code. Mini-apps reduce the overhead of applying new strategies, thus various strategies may be implemented and compared. This work presents the authors' experiences when following this strategy for a real-world application developed by NASA.

VULCAN (Viscous Upwind Algorithm for Complex Flow Analysis) is a turbulent, nonequilibrium, finite-rate chemical kinetics, Navier-Stokes flow solver for structured, cell-centered, multiblock grids that is maintained and distributed by the Hypersonic Air Breathing Propulsion Branch of the NASA Langley Research Center (NASA 2016). The mini-app developed in this work uses the Householder Reflector kernel for solving systems of linear equations. This kernel is used often by different workloads and is a good candidate for deciding what type of strategy to apply to VULCAN. VULCAN is built on a single layer of MPI, and the code has been optimized to obtain perfect vectorization, therefore two levels of parallelism are currently used. This work investigates two flavors of shared-memory parallelism, OpenMP and Shared MPI, which will provide the third level of parallelism for the application. A third level of parallelism increases performance, which decreases the time-to-solution.

MPI has extended its standard to MPI version 3.0, which includes the Shared Memory (SHM) model (Mikhail B. (Intel) 2015, Message Passing Interface Forum 2012), known in this work as Shared MPI (SMPI). This extension allows MPI to create memory windows that are shared between MPI tasks on the same physical node. In this way, MPI tasks are equivalent to threads, except that Shared MPI is more difficult for a programmer to implement. OpenMP is the most common shared-memory library used to date because of its ease-of-use (OpenMP 2016). In most cases, only a few OpenMP pragmas are required to parallelize a loop; however, OpenMP is subject to increased overhead, which may decrease performance if not properly tuned.
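The following is an illustrative sketch only, not code from VULCAN or from the mini-apps: it shows how an MPI 3.0 shared-memory window might be created among the tasks on one node, with one task allocating the buffer and the remaining tasks querying its base address; the element count is a made-up placeholder.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Group the MPI tasks that share this physical node (MPI 3.0). */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        int node_rank;
        MPI_Comm_rank(node_comm, &node_rank);

        /* Task 0 on the node allocates the shared window; the other tasks
           request zero bytes and query task 0's base pointer. */
        const MPI_Aint n = 1024;                      /* placeholder size */
        MPI_Aint bytes = (node_rank == 0) ? n * (MPI_Aint)sizeof(double) : 0;
        double *data;
        MPI_Win win;
        MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                                node_comm, &data, &win);
        if (node_rank != 0) {
            MPI_Aint size;
            int disp_unit;
            MPI_Win_shared_query(win, 0, &size, &disp_unit, &data);
        }

        /* Every task on the node now loads and stores the same buffer,
           much like a thread; synchronization is the programmer's job. */
        MPI_Win_fence(0, win);
        data[node_rank] = (double)node_rank;
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }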

As early as the year 2000, the authors in (Cappello and Etiemble 2000) found that latency-sensitive codes seem to benefit from pure MPI implementations, whereas bandwidth-sensitive codes benefit from hybrid MPI+OpenMP. The authors also found that faster processors will benefit hybrid MPI+OpenMP codes if data movement is not an overwhelming bottleneck (Cappello and Etiemble 2000). Since this time, hybrid MPI+OpenMP implementations have improved, but not without difficulties. In (Drosinos and Koziris 2004, Chorley and Walker 2010), it was found that OpenMP incurs many performance reductions, including overhead (fork/join, atomics, etc.), false sharing, imbalanced message passing, and a sensitivity to processor mapping. However, OpenMP overhead may be hidden when using more threads. In (Rabenseifner, Hager, and Jost 2009), the authors found that simply using OpenMP could incur performance penalties because the compiler avoids optimizing OpenMP loops, a behavior verified up to compiler version 10.1. Although compilers have advanced considerably since this time, application users that still compile using older versions may be at risk if using OpenMP. In (Drosinos and Koziris 2004, Chorley and Walker 2010) the authors found that the hybrid MPI+OpenMP approach outperforms the pure MPI approach because the hybrid strategy diversifies the path to parallel execution. More recently, MPI extended its standard to include the SHM model (Mikhail B. (Intel) 2015). The authors in (Hoefler, Dinan, Thakur, Barrett, Balaji, Gropp, and Underwood 2015) present MPI RMA theory and examples, which are the basis of the SHM model. In (Gerstenberger, Besta, and Hoefler 2013), the authors conduct a thorough performance evaluation of MPI RMA, including an investigation of different synchronization techniques for memory windows. In (Hoefler, Dinan, Buntinas, Balaji, Barrett, Brightwell, Gropp, Kale, and Thakur 2013), the authors investigate the viability of MPI+SMPI execution, as well as compare it to MPI+OpenMP execution. It was found that an underlying limitation of OpenMP is its shared-by-default memory model, which does not couple well with MPI, since the MPI memory model is private-by-default. For this reason, MPI+SMPI codes are expected to perform better, since shared memory is explicit and the memory model for the entire code is private-by-default. Most recently, a new MPI communication model has been introduced in (Gropp, Olson, and Samfass 2016), which better captures multinode communication performance and offers an open-source benchmarking tool to capture the model parameters for a given system. Independent of the shared memory layer, MPI is the de facto standard for data movement between nodes, and such a model can help any MPI program.

The remainder of this paper is organized as follows: Section 2 introduces the Householder mini-apps, Section 3 presents the performance testing results for the mini-apps considered, and Section 4 concludes this paper.

2 HOUSEHOLDER MINI-APP

The mini-apps use the Householder computation kernel from VULCAN, which is used in solving systems of linear equations. The Householder routine is an algorithm that transforms a square matrix into triangular form without increasing the magnitude of each element significantly (Hansen 1992). The Householder routine is numerically stable, in that it does not lose a significant amount of accuracy due to very small or very large intermediate values used in the computation.
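The VULCAN kernel itself is not reproduced here. As a rough sketch of the underlying idea, assuming a dense n-by-n matrix in row-major C storage and no special structure, a Householder triangularization that also transforms the right-hand side b might look as follows.

    #include <math.h>
    #include <stdlib.h>

    /* Reduce the n-by-n matrix a (row-major) to upper-triangular form with
       Householder reflectors, applying each reflector to the right-hand
       side b as well.  Sketch only: the VULCAN routine differs in storage
       layout and batches the computation over m systems. */
    void householder_triangularize(int n, double *a, double *b)
    {
        double *v = malloc((size_t)n * sizeof(double));
        for (int k = 0; k < n - 1; k++) {
            /* Build the reflector that zeroes column k below the diagonal. */
            double norm = 0.0;
            for (int i = k; i < n; i++)
                norm += a[i * n + k] * a[i * n + k];
            norm = sqrt(norm);
            if (norm == 0.0)
                continue;
            double alpha = (a[k * n + k] > 0.0) ? -norm : norm;
            double vnorm2 = 0.0;
            for (int i = k; i < n; i++) {
                v[i] = a[i * n + k] - (i == k ? alpha : 0.0);
                vnorm2 += v[i] * v[i];
            }

            /* Apply (I - 2 v v^T / v^T v) to the trailing columns and to b. */
            for (int j = k; j < n; j++) {
                double dot = 0.0;
                for (int i = k; i < n; i++)
                    dot += v[i] * a[i * n + j];
                double s = 2.0 * dot / vnorm2;
                for (int i = k; i < n; i++)
                    a[i * n + j] -= s * v[i];
            }
            double dot = 0.0;
            for (int i = k; i < n; i++)
                dot += v[i] * b[i];
            double s = 2.0 * dot / vnorm2;
            for (int i = k; i < n; i++)
                b[i] -= s * v[i];
        }
        free(v);
    }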

Mini-apps are designed to perform specific functions. In this work, the important features are as follows: accept generic input; validate the numerical result of the optimized routine; measure the performance of the original and optimized routines; and tune the optimizations.

The generic input is read in from a file, where the file must contain at least one matrix A and resulting vector b. Should only one matrix and vector be supplied, the input is duplicated for all m instances. Validation of the optimized routine is performed by taking the difference of the outputs from the original and optimized routines. The mini-app first computes the solution of the input using the original routine, and then using the optimized routine. This way the outputs may be compared directly, and relative performance may also be measured using execution time. Should the optimized routine feature one or more parameters that may be varied, they are to be investigated such that the optimization may be tuned to the hardware. In this work, there is always at least one tunable parameter.
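A schematic of such a driver is sketched below; the function-pointer interface, the argument lists, and the assumption that both kernels leave A and b unmodified are placeholders for illustration, not the actual mini-app interfaces.

    #include <math.h>
    #include <stdio.h>
    #include <omp.h>              /* omp_get_wtime() used only as a timer */

    /* Hypothetical kernel signature: solve m systems of size n, writing
       the solutions to x. */
    typedef void (*householder_fn)(int m, int n, const double *A,
                                   const double *b, double *x);

    /* Run both versions on the same input, validate the optimized output
       against the original one, and report execution times. */
    void run_miniapp(int m, int n, const double *A, const double *b,
                     double *x_ref, double *x_opt,
                     householder_fn original, householder_fn optimized)
    {
        double t0 = omp_get_wtime();
        original(m, n, A, b, x_ref);
        double t1 = omp_get_wtime();
        optimized(m, n, A, b, x_opt);
        double t2 = omp_get_wtime();

        /* Validation: maximum absolute difference between the two outputs. */
        double max_diff = 0.0;
        for (long i = 0; i < (long)m * n; i++) {
            double d = fabs(x_ref[i] - x_opt[i]);
            if (d > max_diff)
                max_diff = d;
        }

        printf("original %.3f s, optimized %.3f s, speedup %.2f, max|diff| %.3e\n",
               t1 - t0, t2 - t1, (t1 - t0) / (t2 - t1), max_diff);
    }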

One feature that should have been factored into the mini-app design was modularizing the different versions of the Householder routine. In this work, two mini-apps were designed because each implements a different version of the parallel Householder routine; however, it would have been better to design a single mini-app that uses modules to include the other versions of the parallel Householder kernel. With this functionality, it would be less cumbersome to work on each version of the kernel. To parallelize the Householder routine, m is decomposed into separate but equal chunks that are then solved by each thread (for brevity, shared MPI tasks are treated as equivalent to threads in this work). However, the original routine varies over m inside the inner-most computational loop, an optimization that benefits vectorization and caching, whereas the parallel loop must be the outer-most loop for best performance. Therefore, loop blocking has been applied to the parallel sections of the code, as sketched below. Loop blocking is a technique commonly used to reduce the memory footprint of a computation so that it fits inside the cache of a given hardware. Therefore, the parallel Householder routine has at least one tunable parameter, the block size.
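As an illustrative sketch of this restructuring (not the mini-app source), assuming the m systems are stored contiguously and a hypothetical blocked kernel solve_block keeps its inner-most loop over the systems of a block, the OpenMP variant can be organized as follows.

    #include <stddef.h>

    /* Hypothetical signature of the blocked Householder kernel: factor
       `count` systems stored contiguously, with its inner-most loop
       running over those systems so that vectorization is preserved. */
    typedef void (*block_solver)(int n, int count, double *A, double *b);

    /* Solve m independent n-by-n systems.  The parallel loop is the
       outer-most one and walks over blocks of the m systems; block_size
       is the tunable parameter mentioned in the text. */
    void householder_batch(int m, int n, double *A, double *b,
                           int block_size, block_solver solve_block)
    {
        #pragma omp parallel for schedule(static)
        for (int start = 0; start < m; start += block_size) {
            int count = (start + block_size < m) ? block_size : m - start;
            solve_block(n, count, &A[(size_t)start * n * n],
                        &b[(size_t)start * n]);
        }
    }

Under the SMPI flavor, the same decomposition over blocks of m would be expressed by assigning block ranges to the node-local MPI tasks instead of OpenMP threads.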

In this work, two flavors of the shared memory model are investigated: OpenMP and SMPI. The difference between OpenMP and SMPI lies in how memory is managed. OpenMP uses a public-memory model where all data is available to all threads.
