HPC with many core processors


Author(s): Xavier Martorell¹, Jorge Bellon², Victor Lopez³, Vicenç Beltran³, Sergi Mateo³, Xavier Teruel³, Eduard Ayguade¹, Jesus Labarta¹
Source: Many-Core Computing: Hardware and Software, 2019
Publication date: June 2019

The current trend in building clusters and supercomputers is to use medium-to-large symmetric multi-processor (SMP) nodes connected through a high-speed network. Applications need to adapt to these execution environments by combining distributed- and shared-memory programming, and thus become hybrid. Hybrid applications are written with two or more programming models, usually message passing interface (MPI) [1,2] for the distributed environment and OpenMP [3,4] for shared-memory support. The goal of this chapter is to show how the two programming models can be made interoperable, easing the work of the programmer. Instead of asking programmers to hand-code performance optimizations, good interoperability between the programming models can be relied upon to achieve high performance. For example, instead of using non-blocking message passing and double buffering to achieve computation–communication overlap, our approach provides this feature by taskifying communications using OpenMP tasks [5,6].
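To illustrate the idea of taskifying communications, the following is a minimal sketch (not the chapter's actual code) of wrapping blocking MPI receives in OpenMP tasks so that computation on already-received blocks overlaps with in-flight communication. The names `compute_block`, `NBLOCKS`, and `BLOCK` are illustrative assumptions; a real run also requires an MPI library initialized with `MPI_THREAD_MULTIPLE`.

```
/* Sketch: computation-communication overlap via taskified MPI calls.
 * Assumed/illustrative: compute_block(), NBLOCKS, BLOCK, peer rank. */
#include <mpi.h>
#include <omp.h>

#define NBLOCKS 8
#define BLOCK   1024

static double buf[NBLOCKS][BLOCK];

void compute_block(double *b, int n);   /* application kernel (assumed) */

void exchange_and_compute(int peer)
{
    #pragma omp parallel
    #pragma omp single
    for (int i = 0; i < NBLOCKS; i++) {
        /* Communication task: the blocking receive for block i
           runs concurrently with compute tasks for other blocks. */
        #pragma omp task depend(out: buf[i])
        MPI_Recv(buf[i], BLOCK, MPI_DOUBLE, peer, i,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Compute task: released as soon as its block has arrived,
           via the task dependence on buf[i]. */
        #pragma omp task depend(in: buf[i])
        compute_block(buf[i], BLOCK);
    }
    /* implicit barrier at the end of the parallel region waits
       for all outstanding tasks */
}
```

Note that with plain OpenMP a blocking MPI call inside a task occupies a worker thread for its whole duration; the MPI+OmpSs interoperability layer described in this chapter (Sections 1.1–1.3) exists precisely to switch to another ready task when a communication blocks.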

Chapter Contents:

  • 1.1 MPI+OmpSs interoperability
  • 1.2 The interposition library
  • 1.3 Implementation of the MPI+OmpSs interoperability
  • 1.4 Solving priority inversion
  • 1.5 Putting it all together
  • 1.6 Machine characteristics
  • 1.7 Evaluation of NTChem
  • 1.7.1 Application analysis
  • 1.7.2 Parallelization approach
  • 1.7.3 Performance analysis
  • 1.8 Evaluation with Linpack
  • 1.8.1 Application analysis
  • 1.8.2 Parallelization approach
  • 1.8.3 Performance analysis
  • 1.9 Conclusions and future directions
  • Acknowledgments
  • References

Inspec keywords: parallel processing; shared memory systems; multiprocessing systems; mainframes; open systems; message passing

Other keywords: HPC; SMP; programming models; OpenMP; medium-to-big symmetric multi-processors; MPI; shared memory support; many core processors; message passing interface

Subjects: Parallel software; Multiprocessing systems; Distributed systems software; Computer networks and techniques; Parallel programming and algorithm theory; Parallel architecture

