Sold by Sack Fachmedien

Keller / Gabriel / Dongarra

Recent Advances in the Message Passing Interface

17th European MPI Users' Group Meeting, EuroMPI 2010, Stuttgart, Germany, September 12-15, 2010, Proceedings

Medium: Book
ISBN: 978-3-642-15645-8
Publisher: Springer
Publication date: 02.09.2010
Delivery time: up to 10 days

Parallel computing is on the verge of a new era. Multi-core processors make parallel computing a fundamental skill required of all computer scientists. At the same time, high-end systems have surpassed the Petaflop barrier, and significant efforts are devoted to the development of hardware and software technologies for the next-generation Exascale systems. To reach this next stage, processor architectures, high-speed interconnects and programming models will go through dramatic changes. The Message Passing Interface (MPI) has been the most widespread programming model for the parallel systems of today. A key question for upcoming Exascale systems is whether and how MPI has to evolve in order to meet their performance and productivity demands.

EuroMPI is the successor of the EuroPVM/MPI series, a flagship conference for this community, established as the premier international forum for researchers, users and vendors to present their latest advances in MPI and message passing systems in general. The 17th European MPI Users' Group Meeting was held in Stuttgart during September 12-15, 2010. The conference was organized by the High Performance Computing Center Stuttgart at the University of Stuttgart. The previous conferences were held in Espoo (2009), Dublin (2008), Paris (2007), Bonn (2006), Sorrento (2005), Budapest (2004), Venice (2003), Linz (2002), Santorini (2001), Balatonfured (2000), Barcelona (1999), Liverpool (1998), Krakow (1997), Munich (1996), Lyon (1995) and Rome (1994). The main topics of the conference were message-passing systems, especially MPI, and performance, scalability and reliability issues on very large scale systems.
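For context, the message-passing style that the proceedings revolve around can be illustrated with a minimal, generic MPI program in C: rank 0 sends one integer to rank 1 over MPI_COMM_WORLD. This sketch is not taken from the book; it only uses standard MPI calls (MPI_Send, MPI_Recv) to show the programming model under discussion.

    /* Minimal generic MPI sketch (not from the proceedings):
     * rank 0 sends a single integer to rank 1. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Such a program is built with an MPI compiler wrapper (e.g. mpicc) and launched with at least two processes, e.g. mpirun -np 2 ./send_recv.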


Product properties


  • Item number: 9783642156458
  • Medium: Book
  • ISBN: 978-3-642-15645-8
  • Publisher: Springer
  • Publication date: 02.09.2010
  • Language(s): English
  • Edition: 1st edition, 2010
  • Series: Lecture Notes in Computer Science
  • Format: Paperback
  • Weight: 492 g
  • Pages: 308
Authors / Editors

Editors: Keller / Gabriel / Dongarra

Contents

Large Scale Systems
  • A Scalable MPI_Comm_split Algorithm for Exascale Computing
  • Enabling Concurrent Multithreaded MPI Communication on Multicore Petascale Systems
  • Toward Performance Models of MPI Implementations for Understanding Application Scaling Issues
  • PMI: A Scalable Parallel Process-Management Interface for Extreme-Scale Systems
  • Run-Time Analysis and Instrumentation for Communication Overlap Potential
  • Efficient MPI Support for Advanced Hybrid Programming Models

Parallel Filesystems and I/O
  • An HDF5 MPI Virtual File Driver for Parallel In-situ Post-processing
  • Automated Tracing of I/O Stack
  • MPI Datatype Marshalling: A Case Study in Datatype Equivalence

Collective Operations
  • Design of Kernel-Level Asynchronous Collective Communication
  • Network Offloaded Hierarchical Collectives Using ConnectX-2’s CORE-Direct Capabilities
  • An In-Place Algorithm for Irregular All-to-All Communication with Limited Memory

Applications
  • Massively Parallel Finite Element Programming
  • Parallel Zero-Copy Algorithms for Fast Fourier Transform and Conjugate Gradient Using MPI Datatypes
  • Parallel Chaining Algorithms

MPI Internals (I)
  • Precise Dynamic Analysis for Slack Elasticity: Adding Buffering without Adding Bugs
  • Implementing MPI on Windows: Comparison with Common Approaches on Unix
  • Compact and Efficient Implementation of the MPI Group Operations
  • Characteristics of the Unexpected Message Queue of MPI Applications

Fault Tolerance
  • Dodging the Cost of Unavoidable Memory Copies in Message Logging Protocols
  • Communication Target Selection for Replicated MPI Processes
  • Transparent Redundant Computing with MPI
  • Checkpoint/Restart-Enabled Parallel Debugging

Best Paper Awards
  • Load Balancing for Regular Meshes on SMPs with MPI
  • Adaptive MPI Multirail Tuning for Non-uniform Input/Output Access
  • Using Triggered Operations to Offload Collective Communication Operations

MPI Internals (II)
  • Second-Order Algorithmic Differentiation by Source Transformation of MPI Code
  • Locality and Topology Aware Intra-node Communication among Multicore CPUs
  • Transparent Neutral Element Elimination in MPI Reduction Operations

Poster Abstracts
  • Use Case Evaluation of the Proposed MPIT Configuration and Performance Interface
  • Two Algorithms of Irregular Scatter/Gather Operations for Heterogeneous Platforms
  • Measuring Execution Times of Collective Communications in an Empirical Optimization Framework
  • Dynamic Verification of Hybrid Programs
  • Challenges and Issues of Supporting Task Parallelism in MPI