Alexander Supalov 
Inside the Message Passing Interface [EPUB ebook] 
Creating Fast Communication Libraries


A hands-on guide to writing a Message Passing Interface implementation, this book takes the reader on a tour across major MPI implementations, the best optimization techniques, application-relevant usage hints, and a historical retrospective of the MPI world, all based on a quarter of a century spent inside MPI. Readers will learn to write MPI implementations from scratch, and to design and optimize communication mechanisms using pragmatic subsetting as the guiding principle. Inside the Message Passing Interface also covers MPI quirks and tricks needed to achieve best performance.


Dr. Alexander Supalov created the Intel Cluster Tools product line, including the Intel MPI Library that he designed and led between 2003 and 2015. He invented the common MPICH ABI and also guided Intel efforts in the MPI Forum during the development of the MPI-2.1, MPI-2.2, and MPI-3 standards. Before that, Alexander designed new finite-element mesh-generation methods, contributing to the PARMACS and PARASOL interfaces, and developed the first full MPI-2 and IMPI implementations in the world. He graduated from the Moscow Institute of Physics and Technology in 1990, and earned his Ph.D. in applied mathematics at the Institute of Numerical Mathematics of the Russian Academy of Sciences in 1995. Alexander holds 26 patents (more pending worldwide).

€99.95

Table of Contents



  • Introduction – Learn what awaits you inside this book

  • What this book is about

  • Who should read this book

  • Notation and conventions

  • How to read this book
  • Overview
  • Parallel computer
  • Intraprocessor parallelism

  • Interprocessor parallelism

  • Exercises
  • MPI standard
  • MPI history

  • Related standards

  • Exercises
  • MPI subsetting
  • Motivation

  • Typical examples

  • Implementation practice

  • Exercises

  • Shared memory – Learn how to create a simple MPI subset capable of basic blocking point-to-point and collective operations over shared memory
  • Subset definition
  • General assumptions

  • Blocking point-to-point communication

  • Blocking collective operations

  • Exercises
  • Communication mechanisms
  • Basic communication

  • Intraprocess performance

  • Interprocess performance

  • Exercises
  • Startup and termination
  • Process creation
  • Two processes

  • More processes
  • Connection establishment

  • Process termination

  • Exercises
  • Blocking point-to-point communication
  • Limited message length
  • Blocking protocol
  • Unlimited message length
  • Double buffering

  • Eager protocol

  • Rendezvous protocol
  • Exercises
  • Blocking collective operations
  • Naive algorithms

  • Barrier

  • Broadcast

  • Reduce and Allreduce

  • Exercises

  • Sockets – Learn how to create an MPI subset capable of all point-to-point and blocking collective operations over Ethernet and other IP-capable networks
  • Subset definition
  • General assumptions

  • Blocking point-to-point communication

  • Nonblocking point-to-point operations

  • Blocking collective operations

  • Exercises
  • Communication mechanisms
  • Basic communication

  • Intranode performance

  • Internode performance

  • Exercises
  • Synchronous progress engine
  • Communication establishment

  • Data transfer

  • Exercises
  • Startup and termination
  • Process creation
  • Startup command

  • Process daemon

  • Out-of-band communication

  • Host name resolution
  • Connection establishment
  • At startup (eager)

  • On request (lazy)
  • Process termination

  • Exercises
  • Blocking point-to-point communication
  • Source and tag matching

  • Unexpected messages

  • Exercises
  • Nonblocking point-to-point communication
  • Request management

  • Exercises
  • Blocking collective operations
  • Communication context

  • Basic algorithms
  • Tree-based algorithms

  • Circular algorithms

  • Hypercube algorithms
  • Exercises

  • OFA libfabric – Learn how to create an MPI subset capable of all point-to-point and collective operations over InfiniBand and future networks
  • Subset definition
  • General assumptions

  • Point-to-point operations

  • Collective operations

  • Exercises
  • Communication mechanisms
  • Basic communication

  • Intranode performance

  • Internode performance

  • Exercises
  • Startup and termination
  • Process creation

  • Credential exchange

  • Connection establishment

  • Process termination

  • Exercises
  • Point-to-point communication
  • Blocking communication

  • Nonblocking communication

  • Exercises
  • Collective operations
  • Advanced algorithms

  • Blocking operations

  • Nonblocking operations

  • Exercises

  • Advanced features – Learn how to add advanced MPI features including but not limited to heterogeneity, one-sided communication, file I/O, and language bindings
  • Communication modes
  • Standard

  • Buffered

  • Synchronous
  • Heterogeneity
  • Basic datatypes

  • Simple datatypes

  • Derived datatypes

  • Exercises
  • Groups, communicators, topologies
  • Group management

  • Communicator management

  • Process topologies

  • Exercises
  • One-sided communication
  • Mapped implementation

  • Native implementation

  • Exercises
  • File I/O
  • Standard I/O

  • MPI file I/O

  • Exercises
  • Language bindings
  • Fortran

  • C++

  • Java

  • Python

  • Exercises

  • Optimization – Learn how to optimize MPI internally by using advanced implementation techniques and available special hardware
  • Direct data transfer
  • Direct memory access

  • Remote direct memory access

  • Exercises
  • Threads
  • Thread support level

  • Threads as MPI processes

  • Shared memory extensions

  • Exercises
  • Multiple fabrics
  • Synchronous progress engine

  • Asynchronous progress engine

  • Hybrid progress engine

  • Exercises
  • Dedicated hardware
  • Synchronization

  • Special memory

  • Auxiliary networks

  • Exercises

  • Look ahead – Learn to recognize MPI advantages and drawbacks to better assess its future
  • MPI axioms
  • Reliable data transfer

  • Ordered message delivery

  • Dense process rank sequence

  • Exercises
  • MPI-4 en route
  • Fault tolerance

  • Exercises
  • Beyond MPI
  • Exascale challenge

  • Exercises

  • References – Learn about books that may further extend your knowledge

  • Appendices

  • MPI Families – Learn about major MPI implementation families, their genesis, architecture and relative performance
  • MPICH
  • Genesis

  • Architecture

  • Details
  • MPICH

  • MVAPICH

  • Intel MPI

  • Exercises
  • Open MPI
  • Genesis

  • Architecture

  • Details

  • Exercises
  • Comparison
  • Market

  • Features

  • Performance

  • Exercises

  • Alternative interfaces – Learn about other popular interfaces that are used to implement MPI

  • DAPL

  • Exercises
  • SHMEM

  • Exercises
  • GASNet

  • Exercises
  • Portals

  • Exercises

  • Solutions to all exercises – Learn how to answer all those questions
  • About the author

    Dr. Alexander Supalov, Supalov HPC, Germany

    Language English ● Format EPUB ● 384 pages ● ISBN 9781501506789 ● File size 19.8 MB ● Publisher De|G Press ● City Basel/Berlin/Boston ● Published 2018 ● Edition 1 ● Downloadable for 24 months ● Currency EUR ● ID 6964953 ● Copy protection Adobe DRM
    Requires a DRM-capable ebook reader

