Introduction to MPI - Part II
- Published: Jan. 5, 2025
- (NOTE: This talk is a continuation of the Part I talk given on Nov. 11, 2015. If you missed it, the recording is posted on SHARCNET's YouTube channel.)
This talk builds on the Introduction to MPI (Message Passing Interface) Part I talk, introducing more advanced features such as collective and non-blocking communications. Collective communications are implemented as a set of standard MPI routines; when communication follows a standard, structured pattern, they let processes exchange information efficiently without extra effort from the programmer. Examples of collective communications include broadcasts and reductions. Non-blocking communications allow the programmer to overlap communication with computation. Since communication is generally slow compared to computation, such overlap is often necessary to produce efficient MPI code. The example programs in this talk will be implemented in C.
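To give a flavour of the collective operations mentioned above, here is a minimal sketch (not one of the talk's own example programs) in which rank 0 broadcasts a value to all processes and the partial results are then summed back onto rank 0 with a reduction:

```c
/* Minimal sketch of MPI collectives: a broadcast followed by a reduction.
 * Compile with an MPI wrapper, e.g.: mpicc collectives.c -o collectives
 * Run with, e.g.:                    mpirun -np 4 ./collectives          */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Broadcast: rank 0 sends n to every process in a single call. */
    int n = 0;
    if (rank == 0) n = 100;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each process computes a partial result (illustrative). */
    int partial = rank * n;

    /* Reduction: sum all partial results onto rank 0. */
    int total = 0;
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

And a similar sketch of the non-blocking idea: each process starts a send and receive with its ring neighbours, does unrelated computation while the messages are in flight, and only waits when the received data is actually needed. The ring pattern and the dummy loop are illustrative placeholders, not code from the talk:

```c
/* Minimal sketch of overlapping communication with computation
 * using non-blocking point-to-point calls. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;         /* neighbours in a ring */
    int left  = (rank - 1 + size) % size;

    int sendbuf = rank, recvbuf = -1;
    MPI_Request reqs[2];

    /* Start the exchange, but do not wait for it yet. */
    MPI_Irecv(&recvbuf, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Computation that does not depend on recvbuf proceeds here;
       this is the overlap that hides the communication cost. */
    double local = 0.0;
    for (int i = 0; i < 1000000; i++)
        local += (double)i * rank;

    /* Block only when the received data is actually needed. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d received %d from rank %d (local work = %g)\n",
           rank, recvbuf, left, local);

    MPI_Finalize();
    return 0;
}
```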
Our complete MPI series of webinars:
Part I : • Introduction to MPI - ...
Part II : • Introduction to MPI - ...
Part III : • Introduction to MPI - ...
_______________________________________________
This webinar was presented by Pawel Pomorski (SHARCNET) on November 25th, 2015 as part of a series of regular biweekly webinars run by SHARCNET. The webinars cover different high performance computing (HPC) topics, are approximately 45 minutes in length, and are delivered by experts in the relevant fields. Further details can be found on this web page: www.sharcnet.c...
SHARCNET is a consortium of 18 Canadian academic institutions that share a network of high performance computers (www.sharcnet.ca). SHARCNET is a part of Compute Ontario (computeontario.ca/) and Compute Canada (computecanada.ca).