Training:SummerSchool2016:Programme:MPI

The MPI (Message Passing Interface) API is a widely used standard set of interfaces for programming parallel computers ranging from multicore laptops to large-scale SMP servers and clusters. This workshop is directed at current or prospective users of parallel computers who want to significantly improve the performance of their programs by “parallelizing” the code on a wide range of platforms. No prior background in parallel computing is required, but some basic programming experience in either Fortran or C/C++ is useful.
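
As an illustration of the message-passing model, here is a minimal sketch of an MPI program in C, assuming an MPI implementation such as Open MPI or MPICH is available; each process starts the MPI runtime, queries its own rank and the total number of processes, and prints one line:

 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char **argv)
 {
     /* Start the MPI runtime; every process runs this same program. */
     MPI_Init(&argc, &argv);
 
     int rank, size;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
     MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
 
     printf("Hello from process %d of %d\n", rank, size);
 
     /* Shut the MPI runtime down cleanly. */
     MPI_Finalize();
     return 0;
 }

Such a program would typically be compiled with an MPI wrapper compiler and launched with several processes, for example via mpicc and mpirun.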

The content of the course ranges from introductory to intermediate. After a brief introduction to MPI, we cover MPI fundamentals, including about a dozen MPI routines, to familiarize users with the basic concepts of MPI programming. Later we discuss and demonstrate array distribution, user-defined data types, and task distribution with examples.
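
As a rough sketch of the array-distribution topic (the chunk size and the assumed number of processes below are illustrative, not part of the course material), the root process scatters equal-sized slices of an array so that each rank works on its own piece:

 #include <mpi.h>
 #include <stdio.h>
 
 #define CHUNK 4   /* elements per process (illustrative value) */
 
 int main(int argc, char **argv)
 {
     MPI_Init(&argc, &argv);
 
     int rank, size;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
 
     /* Root fills a global array of CHUNK * size elements
        (buffer sized for up to 64 processes in this sketch). */
     double global[CHUNK * 64];
     if (rank == 0)
         for (int i = 0; i < CHUNK * size; i++)
             global[i] = (double)i;
 
     /* Every process receives its own CHUNK-element slice. */
     double local[CHUNK];
     MPI_Scatter(global, CHUNK, MPI_DOUBLE,
                 local,  CHUNK, MPI_DOUBLE,
                 0, MPI_COMM_WORLD);
 
     /* Each rank now works on local[0..CHUNK-1] independently. */
     double sum = 0.0;
     for (int i = 0; i < CHUNK; i++)
         sum += local[i];
     printf("rank %d partial sum = %g\n", rank, sum);
 
     MPI_Finalize();
     return 0;
 }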

Throughout this workshop we will work through simple exercises on a dedicated cluster to apply our newly gained knowledge in practice.

Instructor: Gang Liu, CAC, Queen's University.

Prerequisites: Basic Fortran or C/C++ programming.