HowTo:Compilers

Compilers at the Centre for Advanced Computing

This is an introduction to the Fortran, C, and C++ compilers used on our clusters and servers. It is meant to give the user a basic idea of how to use the compilers and of common code-optimization options.

Available Compilers

We are currently supporting two compiler suites on the Linux platform:

  • The Intel Compiler Suite is located in the /opt/ics directory. The version is 12.1. The compilers ifort and icc are in the /opt/ics/composer_xe_2011_sp1.6.233/bin/intel64 directory. Various libraries are in the /opt/ics/lib/intel64 directory.
  • As part of the CentOS distribution, we also have the GNU C, C++, and Fortran compilers, called gcc, g++, and gfortran, respectively. They are installed in /usr/bin.

Setup

  • For setting up the Intel Compiler Suite you need to issue the command
    use icsmpi
    This replaces the lengthy setting of environment variables with a single use command. It also adds the proper directories to the PATH variable.
  • The GNU compilers gcc and gfortran are available by default, i.e. they require no set-up. The current version for these is 4.4.7-4. These compilers are often required when compiling open-source programs. We recommend using them unless the Intel compilers are needed for better performance. A quick check of either set-up is shown below.
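
As a quick check that the set-up worked, you can ask each compiler for its version. This is a minimal sketch; the exact version strings on our systems may differ:

use icsmpi
ifort -V
icc -V
gfortran --version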

Compiling and Linking

  • The Intel compilers are called ifort and icc for Fortran (all versions) and C/C++, respectively.
  • The GNU compilers are called gcc, g++, and gfortran for C, C++, and Fortran (all versions), respectively.

Compilation commands

           Fortran     C      C++    Activation
  Intel    ifort       icc    icc    use icsmpi
  Gnu      gfortran    gcc    g++    n/a

Compiling and linking is best done with a makefile. Here are a few common flags. Consult man pages for specific details (for instance "man gcc").

Compiling

compiler -c [options] name.ext

where "compiler" stands for the compiler name, for instance "gfortran" for the GNU Fortran compiler. The file extension "ext" determines what source code is being compiled, for instance "f" means "fixed format" Fortran, f90 means "free format" Fortran (90), or "c" stands for C. "[options]" denotes additional compiler flags that usually start with a '-'.

Linking

compiler -o name [options] [libraries] list

"compiler" see above. "name" is the name of the executable (if not specified, the default is "a.out". [options] see above. [libraries] is a list of libraries that need to be linked in, usually as a list of file names with full path, or as '-L' and '-l' combinations [see below]. "list" means a list of object files, usually with ".o" extension.

Using the compilers and the linker in the above manner requires the proper setting of the PATH environment variable, i.e. prior set-up.

Options / flags

There are hundreds of compiler flags, most of which are rarely needed. A few that are in more frequent use are:

  • -On optimizes your code. "n" is a number from 0 to 3, with increasing aggressiveness of the alterations made to the code, but also increasing potential gain. Up to -O3 is generally rather safe to use, but you should, of course, always check results against an un-optimized version: they might differ.
  • -g produces code that can be debugged. -g and -On are not necessarily mutually exclusive, but optimization may make debugging difficult because it alters the relationship between source code and executable. This is a good flag to have in the development stage of a program, but it is usually dropped later.
  • -V (or -v) prints the version of the compiler (the GNU compilers also accept --version).
  • -lname is used to bind in a library called libname.a (static) or libname.so (dynamic). This flag is used at the link stage only.
  • -Ldirname is used in conjunction with -lname and tells the linker where to look for libraries. "dirname" is a directory name such as /opt/ics/lib/intel64.
  • -Wl,-rpath,dirname tells the executable where to find dynamic libraries at runtime (some compilers also accept -Rdirname for this purpose). An example combining several of these flags follows this list.
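
Combining several of these flags, a debuggable, optimized build that links against a hypothetical library libmylib.so installed under /opt/mylib/lib might look like this (the library name and path are assumptions for illustration):

gcc -g -O2 -c prog.c
gcc -o prog prog.o -L/opt/mylib/lib -lmylib -Wl,-rpath,/opt/mylib/lib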

There are many more flags. They are documented in the man pages (e.g. "man ifort" for the Intel Fortran compiler), as well as in the documentation for the compiler. Some compiler flags are only useful for parallel programs and will be discussed later.

Implementations

While MPI itself is a portable, platform independent standard, much like a programming language, the actual implementation is necessarily platform dependent since it has to take into account the architecture of the machine or cluster in question.

The most commonly used implementation of MPI for the Linux platform is called OpenMPI. The following considerations focus on this implementation.

Our machines are small to mid-sized shared-memory machines that form a cluster. Since the interconnect between the individual nodes is a bottleneck in efficient program execution, most of the MPI programs running on our machines are executed within a node. This allows processes to communicate rapidly through a so-called "shared-memory layer". Our cluster is configured to preferentially schedule processes within a single node.

Currently, two versions of the OpenMPI parallel environment are in common use:

  • For the Intel compiler suite, an Intel implementation of OpenMPI is automatically available when setting up the compiler suite with the
    use icsmpi
    command.
  • For the gnu compiler, OpenMPI is made available through the
    use openmpi
    setup command.

We do not recommend having both versions set up simultaneously.
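
To verify which MPI wrapper is active after set-up, you can query the wrapper itself; with OpenMPI, the -showme option prints the underlying compiler command and flags:

which mpicc
mpicc -showme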

Compiling MPI code

The compilation of MPI programs requires a few compiler options to direct the compiler to the location of header files and libraries. Since these switches are always the same, they have been collected in macros to avoid unnecessary typing. Each macro has an mpi prefix before the normal compiler name: mpiifort for the Intel Fortran compiler, mpicc for the gnu C compiler, and so on (see the table below). For instance, if a serial C program is compiled by

gcc -O3 -c test.c

the corresponding parallel (MPI) program is compiled (using gnu compiler) by

mpicc -O3 -c test_mpi.c

In the linking stage, the usage of mpi* macros also includes the proper specification of the MPI libraries. For example, the above MPI program should be linked with something like this:

mpicc -o test_mpi.exe test_mpi.o

Compiling and linking may also be combined by omitting the -c option and including the naming option (-o) in the compilation line.
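
For instance, the compile and link steps for the hypothetical test_mpi.c above can be combined into a single command:

mpicc -O3 -o test_mpi.exe test_mpi.c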

Here are the corresponding MPI macros for the 6 commonly used compilers on our systems:

  Language   Intel              gnu
  Fortran    mpiifort           mpif77, mpif90, mpifort
  C          mpiicc             mpicc
  C++        mpiicc, mpiicpc    mpicxx
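
For reference, a minimal MPI program along the lines of the hypothetical test_mpi.c used above might look like this (a sketch, not site-specific code):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* number of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down MPI cleanly */
    return 0;
}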

Running MPI programs

To run MPI programs, a special Runtime Environment is required. This includes commands for the control of multi-process jobs.

mpirun is used to start a multi-process run of a program and is required to run MPI programs. The most commonly used command-line option is -np, which specifies the number of processes to be started. For instance, the following line will start the program test_mpi.exe with 9 processes:

mpirun -np 9 test_mpi.exe

The mpirun command offers additional options that are sometimes useful or required. Most tend to interfere with the scheduling of jobs in a multi-user environment such as ours and should be used with caution. Please consult the man pages for details.

Note that the use of a scheduler is mandatory for production jobs on our systems, so mpirun is normally invoked from within a submission script. For details about Gridengine and job submission on our machines and clusters, go here.
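
As a sketch, a Gridengine submission script for the program above might look like the following. The parallel environment name ("openmpi") is an assumption for illustration; consult our scheduler documentation for the correct value:

#!/bin/bash
# Run from the submission directory and request 9 slots in the
# (assumed) "openmpi" parallel environment; the PE name is site-specific.
#$ -S /bin/bash
#$ -cwd
#$ -pe openmpi 9
mpirun -np $NSLOTS ./test_mpi.exe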

More Information

As already pointed out, this page is not an introduction to MPI programming. The standard reference text on MPI is:

Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra:
MPI - The Complete Reference (2nd edition), The MIT Press, Cambridge, Massachusetts, 2000;
2 volumes, ISBN 0-262-69215-5 and 0-262-69213-3

This text specifies all MPI routines and concepts, and includes a large number of examples. Most people will find it sufficient for all their needs.

A good online tutorial for MPI programming can be found at the Maui HPCC site.

There is also an official MPI webpage which contains the standards documents for MPI and gives access to the MPI Forum.

We conduct workshops on a regular basis, some of them devoted to MPI programming. They are announced on our web site. We might see you there sometime soon.

Some Tools

Standard debugging and profiling tools such as Sun Studio are designed for serial or multi-threaded programs. They do not handle multi-process runs very well.

Quite often, the best way to check the performance of an MPI program is to time it by inserting suitable routines. MPI supplies a "wall-clock" routine called MPI_WTIME() that lets you determine how much actual time was spent in a specific segment of your code. Another method is calling the subroutines ETIME and DTIME, which can give you information about the actual CPU time used. However, it is advisable to read the documentation carefully before using them with MPI programs; in this case, refer to the Sun Studio 12: Fortran Library Reference.
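
As an illustration, MPI_WTIME() can bracket a section of code as follows (a sketch in C; the Fortran routine works analogously):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double t0 = MPI_Wtime();   /* wall-clock time before the section */
    /* ... code section to be timed goes here ... */
    double t1 = MPI_Wtime();   /* wall-clock time after the section */

    printf("elapsed wall-clock time: %f seconds\n", t1 - t0);

    MPI_Finalize();
    return 0;
}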

We also provide a package called the HPCVL Working Template (HWT), created by Gang Liu. The HWT provides three main functionalities:

  • Maintenance of multiple versions of the same code from a single source file. This is very useful if your MPI code is based on a serial code that you want to convert.
  • Automatic Relative Debugging, which allows you to use pre-existing code (for example, the serial version of your program) as a reference when checking the correctness of your MPI code.
  • Simple Timing, which is needed to determine bottlenecks for parallelization, to optimize code, and to check its scaling properties.

The HWT is based on libraries and script files. It is easy to use and portable (written largely in Fortran). Fortran, C, C++, and any mixture thereof are supported, as well as MPI and OpenMP for parallelism. Documentation of the HWT is available. The package is installed on our clusters in /opt/hwt.

Help

Send email to cac.help@queensu.ca. We have scientific programmers on staff who will probably be able to help you out. Of course, we can't do the coding for you, but we do our best to get your code ready for parallel machines and clusters.