HowTo:fortran
Fortran (Programming Language)
FORTRAN, C, and C++ have a long history as the principal compiled languages for high-performance computing. The key parallel computing packages, MPI and OpenMP, have been implemented in all of them from the beginning. While C and C++ have been extended for general-purpose programming, FORTRAN originated from FORmula TRANslation and developed with an emphasis on scientific computing. After the FORTRAN I-IV, 66, and 77 stages, the Fortran 90, 95, 2003, 2008, and 2015 standards adopted many advanced features, turning Fortran into a true modern (object-oriented) programming language especially geared toward scientific computation. The following lists some of the most useful and prominent programming features of Fortran.
Well Structured

Fortran is very well structured. All routines have a clear beginning statement and a corresponding ending one. For example (since Fortran is case-insensitive, code is usually written in either all lower or all upper case):

  PROGRAM MY_VERY_USEFUL_CODE
     ...
     CALL PROBLEM_SOLVING (...)
     ...
     STOP
  END PROGRAM MY_VERY_USEFUL_CODE

  SUBROUTINE PROBLEM_SOLVING (...)
     ...
     RESULT = AVERAGE_SCORE (...)
     RETURN
  END SUBROUTINE PROBLEM_SOLVING

  FUNCTION AVERAGE_SCORE (...)
     ...
     RETURN
  END FUNCTION AVERAGE_SCORE

The DO loop and the IF construct are also finished with an END statement:

  DO I = ISTART, IEND
     ...
  END DO

  IF (CONDITION) THEN
     ...
  ELSE
     ...
  END IF
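To make the skeleton concrete, here is a small, self-contained version that compiles and runs as written; the averaging computation and the example data are assumptions added purely for illustration:

  PROGRAM MY_VERY_USEFUL_CODE
     IMPLICIT NONE
     REAL :: SCORES(3), RESULT
     SCORES = (/ 70.0, 80.0, 90.0 /)       ! example data, assumed for illustration
     CALL PROBLEM_SOLVING (SCORES, RESULT)
     PRINT *, 'AVERAGE = ', RESULT
     STOP
  END PROGRAM MY_VERY_USEFUL_CODE

  SUBROUTINE PROBLEM_SOLVING (SCORES, RESULT)
     IMPLICIT NONE
     REAL, INTENT(IN)  :: SCORES(3)
     REAL, INTENT(OUT) :: RESULT
     REAL, EXTERNAL    :: AVERAGE_SCORE    ! an external function must be declared with its type
     RESULT = AVERAGE_SCORE (SCORES)
     RETURN
  END SUBROUTINE PROBLEM_SOLVING

  FUNCTION AVERAGE_SCORE (SCORES)
     IMPLICIT NONE
     REAL, INTENT(IN) :: SCORES(3)
     REAL :: AVERAGE_SCORE
     AVERAGE_SCORE = SUM(SCORES) / 3.0     ! SUM is a Fortran intrinsic
     RETURN
  END FUNCTION AVERAGE_SCORE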
Modules

Similar to classes in C++, modules are very important and widely used in Fortran. Strictly speaking, modules are not classes, but they usually contain many data objects, since in most scientific computations the data structures are known in advance. Modules can also contain the specific routines that operate on those objects, similar to the encapsulation concept of classes. Modules are also a good way to share such objects between routines, so that argument lists can be reduced to the essentials.

Overloading

As a modern language, Fortran also supports routine overloading:

  MODULE MY_KINETICS
     INTERFACE GENERIC_KINETIC
        SUBROUTINE KINETIC_ROUTINE_A(...)
           ...
        END SUBROUTINE KINETIC_ROUTINE_A
        SUBROUTINE KINETIC_ROUTINE_B(...)
           ...
        END SUBROUTINE KINETIC_ROUTINE_B
        SUBROUTINE KINETIC_ROUTINE_C(...)
           ...
        END SUBROUTINE KINETIC_ROUTINE_C
        ...
     END INTERFACE GENERIC_KINETIC
  END MODULE MY_KINETICS
After this module is referenced with

  USE MY_KINETICS

each of the specific routines becomes available, and the call

  CALL GENERIC_KINETIC(...)

invokes the specific routine whose unique interface matches the actual arguments. In C++, overloading is a type of class polymorphism.
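The fragment above only sketches the idea. Below is a minimal, self-contained sketch of the same mechanism that compiles as written; the module name, routine names, and the kinetic-energy formulas are assumptions made purely for illustration, and the specific routines are distinguished by scalar versus array arguments:

  MODULE MY_KINETICS
     IMPLICIT NONE
     REAL, PARAMETER :: MASS = 2.0     ! shared data object, visible to all users of the module
     INTERFACE GENERIC_KINETIC
        MODULE PROCEDURE KINETIC_SCALAR, KINETIC_ARRAY
     END INTERFACE GENERIC_KINETIC
  CONTAINS
     SUBROUTINE KINETIC_SCALAR (V, E)
        REAL, INTENT(IN)  :: V
        REAL, INTENT(OUT) :: E
        E = 0.5 * MASS * V**2          ! kinetic energy for a single speed
     END SUBROUTINE KINETIC_SCALAR
     SUBROUTINE KINETIC_ARRAY (V, E)
        REAL, INTENT(IN)  :: V(:)
        REAL, INTENT(OUT) :: E
        E = 0.5 * MASS * SUM(V**2)     ! summed kinetic energy for an array of speeds
     END SUBROUTINE KINETIC_ARRAY
  END MODULE MY_KINETICS

  PROGRAM DEMO
     USE MY_KINETICS
     IMPLICIT NONE
     REAL :: E
     CALL GENERIC_KINETIC (3.0, E)             ! resolves to KINETIC_SCALAR
     PRINT *, 'SCALAR CASE: ', E
     CALL GENERIC_KINETIC ((/ 1.0, 2.0 /), E)  ! resolves to KINETIC_ARRAY
     PRINT *, 'ARRAY CASE:  ', E
  END PROGRAM DEMO

With the specific procedures collected behind one generic name, callers never need to know which routine handles their particular data layout.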
Implementations

While MPI itself is a portable, platform-independent standard, much like a programming language, an actual implementation is necessarily platform dependent, since it has to take into account the architecture of the machine or cluster in question. The most commonly used implementation of MPI for the Linux platform is called OpenMPI. The following considerations are focused on this implementation. Our machines are small to mid-sized shared-memory machines that form a cluster. Since the interconnect between the individual nodes is a bottleneck for efficient program execution, most of the MPI programs running on our machines are executed within a node. This allows processes to communicate rapidly through a so-called "shared-memory layer". Our cluster is configured to preferentially schedule processes within a single node. Currently, two versions of the OpenMPI parallel environment are in common use:
We do not recommend having both versions set up simultaneously.

Compiling MPI code

The compilation of MPI programs requires a few compiler options to direct the compiler to the location of header files and libraries. Since these switches are always the same, they have been collected in a macro to avoid unnecessary typing. The macro has an mpi prefix prepended to the normal compiler name; for instance, the commands are mpiifort for the Intel Fortran compiler and mpicc for the GNU C compiler. For example, if a serial C program is compiled by

  gcc -O3 -c test.c

the corresponding parallel (MPI) program is compiled (using the GNU compiler) by

  mpicc -O3 -c test_mpi.c

In the linking stage, the usage of the mpi* macros also includes the proper specification of the MPI libraries. For example, the above MPI program should be linked with something like this:

  mpicc -o test_mpi.exe test_mpi.o

Compiling and linking may also be combined by omitting the -c option and including the naming option (-o) in the compilation line. Here are the corresponding MPI macros for the 6 commonly used compilers on our systems:
Running MPI programs

To run MPI programs, a special runtime environment is required. This includes commands for the control of multi-process jobs. mpirun is used to start a multi-process run of a program and is required to run MPI programs. The most commonly used command-line option is -np, which specifies the number of processes to be started. For instance, the following line will start the program test_mpi.exe with 9 processes:

  mpirun -np 9 test_mpi.exe

The mpirun command offers additional options that are sometimes useful or required. Most tend to interfere with the scheduling of jobs in a multi-user environment such as ours and should be used with caution. Please consult the man pages for details. Note that the usage of a scheduler is mandatory for production jobs on our system, so this option is used frequently. For details about Gridengine and job submission on our machines and clusters, go here.
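To make the compile-and-run cycle above concrete, here is a minimal MPI program in Fortran that could play the role of test_mpi; the file name and the printed message are assumptions made for illustration only:

  PROGRAM TEST_MPI
     USE MPI
     IMPLICIT NONE
     INTEGER :: IERR, RANK, NPROCS
     CALL MPI_INIT (IERR)                              ! start the MPI runtime
     CALL MPI_COMM_RANK (MPI_COMM_WORLD, RANK, IERR)   ! this process's number
     CALL MPI_COMM_SIZE (MPI_COMM_WORLD, NPROCS, IERR) ! total number of processes
     PRINT *, 'HELLO FROM PROCESS', RANK, 'OF', NPROCS
     CALL MPI_FINALIZE (IERR)                          ! shut the MPI runtime down
  END PROGRAM TEST_MPI

Saved as test_mpi.f90, it could be compiled with, for example, mpiifort -O3 -o test_mpi.exe test_mpi.f90 and started with mpirun -np 9 test_mpi.exe, producing one greeting line per process.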
More Information

As already pointed out, this FAQ is not an introduction to MPI programming. The standard reference text on MPI is Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra: MPI - The Complete Reference. This text specifies all MPI routines and concepts and includes a large number of examples. Most people will find it sufficient for all their needs. A quite good online tutorial for MPI programming can be found at the Maui HPCC site. There is also an official MPI webpage, which contains the standards documents for MPI and gives access to the MPI Forum. We conduct workshops on a regular basis, some devoted to MPI programming. They are announced on our web site. We might see you there sometime soon.

Some Tools

Standard debugging and profiling tools such as Sun Studio are designed for serial or multi-threaded programs. They do not handle multi-process runs very well. Quite often, the best way to check the performance of an MPI program is to time it by inserting suitable routines. MPI supplies a "wall-clock" routine called MPI_WTIME() that lets you determine how much actual time was spent in a specific segment of your code. Another method is calling the subroutines ETIME and DTIME, which give information about the actual CPU time used. However, it is advisable to read the documentation carefully before using them with MPI programs; in this case, refer to the Sun Studio 12: Fortran Library Reference. We also provide a package called the HPCVL Working Template (HWT), which was created by Gang Liu. The HWT provides 3 main functionalities:
The HWT is based on libraries and script files. It is easy to use and portable (written largely in Fortran). Fortran, C, C++, and any mixture thereof are supported, as well as MPI and OpenMP for parallelism. Documentation of the HWT is available. The package is installed on our clusters in /opt/hwt.

Help

Send email to cac.help@queensu.ca. We have scientific programmers on staff who will probably be able to help you out. Of course, we can't do the coding for you, but we will do our best to get your code ready for parallel machines and clusters.