
Fortran (Programming Language)

FORTRAN, C, and C++ have a long history as the main compiled languages for high performance computing. The key parallel programming standards, MPI and OpenMP, have supported all three of them from the beginning. While C and C++ have grown into general-purpose languages, FORTRAN, whose name derives from FORmula TRANslation, has developed with an emphasis on scientific computing. After the FORTRAN I-IV, 66, and 77 stages, the Fortran 90, 95, 2003, 2008, and 2015 versions adopted many advanced features and turned it into a true modern (object oriented) programming language, especially geared toward scientific computation. The following sections list some of the most useful and prominent programming features of FORTRAN.

Well Structured

FORTRAN is very well structured. Every routine has a clear beginning statement and a corresponding ending one. For example (since Fortran is case-insensitive, code is usually written in either all lower case or all upper case):

PROGRAM MY_VERY_USEFUL_CODE
    ...
    CALL PROBLEM_SOLVING (...)
    ...
    STOP
END PROGRAM MY_VERY_USEFUL_CODE

SUBROUTINE PROBLEM_SOLVING (...)
    ...
    RESULT = AVERAGE_SCORE (...)
    RETURN
END SUBROUTINE PROBLEM_SOLVING

FUNCTION  AVERAGE_SCORE (...)
    ...
    RETURN
END FUNCTION AVERAGE_SCORE

The DO loop and the IF construct are likewise finished with an END statement.

DO I = ISTART, IEND
    ...
END DO

IF (CONDITION) THEN
    ...
ELSE
    ...
END IF
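
Constructs can also be given names, which makes the matching END statements explicit in nested code. A minimal sketch (the names NAMED_CONSTRUCTS, SUM_POSITIVES, and POSITIVE_ONLY are just illustrative):

PROGRAM NAMED_CONSTRUCTS
    INTEGER :: I
    REAL    :: A(5) = (/ 1.0, -2.0, 3.0, -4.0, 5.0 /)
    REAL    :: S = 0.0
    SUM_POSITIVES: DO I = 1, 5
        POSITIVE_ONLY: IF (A(I) > 0.0) THEN
            S = S + A(I)                     ! add only the positive entries
        END IF POSITIVE_ONLY
    END DO SUM_POSITIVES
    WRITE(*,*) 'SUM OF POSITIVE ELEMENTS:', S
END PROGRAM NAMED_CONSTRUCTS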

Modules

Similar in role to classes in C++, modules are very important and widely used in FORTRAN. A module is a separate code structure that may contain definitions and declarations, and it can in turn use other, previously defined modules. Strictly speaking, modules are not classes, but they are usually used to provide data structures (objects) for sharing, since in most scientific computations the objects are known beforehand and the task is to manipulate them. Modules can also contain routines that access the data inside them and are accessible only where the module is used, similar to the encapsulation concept of classes. Using modules, code can be written very concisely. Here is an example and its usage:

MODULE MY_PARAMETERS
    DOUBLE PRECISION, PARAMETER :: THE_EARTH_RADIUS = 6371.0D0
END MODULE  MY_PARAMETERS

SUBROUTINE EARTH_STORY (...)
    USE MY_PARAMETERS
    DOUBLE PRECISION:: THE_EARTH_DIAMETER 
    ...
    THE_EARTH_DIAMETER = 2 * THE_EARTH_RADIUS 
    ...
    RETURN
END SUBROUTINE EARTH_STORY 
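
Modules can also hold the routines themselves, listed after a CONTAINS statement, so that anything that uses the module gets both the data and the procedures that operate on it. A minimal sketch (the module and routine names are just illustrative):

MODULE EARTH_GEOMETRY
    DOUBLE PRECISION, PARAMETER :: THE_EARTH_RADIUS = 6371.0D0
CONTAINS
    DOUBLE PRECISION FUNCTION SURFACE_DISTANCE(ANGLE_IN_RADIANS)
        DOUBLE PRECISION, INTENT(IN) :: ANGLE_IN_RADIANS
        ! arc length along a great circle for the given central angle
        SURFACE_DISTANCE = THE_EARTH_RADIUS * ANGLE_IN_RADIANS
    END FUNCTION SURFACE_DISTANCE
END MODULE EARTH_GEOMETRY

Any program unit containing USE EARTH_GEOMETRY can then call SURFACE_DISTANCE directly, and the compiler checks the argument types because the complete interface is known from the module.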

Overloading

As a modern language, FORTRAN also supports routine overloading: a group of routines with distinct interfaces, usually providing the same functionality for different argument types, can be called through a single generic name, and the compiler picks the correct specific routine.

MODULE MY_KINETICS
     INTERFACE  GENERIC_KINETIC
           SUBROUTINE KINETIC_ROUTINE_A(...)
                   ...
           END SUBROUTINE KINETIC_ROUTINE_A

           SUBROUTINE KINETIC_ROUTINE_B(...)
                   ...
           END SUBROUTINE KINETIC_ROUTINE_B

           SUBROUTINE KINETIC_ROUTINE_C(...)
                   ...
           END SUBROUTINE KINETIC_ROUTINE_C
                   ...
     END INTERFACE GENERIC_KINETIC
END MODULE  MY_KINETICS

Once this module is referenced with

USE MY_KINETICS

each of the specific routines becomes available, and the call

CALL GENERIC_KINETIC(...)

will invoke the specific routine whose interface matches the actual arguments. The closest C++ counterpart is function overloading.
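
When the specific routines are contained in the module itself, the generic name is usually declared with a MODULE PROCEDURE list instead of repeating the interfaces. A sketch under that assumption, with illustrative names and a scalar and an array version of the same calculation:

MODULE MY_KINETICS_CONTAINED
    IMPLICIT NONE
    INTERFACE GENERIC_KINETIC
        MODULE PROCEDURE KINETIC_SCALAR, KINETIC_ARRAY
    END INTERFACE GENERIC_KINETIC
CONTAINS
    SUBROUTINE KINETIC_SCALAR(MASS, SPEED, ENERGY)
        DOUBLE PRECISION, INTENT(IN)  :: MASS, SPEED
        DOUBLE PRECISION, INTENT(OUT) :: ENERGY
        ENERGY = 0.5D0 * MASS * SPEED**2
    END SUBROUTINE KINETIC_SCALAR
    SUBROUTINE KINETIC_ARRAY(MASS, SPEED, ENERGY)
        DOUBLE PRECISION, INTENT(IN)  :: MASS(:), SPEED(:)
        DOUBLE PRECISION, INTENT(OUT) :: ENERGY(:)
        ENERGY = 0.5D0 * MASS * SPEED**2
    END SUBROUTINE KINETIC_ARRAY
END MODULE MY_KINETICS_CONTAINED

CALL GENERIC_KINETIC(M, V, E) then resolves to the scalar or the array version, depending on the actual arguments.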

High Precision

Most FORTRAN compilers provide built-in data types of very high precision, such as quadruple precision:

REAL*16    ::  VELOCITY(3,1000)
COMPLEX*32 ::  HAMILTONIAN(1000, 1000)
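
REAL*16 and COMPLEX*32 are widely supported compiler extensions rather than standard syntax; since Fortran 90 the portable way is to request the precision through a kind parameter. A sketch, assuming the compiler actually provides a quadruple-precision kind:

INTEGER, PARAMETER :: QP = SELECTED_REAL_KIND(P=30)   ! at least 30 significant digits
REAL(KIND=QP)      :: VELOCITY(3,1000)
COMPLEX(KIND=QP)   :: HAMILTONIAN(1000,1000)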


Collective Operations

FORTRAN supports operations on a whole array, or on a section of it, in a single statement:

REAL*16 ::  V1(3,100), V2(3,100), V3(3,100)
...
V1 = 0.0Q0
V1(2:3, 20:50) = 0.9Q0
V2 = 0.8Q0 * V3 

These statements assign values to all of the referenced elements without any explicit loops; an array name on its own refers to all of its elements.
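
Intrinsic functions such as SUM and MAXVAL likewise act on whole arrays or sections, and the WHERE construct performs masked assignments. A brief sketch reusing the arrays declared above:

REAL*16 :: TOTAL, LARGEST
TOTAL   = SUM(V1)                 ! sum over all elements of V1
LARGEST = MAXVAL(V2(1, :))        ! largest element of the first row of V2
WHERE (V3 > 0.5Q0)                ! masked assignment, no explicit loop
    V3 = 1.0Q0
ELSEWHERE
    V3 = 0.0Q0
END WHERE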

Dynamic Memory Allocation

Early versions of FORTRAN had a big drawback: they did not allow dynamic memory allocation, forcing a re-compilation whenever array sizes changed. Newer versions of FORTRAN (since Fortran 90) support dynamic allocation, even for multi-dimensional arrays.

REAL*16, ALLOCATABLE :: COMPLICATED_DATA(:, :, :, :, :, :) 
INTEGER              :: I1=3, I2=90, I3=80, I4, I5, I6=28
I4 = 24; I5 = 500
ALLOCATE(COMPLICATED_DATA(I1, I2, I3, I4, I5, I6)) 

in contrast to C/C++, where dynamically allocated memory is a one-dimensional block and multi-dimensional indexing has to be managed by the programmer.
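
The allocation can be checked for success and the memory released when it is no longer needed; a minimal sketch continuing the example above (IERR is an extra integer introduced here for the status code):

INTEGER :: IERR
ALLOCATE(COMPLICATED_DATA(I1, I2, I3, I4, I5, I6), STAT=IERR)
IF (IERR /= 0) STOP 'ALLOCATION FAILED'
! ... work with the array ...
IF (ALLOCATED(COMPLICATED_DATA)) DEALLOCATE(COMPLICATED_DATA)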

User Defined Data Types

FORTRAN also supports user-defined data types:

TYPE PERSON
     CHARACTER(LEN=10) ::  NAME
     REAL              ::  AGE
     INTEGER           ::  ID
END TYPE PERSON
TYPE(PERSON) :: YOU, ME
REAL :: DIFF
YOU%ID = 12345
DIFF = YOU%AGE - ME%AGE
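
Derived-type variables can be filled with a structure constructor and collected into arrays; a short sketch continuing the example above (the values are, of course, made up):

TYPE(PERSON) :: CLASS_LIST(30)
ME  = PERSON('ALICE     ', 34.0, 1)   ! structure constructor sets all components
YOU = ME
YOU%ID = 12345
CLASS_LIST(1) = YOU                   ! arrays of a derived type work as usual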

Some Other Features

  • FORTRAN also supports recursive routine calls and optional routine arguments (see the sketch below).
  • OpenMP and OpenACC compilers can often analyze and parallelize FORTRAN code more easily than C or C++ code.
  • Compilers check FORTRAN code strictly against the language grammar and point out any problems they find.
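
A minimal sketch of the first point, with illustrative names: a recursive factorial function, and a routine whose optional argument is tested with the PRESENT intrinsic (the optional argument requires an explicit interface, e.g. by placing the routine in a module):

RECURSIVE FUNCTION FACTORIAL(N) RESULT(F)
    INTEGER, INTENT(IN) :: N
    INTEGER             :: F
    IF (N <= 1) THEN
        F = 1
    ELSE
        F = N * FACTORIAL(N - 1)     ! the function calls itself
    END IF
END FUNCTION FACTORIAL

SUBROUTINE REPORT_SCORE(SCORE, LABEL)
    REAL, INTENT(IN)                       :: SCORE
    CHARACTER(LEN=*), INTENT(IN), OPTIONAL :: LABEL
    IF (PRESENT(LABEL)) THEN             ! the caller may omit LABEL entirely
        WRITE(*,*) LABEL, SCORE
    ELSE
        WRITE(*,*) 'SCORE:', SCORE
    END IF
END SUBROUTINE REPORT_SCORE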


Links and Further Reading

  • Fortran Standard Technical Committee: http://www.j3-fortran.org/
  • Fortran Wikipedia entry, with information about the history, features, and variants of Fortran: https://en.wikipedia.org/wiki/Fortran
  • List of Fortran compilers: https://en.wikipedia.org/wiki/List_of_compilers#Fortran_compilers. We operate the GNU (https://gcc.gnu.org/) and Intel (https://software.intel.com/en-us/intel-compilers) compilers on our systems; see our compiler help file (HowTo:Compilers).
  • "Fortran 90/95 Explained" by Michael Metcalf and John Reid (https://www.amazon.ca/Fortran-90-Explained-Michael-Metcalf/dp/0198505582), a good introduction focussing on the Fortran 90 version that introduced many of the "modern" features.

Help

Send email to cac.help@queensu.ca. We have scientific programmers on staff who will probably be able to help you out. Of course, we can't do the coding for you, but we do our best to get your code ready for parallel machines and clusters.