HowTo:Migrate

Migrating from Sparc/Solaris to x86/Linux

This is an introduction to setting up your account on our systems. When first logging in, you are presented with a default set-up that enables the use of basic system commands, simple compilers, and access to the scheduler. This help file is meant to explain how to modify that default.

Access

The login node for the Linux nodes is swlogin1. It may be accessed in two different ways:

  • From the default login node sflogin0 (which still runs on Solaris) by secure shell: ssh -X swlogin1. Re-entering your password will be required.
  • Directly from the Secure Global Desktop through the xterm (sxwlogin1) application.

For people used to working on sflogin0, this implies an additional "node hop" to swlogin1.
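
For example, once you are logged in on sflogin0, the hop to the Linux login node is a single command (the -X option forwards X11 graphics):

ssh -X swlogin1

You will be asked for your password again, and your session then continues on swlogin1.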

Shell Setup

There are several set-up files in your home directory:

  • .bashrc is "sourced in" every time a bash shell is invoked.
  • .bash_profile applies only to login shells, i.e. when you access the system from outside.

Most of the setup is automatic through usepackage. On login, you have a default setup that is appropriate for a Linux system. Additional packages can be set up by adding commands such as

use anaconda3

to the above setup files, if you want to use the Python 3 distribution "Anaconda" (as an example). Note that this works the same way as it did on Solaris, but the available packages may differ. For a list, use the

use -l

command.
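
As a sketch, the corresponding lines in .bash_profile might look like the following (anaconda3 and openmpi are simply the packages used as examples in this file; check use -l for what is actually available):

use anaconda3    # Python 3 "Anaconda" distribution
use openmpi      # OpenMPI for the gnu compilers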

Compiling Code

The standard Fortran/C/C++ compilers differ between the Solaris and the Linux systems. The ones on the x86/Linux platform are discussed here; the table below compares them with their Solaris counterparts. Since there are two compilers (gnu and Intel) on the Linux platform, they are listed separately; the default is gnu. We also list the MPI-related commands for setup, compilation, and runtime.

Fortran/C/C++ Compilers: Sparc/Solaris to x86/Linux

                             Sparc/Solaris           x86/Linux (gnu)          x86/Linux (Intel)
Name/Version                 Studio 12.4             Gnu gcc 4.4.7            Intel 12.1
Setup command                none (default)          none (default)           use icsmpi
MPI setup                    none (default)          use openmpi              use icsmpi
Fortran / C / C++ compilers  f90 / cc / CC           gfortran / gcc / g++     ifort / icc / icpc
MPI compiler wrappers        mpif90 / mpicc / mpiCC  mpif90 / mpicc / mpicxx  mpiifort / mpiicc / mpiicpc
MPI runtime environment      mpirun                  mpirun                   mpirun
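
For example, compiling a serial Fortran program and an MPI C program might look as follows (the source and executable names are placeholders):

gfortran -O2 -o myprog myprog.f90      # gnu, serial (default setup)
use openmpi
mpicc -O2 -o mympi mympi.c             # gnu, MPI

use icsmpi
ifort -O2 -o myprog myprog.f90         # Intel, serial
mpiicc -O2 -o mympi mympi.c            # Intel, MPI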

MPI

On both Solaris and Linux systems, the MPI distribution used is OpenMPI. On the Solaris platform this was integrated with the standard Studio compilers. On the Linux platform, two versions are in use:

  • A stand-alone version of OpenMPI 1.8 is used in combination with the gcc compilers and is set up through the use openmpi command.
  • A second version (Intel 4.0 update 3) is used with the Intel compilers and is set up together with them ("use icsmpi").

All of these versions use the mpirun command to invoke the runtime environment. Check with which mpirun to see which version you are currently using.
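
A typical build-and-run sequence with the gnu tool chain might look like this (the executable name and process count are arbitrary examples):

use openmpi
mpicc -o mympi mympi.c
which mpirun               # confirm which OpenMPI installation is in your PATH
mpirun -np 8 ./mympi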

Running pre-installed software

A lot of software is pre-installed on our clusters. Some of this software requires specific license agreements; other programs are freely accessible. With the use command, most of them can be set up with a single line such as "use fluent" in your shell's start-up file. If the software you want to run is not included in our usepackage list, please contact us, and we can include it. If you are using very specific software that is not accessed by other users, you might have to do the setup manually.

Here are a few steps to follow in that case.

  • Check out the documentation for the specific program, including users' manuals and home pages.
  • Inform yourself about licensing. Some software requires each individual user to hold a license, some is covered by a collective license agreement, and some does not require a license at all. For example, the finite-element structural code "Abaqus" is only accessible to users who work at an institution that is covered by a local license, whereas the license agreement for the electronic-structure code "Gaussian" covers all our HPCVL users. Finally, codes such as "Gamess" (another quantum-chemistry program) are free for all users to run, although the distributor encourages registration.
  • Set the proper environment variables. This can usually be done in your shell setup files, since you'll be running the same code on most occasions you log on. These variables might include the PATH, but also variables specific to the program in question; in most cases the program documentation will tell you which ones to set. Remember that this is only necessary if no entry exists in the "usepackage" configuration file, which can be checked by running "use -l". A sketch of such a manual setup follows this list.
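
As an illustration only, a manual setup in .bash_profile for a hypothetical program installed under /path/to/myprog might look like this (the directory and the variable MYPROG_HOME are invented for this example; the program's documentation determines what is actually required):

export MYPROG_HOME=/path/to/myprog
export PATH=$MYPROG_HOME/bin:$PATH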

How do I run parallel code?

That depends on how the code is "parallelized":

  • If it was "multi-threaded" by the compiler (automatic or via compiler directives), it is usually enough to set the environment variable PARALLEL or OMP_NUM_THREADS to the number of threads that should be used.
  • If it is MPI code, a special parallel runtime environment has to be used. The command for this is mpirun, whose command-line options let you specify how many and which processors to use (see the example after this list). This command is part of the Cluster Tools parallel runtime environment, which includes a number of commands that let you modify the conditions under which your program runs. The settings for these are included in our default setup.
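
On the command line, the two cases look roughly like this (myprog, mympi, and the thread/process counts are placeholders):

export OMP_NUM_THREADS=4    # multi-threaded (e.g. OpenMP) code
./myprog

mpirun -np 8 ./mympi        # MPI code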

You can learn more about parallel code by having a look at our Parallel Programming FAQ. We also have a bit more specific information about parallel programming tools, namely OpenMP compiler directives and the Message Passing Interface (MPI).

Help

If you have questions that you can't resolve by checking the documentation, send email to cac.help@queensu.ca.