How to run Jobs: The Grid Engine
This is an introduction to the Sun Grid Engine (SGE) scheduling software that is used to submit batch jobs to our production clusters. Note that the use of this software is mandatory. Please familiarize yourself with Grid Engine by reading this file, and refer to the documentation listed in it for details.
Note that the usage of SGE on the production systems of the Centre for Advanced Computing will be phased out in the course of 2016. We will replace this scheduler with a newer one, in all likelihood "SLURM".
What is Grid Engine?
Sun Grid Engine (SGE) is a Load Management System that allocates resources such as processors (CPUs), memory, disk space, and computing time. Like other schedulers, Grid Engine enables transparent load sharing, controls the sharing of resources, and implements utilization and site policies. Its features include batch queuing and load balancing, and it allows users to suspend/resume their jobs and check their status.
Grid Engine can be used through the command line or through a Graphical User Interface (GUI) called "qmon", both with the same set of commands.
The version of Grid Engine on our systems is Sun Grid Engine 6.1 (update 4).
Using Grid Engine
Jobs are submitted to Grid Engine through the qsub command.
If the job is simple and consists of only a few commands, the submission can be done via the command line. If the job requires the set-up of many options and requests, the job is written in the form of a script.
Here is a sample script that must be modified to fit your use case:
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -V
#$ -cwd
#$ -M my.email@some.address.com
#$ -m be
#$ -o STD.out
#$ -e STD.err
./program < input
</pre>
Such a script is then submitted through the qsub command:
qsub test.sh
And, if the submission of the job is successful, you will see this message:
your job 12345 ("test.sh") has been submitted.
After that, you can monitor the status of your job with the command qstat or the GUI qmon.
Now, let's take a look at the structure of the Grid Engine batch job script.
We first recall that a batch job is a UNIX shell script consisting of a sequence of UNIX command-line instructions (or interpreted scripts such as perl) assembled in a file. In Grid Engine, it is a batch script that contains, in addition to normal UNIX commands, special comment lines marked by the leading prefix "#$".
The first two lines usually specify the shell
<pre>
#! /bin/bash
#$ -S /bin/bash
</pre>
We force Grid Engine to use a bash shell interpreter (csh is the default).
To tell SGE to run the job from the current working directory, add this script line:
#$ -cwd
If you want to pass some environment variable VAR (or a list of variables separated by commas), use the "-v" option like this:
#$ -v VAR
The "-V" option passes all variables listed in env:
#$ -V
Insert the names of the files to which you want to redirect the standard output and standard error, respectively:
<pre>
#$ -o STD.out
#$ -e STD.err
</pre>
The -M option is for email notification. It is best to use hpcXXXX@localhost (hpcXXXX stands for your actual user name) and place a file named ".forward" that contains your real email address into your home directory. This way, your email address remains private and invisible to other users. With the -m option you let the system know when you want to be notified (beginning and end).
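For example, here is a minimal sketch of setting up such a ".forward" file; the email address is a placeholder that you would replace with your own:

<pre>
# Put your real address into ~/.forward so that mail sent to
# hpcXXXX@localhost is forwarded to you (the address below is a placeholder)
echo "my.real.name@example.com" > ~/.forward
</pre>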
Note that qsub usually expects shell scripts, not executable files.
Note that you can also add options from the command line, for instance
$ qsub -cwd -v VAR=value -o /home/tmp -e /home/tmp test.sh
The default Linux (SW) cluster queue is called abaqus.q. Note that jobs to the SW (Linux) cluster are best submitted from the swlogin1 Linux login node, not from sflogin0 (Solaris). This is because scripts often assume that settings are inherited ("#$ -V" line) and so the settings have to be appropriate for Linux.
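If you want to name the queue explicitly (usually unnecessary, since abaqus.q is the default), qsub accepts the -q option; for example:

<pre>
# Explicitly request the abaqus.q queue for this submission
qsub -q abaqus.q test.sh
</pre>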
Array Jobs
An array of jobs is a job consisting of a range of independent near-identical tasks. Rather than making a separate submission script for each of these tasks, it is preferable to make only one script with all the information that is identical among the tasks, and then use a "counter" to vary the parts that differ.
In an array job, there is usually a line like this:
#$ -t 2-1000:2
which instructs Grid Engine to dynamically (and internally) create copies of the current job script that differ from each other in a counter variable SGE_TASK_ID which gets counted up from 2 to 1000 in steps of 2. This variable can be used to distinguish between the tasks. For instance, if we want to run the same program "runme.exe" with various different input and output files, we may have a line
runme.exe < input$SGE_TASK_ID > output$SGE_TASK_ID
in our script. Note that it is also possible to use SGE_TASK_ID in a script that does not explicitly contain the #$ -t line. You can then just submit your job (let's call it "array.sh") with the corresponding option -t, like this
qsub -t 2-1000:2 array.sh
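Putting these pieces together, here is a minimal sketch of what such an "array.sh" might look like (the program name "runme.exe" and the input/output file naming are just the example used above):

<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -t 2-1000:2
# Each task processes its own input/output pair, selected by SGE_TASK_ID
./runme.exe < input$SGE_TASK_ID > output$SGE_TASK_ID
</pre>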
Check page 71 of the manual for more details.
Monitoring Jobs
After submitting your job to Grid Engine you may track its status by using either the qstat command, the GUI interface qmon, or by email.
With qstat
The qstat command provides the status of all jobs and queues in the cluster. The most useful options are:
- qstat: Displays a list of all jobs of the current user, with no queue status information.
- qstat -u hpc1234: Displays a list of all jobs belonging to user hpc1234.
- qstat -u "*": Displays a list of all jobs belonging to all users (note the double-quotes around the asterisk).
- qstat -f: Gives full information about jobs and queues.
- qstat -j 1234567: Gives details about the pending or running job 1234567.
You can refer to the man pages for a complete description of all the options of the qstat command.
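These options can also be combined; for instance, to get the full queue listing including every user's jobs:

<pre>
# Full information about all queues and all users' jobs
qstat -f -u "*"
</pre>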
By electronic mail
Another way to monitor your jobs is to have Grid Engine notify you by email about the status of the job.
In your batch script or on the command line, use the -m option to request that an email should be sent and the -M option to specify the email address where it should be sent. This will look like:
<pre>
#$ -M email@address.com
#$ -m be
</pre>
Here, the -m option selects the events after which you want to receive your email. In particular, you can choose to be notified at the beginning/end of the job (see the sample script lines above), or when the job is aborted/suspended. The -M option specifies the email address at which you want to be notified.
From the command line, you can use these options as well, for example:
qsub -M email@address.com job.sh
With qmon
You can also use the GUI qmon, which gives a convenient window dialog specifically designed for monitoring and controlling jobs, and the buttons are self explanatory.
Deleting Jobs
You can delete a job that is running or spooled in the queue by using the qdel command like this:
qdel 1234567
which removes job number 1234567. Note that if your job is no longer waiting in the queue but is already executing, you might need to add the -f (force) option to the qdel job_id command to terminate the job.
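For example, to force-terminate a job that is already executing (the job number is a placeholder):

<pre>
# Force deletion of a running job
qdel -f 1234567
</pre>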
Requesting Memory
Sometimes your job requires additional resources to run, for instance you may need a minimum amount of memory. This is particularly relevant when you are running jobs on the SW (Linux) cluster, using abaqus.q. Since this cluster consists of nodes with different available physical memory (see this table for a list), it is important to be aware of whether the node you are running your job on has enough memory to execute properly. To this end, Grid Engine provides a simple resource specification of "free memory":
#$ -l mf=35G
In this example, the program requires up to 35 GB of physical memory. SGE spot-checks before scheduling whether this amount is available on a node and avoids nodes with less. Note that this is checked only at the time of scheduling; it won't provide a safeguard against "running out" later. However, it makes it less likely that the job ends up "swapping", i.e. using disk to store data. Swapping usually slows down execution by a huge factor, often leading to unacceptable execution times.
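As an illustration, here is a sketch of how the memory request fits into a submission script together with the options discussed earlier (the program name is a placeholder):

<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -o STD.out
#$ -e STD.err
# Only consider nodes that currently report at least 35 GB of free memory
#$ -l mf=35G
./program < input
</pre>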
Parallel Jobs
A Parallel Environment is a programming environment designed for parallel computing in a network of computers, which allows execution of shared-memory and distributed-memory parallel applications. The most commonly used parallel environments are the Message Passing Interface (MPI) for distributed-memory machines, and OpenMP for shared-memory machines.
Grid Engine provides an interface to handle parallel jobs running on top of these parallel environments. For the users' convenience, we have predefined parallel environment interfaces. These are:
- shm.pe: This environment is intended for shared-memory applications. Grid Engine will assign the processors in a single node to take advantage of the fastest connection available between the slots. It is permissible to use shm.pe for distributed-memory (e.g. MPI) jobs, if the intention is to keep them within a single node. Note that this might speed up communication, but can also lead to longer waiting periods.
- dist.pe: This environment is intended for distributed-memory applications using MPI. Grid Engine will assign dist.pe jobs to the production queue and try to use the fastest connection available between the slots and nodes. Although the system will try to allocate processes on as few nodes as possible, it is allowed to spread them out over the cluster, since this parallel environment is meant to handle distributed-memory jobs. Note that currently, dist.pe is functionally equivalent to shm.pe, i.e. no inter-node scheduling takes place.
Multi-threaded Jobs
You need to specify the parallel environment to use, which is shm.pe in our case, and how many processors are going to be used. This is done via the script line:
#$ -pe shm.pe 16
if you want to use 16 processors. This sets an environment variable NSLOTS and requests the corresponding number of processes.
In the case of OpenMP-based multi-threaded programs, you need to set the variable OMP_NUM_THREADS to the number of processors to be used. Add the following line to your script file:
export OMP_NUM_THREADS=$NSLOTS
Here is a multi-threading sample script that has to be modified to fit your case:
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -V
#$ -cwd
#$ -M email@address.com
#$ -m be
#$ -o STD.out
#$ -e STD.err
#$ -pe shm.pe 16
export OMP_NUM_THREADS=$NSLOTS
./omp_program < input
</pre>
We're assuming 16 threads in this example; you'll have to change that if you're using a different number. Note that this number should not exceed the number of cores available on the node you are planning to run on, because otherwise the job won't get scheduled.
Distributed (MPI) Jobs
A specific parallel environment needs to be specified, to let the system know which environment and how many processors are going to be used. This is done via the script line:
#$ -pe dist.pe 16
where the number of processors is 16 in this case.
In the standard mpirun command you specify the number of processes through the -np option; in the script below we pass $NSLOTS, the value Grid Engine derives from the -pe directive, so that the Cluster Tools runtime starts exactly the number of processes that Grid Engine has allocated.
Here is an MPI sample script that has to be modified to fit your case:
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -V
#$ -cwd
#$ -M email@address.com
#$ -m be
#$ -o STD.out
#$ -e STD.err
#$ -pe dist.pe 16
mpirun -np $NSLOTS ./mpi_program < input
</pre>
16 should be replaced by the number of MPI processes you actually want to use. Since by default we operate our cluster nodes "in box", i.e. no inter-node communication is used, the number of processes should not exceed the number of cores of the node the job is running on. Otherwise the job will not be scheduled.
To run this job you simply type
qsub mpi_job.sh
Note: Presently the dist.pe and shm.pe parallel environments are configured the same way. This means that an MPI job will only be scheduled on a single node. This is done for reasons of efficiency.
Other Commands
Sun Grid Engine allows the user to submit/delete jobs, check job status, and obtain information about available queues and environments. For most users, knowledge of the following basic commands should be sufficient:
- qconf: Shows (via its -s options) the configurations and access permissions; regular users can only display them, not modify them.
- qdel: Gives users the ability to delete their own jobs only.
- qhost: Displays status information about Sun Grid Engine execution hosts.
- qmod: Modifies the status of your jobs (e.g. suspend/resume; see the sketch below).
- qmon: Provides the X-windows GUI command interface.
- qstat: Provides a status listing of all jobs and queues associated with the cluster.
- qsub: Is the user interface for submitting a job to Grid Engine.
All these commands come with many options and switches and are also available with the GUI qmon. They all have detailed man pages (e.g. "man qsub"), and are documented in the Sun Grid Engine 6 User's Guide.
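As a small example of qmod, the following sketch suspends one of your own jobs and resumes it later (the job number is a placeholder):

<pre>
# Suspend job 1234567 ...
qmod -sj 1234567
# ... and resume (unsuspend) it again
qmod -usj 1234567
</pre>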
Environment Setup
When you first log in you will already have the proper setup for using Gridengine. This is because Gridengine is included in the default settings for usepackage. If for some reason Gridengine is not part of your environment setup, you can add it by issuing the
use sge6
command. Part of the setup that is done automatically by usepackage is to source a setup-script that is located in the directory
/opt/n1ge6/default/common/
You can also "source" those scripts manually:
source /opt/n1ge6/default/common/settings.sh
The setup script modifies your search PATH and sets other environment variables that are required to get Grid Engine running. One of those variables is SGE_ROOT which contains the directory in which the Grid Engine-related programs are located.
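To verify that the setup worked, you can simply check that this variable is defined:

<pre>
# Prints the Grid Engine installation directory (SGE_ROOT) if the setup worked
echo $SGE_ROOT
</pre>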
Help and documentation
Grid Engine has many more options and possibilities for every kind of job. Here, we have given only the basic steps to get started with Grid Engine. Detailed documentation is available. First, there is the User's Guide which should answer almost all of your questions.
For specific commands, the man pages are very comprehensive and should be consulted. For instance "man qstat" explains the meaning of the qstat command options.
The Centre for Advanced Computing offers user support; for questions about this help file and the usage of Grid Engine on our machines, contact us.