The Frontenac cluster is CAC's newest compute cluster. It features a new set of hardware, a new network configuration, a new scheduler, a new software module system, a new OS, and a new set of compilers and related software. This page is intended to give an overview of its capabilities and provide a migration guide for new users. Please note that user accounts and data are '''not''' shared between Frontenac and the SW cluster, although you may request that your data be copied over.

The Frontenac cluster is expected to grow rapidly as nodes are migrated from the SW cluster. Currently, the cluster consists entirely of 24-core (Intel Xeon CPU E5-2650 v4 @ 2.20GHz) nodes with 256GB of RAM each.
= Full documentation =
'''A full migration guide can be found here: [[Migration:Frontenac|Frontenac cluster migration guide]]'''
* [[Access:Frontenac|Logging on to the system]]
* [[Software:Frontenac|List of installed software and how to use it]]
* [[Filesystems:Frontenac|Storage and filesystems]]
* [[SLURM|Submitting jobs using SLURM]]
* [[SLURM_Accounting|SLURM accounting and special job submission]]
= Quickstart =
For those who just want to log on and get started with the new system, the bare essentials are shown below.
== Logging on ==
Login to the Frontenac cluster is via SSH only. You will need an SSH client like Terminal on Linux/macOS or [http://mobaxterm.mobatek.net/ MobaXterm] on Windows. To log on to the cluster, execute the following command in your SSH client of choice:

<pre>ssh -X yourUserName@login.cac.queensu.ca</pre>
The first time you log on, you will be prompted to accept this server's RSA key (<code>d0:9f:e9:e2:b0:fe:6b:56:bb:74:46:c5:fb:89:a4:41</code>). Type "yes" to proceed, then enter your password normally. Note that no characters will appear while you type your password.
== Filesystems ==
The Frontenac cluster uses a shared GPFS filesystem for all file storage. User files are located under <code>/global/home</code>, shared project space under <code>/global/project</code>, and network scratch space under <code>/global/scratch</code>. In addition to network storage, each compute node has a 1.5TB local hard disk that jobs can use as fast local scratch space via the location specified by the <code>$TMPDISK</code> environment variable.
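As a sketch of how this can be used, the hypothetical job script below stages an input file onto the node-local disk, works there, and copies the results back before the job ends. The per-user scratch subdirectory, program name, and file names are illustrative, not site-defined conventions:

<pre>
#!/bin/bash
#SBATCH -t 0-01:00:00    # one hour; adjust to your workload

# Stage input from network scratch to the fast node-local disk
cp /global/scratch/yourUserName/input.dat $TMPDISK/

# Run on the local disk (my_program is a placeholder for your application)
cd $TMPDISK
$HOME/my_program input.dat > output.dat

# Copy results back to network storage before the job finishes
cp output.dat /global/scratch/yourUserName/
</pre>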
== Submitting jobs ==
Frontenac uses the SLURM scheduler instead of Sun Grid Engine. The <code>sbatch</code> command is used to submit jobs, <code>squeue</code> can be used to check the status of jobs, and <code>scancel</code> can be used to kill a job. For users looking to get started with SLURM as fast as possible, a minimalist template job script is shown below:
<pre>
#!/bin/bash
#SBATCH -c num_cpus                    # Number of CPUs requested. If omitted, the default is 1 CPU.
#SBATCH --mem=megabytes                # Memory requested in megabytes. If omitted, the default is 1024 MB.
#SBATCH -t days-hours:minutes:seconds  # How long will your job run for? If omitted, the default is 3 hours.

# some demo commands to use as a test
echo 'starting test job...'
sleep 120
echo 'our job worked!'
</pre>
Assuming our job script is called <code>test-job.sh</code>, we can submit it with <code>sbatch test-job.sh</code>. Detailed documentation can be found on our [[SLURM|SLURM documentation page]]. Finally, note that you can submit an interactive job with <code>srun --x11 --pty bash</code>, which starts a personal bash shell on a node with resources available.
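Putting these commands together, a typical shell session might look like the following; the job ID <code>1234</code> is illustrative:

<pre>
$ sbatch test-job.sh      # submit the job script
Submitted batch job 1234
$ squeue -u $USER         # list your queued and running jobs
$ scancel 1234            # cancel the job if needed
</pre>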
== Migration guide ==
Please see our [[Migration:Frontenac|Frontenac cluster migration guide]] for a full overview of the migration process.