Hardware:SW
{| style="border-spacing: 8px;" | {| style="border-spacing: 8px;" | ||
| valign="top" width="50%" style="padding:1em; border:1px solid #fa5882; background-color:#f6eee3; border-radius:7px" | | | valign="top" width="50%" style="padding:1em; border:1px solid #fa5882; background-color:#f6eee3; border-radius:7px" | | ||
− | '''The SW cluster | + | '''The SW cluster has been decomissioned. Please refer to the [[Hardware:Frontenac|Frontenac Cluster]]''' |
<center> | <center> | ||
|} | |} | ||
{| class="wikitable" style="float:left; margin-right: 25px;"
!colspan="6"| '''SW (Linux) Cluster Nodes ("old" sw series)'''
|-
|'''Host'''
|'''Processor'''
|'''Clock'''
|'''Cores'''
|'''Threads'''
|'''Memory'''
|-
| sw0044
| Xeon E7-4860
| 2.3 GHz
| 40
| 80
| 256 GB
|-
| sw0045
| Xeon E7-4860
| 2.3 GHz
| 40
| 80
| 256 GB
|-
| sw0046
| Xeon E7-4860
| 2.3 GHz
| 40
| 80
| 256 GB
|-
| sw0047
| Xeon E7-4860
| 2.3 GHz
| 40
| 80
| 256 GB
|-
| sw0048
| Xeon E7-4860
| 2.3 GHz
| 40
| 80
| 256 GB
|-
| sw0049
| Xeon E7-4860
| 2.3 GHz
| 40
| 80
| 256 GB
|-
!colspan="6"| [[File:x3950.jpg|thumb|left|alt=Software (SW) Linux Cluster|Software (SW) Linux Cluster]]
|}
{| class="wikitable" style="float:left; margin-right: 25px;"
!colspan="6"| '''SW (Linux) Cluster Nodes ("new" cac series)'''
|-
|colspan="6"| cac019 ... cac099 — 2.2 GHz, 24 cores, 256 GB of memory per node
|}
== The SW (Linux) Cluster ==
The Centre for Advanced Computing operates a cluster of x86-based multicore machines running Linux. This page explains the essential features of this cluster and is meant as a basic guide for its usage.
=== Type of Hardware ===

This cluster consists of x86 multicore nodes made by Lenovo and IBM. All nodes run CentOS Linux and share a file system. Access is handled by Grid Engine. The server nodes are called cac019...cac099.
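Since access is handled by Grid Engine, the current state of these nodes can be inspected with the standard Grid Engine client commands. A minimal sketch, assuming the client tools are in your PATH on the login node:

<pre>
qhost        # list the execution hosts (the cac0xx nodes) with their core counts, load and memory
qstat -g c   # cluster-wide summary of queue slot usage
</pre>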
=== Why these Systems? ===

The main emphasis in these systems is high floating-point performance for a modest number of processes/threads. Since commercial software such as Fluent and Abaqus offers support for Linux only, this cluster was originally acquired to offer recent versions of these software packages. In addition, the higher single-core performance of these nodes allows for efficient use of license seats, which are usually priced per core.

=== Who Should Use This Cluster? ===

The software cluster runs on the Linux operating system and should be used by anyone who wants to run applications that are available on that platform. Runs that require more than 32 GB of memory need to request this explicitly to avoid mis-scheduling. We suggest you use this cluster if:
This cluster may not be suitable if:

* Your application is required to scale to a very large number of processes in a distributed-memory fashion and is communication intensive. Such jobs require a fast interconnect (Infiniband or similar) and should be run on a different system, for instance other Compute Canada installations.
If you think your application could run more efficiently on these machines, please contact us (cac.help@queensu.ca) to discuss any concerns and let us assist you in getting started. Note that we have to enforce dedicated cores or CPUs to avoid sharing and context switching overheads. No "overloading" can be allowed. |
== Using the Cluster ==

=== Access ===

* Indirectly through '''ssh from sflogin0''':
<pre>
ssh hpcXXXX@130.15.59.64
hpcXXXX@130.15.59.64's password: *****
hpcXXXX@sflogin0$ ssh swlogin1
hpcXXXX@swlogin1's password: *****
</pre>

The file systems for all of our clusters are shared, so you will be using the same home directory as when you are using the M9000 servers or the standard login node sfnode0. swlogin1 can be used for compilation, program development, and testing only, not for production jobs.

=== Compiling Code ===

==== Intel Compiler Suite ====

The best compiler to use is the Intel Compiler Suite. This includes compilers for Fortran, C, and C++, as well as MPI and OpenMP support, debuggers, and a development suite. This software resides in /opt/ics. The compiler suite needs to be activated before use; the command is

<pre>
use icsmpi
</pre>

==== Gnu Compilers ====

In many cases, especially for public-domain software, the preferable compiler is gnu C/C++/Fortran. The system version of these is:

<pre>
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
</pre>

No special activation is needed to use these, as they reside in a system directory. A newer version of this compiler set is available in /opt/gcc-4.8.3 and can be accessed using the command

<pre>
use gcc-4.8.3
</pre>

If MPI is required, it can be loaded through

<pre>
use openmpi
</pre>

For applications that cannot be re-compiled (for instance, because the source code is not accessible), a pre-compiled Linux version (x64 for Red Hat will do the trick) needs to be obtained.

=== Running Jobs ===

As mentioned earlier, program runs for user and application software on the login node are allowed only for test purposes or if interactive use is unavoidable. In the latter case, please get in touch to let us know what you need. Production jobs must be submitted through the [[HowTo:Scheduler|Grid Engine load scheduler]].

The name of the SGE queue that schedules to this cluster is '''abaqus.q'''. This does not have to be specified, as it is the default. The abaqus name derives from the software Abaqus, which was initially (and still is) run on this cluster.

Note that your jobs will run on dedicated threads, i.e. typically up to 12 processes can be scheduled to a single node. The Grid Engine does the scheduling, i.e. there is no way for the user to determine which processes run on which cores.
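To illustrate the compiler setup described above, here is a minimal build sketch using the newer GNU toolchain; the source file name mycode.c is a placeholder, and the "use" command is the activation mechanism quoted above:

<pre>
use gcc-4.8.3                         # activate the newer GNU compilers in /opt/gcc-4.8.3
gcc -O2 -fopenmp -o mycode mycode.c   # build an OpenMP program; mycode.c is a placeholder
</pre>

Production runs are submitted with qsub. The following is a minimal submission-script sketch, not a site recipe: the queue name abaqus.q is the default mentioned above, while the parallel-environment name shm.pe and the executable name are placeholders — the parallel environments actually configured can be listed with "qconf -spl".

<pre>
#!/bin/bash
# Submit with:  qsub myjob.sh
#$ -S /bin/bash
#$ -cwd              # run the job from the directory it was submitted from
#$ -N mycode         # job name
#$ -o mycode.out     # stdout file
#$ -e mycode.err     # stderr file
#$ -q abaqus.q       # optional: abaqus.q is the default queue on this cluster
#$ -pe shm.pe 12     # placeholder PE name; requests 12 dedicated slots (up to 12 processes fit on one node)

./mycode
</pre>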
This compiler suite needs to be activated before use. The command is use icsmpi Gnu CompilersIn many cases, especially for public domain software, the preferable compiler is gnu C/C++/Fortran. The system version of these is: Using built-in specs. Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) No special activation is needed to use these, as they reside in a system director. A newer version of this compiler set is available in /opt/gcc-4.8.3 and can be access using the command use gcc-4.8.3 If MPI is required, it can be loaded through use openmpi For applications that cannot be re-compiled (for instance, because the source code is not accessible), a pre-compiled Linux version (x64 for Redhat will do the trick) needs to be obtained. Running JobsAs mentioned earlier, program runs for user and application software on the login node are allowed only for test purposes or if interactive use is unavoidable. In the latter case, please get in touch to let us know what you need. Production jobs must be submitted through the Grid Engine load scheduler. The name for the SGE queue that schedules to this cluster is abaqus.q. This does not have to be specified as it is the default. The abaqus name for the queue derives from the initial software Abaqus that was (and still is) run on this cluster. Note that your jobs will run on dedicated threads, i.e. typically up to 12 processes can be scheduled to a single node. The Grid Engine will do the scheduling, i.e. there is no way for the user to determine which processes run on which cores. Help?General information about using CAC facilities can be found in our FAQ pages. We also supply user support (please send email to cac.help@queensu.ca or contact us directly), so if you experience problems, we can assist you. |