
The "M9K's" were our large SMP systems and served as the main compute cluster. However, they have reached their "end of life" and are being de-commissioned. Their queues will be discaled in mid-September 2016, and they will be turned off on September 30, 2016. These servers ran on the Solaris platform which will be discontinued along with them. Future replacements of these servers will be running on a standard Linux platform (CentOS).

The Enterprise M9000 Servers

Type of server

[Image: Enterprise M9000 Servers]

Our cluster consists of eight shared-memory machines, high-end SPARC Enterprise M9000 Servers which Sun Microsystems built in partnership with Fujitsu. Access is handled exclusively by Grid Engine, including test jobs that are specific to these servers. The server nodes are called m9k0001...m9k0008.

Each of these servers consists of 64 quad-core 2.52 GHz Sparc64 VII processors, for a total of 256 compute cores per server. Each core is capable of Chip Multi Threading with 2 hardware threads, which means that each server can work simultaneously on up to 512 threads; in total, the eight servers can process more than 4000 threads. As each core carries two floating-point units that can handle additions and multiplications in a "fused" manner (FMA), the cluster has a theoretical peak performance (TPP) of up to 20 TFlops.

Chip Multi Threading (CMT) is a technology that allows multiple threads (processes) to simultaneously share a single computing resource, such as a core. This increases the efficiency of usage of the core. At the same time, multiple cores share chip resources, thereby improving their utilization.

Each of these servers has a total of 2 TB of memory (8 GB per core). These machines are suitable for very-high-memory applications.

For more information on the Sparc64 VII Architecture, please check out this website.

Main Purpose

The main emphasis in these high-end Shared-Memory servers is to deliver the maximum possible floating-point performance while not compromising on memory requirements. The large memory of these servers makes them ideally suited for large-scale computations. Large L2 caches keep memory latencies low, while chip multithreading technology increases core utilization.

Who Should Use these Machines

If you are just starting to run applications on our systems, we advise against using the M9000 servers as your platform. This is because the servers have reached the end of their life and will be decommissioned during 2016. Their capacity will be replaced by high-memory systems of the x86/Linux type.

Using the M9000 servers

Access

The servers are accessed through ssh to the login node sflogin0 at IP address 130.15.59.64. They can also be accessed from the Secure Portal (dtterm (sfnode0) or xterm (sfnode0)), which brings you to the same (Solaris) login node.
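
As a minimal sketch, a login from a Linux or Mac terminal might look like this (replace "yourUsername" with your own CAC account name; add -X if you need X11 forwarding for graphical programs):

    ssh yourUsername@130.15.59.64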

The file systems for all of our clusters are shared, so you will be using the same home directory. The login node can be used for compilation, program development, and testing only, not for production jobs.

Compiling code

Since the architecture of the Sparc64 VII chips of the M9000 Servers differs in some important details from that of the login node, it is a good idea to re-compile your code whenever possible. This is simple in most cases:

  • The default compilers on Solaris are Studio 12. Other versions may be accessed through usepackage, for instance
    use studio12u3
    will switch to the (newer) update 3 compilers.
  • Many optimization options in the Studio compilers, such as -fast, imply settings that involve -native, i.e. they optimize for the architecture and chipset of the machine on which you are doing the compilation. You might want to change these settings, as they imply optimization for the login node, which may be somewhat sub-optimal for the M9000 servers. The compilation should include additional options to overwrite the existing ones.
  • Explicitly architecture-dependent optimization options include
    -xtarget=sparc64vii -xcache=64/64/2:6144/256/12 -xarch=sparcima
    These are best added to the right of pre-existing compiler options such as -fast because this way they overwrite previous settings. An environment variable M9KFLAGS is set to these flags in the default setup, so that instead of the above settings, you can just type $M9KFLAGS (see the compilation example after this list).
  • To include "fused multiplication/addition" (FMA) in the compilation you need to specify
    -xarch=sparcfmaf -fma=fused
    after the other options (note that -xarch needs to be overwritten). An environment variable FMAFLAGS is set by default and may be used instead of these settings.
  • For applications that cannot be re-compiled (for instance, because the source code is not accessible), compilations for any post-USIII UltraSparc chip will work, usually pretty well.
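
Putting these options together, a compilation on the login node might look like the following sketch; "mycode.c" and the executable name "mycode" are placeholders for your own files:

    use studio12u3
    cc -fast $M9KFLAGS $FMAFLAGS -o mycode mycode.c

Note the order: -fast comes first, and the architecture and FMA flags follow so that they overwrite the settings implied by -fast.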

Running jobs

As mentioned earlier, user and application software may be run on the login node for test purposes only. Production runs must be submitted to Grid Engine. For a description of how to use Grid Engine, see the GridEngine Help File.

Grid Engine will schedule jobs to a default pool of machines unless otherwise stated. This default pool presently contains only our M9000 nodes m9k0002-7. Therefore, no special script lines are needed for your jobs to be scheduled to these servers exclusively.
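
As an illustration only, a minimal submission script for a threaded job might look like the sketch below; the job name, script name, executable, and parallel environment name are placeholders, so please consult the GridEngine Help File for the exact settings used on our systems:

    #!/bin/bash
    #$ -S /bin/bash      # run the job under bash
    #$ -cwd              # start the job in the current working directory
    #$ -N m9k_test       # job name (placeholder)
    #$ -pe shm.pe 8      # request 8 threads; the PE name here is a placeholder
    ./mycode             # placeholder executable, e.g. the one compiled above

The script would then be submitted with "qsub" and its status checked with "qstat".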

Note that your jobs will run on dedicated threads, i.e. up to 512 processes can be scheduled to a single server. Grid Engine does the scheduling; there is no way for the user to determine which processes run on which cores.

Further Help

We supply user support (please contact us at cac.help@queensu.ca), so if you experience problems, we can assist you.