ANSYS Mechanical

Important Note: This software is provided on the basis of a "hosting" model. This means that users who want to run this software on our cluster will have to supply a license that supports the run.

This is a help file on using the Mechanical Engineering structural code "ANSYS Mechanical" on our systems. This software requires a user-supplied license. The software is only made available to persons who belong to a specific Unix group. See details below.

What is ANSYS Mechanical?

ANSYS Mechanical is a mechanical engineering software package that uses finite element analysis (FEA) for structural analysis. It covers a wide range of applications, from geometry preparation to optimization. You can model advanced materials, complex environmental loadings and industry-specific requirements in areas such as offshore hydrodynamics and layered composite materials.

It can be used interactively and provides a graphical user interface. It can also run in batch mode if the time required to solve a problem is too long for interactive use. The latter is the standard way of using it on CAC machines.

Version

The most current version on our systems is Ansys/ext2020R1. To see other versions, use module spider ansys.
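
For example, a quick sketch of listing and loading a version on the login node (the module name below is assumed to match the version given above; use whatever names module spider actually reports):

# list all installed ANSYS versions
module spider ansys
# load a specific version
module load ansys/ext2020R1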

Location and Access

ANSYS Mechanical runs under the Linux operating system.

The program is located in /global/software/ansys.

To use it, you have to be covered by a user-supplied license. It is furthermore required that you sign a statement. We will confirm the statement, and you will then be made a member of a Unix group fluent (so called for "historical" reasons), which enables you to run the software. Contact us if you are in doubt of whether you qualify to run ANSYS on our system or if you are looking for options for access to a license.

Licensing

To use ANSYS, you have to provide us with a license. The software is only accessible to users who are covered by such a license and are members of the "fluent" Posix group (so called for historical reasons). To be included in that group you need to complete a statement that you can download here. Note that prior inclusion in the fluent group is now void, as the licensing terms for the software have changed. If you are in doubt, please contact us at cac.help@queensu.ca.

The license is "seat limited" and "process limited".

The number of seats and processes available to you depends on the license under which you are covered.

Running ANSYS

Setup

The setup for ANSYS on Frontenac is done via module. Type:

module purge --force
module load ansys/ext181
export ANSYSLMD_LICENSE_FILE={port}@{license server address}

on the workup node or include these commands in your setup (.bash_profile) file. Note that this "purges" the present setup, which may make the shell in which this is done unusable for running other software. Setting the environment variable ANSYSLMD_LICENSE_FILE points the system to the correct license server and port. This information is user-specific (substitute your own values for the items in the curly brackets).
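
A minimal check that the setup worked, assuming ansys181 is the solver executable used in the examples below:

# confirm the license server variable is set
echo $ANSYSLMD_LICENSE_FILE
# confirm the solver executable is found on the PATH
which ansys181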

Note

You have to be in the fluent Unix group for this to work, as access permissions prevent general users from accessing ANSYS software.
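
You can verify your group membership yourself on the login node, for example:

# list your Unix groups; no output from grep means you are not yet in the group
groups | grep -w fluent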

Batch runs

ANSYS can (and usually must) be run in batch mode. Since you likely have access to ANSYS on your local machines, most interactive work should be done there, whereas the computationally intensive runs can be executed on a parallel system such as ours. For this, data and commands are written into a text file in the ANSYS Parametric Design Language (APDL), which specifies the model and describes the analysis to be performed. Here is the top of an input file (the full file is too long to be displayed here):

/prep7                          ! enter the preprocessor
ET,1,SOLID185, ,2               ! element type 1: SOLID185
MP,EX,1,70e09                   ! material 1: elastic moduli
MP,EY,1,70e09
MP,EZ,1,60e09
MP,NUXY,1,0.33                  ! Poisson's ratios
MP,NUYZ,1,0.30
MP,NUXZ,1,0.30
MP,GXY,1,26.5e09                ! shear moduli
MP,GYZ,1,22e09
MP,GXZ,1,22e09
*AFUN,DEG                       ! trigonometric functions take degrees
thetax=0.000000                 ! user parameters: load angles and force components
thetay=0.000000
fx= 0
fy= -1000
fz= 0
fxp= fx                         ! force components rotated about the x axis
fyp= fy*(cos(thetax)) - fz*(sin(thetax))
fzp= fy*(sin(thetax)) + fz*(cos(thetax))

[...]

Let's call this file "testsys.txt". The analysis can now be performed by calling ANSYS directly from the command line

ansys181 -b -i testsys.txt

In this case, output is sent to the screen, and output files are given the default name "file.*". No further input from the user is required. Once everything works, you can run this job in the background (using bash) by typing

ansys181 -b -i testsys.txt > test.out 2>&1 & 

This would redirect standard output and standard error to test.out. The point is that ANSYS is run non-interactively this way, i.e. we can use the same technique to submit a production job to the scheduler, as shown in the next section.
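
While such a background test run is going, standard shell tools are enough to keep an eye on it, for instance:

# follow the solver output as it is written
tail -f test.out
# list background jobs started from this shell
jobs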

Production runs

To submit a production job on our clusters, you must use the scheduler. To obtain details, read through the wiki page for SLURM (Frontenac). Production jobs that are run without the scheduler will be terminated by the system administrator.

On Frontenac, the scheduler in use is SLURM. Here is a SLURM example script of an ANSYS production job:

#!/bin/bash
#SBATCH --job-name=ansys-test
#SBATCH --mail-type=ALL
#SBATCH --mail-user={email address}
#SBATCH --output=STD.out
#SBATCH --error=STD.err
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=00:30
#SBATCH --mem=1G
module purge --force
module load ansys/ext181
export ANSYSLMD_LICENSE_FILE={port}@{license server address}
ansys181 -np $SLURM_CPUS_PER_TASK -b -i testsys.txt -o test.out -j test

Here we are running the file "testsys.txt" using 8 processors on a single node. The solver output is written to "test.out" (the -o option) under the job name "test" (the -j option), while any messages from the scheduler and the system are redirected to the files "STD.out" and "STD.err" specified by the --output and --error options.

The --time option is used to specify a time limit. If it is omitted you are assigned a default limit. It is best to specify this limit, and choose it to be slightly longer than the largest expected execution time. This will make the job harder to schedule, but it will ensure that the job is not terminated before it finishes. Note that time limits are "hard", i.e. the job will be stopped when it exceeds its limit. This is necessary to make efficient scheduling possible.

The --mem option is used to specify a memory limit. If it is omitted you are assigned a default limit. It is best to specify this limit, and choose it to be slightly larger than the largest expected memory usage. This will make the job harder to schedule, but it will ensure that the job is not terminated for exceeding its memory allocation. Note that memory limits are "hard", i.e. the job will be stopped if it exceeds its allocated memory. This enables efficient memory allocation.
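
As an illustration, a larger production run might replace the corresponding lines in the script above with limits such as the following, requesting two hours of wall-clock time and 4 GB of memory (the values are examples only; choose them to fit your own job):

#SBATCH --time=02:00:00
#SBATCH --mem=4G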

Parallel jobs of longer runtime should only be run in batch using SLURM. The number of processors "8" specified in our example script appears only once, in

#SBATCH --cpus-per-task=8

which is where you let SLURM know how many processors to allocate to run the program. The internal environment variable SLURM_CPUS_PER_TASK will automatically be set to this value and can then be used in the ansys command line.

All processes are allocated within a single node. This means that the size of the job is restricted by the number of cores on a node. Once the script has been adapted (let's call it "ansys.sh"), it can be submitted to SLURM by

sbatch ansys.sh

from the login node. Note that the job will appear as a parallel job in the output of the "squeue" command.
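
Some useful commands for keeping track of the job after submission (replace JOBID with the number reported by sbatch or squeue):

# show your own jobs in the queue
squeue -u $USER
# cancel a job that is no longer needed
scancel JOBID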

Further Help

ANSYS Mechanical is a complex software package and requires some practice to be used efficiently. We cannot explain its use in any detail here.

The documentation for ANSYS can be accessed from inside the program GUI (WorkBench).

The documentation is subject to the same license terms as the software itself, i.e. you have to be signed up as a user in order to access it.

If you are experiencing trouble running a batch command script, check carefully if the sequence of commands is exactly in sync with the program. This might mean typing them in interactively as a test. If you have problems that you cannot resolve through the documentation, contact user support by sending email to cac.help@queensu.ca.