'''Important Note: Due to the transition to cost-recovery service many of the details of file system organization on the Frontenac GPFS file system are subject to change. Please refer back to these pages occasionally to stay abreast of these changes.'''
  
 

== Overview ==

The Frontenac cluster uses a shared [https://www.ibm.com/support/knowledgecenter/en/SSFKCN/gpfs_welcome.html GPFS filesystem] for all file storage. User files are located under <code>/global/home</code> with a '''500 GB''' quota, shared project space under <code>/global/project</code>, and network scratch space under <code>/global/scratch</code> with a '''5 TB''' quota. In addition to the network storage, compute nodes have up to '''1.5 TB''' of local hard disk for fast access to local scratch space; jobs reach it through the location specified by the <code>$TMPDISK</code> environment variable. All files in the local scratch space are deleted automatically when the corresponding job finishes.

Note that it is the user's responsibility to manage the age of their data: these file systems do not provide archiving. If data are no longer needed, they need to be moved off the system. If you need assistance with this, please contact us. This is especially important as we charge for storage in the terabyte range. At present, ''nearline'' data are free, but ''project'' data (see below) are subject to charges on an annual basis. Details about our cost structure can be found at [[Frontenac:Fees|our Fees Information Page]].
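As an illustration, a minimal SLURM job script that stages data through the node-local scratch space might look like the sketch below. The input/output paths and the program name are placeholders, not actual Frontenac defaults; only <code>$TMPDISK</code> is taken from this page.

<pre>
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# $TMPDISK points at the node-local scratch directory created for this job,
# e.g. /lscratch/slurm-job-<jobid>; it is removed when the job finishes.
cd $TMPDISK

# Stage input data from network storage (placeholder path).
cp /global/home/$USER/input.dat .

# Run the computation against the fast local disk (placeholder program).
my_program input.dat > output.dat

# Copy results back to network storage before the job ends,
# since local scratch is cleaned automatically afterwards.
cp output.dat /global/scratch/$USER/
</pre>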

== Storage Areas ==

Unlike your personal computer, our system has several storage spaces or file systems, and you should ensure that you are using the right space for the right task. In this section we will discuss the principal file systems available and the intended use of each one along with its characteristics. Storage options are distinguished by the available hardware, access mode and write system. Typically, most Compute Canada systems offer the following storage types:

; Global Parallel File System (GPFS)
: This file system is visible on both login and compute nodes. Combining multiple disk arrays and fast servers, it offers excellent performance for large files and large input/output operations. Two types of storage are distinguished on such systems: long-term storage and temporary storage (scratch). Performance is subject to variations caused by other users.
; Local Filesystem
: This is a local hard drive attached to each of the nodes. Its advantage is high performance (because it is rarely shared). Its disadvantage is that local files must be re-copied to a global area to be visible on other nodes such as the login (workup) node. Typically, local disk is regularly "cleaned", i.e. data kept there are considered transitory.
; RAM (memory) Filesystem
: This is a filesystem that exists within a node's RAM, so it reduces the available memory. This makes it very fast but low-capacity. A RAM disk must be cleaned at the end of a job.

The following table summarizes the properties of these storage types.

{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Description of storage type
! Type
! Accessibility
! Throughput
! Latency
! Longevity
|-
| GPFS (<code>/global/home</code>, <code>/global/project</code> ...)
| All nodes
| Fair
| High
| Long term
|-
| GPFS (<code>/global/scratch</code>)
| All nodes
| Fair
| High
| Short term (periodically cleaned)
|-
| Local Filesystem (<code>TMPDIR</code>)
| Local to the node
| Fair
| Medium
| Very short term
|-
| Memory (RAM) FS
| Local to the node
| Good
| Very low
| Very short term, cleaned after every job
|}

'''Throughput''' describes the efficiency of the file system for large operations. It is sometimes also called "bandwidth" in the context of file system I/O.

'''Latency''' describes the efficiency of the file system for small operations. Low latency is good.
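For a rough, informal feel for the difference between throughput and latency, the following sketch can be run in a directory on each file system. File names are placeholders and this is not a proper benchmark.

<pre>
# Throughput: write one large file (1 GiB) and time it; conv=fsync forces the data to disk.
time dd if=/dev/zero of=throughput_test bs=1M count=1024 conv=fsync
rm throughput_test

# Latency: create many small files and time it; this stresses per-operation overhead instead.
mkdir latency_test
time ( for i in $(seq 1 1000); do touch latency_test/file_$i; done )
rm -r latency_test
</pre>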

== Quotas ==

On our cluster, each user has access to the <code>/global/home</code> and <code>/global/scratch</code> spaces by default, and each group has access to project space in <code>/global/project</code>. These areas are subject to the disk quotas summarized in the table below.

{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Area
! Quota
! Backed up?
! Purged?
! Available by default?
! On compute nodes?
|-
| <code>/global/home</code>
| 500 GB
| Yes
| No
| Yes
| Yes
|-
| <code>/global/scratch</code>
| 5 TB
| No
| Yes
| Yes
| Yes
|-
| <code>/global/project</code>
|
| Yes
| No
| Yes
| Yes
|}
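To get a rough idea of how much of these quotas you are using, standard Linux tools are sufficient. The sketch below assumes your scratch files live in a per-user directory under <code>/global/scratch</code>, which may differ on your account.

<pre>
# Total size of your home directory (counts against the 500 GB quota).
du -sh $HOME

# Total size of your scratch data (counts against the 5 TB quota);
# the per-user path is an assumption -- adjust it to where your files actually are.
du -sh /global/scratch/$USER

# Overall usage and free space on the shared file system.
df -h /global/home
</pre>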

== Some Tips ==

* Avoid text format files for large data.
* Use local storage for temporary files. The scheduler provides this (<code>$TMPDIR</code> or <code>$TMPDISK</code>), which is created when your job starts on a compute node, e.g. <code>TMPDISK=/lscratch/slurm-job-6363084</code>.
* Searches should be done in memory rather than on disk.
* Regularly clean up data, especially in scratch.
* Unused files that have to be kept should be archived, compressed, and moved off-system (see the example below).
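As a sketch of the last point, unused data can be bundled into a single compressed archive before being copied off the system. The directory name, host, and destination path are placeholders.

<pre>
# Bundle a finished project directory into one compressed archive (placeholder name).
tar -czf old_project.tar.gz old_project/

# Verify that the archive lists cleanly, then remove the original directory to free quota.
tar -tzf old_project.tar.gz > /dev/null && rm -r old_project/

# Copy the archive to another machine before deleting it here (placeholder host/path).
scp old_project.tar.gz user@my.local.machine:/path/to/backups/
</pre>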