== '''Fee Structure @ Frontenac''' ==
  
Frontenac serves as our main compute cluster and is operated through the SLURM scheduler. Until March 31, 2019, allocations from Compute Canada's 2018 Resource Allocation Competition ("RAC 2018") ran on this cluster. The cluster was not among the allocatable systems for the 2019 Compute Canada allocation round ("RAC 2019"). '''Therefore, Frontenac has been operated on a cost-recovery basis since April 1, 2019.''' This page provides details about the fee structure.
  
=== Price List ===
 
The following lists the basic charges for compute and storage usage on the Frontenac cluster. They are meant as a reference to help users decide whether to continue using the Frontenac cluster or to seek alternatives.
  
 
{| class="wikitable" | '''Difference between "old" SW (Linux) and "new" CAC (Frontenac) clusters'''
 
{| class="wikitable" | '''Difference between "old" SW (Linux) and "new" CAC (Frontenac) clusters'''
 +
|-
 +
|'''Type'''
 +
|'''Unit Price'''
 +
|-
 +
| Compute (CPU usage)
 +
| $225 / core year
 +
|-
 +
| Compute (CPU usage, special arrangements)
 +
| Contact us
 +
|-
 +
| Storage (Project)
 +
| $250 / Terabyte-year
 +
|-
 +
| Storage (Nearline)
 +
| $45 / Terabyte-year
 +
|-
 +
| Storage (special arrangements)
 +
| Contact us
 +
|-
 +
|}
The prices quoted are for 2019 and are subject to change. They do not include HST.

'''Until September 2019, Ontario users may keep data on our systems free of charge with the understanding that the data are handled as "nearline".''' This means that data may be moved to tape if they are not accessed for some time (more than a month). From 2020, the above charges apply to all data.

We provide 500 GB of /home space free of charge for users of the compute cluster. This applies only to users who have an agreement.

=== Compute and Storage ===

The fee structure for the Frontenac compute cluster applies both to the usage of CPUs (and GPUs) and to storage on disk or tape. Fees are charged per annum, but can be pro-rated to a shorter duration without penalty. The standard units are:

{| class="wikitable" | ''' Explanation of Units'''
 
|-
 
|-
 
|'''Type'''  
 
|'''Type'''  
Line 15: Line 45:
 
|-
 
|-
 
| CPU usage
 
| CPU usage
| core-year (cyr)
+
| core-year
* One core for the duration of one year
+
|
* The unit is not bound to a specific CPU but scheduled on any of the systems on the Frontenac cluster
+
* One core for the duration of one year.
* Associated memory and other specifics of the CPU varies
+
* The unit is not bound to a specific CPU but scheduled on any of the systems on the Frontenac cluster.
* We are not charging for memory, but will use a memory-equivalent when memory usage exceeds CPU usage
+
* Associated memory and other specifics of the CPU varies. The quoted price is based on a 4GB/core ratio.
 +
* We are not charging for memory, but will use a standard memory-equivalent (4GB/core) when memory usage exceeds CPU usage.
 +
|-
 +
| Storage
 +
| Terabyte-year
 +
|
 +
* One terabyte of storage for the duration of one year.
 +
* Storage needs to be sized ahead of usage, and includes all project areas (home, scratch, project).
 +
* Different rates apply for disk (project) storage and tape storage with HSM access (nearline).
 +
* A small amount of "home" space for usage with CPU is included in the fees.
 +
|-
 +
|}
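As a worked example (using the 2019 prices above): a group that needs 8 cores for 6 months purchases a pro-rated 8 × 0.5 = 4 core-years, i.e. 4 × $225 = $900 plus HST. Likewise, under the memory-equivalent rule, a job that occupies 1 core but 16 GB of memory counts as 16 GB ÷ 4 GB/core = 4 cores while it runs.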
=== Metered Compute Access ===
The standard type of access to the Frontenac cluster is metered, i.e. usage is monitored through the scheduler and capped at the amount of compute time purchased.
{| class="wikitable" | '''Compute Access'''
 +
|-
 +
|'''Type'''
 +
|'''Explanation'''
 +
|-
 +
| Metered Compute Access
 +
|
 +
* Access entitles to user to a priority proportional to the number of core-years purchased.
 +
* Continuous usage results in the purchased number of core-years.
 +
* Overall usage is capped at the number of core-years purchased.
 +
* Unused portions of the purchase can be "rolled-over" to a second year, after which they expire.
 +
* Users will be notified when 80% usage is reached, and given the option to purchase further resources.
 +
* An automatic "top-up" option exists.
 +
|-
 +
| Special arrangements
 +
|
 +
* The CAC is open to special arrangements for short-term or long-term projects.
 +
* Such arrangement may include dedicated servers for a duration, contributed systems, and others.
 +
|}
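Since metering happens in SLURM's accounting layer, usage against a purchase can be tracked from a login node with the standard SLURM accounting tools. The following is a minimal sketch; the account name <code>myaccount</code> is a placeholder, and the exact reporting setup on Frontenac may differ:

<pre>
# Fair-share standing and raw usage for all members of your account
# ("myaccount" is a placeholder for your project's SLURM account)
sshare -a -A myaccount

# Core-time consumed by the account over the contract year,
# reported in hours (1 core-year = 8760 core-hours)
sreport cluster AccountUtilizationByUser Accounts=myaccount \
        Start=2019-04-01 End=2020-03-31 -t Hours
</pre>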
=== Project and Nearline storage ===
There are two standard types of storage on the Frontenac file system, both part of the "Hierarchical Storage Management" system. "Project" storage refers to storage immediately accessible on a disk through the GPFS file system. "Nearline" storage refers to data that reside on tape, but are accessible through disk when needed, albeit with a delay. Here is a more detailed explanation:
{| class="wikitable" | '''Difference between "project" and "nearline" ''' storage
 +
|-
 +
|'''Type'''
 +
|'''Explanation'''
 
|-
 
|-
| April 1, 2019
+
| Project
|  
+
|
* Access changes from free to charged
+
* Used for frequently used, "active" data
* Charged accounts are active with priorities based on charge
+
* Data reside on disk
* Compute Canada free accounts (hpcXXXX) loose access
+
* Standard areas are : /global/home, /global/project
* Data not covered by charged accounts are purged
+
* Access is immediate (at the speed of the GPFS system)
* Some exceptions apply :
+
* Home and project are backed up, scratch is not
** grace period for residual data of RAC users, limited access
+
* The /project space is shared among members of a group, /home and /scratch are individual
** Queen's researchers & other Ontario research groups get additional time to move their data off system
+
 
|-
 
|-
| July 1, 2019
+
| Nearline
|  
+
|
* All data not covered by charged accounts are purged.
+
* Used for infrequently used, "passive" data
* This includes backups (tape).
+
* Data reside on tape, with "stubs" on disk
 +
* standard areas are : /global/home (individual), /global/project (shared)
 +
* access requires (automatic) retrieval to disk and entails delays depending on data size
 +
* backup policy the same as for project data
 +
* '''not''' suitable for IO during program runs or data analysis
 
|-
 
|-
 +
| Intermediate data
 +
|
 +
* Data reside on global or local disk
 +
* Subject to periodic purges
 +
* Standard areas are : /global/scratch, /lscratch, /tmp
 +
* Used for data transactions free of charge (for registered users)
 
|}
 
|}
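Because nearline data are represented on disk only by stubs, one quick way to spot-check whether a file has been migrated to tape is to compare its apparent size with the disk space it actually occupies. This is a generic sketch for HSM-managed file systems (the path is hypothetical); the exact stub behaviour on Frontenac may differ:

<pre>
# A migrated (nearline) file reports its full size but ~0 allocated blocks
ls -ls /global/project/mygroup/archive/results.tar

# Same check with du: near-zero actual usage vs. full apparent size
du -h /global/project/mygroup/archive/results.tar
du -h --apparent-size /global/project/mygroup/archive/results.tar
</pre>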
  
== "Dedicated" and "A la carte" Compute access ==
+
=== Procedure ===
  
To set up an agreement and to arrange for payment, please contact [mailto:cac.admin@queensu.ca cac.admin@queensu.ca]. Our procedure usually involves the following steps:

* You [mailto:cac.admin@queensu.ca contact us] to initiate the process.
* We set up a consultation call to determine what your needs are and how we can meet them. This involves surveying your past usage, explaining details of resource allocation, payment, etc. The goal is to arrive at an appropriate allocation size and price.
* You confirm the size of the allocation you want to purchase.
* We send you a draft version of the contract with the specifics of the allocation.
* You return the signed contract to us (scan/email to cac.admin@queensu.ca).
* We send you an invoice.
* Once we receive payment, we make the necessary technical alterations to your scheduling accounts. If you have used the systems before, you will likely not notice any difference.
* Note that any usage after April 1, 2019 counts toward your allocation. At our discretion, we can provide access to the systems before the contract is in place; usage accrued in the interim is added to the tally.
* The preferred payment method is a journal entry for Queen's users, and a cheque for other users.
