== '''Fee Structure @ Frontenac''' ==
  
Frontenac serves as our main compute cluster and is operated through the SLURM scheduler. Until March 31, 2019, allocations from the 2018 Resource Allocation Competition of Compute Canada are running on this cluster. The cluster is not among the allocatable systems for the 2019 Compute Canada allocation round ("RAC2019"). '''Therefore, the operation of Frontenac will be on a cost-recovery basis from April 1, 2019'''. This page provides details about the fee structure.

=== Price List ===

The following table lists the basic charges for compute and storage usage on the Frontenac cluster. These are meant as a reference to help users decide whether to continue using the Frontenac cluster or to seek alternatives.

{| class="wikitable"
|-
|'''Type'''
|'''Unit Price'''
|-
| Compute (CPU usage, High-Priority or Metered)
| $225/cyr
|-
| Compute (GPU usage)
| TBA
|-
| Compute (CPU usage, special arrangements)
| Contact us
|-
| Storage (Project)
| $250/tyr
|-
| Storage (Nearline)
| $45/tyr
|-
| Storage (special arrangements)
| Contact us
|}

The prices quoted are for 2019 and are subject to an automatic 2% increase every calendar year. They do not include HST.
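
The yearly increase compounds. As a rough illustration of how the quoted rates evolve (projections only, not official quotes), here is a minimal Python sketch that applies the 2% increase to the 2019 base prices from the table above:

<pre>
# Illustration only: project unit prices forward from the 2019 base rates,
# assuming the published 2% increase compounds every calendar year.
BASE_PRICES = {"cpu_per_cyr": 225.00, "project_per_tyr": 250.00, "nearline_per_tyr": 45.00}

def price_in_year(base_price: float, year: int) -> float:
    """Unit price in a given year, compounding 2% per calendar year from 2019."""
    return base_price * 1.02 ** (year - 2019)

for year in (2019, 2020, 2021):
    cpu = price_in_year(BASE_PRICES["cpu_per_cyr"], year)
    print(f"{year}: ${cpu:.2f}/cyr (before HST)")
# 2019: $225.00/cyr, 2020: $229.50/cyr, 2021: $234.09/cyr
</pre>
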
=== Compute and Storage ===

The new fee structure for the Frontenac compute cluster applies both to the usage of CPUs (and GPUs) and to storage on disk or tape. Fees are charged per annum, but can be pro-rated to a shorter duration without penalty (a worked example follows the table below). The standard units are:

{| class="wikitable"
|-
|'''Type'''
|'''Unit'''
|'''Explanation'''
|-
| CPU usage
| core-year (cyr)
|
* One core for the duration of one year.
* The unit is not bound to a specific CPU but scheduled on any of the systems on the Frontenac cluster.
* Associated memory and other specifics of the CPU vary. The quoted price is based on a 4 GB/core ratio.
* We do not charge for memory separately, but apply a standard memory-equivalent (4 GB/core) when memory usage exceeds CPU usage.
|-
| GPU usage
| gpu-year (gyr)
|
* One GPU for the duration of one year.
* The unit is not bound to a specific system but scheduled to GPU nodes on the Frontenac cluster.
* Associated memory and other specifics may vary.
* One "driver" CPU is included.
|-
| Storage
| terabyte-year (tyr)
|
* One terabyte of storage for the duration of one year.
* Storage needs to be sized ahead of usage, and includes all project areas (home, scratch, project).
* Different rates apply for disk (project) storage and tape storage with HSM access (nearline).
* A small amount of "home" space for use with CPU access is free.
|}
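
Since fees are pro-rated linearly, a purchase held for part of a year costs the corresponding fraction of the annual rate. A minimal sketch of the arithmetic (a hypothetical helper, not an official calculator), assuming pro-rating by month at the 2019 rates:

<pre>
# Hypothetical pro-rating helper, using the 2019 rates quoted above.
# Assumes linear pro-rating by month ("pro-rated to a shorter duration
# without penalty").
RATE_PER_CYR = 225.00   # $/core-year (CPU)
RATE_PER_TYR = 250.00   # $/terabyte-year (project storage)

def prorated_cost(units: float, annual_rate: float, months: int) -> float:
    """Cost of `units` resource units held for `months` out of twelve."""
    return units * annual_rate * months / 12.0

# Example: 8 cores plus 2 TB of project storage, held for 6 months.
compute = prorated_cost(8, RATE_PER_CYR, 6)   # 8 * 225 * 0.5 = 900.00
storage = prorated_cost(2, RATE_PER_TYR, 6)   # 2 * 250 * 0.5 = 250.00
print(f"Total: ${compute + storage:.2f} before HST")  # Total: $1150.00 before HST
</pre>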

=== "High-Priority" and "Metered" Compute Access ===

There are two standard types of access to the Frontenac cluster: "High-Priority" access, which provides scheduled access that will in most cases be rapid for smaller jobs, and "Metered" access, which uses a standard priority that may entail longer waiting times but is charged only according to actual usage. In addition, we offer special arrangements. Here is a more detailed explanation:

{| class="wikitable"
|-
|'''Type'''
|'''Explanation'''
|-
| High-Priority
|
* High-Priority access entitles the user to a priority on the scheduler that is proportional to the number of core-years purchased.
* Long-term continuous usage yields approximately the purchased number of core-years.
* Overall usage is capped at the number of core-years purchased.
* Unused portions of the purchase are non-refundable.
* The amount of available resources on the cluster is scaled to cover at least the number of core-years purchased.
|-
| Metered
|
* Resources are accessed at a standard priority that in most cases allows access when needed.
* Access may entail longer waiting times.
* At the end of each year, the total number of core-years consumed is billed to the user (see the sketch below).
* The amount of available resources on the cluster may be scaled to cover demands from "metered" users.
|-
| Special arrangements
|
* The CAC is open to special arrangements for short-term or long-term projects.
* Such arrangements may include dedicated servers for a set duration, contributed systems, and others.
|}
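
For metered access, the billable quantity is the core-time actually consumed over the year. SLURM accounting records consumed core-seconds (for example, the CPUTimeRAW field reported by the sacct utility); converting that figure into billed core-years is simple arithmetic. How usage is tallied in practice is up to the CAC; the following Python sketch only illustrates the unit conversion at the 2019 rate:

<pre>
# Hypothetical conversion from consumed core-seconds (e.g. summed from
# SLURM accounting, such as sacct's CPUTimeRAW field) to a metered bill.
# Assumes the 2019 rate of $225 per core-year.
SECONDS_PER_CORE_YEAR = 365 * 24 * 3600   # 31,536,000 core-seconds
RATE_PER_CYR = 225.00

def metered_bill(core_seconds: int) -> float:
    """Bill for metered usage: core-years consumed times the annual rate."""
    core_years = core_seconds / SECONDS_PER_CORE_YEAR
    return core_years * RATE_PER_CYR

# Example: a group that consumed 63,072,000 core-seconds (2.0 core-years).
print(f"${metered_bill(63_072_000):.2f}")   # $450.00
</pre>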

=== Project and Nearline Storage ===

There are two standard types of storage on the Frontenac file system, both part of the "Hierarchical Storage Management" (HSM) system. "Project" storage refers to storage immediately accessible on disk through the GPFS file system. "Nearline" storage refers to data that reside on tape but are accessible through disk when needed, albeit with a delay. Here is a more detailed explanation:

{| class="wikitable"
|-
|'''Type'''
|'''Explanation'''
|-
| Project
|
* Used for frequently accessed, "active" data.
* Data reside on disk.
* Standard areas are: /global/home, /global/project.
* Access is immediate (at the speed of the GPFS system).
* Home and project are backed up; scratch is not.
* The /project space is shared among members of a group; /home and /scratch are individual.
|-
| Nearline
|
* Used for infrequently accessed, "passive" data.
* Data reside on tape, with "stubs" on disk.
* Standard areas are: /global/home (individual), /global/project (shared).
* Access requires (automatic) retrieval to disk and entails delays depending on data size.
* The backup policy is the same as for project data.
|-
| Intermediate data
|
* Data reside on global or local disk.
* Subject to frequent purges.
* Standard areas are: /global/scratch, /lscratch, /tmp.
* Used for data transactions; free of charge (for registered users).
|}
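
Because nearline (tape) storage is charged at a much lower rate than project (disk) storage, moving passive data to nearline can reduce the annual bill considerably. A minimal sketch comparing the two options at the 2019 rates (an illustration, not a quote):

<pre>
# Hypothetical comparison of project (disk) vs. nearline (tape) storage
# costs, using the 2019 rates quoted above.
PROJECT_PER_TYR = 250.00    # $/terabyte-year, disk (project)
NEARLINE_PER_TYR = 45.00    # $/terabyte-year, tape with HSM access (nearline)

def annual_storage_cost(project_tb: float, nearline_tb: float) -> float:
    """Yearly storage cost for a given split between disk and tape."""
    return project_tb * PROJECT_PER_TYR + nearline_tb * NEARLINE_PER_TYR

# Example: 10 TB total, either all on disk or with 8 TB of passive data on tape.
print(f"All project:        ${annual_storage_cost(10, 0):.2f}")   # $2500.00
print(f"2 TB disk + 8 tape: ${annual_storage_cost(2, 8):.2f}")    # $860.00
</pre>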
