Latest revision as of 20:18, 3 January 2020
== '''Fee Structure @ Frontenac''' ==

Frontenac serves as our main compute cluster and is operated through the SLURM scheduler. Until March 31, 2019, allocations from the 2018 Resource Allocation Competition ("RAC 2018") of Compute Canada ran on this cluster. The cluster was not among the allocatable systems for the 2019 Compute Canada allocation round ("RAC 2019"). '''Therefore, Frontenac has been operating on a cost-recovery basis since April 1, 2019.''' This page provides details about the fee structure.
=== Price List ===

The following table lists the basic charges for compute and storage usage on the Frontenac cluster. They are meant as a reference to help you decide whether to continue using the Frontenac cluster or to seek alternatives.
{| class="wikitable"
|-
| '''Type'''
| '''Unit Price'''
|-
| Compute (CPU usage)
| $225 / core-year
|-
| Compute (CPU usage, special arrangements)
| Contact us
|-
| Storage (Project)
| $250 / Terabyte-year
|-
| Storage (Nearline)
| $45 / Terabyte-year
|-
| Storage (special arrangements)
| Contact us
|}
The prices quoted are for 2019 and are subject to change. They do not include HST.

'''Until September 2019, Ontario users may keep data on our systems free of charge with the understanding that the data are handled as "nearline".''' This means that they may be moved to tape if they are not accessed for some time (more than a month). From 2020, the above charges apply to all data.

We provide 500 GB of /home space free of charge for users of the compute cluster. This only applies to users who have an agreement.
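As a quick illustration of the list prices above, the sketch below estimates a pre-tax annual bill. The function name and structure are ours for illustration only, not part of any CAC tooling:

```python
# Estimate an annual Frontenac bill from the 2019 list prices above.
# Prices exclude HST; "special arrangements" items are quoted separately.
PRICES = {
    "cpu_core_year": 225.0,     # $ per core-year of compute
    "project_tb_year": 250.0,   # $ per Terabyte-year of project storage
    "nearline_tb_year": 45.0,   # $ per Terabyte-year of nearline storage
}

def annual_cost(core_years=0.0, project_tb=0.0, nearline_tb=0.0):
    """Return the pre-tax annual cost in dollars."""
    return (core_years * PRICES["cpu_core_year"]
            + project_tb * PRICES["project_tb_year"]
            + nearline_tb * PRICES["nearline_tb_year"])

# Example: 10 core-years of compute, 2 TB project, 10 TB nearline storage
print(annual_cost(core_years=10, project_tb=2, nearline_tb=10))  # 3200.0
```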
=== Compute and Storage ===

The new fee structure for the Frontenac compute cluster applies both to the usage of CPUs (and GPUs) and to storage on disk and tape. Fees are charged per annum, but can be pro-rated to a shorter duration without penalty. The standard units are:
{| class="wikitable"
|-
| '''Type'''
| '''Unit'''
| '''Explanation'''
|-
| CPU usage
| core-year
| One CPU core used continuously for one year; usage scales linearly, e.g. 4 cores for 3 months amount to one core-year.
|-
| Storage
| Terabyte-year
| One Terabyte of data kept on the system for one year.
|}
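Since fees can be pro-rated to shorter durations, the compute charge scales linearly with cores and time. A minimal sketch of that arithmetic, using the 2019 list price of $225 per core-year (the helper function is illustrative, not an official calculator):

```python
# Pro-rating sketch: a core-year is one core for a full year, so the
# charge scales linearly with the number of cores and the duration.
CORE_YEAR_PRICE = 225.0  # 2019 list price, $ per core-year, pre-tax

def prorated_compute_cost(cores, months):
    """Pre-tax compute cost for `cores` cores used for `months` months."""
    core_years = cores * months / 12.0
    return core_years * CORE_YEAR_PRICE

print(prorated_compute_cost(4, 3))   # 4 cores for 3 months  -> 225.0
print(prorated_compute_cost(32, 6))  # 32 cores for 6 months -> 3600.0
```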
=== Metered Compute Access ===

The standard type of access to the Frontenac cluster is metered, i.e. usage is monitored through the scheduler and capped at the amount of compute time purchased.
{| class="wikitable"
|-
| '''Type'''
| '''Explanation'''
|-
| Metered Compute Access
|
* Access entitles the user to a scheduling priority proportional to the number of core-years purchased.
* Continuous usage results in the purchased number of core-years.
* Overall usage is capped at the number of core-years purchased.
* Unused portions of the purchase can be "rolled over" to a second year, after which they expire.
* Users are notified when 80% usage is reached and given the option to purchase further resources.
* An automatic "top-up" option exists.
|-
| Special arrangements
|
* Contact us.
|}
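The metering rules above (usage capped at the purchase, notification at 80%) can be sketched as a toy accounting check. This is an illustration only, not the scheduler's actual logic:

```python
# Toy sketch of the metered-access rules above: overall usage is capped
# at the amount purchased, and a notification fires at 80% consumption.
# (Illustration only -- not the actual SLURM accounting implementation.)
NOTIFY_THRESHOLD = 0.80

def account_status(purchased_core_years, used_core_years):
    if used_core_years >= purchased_core_years:
        return "capped"   # allocation exhausted; no further usage
    if used_core_years >= NOTIFY_THRESHOLD * purchased_core_years:
        return "notify"   # user is offered the option to top up
    return "ok"

print(account_status(10, 3))   # ok
print(account_status(10, 8))   # notify
print(account_status(10, 10))  # capped
```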
=== Project and Nearline storage ===

There are two standard types of storage on the Frontenac file system, both part of the "Hierarchical Storage Management" (HSM) system. "Project" storage refers to storage immediately accessible on disk through the GPFS file system. "Nearline" storage refers to data that reside on tape, but are accessible through disk when needed, albeit with a delay. Here is a more detailed explanation:
{| class="wikitable"
|-
| '''Type'''
| '''Explanation'''
|-
| Project
|
* Data reside on disk and are immediately accessible through the GPFS file system.
|-
| Nearline
|
* Data reside on tape.
* Access requires (automatic) retrieval to disk and entails delays depending on data size.
* Backup policy is the same as for project data.
* '''Not''' suitable for IO during program runs or data analysis.
|-
| Intermediate data
|
* Data reside on global or local disk.
* Subject to periodic purges.
* Standard areas are: /global/scratch, /lscratch, /tmp
* Used for data transactions free of charge (for registered users).
|}
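The nearline behaviour described above (data not accessed for more than a month may migrate to tape) can be illustrated with a small sketch. The 30-day threshold and the function are hypothetical, not the actual HSM policy code:

```python
# Illustrative sketch of the nearline policy described above: files not
# accessed for more than a month become candidates for migration to tape.
# (The exact threshold used by the real HSM system may differ.)
import time

ONE_MONTH = 30 * 24 * 3600  # seconds; the page says "more than a month"

def is_nearline_candidate(last_access_epoch, now=None):
    """True if the file has gone unaccessed long enough to move to tape."""
    now = time.time() if now is None else now
    return (now - last_access_epoch) > ONE_MONTH

# A file last touched 60 days ago would be a migration candidate:
now = 1_000_000_000
print(is_nearline_candidate(now - 60 * 24 * 3600, now=now))  # True
print(is_nearline_candidate(now - 5 * 24 * 3600, now=now))   # False
```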
=== Procedure ===

To set up an agreement and to arrange for payment, please contact [mailto:cac.admin@queensu.ca cac.admin@queensu.ca]. Our procedure usually involves the following steps:
* You [mailto:cac.admin@queensu.ca contact us] to initiate the process.
* We set up a consultation call to determine what your needs are and how we can meet them. This involves surveying your past usage, explaining details of resource allocation, payment, etc. The goal is to arrive at an appropriate allocation size and price.
* You confirm the size of the allocation you want to purchase.
* We send you a draft version of the contract with the specifics of the allocation.
* You return the signed contract to us (scan and email to cac.admin@queensu.ca).
* We send you an invoice.
* Once we receive payment, we make the necessary technical alterations to your scheduling accounts. If you have used the systems before, you will likely not notice any difference.
* Note that any usage after April 1, 2019 is added to the tally. At our discretion, we can provide access to the systems before the contract is in place; usage accrued in the interim is also added to the tally.
* The preferred payment method is journal entry at Queen's, and cheque for other users.