Frontenac:Fees
Fee Structure @ Frontenac
Frontenac serves as our main compute cluster and is operated through the SLURM scheduler. Until March 31, 2019, allocations from the 2018 Resource Allocation Competition ("RAC 2018") of Compute Canada run on this cluster. The cluster is not among the allocatable systems for the 2019 Compute Canada allocation round ("RAC 2019"). Therefore, the operation of Frontenac will be on a cost-recovery basis starting April 1, 2019. This page provides details about the fee structure.
Price List
The following table lists the basic charges for compute and storage usage on the Frontenac cluster. It is meant as a reference to help users decide whether to continue using the Frontenac cluster or to seek alternatives.
Type | Unit Price
Compute (CPU usage) | $225 / core-year
Compute (CPU usage, special arrangements) | Contact us
Storage (Project) | $250 / Terabyte-year
Storage (Nearline) | $45 / Terabyte-year
Storage (special arrangements) | Contact us
The prices quoted are for 2019 and subject to change. They do not include HST.
Compute and Storage
The new fee structure for the Frontenac compute cluster applies both to the usage of CPUs (and GPUs) and to storage on disk and tape. Fees are charged per annum but can be pro-rated to a shorter duration without penalty. The standard units are:
Type | Unit | Explanation
CPU usage | core-year | Use of one CPU core for one full year; shorter durations are pro-rated.
Storage | Terabyte-year | One terabyte of storage held for one full year; shorter durations are pro-rated.
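As an illustration of how the unit prices and pro-rating combine, here is a minimal sketch. It simply applies the 2019 prices from the list above; the quantities are invented and the result is not an official quote.

```python
# Sketch: estimate annual Frontenac charges using the 2019 price list above.
# Illustrative figures only, not an official quote; HST is excluded.
CPU_PRICE_PER_CORE_YEAR = 225.00      # $ / core-year
PROJECT_PRICE_PER_TB_YEAR = 250.00    # $ / Terabyte-year
NEARLINE_PRICE_PER_TB_YEAR = 45.00    # $ / Terabyte-year

def estimate_cost(core_years, project_tb_years=0.0, nearline_tb_years=0.0):
    """Pro-rated cost estimate: fractional values express shorter durations,
    e.g. 0.5 core-years is one core for six months."""
    return (core_years * CPU_PRICE_PER_CORE_YEAR
            + project_tb_years * PROJECT_PRICE_PER_TB_YEAR
            + nearline_tb_years * NEARLINE_PRICE_PER_TB_YEAR)

# 10 cores for a full year plus 2 TB of Project storage for six months
print(estimate_cost(core_years=10, project_tb_years=2 * 0.5))  # 2500.0
```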
Metered Compute Access
There are two standard types of access to the Frontenac cluster: "High-Priority" access, which provides scheduled access that is in most cases "rapid" for smaller jobs, and "Metered" access, which runs at standard priority and may entail longer waiting times but is charged only according to actual usage. In addition, we offer special arrangements. Here is a more detailed explanation:
Type | Explanation
Metered Compute Access | Jobs run at standard priority; you are charged only for the compute time (core-years) you actually use.
Special arrangements | Customized access or pricing; contact us to discuss your requirements.
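To make the metered model concrete, here is a small sketch that converts scheduler-reported core-hours into core-years and an approximate charge. Only the $225 / core-year rate comes from this page; the 8,760 core-hours per core-year conversion and the example usage figure are assumptions.

```python
# Sketch: translate metered usage, reported in core-hours, into core-years
# and an approximate charge at the 2019 rate of $225 / core-year (HST excluded).
# The 8760 hours-per-year conversion is an assumption, not an official definition.
HOURS_PER_CORE_YEAR = 365 * 24        # 8760
RATE_PER_CORE_YEAR = 225.00

def metered_charge(core_hours):
    core_years = core_hours / HOURS_PER_CORE_YEAR
    return core_years, core_years * RATE_PER_CORE_YEAR

core_years, charge = metered_charge(5000)   # e.g. one month of batch jobs
print(f"{core_years:.3f} core-years -> ${charge:.2f}")  # 0.571 core-years -> $128.42
```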
Project and Nearline storage
There are two standard types of storage on the Frontenac file system, both part of the "Hierarchical Storage Management" (HSM) system. "Project" storage refers to storage immediately accessible on disk through the GPFS file system. "Nearline" storage refers to data that reside on tape but are accessible through disk when needed, albeit with a delay. Here is a more detailed explanation:
Type | Explanation
Project | Data kept on disk and immediately accessible through the GPFS file system.
Nearline | Data kept on tape and staged back to disk on demand, with a delay.
Intermediate data |
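Since Nearline is priced well below Project storage, moving cold data to tape can lower storage costs substantially. A rough illustration, assuming an invented 20 TB dataset and the 2019 prices from the list above:

```python
# Sketch: annual cost of keeping 20 TB entirely on Project storage versus
# keeping 5 TB on Project and parking 15 TB of cold data on Nearline (tape).
# Dataset sizes are made up; prices are the 2019 figures from this page.
PROJECT_PER_TB_YEAR = 250.00
NEARLINE_PER_TB_YEAR = 45.00

all_on_project = 20 * PROJECT_PER_TB_YEAR                    # $5000 / year
split = 5 * PROJECT_PER_TB_YEAR + 15 * NEARLINE_PER_TB_YEAR  # $1925 / year
print(all_on_project, split)   # 5000.0 1925.0
```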
Procedure
To set up an agreement and to arrange for payment, please contact cac.admin@queensu.ca. Our procedure usually involves the following steps:
- You contact us to initiate the process
- We set up a consultation call to determine your needs and how we can meet them. This involves surveying your past usage, explaining the details of resource allocation, payment, etc. The goal is to arrive at an appropriate allocation size and price.
- You confirm the size of the allocation you want to purchase.
- We send you a Memorandum of Understanding with the specifics of the allocation.
- You return the signed MOU to us (scan/email to cac.admin@queensu.ca)
- We send you an invoice
- Once we receive payment, we make the necessary technical alterations to your scheduling accounts. If you have used the systems before, you will likely not notice any difference.
- The preferred payment method is a journal entry for Queen's users and a cheque for other users.