Frontenac:MigrateOff


Outline

Frontenac serves as our main compute cluster and is operated through the SLURM scheduler. Until March 31, allocations from Compute Canada's 2018 Resource Allocation Competition run on this cluster. Furthermore, the cluster is used by researchers with a "contributed" priority allocation, and on an "opportunistic", low-priority scheduling basis.
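For users who are new to SLURM, a minimal job script might look like the sketch below. The resource requests, the module, and the program name ("myprog") are placeholders only; check which partitions and accounting options apply to your allocation before submitting.

 #!/bin/bash
 #SBATCH --job-name=example     # name shown in the queue
 #SBATCH --time=01:00:00        # wall-time limit (1 hour)
 #SBATCH --ntasks=1             # number of tasks (cores)
 #SBATCH --mem=4G               # memory request
 # Load required software through lmod (module name is an example),
 # then run your program.
 module load gcc
 ./myprog

Such a script would be submitted with "sbatch jobscript.sh", and "squeue -u $USER" shows its status in the queue.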

Since the cluster will not be among the allocatable systems in the 2019 Compute Canada allocation round ("RAC2019"), Frontenac will be operated on a cost-recovery basis from April 1, 2019 onward. For details about the fee structure, please see our Frontenac Fee Wiki page. This is an important change that affects both compute access and the usage of storage.

From April 2019 we will no longer be able to provide compute services or storage capacity on Frontenac free of charge.

If you are currently using Frontenac for computations, and

A set of guides on how to:

Migrating to the new Frontenac cluster

This is a basic guide for users of our current CentOS 6 production systems ("SW cluster"), explaining and facilitating the migration to our new CentOS 7 systems ("Frontenac", "CAC cluster").

Note: We are in the final phase of the migration process. All users will gain access to the new systems by mid-November and lose access to the old systems in early January 2018. Scheduling of new jobs on the old system will stop in mid-December! Please make yourself familiar with the new systems.

Why migrate?

What's Different?

                    SW (Linux) cluster (old)   CAC (Frontenac) cluster (new)
Operating system    CentOS 6                   CentOS 7
File system type    ZFS                        GPFS
Scheduler           Sun Grid Engine (SGE)      SLURM
Software manager    usepackage                 lmod
Backup management   samfs                      Hierarchical Storage Management (HSM)
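In day-to-day use, the scheduler and software-manager changes translate into different commands. The following side-by-side sketch shows common equivalents; the package name ("anaconda3") is only an example:

 # SW cluster (SGE + usepackage)        Frontenac cluster (SLURM + lmod)
 qsub jobscript.sh                      sbatch jobscript.sh
 qstat -u $USER                         squeue -u $USER
 qdel <jobid>                           scancel <jobid>
 use anaconda3                          module load anaconda3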

Migration Timetable

Month (2017)  Who moves?
September
  • De-activated users
  • Users who have not run a scheduled job for > 6 months
  • Volunteers
October
  • New accounts (i.e. new users go straight to Frontenac)
  • Users who have not run a scheduled job for > 3 months
  • Volunteers
November
  • New accounts (i.e. new users go straight to Frontenac)
December
  • New accounts (i.e. new users go straight to Frontenac)
  • Everyone

We will transfer hardware from the "old" cluster (SW) to the new one (Frontenac) to accommodate the migrated users. This means that in the transition period, the old cluster will gradually become smaller while the new one grows. Dedicated hardware will be moved when its users migrate.

IMPORTANT DEADLINES

November 6, 2017 (SW, "old system")
  • Scheduling halted for all nodes with more than 24 cores
December 1, 2017 (Frontenac, "new system")
  • User notification by email
  • All users receive access to the new systems
January 3, 2018 (SW, "old system")
  • Data synchronization stops
  • User data that differ after this date must be transferred by users
  • Grid Engine scheduling disabled (nodes "draining")
January 19, 2018 (SW, "old system")
  • All running jobs are terminated
  • Remaining hardware is transferred to the new system
January 26, 2018 (SW, "old system")
  • User access to sflogin0/swlogin1 closed
  • SNOlab (SX) cluster jobs terminated
  • SNOlab (SX) login nodes closed

Migration Schedule

  • 1 - Initiation of migration process
    • Email notification to the user (mid-November).
    • Create account on new cluster.
    • Issue temporary credentials to the new cluster and request initial login to change password.
  • 2 - Rolling rsync of user data
    • Repeated until an update requires less than 2 hours:
      • /home/hpcXXXX
      • /u1/work/hpcXXXX
      • /scratch/hpcXXXX if required
      • other directories if required
    • Users can access both new and old systems for 1 month.
      • Data on the old system that are newer than on the new one are rsync'ed (a similar command is shown in the sketch after this list).
  • 3 - Final migration
    • Final rsync.
    • Jobs on old cluster are terminated.
    • User access to old system closed.
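Users who still need to move files themselves after the rolling synchronization ends might use a command along these lines, run from a Frontenac login node. This is a sketch only: it assumes the old login node sflogin0 is reachable from Frontenac, and hpcXXXX stands for your own user name.

 # Pull files from the old SW cluster that are newer than the local copy.
 # -a preserves permissions and timestamps, -u skips files that are
 # already newer on the Frontenac side.
 rsync -avu hpcXXXX@sflogin0:/home/hpcXXXX/ /home/hpcXXXX/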

Migration Q&A

  • Q: Who migrates?
A: All of our users will migrate from the old SW cluster to the new "Frontenac" cluster.
  • Q: Can I use my old "stuff"?
A: Much of the old data and software will be usable on the new systems. However, the data will have to be copied over, as the new systems use a separate file system and cross access is not possible.
  • Q: Do I have to re-compile?
A: It is possible that you will have to re-compile some of the software you are using. We will assist you with this (see the lmod sketch after this list).
  • Q: Do I copy my files over myself?
A: Initially, we transfer your data for you. This synchronization process will end on December 15. If you are still altering your data after this date, it is your responsibility to transfer them manually.
  • Q: Is this optional?
A: No. We move both user data and hardware according to a schedule.
  • Q: Can I decide when to move?
A: We are open to "early adopters", but we cannot grant extensions on the old systems.
  • Q: Will this disrupt my research?
A: Moving hardware and users causes unavoidable scheduling bottlenecks, as substantial portions of the clusters have to be kept inactive to "drain". In addition, while one cluster is dismantled and the other is built up, both are substantially smaller. Larger jobs especially will be hard or impossible to schedule between November 2017 and February 2018.
  • Q: How are resources allocated on the new cluster?
A: Please read through our help file "Resource Allocations on Frontenac".
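Regarding the re-compilation question above, a minimal lmod session might look as follows. The module name is an example and the build step (a plain "make") is a placeholder for your own procedure; run "module avail" to see what is actually installed on Frontenac.

 module avail                   # list software available through lmod
 module load gcc                # example module; pick what you need
 cd ~/src/myprog                # placeholder path to your own source
 make clean && make             # rebuild against the new system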

Help

If you have questions that you can't resolve by checking the documentation, email cac.help@queensu.ca.