|Operating System||Linux CentOS 6.4|
|Number of Nodes||76 (1216 cores)|
|Cores/Node||16 (32 threads)|
|Login/Devel Node||gpc0[1-4] (from login.scinet)|
The Sandy cluster will be decommissioned by the end of 2017. A new system for large parallel jobs, Niagara, will be replacing the TCS, the GPC and contributed systems like Sandy, and is expected to be in production in early 2018. The aim is to keep at least 50% of the GPC available during the installation of the new system. Users will be informed about further details of the transition as they become available.
The Sandybridge (Sandy) cluster consists of 76 x86_64 nodes, each with two 8-core Intel Xeon (Sandybridge) E5-2650 2.0GHz CPUs and 64GB of RAM per node. The nodes are interconnected with 2.6:1 blocking QDR InfiniBand for MPI communications and disk I/O to the SciNet GPFS filesystems. In total this cluster contains 1216 x86_64 cores and 4,864 GB of RAM.
NB - Sandy is a user-contributed system acquired through a CFI LOF to a specific PI. Policies regarding use by other groups are under development and subject to change at any time.
First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to the normal GPC devel nodes gpc0[1-8].
The Sandy nodes are fully compatible with all software/modules built for the standard GPC nodes (see GPC Quickstart Compilers); however, as they are a newer architecture, they also support additional CPU instructions that your program may benefit from. To ensure that you are using these Sandy-specific optimizations, use the following Intel compiler flags with the latest Intel compiler when you compile specifically for the Sandy nodes.
$ module load intel/14.0.1
Optimize your code for the SANDY nodes using the following compiler flags:
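A plausible set of compile lines, assuming the Intel 14 compilers loaded above: on Intel compilers, `-xAVX` targets the AVX instruction set introduced with Sandy Bridge (source and output file names here are placeholders):

```shell
# -xAVX enables AVX instructions (Sandy Bridge and newer); file names are examples
icc   -O3 -xAVX -o mycode mycode.c     # C
ifort -O3 -xAVX -o mycode mycode.f90   # Fortran
```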
- More questions about compiling? See the FAQ.
- NOTE: Code compiled using these options may not be backwards compatible with the regular GPC nodes.
To access the Sandybridge compute nodes you need to use the queue, as with the standard GPC compute nodes. Currently the nodes are scheduled by complete node (16 cores per job), with a maximum walltime of 48 hours.
For an interactive job use
qsub -l nodes=1:ppn=16,walltime=12:00:00 -q sandy -I
or for a batch job use
qsub script.sh
where script.sh is
#!/bin/bash
# Torque submission script for Sandy
#
#PBS -l nodes=2:ppn=16,walltime=1:00:00
#PBS -N sandytest
#PBS -q sandy
cd $PBS_O_WORKDIR
# EXECUTION COMMAND; -np = nodes*ppn
mpirun -np 32 ./a.out
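Rather than hard-coding `-np` as nodes*ppn (2*16 = 32 above), you can derive the process count inside the job from Torque's `$PBS_NODEFILE`, which lists one line per allocated core. A sketch of that variant of the execution command:

```shell
# Count allocated cores (one line per core in the Torque node file)
# and launch that many MPI processes; avoids mismatches when the
# nodes=...:ppn=... request changes.
NP=$(wc -l < $PBS_NODEFILE)
mpirun -np $NP ./a.out
```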
To check running jobs on the sandy nodes only, use
showq -w class=sandy
The same software installed on the GPC is available on Sandy using the same modules framework. See here for full details.