| Intel Xeon Phi (Knights Landing) | |
| Operating System | Linux CentOS 7.2 |
| Number of Nodes | 4 |
| RAM/Node | 96GB DDR4 + 16GB MCDRAM |
This is a development/test system of four x86_64 self-hosted 2nd Generation Intel Xeon Phi (Knights Landing, KNL) nodes, aka an Intel "Ninja" platform. Each node has one 64-core Intel(R) Xeon Phi(TM) CPU 7210 @ 1.30GHz with 4 threads per core. These systems are not add-on accelerators; they act as full-fledged processors running a regular Linux operating system. They are configured with 96GB of DDR4 system RAM along with 16GB of very fast MCDRAM, see here for details. The nodes are connected to the rest of the SciNet clusters with QDR InfiniBand and share the regular SciNet GPFS filesystems.
First, log in via ssh with your SciNet account at login.scinet.utoronto.ca, and from there you can proceed to knl01, knl02, knl03, or knl04.
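For example, a minimal sketch (USERNAME is a placeholder for your own SciNet username):

ssh USERNAME@login.scinet.utoronto.ca
ssh knl01
lscpu    # on a KNL node this should report 64 cores with 4 threads per core (256 logical CPUs)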
KNL Operational Modes
The four nodes all have identical hardware; however, there are multiple options that control how the MCDRAM High Bandwidth Memory (HBM) is accessed. Mode changes are not dynamic and require the node to be rebooted to take effect.
Currently all KNL nodes have the Cluster Mode configured to "Quadrant". See this article for more details about the clustering options that control how memory is accessed on the KNL.
Two nodes, knl01 and knl02, have the MCDRAM configured in "Cache" mode, and the other two, knl03 and knl04, are configured in the "Flat" memory mode. See this article for more details about the MCDRAM memory modes.
When you first compile/port your code, use the Cache mode nodes. If you wish to try to optimize memory performance by using the HBM directly with the memkind library or with numactl, use the Flat mode nodes, for example:
user@knl03$ numactl --membind 1 ./mycode
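In Flat mode the MCDRAM typically appears as a separate, CPU-less NUMA node (node 1 in the example above, assuming the usual Quadrant/Flat layout), which is why --membind 1 places all allocations in HBM. You can check the NUMA layout of a node with:

user@knl03$ numactl --hardware

On the Cache mode nodes only a single NUMA node should be listed, since the MCDRAM is then used transparently as a cache.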
Currently there is no queue; be nice.
Software is available using the standard modules framework used on other SciNet systems; however, the module set is separate from the GPC modules as the KNL nodes run a newer CentOS 7 based operating system.
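For example, to see what is installed and what you currently have loaded:

module avail
module list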
The Xeon Phi uses the standard Intel compilers.
module load intel/16.0.3
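As a minimal sketch of compiling for the KNL (mycode.c is a placeholder for your own source; drop -qopenmp if you are not using OpenMP):

user@knl01$ icc -O3 -xMIC-AVX512 -qopenmp -o mycode mycode.c    # -xMIC-AVX512 targets the AVX-512 instructions supported by Knights Landing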
IntelMPI is currently the default MPI.
module load intelmpi/18.104.22.168
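A minimal sketch of building and running an MPI code on one KNL node, assuming the module above is loaded (mycode.c and the process count are placeholders):

user@knl01$ mpiicc -O3 -xMIC-AVX512 -o mycode_mpi mycode.c    # mpiicc is the Intel MPI wrapper around icc
user@knl01$ mpirun -np 64 ./mycode_mpi                        # one MPI rank per physical core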