SciNet User Support Library


System Status

GPC: up    TCS: up    Sandy: up    File System: up
Gravity: up    P7: up    Viz: up    BGQ: up    HPSS: up


NOTE: BGQ cannot currently be accessed directly from the outside. Please ssh to the login nodes (login.scinet.utoronto.ca) and, once there, run "ssh bgqdev".
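
For example, a minimal sketch of the two-hop login (USERNAME is a placeholder for your own SciNet account):

    # from your own machine, log in to a SciNet login node
    ssh USERNAME@login.scinet.utoronto.ca
    # then, from the login node, hop to the BGQ development front end
    ssh bgqdev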

Sat May 21 18:09:06 EDT 2016: BGQ was overheating due to a stuck valve, and some jobs were killed. We have reset the valve and it appears to be working. TCS is up.

Sat May 21 16:08:57 EDT 2016: P7 and BGQ are up. TCS still has some issues.

Sat 21 May 2016 13:12:14 EDT: GPC and viz nodes are available. Some issues are delaying the other systems.

Sat 21 May 2016 10:07:14 EDT: We are starting to bring up storage and other equipment slowly, to be sure there are no outstanding issues. It will be at least noon before any systems are available. We will update the timeline as we progress.

Fri 20 May 2016 19:04:36 EDT: The problem was traced to a faulty valve controlling makeup water to the cooling tower. The valve has been fixed and the water removed, but systems will remain down overnight to make sure the machine-room sub-floor has dried properly. The next update will be at about 10 AM tomorrow (Saturday) morning. We hope to start bringing systems up at that time.

May 20, 9:50 AM: All systems are being brought down to investigate a water leak in the data centre. Keep checking here for updates. As the investigation is ongoing, it is not yet possible to estimate when systems will be up again.


System News

  • May 3: GPC: Versions 15.0.6 and 16.0.3 of the Intel Compilers are installed as modules (see the module example after this list).
  • Feb 12: GPC: Version 6.0 of Allinea Forge (DDT Debugger, MAP, Performance Reports) installed as a module.
  • Jan 11: The 2016 Resource Allocations for compute cycles are now in effect.
  • Nov 23: The quota for home directories has been increased from 10 GB to 50 GB.
  • Nov 23, GPC: Two visualization nodes, viz01 and viz02, are being set up. They are 8-core Nehalem nodes with 2 graphics cards each, 64 GB of memory, and about 60 GB of local hard disk. For now, you can log directly into viz01 to try it out. We would value users' feedback, requests for suitable software, help with visualization projects, etc.
  • Nov 16: ARC is being decommissioned. During a transition period, the ARC head node and two compute nodes will be kept up. Users are encouraged to start using Gravity instead.
  • Nov 12, GPC: The number of GPC devel nodes has been doubled from 4 to 8, and the new ones can be accessed using gpc0[5-8].
  • Sept 7, GPC: The number of nodes with 32 GB of RAM has been increased from 84 to 205.
  • July 24, GPC: GCC 5.2.0 with Coarray Fortran support, installed as a module.
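
Several of the items above refer to software installed as modules; on the GPC, such software is made available through the modules system. A minimal sketch, assuming the usual version-suffixed module names (run "module avail" to see the exact names on the system):

    module avail intel          # list the available Intel compiler modules
    module load intel/16.0.3    # load one of the newly installed versions (name assumed)
    module list                 # confirm which modules are currently loaded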

(Previous System News)

QuickStart Guides

Tutorials and Manuals

What's New On The Wiki

  • Dec 2014: Updated GPC Quickstart with info on email notifications from the scheduler.
  • Dec 2014: HDF5 compilation page updated.
  • Sept 2014: Improved information on the Python versions installed on the GPC, and which modules are included in each version.
  • Sept 2014: Description of using job arrays on the GPC on the Scheduler page (a brief sketch follows this list).
  • Sept 2014: Instructions on using Hadoop (for the Hadoop workshop held in September).
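
The job-array item above is documented on the Scheduler page; as a rough sketch, a Torque/Moab array job on the GPC could look like the following. The resource request, program name, and input naming are illustrative only:

    #!/bin/bash
    #PBS -l nodes=1:ppn=8,walltime=1:00:00
    #PBS -t 0-9                          # ten array copies, PBS_ARRAYID = 0..9
    cd $PBS_O_WORKDIR
    ./my_program input.$PBS_ARRAYID      # hypothetical program and input files

Submit the script with qsub as usual; each copy runs with its own PBS_ARRAYID value.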

Earlier items can be found in the What's new archive.

