SciNet User Support Library


System Status

  • GPC: up
  • TCS: up
  • Sandy: up
  • File System: up
  • Gravity: up
  • P7: up
  • Viz: down
  • BGQ: up
  • HPSS: up

Wed Feb 3 14:35:52 EST 2016: HPSS is down for maintenance.

Jan 15, 11:20 AM: Systems are being brought back online.

Jan 14, 3:30 PM: Downtime extended to noon on Friday Jan 15th (estimate).

Our sincere apologies for this extension of the downtime. A problem has come to light with some of the disks in the file system. Because of the way the file system is set up, no data has been lost; however, if we put the system back into production now, a single additional disk failure would run the risk of data loss or corruption, so this must be fixed before we resume service.

The BGQ file system is not affected by this problem and may be brought up earlier.

Updates will be posted here.

Note: Because of the downtime, we'll be deferring the scratch purging that was scheduled for January 15th to Wednesday January 20th.

Jan 13, 7:00 AM: Downtime in effect.

SCHEDULED MAINTENANCE DOWNTIME ANNOUNCEMENT

There will be a full SciNet shutdown from January 13th to January 14th, 2016 for scheduled annual maintenance.

All systems will go down at 7 AM on Wednesday January 13th; all login sessions and jobs will be killed at that time.

System News

  • Jan 11: The 2016 Resource Allocations for compute cycles are now in effect.
  • Nov 23: The quota for home directories has been increased from 10 GB to 50 GB.
  • Nov 23, GPC: Two Visualization Nodes, viz01 and viz02, are being set up. They are 8-core Nehalem nodes, each with 2 graphics cards, 64 GB of memory, and about 60 GB of local hard disk. For now, you can log directly into viz01 to try it out. We would value users' feedback, requests for suitable software, requests for help with visualization projects, etc.
  • Nov 16: ARC is being decommissioned. During a transition period, the ARC head node and two compute nodes will be kept up. Users are encouraged to start using Gravity instead.
  • Nov 12, GPC: The number of GPC devel nodes has been doubled from 4 to 8, and the new ones can be accessed using gpc0[5-8].
  • Sept 7, GPC: The number of nodes with 32 GB of RAM has been increased from 84 to 205.
  • July 24, GPC: GCC 5.2.0, with Coarray Fortran support, has been installed as a module; a compilation sketch follows below.

(Previous System News)
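
For those wanting to try the new Coarray Fortran support, here is a minimal sketch of compiling and running a coarray program with the new compiler. The module name gcc/5.2.0 is an assumption (run "module avail gcc" to confirm), and note that -fcoarray=single builds a single-image executable; multi-image runs require -fcoarray=lib plus an external library such as OpenCoarrays.

    # Assumed module name; run "module avail gcc" on the GPC to confirm.
    module load gcc/5.2.0

    # A minimal coarray "hello" using the this_image()/num_images() intrinsics.
    cat > hello_caf.f90 <<'EOF'
    program hello
      implicit none
      print *, 'Hello from image', this_image(), 'of', num_images()
    end program hello
    EOF

    # Single-image build; no extra library required.
    gfortran -fcoarray=single hello_caf.f90 -o hello_caf
    ./hello_caf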

QuickStart Guides

Tutorials and Manuals

What's New On The Wiki

  • Dec 2014: Updated GPC Quickstart with info on email notifications from the scheduler.
  • Dec 2014: The HDF5 compilation page has been updated.
  • Sept 2014: Improved information on the Python versions installed on the GPC, and which modules are included in each version.
  • Sept 2014: A description of how to use job arrays on the GPC has been added to the Scheduler page; a sample job-array script is sketched below this section.
  • Sept 2014: Instructions on using Hadoop (for the Hadoop workshop held in September).

Previous new stuff can be found in the What's new archive.
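
As a companion to the job-array note above, here is a minimal sketch of a job-array submission script, assuming the GPC's Torque/Moab scheduler. The script name, resource request, and input/output file names are hypothetical; see the Scheduler page for the exact conventions.

    #!/bin/bash
    # Hypothetical job-array script for the GPC; adjust the resource request as needed.
    #PBS -l nodes=1:ppn=8,walltime=1:00:00
    #PBS -N array_example
    #PBS -t 0-9                  # Torque job-array syntax: ten sub-jobs, indices 0..9

    cd $PBS_O_WORKDIR            # start in the directory the job was submitted from

    # Each sub-job processes its own (hypothetical) input file, selected by its index.
    ./my_program input_${PBS_ARRAYID}.dat > output_${PBS_ARRAYID}.log

Submitting this script with qsub creates ten sub-jobs; qstat -t lists them individually.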

