
Access Computing

Access to the cluster is supported via SSH and via Open OnDemand. Open OnDemand makes it easy to access scientific software for data visualization, simulation, and modeling through a web browser.

NCShare OnDemand

Currently, NCShare OnDemand supports running a virtual Linux desktop, a pre-installed JupyterLab Apptainer, and an RStudio Apptainer. Experienced users may develop their own containers, which can also be accessed through OnDemand (see the Cluster Computing section for more information).

Current session limits (subject to change as we progress through deployment):

  • wall time: 24 hours
  • CPUs: 40
  • RAM: 208 GB

NCShare SSH Access

Users who would like to directly interact with the NCShare Cluster Computing environment through the Slurm Workload Manager may do so by enabling SSH key authentication from their workstation.
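
A minimal sketch of the key setup, assuming an ed25519 key and a placeholder login hostname (substitute the address provided by your institution contact):

# Generate a key pair on your workstation; register the public key
# (~/.ssh/ncshare.pub) with NCShare before connecting.
$ ssh-keygen -t ed25519 -f ~/.ssh/ncshare
$ ssh -i ~/.ssh/ncshare <ncshareid>@<login-hostname>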

Cluster Storage

The NCShare cluster provides several shared file systems available to all users of the cluster. General partitions are hosted on Isilon network-attached storage arrays connected at 40 Gbps or 10 Gbps.

Warning

Sensitive data is not permitted on cluster storage.

  • /hpc/home/<ncshareid> (50 GB): for personal scripts and other environment setup. When an NCShare account expires, this home directory is automatically removed from the cluster.
  • /work/<ncshareid> (100 TB): unpartitioned, high-speed volume shared across all users. Files older than 75 days are purged automatically.
  • /data/projectname (100 TB): available by request only, through institution contacts. Projects that need to share data can request a six-month allocation of space.

Warning

Cluster storage is not intended for long-term storage, and users should regularly remove files they are not using. Access to NCShare must be renewed annually; all files belonging to expired users are automatically purged.
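
Since files on /work are purged after 75 days, a quick way to spot candidates for cleanup is find; a minimal sketch (the path placeholder matches the table above):

# List files in your /work directory not modified in the last 60 days,
# i.e. files approaching the 75-day purge cutoff.
$ find /work/<ncshareid> -type f -mtime +60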

Sample interactive session

In this session, jmnewton requests an interactive session through Slurm and executes a shell inside the Apptainer image Apptainer.sif. Using Apptainer in this way lets you interact with the environment as if it were a virtual machine.

$ srun -p common --pty bash -i
$ apptainer shell Apptainer.sif

Apptainer> cat /etc/os-release 

PRETTY_NAME="Ubuntu 24.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.3 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
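
By default, srun uses the partition's default resource limits. To request specific resources for an interactive session, pass them explicitly; the values below are illustrative and must stay within the session limits listed above:

# 4 CPU cores, 8 GB of memory, and a 2-hour wall time
$ srun -p common -c 4 --mem=8G -t 02:00:00 --pty bash -i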

Sample batch job script

The sample below illustrates how to use Apptainer as part of a batch session. All defaults (number of nodes, wall time, and so on) are used here.

Sample batch script (slurm_apptainer.sh):

#!/bin/bash
#SBATCH --job-name=slurm_apptainer
#SBATCH --partition=common

apptainer exec /hpc/group/oit/jmnewton/detectron2.sif cat /etc/os-release

Submitting the batch script:

jmnewton@dcc-login-03  /hpc/group/oit/jmnewton $ sbatch slurm_apptainer.sh
Submitted batch job 1518992

jmnewton@dcc-login-03  /hpc/group/oit/jmnewton $ cat slurm-1518992.out
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
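
If the defaults are not sufficient, resource requests can be added as additional #SBATCH directives. A minimal sketch; the values are illustrative and must stay within the session limits listed above:

#!/bin/bash
#SBATCH --job-name=slurm_apptainer
#SBATCH --partition=common
#SBATCH --nodes=1            # number of nodes
#SBATCH --cpus-per-task=4    # CPU cores per task
#SBATCH --mem=8G             # memory per node
#SBATCH --time=01:00:00      # wall time (HH:MM:SS)

apptainer exec /hpc/group/oit/jmnewton/detectron2.sif cat /etc/os-release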

NCShare Cluster Partitions

Slurm partitions are separate queues that divide up a cluster's nodes based on specific attributes. Each partition has its own constraints, which control which jobs can run in it. Users may have multiple Slurm accounts and must specify the correct Slurm account to gain access to restricted partitions.

  • common for jobs that run on CPU only
  • gpu for jobs that run on H200 GPU nodes. Access is limited and granted by request only, through your local contact.

Guidance on GPU usage can be found in the GPU documentation.
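
As a sketch of a GPU batch job, the directives below show the general shape; the account name is a placeholder, and the --gres syntax for requesting GPUs may differ on this cluster, so check the GPU documentation for the exact form:

#!/bin/bash
#SBATCH --job-name=gpu_example
#SBATCH --partition=gpu
#SBATCH --account=<your-slurm-account>   # restricted partitions require the correct Slurm account
#SBATCH --gres=gpu:1                     # request one GPU (syntax may vary; see the GPU documentation)

# Report the GPU(s) allocated to the job.
nvidia-smi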

Note

If a partition is not specified, jobs are submitted to the default common partition.