
Access Computing

Access to the cluster is supported via SSH and via Open OnDemand. Open OnDemand makes it easy to access (through a web browser) scientific software for data visualization, simulation, and modeling.

NCShare OnDemand

Currently OnDemand supports running a virtual Linux desktop, a pre-installed Jupyter Lab Apptainer, and an RStudio Apptainer. Experienced users may develop their own containers, which can also be accessed through OnDemand.

Current session limits (subject to change as we progress through deployment):

  • wall time: 24 hours
  • CPUs: 40
  • RAM: 208 GB

NCShare SSH Access

Users who would like to directly interact with the NCShare Cluster Computing environment through the Slurm Workload Manager may do so by enabling SSH key authentication from their workstation.
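
A minimal sketch of enabling key-based access, assuming an ed25519 key; the login hostname below is a placeholder, and the process for registering your public key is provided by your institution contact:

# Generate an SSH key pair on your workstation (skip if you already have one)
ssh-keygen -t ed25519

# Register the public key (~/.ssh/id_ed25519.pub) with NCShare per your
# institution's instructions, then connect to a login node
# (login.ncshare.org is a placeholder hostname)
ssh <ncshareid>@login.ncshare.org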

Cluster Storage

The DCC has several shared file systems available to all users of the cluster. General partitions are hosted on Isilon network-attached storage arrays connected at 40 Gbps or 10 Gbps.

Sensitive data is not permitted on cluster storage.

Path                    Size     Description
/hpc/home/<ncshareid>   50 GB    Use for personal scripts and other environment setup. When an NCShare account expires, this home directory is automatically removed from the cluster.
/work/<ncshare>         100 TB   Unpartitioned, high-speed volume shared across all users. Files older than 75 days are purged automatically.
/data/projectname       100 TB   Available by request only, through institution contacts. Projects that need to share data can request a 6-month allocation of space.

*** Note: this is not intended for long-term storage, and users should regularly remove files they are not using. Access to NCShare must be renewed annually, and all files belonging to expired users are automatically purged. ***
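
Because files in /work older than 75 days are purged automatically, it can be useful to check what is at risk of removal. A minimal sketch using standard find options, assuming the purge is based on modification time (adjust the path to your own directory):

# List files under your /work directory last modified more than 75 days ago
find /work/<ncshare> -type f -mtime +75 -ls

# After reviewing the list, remove files you no longer need
# find /work/<ncshare> -type f -mtime +75 -delete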

Sample interactive session

In this session, jmnewton requests an interactive session through Slurm and executes a shell using the apptainer image xxx. Using Singularity in this way lets you interact with the environment as if it were a virtual machine.
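
A minimal sketch of such a session, reusing the detectron2.sif image path from the batch example below (resource options are illustrative):

# Request an interactive shell on the common partition
srun --partition=common --pty bash -i

# Inside the job, open a shell in the container image
singularity shell /hpc/group/oit/jmnewton/detectron2.sif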

Sample batch session

The sample below illustrates how to use singularity as part of a batch session.

Sample batch script (slurm_singularity.sh):

#!/bin/bash
# Submit to a random node with a command e.g.
#   sbatch slurm_singularity.sh
#SBATCH --job-name=slurm_singularity
#SBATCH --partition=common
# Print the OS release information from inside the container image
singularity exec /hpc/group/oit/jmnewton/detectron2.sif cat /etc/os-release

Submitting the batch script:

jmnewton@dcc-login-03  /hpc/group/oit/jmnewton $ sbatch slurm_singularity.sh
Submitted batch job 1518992
jmnewton@dcc-login-03  /hpc/group/oit/jmnewton $ cat slurm-1518992.out
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
jmnewton@dcc-login-03  /hpc/group/oit/jmnewton $

NCShare Cluster Partitions

SLURM partitions are separate queues that divide a cluster's nodes based on specific attributes. Each partition has its own constraints, which control which jobs can run in it. Users may have multiple Slurm accounts and must specify the correct account to gain access to restricted partitions; the sketch after the note below shows how to check which accounts and partitions are available to you.

  • common for CPU-only jobs
  • gpu for jobs that run on H200 GPU nodes. Access is limited and granted by request through your local institution contact.

Note: If a partition is not specified, jobs run in the default common partition.
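
To see which Slurm accounts and partitions are available to you, the standard Slurm tools can be queried. A minimal sketch (output fields vary with site configuration):

# List the partitions and their current state
sinfo

# List the Slurm accounts associated with your user
sacctmgr show associations user=$USER format=Account,Partition,QOS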

GPU Partition

The GPU partition comprises 4 nodes with a total of 32 H200 GPUs.

General use of an H200 GPU

Slurm settings:

Account: institution_h200
Partition: gpu
--gres=gpu:h200
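
As a minimal sketch, a batch script header requesting a single H200 GPU might look like the following; <institution>_h200 is a placeholder for the account assigned to you (for example, duke_h200):

#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --account=<institution>_h200
#SBATCH --partition=gpu
#SBATCH --gres=gpu:h200:1
# Report the GPU(s) visible to the job
nvidia-smi

For reference, the current configuration of the gpu partition as reported by scontrol:
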
(base) kkilroy2@login-01:~$ scontrol show partition gpu
PartitionName=gpu
AllowGroups=appstate_h200,campbell_h200,catawba_h200,charlotte_h200,chowan_h200,davidson_h200,duke_h200,ecu_h200,elon_h200,fsu_h200,guilford_h200,meredith_h200,ncat_h200,nccu_h200,ncssm_h200,ncsu_h200,uncp_h200,uncw_h200,unc_h200,wfu_h200,wssu_h200 AllowAccounts=ALL AllowQos=ALL
AllocNodes=ALL Default=NO QoS=N/A
DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO ExclusiveTopo=NO GraceTime=0 Hidden=NO
MaxNodes=UNLIMITED MaxTime=2-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
Nodes=compute-gpu-[01-04]
PriorityJobFactor=30 PriorityTier=30 RootOnly=NO ReqResv=NO OverSubscribe=NO
OverTimeLimit=NONE PreemptMode=OFF
State=UP TotalCPUs=768 TotalNodes=4 SelectTypeParameters=NONE
JobDefaults=(null)
DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
TRES=cpu=768,mem=8255656M,node=4,billing=768,gres/gpu=32