Additional Research Services
The NCShare Compute Cluster is a general-purpose, high-throughput installation that is container-ready and able to host workloads for a broad array of scientific projects.
Quick facts:
- 8 compute nodes with 128 cores and 512 GB RAM each (1024 cores and 4 TB of RAM total)
- Coming soon: 4 GPU nodes, each with 8 NVIDIA H200 GPUs (141 GB of GPU memory per GPU) and 2 TB of RAM (32 H200 GPUs total)
- Interconnects are 10 Gbps or 40 Gbps
- Utilizes a 400 TB FreeNAS NFS share
- Systems run Ubuntu, and Slurm is the job scheduler (see the example below)
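Once you have access (see SSH Access below), you can inspect these resources yourself with standard Slurm commands, for example:

    # List nodes with their CPU counts, memory, and state
    sinfo -N -l

    # Show full details for one node (replace <nodename> with a name reported by sinfo)
    scontrol show node <nodename>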
The NCShare Compute Cluster is managed and supported by a regional consortium. We are currently in an early-adopter phase of deployment and are actively seeking researchers from participating institutions to try out the resources.
While we are in this phase, substantial grants for resource hours are available.
Computing
Access to the cluster is supported via SSH and via Open OnDemand. Open OnDemand makes it easy to access scientific software for data visualization, simulation, and modeling through a web browser.
All cluster users are automatically provided a home directory at /hpc/home/<username> and should save all their files to this location.
*** Note: home directories are not intended for long-term storage, and users should regularly remove files they are not using. Access to NCShare must be renewed annually, and all files belonging to expired users are automatically purged. ***
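For example, you can check how much space your files occupy and copy anything you want to keep back to your workstation before it is purged. The login hostname and the results directory below are placeholders; substitute the address and paths that apply to you.

    # On the cluster: see how much space your home directory uses
    du -sh /hpc/home/$USER

    # On your workstation: copy results you want to keep (placeholder hostname and path)
    rsync -avz <username>@login.ncshare.example.org:/hpc/home/<username>/results/ ./results/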
NCShare OnDemand
Currently, OnDemand supports running a virtual Linux desktop, a pre-installed Jupyter Lab Apptainer, and an RStudio Apptainer. Experienced users may develop their own containers, which can also be accessed through OnDemand.
Current session limits (subject to change as we progress through deployment):
- wall time: 24 hours
- CPUs: 40
- RAM: 208 GB
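For users who want to try building their own container, the following is a minimal sketch of an Apptainer definition file that layers Jupyter Lab on an Ubuntu base image. The file name, package choices, and the way a finished image is wired into OnDemand are shown only as an example and are not prescribed by NCShare. Save it as, say, mylab.def:

    Bootstrap: docker
    From: ubuntu:22.04

    %post
        apt-get update && apt-get install -y python3 python3-pip
        pip3 install --no-cache-dir jupyterlab

    %runscript
        exec jupyter lab "$@"

Build the image on a machine where you have Apptainer installed:

    apptainer build mylab.sif mylab.def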
NCShare SSH Access
Users who would like to interact directly with the NCShare Compute Cluster environment through the Slurm Workload Manager may do so by setting up SSH key authentication from their workstation.
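A rough sketch of that workflow, assuming an OpenSSH client on your workstation (the login hostname below is a placeholder; use the address provided when your access is set up):

    # On your workstation: generate an SSH key pair if you do not already have one
    ssh-keygen -t ed25519

    # Connect once your public key has been registered with NCShare
    ssh <username>@login.ncshare.example.org

Once logged in, jobs are submitted to Slurm with a batch script. The resource requests below are illustrative only; save something like this as hello_job.sh:

    #!/bin/bash
    #SBATCH --job-name=hello
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=01:00:00
    #SBATCH --output=hello_%j.out

    hostname

Submit the script and check its status:

    sbatch hello_job.sh
    squeue -u $USER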
Data Hosting
Coming soon