Cluster Software
Basic Linux software is installed on the cluster for general use. For scientific computation, users are welcome to install software in their /hpc/home/${USER} space as long as sudo access is not required. Most commonly, users self-install Conda to use Python and other supported languages.
For all other software, we recommend building a container and deploying it to NCShare.
Warning
Do not install software in the /work space as this is a temporary storage location where files older than 75 days will be purged automatically.
Conda / Python Install
Users are encouraged to install their own instance of Conda in their home directory. This allows users to manage their own Python environments and install packages as needed without requiring admin access.
To install Conda, log in to the cluster through SSH, the Open OnDemand shell, or an Open OnDemand Jupyter terminal, and run the following commands from your home directory,
- Download the Miniforge installer,

  wget -O Miniforge3.sh "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"

- Run the installation script and follow the instructions,

  bash Miniforge3.sh

  The installer will offer to update your ~/.bashrc (conda init). Say no if you use multiple environments; say yes if you will only be using this one environment.

- Restart your shell,

  exec bash

- Finally, delete the installation file to save space,

  rm Miniforge3.sh
Now you should be able to run conda install, pip install, create environments, and so on.
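As a quick check, a typical post-install workflow might look like the following sketch (the environment name and packages here are just examples, and conda is assumed to be initialized in your shell):

# Create a new environment (name and packages are placeholders)
conda create -n myenv python=3.12 numpy

# Activate it and install additional packages with pip
conda activate myenv
pip install requests

# List available environments
conda env list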
To learn about managing conda environments on Open OnDemand JupyterLab instances, please read the Managing Conda Environments on Open OnDemand JupyterLab instances guide.
Software Containers
NCShare offers container support in two flavors: containers through Container Manager for classroom environments, and Apptainer (formerly Singularity) containers for general use on the cluster. Container Manager reservations last for a semester and provide web-browser access to packages such as RStudio and JupyterLab. Apptainer containers can be run either through Open OnDemand sessions (e.g., Jupyter, RStudio) or from the command line on the cluster.
A few Apptainer containers available on NCShare are listed below.
- Jupyter
- RStudio
The sources for these containers (.def files) are hosted in the GitLab repository ncshare/Apptainer, and the containers themselves (.sif files) are placed in /opt/apps/containers/ for global access. This location is a NAS share available on all Slurm compute nodes as well as the login and Open OnDemand nodes. Note that this list is not exhaustive; more containers may be added over time by admins or users.
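For example, a shared image in that directory can be used directly from its full path; the filename below is only a placeholder, since the actual image names may differ:

# Run a command inside one of the shared containers
# (replace jupyter.sif with an actual image name found in /opt/apps/containers/)
apptainer exec /opt/apps/containers/jupyter.sif jupyter lab --version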
Adding containers to NCShare
Pre-built Docker and Apptainer containers are often available within scientific repositories and on GitHub. Images can be moved to the cluster using the same methods as any other file.
Info
Because many images are quite large, we recommend storing them in /opt/apps/containers/user.
Most Docker containers are fully supported by Apptainer and can be run without modification. Similar pre-built containers are available in external repositories and can be loaded onto NCShare using the apptainer pull or apptainer build commands.
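As a sketch of that workflow, a public Docker Hub image (here Ubuntu 24.04, purely as an example) could be pulled and converted to a .sif file as follows:

# Pull a public Docker Hub image and convert it to an Apptainer .sif file
apptainer pull ubuntu-24.04.sif docker://ubuntu:24.04

# Optionally copy it to the shared location for global access
cp ubuntu-24.04.sif /opt/apps/containers/user/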
Building your own Apptainer containers
We offer two methods to build your own Apptainer containers for use on NCShare.
1. Deploy automatically from a GitLab repository
NCShare provides a GitLab CI/CD pipeline that automatically builds and deploys Apptainer containers from a GitLab repository to NCShare. To use this method, users need an account on gitlab.com and must be invited to the ncshare/Apptainer group (https://gitlab.com/groups/ncshare/apptainer/-/group_members).
Once a member of the group, a user should create a project under the group. The project name will then be the name of the Apptainer image file. In the project, the user should create an Apptainer.def file and a .gitlab-ci.yml file. The .gitlab-ci.yml file is the same for all projects and should look as follows,
.gitlab-ci.yml:
image:
  name: almalinux:9.6-minimal
  entrypoint: ["/bin/sh", "-c"]

default:
  tags:
    - NCShare

stages:
  - build

build:
  stage: build
  script:
    - microdnf install -y epel-release
    - microdnf install -y apptainer apptainer-suid
    - apptainer build --disable-cache ${CI_PROJECT_NAME}.sif Apptainer.def
    - cp ${CI_PROJECT_NAME}.sif /containers/
The Apptainer.def file contains the recipe to build the container. A simple example is shown below, which creates a container with Ubuntu 24.04.
Apptainer.def:
Bootstrap: docker
From: ubuntu:24.04
%post
apt-get -y update
apt-get -y install cowsay
%environment
export LC_ALL=C
export PATH=/usr/games:$PATH
%runscript
date | cowsay
Once committed and pushed to the repository, the GitLab CI/CD pipeline will automatically build the container and deploy it to NCShare. The resulting container will be available in /opt/apps/containers/user/${CI_PROJECT_NAME}.sif on NCShare, where $CI_PROJECT_NAME is the name of the project repository.
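For instance, if a project were named my-tools (a hypothetical name), the deployed image could then be run directly from the shared path:

# Run the image deployed by the pipeline (project name here is hypothetical)
apptainer run /opt/apps/containers/user/my-tools.sif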
Info
The gitlab.com CI/CD process uses a gitlab-runner installation on an NCShare Proxmox VM named gitlab-runner. The gitlab-runner software uses the NCShare tag to accept jobs (see .gitlab-ci.yml above). The runner uses the Docker executor, and its TOML config file bind-mounts /opt/apps so that the built image file can be written out to /opt/apps/containers/users, owned by the Apptainer service account user.
2. Build on the NCShare cluster
Users can also build Apptainer containers directly from the command line on the NCShare cluster. Once a container definition file (e.g., Apptainer.def) has been created in a directory, the container can be built with the following commands,
export APPTAINER_CACHEDIR=/work/${USER}/tmp
export APPTAINER_TMPDIR=/work/${USER}/tmp
apptainer build Apptainer.sif Apptainer.def
This produces the container image file, Apptainer.sif, in the same directory. For global access, it may be moved to /opt/apps/containers/user/.
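If the temporary directory referenced above does not already exist, it may need to be created before building; a minimal sketch:

# Create the Apptainer cache/tmp directory in /work before building
mkdir -p /work/${USER}/tmp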
For more information on building Apptainer containers, please refer to the official Apptainer user documentation.
Containers for use by Open OnDemand
Users can build containers for use by the Open OnDemand applications that launch software in containers (e.g., Jupyter Lab and RStudio). The example below builds an image with Jupyter Lab and PyTorch installed that can be used by the Jupyter Lab Apptainer application; additional Python modules can be installed in the same way as needed.
Apptainer.def:
Bootstrap: docker
From: nvidia/cuda:12.8.1-cudnn-devel-ubuntu24.04
%help
This image contains NVidia CUDA, Jupyter Lab, and PyTorch
%setup
%files
%labels
Maintainer Mike Newton
Maintainer_Email jmnewton@duke.edu
%environment
export TZ="America/New_York"
export LANG="en_US.UTF-8"
export LC_COLLATE="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
export LC_MESSAGES="en_US.UTF-8"
export LC_MONETARY="en_US.UTF-8"
export LC_NUMERIC="en_US.UTF-8"
export LC_TIME="en_US.UTF-8"
export LC_PAPER="en_US.UTF-8"
export LC_MEASUREMENT="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"
export PATH="/opt/conda/bin:$PATH"
%post
export TZ="America/New_York"
export DEBIAN_FRONTEND=noninteractive
# Update Apt index and install some packages
apt update -qq
apt install -y build-essential wget curl libcurl4 zlib1g zlib1g-dev sudo
# Install Miniforge (conda)
wget -O Miniforge3.sh "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3.sh -b -p /opt/conda
rm Miniforge3.sh
# Install Jupyter and PyTorch
export PATH="/opt/conda/bin:$PATH"
mamba install -y jupyter jupyterlab
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
# Install any other additional modules as needed
%runscript
exec "${@}"
Running Containers
Once a container is built it can be run in several ways.
1. apptainer run
This command executes the container's runscript, i.e., the commands defined under the %runscript section of the .def file. In a previous example, we had the following in the .def file,
...
%runscript
date | cowsay
When this container is run using apptainer run, it will execute the date | cowsay command.
$ apptainer run Apptainer.sif
 ______________________________
< Mon Dec 15 20:32:56 UTC 2025 >
 ------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
2. apptainer exec
This command will execute a specific command within the container. For example,
$ apptainer exec Apptainer.sif python some_python_script.py
will run some_python_script.py using the Python interpreter within the container.
3. apptainer shell
This command will open an interactive shell session within the container. For example,
$ apptainer shell Apptainer.sif
Apptainer> cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.3 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
Type exit to leave the container shell.
4. Open OnDemand Applications
NCShare's Open OnDemand provides applications to launch Jupyter Lab and RStudio sessions within Apptainer containers. When launching these applications, users can select a desired container under "Apptainer Container File", including ones they built themselves.
Important Notes
- For computationally intensive jobs, always run containers on the compute nodes through a Slurm interactive session or a batch job script. E.g., to run an interactive session with a container, use the following commands,

  $ srun -p common --pty bash -i
  $ apptainer shell Apptainer.sif

  Example Slurm batch job script that uses MPI within the container to run a parallel application on one node with 64 tasks per node,

  #!/bin/bash
  #SBATCH -J apptainer_test        # Job name
  #SBATCH -p common                # Queue (partition) name
  #SBATCH -N 1                     # Total # of nodes
  #SBATCH --ntasks-per-node 64     # Tasks per node
  #SBATCH --mem=10G                # Memory per node

  cd $SLURM_SUBMIT_DIR
  apptainer exec Apptainer.sif mpirun -n $SLURM_NTASKS aims.x > aims.out 2> aims.err

  However, for multi-node MPI jobs, both the host system and the container must have compatible MPI installations. An example job script for this configuration is shown below,

  #!/bin/bash
  #SBATCH -J apptainer_test_2N     # Job name
  #SBATCH -p common                # Queue (partition) name
  #SBATCH -N 2                     # Total # of nodes
  #SBATCH --ntasks-per-node 64     # Tasks per node
  #SBATCH --mem=10G                # Memory per node

  source /hpc/home/uherathmudiyanselage1/intel/oneapi/setvars.sh --force > /dev/null
  cd $SLURM_SUBMIT_DIR
  mpirun -n $SLURM_NTASKS \
      apptainer exec \
      Apptainer.sif \
      aims.x > aims.out 2> aims.err

- For GPU-enabled containers, be sure to include the --nv flag when running the container to enable NVIDIA GPU support from the host GPU,

  $ apptainer exec --nv Apptainer.sif nvidia-smi

  A batch-job sketch that requests a GPU is shown after this list.

- By default, Apptainer automatically mounts several host directories into the container, including $HOME, the current working directory (CWD), /tmp, /var/tmp, /dev, /etc/hosts, /etc/localtime, /proc, and /sys. You can specify additional bind mounts using the --bind or -B option when running the container. E.g.,

  $ apptainer exec --bind /path/on/host:/path/in/container Apptainer.sif <command_to_run>

  Multiple bind mounts can be specified by separating them with commas.
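As a minimal sketch of such a GPU batch job (the partition name gpu and the --gres request are assumptions; adapt them to the cluster's actual GPU partition and resource names):

#!/bin/bash
#SBATCH -J apptainer_gpu_test    # Job name
#SBATCH -p gpu                   # GPU partition name (assumption; check the cluster's partitions)
#SBATCH --gres=gpu:1             # Request one GPU
#SBATCH --mem=10G                # Memory per node

cd $SLURM_SUBMIT_DIR
# --nv exposes the host NVIDIA GPU and driver libraries inside the container
apptainer exec --nv Apptainer.sif nvidia-smi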