Linux Tools

I. HPC Setup Without sudo — CUDA, CMake & .bashrc Configuration

Overview: This guide covers three tasks on HPC clusters where you have no root access: installing the CUDA Toolkit to your home directory without sudo; installing CMake locally from a pre-built binary; and structuring your ~/.bashrc with a modular ~/.bashrc.d/ system for managing CUDA versions, libtorch, cuDNN, and custom paths.

1. Installing CUDA Without sudo

CUDA only truly requires two things: the nvcc compiler and user-space libraries such as libcudart. Both can be installed entirely inside your home directory, for example under:

$HOME/cuda-12.9

1) Check Your Linux Distribution and Architecture

Terminal window
cat /etc/os-release
NAME="Rocky Linux"
VERSION="9.7 (Blue Onyx)"
ID="rocky"
...
Terminal window
uname -m
x86_64

2) Download the Runfile from NVIDIA

Go to: https://developer.nvidia.com/cuda-downloads

Select your OS/architecture and choose the runfile (local) installer type.

CUDA download page

The downloaded file will be named something like:

cuda_12.9.1_575.57.08_linux.run
Note: To install an older CUDA version, visit the archive at https://developer.nvidia.com/cuda-toolkit-archive. Make sure the toolkit version you choose is no newer than the maximum CUDA version your driver supports, shown in the "CUDA Version" field of nvidia-smi's banner.
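The driver's supported CUDA version can also be queried directly. A quick check (requires a working driver; it fails harmlessly on driver-less login nodes):

```shell
if command -v nvidia-smi >/dev/null 2>&1; then
  # Raw driver version, e.g. "575.57.08"; the "CUDA Version:" field in
  # nvidia-smi's banner shows the highest CUDA release that driver supports.
  nvidia-smi --query-gpu=driver_version --format=csv,noheader
else
  echo "nvidia-smi not available on this node"
fi
```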

3) Install the Toolkit Only (No Driver)

Make the runfile executable:

Terminal window
chmod +x cuda_*.run

Run the installer with driver installation disabled:

Terminal window
./cuda_12.9*.run \
  --silent \
  --toolkit \
  --toolkitpath=$HOME/cuda-12.9 \
  --no-drm \
  --no-man-page
Flag            Purpose
--silent        Non-interactive installation
--toolkit       Install the CUDA Toolkit only
--toolkitpath   Target installation directory (user home)
(no --driver)   Omitting the driver flag avoids any root requirement

4) Configure Environment Variables (Modular .bashrc.d)

Create the directory:

Terminal window
mkdir -p ~/.bashrc.d

Create a CUDA config file:

Terminal window
nano ~/.bashrc.d/cuda.sh

Write the following:

Terminal window
# ===== Default CUDA =====
export CUDA_HOME=$HOME/cuda-12.9
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
# ===== CUDA version switcher =====
use_cuda () {
  local ver=$1
  if [ ! -d "$HOME/cuda-$ver" ]; then
    echo "CUDA $ver not found in \$HOME"
    return 1
  fi
  export CUDA_HOME=$HOME/cuda-$ver
  # Prepending puts the selected version ahead of any previously chosen one
  # (entries accumulate over a session; open a fresh shell to reset PATH)
  export PATH=$CUDA_HOME/bin:$PATH
  export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
  echo "Switched to CUDA $ver"
  nvcc --version | head -n 1
}
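With the file sourced, switching toolkits is a one-liner. A usage sketch (assumes a second toolkit was installed the same way, e.g. into $HOME/cuda-12.6; a version that was never installed returns 1):

```shell
# Assumes ~/.bashrc.d/cuda.sh from the step above has been sourced.
type use_cuda >/dev/null 2>&1 || . ~/.bashrc.d/cuda.sh 2>/dev/null || true
use_cuda 12.6 || echo "CUDA 12.6 toolkit not installed under \$HOME"
```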

Ensure ~/.bashrc loads all files in ~/.bashrc.d/ (add if not present):

Terminal window
if [ -d ~/.bashrc.d ]; then
  for rc in ~/.bashrc.d/*; do
    [ -f "$rc" ] && . "$rc"
  done
fi

Reload the environment:

Terminal window
source ~/.bashrc

5) Verify the Installation

Check the compiler:

Terminal window
nvcc -V

If a CUDA version string is printed, the toolkit is installed correctly.

Check GPU availability:

Terminal window
nvidia-smi
Note: This is a critical and often overlooked distinction. If nvidia-smi runs successfully, the server already has a GPU driver installed and you can use the GPU. If it fails, the GPU driver is missing — without sudo you cannot install the driver yourself, meaning CUDA can only be used for compilation, not for GPU execution.
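Even driver-less, the toolkit can still compile device code. A minimal end-to-end check (the /tmp file names are arbitrary; only execution, not compilation, needs a GPU):

```shell
# Create a trivial CUDA source and try to compile it with the new nvcc.
cat > /tmp/hello.cu <<'EOF'
#include <cstdio>
__global__ void noop() {}
int main() { printf("toolkit build OK\n"); return 0; }
EOF
if command -v nvcc >/dev/null 2>&1; then
  nvcc /tmp/hello.cu -o /tmp/hello && echo "nvcc compilation succeeded"
else
  echo "nvcc not on PATH; check ~/.bashrc.d/cuda.sh"
fi
```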

2. Installing CMake Locally (No Root)

1) Download the Official Pre-built Installer

The official binary installer requires no source compilation, no gcc or make, installs quickly, and is compatible with most Linux environments.

Terminal window
cd ~
wget https://github.com/Kitware/CMake/releases/download/v3.29.6/cmake-3.29.6-linux-x86_64.sh
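Optionally, verify the download before running it. Kitware publishes a checksum list alongside each release (the cmake-3.29.6-SHA-256.txt asset name follows the pattern used on the releases page):

```shell
# Compare the installer against the published SHA-256 list.
if [ -f cmake-3.29.6-linux-x86_64.sh ]; then
  wget -q https://github.com/Kitware/CMake/releases/download/v3.29.6/cmake-3.29.6-SHA-256.txt
  grep linux-x86_64.sh cmake-3.29.6-SHA-256.txt | sha256sum -c -
else
  echo "download the installer first"
fi
```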

2) Install to Your User Directory

Terminal window
bash cmake-3.29.6-linux-x86_64.sh --skip-license --prefix=$HOME/.local
Flag                   Purpose
--skip-license         Skip the interactive license confirmation
--prefix=$HOME/.local  Install into the user-level software directory (Linux convention)

3) Add to PATH and Verify

Terminal window
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
cmake --version
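To confirm the binary actually works, not just prints a version, configure an empty project (the /tmp/cmake-smoke path is arbitrary; LANGUAGES NONE avoids needing a compiler):

```shell
mkdir -p /tmp/cmake-smoke && cd /tmp/cmake-smoke
printf 'cmake_minimum_required(VERSION 3.10)\nproject(smoke LANGUAGES NONE)\n' > CMakeLists.txt
if command -v cmake >/dev/null 2>&1; then
  cmake -S . -B build >/dev/null && echo "cmake configure OK"
else
  echo "cmake not on PATH"
fi
```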

3. .bash_profile — Auto-load .bashrc on Login

.bash_profile
# Load aliases and functions from .bashrc
if [ -f ~/.bashrc ]; then
  . ~/.bashrc
fi

Bash login shells (including SSH sessions) read ~/.bash_profile rather than ~/.bashrc, so this snippet ensures .bashrc is sourced automatically on every login.
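To check which kind of shell you are currently in (bash-specific; SSH sessions start login shells, while a new terminal tab usually does not):

```shell
# bash exposes a read-only login_shell option; shopt -q only sets the exit code.
if shopt -q login_shell 2>/dev/null; then
  echo "login shell: ~/.bash_profile was read at startup"
else
  echo "non-login shell: only ~/.bashrc applies"
fi
```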


4. .bashrc Section-by-Section Walkthrough

1) System Initialization

Loads the system-level bash configuration (modules, colors, completions, etc.):

Terminal window
if [ -f /etc/bashrc ]; then
  . /etc/bashrc
fi

2) User PATH Initialization

Adds user program directories to PATH without creating duplicates:

Terminal window
# Only add if not already present (prevents duplicate PATH entries)
if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]; then
  # Prepend user-level bin dirs so locally installed tools take priority
  PATH="$HOME/.local/bin:$HOME/bin:$PATH"
fi
# Export so child processes (Python, bash, etc.) inherit this PATH
export PATH

3) The .bashrc.d Modular Config System

Splits shell configuration into separate, focused files. Recommended when managing multiple CUDA versions, multiple Conda environments, multi-project research, or many custom aliases.
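A typical split might look like this (the file names are illustrative, not required):

```
~/.bashrc.d/
├── cuda.sh      # CUDA_HOME, PATH/LD_LIBRARY_PATH, the use_cuda switcher
├── libtorch.sh  # C++ PyTorch include and library paths
├── conda.sh     # conda initialization block
└── aliases.sh   # personal aliases and helper functions
```

Because the loader globs ~/.bashrc.d/*, files are sourced in lexicographic order; prefix names with numbers (e.g. 10-cuda.sh) if ordering matters.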

Terminal window
# Load all user config modules from ~/.bashrc.d/
if [ -d ~/.bashrc.d ]; then
  for rc in ~/.bashrc.d/*; do
    # Only source regular files (not directories or other types)
    if [ -f "$rc" ]; then
      # Source the file (equivalent to: source "$rc")
      # Makes aliases, functions, and exports take effect immediately
      . "$rc"
    fi
  done
fi

4) PATH and Library Configuration

CUDA 12.6 Environment

Terminal window
# ===== CUDA 12.6 =====
export CUDA_HOME=$HOME/cuda-12.6
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH

Sets the default CUDA version to 12.6.

libtorch Headers and Library Paths

Terminal window
# ===== libtorch — PyTorch's official C++ API and runtime =====
export CPATH=$HOME/libtorch/include:$HOME/libtorch/include/torch/csrc/api/include:$CPATH
export LIBRARY_PATH=$HOME/libtorch/lib:$LIBRARY_PATH
export LD_LIBRARY_PATH=$HOME/libtorch/lib:$LD_LIBRARY_PATH

Enables calling PyTorch from C++.

Use case                              Keep?
Writing C++/CUDA code with libtorch   ✔ Required
Python-only PyTorch usage             ❌ Can be removed
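For the C++ route, these variables feed a plain g++ invocation. A sketch, assuming libtorch was unpacked to ~/libtorch and a main.cpp exists in the current directory (the exact -l set depends on the CPU/GPU build you downloaded):

```shell
if [ -d "$HOME/libtorch/lib" ] && [ -f main.cpp ]; then
  g++ main.cpp -std=c++17 \
    -I"$HOME/libtorch/include" \
    -I"$HOME/libtorch/include/torch/csrc/api/include" \
    -L"$HOME/libtorch/lib" -ltorch -ltorch_cpu -lc10 \
    -Wl,-rpath,"$HOME/libtorch/lib" -o app
else
  echo "libtorch or main.cpp not found; nothing to build"
fi
```

With CPATH and LIBRARY_PATH exported as above, the -I/-L flags are technically redundant; they are spelled out here for clarity.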

cuDNN Paths

Only needed for C++ builds, custom CUDA kernels, or TensorRT:

Terminal window
# ===== cuDNN =====
export CPATH=$HOME/cudnn/include:$CPATH
export LIBRARY_PATH=$HOME/cudnn/lib:$LIBRARY_PATH
export LD_LIBRARY_PATH=$HOME/cudnn/lib:$LD_LIBRARY_PATH

CUTLASS

Terminal window
# ===== CUTLASS (header-only CUDA template library) =====
# Builds only need the include path: -I$CUTLASS/include
export CUTLASS=$HOME/cutlass

Custom Command Paths

Terminal window
# ===== PATH: tells the shell where to find executables =====
export PATH=$HOME/.local/bin:$PATH
export PATH="$HOME/bin:$PATH"

💡 One-line Takeaway
On a no-root HPC cluster: install CUDA with --toolkitpath=$HOME/cuda-X.Y --no-drm, install CMake with --prefix=$HOME/.local, and keep your shell config clean by splitting everything into ~/.bashrc.d/ modules — remember that CUDA without a GPU driver can compile but not run.
https://lxy-alexander.github.io/blog/posts/tools/linux-tools/
Author
Alexander Lee
Published at
2026-02-07
License
CC BY-NC-SA 4.0