
I. Contributing to vLLM — Development Guide#

Overview: This document covers the complete workflow for contributing to vLLM, including environment setup, two installation paths (Python-only vs. CUDA/C++ compilation), linting, documentation preview, test execution, and PR submission guidelines. Whether you are contributing for the first time or working on daily development, this guide serves as a handy reference.

1. Contributing to vLLM#

Ways to contribute include:

  • Reporting bugs / opening issues
  • Adding support for new models
  • Implementing new features
  • Improving documentation
  • Helping others, reviewing PRs
  • Starring the repo, writing articles — these count too

2. Developing#

1) Step 1: Clone the Repository#

Terminal window
git clone https://github.com/vllm-project/vllm.git
cd vllm
2) Step 2: Create the Virtual Environment#

Terminal window
uv venv --python 3.12 --seed
source .venv/bin/activate

If you don’t have uv, install it first:

Terminal window
curl -LsSf https://astral.sh/uv/install.sh | sh
Note: Why Python 3.12? Because vLLM's CI (official automated tests) primarily uses 3.12. Using the same version prevents situations where tests pass locally but fail in CI.

To delete the virtual environment:

Terminal window
rm -rf .venv
uv cache clean

3. Installing vLLM (Two Paths)#

1) Path A: Python-Only Changes (No Compilation)#

Terminal window
VLLM_USE_PRECOMPILED=1 uv pip install -e .

What this means:

  • Installs in Editable Mode (-e) — changes to source files take effect immediately
  • Does not compile C++/CUDA locally
  • Downloads pre-compiled binaries from the corresponding pre-built wheel

👉 Advantage: Very fast, suitable for the majority of PRs.
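Editable installs are recorded in the package metadata (pip and uv write a `direct_url.json` file for direct installs), so you can verify which mode you are in. A minimal sketch using only the standard library — the helper name `is_editable` is mine, not part of vLLM:

```python
import json
from importlib.metadata import PackageNotFoundError, distribution

def is_editable(name: str) -> bool:
    """Return True if `name` was installed in editable mode (PEP 660).

    pip/uv record direct installs in direct_url.json inside the
    package's dist-info directory; index installs do not have it.
    """
    try:
        dist = distribution(name)
    except PackageNotFoundError:
        return False
    raw = dist.read_text("direct_url.json")  # absent for normal index installs
    if raw is None:
        return False
    return json.loads(raw).get("dir_info", {}).get("editable", False)

print(is_editable("vllm"))  # True in the dev env set up above; False otherwise
```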


2) Path B: CUDA/C++ Changes (Requires Local Compilation)#

If you previously ran Path A, first uninstall the pre-built vllm package:

Terminal window
uv pip uninstall vllm

Install PyTorch (cu129):

Terminal window
uv pip install torch torchvision torchaudio \
--extra-index-url https://download.pytorch.org/whl/cu129

Install the current project in Editable Mode:

Terminal window
CCACHE_NOHASHDIR="true" uv pip install --no-build-isolation -e . -v
Note: uv pip install -e . installs the project in the current directory in editable mode. . refers to the current directory (i.e., the vllm repo root). It reads pyproject.toml (primary) or setup.py (legacy), then installs the project into your virtual environment.

Common Error: ImportError: undefined symbol#

If you encounter the following error:

(vllm) [xli49@ghpc008 vllm]$ python examples/offline_inference/basic/basic.py
Traceback (most recent call last):
...
File "/data/home/xli49/vllm/vllm/platforms/cuda.py", line 16, in <module>
import vllm._C # noqa
^^^^^^^^^^^^^^
ImportError: /data/home/xli49/vllm/vllm/_C.abi3.so: undefined symbol: _ZN3c104cuda9SetDeviceEa

The cause is a mismatch between the torch ABI used at compile time and the torch version at runtime. Ensure you use --no-build-isolation and recompile with the correct CUDA version:

Terminal window
uv pip install -e . --no-build-isolation
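Before recompiling, it helps to confirm which torch (and which CUDA build) is actually importable at runtime, since that is the ABI the compiled extension must match. A generic diagnostic sketch, not a vLLM tool:

```python
# Print the runtime torch version and its CUDA build, if torch is present.
# A vllm._C extension built against a different torch will raise
# "undefined symbol" errors like the one shown above.
try:
    import torch
    print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
except ImportError:
    print("torch is not installed in this environment")
```

If the printed CUDA build (e.g. 12.9) does not match the index you installed from (cu129), reinstall torch before recompiling.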

Why Does vLLM Require --no-build-isolation?#

Because compiling vLLM’s C++/CUDA extensions depends heavily on:

  • The torch installed in your current environment
  • The matching CUDA version (cu129/cu128, etc.)
  • Other compilation-related packages

Without this flag, the build system uses an isolated temporary environment, which may result in:

  • A mismatched torch being installed in the temporary environment
  • The current torch’s CUDA configuration not being found
  • Compilation failures or incompatible binaries being generated

4. Linting (Code Style & Formatting)#

vLLM uses pre-commit to enforce a unified code style.

  • uv pip install pre-commit: installs the pre-commit tool
  • pre-commit install: installs hooks into .git/hooks/ so that checks run automatically on every git commit

1) Install and Enable#

Terminal window
uv pip install pre-commit
pre-commit install

From now on, every git commit will automatically run the checks ✅

2) Run Manually#

Terminal window
pre-commit run # Check only staged files
pre-commit run -a # Check all files (= --all-files)

3) CI-only Hooks (Trigger Locally on Demand)#

Terminal window
pre-commit run --hook-stage manual markdownlint
pre-commit run --hook-stage manual mypy-3.10

5. Documentation#

vLLM’s docs are built with MkDocs.

1) Install Documentation Dependencies#

Terminal window
uv pip install -r requirements/docs.txt

2) Preview the Docs Site Locally#

Terminal window
mkdocs serve

3) Faster Preview (Skip API Reference Generation)#

The API_AUTONAV_EXCLUDE environment variable controls whether the API Reference for the vllm package is generated; excluding it makes the preview build noticeably faster.

Terminal window
API_AUTONAV_EXCLUDE=vllm mkdocs serve
Note: Ensure your Python version is compatible with the plugins. For example, mkdocs-awesome-nav requires Python 3.10+.

4) Forward the Port from a Remote Server#

-L = Local port forwarding: forwards a port on your local machine to a port on the remote machine, so http://localhost:8000 in your local browser reaches the docs server running on the remote host.

Terminal window
ssh -L 8000:127.0.0.1:8000 xli49@spiedie.binghamton.edu

5) Connect to a Remote GPU Node via Jump Host#

-J = Jump host: connect to a target machine by hopping through an intermediate host first.

Terminal window
ssh -J xli49@spiedie.binghamton.edu -L 8000:127.0.0.1:8000 xli49@ghpc005

6. Testing#

vLLM uses pytest.

1) Path A: Full CI-equivalent Setup (CUDA)#

Terminal window
uv pip install -r requirements/common.txt -r requirements/dev.txt --torch-backend=auto
pytest tests/

2) Path B: Minimal Test Tooling Only#

Terminal window
uv pip install pytest pytest-asyncio
pytest tests/

3) Run a Single Test File (Useful for Debugging)#

Terminal window
pytest -s -v tests/test_logger.py

7. Common Errors#

1) Missing Python.h#

If you encounter the following error during compilation or dependency installation:

Python.h: No such file or directory

Fix on Ubuntu:

Terminal window
sudo apt install python3-dev

8. Important Warnings#

⚠️ The repository is not yet fully covered by mypy — do not rely on mypy being fully green.

⚠️ Not all tests pass on CPU — without a GPU, many tests will fail locally. The official stance is: rely on CI for those tests.


9. PR Submission Guidelines#

1) DCO Sign-off#

Every commit must include a Signed-off-by line:

Terminal window
git commit -s -m "xxx"
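The -s flag appends a DCO trailer built from your git identity (git config user.name / user.email) to the commit message. A sketch of what the resulting message looks like — the name, email, and subject below are placeholders:

```python
# Build the commit message `git commit -s` would produce: the subject,
# a blank line, then the Signed-off-by trailer from your git identity.
name, email = "Jane Doe", "jane@example.com"   # placeholders for git config values
subject = "[Bugfix] handle empty prompt"       # hypothetical PR-style subject
message = f"{subject}\n\nSigned-off-by: {name} <{email}>"
print(message)
```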

2) PR Title Must Include a Category Prefix#

Examples:

  • [Bugfix] ...
  • [Kernel] ...
  • [Core] ...
  • [Doc] ...
  • [CI/Build] ...

PRs without a valid prefix may not be reviewed.


💡 One-line Takeaway
For Python-only changes, use VLLM_USE_PRECOMPILED=1 uv pip install -e . to get started in seconds; for CUDA/C++ changes, always compile with --no-build-isolation and match your torch CUDA version to avoid ABI symbol errors.
vllm contributor
https://lxy-alexander.github.io/blog/posts/vllm/vllm-contributor/
Author
Alexander Lee
Published at
2026-03-08
License
CC BY-NC-SA 4.0