
Releases

This page describes how to install and use our release artifacts for ROCm and external builds like PyTorch. We produce build artifacts as part of our Continuous Integration (CI) build/test workflows as well as release artifacts as part of Continuous Delivery (CD) nightly releases. For the development status of GPU architecture support in TheRock, please see the SUPPORTED_GPUS.md document, which tracks readiness and onboarding progress for each AMD GPU architecture.

See also the Roadmap for support and Build artifacts overview pages.

Important

These instructions assume familiarity with how to use ROCm. Please see https://rocm.docs.amd.com/ for general information about the ROCm software platform.


Installing releases using pip

We recommend installing ROCm and projects like PyTorch via pip, the Python package installer.

We currently support Python 3.11, 3.12, and 3.13.

Tip

We highly recommend working within a Python virtual environment:

python -m venv .venv
source .venv/bin/activate

Multiple virtual environments can be present on a system at a time, allowing you to switch between them at will.
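
For example, switching between two environments is just a matter of deactivating one and activating the other. A minimal sketch (the environment names below are arbitrary examples, not names the project prescribes):

```shell
# Create two independent environments (hypothetical names):
python3 -m venv .venv-rocm
python3 -m venv .venv-scratch

# Activate the first...
source .venv-rocm/bin/activate

# ...then switch to the second:
deactivate
source .venv-scratch/bin/activate
```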

Warning

If you really want a system-wide install, you can pass --break-system-packages to pip outside a virtual environment. In that case, command-line shims for executables are installed to /usr/local/bin, which normally takes precedence over /usr/bin and may therefore conflict with a previous installation of ROCm.

Python packages release status

Important

Known issues with the Python wheels are tracked at ROCm#808.

⚠️ Windows packages are new and may be unstable! ⚠️

| Platform | ROCm Python packages | PyTorch Python packages |
| -------- | -------------------- | ----------------------- |
| Linux | Release portable Linux packages | Release Linux PyTorch wheels |
| Windows | Release Windows packages | Release Windows PyTorch wheels |

Index page listing

For now, rocm and torch packages are published to GPU-architecture-specific index pages and must be installed by passing an appropriate --index-url argument to pip. They may later be pushed to the Python Package Index (PyPI) or other channels using a process like https://wheelnext.dev/. Please check back regularly, as these instructions will change as we migrate to official indexes and adjust project layouts.

| Product Name | GFX Target | GFX Family | Install instructions |
| ------------ | ---------- | ---------- | -------------------- |
| MI300A/MI300X | gfx942 | gfx94X-dcgpu | rocm // torch |
| MI350X/MI355X | gfx950 | gfx950-dcgpu | rocm // torch |
| AMD RX 7900 XTX | gfx1100 | gfx110X-all | rocm // torch |
| AMD RX 7800 XT | gfx1101 | gfx110X-all | rocm // torch |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 | gfx110X-all | rocm // torch |
| AMD Radeon 780M Laptop iGPU | gfx1103 | gfx110X-all | rocm // torch |
| AMD Strix Halo iGPU | gfx1151 | gfx1151 | rocm // torch |
| AMD RX 9060 / XT | gfx1200 | gfx120X-all | rocm // torch |
| AMD RX 9070 / XT | gfx1201 | gfx120X-all | rocm // torch |
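
All of the per-family indexes follow the same URL pattern, so the right index for your GPU can be derived from the GFX Family column in the table above. A sketch (the family value here is just an example; substitute the row matching your GPU):

```shell
# Pick the family matching your GPU from the table above:
GFX_FAMILY=gfx110X-all

# Every nightly index lives under the same base URL:
INDEX_URL="https://rocm.nightlies.amd.com/v2/${GFX_FAMILY}/"
echo "${INDEX_URL}"
# https://rocm.nightlies.amd.com/v2/gfx110X-all/

# The resulting URL is then passed to pip, e.g.:
#   pip install --index-url "${INDEX_URL}" "rocm[libraries,devel]"
```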

Installing ROCm Python packages

We provide several Python packages which together form the complete ROCm SDK.

| Package name | Description |
| ------------ | ----------- |
| rocm | Primary sdist meta package that dynamically determines other deps |
| rocm-sdk-core | OS-specific core of the ROCm SDK (e.g. compiler and utility tools) |
| rocm-sdk-devel | OS-specific development tools |
| rocm-sdk-libraries | OS-specific libraries |

rocm for gfx94X-dcgpu

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| MI300A/MI300X | gfx942 |

Install instructions:

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/ "rocm[libraries,devel]"

rocm for gfx950-dcgpu

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| MI350X/MI355X | gfx950 |

Install instructions:

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx950-dcgpu/ "rocm[libraries,devel]"

rocm for gfx110X-all

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| AMD RX 7900 XTX | gfx1100 |
| AMD RX 7800 XT | gfx1101 |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 |
| AMD Radeon 780M Laptop iGPU | gfx1103 |

Install instructions:

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ "rocm[libraries,devel]"

rocm for gfx1151

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| AMD Strix Halo iGPU | gfx1151 |

Install instructions:

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ "rocm[libraries,devel]"

rocm for gfx120X-all

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| AMD RX 9060 / XT | gfx1200 |
| AMD RX 9070 / XT | gfx1201 |

Install instructions:

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ "rocm[libraries,devel]"

Using ROCm Python packages

After installing the ROCm Python packages, you should see them in your environment:

pip freeze | grep rocm
# rocm==6.5.0rc20250610
# rocm-sdk-core==6.5.0rc20250610
# rocm-sdk-devel==6.5.0rc20250610
# rocm-sdk-libraries-gfx110X-all==6.5.0rc20250610

You should also see various tools on your PATH and in the bin directory:

which rocm-sdk
# .../.venv/bin/rocm-sdk

ls .venv/bin
# activate       amdclang++    hipcc      python                 rocm-sdk
# activate.csh   amdclang-cl   hipconfig  python3                rocm-smi
# activate.fish  amdclang-cpp  pip        python3.12             roc-obj
# Activate.ps1   amdflang      pip3       rocm_agent_enumerator  roc-obj-extract
# amdclang       amdlld        pip3.12    rocminfo               roc-obj-ls

The rocm-sdk tool can be used to inspect and test the installation:

$ rocm-sdk --help
usage: rocm-sdk {command} ...

ROCm SDK Python CLI

positional arguments:
  {path,test,version,targets,init}
    path                Print various paths to ROCm installation
    test                Run installation tests to verify integrity
    version             Print version information
    targets             Print information about the GPU targets that are supported
    init                Expand devel contents to initialize rocm[devel]

$ rocm-sdk test
...
Ran 22 tests in 8.284s
OK

$ rocm-sdk targets
gfx1100;gfx1101;gfx1102
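
The semicolon-separated target list is easy to consume from a script. A minimal sketch using the output shown above (the string is hard-coded here for illustration; in a real script you would capture it with command substitution):

```shell
# Output copied from the `rocm-sdk targets` example above:
TARGETS="gfx1100;gfx1101;gfx1102"

# Split on ';' and handle each target individually:
for t in $(printf '%s' "$TARGETS" | tr ';' ' '); do
  echo "target: $t"
done
# target: gfx1100
# target: gfx1101
# target: gfx1102
```

In a real script, replace the hard-coded string with TARGETS="$(rocm-sdk targets)".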

To initialize the rocm[devel] package, use the rocm-sdk tool to eagerly expand development contents:

$ rocm-sdk init
Devel contents expanded to '.venv/lib/python3.12/site-packages/_rocm_sdk_devel'

These contents are useful when using the package outside of Python; when used from Python, they are lazily expanded on first use.

Once you have verified your installation, you can continue to use it for standard ROCm development or install PyTorch or another supported Python ML framework.

Installing PyTorch Python packages

Using the index pages listed above, you can also install torch, torchaudio, and torchvision.

Note

By default, pip will install the latest stable versions of each package.

Tip

The torch packages depend on rocm[libraries], so ROCm packages should be installed automatically for you and you do not need to explicitly install ROCm first.

Tip

If you previously installed PyTorch with the pytorch-triton-rocm package, please uninstall it before installing the new packages:

pip uninstall pytorch-triton-rocm

The Triton package is now simply named triton.

torch for gfx94X-dcgpu

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| MI300A/MI300X | gfx942 |

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/ torch torchaudio torchvision

torch for gfx950-dcgpu

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| MI350X/MI355X | gfx950 |

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx950-dcgpu/ torch torchaudio torchvision

torch for gfx110X-all

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| AMD RX 7900 XTX | gfx1100 |
| AMD RX 7800 XT | gfx1101 |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 |
| AMD Radeon 780M Laptop iGPU | gfx1103 |

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ torch torchaudio torchvision

torch for gfx1151

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| AMD Strix Halo iGPU | gfx1151 |

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ torch torchaudio torchvision

torch for gfx120X-all

Supported devices in this family:

| Product Name | GFX Target |
| ------------ | ---------- |
| AMD RX 9060 / XT | gfx1200 |
| AMD RX 9070 / XT | gfx1201 |

pip install --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ torch torchaudio torchvision

Using PyTorch Python packages

After installing the torch package with ROCm support, PyTorch can be used normally:

import torch

print(torch.cuda.is_available())
# True
print(torch.cuda.get_device_name(0))
# e.g. AMD Radeon Pro W7900 Dual Slot

See also the Testing the PyTorch installation instructions in the AMD ROCm documentation.

Installing from tarballs

Standalone "ROCm SDK tarballs" are assembled from the same artifacts as the pip-installable Python packages, but omit the wrapper Python wheels and utility scripts.

Installing release tarballs

Release tarballs are automatically uploaded to AWS S3 buckets.

| S3 bucket | Description |
| --------- | ----------- |
| therock-nightly-tarball | Nightly builds from the main branch |
| therock-dev-tarball | ⚠️ Development builds from project maintainers ⚠️ |

After downloading, simply extract the release tarball into place:

mkdir therock-tarball && cd therock-tarball
# For example...
wget https://therock-nightly-tarball.s3.us-east-2.amazonaws.com/therock-dist-linux-gfx110X-all-6.5.0rc20250610.tar.gz

mkdir install
tar -xf *.tar.gz -C install

Installing per-commit CI build tarballs manually

Our CI builds artifacts at every commit. These can be installed by "flattening" them from the expanded artifacts down to a ROCm SDK "dist folder" using the artifact-flatten command from build_tools/fileset_tool.py.

  1. Download TheRock's source code and set up your Python environment:

    # Clone the repository
    git clone https://github.com/ROCm/TheRock.git
    cd TheRock
    
    # Init python virtual environment and install python dependencies
    python3 -m venv .venv && source .venv/bin/activate
    pip install -r requirements.txt
  2. Find the CI workflow run that you want to install from. For example, search through recent successful runs of the ci.yml workflow for push events on the main branch using this page, choosing a build that took more than a few minutes (documentation-only changes skip building and uploading).

  3. Download the artifacts for that workflow run from S3 using either the AWS CLI or AWS SDK for Python (Boto3):

    export LOCAL_ARTIFACTS_DIR=~/therock-artifacts
    export LOCAL_INSTALL_DIR=${LOCAL_ARTIFACTS_DIR}/install
    mkdir -p ${LOCAL_ARTIFACTS_DIR}
    mkdir -p ${LOCAL_INSTALL_DIR}
    
    # Example: https://github.com/ROCm/TheRock/actions/runs/15575624591
    export RUN_ID=15575624591
    export OPERATING_SYSTEM=linux # or 'windows'
    aws s3 cp s3://therock-artifacts/${RUN_ID}-${OPERATING_SYSTEM}/ \
      ${LOCAL_ARTIFACTS_DIR} \
      --no-sign-request --recursive --exclude "*" --include "*.tar.xz"
  4. Flatten the artifacts:

    python build_tools/fileset_tool.py artifact-flatten \
      ${LOCAL_ARTIFACTS_DIR}/*.tar.xz -o ${LOCAL_INSTALL_DIR}

Installing tarballs using install_rocm_from_artifacts.py

This script installs ROCm community builds produced by TheRock from a developer/nightly tarball, a specific CI runner build, or an existing installation of TheRock. It is used by CI and can also be run locally. Run pip install boto3 first to get the necessary library.

Examples:

  • Downloads all gfx94X S3 artifacts from GitHub CI workflow run 15052158890 to the default output directory therock-build:

    python build_tools/install_rocm_from_artifacts.py --run-id 15052158890 --amdgpu-family gfx94X-dcgpu --tests
  • Downloads the version 6.4.0rc20250516 gfx110X artifacts from GitHub release tag nightly-tarball to the specified output directory build:

    python build_tools/install_rocm_from_artifacts.py --release 6.4.0rc20250516 --amdgpu-family gfx110X-all --output-dir build
  • Downloads the version 6.4.0.dev0+e015c807437eaf32dac6c14a9c4f752770c51b14 gfx110X artifacts from GitHub release tag dev-tarball to the default output directory therock-build:

    python build_tools/install_rocm_from_artifacts.py --release 6.4.0.dev0+e015c807437eaf32dac6c14a9c4f752770c51b14 --amdgpu-family gfx110X-all
  • Downloads all gfx94X S3 artifacts from GitHub CI workflow run 19644138192 in the ROCm/rocm-libraries repository:

    python build_tools/install_rocm_from_artifacts.py --run-id 19644138192 --amdgpu-family gfx94X-dcgpu --tests --run-github-repo ROCm/rocm-libraries

Select your AMD GPU family from the therock_amdgpu_targets.cmake file.

By default, CI workflow retrieval downloads all artifacts except test artifacts. To download only specific artifact groups, pass the corresponding flag, such as --rand (RAND artifacts); for test artifacts, pass --tests; for base artifacts only, pass --base-only.

Using installed tarballs

After installing (downloading and extracting) a tarball, you can test it by running programs from the bin/ directory:

ls install
# bin  include  lib  libexec  llvm  share

# Now test some of the installed tools:
./install/bin/rocminfo
./install/bin/test_hip_api

You may also want to add the install directory to your PATH or set other environment variables like ROCM_HOME.
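
For example (a sketch, assuming the tarball was extracted into ./install as above; whether tools expect ROCM_HOME, ROCM_PATH, or another variable depends on the consuming software):

```shell
# Point ROCM_HOME at the extracted tree and put its tools on PATH:
export ROCM_HOME="$PWD/install"
export PATH="$ROCM_HOME/bin:$PATH"

# Tools can now be invoked without the ./install/bin prefix, e.g.:
#   rocminfo
```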

Verifying your installation

After installing ROCm via either pip packages or tarballs, you can verify that your GPU is properly recognized.

Linux

Run one of the following commands to verify that your GPU is detected and properly initialized by the ROCm stack:

rocminfo
# or
amd-smi

Windows

Run the following command to verify GPU detection:

hipInfo.exe

Additional troubleshooting

If your GPU is not recognized or you encounter issues: