This page describes how to install and use our release artifacts for ROCm and external builds like PyTorch. We produce build artifacts as part of our Continuous Integration (CI) build/test workflows, as well as release artifacts as part of Continuous Delivery (CD) nightly releases. For the development status of GPU architecture support in TheRock, please see the SUPPORTED_GPUS.md document, which tracks readiness and onboarding progress for each AMD GPU architecture.
See also the Roadmap for support and Build artifacts overview pages.
Important
These instructions assume familiarity with how to use ROCm. Please see https://rocm.docs.amd.com/ for general information about the ROCm software platform.
Prerequisites:
- We recommend installing the latest AMDGPU driver on Linux and the Adrenalin driver on Windows
- Linux users: please see Configuring permissions for GPU access, which is needed for ROCm
We recommend installing ROCm and projects like PyTorch via pip, the
Python package installer.
We currently support Python 3.11, 3.12, and 3.13.
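If you are unsure which interpreter a given environment uses, it can help to check before installing. This is an illustrative sketch; the `check_python` helper is ours, not part of TheRock, and the supported set comes from the sentence above:

```python
import sys

# Wheel support per this page: CPython 3.11, 3.12, and 3.13.
SUPPORTED = {(3, 11), (3, 12), (3, 13)}

def check_python(version_info=None):
    """Return True if this interpreter can use the ROCm wheels."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) in SUPPORTED

if __name__ == "__main__":
    print("supported" if check_python() else "unsupported")
```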
Tip
We highly recommend working within a Python virtual environment:
```
python -m venv .venv
source .venv/bin/activate
```

Multiple virtual environments can be present on a system at a time, allowing you to switch between them at will.
Warning
If you really want a system-wide install, you can pass --break-system-packages to pip outside a virtual environment.
In this case, command-line interface shims for executables are installed to /usr/local/bin, which normally takes precedence over /usr/bin and might therefore conflict with a previous installation of ROCm.
Important
Known issues with the Python wheels are tracked at ROCm#808.
For now, rocm and torch packages are published to GPU-architecture-specific index
pages and must be installed by pointing pip at the appropriate index with an
--index-url argument. They may later be pushed to the
Python Package Index (PyPI) or other channels using a process
like https://wheelnext.dev/. Please check back regularly,
as these instructions will change as we migrate to official indexes and adjust
project layouts.
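The per-architecture index URLs follow a single pattern. A small hypothetical helper (the `nightly_index_url` name is ours; the URL layout matches the install commands on this page):

```python
NIGHTLY_BASE = "https://rocm.nightlies.amd.com/v2"

def nightly_index_url(gfx_family: str) -> str:
    """Build the --index-url value for a GFX family, e.g. 'gfx110X-all'."""
    return f"{NIGHTLY_BASE}/{gfx_family}/"

print(nightly_index_url("gfx110X-all"))
# https://rocm.nightlies.amd.com/v2/gfx110X-all/
```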
| Product Name | GFX Target | GFX Family | Install instructions |
|---|---|---|---|
| MI300A/MI300X | gfx942 | gfx94X-dcgpu | rocm // torch |
| MI350X/MI355X | gfx950 | gfx950-dcgpu | rocm // torch |
| AMD RX 7900 XTX | gfx1100 | gfx110X-all | rocm // torch |
| AMD RX 7800 XT | gfx1101 | gfx110X-all | rocm // torch |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 | gfx110X-all | rocm // torch |
| AMD Radeon 780M Laptop iGPU | gfx1103 | gfx110X-all | rocm // torch |
| AMD Strix Halo iGPU | gfx1151 | gfx1151 | rocm // torch |
| AMD RX 9060 / XT | gfx1200 | gfx120X-all | rocm // torch |
| AMD RX 9070 / XT | gfx1201 | gfx120X-all | rocm // torch |
We provide several Python packages which together form the complete ROCm SDK.
- See ROCm Python Packaging via TheRock for information about each package.
- The packages are defined in the build_tools/packaging/python/templates/ directory.
| Package name | Description |
|---|---|
| `rocm` | Primary sdist meta package that dynamically determines other deps |
| `rocm-sdk-core` | OS-specific core of the ROCm SDK (e.g. compiler and utility tools) |
| `rocm-sdk-devel` | OS-specific development tools |
| `rocm-sdk-libraries` | OS-specific libraries |
Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI300A/MI300X | gfx942 |
Install instructions:
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/ "rocm[libraries,devel]"
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI350X/MI355X | gfx950 |
Install instructions:
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx950-dcgpu/ "rocm[libraries,devel]"
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 7900 XTX | gfx1100 |
| AMD RX 7800 XT | gfx1101 |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 |
| AMD Radeon 780M Laptop iGPU | gfx1103 |
Install instructions:
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ "rocm[libraries,devel]"
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD Strix Halo iGPU | gfx1151 |
Install instructions:
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ "rocm[libraries,devel]"
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 9060 / XT | gfx1200 |
| AMD RX 9070 / XT | gfx1201 |
Install instructions:
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ "rocm[libraries,devel]"
```

After installing the ROCm Python packages, you should see them in your environment:
```
pip freeze | grep rocm
# rocm==6.5.0rc20250610
# rocm-sdk-core==6.5.0rc20250610
# rocm-sdk-devel==6.5.0rc20250610
# rocm-sdk-libraries-gfx110X-all==6.5.0rc20250610
```

You should also see various tools on your PATH and in the bin directory:
```
which rocm-sdk
# .../.venv/bin/rocm-sdk

ls .venv/bin
# activate       amdclang++    hipcc    python                 rocm-sdk
# activate.csh   amdclang-cl   hipconfig python3               rocm-smi
# activate.fish  amdclang-cpp  pip      python3.12             roc-obj
# Activate.ps1   amdflang      pip3     rocm_agent_enumerator  roc-obj-extract
# amdclang       amdlld        pip3.12  rocminfo               roc-obj-ls
```

The rocm-sdk tool can be used to inspect and test the installation:
```
$ rocm-sdk --help
usage: rocm-sdk {command} ...

ROCm SDK Python CLI

positional arguments:
  {path,test,version,targets,init}
    path     Print various paths to ROCm installation
    test     Run installation tests to verify integrity
    version  Print version information
    targets  Print information about the GPU targets that are supported
    init     Expand devel contents to initialize rocm[devel]

$ rocm-sdk test
...
Ran 22 tests in 8.284s

OK

$ rocm-sdk targets
gfx1100;gfx1101;gfx1102
```

To initialize the rocm[devel] package, use the rocm-sdk tool to eagerly expand development contents:
```
$ rocm-sdk init
Devel contents expanded to '.venv/lib/python3.12/site-packages/_rocm_sdk_devel'
```

These contents are needed when using the package outside of Python; when used from Python, they are lazily expanded on first use.
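The installed ROCm distributions can also be checked programmatically. This is a sketch using only the Python standard library; it simply filters installed packages by the `rocm` name prefix used by the packages listed above:

```python
from importlib import metadata

def installed_rocm_packages() -> dict:
    """Map installed distribution names starting with 'rocm' to their versions."""
    found = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name and name.startswith("rocm"):
            found[name] = dist.version
    return found

if __name__ == "__main__":
    # Prints e.g. {'rocm': '6.5.0rc20250610', 'rocm-sdk-core': '6.5.0rc20250610', ...}
    # or an empty dict if no ROCm packages are installed.
    print(installed_rocm_packages())
```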
Once you have verified your installation, you can continue to use it for standard ROCm development or install PyTorch or another supported Python ML framework.
Using the index pages listed above, you can
also install torch, torchaudio, and torchvision.
Note
By default, pip will install the latest stable versions of each package.
- If you want to allow installing prerelease versions, use the `--pre` flag.
- If you want to install other versions, take note of the compatibility matrix:

  | torch version | torchaudio version | torchvision version |
  |---|---|---|
  | 2.10 | 2.10 | 0.25 |
  | 2.9 | 2.9 | 0.24 |
  | 2.8 | 2.8 | 0.23 |
  | 2.7 | 2.7.1a0 | 0.22.1 |

  For example, torch 2.7.1 and compatible wheels can be installed by specifying `torch==2.7.1 torchaudio==2.7.1a0 torchvision==0.22.1`.
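For scripting, the compatibility matrix can be encoded directly. A hypothetical helper (the data comes from the matrix above; the function name is ours):

```python
# torch release series -> (torchaudio version, torchvision version),
# per the compatibility matrix on this page.
TORCH_COMPAT = {
    "2.10": ("2.10", "0.25"),
    "2.9": ("2.9", "0.24"),
    "2.8": ("2.8", "0.23"),
    "2.7": ("2.7.1a0", "0.22.1"),
}

def pip_specs(torch_series: str, torch_version: str) -> list:
    """Return pip requirement specifiers for a torch release series."""
    audio, vision = TORCH_COMPAT[torch_series]
    return [f"torch=={torch_version}", f"torchaudio=={audio}", f"torchvision=={vision}"]

print(pip_specs("2.7", "2.7.1"))
# ['torch==2.7.1', 'torchaudio==2.7.1a0', 'torchvision==0.22.1']
```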
Tip
The torch packages depend on rocm[libraries], so ROCm packages should
be installed automatically for you and you do not need to explicitly install
ROCm first.
Tip
If you previously installed PyTorch with the pytorch-triton-rocm package,
please uninstall it before installing the new packages:
```
pip uninstall pytorch-triton-rocm
```

The triton package is now named `triton`.
Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI300A/MI300X | gfx942 |
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/ torch torchaudio torchvision
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI350X/MI355X | gfx950 |
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx950-dcgpu/ torch torchaudio torchvision
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 7900 XTX | gfx1100 |
| AMD RX 7800 XT | gfx1101 |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 |
| AMD Radeon 780M Laptop iGPU | gfx1103 |
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ torch torchaudio torchvision
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD Strix Halo iGPU | gfx1151 |
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ torch torchaudio torchvision
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 9060 / XT | gfx1200 |
| AMD RX 9070 / XT | gfx1201 |
```
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ torch torchaudio torchvision
```

After installing the torch package with ROCm support, PyTorch can be used normally:
```python
import torch
print(torch.cuda.is_available())
# True
print(torch.cuda.get_device_name(0))
# e.g. AMD Radeon Pro W7900 Dual Slot
```

See also the Testing the PyTorch installation instructions in the AMD ROCm documentation.
Standalone "ROCm SDK tarballs" are assembled from the same artifacts as the Python packages. They can be installed without pip and omit the additional wrapper Python wheels and utility scripts.
Release tarballs are automatically uploaded to AWS S3 buckets.
| S3 bucket | Description |
|---|---|
| therock-nightly-tarball | Nightly builds from the main branch |
| therock-dev-tarball | |
After downloading, simply extract the release tarball into place:
```
mkdir therock-tarball && cd therock-tarball

# For example...
wget https://therock-nightly-tarball.s3.us-east-2.amazonaws.com/therock-dist-linux-gfx110X-all-6.5.0rc20250610.tar.gz

mkdir install
tar -xf *.tar.gz -C install
```

Our CI builds artifacts at every commit. These can be installed by "flattening"
them from the expanded artifacts down to a ROCm SDK "dist folder" using the
artifact-flatten command from
build_tools/fileset_tool.py.
1. Download TheRock's source code and set up your Python environment:

   ```
   # Clone the repository
   git clone https://github.com/ROCm/TheRock.git
   cd TheRock

   # Init python virtual environment and install python dependencies
   python3 -m venv .venv && source .venv/bin/activate
   pip install -r requirements.txt
   ```

2. Find the CI workflow run that you want to install from. For example, search through recent successful runs of the `ci.yml` workflow for `push` events on the `main` branch using this page (choosing a build that took more than a few minutes, since documentation-only changes skip building and uploading).

3. Download the artifacts for that workflow run from S3 using either the AWS CLI or AWS SDK for Python (Boto3):

   ```
   export LOCAL_ARTIFACTS_DIR=~/therock-artifacts
   export LOCAL_INSTALL_DIR=${LOCAL_ARTIFACTS_DIR}/install
   mkdir -p ${LOCAL_ARTIFACTS_DIR}
   mkdir -p ${LOCAL_INSTALL_DIR}

   # Example: https://github.com/ROCm/TheRock/actions/runs/15575624591
   export RUN_ID=15575624591
   export OPERATING_SYSTEM=linux # or 'windows'

   aws s3 cp s3://therock-artifacts/${RUN_ID}-${OPERATING_SYSTEM}/ \
     ${LOCAL_ARTIFACTS_DIR} \
     --no-sign-request --recursive --exclude "*" --include "*.tar.xz"
   ```

4. Flatten the artifacts:

   ```
   python build_tools/fileset_tool.py artifact-flatten \
     ${LOCAL_ARTIFACTS_DIR}/*.tar.xz -o ${LOCAL_INSTALL_DIR}
   ```
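If you prefer to script the download in Python (the AWS SDK for Python, Boto3, is the alternative mentioned in the steps above), the S3 key layout can be captured in a small helper. This is a hypothetical sketch; only the `{run_id}-{os}/` prefix pattern and bucket name come from the `aws s3 cp` command above:

```python
ARTIFACTS_BUCKET = "therock-artifacts"

def artifact_prefix(run_id, operating_system="linux"):
    """S3 key prefix for a CI run's artifacts, e.g. '15575624591-linux/'."""
    if operating_system not in ("linux", "windows"):
        raise ValueError(f"unexpected OS: {operating_system}")
    return f"{run_id}-{operating_system}/"

def artifact_url(run_id, operating_system="linux"):
    """s3:// URL matching the source of the 'aws s3 cp' command above."""
    return f"s3://{ARTIFACTS_BUCKET}/{artifact_prefix(run_id, operating_system)}"

print(artifact_url(15575624591))
# s3://therock-artifacts/15575624591-linux/
```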
This script installs ROCm community builds produced by TheRock from a developer/nightly tarball, a specific CI runner build, or an existing installation of TheRock. It is used by CI and can also be run locally. Run pip install boto3 first to get the necessary library.
Examples:
- Download all gfx94X S3 artifacts from GitHub CI workflow run 15052158890 to the default output directory `therock-build`:

  ```
  python build_tools/install_rocm_from_artifacts.py --run-id 15052158890 --amdgpu-family gfx94X-dcgpu --tests
  ```

- Download the version `6.4.0rc20250516` gfx110X artifacts from GitHub release tag `nightly-tarball` to the specified output directory `build`:

  ```
  python build_tools/install_rocm_from_artifacts.py --release 6.4.0rc20250516 --amdgpu-family gfx110X-all --output-dir build
  ```

- Download the version `6.4.0.dev0+e015c807437eaf32dac6c14a9c4f752770c51b14` gfx110X artifacts from GitHub release tag `dev-tarball` to the default output directory `therock-build`:

  ```
  python build_tools/install_rocm_from_artifacts.py --release 6.4.0.dev0+e015c807437eaf32dac6c14a9c4f752770c51b14 --amdgpu-family gfx110X-all
  ```

- Download all gfx94X S3 artifacts from GitHub CI workflow run 19644138192 in the `ROCm/rocm-libraries` repository:

  ```
  python build_tools/install_rocm_from_artifacts.py --run-id 19644138192 --amdgpu-family gfx94X-dcgpu --tests --run-github-repo ROCm/rocm-libraries
  ```
Select your AMD GPU family from the therock_amdgpu_targets.cmake file.

By default, for CI workflow retrieval, all artifacts except test artifacts are downloaded. To download specific artifacts, pass the corresponding flag, such as `--rand` for RAND artifacts. For test artifacts, pass `--tests`. For base artifacts only, pass `--base-only`.
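The flag combinations described above can be assembled programmatically when scripting installs. This is a hypothetical sketch that only covers the flags documented on this page (the `install_command` helper is ours):

```python
def install_command(amdgpu_family, run_id=None, release=None,
                    output_dir=None, tests=False, base_only=False):
    """Build an install_rocm_from_artifacts.py invocation from documented flags."""
    if (run_id is None) == (release is None):
        raise ValueError("pass exactly one of run_id or release")
    cmd = ["python", "build_tools/install_rocm_from_artifacts.py"]
    if run_id is not None:
        cmd += ["--run-id", str(run_id)]
    else:
        cmd += ["--release", release]
    cmd += ["--amdgpu-family", amdgpu_family]
    if output_dir:
        cmd += ["--output-dir", output_dir]
    if tests:
        cmd.append("--tests")
    if base_only:
        cmd.append("--base-only")
    return cmd

print(" ".join(install_command("gfx94X-dcgpu", run_id=15052158890, tests=True)))
# python build_tools/install_rocm_from_artifacts.py --run-id 15052158890 --amdgpu-family gfx94X-dcgpu --tests
```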
After installing (downloading and extracting) a tarball, you can test it by
running programs from the bin/ directory:
```
ls install
# bin include lib libexec llvm share

# Now test some of the installed tools:
./install/bin/rocminfo
./install/bin/test_hip_api
```

You may also want to add the install directory to your PATH or set other environment variables like ROCM_HOME.
After installing ROCm via either pip packages or tarballs, you can verify that your GPU is properly recognized.
Run one of the following commands to verify that your GPU is detected and properly initialized by the ROCm stack:

```
rocminfo
# or
amd-smi
```

On Windows, run the following command to verify GPU detection:

```
hipInfo.exe
```

If your GPU is not recognized or you encounter issues:
- Linux users: check system logs using `dmesg | grep amdgpu` for specific error messages
- Review memory allocation settings (see the FAQ for GTT configuration on unified memory systems)
- Ensure you have the latest AMDGPU driver on Linux or Adrenalin driver on Windows
- For platform-specific troubleshooting when using PyTorch, see the AMD ROCm documentation.