diff --git a/docs/03-github-orchestrator/01-introduction.mdx b/docs/03-github-orchestrator/01-introduction.mdx
index bb9e9b57..70b7918c 100644
--- a/docs/03-github-orchestrator/01-introduction.mdx
+++ b/docs/03-github-orchestrator/01-introduction.mdx
@@ -1,100 +1,109 @@
# Introduction
-## Concept - What Does Orchestrator Do
-
-**Orchestrator enables you to run, build and test (Unity) projects in the cloud. You can start jobs
-from the command line, the "Workbench" GUI in the Unity Editor or from GitHub Actions.**
-
-**Orchestrator will automatically provision an environment at a Cloud Provider such as GCP and AWS.
-It will then send the project to be built and/or tested depending on your workflow configuration.**
-
-**Orchestrator is especially useful for game development because it supports large projects.
-Orchestrator provides first-class support for the Unity game engine.**
-
-Orchestrator uses git to track and syncronize your projects and uses native cloud services such as
-AWS Fargate and Kubernetes to run your jobs. Other version control systems are not actively
-supported.
+## What Does Orchestrator Do?
+
+**Orchestrator is an advanced build layer on top of
+[Game CI unity-builder](https://github.com/game-ci/unity-builder).** It takes whatever machines you
+give it and provides the flexibility, control, and tools to manage all your build workflows across
+them. Instead of running builds directly on your CI runner, Orchestrator dispatches them to any
+infrastructure you choose, from a local machine to a Kubernetes cluster. While Unity is the built-in
+default, Orchestrator is engine agnostic. Godot, Unreal, and custom engines can plug in via the
+[engine plugin system](advanced-topics/engine-plugins). Start jobs from GitHub Actions, the command
+line, or any CI system.
+
+```mermaid
+flowchart LR
+ subgraph trigger["Trigger"]
+ A["GitHub Actions\nGitLab CI\nCLI"]
+ end
+ subgraph orchestrator["Orchestrator"]
+ B["Provision\nSync\nCache\nBuild\nCleanup"]
+ end
+ subgraph target["Build Target"]
+ C["AWS Fargate\nKubernetes\nDocker\nYour Hardware"]
+ end
+ A --> B --> C
+ C -- "artifacts + logs" --> A
+```
+
+:::info Built-in and Standalone
+
+The orchestrator is built into [`game-ci/unity-builder`](https://github.com/game-ci/unity-builder)
+and activates automatically when you set `providerStrategy`. It is also available as a
+[standalone CLI](https://github.com/game-ci/orchestrator) for use outside GitHub Actions.
+
+:::
## Why Orchestrator?
-1. **Orchestrator is flexible and elastic**
- 1. _You can balance your use of high-performance and cost-saving modes._ Configurable cost/speed
- effeciency
- 2. _Works great for projects of almost any size, from AAA projects and assets to micro projects_
- 3. _Extended configuration options._ More control over disk size, memory and CPU
- 4. _Easily scale all job resources as your project grows_ e.g. storage, CPU and memory
-2. **Scale fully on demand from zero (not in use) to many concurrent jobs.** Benefits from
- "pay-for-what-you-use" cloud billing models. We have made an effort to make sure that it costs
- you nothing (aside from artifact and cache storage) while there are no builds running (no
- guarantees)
-3. **Easy setup and configuration**
-4. **Run custom jobs** and extend the system for any workload
-
-## Why not orchestrator?
-
-1. Your project is small in size. Below 5GB Orchestrator should not be needed.
-2. You already have dedicated servers running you can use.
-
-Although the speed of a CI pipelines is an important metric to consider, there are real challenges
-for game development pipelines.
-
-This solution prefers convenience, ease of use, scalability, throughput and flexibility.
-
-Faster solutions exist, but would all involve self-hosted hardware with an immediate local cache of
-the large project files and working directory and a dedicated server.
-
-# Orchestrator Release Status
-
-Orchestrator is in "active development" ⚠️🔨
-
-Orchestrator overall release status: `preview` This means some APIs may change, features are still
-being added but the minimum feature set works and is stable.
-
-Release Stages: `experimental` ➡️ `preview` ➡️ `full release`
-
-You must use a provider with Orchestrator, each provider's release status is described below. This
-indicates the stability and support for orchestrator features and workflows.
-
-### Supported Orchestrator Platforms
-
-| Cloud Provider Platform | Release Status |
-| ----------------------- | ------------------ |
-| Kubernetes | ✔️ preview release |
-| AWS | ✔️ full release |
-| GCP | ⚠ Considered |
-| Azure | ⚠ Considered |
-
-_Note for Kubernetes support:_ _Usually the cluster needs to be up and running at all times, as
-starting up a cluster is slow._ _Use Google Cloud's Kubernetes Autopilot you can scale down to the
-free tier automatically while not in use._ _Kubernetes support currently requires cloud storage,
-only S3 support is built-in._
-
-| Git Platform | Release Status |
-| --------------------- | ------------------ |
-| GitHub | ✔️ full release |
-| GitLab | ✔️ preview release |
-| Command Line | ✔️ preview release |
-| Any Git repository | ✔️ preview release |
-| Any Git automation/Ci | ✔️ preview release |
+Orchestrator benefits projects of any size. Even small projects gain access to configurable
+resources, caching, and cost-efficient scaling. Larger projects get retained workspaces, automatic
+failover, and multi-provider load balancing.
+
+| Benefit | What it means |
+| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
+| **Configurable resources** | Set CPU, memory, and disk per build instead of accepting CI runner defaults |
+| **Scale from zero** | No idle servers. Cloud providers provision on demand and tear down when done |
+| **Retained workspaces** | Cache the entire project folder across builds for faster rebuilds on large projects |
+| **Automatic caching** | Unity Library, LFS objects, and build output cached to S3 or 70+ backends via rclone |
+| **Provider failover** | Automatically route to a fallback provider when the primary is unavailable or overloaded |
+| **Extensible** | Run [custom hooks](advanced-topics/hooks/container-hooks), middleware, or your own [provider plugin](providers/custom-providers) |
+| **Engine agnostic** | Built-in Unity support with a plugin system for [other engines](advanced-topics/engine-plugins) (Godot, Unreal, custom) |
+| **Self-hosted friendly** | Complements self-hosted runners with automatic fallback, load balancing, and runner availability checks |
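+
+Several of these benefits map directly onto workflow parameters. A minimal sketch (the parameter
+values here are illustrative, not prescriptive; see the [API Reference](api-reference) for the
+authoritative list):
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws # scale-from-zero cloud builds
+    targetPlatform: StandaloneLinux64
+    gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+    containerCpu: 2048 # configurable resources
+    containerMemory: 8192
+    containerHookFiles: aws-s3-upload-build # extensible hooks
+```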
+
+## When You Might Not Need It
+
+- Your project fits comfortably on standard GitHub runners and you don't need caching, hooks, or
+ custom resources
+- You already have a fully managed build pipeline that meets your needs
+
+See [Standard Game-CI vs Orchestrator](game-ci-vs-orchestrator) for a detailed comparison.
+
+## What Orchestrator Handles
+
+Orchestrator manages the full build lifecycle so you don't have to script it yourself:
+
+- **Provisioning** - creates cloud resources (CloudFormation stacks, Kubernetes jobs, Docker
+ containers) and tears them down after the build
+- **Git sync** - clones your repo with configurable depth, pulls LFS objects, initializes
+ submodules, and handles SSH/HTTP auth
+- **Caching** - persists the Unity Library folder, LFS objects, and build output across builds using
+ S3 or rclone
+- **Hooks** - inject custom containers or shell commands at any point in the build lifecycle with
+ phase, provider, and platform filtering
+- **Secrets** - pulls secrets from AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or
+ HashiCorp Vault and injects them as environment variables
+- **Logging** - streams structured build logs in real time via Kinesis (AWS), kubectl (K8s), or
+  stdout (local)
+- **Cleanup** - removes cloud resources, temporary files, and expired caches automatically
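+
+The phases above run in a fixed order for every job. A pseudocode sketch of that lifecycle (not the
+actual implementation):
+
+```
+provision(provider)          # CloudFormation stack, Kubernetes job, or Docker container
+sync(repo, lfs, submodules)  # clone at configured depth, pull LFS, init submodules
+restore_cache()              # Unity Library, LFS objects from S3/rclone
+run_hooks(phase="pre-build")
+build(target_platform)
+run_hooks(phase="post-build")
+save_cache()
+cleanup(provider)            # tear down resources, temp files, expired caches
+```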
+
+## Supported Providers
+
+| Provider | Description |
+| -------------------------------------- | -------------------------------------------------------- |
+| [AWS Fargate](providers/aws) | Fully managed containers on AWS. No servers to maintain. |
+| [Kubernetes](providers/kubernetes) | Run on any Kubernetes cluster. |
+| [Local Docker](providers/local-docker) | Docker containers on the local machine. |
+| [Local](providers/local) | Direct execution on the host machine. |
+
+See [Providers](providers/overview) for the full list including
+[GCP Cloud Run](providers/gcp-cloud-run), [Azure ACI](providers/azure-aci),
+[custom](providers/custom-providers), and [community](providers/community-providers) providers.
+
+## Supported Platforms
+
+| Platform | Description |
+| ---------------------------------------------- | ------------------------------------- |
+| [GitHub Actions](providers/github-integration) | First-class support with Checks API. |
+| [GitLab CI](providers/gitlab-integration) | Via the command line mode. |
+| [Command Line](examples/command-line) | Run from any terminal or script. |
+| Any CI system | Anything that can run shell commands. |
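+
+For the "any CI system" row, integration means invoking the standalone CLI from a shell step; the
+flags below are taken from the CLI examples elsewhere in these docs:
+
+```bash
+game-ci build \
+  --target-platform StandaloneLinux64 \
+  --provider-strategy aws
+```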
## External Links
-### Orchestrator Releases
-
-[Game CI Releases - GitHub](https://github.com/game-ci/unity-builder/releases) _Packaged and
-released with game-ci's unity-builder module._
-
-### Open Incoming Pull Requests
-
-[Orchestrator PRs - GitHub](https://github.com/game-ci/unity-builder/pulls?q=is%3Apr+orchestrator)
-
-### 💬Suggestions and 🐛Bugs (GitHub Issues):
-
-[Game CI Issues - GitHub](https://github.com/game-ci/unity-builder/labels/orchestrator)
-
-### Community
-
-**Share your feedback with us!**
-
-- [**Discord Channel**](https://discord.com/channels/710946343828455455/789631903157583923)
-- [**Feedback Form**](https://forms.gle/3Wg1gGf9FnZ72RiJ9)
+- [Orchestrator Repository](https://github.com/game-ci/orchestrator) - standalone orchestrator
+ package
+- [Releases](https://github.com/game-ci/orchestrator/releases) - orchestrator releases
+- [Pull Requests](https://github.com/game-ci/orchestrator/pulls) - open orchestrator PRs
+- [Issues](https://github.com/game-ci/orchestrator/issues) - bugs and feature requests
+- [Discord](https://discord.com/channels/710946343828455455/789631903157583923) - community chat
diff --git a/docs/03-github-orchestrator/02-game-ci-vs-orchestrator.mdx b/docs/03-github-orchestrator/02-game-ci-vs-orchestrator.mdx
index 15939fcc..41c8e600 100644
--- a/docs/03-github-orchestrator/02-game-ci-vs-orchestrator.mdx
+++ b/docs/03-github-orchestrator/02-game-ci-vs-orchestrator.mdx
@@ -1,42 +1,59 @@
-# Game-CI vs Orchestrator
-
-# Standard Game-CI (Use Cases)
-
-The Game CI core is a maintained set of docker images that can be used to run workloads in many
-scenarios.
-
-Game CI also provides specific GitHub actions for running workflows on GitHub. And a similar
-workflow for running Game CI on GitLab and Circle CI. _All of these options use the build server
-resources provided by those systems, this can be a constraint or very convenient depending on the
-size of your project and the workloads you need to run._
-
-# Orchestrator (Use Cases)
-
-## Sending Builds to the cloud
-
-You may want to take advantage of cloud resources for lots of reasons (scale, speed, cost,
-flexibility) or may want to start remote builds from the command line without slowing down your
-development machine. Orchestrator can help you do this.
-
-This may be a preference, more efficient, or you may want to use systems that struggle to handle
-large game development projects (GitHub being a good example).
-
-### Large GitHub Projects
-
-GitHub Actions by default run on
-[build machines provided by GitHub](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners).
-For Unity projects the available disk size is quite small. You may experience an error related to
-running out of disk space. You may also want to run the build on a server with more memory or
-processing resources.
-
-### GitHub Self-Hosted Runners vs Game CI Orchestrator
-
-_GitHub users can consider:
-[GitHub self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners)
-and Orchestrator. Both can enable you to build larger projects._
-
-_Orchestrator is better if you don't have a server setup or don't want to manage or maintain your
-own build server._
-
-_Self-hosted runners are best used when you already have a server available, running 24/7 that you
-can setup as a runner. And you're happy to maintain and keep that server available and running._
+---
+sidebar_label: Game-CI vs Orchestrator
+---
+
+# Standard Game-CI vs Orchestrator Mode
+
+## Standard Game-CI
+
+Game CI provides Docker images and GitHub Actions for running Unity workflows on the build server
+resources provided by your CI platform (GitHub, GitLab, Circle CI).
+
+**Best for:** Projects that fit within your CI runner's resource limits and don't need advanced
+caching, hooks, or multi-provider routing.
+
+## Orchestrator Mode
+
+Orchestrator is an advanced layer on top of Game CI. It takes whatever machines you give it and
+manages the full build lifecycle across them: provisioning, git sync, caching, hooks, and cleanup.
+Whether you're running on a cloud provider, a local machine, or your own servers, Orchestrator gives
+you the tools and flexibility to manage your workflows.
+
+Projects of any size can benefit from orchestrator features like configurable resources, automatic
+caching, and extensible hooks. Larger projects additionally benefit from retained workspaces,
+provider failover, and load balancing.
+
+```mermaid
+flowchart LR
+ subgraph Standard Game-CI
+ GR["GitHub Runner\n(builds locally)\n\n~14 GB disk\nFixed resources"]
+ end
+ subgraph Orchestrator Mode
+ GA["GitHub Action\nCLI / Any CI\n(dispatches only)\n\nConfigurable CPU, memory, disk\nScales to zero when idle"]
+ CC["Build Target\n(any machine)"]
+ GA <--> CC
+ end
+```
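+
+In orchestrator mode, the CI step itself only dispatches. A hedged sketch of such a step in GitHub
+Actions (values are illustrative):
+
+```yaml
+# The runner dispatches; the build itself runs on the target machine
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws # or k8s, local-docker
+    targetPlatform: StandaloneLinux64
+    gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```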
+
+## Self-Hosted Runners + Orchestrator
+
+Self-hosted runners and orchestrator are not mutually exclusive. Orchestrator **complements**
+self-hosted runners by adding automatic failover, load balancing, and runner availability checks.
+
+| | Self-Hosted Runners Alone | Self-Hosted + Orchestrator |
+| ------------------ | ----------------------------------- | ------------------------------------------------------ |
+| **Failover** | Manual intervention if server fails | Automatic fallback to cloud when runner is unavailable |
+| **Load balancing** | Fixed capacity | Overflow to cloud during peak demand |
+| **Caching** | Local disk only | S3/rclone-backed caching with retained workspaces |
+| **Hooks** | Custom scripting | Built-in middleware pipeline with lifecycle hooks |
+| **Maintenance** | You manage everything | Orchestrator handles provisioning, sync, and cleanup |
+
+## Choosing Your Setup
+
+| Scenario | Recommendation |
+| ---------------------------------------------- | ----------------------------------------------------------------- |
+| Small project, standard runners work fine | Standard Game-CI |
+| Need configurable resources or caching | Orchestrator with any provider |
+| Large project, no existing servers | Orchestrator with AWS Fargate or Kubernetes |
+| Existing self-hosted runners, want reliability | Orchestrator with self-hosted primary + cloud fallback |
+| Want to test orchestrator locally before cloud | Orchestrator with [Local Docker](providers/local-docker) provider |
diff --git a/docs/03-github-orchestrator/02-getting-started.mdx b/docs/03-github-orchestrator/02-getting-started.mdx
index f14c1a59..4e65a6ca 100644
--- a/docs/03-github-orchestrator/02-getting-started.mdx
+++ b/docs/03-github-orchestrator/02-getting-started.mdx
@@ -1,34 +1,159 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
# Getting Started
-Orchestrator lets you run Unity builds on remote cloud infrastructure instead of GitHub-hosted
-runners. This is useful for large projects that exceed GitHub's disk or resource limits.
+Orchestrator lets you run Unity builds on whatever machines you have: cloud, on-premises, or your
+local machine. It works as a built-in plugin for `game-ci/unity-builder` in GitHub Actions, or as a
+standalone CLI tool.
## Prerequisites
-- A Unity project in a GitHub repository
-- A cloud provider account (AWS or a Kubernetes cluster)
-- Provider credentials configured as GitHub secrets
+- A Unity project
+- A cloud provider account (AWS or a Kubernetes cluster) or Docker installed locally
+- Provider credentials (as GitHub secrets or environment variables)
-## Quick Start
+## Choose a Provider
-1. **Choose a provider**: `aws` (AWS Fargate) or `k8s` (Kubernetes)
-2. **Configure credentials** for your chosen provider
-3. **Add the orchestrator step** to your workflow
+Pick the provider that fits your setup. Each tab shows a minimal quick-start snippet.
-See the provider-specific examples for complete setup:
+<Tabs>
+<TabItem value="aws" label="AWS Fargate">
-- [AWS Example](examples/github-examples/aws)
-- [Kubernetes Example](examples/github-examples/kubernetes)
-- [Command Line](examples/command-line)
+**Easiest option — fully managed, no servers to maintain.**
-## Minimal Example
+Set up your AWS credentials as GitHub secrets (`AWS_ROLE_ARN` for OIDC or `AWS_ACCESS_KEY_ID` /
+`AWS_SECRET_ACCESS_KEY` for static keys), then use `providerStrategy: aws`:
```yaml
-- uses: game-ci/unity-builder@main
+- name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
+ aws-region: eu-west-2
+
+- uses: game-ci/unity-builder@v4
with:
providerStrategy: aws
targetPlatform: StandaloneLinux64
gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
```
-For full parameter documentation, see the [API Reference](api-reference).
+See the [AWS provider page](providers/aws) for full setup.
+
+</TabItem>
+<TabItem value="k8s" label="Kubernetes">
+
+**Best if you already have a cluster.**
+
+Base64-encode your kubeconfig and store it as a GitHub secret (`KUBE_CONFIG`):
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: k8s
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ env:
+ kubeConfig: ${{ secrets.KUBE_CONFIG }}
+```
+
+See the [Kubernetes provider page](providers/kubernetes) for cluster tips.
+
+</TabItem>
+<TabItem value="local-docker" label="Local Docker">
+
+**No cloud account needed — runs on your own machine.**
+
+Requires Docker and a self-hosted GitHub Actions runner:
+
+```yaml
+# runs-on: self-hosted
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local-docker
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+See the [Local Docker provider page](providers/local-docker) for details.
+
+</TabItem>
+</Tabs>
+
+See [Providers](providers/overview) for the full list and detailed setup guides.
+
+## GitHub Actions
+
+When you set `providerStrategy` in `game-ci/unity-builder`, the orchestrator activates
+automatically. No separate install step is needed.
+
+### Basic example
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+### Full workflow example
+
+```yaml
+name: Build
+on: push
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - uses: game-ci/unity-builder@v4
+ env:
+ UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
+ UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
+ UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+## Command Line
+
+### 1. Install
+
+```bash
+# Linux / macOS
+curl -fsSL https://raw.githubusercontent.com/game-ci/orchestrator/main/install.sh | sh
+
+# Windows (PowerShell)
+irm https://raw.githubusercontent.com/game-ci/orchestrator/main/install.ps1 | iex
+```
+
+### 2. Set credentials
+
+```bash
+export UNITY_SERIAL="XX-XXXX-XXXX-XXXX-XXXX-XXXX"
+export AWS_ACCESS_KEY_ID="..."
+export AWS_SECRET_ACCESS_KEY="..."
+```
+
+### 3. Run a build
+
+```bash
+game-ci build \
+ --target-platform StandaloneLinux64 \
+ --provider-strategy aws
+```
+
+See the [CLI documentation](cli/getting-started) for full details.
+
+## Next Steps
+
+- [Provider setup guides](providers/overview) - configure your cloud credentials
+- [API Reference](api-reference) - full parameter documentation
+- [Examples](examples/github-actions) - more workflow examples
+- [CLI Reference](cli/build-command) - full CLI flag reference
diff --git a/docs/03-github-orchestrator/03-examples/01-command-line.mdx b/docs/03-github-orchestrator/03-examples/01-command-line.mdx
index 5ada128f..b6a4a611 100644
--- a/docs/03-github-orchestrator/03-examples/01-command-line.mdx
+++ b/docs/03-github-orchestrator/03-examples/01-command-line.mdx
@@ -1,49 +1,91 @@
# Command Line
-_Preview Support Only_
+You can run orchestrator builds directly from your terminal - no GitHub Actions or CI platform
+required. All parameters in the [API Reference](../api-reference) can be specified as CLI flags.
-You can install Game CI locally and start orchestrator jobs from the command line or by integrating
-your own tools. All parameters in [API Reference](../api-reference) can be specified as command line
-input fields.
+## Install
-# Install
+### Linux / macOS
-Currently (development)
+```bash
+curl -fsSL https://raw.githubusercontent.com/game-ci/orchestrator/main/install.sh | sh
+```
+
+### Windows (PowerShell)
+
+```powershell
+irm https://raw.githubusercontent.com/game-ci/orchestrator/main/install.ps1 | iex
+```
+
+Pre-built binaries for every platform are available on the
+[GitHub Releases](https://github.com/game-ci/orchestrator/releases) page.
+
+## Examples
+
+### Local build
```bash
-git clone https://github.com/game-ci/unity-builder.git
-yarn install
-yarn run cli -m {mode parameter} --projectPath {Your project path} {... other command line parameters}
+game-ci build \
+ --target-platform StandaloneLinux64
```
-# Planned (does not work currently)
+### Cloud build (AWS)
+
+```bash
+game-ci build \
+ --target-platform StandaloneLinux64 \
+ --provider-strategy aws \
+ --git-private-token $GIT_TOKEN
+```
-We plan to offer support for Game CI via Deno. This will enable fast, TypeScript native runtime, and
-you will be able to access this via the following:
+### Cloud build (Kubernetes)
```bash
-dpx game-ci build
+game-ci build \
+ --target-platform StandaloneLinux64 \
+ --provider-strategy k8s \
+ --git-private-token $GIT_TOKEN
```
-# Help
+### List active resources
-_You can run `yarn run cli -h` or `yarn run cli --help` to list all modes and paramters with
-descriptions_
+```bash
+game-ci list-resources --provider-strategy aws
+```
+
+### Watch a running build
+
+```bash
+game-ci watch --provider-strategy aws
+```
-# Main Command Parameters
+### Clean up old resources
-- Default: `cli` (runs a standard build workflow)
-- See API Reference "Modes"
+```bash
+game-ci garbage-collect --provider-strategy aws
+```
-# Keeping command line parameters short
+Cleans up stale cloud resources (AWS only): stops old ECS tasks, deletes job CloudFormation stacks,
+and removes old CloudWatch log groups. Resources are deleted immediately without confirmation. Use
+`--garbage-max-age` to control the age threshold (default: 24 hours).
-You can avoid specifying long command line input for credentials by using environment variables or
-[the input override feature](../advanced-topics/configuration-override#example) to shorten commands
-significantly.
+## Keeping Commands Short
-This enables you to provide a command to pull input, e.g. you can pull from a file or from a secret
-manager.
+Avoid long CLI flags for credentials by using environment variables or the
+[Pull Secrets](../secrets#-pulling-secrets-from-external-sources) feature:
```bash
-yarn run cli --populateOverride true --pullInputList UNITY_EMAIL,UNITY_SERIAL,UNITY_PASSWORD --inputPullCommand="gcloud secrets versions access 1 --secret=\"{0}\""
+game-ci build \
+ --target-platform StandaloneLinux64 \
+ --populate-override true \
+ --pull-input-list UNITY_EMAIL,UNITY_SERIAL,UNITY_PASSWORD \
+ --input-pull-command='gcloud secrets versions access 1 --secret="{0}"'
```
+
+## Further Reading
+
+- [CLI Getting Started](../cli/getting-started) - installation options, license activation, first
+ build walkthrough
+- [Build Command Reference](../cli/build-command) - full list of build flags
+- [Orchestrate Command](../cli/orchestrate-command) - cloud-specific options
+- [Other Commands](../cli/other-commands) - license activation, cache management, and more
diff --git a/docs/03-github-orchestrator/03-examples/02-github-actions.mdx b/docs/03-github-orchestrator/03-examples/02-github-actions.mdx
new file mode 100644
index 00000000..d6441dcb
--- /dev/null
+++ b/docs/03-github-orchestrator/03-examples/02-github-actions.mdx
@@ -0,0 +1,349 @@
+# GitHub Actions
+
+Orchestrator has first-class GitHub Actions support. This page shows complete, copy-paste workflow
+files for every provider.
+
+## 🔑 Prerequisites
+
+1. A Unity project in a GitHub repository
+2. Provider credentials stored as
+ [GitHub Actions secrets](https://docs.github.com/en/actions/security/encrypted-secrets)
+3. A `UNITY_LICENSE` or activation secret (see the
+ [Game CI activation docs](https://game.ci/docs/github/activation))
+
+## Minimal Workflow
+
+The simplest possible Orchestrator workflow. Uses AWS Fargate with default CPU and memory.
+
+```yaml
+name: Build with Orchestrator
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ permissions:
+ id-token: write
+ contents: read
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
+ # For static keys fallback, use aws-access-key-id/aws-secret-access-key instead
+ aws-region: eu-west-2
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+## ☁️ AWS Fargate
+
+Full workflow with custom CPU/memory, S3 artifact export, and GitHub Checks.
+
+```yaml
+name: Orchestrator - AWS Fargate
+
+on:
+ push:
+ branches: [main, develop]
+ pull_request:
+ branches: [main]
+
+jobs:
+ build:
+ name: Build (${{ matrix.targetPlatform }})
+ runs-on: ubuntu-latest
+ permissions:
+ id-token: write
+ contents: read
+ strategy:
+ fail-fast: false
+ matrix:
+ targetPlatform:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ - StandaloneOSX
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
+ # For static keys fallback, use aws-access-key-id/aws-secret-access-key instead
+ aws-region: eu-west-2
+
+ - uses: game-ci/unity-builder@v4
+ id: build
+ with:
+ providerStrategy: aws
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ unityVersion: 2022.3.0f1
+ containerCpu: 2048
+ containerMemory: 8192
+ # Export build artifacts to S3:
+ containerHookFiles: aws-s3-upload-build
+ # Show build progress as GitHub Checks:
+ githubCheck: true
+```
+
+### Required Secrets
+
+| Secret | Description |
+| -------------- | ----------------------------------------------------------------------------- |
+| `AWS_ROLE_ARN` | IAM role ARN for OIDC with ECS, CloudFormation, S3, Kinesis, CloudWatch perms |
+
+See the [AWS provider page](../providers/aws) for allowed CPU/memory combinations and full setup.
+
+## ☸️ Kubernetes
+
+Full workflow targeting a Kubernetes cluster.
+
+```yaml
+name: Orchestrator - Kubernetes
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ name: Build (${{ matrix.targetPlatform }})
+ runs-on: ubuntu-latest
+ strategy:
+ fail-fast: false
+ matrix:
+ targetPlatform:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ env:
+ kubeConfig: ${{ secrets.KUBE_CONFIG }}
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ id: build
+ with:
+ providerStrategy: k8s
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ unityVersion: 2022.3.0f1
+ kubeVolumeSize: 10Gi
+ containerCpu: 1024
+ containerMemory: 4096
+ containerHookFiles: aws-s3-upload-build
+ githubCheck: true
+```
+
+### Required Secrets
+
+| Secret | Description |
+| ------------- | ------------------------------- |
+| `KUBE_CONFIG` | Base64-encoded kubeconfig file. |
+
+Generate it with:
+
+```bash
+base64 -w 0 ~/.kube/config # on macOS, use: base64 -i ~/.kube/config
+```
+
+See the [Kubernetes provider page](../providers/kubernetes) for cluster tips and full setup.
+
+## 🐳 Local Docker (Self-Hosted Runner)
+
+Run builds in Docker on your own machine. No cloud account needed.
+
+```yaml
+name: Orchestrator - Local Docker
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: self-hosted
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local-docker
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+Requires Docker installed on the self-hosted runner.
+
+## ⏳ Async Mode
+
+For long builds, use async mode so the GitHub Action returns immediately. Monitor progress via
+GitHub Checks.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ asyncOrchestrator: true
+ githubCheck: true
+```
+
+The build runs in the background. Check progress from the **Checks** tab on your pull request.
+
+See [GitHub Integration](../providers/github-integration) for more on async mode and GitHub Checks.
+
+## 🗑️ Scheduled Garbage Collection
+
+Add a scheduled workflow to clean up stale cloud resources. Useful as a safety net alongside the
+automatic cleanup cron.
+
+```yaml
+name: Orchestrator - Garbage Collect
+
+on:
+ schedule:
+ - cron: '0 4 * * *' # Daily at 4 AM UTC
+
+jobs:
+ cleanup:
+ runs-on: ubuntu-latest
+ permissions:
+ id-token: write
+ contents: read
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
+ # For static keys fallback, use aws-access-key-id/aws-secret-access-key instead
+ aws-region: eu-west-2
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ mode: garbage-collect
+ garbageMaxAge: 24
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+See [Garbage Collection](../advanced-topics/garbage-collection) for details.
+
+## 📦 Multi-Platform Matrix Build
+
+Build for multiple platforms in parallel. Each platform runs as a separate Orchestrator job.
+
+```yaml
+name: Orchestrator - Multi-Platform
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ name: Build ${{ matrix.targetPlatform }}
+ runs-on: ubuntu-latest
+ permissions:
+ id-token: write
+ contents: read
+ strategy:
+ fail-fast: false
+ matrix:
+ targetPlatform:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ - StandaloneOSX
+ - iOS
+ - Android
+ - WebGL
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
+ # For static keys fallback, use aws-access-key-id/aws-secret-access-key instead
+ aws-region: eu-west-2
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ containerCpu: 2048
+ containerMemory: 8192
+ containerHookFiles: aws-s3-upload-build
+ githubCheck: true
+```
+
+## 🔁 Retained Workspaces for Faster Rebuilds
+
+For large projects, keep the entire project folder cached between builds. This dramatically speeds
+up rebuilds at the cost of extra storage.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ maxRetainedWorkspaces: 3
+ containerCpu: 2048
+ containerMemory: 8192
+```
+
+See [Retained Workspaces](../advanced-topics/retained-workspace) and
+[Caching](../advanced-topics/caching) for details on storage strategies.
+
+## 🪝 Container Hooks - S3 Upload + Steam Deploy
+
+Chain multiple container hooks to export builds to S3 and deploy to Steam in a single workflow.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ containerHookFiles: aws-s3-upload-build,steam-deploy-client
+ env:
+ STEAM_USERNAME: ${{ secrets.STEAM_USERNAME }}
+ STEAM_PASSWORD: ${{ secrets.STEAM_PASSWORD }}
+ STEAM_APPID: ${{ secrets.STEAM_APPID }}
+```
+
+See [Built-In Hooks](../advanced-topics/hooks/built-in-hooks) for all available hooks (S3, rclone,
+Steam).
+
+## 🔗 Reference
+
+- [API Reference](../api-reference) - full list of all parameters
+- [Providers](../providers/overview) - setup guides for each provider
+- [Secrets](../secrets) - how credentials are transferred to build containers
+- [Real-world pipeline](https://github.com/game-ci/unity-builder/blob/main/.github/workflows/orchestrator-integrity.yml)
+ - Game CI's own Orchestrator test pipeline
diff --git a/docs/03-github-orchestrator/03-examples/02-github-examples/02-aws.mdx b/docs/03-github-orchestrator/03-examples/02-github-examples/02-aws.mdx
deleted file mode 100644
index 3c18f3e3..00000000
--- a/docs/03-github-orchestrator/03-examples/02-github-examples/02-aws.mdx
+++ /dev/null
@@ -1,71 +0,0 @@
-# AWS
-
-## Requirements
-
-- You must have an AWS account setup and ready to create resources.
-- Create a service account and generate an AWS access key and key id.
-
-## AWS Credentials
-
-Setup the following as `env` variables for orchestrator to use:
-
-- `AWS_ACCESS_KEY_ID`
-- `AWS_SECRET_ACCESS_KEY`
-- `AWS_DEFAULT_REGION` (should be the same AWS region as the base stack e.g `eu-west-2`)
-
-If you're using GitHub you can use a GitHub Action:
-
-```yaml
-- name: Configure AWS Credentials
- uses: aws-actions/configure-aws-credentials@v4
- with:
- aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
- aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
- aws-region: eu-west-2
-```
-
-_Note:_ _This enables Orchestrator access AWS._
-
-## Configuration For AWS Orchestrator Jobs
-
-Refer to [Configuration page](../../api-reference) or the [example below](#example).
-
-### Allowed CPU/Memory Combinations
-
-There are some limitations to the CPU and Memory parameters. AWS will only accept the following
-combinations:
-[AWS Fargate Documentation, Allowed CPU and memory values (Task Definitions)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#task_size)
-
-#### Summary Of Format
-
-- Values are represented as 1024:1 GB or CPU.
-- Do not include the vCPU or GB suffix.
-- 1 CPU can go to a max of 6 GB of memory. 2 CPU's are required to go higher.
-
-#### Valid CPU and Memory Values
-
-```yaml
-- orchestratorMemory: 4096
-- orchestratorCpu: 1024
-```
-
-## Example
-
-```yaml
-- uses: game-ci/unity-builder@orchestrator-develop
- id: aws-fargate-unity-build
- with:
- providerStrategy: aws
- versioning: None
- projectPath: `your path here`
- unityVersion: `unity version here`
- targetPlatform: ${{ matrix.targetPlatform }}
- gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
- # You may want to export your builds somewhere external so you can access it:
- containerHookFiles: aws-s3-upload-build
-```
-
-_[Custom Steps](../../advanced-topics/custom-hooks/container-hooks)_
-
-A full workflow example can be seen in builder's
-[Orchestrator GitHub sourcecode for GitHub Pipeline](https://github.com/game-ci/unity-builder/blob/309d668d637ae3e7ffe90d61612968db92e1e376/.github/workflows/orchestrator-pipeline.yml#L109).
diff --git a/docs/03-github-orchestrator/03-examples/02-github-examples/03-kubernetes.mdx b/docs/03-github-orchestrator/03-examples/02-github-examples/03-kubernetes.mdx
deleted file mode 100644
index a71f41a8..00000000
--- a/docs/03-github-orchestrator/03-examples/02-github-examples/03-kubernetes.mdx
+++ /dev/null
@@ -1,51 +0,0 @@
-# Kubernetes
-
-## Requirements
-
-- You must have a Kubernetes cluster setup and ready that supports persistent volumes.
-- Create a kubeconfig and encode it as base64.
-
-## K8s Credentials
-
-Setup the following as `env` variables for the GitHub build step:
-
-- `kubeConfig` (should be encoded as base64)
-
-## Configuration For Kubernetes Orchestrator Jobs
-
-Refer to [Configuration page](../../api-reference) or the [example below](#example).
-
-### Allowed CPU/Memory Combinations
-
-- `0.25 vCPU` - 0.5 GB, 1 GB, 2 GB
-- `0.5 vCPU` - 1 GB, 2 GB, 3 GB, 4 GB
-- `1 vCPU` - 2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB
-- `2 vCPU` - Between 4 GB and 16 GB in 1-GB increments
-- `4 vCPU` - Between 8 GB and 30 GB in 1-GB increments
-
-#### Summary Of Format
-
-- Values are represented as 1024:1 GB or CPU.
-
-Do not include the vCPU or GB suffix.
-
-### Example
-
-```yaml
-- uses: game-ci/unity-builder@orchestrator-develop
- id: k8s-unity-build
- with:
- providerStrategy: k8s
- versioning: None
- projectPath: `your path here`
- unityVersion: `unity version here`
- targetPlatform: ${{ matrix.targetPlatform }}
- gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
- # You may want to export your builds somewhere external so you can access it:
- containerHookFiles: aws-s3-upload-build
-```
-
-_[Custom Steps](../../advanced-topics/custom-hooks/container-hooks)_
-
-A full workflow example can be seen in builder's
-[Orchestrator GitHub sourcecode for AWS Pipeline](https://github.com/game-ci/unity-builder/blob/main/.github/workflows/orchestrator-k8s-pipeline.yml).
diff --git a/docs/03-github-orchestrator/03-examples/03-aws.mdx b/docs/03-github-orchestrator/03-examples/03-aws.mdx
new file mode 100644
index 00000000..569fbeab
--- /dev/null
+++ b/docs/03-github-orchestrator/03-examples/03-aws.mdx
@@ -0,0 +1,231 @@
+# AWS Examples
+
+Complete workflow examples for running Unity builds on AWS Fargate via Orchestrator.
+
+## Minimal AWS Build
+
+The simplest AWS workflow, using the default CPU and memory.
+
+```yaml
+name: Build with Orchestrator (AWS)
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ aws-region: eu-west-2
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+## Multi-Platform Matrix Build
+
+Build for multiple platforms in parallel. Each platform runs as a separate Fargate task.
+
+```yaml
+name: Orchestrator - AWS Multi-Platform
+
+on:
+ push:
+ branches: [main, develop]
+ pull_request:
+ branches: [main]
+
+jobs:
+ build:
+ name: Build (${{ matrix.targetPlatform }})
+ runs-on: ubuntu-latest
+ strategy:
+ fail-fast: false
+ matrix:
+ targetPlatform:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ - StandaloneOSX
+ - iOS
+ - Android
+ - WebGL
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ aws-region: eu-west-2
+
+ - uses: game-ci/unity-builder@v4
+ id: build
+ with:
+ providerStrategy: aws
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ unityVersion: 2022.3.0f1
+ containerCpu: 2048
+ containerMemory: 8192
+ containerHookFiles: aws-s3-upload-build
+ githubCheck: true
+```
+
+## Custom Resources and S3 Artifacts
+
+Specify CPU/memory and export build artifacts to S3.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ id: aws-build
+ with:
+ providerStrategy: aws
+ versioning: None
+ projectPath: path/to/your/project
+ unityVersion: 2022.3.0f1
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ containerCpu: 2048
+ containerMemory: 8192
+ # Export builds to S3:
+ containerHookFiles: aws-s3-upload-build
+```
+
+### Valid CPU/Memory Combinations
+
+AWS Fargate only accepts specific combinations (`1024 = 1 vCPU`, memory in MB):
+
+| CPU (`containerCpu`) | Memory (`containerMemory`) |
+| -------------------- | -------------------------- |
+| `256` (0.25 vCPU) | `512`, `1024`, `2048` |
+| `512` (0.5 vCPU) | `1024` – `4096` |
+| `1024` (1 vCPU) | `2048` – `8192` |
+| `2048` (2 vCPU) | `4096` – `16384` |
+| `4096` (4 vCPU) | `8192` – `30720` |
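+
+For example, a 2 vCPU task can pair with any memory value in its range:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws
+    targetPlatform: StandaloneLinux64
+    gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+    containerCpu: 2048 # 2 vCPU
+    containerMemory: 12288 # 12 GB - any value from 4096 to 16384 is valid at 2 vCPU
+```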
+
+## Async Mode
+
+For long builds, use async mode so the GitHub Action returns immediately. Monitor via GitHub Checks.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ asyncOrchestrator: true
+ githubCheck: true
+```
+
+## Retained Workspaces
+
+Keep the entire project cached between builds for dramatically faster rebuilds.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ maxRetainedWorkspaces: 3
+ containerCpu: 2048
+ containerMemory: 8192
+```
+
+## S3 Upload + Steam Deploy
+
+Chain container hooks to export to S3 and deploy to Steam in one step.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ containerHookFiles: aws-s3-upload-build,steam-deploy-client
+ env:
+ STEAM_USERNAME: ${{ secrets.STEAM_USERNAME }}
+ STEAM_PASSWORD: ${{ secrets.STEAM_PASSWORD }}
+ STEAM_APPID: ${{ secrets.STEAM_APPID }}
+```
+
+## Scheduled Garbage Collection
+
+Clean up stale CloudFormation stacks and Fargate tasks.
+
+```yaml
+name: Orchestrator - Garbage Collect
+
+on:
+ schedule:
+ - cron: '0 4 * * *' # Daily at 4 AM UTC
+
+jobs:
+ cleanup:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ aws-region: eu-west-2
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ mode: garbage-collect
+ garbageMaxAge: 24
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+## CLI Usage
+
+Run AWS builds from the command line:
+
+```bash
+game-ci build \
+ --providerStrategy aws \
+ --projectPath /path/to/unity/project \
+ --targetPlatform StandaloneLinux64 \
+ --containerCpu 2048 \
+ --containerMemory 8192
+```
+
+List active AWS resources:
+
+```bash
+game-ci status --providerStrategy aws
+```
+
+## Required Secrets
+
+| Secret | Description |
+| ----------------------- | ---------------------------------------------------------------- |
+| `AWS_ACCESS_KEY_ID`     | IAM access key with ECS, CloudFormation, S3, Kinesis, and CloudWatch permissions |
+| `AWS_SECRET_ACCESS_KEY` | IAM secret key |
+
+## Next Steps
+
+- [AWS Provider Reference](../providers/aws) - architecture, parameters, and setup
+- [Container Hooks](../advanced-topics/hooks/built-in-hooks) - S3, rclone, Steam hooks
+- [Garbage Collection](../advanced-topics/garbage-collection) - automated cleanup
+- [API Reference](../api-reference) - full parameter list
diff --git a/docs/03-github-orchestrator/03-examples/04-kubernetes.mdx b/docs/03-github-orchestrator/03-examples/04-kubernetes.mdx
new file mode 100644
index 00000000..e4a16466
--- /dev/null
+++ b/docs/03-github-orchestrator/03-examples/04-kubernetes.mdx
@@ -0,0 +1,194 @@
+# Kubernetes Examples
+
+Complete workflow examples for running Unity builds on Kubernetes via Orchestrator.
+
+## Minimal Kubernetes Build
+
+The simplest K8s workflow. It requires a running cluster and a base64-encoded kubeconfig.
+
+```yaml
+name: Build with Orchestrator (Kubernetes)
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ env:
+ kubeConfig: ${{ secrets.KUBE_CONFIG }}
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: k8s
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ kubeVolumeSize: 10Gi
+```
+
+## Multi-Platform Matrix Build
+
+Build for multiple platforms in parallel. Each platform runs as a separate Kubernetes Job.
+
+```yaml
+name: Orchestrator - Kubernetes Multi-Platform
+
+on:
+ push:
+ branches: [main, develop]
+ pull_request:
+ branches: [main]
+
+jobs:
+ build:
+ name: Build (${{ matrix.targetPlatform }})
+ runs-on: ubuntu-latest
+ strategy:
+ fail-fast: false
+ matrix:
+ targetPlatform:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ env:
+ kubeConfig: ${{ secrets.KUBE_CONFIG }}
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ id: build
+ with:
+ providerStrategy: k8s
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ unityVersion: 2022.3.0f1
+ kubeVolumeSize: 10Gi
+ containerCpu: 1024
+ containerMemory: 4096
+ containerHookFiles: aws-s3-upload-build
+ githubCheck: true
+```
+
+## Custom Resources and Storage
+
+Specify CPU/memory and persistent volume size for the build workspace.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ id: k8s-build
+ with:
+ providerStrategy: k8s
+ versioning: None
+ projectPath: path/to/your/project
+ unityVersion: 2022.3.0f1
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ kubeVolumeSize: 25Gi
+ kubeStorageClass: gp3
+ containerCpu: 2048
+ containerMemory: 8192
+ containerHookFiles: aws-s3-upload-build
+```
+
+### CPU and Memory
+
+Kubernetes uses the same unit format as AWS (`1024 = 1 vCPU`, memory in MB):
+
+| CPU (`containerCpu`) | Memory (`containerMemory`) |
+| -------------------- | -------------------------- |
+| `256` (0.25 vCPU) | `512`, `1024`, `2048` |
+| `512` (0.5 vCPU) | `1024` – `4096` |
+| `1024` (1 vCPU) | `2048` – `8192` |
+| `2048` (2 vCPU) | `4096` – `16384` |
+| `4096` (4 vCPU) | `8192` – `30720` |
+
+## Kubernetes with S3 Caching
+
+Use S3-backed caching for the Library folder to speed up rebuilds.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: k8s
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ kubeVolumeSize: 15Gi
+ containerCpu: 2048
+ containerMemory: 8192
+ # Cache Library folder to S3:
+ containerHookFiles: aws-s3-pull-cache,aws-s3-upload-cache,aws-s3-upload-build
+```
+
+Hook execution order matters - `aws-s3-pull-cache` restores the cache before the build,
+`aws-s3-upload-cache` saves it after, and `aws-s3-upload-build` uploads the final artifact.
+
+## Retained Workspaces
+
+Keep the build workspace persistent between builds for large projects.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: k8s
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ kubeVolumeSize: 50Gi
+ maxRetainedWorkspaces: 3
+ containerCpu: 2048
+ containerMemory: 8192
+```
+
+## CLI Usage
+
+Run Kubernetes builds from the command line:
+
+```bash
+game-ci build \
+ --providerStrategy k8s \
+ --projectPath /path/to/unity/project \
+ --targetPlatform StandaloneLinux64 \
+ --kubeVolumeSize 10Gi \
+ --containerCpu 1024 \
+ --containerMemory 4096
+```
+
+Make sure `KUBECONFIG` or `kubeConfig` is set in your environment.
+
+## Generating the kubeConfig Secret
+
+Base64-encode your kubeconfig file:
+
+```bash
+base64 -w 0 ~/.kube/config # GNU coreutils; on macOS use: base64 -i ~/.kube/config
+```
+
+Store the output as a GitHub Actions secret named `KUBE_CONFIG`.
+
+## Required Secrets
+
+| Secret | Description |
+| ------------- | ----------------------------------------------------- |
+| `KUBE_CONFIG` | Base64-encoded kubeconfig for your Kubernetes cluster |
+
+## Cluster Tips
+
+- **Keep the cluster running.** Cold-starting a Kubernetes cluster is slow. If you need auto-scaling
+  to zero, consider GKE Autopilot.
+- **Cloud storage required.** Kubernetes requires cloud storage for
+ [caching](../advanced-topics/caching). S3 is built-in, or use rclone for other backends.
+- **Volume size matters.** Unity projects can be large. Start with `10Gi` and increase if builds
+ fail with disk space errors.
+
+## Next Steps
+
+- [Kubernetes Provider Reference](../providers/kubernetes) - full setup and parameters
+- [Caching](../advanced-topics/caching) - S3 and rclone caching strategies
+- [Retained Workspaces](../advanced-topics/retained-workspace) - persistent build environments
+- [Container Hooks](../advanced-topics/hooks/built-in-hooks) - S3, rclone, Steam hooks
+- [API Reference](../api-reference) - full parameter list
diff --git a/docs/03-github-orchestrator/04-api-reference.mdx b/docs/03-github-orchestrator/04-api-reference.mdx
deleted file mode 100644
index 6a46b5c3..00000000
--- a/docs/03-github-orchestrator/04-api-reference.mdx
+++ /dev/null
@@ -1,259 +0,0 @@
-# API Reference
-
-## Configuration
-
-_You can specify input parameters via any of the following methods._
-
-- **GitHub Action `with`** _See "Getting Started" examples._
-- **Command Line** _You can specify input parameters via command line._
-- **Environment Variables** _You can specify input parameters via environment variables._
-- **Configuration Override** _[Advanced Topics / Overrides](advanced-topics/configuration-override)_
-
-## Modes
-
-Orchestrator can accept a parameter to run a specific mode, by default cli-build is run.
-
-```bash
-cli-build
-```
-
-_runs an orchestrator build_
-
-```bash
-list-resources
-```
-
-_lists active resources_
-
-```bash
-list-workflow
-```
-
-_lists running workflows_
-
-```bash
-watch
-```
-
-_follows logs of a running workflow_
-
-```bash
-garbage-collect
-```
-
-_runs garbage collection_
-
-```bash
-- cache-push
-- cache-pull
-```
-
-Cache commands to push and pull from the local caching directory. Used in orchestrator workflows.
-Uses `cachePullFrom` and `cachePushTo` parameters.
-
-```bash
-- hash (hash folder contents recursively)
-- print-input (prints all input parameters)
-```
-
-Utility commands
-
-```bash
-- remote-cli-pre-build (sets up a repository, usually before a game-ci build)
-- remote-cli-post-build (pushes to LFS and Library cache)
-```
-
-Commands called during orchestrator workflows before/after a build.
-
-## Common Parameters
-
-### Git synchronization parameters
-
-```bash
-gitPrivateToken (should be a GitHub access token with permission to get repositories)
-```
-
-Used to authenticate remote job's access to repository. Also used for LFS file pulling if
-`GIT_PRIVATE_TOKEN` is not set separately.
-
-```bash
-- GITHUB_REPOSITORY
-- GITHUB_REF || branch || GitSHA
-```
-
-Used to synchronize the repository to the Orchestrator job. If parameters are not provided, will
-attempt to read them from current directory's git repo (e.g branch, commit SHA, remote URL).
-
-### Orchestrator parameters
-
-```bash
-providerStrategy
-```
-
-Specifies the Cloud Provider to use for Orchestrator jobs. Accepted values: `aws`, `k8s`,
-`local-docker`, `local`.
-
-```bash
-- containerCpu
-- containerMemory
-```
-
-Specifies the CPU and Memory resources to be used for cloud containers created by Orchestrator.
-(See: getting started section for more configuration options per provider.)
-
-```bash
-orchestratorBranch
-```
-
-Specifies the release branch of Orchestrator to use for remote containers. Accepted values: `main`
-(default), `orchestrator-develop` (latest/development).
-
-```bash
-cloneDepth
-```
-
-Specifies the depth of the git clone for the repository. Defaults to `50`. Use `0` for a full clone.
-
-### Custom commands from files parameters
-
-```bash
-- containerHookFiles
-- commandHookFiles
-- commandHooks
-- postBuildContainerHooks
-- preBuildContainerHooks
-```
-
-Specifies the name of custom hook or step files to include in workflow. (Accepted Format: see
-"[container hooks](advanced-topics/custom-hooks/container-hooks)
-[command hooks](advanced-topics/custom-hooks/command-hooks)")
-
-### Custom commands from yaml parameters
-
-```bash
-customJob
-```
-
-Specifies a custom job to override default build workflow. (Accepted Format: see
-"[advanced topics / custom job](advanced-topics/custom-hooks/custom-job)")
-
-### Configuration Override
-
-```bash
-readInputOverrideCommand
-```
-
-Read parameter from command line output, such as a secret manager. Must include a `{0}` to inject
-the name of the parameter to pull. Built-in presets: `gcp-secret-manager`, `aws-secret-manager`.
-(See: [Configuration Override](advanced-topics/configuration-override))
-
-```bash
-readInputFromOverrideList
-```
-
-Comma separated list of parameters to apply with `readInputOverrideCommand`. (See:
-[Configuration Override](advanced-topics/configuration-override))
-
-### Storage
-
-```bash
-storageProvider
-```
-
-Specifies the storage backend for caching and artifacts. Accepted values: `s3` (default), `rclone`.
-
-```bash
-rcloneRemote
-```
-
-Configures the rclone remote storage endpoint. Required when using `storageProvider: rclone`.
-
-### AWS
-
-```bash
-awsStackName
-```
-
-Name of the persistent shared base stack, used to store artifacts and caching. Defaults to
-`game-ci`.
-
-```bash
-- awsEndpoint (base endpoint override for all AWS services)
-- awsCloudFormationEndpoint
-- awsEcsEndpoint
-- awsKinesisEndpoint
-- awsCloudWatchLogsEndpoint
-- awsS3Endpoint
-```
-
-Optional AWS service endpoint overrides. Useful for testing with LocalStack or other AWS-compatible
-services.
-
-### K8s
-
-```bash
-- kubeConfig (base64 encoded kubernetes config)
-- kubeVolume
-- kubeVolumeSize (default: 5Gi)
-- kubeStorageClass
-```
-
-Override name of persistent volume used, size of volume and storage class used.
-
-### Caching
-
-```bash
-cacheKey
-```
-
-Defaults to branch name. Defines the scope for sharing cache entries.
-
-### Utility
-
-```bash
-- orchestratorDebug (Debug logging for Orchestrator)
-- resourceTracking (Enable resource tracking logs for disk usage summaries)
-- useLargePackages (Any packages in manifest.json containing phrase "LargePackage" will be
- redirected to a shared folder for all builds sharing a cache key)
-- useSharedBuilder (Use a shared clone of Game-CI, saves some storage space and can be used if
- you're using one release branch of Orchestrator)
-- useCompressionStrategy (Use Lz4 compression for cache and build artifacts. Enabled by default)
-- watchToEnd (Whether to watch the build to the end, default: true)
-- asyncOrchestrator (Run in async mode, returns immediately without waiting for build completion)
-```
-
-### Retained Workspace
-
-```bash
-- maxRetainedWorkspaces
-```
-
-See: [Advanced Topics / Retained Workspaces](advanced-topics/retained-workspace), enables caching
-entire project folder.
-
-### Garbage Collection
-
-```bash
-- garbageMaxAge (Maximum age in hours before resources are cleaned up, default: 24)
-```
-
-## Command Line Only Parameters
-
-```bash
-- populateOverride
-- cachePushFrom
-- cachePushTo
-- artifactName
-- select
-```
-
-## Other Environment Variables
-
-```bash
-- USE_IL2CPP (Set to `false`)
-```
-
-# External Links
-
-All accepted parameters given here with a description:
-[https://github.com/game-ci/unity-builder/blob/main/action.yml](https://github.com/game-ci/unity-builder/blob/main/action.yml)
diff --git a/docs/03-github-orchestrator/04-jobs.mdx b/docs/03-github-orchestrator/04-jobs.mdx
new file mode 100644
index 00000000..264a49e0
--- /dev/null
+++ b/docs/03-github-orchestrator/04-jobs.mdx
@@ -0,0 +1,189 @@
+# Orchestrator Jobs
+
+Orchestrator executes work as **jobs** - containerized or local tasks that run on your chosen
+provider. Understanding job types and their flow is key to customizing your build pipeline.
+
+## Job Flow
+
+Every Orchestrator run follows the same lifecycle, regardless of provider:
+
+```mermaid
+flowchart LR
+ S["Setup\nProvider"] --> PRE["Pre-Build\nJobs"] --> B["Build\nJob"] --> POST["Post-Build\nJobs"]
+ S --> C["Cleanup"]
+ POST --> C
+```
+
+1. **Setup** - Provision cloud resources (stacks, volumes, secrets). Skipped for local providers.
+2. **Pre-build jobs** - Clone the repository, pull LFS, restore caches, run pre-build hooks.
+3. **Build job** - Execute the Unity build (or custom editor method).
+4. **Post-build jobs** - Push caches, upload artifacts, run post-build hooks.
+5. **Cleanup** - Release locks, tear down cloud resources, update GitHub Checks.
+
+## Job Types
+
+### Build Job
+
+The standard job runs the Unity Editor to produce a build artifact. This is what most users care
+about.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ targetPlatform: StandaloneLinux64
+ providerStrategy: aws
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+The build job:
+
+- Installs the toolchain (Node.js, git-lfs) inside the container
+- Clones `game-ci/unity-builder` into the container
+- Runs `remote-cli-pre-build` to set up the workspace
+- Executes the Unity build via the Game CI entrypoint
+- Runs `remote-cli-post-build` to push caches and artifacts
+
+### Test Jobs
+
+Run Unity tests without producing a build. Use a custom `buildMethod` that runs tests and exits:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ targetPlatform: StandaloneLinux64
+ buildMethod: MyNamespace.TestRunner.RunEditModeTests
+ manualExit: true
+ providerStrategy: aws
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+With `manualExit: true`, Unity doesn't quit automatically - your build method should call
+`EditorApplication.Exit(0)` after tests complete. This gives you full control over the test
+lifecycle.
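+
+A minimal sketch of such a build method (the `MyNamespace.TestRunner` class and `RunTests` helper
+are illustrative, not part of Game CI):
+
+```csharp
+using UnityEditor;
+
+namespace MyNamespace
+{
+    public static class TestRunner
+    {
+        // Invoked by Orchestrator via buildMethod: MyNamespace.TestRunner.RunEditModeTests
+        public static void RunEditModeTests()
+        {
+            bool allPassed = RunTests(); // your test invocation goes here
+
+            // With manualExit: true, Unity waits for an explicit exit code.
+            EditorApplication.Exit(allPassed ? 0 : 1);
+        }
+
+        private static bool RunTests()
+        {
+            // Placeholder: run edit-mode tests and report success.
+            return true;
+        }
+    }
+}
+```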
+
+### Custom Editor Method Jobs
+
+Run any static C# method in the Unity Editor. Useful for:
+
+- Asset processing or validation
+- Addressables builds
+- Custom pipeline steps
+- Code generation
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ targetPlatform: StandaloneLinux64
+ buildMethod: MyNamespace.Pipeline.ProcessAssets
+ manualExit: true
+ providerStrategy: aws
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+The `buildMethod` must be a fully qualified static method: `Namespace.Class.Method`.
+
+### Custom Jobs
+
+Replace the entire build workflow with your own container steps. Useful for non-Unity workloads or
+fully custom pipelines that still benefit from Orchestrator's cloud infrastructure.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ customJob: |
+ - name: my-custom-step
+ image: ubuntu:22.04
+ commands: |
+ echo "Running custom workload"
+ ./my-script.sh
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+See [Custom Job](advanced-topics/custom-job) for the full reference.
+
+### Async Jobs
+
+For long-running builds, Orchestrator can dispatch the job and return immediately. The build
+continues in the cloud. Progress is reported via GitHub Checks.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ targetPlatform: StandaloneLinux64
+ providerStrategy: aws
+ asyncOrchestrator: true
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+This is useful when builds exceed GitHub Actions' job time limits, or when you want to free up your
+CI runner immediately.
+
+## Pre-Build and Post-Build
+
+Orchestrator runs additional steps before and after the main build job.
+
+### Pre-Build Steps
+
+The `remote-cli-pre-build` phase handles:
+
+- Git clone of the target repository
+- Git LFS pull
+- Cache restoration (Library folder, LFS objects)
+- Retained workspace setup
+- Submodule initialization (if using [submodule profiles](advanced-topics/build-services))
+- Custom LFS agent configuration (e.g., [elastic-git-storage](advanced-topics/lfs-agents))
+
+You can inject additional pre-build steps:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ preBuildSteps: |
+ - name: install-dependencies
+ image: node:18
+ commands: npm install
+```
+
+### Post-Build Steps
+
+The `remote-cli-post-build` phase handles:
+
+- Library folder cache push
+- Build artifact upload
+- LFS cache push
+
+You can inject additional post-build steps:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ postBuildSteps: |
+ - name: upload-to-steam
+ image: steamcmd
+ commands: ./upload.sh
+```
+
+### Hooks
+
+For more granular control, use [container hooks](advanced-topics/hooks/container-hooks) or
+[command hooks](advanced-topics/hooks/command-hooks) to inject steps at specific points in the build
+lifecycle.
+
+## Job Execution by Provider
+
+Each provider runs jobs differently:
+
+| Provider | How jobs execute |
+| ---------------- | ---------------------------------------------------------------------- |
+| **AWS Fargate** | ECS Fargate task with CloudFormation stack. Logs streamed via Kinesis. |
+| **Kubernetes** | Kubernetes Job with PVC for workspace. Logs streamed from pod. |
+| **Local Docker** | Docker container with volume mounts. Logs piped to stdout. |
+| **Local** | Direct shell execution on the host. No container isolation. |
+| **CLI Provider** | Delegated to external executable via JSON protocol. |
+
+## Next Steps
+
+- [Custom Job](advanced-topics/custom-job) - Full reference for custom job definitions
+- [Hooks](advanced-topics/hooks/container-hooks) - Inject steps at specific lifecycle points
+- [Architecture](advanced-topics/architecture) - Deep dive into internal components
diff --git a/docs/03-github-orchestrator/05-api-reference.mdx b/docs/03-github-orchestrator/05-api-reference.mdx
new file mode 100644
index 00000000..ab19fdbf
--- /dev/null
+++ b/docs/03-github-orchestrator/05-api-reference.mdx
@@ -0,0 +1,165 @@
+# API Reference
+
+## ⚙️ Configuration Methods
+
+| Method | Description |
+| ------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
+| **GitHub Action `with`** | Pass parameters directly in your workflow file. See [Getting Started](getting-started). |
+| **Command Line** | Pass parameters as CLI flags. See [Command Line](examples/command-line). |
+| **Environment Variables** | Set parameters as environment variables in your shell or CI environment. |
+| **Pull Secrets** | Pull parameters dynamically from secret managers or files. See [Secrets](secrets#-pulling-secrets-from-external-sources). |
+
+## 🔧 Modes
+
+Set the mode to control what Orchestrator does. Default: `cli-build`.
+
+| Mode | Description |
+| ----------------------- | ------------------------------------------------------------------------------------- |
+| `cli-build` | Run a standard build workflow. |
+| `list-resources` | List active cloud resources. |
+| `list-workflow` | List running workflows. |
+| `watch` | Follow logs of a running workflow. |
+| `garbage-collect` | Clean up old resources. See [Garbage Collection](advanced-topics/garbage-collection). |
+| `cache-push` | Push to the caching directory. Uses `cachePushTo`. |
+| `cache-pull` | Pull from the caching directory. Uses `cachePullFrom`. |
+| `hash` | Hash folder contents recursively. |
+| `print-input` | Print all resolved input parameters. |
+| `remote-cli-pre-build` | Set up a repository before a build (used internally by workflows). |
+| `remote-cli-post-build` | Push LFS files and Library cache after a build (used internally). |
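+
+Non-default modes are selected via the `mode` parameter. For example, to list active resources from
+a workflow:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws
+    mode: list-resources
+```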
+
+## 📋 Parameters
+
+### Provider
+
+| Parameter | Default | Description |
+| ---------------------- | ----------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `providerStrategy` | `local` | Cloud provider to use. Built-in: `aws`, `k8s`, `local-docker`, `local`. Also accepts a GitHub URL, NPM package, or local path for [custom providers](providers/custom-providers). |
+| `containerCpu` | `1024` | CPU units for cloud containers (`1024` = 1 vCPU). See provider setup guides for allowed values. |
+| `containerMemory`      | `3072`                  | Memory in MB for cloud containers (`1024` = 1 GB). See provider setup guides for allowed values.                                                                                    |
+| `orchestratorBranch` | `main` | Release branch of Orchestrator for remote containers. Use `orchestrator-develop` for latest development builds. |
+| `orchestratorRepoName` | `game-ci/unity-builder` | Repository for Orchestrator source. Override to use a fork for testing or custom builds. |
+
+### Engine
+
+| Parameter | Default | Description |
+| -------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `engine` | `unity` | Game engine name. Built-in: `unity`. Other engines require `enginePlugin`. |
+| `enginePlugin` | - | Engine plugin source. NPM package, `cli:`, or `docker:`. See [Engine Plugins](advanced-topics/engine-plugins) for details and source formats. |
+
+### Git Synchronization
+
+| Parameter | Default | Description |
+| ------------------- | -------- | ------------------------------------------------------------------- |
+| `gitPrivateToken` | - | GitHub access token with repo access. Used for git clone and LFS. |
+| `githubOwner` | - | GitHub owner or organization name. |
+| `GITHUB_REPOSITORY` | _(auto)_ | Repository in `owner/repo` format. Auto-detected in GitHub Actions. |
+| `GITHUB_REF` | _(auto)_ | Git ref to build. Falls back to `branch` or `GitSHA` parameters. |
+| `cloneDepth` | `50` | Depth of the git clone. Use `0` for a full clone. |
+| `allowDirtyBuild` | `false` | Allow building from a branch with uncommitted changes. |
+
+### Custom Hooks
+
+| Parameter | Default | Description |
+| ------------------------- | ------- | -------------------------------------------------------------------------------------------------------- |
+| `containerHookFiles` | - | Names of [container hook](advanced-topics/hooks/container-hooks) files from `.game-ci/container-hooks/`. |
+| `customHookFiles` | - | Names of custom hook files from `.game-ci/hooks/`. |
+| `customCommandHooks` | - | Inline [command hooks](advanced-topics/hooks/command-hooks) as YAML. |
+| `postBuildSteps` | - | Post-build job in YAML format with keys: `image`, `secrets`, `command`. |
+| `preBuildSteps` | - | Pre-build job (after repo setup, before build) in YAML format. |
+| `postBuildContainerHooks` | - | Container hook files to run after the build step. |
+| `preBuildContainerHooks` | - | Container hook files to run before the build step. |
+| `customJob` | - | Custom job YAML to override the default build workflow. See [Custom Job](advanced-topics/custom-job). |
+
+### Pull Secrets
+
+| Parameter | Default | Description |
+| --------------------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `readInputOverrideCommand` | - | Command to read a parameter value from an external source. Use `{0}` as the parameter name placeholder. Built-in presets: `gcp-secret-manager`, `aws-secret-manager`. See [Secrets](secrets). |
+| `readInputFromOverrideList` | - | Comma-separated list of parameter names to pull via `readInputOverrideCommand`. |
+
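+For instance, to pull Unity credentials at runtime instead of passing them inline, combine the two
+parameters above (an illustrative sketch assuming the built-in preset resolves each listed
+parameter from AWS Secrets Manager by name):
+
+```yaml
+env:
+  # Built-in preset; a custom command could use {0} as the parameter-name placeholder instead
+  readInputOverrideCommand: aws-secret-manager
+  readInputFromOverrideList: UNITY_EMAIL,UNITY_PASSWORD,UNITY_SERIAL
+```
+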
+### Storage
+
+| Parameter | Default | Description |
+| ----------------- | ------- | ------------------------------------------------------------------------------------------------------ |
+| `storageProvider` | `s3` | Storage backend for [caching](advanced-topics/caching) and artifacts. Accepted values: `s3`, `rclone`. |
+| `rcloneRemote` | - | Rclone remote endpoint. Required when `storageProvider` is `rclone`. |
+
+### AWS
+
+| Parameter | Default | Description |
+| --------------------------- | --------- | -------------------------------------------------------------- |
+| `awsStackName` | `game-ci` | Name of the persistent shared CloudFormation base stack. |
+| `awsEndpoint` | - | Base endpoint override for all AWS services (e.g. LocalStack). |
+| `awsCloudFormationEndpoint` | - | CloudFormation service endpoint override. |
+| `awsEcsEndpoint` | - | ECS service endpoint override. |
+| `awsKinesisEndpoint` | - | Kinesis service endpoint override. |
+| `awsCloudWatchLogsEndpoint` | - | CloudWatch Logs service endpoint override. |
+| `awsS3Endpoint` | - | S3 service endpoint override. |
+
+### Kubernetes
+
+| Parameter | Default | Description |
+| ------------------ | ------- | ----------------------------------------------------------------------- |
+| `kubeConfig` | - | Base64-encoded Kubernetes config file. |
+| `kubeVolume` | - | Name of the persistent volume claim to use. |
+| `kubeVolumeSize` | `5Gi` | Size of the persistent volume. |
+| `kubeStorageClass` | - | Storage class for the persistent volume. Empty = auto-install via rook. |
+
+### Caching
+
+| Parameter | Default | Description |
+| ----------------------- | --------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
+| `cacheKey` | _(branch name)_ | Scope for sharing cache entries. Builds with the same key share a cache. |
+| `maxRetainedWorkspaces` | `0` | Maximum number of [retained workspaces](advanced-topics/retained-workspace). `0` = unlimited. Above the limit, jobs use standard caching. |
+
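+To share one cache across several branches, override the default branch-scoped key (a sketch;
+`shared-main` is an arbitrary example value):
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws
+    targetPlatform: StandaloneLinux64
+    # All builds using this key read and write the same cache
+    cacheKey: shared-main
+```
+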
+### GitHub Integration
+
+| Parameter | Default | Description |
+| ------------------- | ------- | --------------------------------------------------------------------------------------------------------- |
+| `githubCheck` | `false` | Create a GitHub Check for each orchestrator step. See [GitHub Integration](providers/github-integration). |
+| `asyncOrchestrator` | `false` | Run in async mode - returns immediately without waiting for the build to complete. |
+| `watchToEnd` | `true` | Whether to follow the build logs until completion. |
+
+### Build Options
+
+| Parameter | Default | Description |
+| ------------------------ | ------- | ------------------------------------------------------------------------------------------------------------------------ |
+| `orchestratorDebug` | `false` | Enable verbose debug logging (resource tracking, directory tree, environment listing). |
+| `resourceTracking` | `false` | Enable resource tracking logs with disk usage summaries. |
+| `useLargePackages` | `false` | Redirect packages containing "LargePackage" in `manifest.json` to a shared folder across builds with the same cache key. |
+| `useSharedBuilder` | `false` | Use a shared clone of Game CI. Saves storage when using a single Orchestrator release branch. |
+| `useCompressionStrategy` | `true` | Use LZ4 compression for cache and build artifacts. |
+| `useCleanupCron` | `true` | Create an AWS CloudFormation cron job to automatically clean up old resources. |
+
+### Garbage Collection
+
+| Parameter | Default | Description |
+| --------------- | ------- | ----------------------------------------------------- |
+| `garbageMaxAge` | `24` | Maximum age in hours before resources are cleaned up. |
+
+## 🖥️ CLI-Only Parameters
+
+These parameters are only available when using Orchestrator from the command line.
+
+| Parameter | Description |
+| ------------------ | -------------------------------------------------------------------------------------------------- |
+| `populateOverride` | Enable [pulling secrets](secrets#-pulling-secrets-from-external-sources) from an external command. |
+| `cachePushFrom` | Local directory to push cache from. |
+| `cachePushTo` | Remote path to push cache to. |
+| `artifactName` | Name for the build artifact. |
+| `select` | Select a specific workflow or resource by name. |
+
+## 🌍 Environment Variables
+
+| Variable | Description |
+| ---------------------------------- | --------------------------------------------------------------------------------- |
+| `USE_IL2CPP` | Set to `false` to disable IL2CPP builds. |
+| `AWS_FORCE_PROVIDER` | Force provider when LocalStack is detected. Values: `aws`, `aws-local`, or empty. |
+| `ORCHESTRATOR_AWS_STACK_WAIT_TIME` | CloudFormation stack timeout in seconds. Default: `600`. |
+| `PURGE_REMOTE_BUILDER_CACHE` | Set to clear the remote builder cache before a build. |
+| `GIT_PRIVATE_TOKEN` | Separate token for LFS pulls (falls back to `gitPrivateToken`). |
+
+## 🔗 External Links
+
+All parameters with descriptions:
+[game-ci/unity-builder action.yml](https://github.com/game-ci/unity-builder/blob/main/action.yml)
diff --git a/docs/03-github-orchestrator/05-providers/01-overview.mdx b/docs/03-github-orchestrator/05-providers/01-overview.mdx
new file mode 100644
index 00000000..dd83efcb
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/01-overview.mdx
@@ -0,0 +1,63 @@
+# Providers
+
+A **provider** is the backend that Orchestrator uses to run your builds. You choose a provider by
+setting the `providerStrategy` parameter.
+
+```mermaid
+graph LR
+ aws["aws\nFargate\nFully managed\nCloud scaling\nNo servers"]
+ k8s["k8s\nCluster\nBring your own cluster\nCloud scaling\nFlexible"]
+ ld["local-docker\nContainer\nNo cloud needed\nLocal builds\nDocker required"]
+ local["local\nDirect\nNo container needed\nLocal builds\nSimplest setup"]
+```
+
+## Built-in Providers
+
+These providers ship with Orchestrator and are maintained by the Game CI team.
+
+| Provider | `providerStrategy` | Description |
+| -------------- | ------------------ | ----------------------------------------------------------------------------- |
+| AWS Fargate | `aws` | Runs jobs on AWS Fargate (ECS). Fully managed, no servers to maintain. |
+| Kubernetes | `k8s` | Runs jobs on any Kubernetes cluster. Flexible but requires a running cluster. |
+| Local Docker | `local-docker` | Runs jobs in Docker containers on the local machine. |
+| Local (direct) | `local` | Runs jobs directly on the local machine without containers. |
+
+Each provider has its own page with setup instructions:
+
+- [AWS Fargate](aws)
+- [Kubernetes](kubernetes)
+- [Local Docker](local-docker)
+- [Local](local)
+
+## Dispatch Providers
+
+Route builds to external CI systems instead of running them directly.
+
+- [GitHub Actions Dispatch](github-actions-dispatch) - trigger a GitHub Actions workflow
+- [GitLab CI Dispatch](gitlab-ci-dispatch) - trigger a GitLab CI pipeline
+
+## Experimental Providers
+
+These providers are under active development. APIs and behavior may change between releases.
+
+- [GCP Cloud Run](gcp-cloud-run) - serverless containers on Google Cloud
+- [Azure ACI](azure-aci) - serverless containers on Azure
+
+## Additional Providers
+
+- [Ansible](ansible) - provision and run builds via Ansible playbooks
+- [Remote PowerShell](remote-powershell) - execute builds on remote Windows machines
+
+## Custom Providers
+
+Extend Orchestrator with your own provider by pointing `providerStrategy` at a GitHub repository,
+NPM package, or local file path.
+
+See [Custom Providers](custom-providers) for the full guide.
+
+## Community Providers
+
+Third-party providers shared by the Game CI community.
+
+See the [Community Providers](community-providers) page for the current list and how to submit your
+own.
diff --git a/docs/03-github-orchestrator/05-providers/02-aws.mdx b/docs/03-github-orchestrator/05-providers/02-aws.mdx
new file mode 100644
index 00000000..83101830
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/02-aws.mdx
@@ -0,0 +1,114 @@
+# AWS
+
+## Architecture
+
+Orchestrator creates and manages these AWS resources automatically:
+
+```mermaid
+graph LR
+ subgraph CF["CloudFormation (base stack)"]
+ ECS["ECS Fargate\n(build tasks)"]
+ S3["S3 Bucket\n(cache + artifacts)"]
+ CW["CloudWatch Logs"]
+ KS["Kinesis Stream"]
+ end
+ CW --> KS --> Log["Log stream to CI"]
+```
+
+## Requirements
+
+- An AWS account with permission to create resources (ECS, CloudFormation, S3, Kinesis, CloudWatch).
+- An IAM user or role with an access key and secret key.
+
+## AWS Credentials
+
+Set the following as `env` variables in your workflow:
+
+| Variable | Description |
+| ----------------------- | ------------------------------------------------------- |
+| `AWS_ACCESS_KEY_ID` | IAM access key ID. |
+| `AWS_SECRET_ACCESS_KEY` | IAM secret access key. |
+| `AWS_DEFAULT_REGION` | AWS region matching your base stack (e.g. `eu-west-2`). |
+
+If you're using GitHub Actions, configure credentials with:
+
+```yaml
+- name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@v4
+ with:
+ aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ aws-region: eu-west-2
+```
+
+## CPU and Memory
+
+AWS Fargate only accepts specific CPU/memory combinations. `containerCpu` is in CPU units
+(`1024` = 1 vCPU) and `containerMemory` is in MB (`1024` = 1 GB). Pass plain numbers without a
+vCPU or GB suffix.
+
+See the full list:
+[AWS Fargate Task Definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#task_size)
+
+Common combinations:
+
+| CPU (`containerCpu`) | Memory (`containerMemory`) |
+| -------------------- | -------------------------- |
+| `256` (0.25 vCPU) | `512`, `1024`, `2048` |
+| `512` (0.5 vCPU) | `1024` – `4096` |
+| `1024` (1 vCPU) | `2048` – `8192` |
+| `2048` (2 vCPU) | `4096` – `16384` |
+| `4096` (4 vCPU) | `8192` – `30720` |
+
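+As a quick sanity check, a pair can be validated against the table before a run. This is a coarse
+sketch: it range-checks memory per CPU tier and does not enforce Fargate's exact memory increments:
+
+```bash
+# Coarse check of a containerCpu/containerMemory pair against the table above.
+valid_fargate_size() {
+  cpu="$1"; mem="$2"
+  case "$cpu" in
+    256)  [ "$mem" -ge 512 ]  && [ "$mem" -le 2048 ] ;;
+    512)  [ "$mem" -ge 1024 ] && [ "$mem" -le 4096 ] ;;
+    1024) [ "$mem" -ge 2048 ] && [ "$mem" -le 8192 ] ;;
+    2048) [ "$mem" -ge 4096 ] && [ "$mem" -le 16384 ] ;;
+    4096) [ "$mem" -ge 8192 ] && [ "$mem" -le 30720 ] ;;
+    *) return 1 ;;
+  esac
+}
+
+valid_fargate_size 1024 4096 && echo "1024/4096 ok"
+valid_fargate_size 1024 1024 || echo "1024/1024 rejected"
+```
+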
+## Example Workflow
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ id: aws-fargate-unity-build
+ with:
+ providerStrategy: aws
+ versioning: None
+ projectPath: path/to/your/project
+ unityVersion: 2022.3.0f1
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ containerCpu: 1024
+ containerMemory: 4096
+ # Export builds to S3:
+ containerHookFiles: aws-s3-upload-build
+```
+
+See [Container Hooks](../advanced-topics/hooks/container-hooks) for more on `containerHookFiles`.
+
+A full workflow example is available in the builder source:
+[orchestrator-pipeline.yml](https://github.com/game-ci/unity-builder/blob/main/.github/workflows/orchestrator-pipeline.yml).
+
+## Troubleshooting
+
+### Container Override Size (8192 bytes)
+
+AWS ECS/Fargate limits the `containerOverrides` payload to 8192 bytes. This payload includes all
+build environment variables, secrets, and the build command. Complex workflows with many custom
+parameters or large secret values can exceed this limit, producing the error:
+
+```plaintext
+Container Overrides length must be at most 8192
+```
+
+To reduce payload size, use the orchestrator's built-in
+[secret pulling](/docs/github-orchestrator/secrets) to fetch secrets at runtime instead of passing
+them inline:
+
+```yaml
+env:
+  readInputFromOverrideList: UNITY_LICENSE,UNITY_SERIAL,UNITY_EMAIL,UNITY_PASSWORD
+  readInputOverrideCommand: aws-secret-manager
+```
+
+See
+[Troubleshooting: Container Overrides](/docs/troubleshooting/common-issues#container-overrides-length-must-be-at-most-8192-aws)
+for more details.
+
+## AWS Parameters
+
+For the full list of AWS-specific parameters (`awsStackName`, endpoint overrides, etc.), see the
+[API Reference - AWS section](../api-reference#aws).
diff --git a/docs/03-github-orchestrator/05-providers/03-kubernetes.mdx b/docs/03-github-orchestrator/05-providers/03-kubernetes.mdx
new file mode 100644
index 00000000..849a04c6
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/03-kubernetes.mdx
@@ -0,0 +1,64 @@
+# Kubernetes
+
+## Requirements
+
+- A running Kubernetes cluster that supports persistent volumes.
+- A kubeconfig file encoded as base64.
+
+## K8s Credentials
+
+Pass the base64-encoded kubeconfig via the `kubeConfig` parameter or as an environment variable:
+
+```yaml
+env:
+ kubeConfig: ${{ secrets.KUBE_CONFIG }}
+```
+
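+Producing the base64-encoded value is a one-liner with `base64` (shown here with a stand-in file;
+in practice you would encode your real `~/.kube/config` the same way):
+
+```bash
+# Stand-in kubeconfig; replace with ~/.kube/config in practice.
+printf 'apiVersion: v1\nkind: Config\n' > /tmp/demo-kubeconfig
+
+# -w 0 disables line wrapping so the value is a single line (GNU coreutils).
+encoded=$(base64 -w 0 < /tmp/demo-kubeconfig)
+echo "$encoded"
+
+# Round trip to confirm nothing was lost.
+decoded=$(printf '%s' "$encoded" | base64 -d)
+printf '%s\n' "$decoded"
+```
+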
+## CPU and Memory
+
+Kubernetes uses the same units as AWS: `containerCpu` is in CPU units (`1024` = 1 vCPU) and
+`containerMemory` is in MB. Pass plain numbers without a vCPU or GB suffix.
+
+| CPU (`containerCpu`) | Memory (`containerMemory`) |
+| -------------------- | -------------------------- |
+| `256` (0.25 vCPU) | `512`, `1024`, `2048` |
+| `512` (0.5 vCPU) | `1024` – `4096` |
+| `1024` (1 vCPU) | `2048` – `8192` |
+| `2048` (2 vCPU) | `4096` – `16384` |
+| `4096` (4 vCPU) | `8192` – `30720` |
+
+## Example Workflow
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ id: k8s-unity-build
+ with:
+ providerStrategy: k8s
+ versioning: None
+ projectPath: path/to/your/project
+ unityVersion: 2022.3.0f1
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ kubeVolumeSize: 10Gi
+ containerCpu: 1024
+ containerMemory: 4096
+ # Export builds to S3:
+ containerHookFiles: aws-s3-upload-build
+```
+
+See [Container Hooks](../advanced-topics/hooks/container-hooks) for more on `containerHookFiles`.
+
+A full workflow example is available in the builder source:
+[orchestrator-k8s-pipeline.yml](https://github.com/game-ci/unity-builder/blob/main/.github/workflows/orchestrator-k8s-pipeline.yml).
+
+### Cluster Tips
+
+- **Keep the cluster running.** Cold-starting a Kubernetes cluster is slow. If you need auto-scaling
+ to zero, consider Google Cloud Kubernetes Autopilot.
+- **Cloud storage required.** Kubernetes requires cloud storage for
+ [caching](../advanced-topics/caching). S3 is built-in, or use rclone for other backends.
+
+## K8s Parameters
+
+For the full list of Kubernetes parameters (`kubeConfig`, `kubeVolume`, `kubeStorageClass`, etc.),
+see the [API Reference - Kubernetes section](../api-reference#kubernetes).
diff --git a/docs/03-github-orchestrator/05-providers/04-github-actions-dispatch.mdx b/docs/03-github-orchestrator/05-providers/04-github-actions-dispatch.mdx
new file mode 100644
index 00000000..1751b4a4
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/04-github-actions-dispatch.mdx
@@ -0,0 +1,170 @@
+---
+sidebar_position: 4
+---
+
+# GitHub Actions Dispatch Provider
+
+The **GitHub Actions Dispatch** provider triggers Unity builds as `workflow_dispatch` events on a
+target GitHub Actions workflow. Instead of running the build on the orchestrating runner, the
+orchestrator dispatches the work to another repository's workflow and monitors it to completion.
+
+## Use Cases
+
+- **Separate build infrastructure** from your game repository - keep build runners and Unity
+ licenses in a dedicated repo while orchestrating from your main project.
+- **Distribute builds across organizations** - trigger workflows in repos owned by different GitHub
+ organizations or teams.
+- **Specialized runner pools** - route builds to self-hosted runners registered against a different
+ repository with specific machines (GPU, high memory, fast SSD).
+- **License isolation** - keep Unity license activation in a controlled environment while allowing
+ multiple game repos to dispatch builds to it.
+
+## Prerequisites
+
+1. **GitHub CLI (`gh`)** must be available on the runner executing the orchestrator step.
+2. A **Personal Access Token (PAT)** with `actions:write` scope on the target repository.
+3. A **target workflow** in the destination repository with a `workflow_dispatch` trigger that
+ accepts the orchestrator's build inputs.
+
+### Target Workflow Template
+
+The target repository needs a workflow that accepts the orchestrator's dispatched inputs. A minimal
+example:
+
+```yaml
+# .github/workflows/unity-build.yml (in the target repo)
+name: Unity Build
+on:
+ workflow_dispatch:
+ inputs:
+ buildGuid:
+ description: 'Build GUID from orchestrator'
+ required: true
+ image:
+ description: 'Unity Docker image'
+ required: true
+ commands:
+ description: 'Base64-encoded build commands'
+ required: true
+ mountdir:
+ description: 'Mount directory'
+ required: false
+ workingdir:
+ description: 'Working directory'
+ required: false
+ environment:
+ description: 'JSON environment variables'
+ required: false
+
+jobs:
+ build:
+ runs-on: [self-hosted, unity]
+ steps:
+ - uses: actions/checkout@v4
+ - name: Run build
+ run: |
+ echo "${{ inputs.commands }}" | base64 -d | bash
+```
+
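+The `commands` input is base64-encoded so multi-line scripts survive transport as a single workflow
+input. The round trip the target workflow performs looks like this (a standalone sketch with an
+illustrative command):
+
+```bash
+# What the orchestrator sends (illustrative command):
+commands='echo "building..."'
+encoded=$(printf '%s' "$commands" | base64)
+
+# What the target workflow's "Run build" step does with it:
+output=$(printf '%s' "$encoded" | base64 -d | bash)
+echo "$output"   # → building...
+```
+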
+## Configuration
+
+Set `providerStrategy: github-actions` and supply the required inputs:
+
+```yaml
+- uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: github-actions
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ githubActionsRepo: my-org/unity-build-farm
+ githubActionsWorkflow: unity-build.yml
+ githubActionsToken: ${{ secrets.BUILD_FARM_PAT }}
+ githubActionsRef: main
+```
+
+## How It Works
+
+```mermaid
+sequenceDiagram
+ participant O as Orchestrator (your repo)
+ participant T as Target repo
+ O->>O: 1. Validate target workflow exists
+ O->>T: 2. Dispatch event with build inputs (workflow_dispatch)
+ O->>O: 3. Wait for run to appear
+ T->>T: 4. Run build job on target runner
+ T->>T: 5. Execute commands
+ O->>T: Poll status
+ T-->>O: Status updates
+ T->>T: 6. Complete
+ T-->>O: Fetch logs
+ O->>O: 7. Stream logs and report result
+```
+
+1. **Setup** - The orchestrator verifies the target workflow exists by querying the GitHub API.
+2. **Dispatch** - A `workflow_dispatch` event is sent with build parameters (build GUID, image,
+ base64-encoded commands, environment variables) as workflow inputs.
+3. **Poll for run** - The orchestrator polls the target repository's workflow runs (filtering by
+ creation time) until the dispatched run appears. This typically takes 10-30 seconds.
+4. **Monitor** - Once the run is identified, the orchestrator polls its status every 15 seconds
+ until it reaches a terminal state (`completed`).
+5. **Result** - On success, logs are fetched via `gh run view --log`. On failure, an error is raised
+ with the run's conclusion.
+6. **Cleanup** - No cloud resources are created, so cleanup is a no-op.
+
+## Full Workflow Example
+
+```yaml
+name: Build Game (Dispatched)
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ strategy:
+ matrix:
+ targetPlatform:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: github-actions
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ githubActionsRepo: my-org/unity-build-farm
+ githubActionsWorkflow: unity-build.yml
+ githubActionsToken: ${{ secrets.BUILD_FARM_PAT }}
+ githubActionsRef: main
+```
+
+## Limitations and Considerations
+
+- **Run identification delay** - After dispatching, the orchestrator must wait for the run to appear
+ in the GitHub API. This adds 10-30 seconds of overhead per build.
+- **API rate limits** - Each status poll is an API call. Long builds will accumulate many calls. The
+ 15-second poll interval keeps usage well within GitHub's rate limits for authenticated requests
+ (5,000/hour).
+- **No artifact transfer** - Build artifacts remain in the target repository's workflow run. You
+ must configure artifact upload/download separately (e.g., via `actions/upload-artifact` in the
+ target workflow).
+- **PAT scope** - The token needs `actions:write` on the target repo. Use a fine-grained PAT scoped
+ to only the build farm repository for least-privilege access.
+- **Concurrent dispatch** - If multiple dispatches happen simultaneously, the orchestrator
+ identifies its run by filtering on creation time. Rapid concurrent dispatches to the same workflow
+ could theoretically cause misidentification.
+
+## Inputs Reference
+
+| Input | Required | Default | Description |
+| ----------------------- | -------- | ------- | ------------------------------------------------------------------- |
+| `providerStrategy` | Yes | - | Must be `github-actions` |
+| `githubActionsRepo` | Yes | - | Target repository in `owner/repo` format |
+| `githubActionsWorkflow` | Yes | - | Workflow filename (e.g., `unity-build.yml`) or workflow ID |
+| `githubActionsToken` | Yes | - | Personal Access Token with `actions:write` scope on the target repo |
+| `githubActionsRef` | No | `main` | Branch or ref to run the dispatched workflow on |
diff --git a/docs/03-github-orchestrator/05-providers/04-local-docker.mdx b/docs/03-github-orchestrator/05-providers/04-local-docker.mdx
new file mode 100644
index 00000000..0d20acf9
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/04-local-docker.mdx
@@ -0,0 +1,34 @@
+# Local Docker
+
+Runs the build workflow inside a Docker container on the local machine. No cloud account required.
+
+## Requirements
+
+- Docker installed and running on the build machine.
+
+## Example Workflow
+
+### GitHub Actions (self-hosted runner)
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local-docker
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+### Command Line
+
+```bash
+yarn run cli -m cli-build \
+ --providerStrategy local-docker \
+ --projectPath /path/to/your/project \
+ --targetPlatform StandaloneLinux64
+```
+
+## When to Use
+
+- You have a self-hosted runner with Docker installed
+- You want container isolation without cloud infrastructure
+- Testing builds locally before deploying to AWS or Kubernetes
diff --git a/docs/03-github-orchestrator/05-providers/05-gitlab-ci-dispatch.mdx b/docs/03-github-orchestrator/05-providers/05-gitlab-ci-dispatch.mdx
new file mode 100644
index 00000000..3183599d
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/05-gitlab-ci-dispatch.mdx
@@ -0,0 +1,161 @@
+---
+sidebar_position: 5
+---
+
+# GitLab CI Dispatch Provider
+
+The **GitLab CI Dispatch** provider triggers Unity builds as GitLab CI pipelines via the GitLab REST
+API. This enables teams using GitLab for CI infrastructure to benefit from Game CI's orchestration
+while keeping their existing pipeline runners and configuration.
+
+## Use Cases
+
+- **Hybrid GitHub/GitLab setups** - Game source lives on GitHub, but CI runners and Unity licenses
+ are managed through GitLab.
+- **GitLab Runner infrastructure** - Leverage existing GitLab Runners (including GPU-equipped or
+ macOS runners) for Unity builds.
+- **Self-hosted GitLab** - Organizations running their own GitLab instance can route builds to their
+ internal infrastructure.
+- **GitLab CI/CD catalog** - Integrate orchestrator-triggered builds with existing GitLab CI/CD
+ components and templates.
+
+## Prerequisites
+
+1. A **GitLab project** with CI/CD pipelines enabled.
+2. A **pipeline trigger token** - created in the GitLab project under **Settings > CI/CD > Pipeline
+ trigger tokens**.
+3. A **`.gitlab-ci.yml`** in the target project that accepts orchestrator variables.
+
+### Target Pipeline Template
+
+The GitLab project needs a CI configuration that consumes the orchestrator's pipeline variables:
+
+```yaml
+# .gitlab-ci.yml (in the GitLab project)
+stages:
+ - build
+
+unity-build:
+ stage: build
+ tags:
+ - unity
+ script:
+ - echo "$BUILD_COMMANDS" | base64 -d | bash
+ variables:
+ BUILD_GUID: ''
+ BUILD_IMAGE: ''
+ BUILD_COMMANDS: ''
+ MOUNT_DIR: ''
+ WORKING_DIR: ''
+```
+
+## Configuration
+
+Set `providerStrategy: gitlab-ci` and supply the required inputs:
+
+```yaml
+- uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: gitlab-ci
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ gitlabProjectId: my-group/unity-builds
+ gitlabTriggerToken: ${{ secrets.GITLAB_TRIGGER_TOKEN }}
+ gitlabApiUrl: https://gitlab.com
+ gitlabRef: main
+```
+
+### Self-Hosted GitLab
+
+For self-hosted GitLab instances, set `gitlabApiUrl` to your instance URL:
+
+```yaml
+gitlabApiUrl: https://gitlab.internal.company.com
+```
+
+## How It Works
+
+```mermaid
+sequenceDiagram
+ participant O as Orchestrator (GitHub)
+ participant G as GitLab CI
+ O->>O: 1. Verify project access
+ O->>G: 2. Trigger pipeline with variables (POST /trigger)
+ G->>G: 3. Run pipeline jobs on GitLab Runners
+ G->>G: 4. Execute build commands
+ O->>G: 5. Monitor pipeline status (poll)
+ G-->>O: Status updates
+ G->>G: 6. Complete
+ G-->>O: Fetch job logs
+ O->>O: 7. Collect per-job logs and report
+```
+
+1. **Setup** - The orchestrator verifies access to the GitLab project using the provided token.
+2. **Trigger** - A pipeline is triggered via the GitLab
+ [Pipeline Triggers API](https://docs.gitlab.com/ee/ci/triggers/) with build parameters passed as
+ pipeline variables (`BUILD_GUID`, `BUILD_IMAGE`, `BUILD_COMMANDS`, etc.).
+3. **Monitor** - The orchestrator polls the pipeline status every 15 seconds until it reaches a
+ terminal state (`success`, `failed`, `canceled`, or `skipped`).
+4. **Logs** - On completion, the orchestrator fetches logs for each job in the pipeline individually
+ via the GitLab Jobs API, producing a combined log output.
+5. **Result** - If the pipeline status is not `success`, an error is raised with the terminal
+ status.
+6. **Cleanup** - No resources are created, so cleanup is a no-op.
+
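+The trigger in step 2 corresponds to a plain REST call against the Pipeline Triggers API. The
+equivalent `curl` looks roughly like this (all values are placeholders; the variable names match
+the target pipeline template above):
+
+```bash
+GITLAB_API_URL="https://gitlab.com"
+PROJECT_ID="my-group%2Funity-builds"   # numeric ID or URL-encoded path
+TRIGGER_URL="$GITLAB_API_URL/api/v4/projects/$PROJECT_ID/trigger/pipeline"
+echo "$TRIGGER_URL"
+
+# curl -X POST "$TRIGGER_URL" \
+#   --form "token=$GITLAB_TRIGGER_TOKEN" \
+#   --form "ref=main" \
+#   --form "variables[BUILD_GUID]=abc123" \
+#   --form "variables[BUILD_COMMANDS]=$ENCODED_COMMANDS"
+```
+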
+## Full Workflow Example
+
+```yaml
+name: Build Game (GitLab CI)
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ strategy:
+ matrix:
+ targetPlatform:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: gitlab-ci
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ gitlabProjectId: my-group/unity-builds
+ gitlabTriggerToken: ${{ secrets.GITLAB_TRIGGER_TOKEN }}
+ gitlabApiUrl: https://gitlab.com
+ gitlabRef: main
+```
+
+## Limitations and Considerations
+
+- **API rate limits** - GitLab.com enforces API rate limits (authenticated: 2,000 requests/minute
+ for most plans). The 15-second poll interval keeps usage low, but very long builds with many
+ parallel pipelines should account for this.
+- **Token permissions** - The trigger token is scoped to the project. For fetching logs and pipeline
+ status, the token must also have `read_api` access. A project access token with `api` scope covers
+ both triggering and log retrieval.
+- **No direct artifact transfer** - Artifacts stay in GitLab. Configure GitLab CI artifacts or
+ external storage (S3, GCS) in your `.gitlab-ci.yml` to export build outputs.
+- **Variable size limits** - GitLab pipeline variables have a combined size limit. For large build
+ command payloads, the base64-encoded commands variable may approach this limit. Consider storing
+ build scripts in the GitLab repository instead of passing them inline.
+- **Self-hosted TLS** - When using `gitlabApiUrl` with a self-hosted instance using self-signed
+ certificates, ensure the runner's certificate store trusts the GitLab instance's CA.
+
+## Inputs Reference
+
+| Input | Required | Default | Description |
+| -------------------- | -------- | -------------------- | --------------------------------------------------------------------------------- |
+| `providerStrategy` | Yes | - | Must be `gitlab-ci` |
+| `gitlabProjectId` | Yes | - | GitLab project ID (numeric) or URL-encoded path (e.g., `my-group%2Funity-builds`) |
+| `gitlabTriggerToken` | Yes | - | Pipeline trigger token (created in GitLab project settings) |
+| `gitlabApiUrl` | No | `https://gitlab.com` | GitLab API base URL (for self-hosted instances) |
+| `gitlabRef` | No | `main` | Branch or ref to trigger the pipeline on |
diff --git a/docs/03-github-orchestrator/05-providers/05-local.mdx b/docs/03-github-orchestrator/05-providers/05-local.mdx
new file mode 100644
index 00000000..4dc12e77
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/05-local.mdx
@@ -0,0 +1,43 @@
+# Local
+
+Runs builds directly on the host machine with no container isolation. The simplest provider - useful
+for development and testing.
+
+## Requirements
+
+- Unity installed on the build machine.
+- No Docker or cloud account needed.
+
+## Example Workflow
+
+### GitHub Actions (self-hosted runner)
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+### Command Line
+
+```bash
+yarn run cli -m cli-build \
+ --providerStrategy local \
+ --projectPath /path/to/your/project \
+ --targetPlatform StandaloneLinux64
+```
+
+## When to Use
+
+- Local development and testing of Orchestrator workflows
+- Debugging build issues before deploying to cloud providers
+- Self-hosted runners where you want direct execution without containers
+
+## ⚠️ Notes
+
+- Builds run directly on the host with no isolation. Ensure the machine has the required Unity
+ version and dependencies installed.
+- This is the fallback provider - if a custom provider fails to load, Orchestrator falls back to
+ `local`.
diff --git a/docs/03-github-orchestrator/05-providers/06-custom-providers.mdx b/docs/03-github-orchestrator/05-providers/06-custom-providers.mdx
new file mode 100644
index 00000000..1625c0b0
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/06-custom-providers.mdx
@@ -0,0 +1,190 @@
+# Custom Providers
+
+Orchestrator uses a plugin system that lets you extend it with custom providers. A **provider** is a
+pluggable backend that controls where and how your builds run. Built-in providers include `aws`,
+`k8s`, `local-docker`, and `local`.
+
+With custom providers, you can point `providerStrategy` at a GitHub repository, NPM package, or
+local path and Orchestrator will dynamically load it at runtime.
+
+```mermaid
+graph LR
+  PS["providerStrategy\nuser/repo\nnpm-package\n./local/path"]
+  Fetch["Clone repo /\nInstall pkg /\nResolve path\ncached in .provider-cache/"]
+  PI["Provider Interface\nsetupWorkflow\nrunTask\ncleanup"]
+  PS -->|fetch| Fetch --> PI
+```
+
+## Using a Custom Provider
+
+Set `providerStrategy` to a provider source instead of a built-in name:
+
+```yaml
+# GitHub repository
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: 'https://github.com/your-org/my-provider'
+ targetPlatform: StandaloneLinux64
+
+# GitHub shorthand
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: 'your-org/my-provider'
+ targetPlatform: StandaloneLinux64
+
+# Specific branch
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: 'your-org/my-provider@develop'
+ targetPlatform: StandaloneLinux64
+```
+
+### Supported Source Formats
+
+| Format | Example |
+| ------------------------------------- | -------------------------------------------------------- |
+| GitHub HTTPS URL | `https://github.com/user/repo` |
+| GitHub URL with branch | `https://github.com/user/repo/tree/main` |
+| GitHub URL with branch and path | `https://github.com/user/repo/tree/main/src/my-provider` |
+| GitHub shorthand | `user/repo` |
+| GitHub shorthand with branch | `user/repo@develop` |
+| GitHub shorthand with branch and path | `user/repo@develop/src/my-provider` |
+| GitHub SSH | `git@github.com:user/repo.git` |
+| NPM package | `my-provider-package` |
+| Scoped NPM package | `@scope/my-provider` |
+| Local relative path | `./my-local-provider` |
+| Local absolute path | `/path/to/provider` |
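+
+During development, a local checkout is the fastest way to iterate on a provider. A minimal sketch
+(the path `./providers/my-provider` is a placeholder):
+
+```yaml
+# Load a provider from a directory next to the workflow
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: './providers/my-provider'
+    targetPlatform: StandaloneLinux64
+```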
+
+## Creating a Custom Provider
+
+A provider is a module that exports a class implementing the `ProviderInterface`. The module must
+have an entry point at one of: `index.js`, `index.ts`, `src/index.js`, `src/index.ts`,
+`lib/index.js`, `lib/index.ts`, or `dist/index.js`.
+
+### Required Methods
+
+Every provider must implement these 7 methods:
+
+```typescript
+interface ProviderInterface {
+  setupWorkflow(
+    buildGuid: string,
+    buildParameters: BuildParameters,
+    branchName: string,
+    defaultSecretsArray: {
+      ParameterKey: string;
+      EnvironmentVariable: string;
+      ParameterValue: string;
+    }[],
+  ): Promise<void>;
+
+  runTaskInWorkflow(
+    buildGuid: string,
+    image: string,
+    commands: string,
+    mountdir: string,
+    workingdir: string,
+    environment: OrchestratorEnvironmentVariable[],
+    secrets: OrchestratorSecret[],
+  ): Promise<string>;
+
+  cleanupWorkflow(
+    buildParameters: BuildParameters,
+    branchName: string,
+    defaultSecretsArray: {
+      ParameterKey: string;
+      EnvironmentVariable: string;
+      ParameterValue: string;
+    }[],
+  ): Promise<void>;
+
+  garbageCollect(
+    filter: string,
+    previewOnly: boolean,
+    olderThan: number,
+    fullCache: boolean,
+    baseDependencies: boolean,
+  ): Promise<string>;
+
+  listResources(): Promise<ProviderResource[]>;
+  listWorkflow(): Promise<ProviderWorkflow[]>;
+  watchWorkflow(): Promise<string>;
+}
+```
+
+### Example Implementation
+
+```typescript
+// index.ts
+export default class MyProvider {
+ constructor(private buildParameters: any) {}
+
+ async setupWorkflow(buildGuid, buildParameters, branchName, defaultSecretsArray) {
+ // Initialize your build environment
+ }
+
+ async runTaskInWorkflow(buildGuid, image, commands, mountdir, workingdir, environment, secrets) {
+ // Execute the build task in your environment
+ return 'Build output';
+ }
+
+ async cleanupWorkflow(buildParameters, branchName, defaultSecretsArray) {
+ // Tear down resources after the build
+ }
+
+ async garbageCollect(filter, previewOnly, olderThan, fullCache, baseDependencies) {
+ // Clean up old resources
+ return 'Garbage collection complete';
+ }
+
+ async listResources() {
+ // Return active resources
+ return [];
+ }
+
+ async listWorkflow() {
+ // Return running workflows
+ return [];
+ }
+
+ async watchWorkflow() {
+ // Stream logs from a running workflow
+ return '';
+ }
+}
+```
+
+## How It Works
+
+When `providerStrategy` is set to a value that doesn't match a built-in provider name, Orchestrator
+will:
+
+1. **Detect the source type** - GitHub URL, NPM package, or local path.
+2. **Fetch the provider** - For GitHub repos, the repository is cloned (shallow, depth 1) into a
+ `.provider-cache/` directory. Cached repos are automatically updated on subsequent runs.
+3. **Load the module** - The entry point is imported and the default export is used.
+4. **Validate the interface** - All 7 required methods are checked. If any are missing, loading
+ fails.
+5. **Fallback** - If loading fails for any reason, Orchestrator logs the error and falls back to the
+ local provider so your pipeline doesn't break.
+
+## Caching
+
+GitHub repositories are cached in the `.provider-cache/` directory, keyed by owner, repo, and
+branch. On subsequent runs the loader checks for updates and pulls them automatically.
+
+### Environment Variables
+
+| Variable | Default | Description |
+| -------------------- | ----------------- | --------------------------------------- |
+| `PROVIDER_CACHE_DIR` | `.provider-cache` | Custom cache directory for cloned repos |
+| `GIT_TIMEOUT` | `30000` | Git operation timeout in milliseconds |
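+
+For example, to relocate the cache and extend the git timeout on a self-hosted runner (the paths
+and values here are illustrative):
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  env:
+    PROVIDER_CACHE_DIR: /opt/provider-cache
+    GIT_TIMEOUT: '60000'
+  with:
+    providerStrategy: 'your-org/my-provider@v1.0'
+    targetPlatform: StandaloneLinux64
+```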
+
+## Best Practices
+
+- **Pin a branch or tag** - Use `user/repo@v1.0` or a specific branch to avoid unexpected changes.
+- **Test locally first** - Use a local path during development before publishing.
+- **Handle errors gracefully** - Your provider methods should throw clear errors so Orchestrator can
+ log them and fall back if needed.
+- **Keep it lightweight** - The provider module is loaded at runtime. Minimize dependencies to keep
+ startup fast.
diff --git a/docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx b/docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx
new file mode 100644
index 00000000..11d1c062
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx
@@ -0,0 +1,172 @@
+---
+sidebar_position: 6
+---
+
+# Remote PowerShell Provider
+
+The **Remote PowerShell** provider executes Unity builds on remote Windows machines via PowerShell
+Remoting. Unlike CI dispatch providers that trigger pipelines on another CI system, this provider
+connects directly to target machines and runs build commands over a remote session.
+
+**Category:** Infrastructure Automation - executes directly on target machines rather than
+dispatching to a CI platform.
+
+## Use Cases
+
+- **Dedicated build machines** - On-premises Windows workstations or servers with Unity installed
+ that are not part of any CI system.
+- **Windows build farms** - Multiple Windows machines available for parallel builds, managed without
+ a formal CI/CD platform.
+- **Air-gapped environments** - Build machines on internal networks not reachable by cloud CI
+ services.
+- **Quick prototyping** - Run builds on a colleague's powerful workstation without setting up a full
+ CI pipeline.
+
+## Prerequisites
+
+1. **PowerShell 5.1+** on both the orchestrator runner and target machine(s).
+2. **PowerShell Remoting enabled** on the target machine via one of:
+ - **WinRM (WSMan)** - Windows default. Enable with `Enable-PSRemoting -Force` on the target.
+ - **SSH** - Cross-platform transport. Requires OpenSSH server on the target.
+3. **Network connectivity** - The orchestrator runner must reach the target machine on port 5985
+ (WinRM HTTP), 5986 (WinRM HTTPS), or 22 (SSH).
+4. **Unity installed** on the target machine with the required build support modules.
+
+### Enabling WinRM on the Target
+
+```powershell
+# Run as Administrator on the target machine
+Enable-PSRemoting -Force
+
+# Run on the connecting machine (the orchestrator runner) so it trusts the target
+Set-Item WSMan:\localhost\Client\TrustedHosts -Value "build-server.internal.local" -Force
+```
+
+### Enabling SSH on the Target
+
+```powershell
+# Install OpenSSH Server (Windows 10/11, Server 2019+)
+Add-WindowsCapability -Online -Name OpenSSH.Server
+Start-Service sshd
+Set-Service -Name sshd -StartupType Automatic
+```
+
+## Configuration
+
+Set `providerStrategy: remote-powershell` and supply the connection details:
+
+### WinRM Transport (Default)
+
+```yaml
+- uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: remote-powershell
+ targetPlatform: StandaloneWindows64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ remotePowershellHost: build-server.internal.local
+ remotePowershellTransport: wsman
+ remotePowershellCredential: ${{ secrets.BUILD_SERVER_USER }}:${{ secrets.BUILD_SERVER_PASS }}
+```
+
+### SSH Transport
+
+```yaml
+- uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: remote-powershell
+ targetPlatform: StandaloneWindows64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ remotePowershellHost: build-server.internal.local
+ remotePowershellTransport: ssh
+```
+
+When using SSH transport, authentication uses the runner's SSH key configuration rather than
+explicit credentials.
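+
+On a GitHub-hosted runner, the key can be provisioned in a step before the build. The secret name
+and hostname below are placeholders:
+
+```yaml
+- name: Configure SSH key for the build server
+  run: |
+    mkdir -p ~/.ssh
+    echo "${{ secrets.BUILD_SERVER_SSH_KEY }}" > ~/.ssh/id_ed25519
+    chmod 600 ~/.ssh/id_ed25519
+    ssh-keyscan -H build-server.internal.local >> ~/.ssh/known_hosts
+```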
+
+## How It Works
+
+```mermaid
+sequenceDiagram
+ participant O as Orchestrator (runner)
+ participant T as Target Machine
+
+ O->>T: 1. Test connection (Test-WSMan)
+ O->>T: 2. Send build commands (Invoke-Command)
+ T->>T: 3. Set environment variables
+ T->>T: 4. Execute Unity build commands
+ T->>O: 5. Stream output back
+ T->>T: 6. Complete
+```
+
+1. **Setup** - The orchestrator tests connectivity to the remote host using `Test-WSMan`.
+2. **Execution** - Build commands are wrapped in a PowerShell script block that sets environment
+ variables, changes to the working directory, and runs the build. The entire block is sent via
+ `Invoke-Command`.
+3. **Transport** - For WinRM, credentials are constructed as a `PSCredential` object from the
+ `remotePowershellCredential` input. For SSH, `Invoke-Command -HostName` is used instead.
+4. **Output** - Command output streams back to the orchestrator in real time through the remote
+ session.
+5. **Cleanup** - Remote PowerShell sessions are stateless per invocation, so no cleanup is needed.
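+
+The wrapping in step 2 amounts to concatenating environment assignments, a directory change, and
+the build commands into one script. A simplified sketch, not the provider's actual code:
+
+```typescript
+interface EnvVar {
+  name: string;
+  value: string;
+}
+
+// Compose the PowerShell script text that is sent to the target via Invoke-Command.
+function wrapRemoteScript(environment: EnvVar[], workingDir: string, commands: string): string {
+  const envLines = environment.map((v) => `$env:${v.name} = '${v.value}'`);
+  return [...envLines, `Set-Location '${workingDir}'`, commands].join('\n');
+}
+```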
+
+## Security Considerations
+
+- **Credential handling** - The `remotePowershellCredential` input expects a `username:password`
+ format. Always store this as a GitHub secret, never in plain text. The provider constructs a
+ `PSCredential` object at runtime and does not persist credentials.
+- **WinRM HTTPS** - For production use, configure WinRM over HTTPS (port 5986) with a valid TLS
+ certificate to encrypt traffic. The default HTTP transport (port 5985) sends credentials in clear
+ text over the network.
+- **SSH key authentication** - Prefer SSH transport with key-based authentication over WinRM with
+ password credentials when possible. SSH keys avoid transmitting passwords entirely.
+- **Network segmentation** - Restrict WinRM/SSH access to the build machines from only the
+ orchestrator runners' IP range.
+- **Least privilege** - The remote user account should have only the permissions needed to run Unity
+ builds (read/write to the project directory, execute Unity).
+
+## Full Workflow Example
+
+```yaml
+name: Build Game (Remote PowerShell)
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: remote-powershell
+ targetPlatform: StandaloneWindows64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ remotePowershellHost: ${{ secrets.BUILD_SERVER_HOST }}
+ remotePowershellTransport: wsman
+ remotePowershellCredential:
+ ${{ secrets.BUILD_SERVER_USER }}:${{ secrets.BUILD_SERVER_PASS }}
+```
+
+## Limitations and Considerations
+
+- **Windows targets only** - PowerShell Remoting via WinRM is a Windows technology. For Linux/macOS
+ build machines, use SSH transport or consider the Ansible provider instead.
+- **No container isolation** - Builds run directly on the target machine's environment. Conflicting
+ Unity versions or project dependencies between concurrent builds can cause issues.
+- **No built-in queuing** - The provider does not queue builds. If multiple orchestrator runs
+ dispatch to the same machine simultaneously, they will execute concurrently (or fail if resources
+ conflict).
+- **Garbage collection** - Not supported. Build artifacts and temporary files on the remote machine
+ must be managed separately.
+- **Firewall configuration** - Ensure the required ports (5985/5986 for WinRM, 22 for SSH) are open
+ between the orchestrator runner and the target machine.
+
+## Inputs Reference
+
+| Input | Required | Default | Description |
+| ---------------------------- | -------- | ------- | -------------------------------------------------------------- |
+| `providerStrategy` | Yes | - | Must be `remote-powershell` |
+| `remotePowershellHost` | Yes | - | Hostname or IP address of the target machine |
+| `remotePowershellCredential` | No | - | Credentials in `username:password` format (required for WinRM) |
+| `remotePowershellTransport` | No | `wsman` | Transport protocol: `wsman` (WinRM) or `ssh` |
diff --git a/docs/03-github-orchestrator/05-providers/07-ansible.mdx b/docs/03-github-orchestrator/05-providers/07-ansible.mdx
new file mode 100644
index 00000000..5656af70
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/07-ansible.mdx
@@ -0,0 +1,265 @@
+---
+sidebar_position: 7
+---
+
+# Ansible Provider
+
+The **Ansible** provider orchestrates Unity builds by running Ansible playbooks against managed
+infrastructure. This enables teams with existing Ansible-based infrastructure management to
+integrate Unity builds into their configuration-as-code workflows.
+
+**Category:** Infrastructure Automation - runs playbooks against managed inventory rather than
+dispatching to a CI platform.
+
+## Use Cases
+
+- **Large-scale build infrastructure** - Distribute builds across a fleet of machines managed by
+ Ansible, with automatic host selection and load distribution handled by your playbooks.
+- **Configuration-as-code** - Define build machine setup, Unity installation, and build execution as
+ versioned Ansible playbooks alongside your game source.
+- **Heterogeneous environments** - Target different machine types (Windows, Linux, macOS) from a
+ single inventory with platform-specific playbooks.
+- **Existing Ansible infrastructure** - Teams already using Ansible for server management can extend
+ their playbooks to handle Unity builds without adopting a separate CI system.
+
+## Prerequisites
+
+1. **Ansible** installed on the orchestrator runner (`ansible-playbook` must be on `PATH`).
+2. An **inventory file** or dynamic inventory script defining available build machines.
+3. A **playbook** that accepts the orchestrator's build parameters as variables and executes the
+ Unity build.
+4. SSH connectivity from the orchestrator runner to the target machines (Ansible's default
+ transport).
+
+### Installing Ansible on the Runner
+
+```yaml
+# In your GitHub Actions workflow
+- name: Install Ansible
+ run: pip install ansible
+```
+
+Or use a runner image that includes Ansible pre-installed.
+
+## Configuration
+
+Set `providerStrategy: ansible` and supply the required inputs:
+
+```yaml
+- uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: ansible
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ ansibleInventory: ./infrastructure/inventory.yml
+ ansiblePlaybook: ./infrastructure/unity-build.yml
+ ansibleExtraVars: '{"unity_version": "2022.3.0f1"}'
+ ansibleVaultPassword: ${{ secrets.ANSIBLE_VAULT_PASSWORD_FILE }}
+```
+
+## Playbook Requirements
+
+The orchestrator passes build parameters to the playbook as extra variables. Your playbook must
+accept and use these variables:
+
+| Variable | Description |
+| ---------------- | ---------------------------------- |
+| `build_guid` | Unique identifier for the build |
+| `build_image` | Unity Docker image (if applicable) |
+| `build_commands` | Build commands to execute |
+| `mount_dir` | Mount directory path |
+| `working_dir` | Working directory path |
+
+Additionally, any environment variables from the orchestrator are passed as lowercase variable
+names.
+
+### Example Playbook
+
+```yaml
+# infrastructure/unity-build.yml
+---
+- name: Unity Build
+ hosts: build_machines
+ become: false
+ vars:
+ build_guid: ''
+ build_commands: ''
+ working_dir: '/builds'
+
+ tasks:
+ - name: Ensure build directory exists
+ file:
+ path: '{{ working_dir }}/{{ build_guid }}'
+ state: directory
+
+ - name: Clone repository
+ git:
+ repo: '{{ git_repo_url }}'
+ dest: '{{ working_dir }}/{{ build_guid }}/project'
+ version: "{{ git_ref | default('main') }}"
+
+ - name: Execute build commands
+ shell: '{{ build_commands }}'
+ args:
+ chdir: '{{ working_dir }}/{{ build_guid }}/project'
+ register: build_result
+
+ - name: Upload build artifacts
+ fetch:
+ src: '{{ working_dir }}/{{ build_guid }}/project/Builds/'
+ dest: './artifacts/{{ build_guid }}/'
+ flat: true
+ when: build_result.rc == 0
+
+ - name: Cleanup build directory
+ file:
+ path: '{{ working_dir }}/{{ build_guid }}'
+ state: absent
+ when: cleanup | default(true) | bool
+```
+
+### Example Inventory
+
+```yaml
+# infrastructure/inventory.yml
+all:
+ children:
+ build_machines:
+ hosts:
+ build-01:
+ ansible_host: 192.168.1.10
+ ansible_user: builder
+ build-02:
+ ansible_host: 192.168.1.11
+ ansible_user: builder
+ vars:
+ ansible_python_interpreter: /usr/bin/python3
+```
+
+## How It Works
+
+```mermaid
+sequenceDiagram
+ participant O as Orchestrator (runner)
+ participant A as Ansible / Target Machines
+ O->>O: 1. Check ansible is installed, verify inventory
+ O->>O: 2. Build extra-vars from build parameters
+ O->>A: Run ansible-playbook
+ A->>A: 3. Connect to hosts in inventory
+ A->>A: 4. Execute playbook tasks
+ A-->>O: stdout stream
+ O->>O: 5. Capture output and report
+ A->>A: 6. Complete
+```
+
+1. **Setup** - The orchestrator verifies that `ansible` is available on `PATH` and that the
+ inventory file exists.
+2. **Variable assembly** - Build parameters are assembled into a JSON extra-vars payload.
+   User-provided `ansibleExtraVars` are merged in. Orchestrator secrets are passed as environment
+   variables to the `ansible-playbook` process.
+3. **Execution** - The orchestrator runs `ansible-playbook` with the inventory, playbook, extra
+ vars, and optional vault password file. Output streams to the orchestrator in real time.
+4. **Result** - A zero exit code means success. Any non-zero exit code raises an error with the
+ playbook output.
+5. **Cleanup** - No orchestrator-side resources to clean up. Playbook cleanup tasks should handle
+ remote machine cleanup.
+
+## Ansible Vault Integration
+
+For sensitive variables (license keys, credentials), use
+[Ansible Vault](https://docs.ansible.com/ansible/latest/vault_guide/index.html):
+
+1. Create an encrypted variables file:
+
+ ```bash
+ ansible-vault create infrastructure/secrets.yml
+ ```
+
+2. Reference it in your playbook:
+
+ ```yaml
+ - name: Unity Build
+ hosts: build_machines
+ vars_files:
+ - secrets.yml
+ ```
+
+3. Pass the vault password file path to the orchestrator:
+
+ ```yaml
+ ansibleVaultPassword: /path/to/vault-password-file
+ ```
+
+ In GitHub Actions, write the vault password to a temporary file from a secret:
+
+ ```yaml
+ - name: Write vault password
+ run: echo "${{ secrets.ANSIBLE_VAULT_PASSWORD }}" > /tmp/vault-pass
+ - uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: ansible
+ ansibleVaultPassword: /tmp/vault-pass
+ # ... other inputs
+ ```
+
+## Full Workflow Example
+
+```yaml
+name: Build Game (Ansible)
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - name: Install Ansible
+ run: pip install ansible
+
+ - name: Write vault password
+ run: echo "${{ secrets.ANSIBLE_VAULT_PASSWORD }}" > /tmp/vault-pass
+
+ - uses: game-ci/unity-builder@main
+ with:
+ providerStrategy: ansible
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ ansibleInventory: ./infrastructure/inventory.yml
+ ansiblePlaybook: ./infrastructure/unity-build.yml
+ ansibleExtraVars: '{"unity_version": "2022.3.0f1", "cleanup": true}'
+ ansibleVaultPassword: /tmp/vault-pass
+
+ - name: Clean up vault password
+ if: always()
+ run: rm -f /tmp/vault-pass
+```
+
+## Limitations and Considerations
+
+- **Playbook required** - The Ansible provider does not ship with a default playbook. You must
+ provide a playbook that handles repository checkout, Unity build execution, and artifact
+ collection for your specific infrastructure.
+- **Ansible installation** - Ansible must be installed on the runner. GitHub-hosted runners do not
+ include Ansible by default; add a `pip install ansible` step or use a custom runner image.
+- **No garbage collection** - The provider does not manage remote machine state. Build cleanup
+ (temporary files, old builds) should be handled within playbook tasks.
+- **Sequential execution** - The orchestrator runs a single `ansible-playbook` command and waits for
+ it to complete. Parallelism across hosts is managed by Ansible's built-in `forks` setting, not by
+ the orchestrator.
+- **Extra-vars JSON parsing** - The `ansibleExtraVars` input is parsed as JSON. If parsing fails,
+ the raw string is passed through, which may cause unexpected behavior. Always provide valid JSON.
+
+## Inputs Reference
+
+| Input | Required | Default | Description |
+| ---------------------- | -------- | ------- | ---------------------------------------------------------- |
+| `providerStrategy` | Yes | - | Must be `ansible` |
+| `ansibleInventory` | Yes | - | Path to Ansible inventory file or dynamic inventory script |
+| `ansiblePlaybook` | Yes | - | Path to Ansible playbook for Unity builds |
+| `ansibleExtraVars` | No | - | Additional Ansible variables as a JSON string |
+| `ansibleVaultPassword` | No | - | Path to Ansible Vault password file |
diff --git a/docs/03-github-orchestrator/05-providers/07-community-providers.mdx b/docs/03-github-orchestrator/05-providers/07-community-providers.mdx
new file mode 100644
index 00000000..7e5b7ed4
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/07-community-providers.mdx
@@ -0,0 +1,47 @@
+# Community Providers
+
+Community providers are third-party Orchestrator providers built and shared by the community. They
+are **not maintained by the Game CI team** but are listed here to help you discover and evaluate
+options.
+
+:::caution
+
+Community providers are provided as-is. Review the source code and documentation of any community
+provider before using it in your pipelines.
+
+:::
+
+## Provider List
+
+_No community providers have been submitted yet. Yours could be the first!_
+
+## Submit Your Provider
+
+Built a custom provider? Share it with the community by adding it to this page.
+
+1. [Edit this file directly on GitHub](https://github.com/game-ci/documentation/edit/main/docs/03-github-orchestrator/05-providers/07-community-providers.mdx)
+2. Add your provider using the template below
+3. Submit a pull request for review
+
+### Template
+
+```markdown
+### Provider Name
+
+| | |
+| -------------------- | -------------------------------------------- |
+| **Repository** | [user/repo](https://github.com/user/repo) |
+| **providerStrategy** | `user/repo` |
+| **Description** | Brief description of what the provider does. |
+| **Maintainer** | [@username](https://github.com/username) |
+```
+
+Your provider should:
+
+- Have a public GitHub repository or published NPM package
+- Implement the full [ProviderInterface](custom-providers#required-methods)
+- Include a README with setup instructions
+- Be actively maintained
+
+The Game CI team will review submissions for completeness before merging. Inclusion in this list
+does not imply endorsement or a security guarantee.
diff --git a/docs/03-github-orchestrator/05-providers/08-github-integration.mdx b/docs/03-github-orchestrator/05-providers/08-github-integration.mdx
new file mode 100644
index 00000000..9b22691f
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/08-github-integration.mdx
@@ -0,0 +1,54 @@
+# GitHub Integration
+
+Orchestrator has first-class support for GitHub Actions. When running from a GitHub Actions
+workflow, Orchestrator automatically detects the repository, branch, and commit from the
+environment.
+
+## GitHub Checks
+
+By enabling the [`githubCheck`](../api-reference#github-integration) parameter, the orchestrator job
+will create a GitHub check for each step.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ githubCheck: true
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+The check shows useful details about the job. This is especially valuable in combination with
+async mode, as you can run very long jobs and monitor their progress directly from the GitHub pull
+request UI.
+
+## Async Mode
+
+Set [`asyncOrchestrator: true`](../api-reference#github-integration) to start a build without
+waiting for it to complete. The GitHub Action will return immediately and you can check progress via
+GitHub Checks or by running the [`watch` mode](../api-reference#modes).
+
+```mermaid
+sequenceDiagram
+ participant GA as GitHub Action
+ participant C as Cloud
+ participant PR as GitHub PR
+ GA->>C: 1. Dispatch
+ C->>C: 2. Building...
+ GA->>GA: 3. Return (done, action finishes in seconds)
+ C->>PR: 4. Check updated
+ C->>C: 5. Complete (build runs for minutes/hours)
+ C->>PR: 6. Check updated (monitor from the PR page)
+```
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ asyncOrchestrator: true
+ githubCheck: true
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+See the [AWS](aws) and [Kubernetes](kubernetes) provider pages for full workflow files.
diff --git a/docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx b/docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx
new file mode 100644
index 00000000..b4b74050
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx
@@ -0,0 +1,30 @@
+# GitLab Integration
+
+You can use GitLab with Orchestrator via the command-line mode. Orchestrator is not limited to
+GitHub Actions - any CI system that can run shell commands can trigger Orchestrator builds.
+
+## Setup
+
+1. Install the Orchestrator CLI (see [Command Line](../examples/command-line))
+2. Set your git credentials and provider configuration as environment variables
+3. Run the CLI from your `.gitlab-ci.yml` pipeline
+
+## Example `.gitlab-ci.yml`
+
+```yaml
+build-unity:
+ stage: build
+ script:
+ - git clone https://github.com/game-ci/unity-builder.git /tmp/game-ci
+ - cd /tmp/game-ci && yarn install
+ - >
+ yarn run cli -m cli-build --projectPath $CI_PROJECT_DIR --providerStrategy aws
+ --gitPrivateToken $GIT_TOKEN
+ variables:
+ AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
+ AWS_DEFAULT_REGION: eu-west-2
+```
+
+See the [Command Line](../examples/command-line) page for more details on CLI usage and the
+[Secrets](../secrets) page for managing credentials.
diff --git a/docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx b/docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx
new file mode 100644
index 00000000..a53f00de
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx
@@ -0,0 +1,113 @@
+# GCP Cloud Run (Experimental)
+
+Run Unity builds as [Cloud Run Jobs](https://cloud.google.com/run/docs/create-jobs) - one-off
+serverless container executions with configurable storage backends.
+
+:::caution Experimental
+
+This provider is experimental. APIs and behavior may change between releases.
+
+:::
+
+```mermaid
+graph LR
+  Runner["GitHub Actions Runner\nunity-builder\nproviderStrategy: gcp-cloud-run\ngcpStorageType:\ngcs-fuse / gcs-copy /\nnfs / in-memory"]
+  API["Cloud Run Jobs API"]
+  Job["Job: unity-build\nImage: unityci\nStorage: ..."]
+  Runner -->|gcloud CLI| API --> Job
+```
+
+## Prerequisites
+
+- Google Cloud SDK installed and authenticated (`gcloud auth login` or
+ `GOOGLE_APPLICATION_CREDENTIALS`)
+- Cloud Run Jobs API enabled: `gcloud services enable run.googleapis.com`
+- Service account with roles: **Cloud Run Admin**, **Storage Admin**, **Logs Viewer**
+
+## Storage Types
+
+Set `gcpStorageType` to control how the build accesses large files. Each type has different
+trade-offs for performance, persistence, and complexity.
+
+| Type | How it works | Best for | Size limit |
+| ----------- | -------------------------------------------------- | ---------------------------------- | ---------- |
+| `gcs-fuse` | Mounts a GCS bucket as a POSIX filesystem via FUSE | Large sequential I/O, artifacts | Unlimited |
+| `gcs-copy` | Copies artifacts in/out via `gsutil` | Simple upload/download | Unlimited |
+| `nfs` | Mounts a Filestore instance as NFS share | Unity Library (small random reads) | 100 TiB |
+| `in-memory` | tmpfs volume inside the container | Scratch/temp during builds | 32 GiB |
+
+### When to use each type
+
+- **gcs-fuse** (default) - Good general-purpose option. Handles very large files well and persists
+ across builds. Has some latency on small file I/O and eventual consistency edge cases.
+
+- **gcs-copy** - Simpler than FUSE (no driver). Copies everything before the build starts and
+ uploads after it finishes. Best when you only need artifact upload/download, not live filesystem
+ access during the build.
+
+- **nfs** - True POSIX semantics with good random I/O performance. The best choice for caching the
+ Unity Library folder (thousands of small files). Requires a
+ [Filestore](https://cloud.google.com/filestore) instance and a VPC connector.
+
+- **in-memory** - Fastest option (RAM-backed). Data is lost when the job ends. Capped at 32 GiB. Use
+ for temporary build artifacts that don't need to persist.
+
+## Inputs
+
+| Input | Default | Description |
+| ------------------- | ----------------------- | ----------------------------------------------- |
+| `gcpProject` | `$GOOGLE_CLOUD_PROJECT` | GCP project ID |
+| `gcpRegion` | `us-central1` | Cloud Run region |
+| `gcpStorageType` | `gcs-fuse` | Storage backend (see above) |
+| `gcpBucket` | - | GCS bucket name (for `gcs-fuse`, `gcs-copy`) |
+| `gcpFilestoreIp` | - | Filestore IP address (for `nfs`) |
+| `gcpFilestoreShare` | `/share1` | Filestore share name (for `nfs`) |
+| `gcpMachineType` | `e2-standard-4` | Machine type |
+| `gcpDiskSizeGb` | `100` | In-memory volume size (for `in-memory`, max 32) |
+| `gcpServiceAccount` | - | Service account email |
+| `gcpVpcConnector` | - | VPC connector (required for `nfs`) |
+
+## Examples
+
+### GCS FUSE - mount bucket as filesystem
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: gcp-cloud-run
+ gcpProject: my-project
+ gcpBucket: my-unity-builds
+ targetPlatform: StandaloneLinux64
+```
+
+### NFS - Filestore for fast Library caching
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: gcp-cloud-run
+ gcpProject: my-project
+ gcpStorageType: nfs
+ gcpFilestoreIp: 10.0.0.2
+ gcpFilestoreShare: /share1
+ gcpVpcConnector: my-connector
+ targetPlatform: StandaloneLinux64
+```
+
+### Copy - simple artifact upload/download
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: gcp-cloud-run
+ gcpProject: my-project
+ gcpStorageType: gcs-copy
+ gcpBucket: my-unity-builds
+ targetPlatform: StandaloneLinux64
+```
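+
+### In-memory - fast scratch storage
+
+For completeness, a sketch of the `in-memory` backend based on the inputs above. The
+`gcpDiskSizeGb` value here is illustrative; for this storage type it is capped at 32:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: gcp-cloud-run
+    gcpProject: my-project
+    gcpStorageType: in-memory
+    gcpDiskSizeGb: 16
+    targetPlatform: StandaloneLinux64
+```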
+
+## Related
+
+- [Azure ACI](azure-aci) - Azure Container Instances provider
+- [Custom Providers](custom-providers) - TypeScript provider plugins
+- [CLI Provider Protocol](cli-provider-protocol) - Write providers in any language
diff --git a/docs/03-github-orchestrator/05-providers/11-azure-aci.mdx b/docs/03-github-orchestrator/05-providers/11-azure-aci.mdx
new file mode 100644
index 00000000..b8c16462
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/11-azure-aci.mdx
@@ -0,0 +1,113 @@
+# Azure ACI (Experimental)
+
+Run Unity builds as
+[Azure Container Instances](https://learn.microsoft.com/en-us/azure/container-instances/) -
+serverless containers with configurable storage backends.
+
+:::caution Experimental
+
+This provider is experimental. APIs and behavior may change between releases.
+
+:::
+
+```mermaid
+graph LR
+  Runner["GitHub Actions Runner\nunity-builder\nproviderStrategy: azure-aci\nazureStorageType:\nazure-files / blob-copy /\nazure-files-nfs / in-memory"]
+  API["Container Instances API"]
+  Container["Container: unity-build\nImage: unityci\nStorage: ..."]
+  Runner -->|Azure CLI| API --> Container
+```
+
+## Prerequisites
+
+- Azure CLI installed and authenticated (`az login` or service principal)
+- A resource group (auto-created if it doesn't exist)
+- **Contributor** role on the resource group
+
+## Storage Types
+
+Set `azureStorageType` to control how the build accesses large files.
+
+| Type | How it works | Best for | Size limit |
+| ----------------- | --------------------------------------------- | ---------------------------------- | ---------------- |
+| `azure-files` | Mounts an Azure File Share via SMB | General artifact storage | 100 TiB |
+| `blob-copy` | Copies artifacts in/out via `az storage blob` | Simple upload/download | Unlimited |
+| `azure-files-nfs` | Mounts an Azure File Share via NFS 4.1 | Unity Library (small random reads) | 100 TiB |
+| `in-memory` | emptyDir volume (tmpfs) | Scratch/temp during builds | Container memory |
+
+### When to use each type
+
+- **azure-files** (default) - Simplest persistent option. Works out of the box - auto-creates the
+ storage account and file share if they don't exist. SMB has some overhead from opportunistic
+ locking but is reliable for most use cases.
+
+- **blob-copy** - Avoids mount overhead entirely. Copies everything before the build starts and
+ uploads after it finishes. Good when you only need artifact upload/download.
+
+- **azure-files-nfs** - Eliminates SMB lock overhead for better random I/O performance with Unity
+ Library files (thousands of small files). Requires **Premium FileStorage** (auto-created) and
+ **VNet integration** via `azureSubnetId`.
+
+- **in-memory** - Fastest option (RAM-backed). Data is lost when the container stops. Size is
+ limited by the container's memory allocation. Use for temporary build artifacts.
+
+## Inputs
+
+| Input | Default | Description |
+| --------------------- | ------------------------ | ------------------------------------------------------ |
+| `azureResourceGroup` | `$AZURE_RESOURCE_GROUP` | Resource group name |
+| `azureLocation` | `eastus` | Azure region |
+| `azureStorageType` | `azure-files` | Storage backend (see above) |
+| `azureStorageAccount` | `$AZURE_STORAGE_ACCOUNT` | Storage account name |
+| `azureBlobContainer` | `unity-builds` | Blob container (for `blob-copy`) |
+| `azureFileShareName` | `unity-builds` | File share name (for `azure-files`, `azure-files-nfs`) |
+| `azureSubscriptionId` | `$AZURE_SUBSCRIPTION_ID` | Subscription ID |
+| `azureCpu` | `4` | CPU cores (1–16) |
+| `azureMemoryGb` | `16` | Memory in GB (1–16) |
+| `azureDiskSizeGb` | `100` | File share quota in GB |
+| `azureSubnetId` | - | Subnet ID for VNet (required for `azure-files-nfs`) |
+
+## Examples
+
+### Azure Files - SMB mount (default)
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: azure-aci
+ azureResourceGroup: my-rg
+ azureStorageAccount: myunitybuilds
+ targetPlatform: StandaloneLinux64
+```
+
+### NFS - better POSIX performance
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: azure-aci
+ azureResourceGroup: my-rg
+ azureStorageType: azure-files-nfs
+ azureStorageAccount: myunitybuilds
+ azureSubnetId: /subscriptions/.../subnets/default
+ targetPlatform: StandaloneLinux64
+```
+
+### Blob copy - simple artifact upload/download
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: azure-aci
+ azureResourceGroup: my-rg
+ azureStorageType: blob-copy
+ azureStorageAccount: myunitybuilds
+ azureBlobContainer: my-builds
+ targetPlatform: StandaloneLinux64
+```
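+
+### In-memory - RAM-backed scratch
+
+For completeness, a sketch of the `in-memory` backend based on the inputs above. The values are
+illustrative; remember the volume is bounded by the container's memory allocation (`azureMemoryGb`):
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: azure-aci
+    azureResourceGroup: my-rg
+    azureStorageType: in-memory
+    azureMemoryGb: 16
+    targetPlatform: StandaloneLinux64
+```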
+
+## Related
+
+- [GCP Cloud Run](gcp-cloud-run) - Google Cloud Run Jobs provider
+- [Custom Providers](custom-providers) - TypeScript provider plugins
+- [CLI Provider Protocol](cli-provider-protocol) - Write providers in any language
diff --git a/docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx b/docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx
new file mode 100644
index 00000000..672f83b6
--- /dev/null
+++ b/docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx
@@ -0,0 +1,188 @@
+# CLI Provider Protocol
+
+Write orchestrator providers in **any language** - Go, Python, Rust, shell, or anything that can
+read stdin and write stdout. The CLI provider protocol uses JSON messages over stdin/stdout with the
+subcommand as the first argument.
+
+```mermaid
+graph LR
+  Orch["providerExecutable\n./my-provider\nOrchestrator spawns your\nexecutable per subcommand"]
+  Exec["Your executable\nsetup-workflow / run-task\ncleanup-workflow / garbage-collect\nlist-resources / list-workflow\nwatch-workflow"]
+  Orch -->|"argv[1], JSON stdin"| Exec
+  Exec -->|"JSON stdout, stderr→logs"| Orch
+```
+
+## Quick Start
+
+Set `providerExecutable` to the path of your executable:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerExecutable: ./my-provider
+ targetPlatform: StandaloneLinux64
+```
+
+The CLI provider takes precedence over `providerStrategy` when set.
+
+## Protocol
+
+### Invocation
+
+```
+./my-provider <subcommand>
+```
+
+Orchestrator spawns your executable once per operation, passing the subcommand as `argv[1]`.
+
+### Input
+
+A single JSON object written to **stdin**, then stdin is closed:
+
+```json
+{
+ "command": "run-task",
+ "params": {
+ "buildGuid": "abc-123",
+ "image": "unityci/editor:2022.3.0f1-linux-il2cpp-3",
+ "commands": "/bin/sh -c 'unity-editor -batchmode ...'",
+ "mountdir": "/workspace",
+ "workingdir": "/workspace",
+ "environment": [{ "name": "UNITY_LICENSE", "value": "..." }],
+ "secrets": []
+ }
+}
+```
+
+### Output
+
+Write a single JSON object to **stdout** when the operation completes:
+
+```json
+{
+ "success": true,
+ "result": "Build completed successfully"
+}
+```
+
+On failure:
+
+```json
+{
+ "success": false,
+ "error": "Container exited with code 1"
+}
+```
+
+### Streaming Output
+
+During `run-task` and `watch-workflow`, **non-JSON lines on stdout are treated as real-time build
+output** and forwarded to the orchestrator logs. Only the final JSON line is parsed as the response.
+
+This means your provider can freely print build logs:
+
+```
+Pulling image...
+Starting build...
+[Build] Compiling scripts...
+[Build] Build succeeded
+{"success": true, "result": "Build completed"}
+```
+
+### stderr
+
+Everything written to **stderr** is forwarded to the orchestrator logger. Use stderr for diagnostic
+messages, warnings, and debug output.
+
+## Subcommands
+
+| Subcommand | Purpose | Timeout |
+| ------------------ | --------------------------------- | -------- |
+| `setup-workflow` | Initialize infrastructure | 300s |
+| `run-task` | Execute the build | No limit |
+| `cleanup-workflow` | Tear down infrastructure | 300s |
+| `garbage-collect` | Remove old resources | 300s |
+| `list-resources` | List active resources | 300s |
+| `list-workflow` | List active workflows | 300s |
+| `watch-workflow` | Watch a workflow until completion | No limit |
+
+`run-task` and `watch-workflow` have no timeout because builds can run for hours.
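+
+Because the protocol is plain stdin/stdout, you can exercise a provider by hand before wiring it
+into a workflow. A minimal sketch (the stub provider and its path are hypothetical, not part of the
+orchestrator):
+
+```shell
+#!/bin/sh
+# Create a stub provider that answers any subcommand with a fixed response
+cat > /tmp/stub-provider <<'EOF'
+#!/bin/sh
+read -r request            # consume the JSON request from stdin
+echo '{"success": true, "result": "ready"}'
+EOF
+chmod +x /tmp/stub-provider
+
+# Invoke it the way the orchestrator would: subcommand as argv[1], JSON on stdin
+echo '{"command":"setup-workflow","params":{}}' | /tmp/stub-provider setup-workflow
+```
+
+The final stdout line is the JSON response the orchestrator would parse.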
+
+## Example: Shell Provider
+
+A minimal provider that runs builds via Docker:
+
+```bash
+#!/bin/bash
+case "$1" in
+  setup-workflow)
+    read -r request
+    echo '{"success": true, "result": "ready"}'
+    ;;
+  run-task)
+    read -r request
+    image=$(echo "$request" | jq -r '.params.image')
+    commands=$(echo "$request" | jq -r '.params.commands')
+    # Stream build output, then send the JSON result as the final stdout line
+    if docker run --rm "$image" /bin/sh -c "$commands" 2>&1; then
+      echo '{"success": true, "result": "done"}'
+    else
+      echo '{"success": false, "error": "docker run failed"}'
+    fi
+    ;;
+  cleanup-workflow)
+    read -r request
+    echo '{"success": true, "result": "cleaned"}'
+    ;;
+  garbage-collect)
+    read -r request
+    docker system prune -f >&2
+    echo '{"success": true, "result": "pruned"}'
+    ;;
+  list-resources)
+    read -r request
+    echo '{"success": true, "result": []}'
+    ;;
+  list-workflow)
+    read -r request
+    echo '{"success": true, "result": []}'
+    ;;
+  watch-workflow)
+    read -r request
+    echo '{"success": true, "result": ""}'
+    ;;
+  *)
+    # Failure responses go to stdout so the orchestrator can parse them
+    echo '{"success": false, "error": "Unknown command: '"$1"'"}'
+    exit 1
+    ;;
+esac
+```
+
+Make it executable and point `providerExecutable` at it:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerExecutable: ./my-provider.sh
+ targetPlatform: StandaloneLinux64
+```
+
+## CLI Provider vs TypeScript Provider
+
+| Feature | CLI Provider | TypeScript Provider |
+| ------------- | --------------------- | ----------------------- |
+| Language | Any | TypeScript/JavaScript |
+| Setup | `providerExecutable` | `providerStrategy` |
+| Communication | JSON stdin/stdout | Direct function calls |
+| Streaming | stdout lines | Native logging |
+| Distribution | Any executable | GitHub, NPM, local path |
+| Dependencies | None (self-contained) | Node.js runtime |
+
+Use CLI providers when you want to write in a non-TypeScript language, need to wrap an existing
+tool, or want a self-contained executable with no Node.js dependency.
+
+Use [TypeScript custom providers](custom-providers) when you want direct access to the
+BuildParameters object and orchestrator internals.
+
+## Related
+
+- [Custom Providers](custom-providers) - TypeScript provider plugin system
+- [Build Services](/docs/github-orchestrator/advanced-topics/build-services) - Submodule profiles,
+ caching, LFS agents, hooks
diff --git a/docs/03-github-orchestrator/03-examples/02-github-examples/_category_.yaml b/docs/03-github-orchestrator/05-providers/_category_.yaml
similarity index 55%
rename from docs/03-github-orchestrator/03-examples/02-github-examples/_category_.yaml
rename to docs/03-github-orchestrator/05-providers/_category_.yaml
index b9807118..94aafd4d 100644
--- a/docs/03-github-orchestrator/03-examples/02-github-examples/_category_.yaml
+++ b/docs/03-github-orchestrator/05-providers/_category_.yaml
@@ -1,5 +1,5 @@
---
-position: 2.0
-label: GitHub
+position: 5.0
+label: Providers
collapsible: true
collapsed: true
diff --git a/docs/03-github-orchestrator/06-advanced-topics/01-caching.mdx b/docs/03-github-orchestrator/06-advanced-topics/01-caching.mdx
deleted file mode 100644
index 7e04489b..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/01-caching.mdx
+++ /dev/null
@@ -1,42 +0,0 @@
-# Caching
-
-Orchestrator supports two main caching modes:
-
-- Standard Caching
-- Retained Workspace
-
-_You can even mix the two by specifying a "MaxRetainedWorkspaces" parameter. Above the max_
-_concurrent jobs a new job will use standard caching._
-
-## Storage Providers
-
-Orchestrator supports two storage backends for caching:
-
-- **S3** (default) - Uses AWS S3 for cache storage. Works with both AWS and LocalStack.
-- **Rclone** - Uses rclone for flexible cloud storage. Supports many storage backends.
-
-Configure via the `storageProvider` parameter. When using rclone, also set `rcloneRemote` to your
-configured remote endpoint.
-
-## Standard Caching
-
-#### Good For
-
-- Minimum storage use
-- Smaller projects
-
-#### What is cached between builds
-
-- LFS files
-- Unity Library folder
-
-## Retained Workspace
-
-#### Good For
-
-- Maximum build speed
-- Larger projects with long import times
-
-#### What is cached between builds
-
-- Entire Project Folder
diff --git a/docs/03-github-orchestrator/06-advanced-topics/02-retained-workspace.mdx b/docs/03-github-orchestrator/06-advanced-topics/02-retained-workspace.mdx
deleted file mode 100644
index 0e1bfbcd..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/02-retained-workspace.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-# Retained Workspaces
-
-Caches the entire project folder. This option provides the best responsiveness, but also can consume
-lots of storage.
-
-Using API: `maxRetainedWorkspaces`. You can specify a maximum number of retained workspaces, only
-one job can use a retained workspace at one time. Each retained workspace consumes more cloud
-storage.
diff --git a/docs/03-github-orchestrator/06-advanced-topics/03-garbage-collection.mdx b/docs/03-github-orchestrator/06-advanced-topics/03-garbage-collection.mdx
deleted file mode 100644
index b49d1a52..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/03-garbage-collection.mdx
+++ /dev/null
@@ -1,37 +0,0 @@
-# Garbage Collection
-
-Orchestrator creates, manages and destroys cloud workloads you request. Resources have to be
-created.
-
-It is possible a resource doesn't get deleted by orchestrator after a failed or interrupted build.
-
-You can use garbage collection to verify everything has been cleaned up.
-
-Use the **Mode**: `garbage-collect`.
-
-## Parameters
-
-```bash
-garbageMaxAge
-```
-
-Maximum age in hours before resources are considered stale and eligible for cleanup. Defaults to
-`24`.
-
-## Usage
-
-### GitHub Actions
-
-```yaml
-- uses: game-ci/unity-builder@main
- with:
- providerStrategy: aws
- mode: garbage-collect
- gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
-```
-
-### Command Line
-
-```bash
-yarn run cli -m garbage-collect --providerStrategy aws
-```
diff --git a/docs/03-github-orchestrator/06-advanced-topics/04-configuration-override.mdx b/docs/03-github-orchestrator/06-advanced-topics/04-configuration-override.mdx
deleted file mode 100644
index ab9ae1cf..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/04-configuration-override.mdx
+++ /dev/null
@@ -1,22 +0,0 @@
-# Configuration Override
-
-When running any unity workload you must provide valid unity credentials. In addition to any other
-credentials this is already quite a lot of input. For this reason, it is common to use the command
-line mode with input override (link here). This enables you to provide a command to pull input, with
-this approach you can create a file to store credentials or pull from a secret manager.
-
-## Example
-
-```bash
-game-ci -m cli \
- --populateOverride true \
- --readInputFromOverrideList UNITY_EMAIL,UNITY_SERIAL,UNITY_PASSWORD \
- --readInputOverrideCommand="gcloud secrets versions access 1 --secret=\"{0}\""
-```
-
-## Required Parameters
-
-- `populateOverride` - Must be true to run the commands.
-- `readInputFromOverrideList` - Comma separated list of parameters to read from override command.
-- `readInputOverrideCommand` - A command line command to run (The command is formatted to replace
- "{0}" with the parameter parameter name).
diff --git a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/01-custom-job.mdx b/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/01-custom-job.mdx
deleted file mode 100644
index ade52d59..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/01-custom-job.mdx
+++ /dev/null
@@ -1,4 +0,0 @@
-# Custom Jobs
-
-You can run a custom job instead of the default build workflow simplfy by specifying the `customJob`
-parameter.
diff --git a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/03-command-hooks.mdx b/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/03-command-hooks.mdx
deleted file mode 100644
index ee1f2378..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/03-command-hooks.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
-# Command Hooks
-
-Custom commands can be injected into the standard build workflow via yaml or files.
-
-```yaml
-- name: example hook
- hook: |
- step:
- - before
- commands: |
- echo "hello world!"
-```
diff --git a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/04-container-hooks.mdx b/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/04-container-hooks.mdx
deleted file mode 100644
index 6e0a9adc..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/04-container-hooks.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
-# Container Hooks
-
-Custom docker container steps can be run as part of the standard build workflow. Custom steps can
-also be run standalone.
-
-Custom steps can be specified via the CustomSteps parameter or via Custom Step files.
-
-```yaml
-- name: upload
- image: amazon/aws-cli
- commands: |
- echo "hello world!"
-```
diff --git a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/05-premade-container-jobs.mdx b/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/05-premade-container-jobs.mdx
deleted file mode 100644
index b859d23d..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/05-premade-container-jobs.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
-# Premade Container Jobs
-
-## Cache synchronization
-
-### Upload Cache entry to AWS S3
-
-Upload cache results from build to AWS S3.
-
-`aws-s3-upload-cache`
-
-### Download Latest Cache entry from AWS S3
-
-Downloads library and git LFS cache from AWS S3.
-
-`aws-s3-pull-cache`
-
-## Artifacts
-
-## Upload Build Artifact To AWS S3
-
-`aws-s3-upload-build`
-
-Upload build artifact to AWS S3. (Currently only supports lz4 enabled, which is default.)
-
-## Upload Project Artifact To AWS S3 (To Do)
-
-Upload project artifact to AWS S3. (Currently only supports lz4 enabled, which is default.)
-
-## Artifact entire project folder (To Do)
-
-archive to tar format, compress with lz4 if enabled and store in a persistent cache folder. (Can
-then upload.)
-
-## Deploy
-
-## Upload to Steam (To Do)
-
-upload a build folder to a given steam branch
diff --git a/docs/03-github-orchestrator/06-advanced-topics/05-logging.mdx b/docs/03-github-orchestrator/06-advanced-topics/05-logging.mdx
deleted file mode 100644
index 99660d31..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/05-logging.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
-# Logging
-
-Logs are streamed from the workload to the GameCI origin unless you use the
-
-## Kubernetes
-
-- Native Kubernetes logging api
-
-## AWS
-
-- Fargate log to Cloud Watch
-- Subscription from Cloud Watch to Kinesis
-- GameCI streams from logs from Kinesis
diff --git a/docs/03-github-orchestrator/06-advanced-topics/06-secrets.mdx b/docs/03-github-orchestrator/06-advanced-topics/06-secrets.mdx
deleted file mode 100644
index 17bae040..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/06-secrets.mdx
+++ /dev/null
@@ -1,14 +0,0 @@
-# Secrets
-
-Secrets are transferred to workload containers as secrets via the built-in secrets system the
-provider being used supports.
-
-## Kubernetes
-
-Secrets are created as native Kubernetes secret objects and mounted to job containers as env
-variables.
-
-## AWS
-
-Secrets are created as aws secret manager secrets and mounted to fargate tasks from secrets to env
-variables.
diff --git a/docs/03-github-orchestrator/06-advanced-topics/09-gitlab/01-introduction-gitlab.mdx b/docs/03-github-orchestrator/06-advanced-topics/09-gitlab/01-introduction-gitlab.mdx
deleted file mode 100644
index 70e5079e..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/09-gitlab/01-introduction-gitlab.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-# GitLab Introduction
-
-You can use GitLab with Orchestrator via the Command Line mode.
diff --git a/docs/03-github-orchestrator/06-advanced-topics/09-gitlab/_category_.yaml b/docs/03-github-orchestrator/06-advanced-topics/09-gitlab/_category_.yaml
deleted file mode 100644
index 0407bcd5..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/09-gitlab/_category_.yaml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-position: 9.0
-label: GitLab Integration
-collapsible: true
-collapsed: true
diff --git a/docs/03-github-orchestrator/06-advanced-topics/10-github/01-github-checks.mdx b/docs/03-github-orchestrator/06-advanced-topics/10-github/01-github-checks.mdx
deleted file mode 100644
index c0625fe3..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/10-github/01-github-checks.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# GitHub Checks
-
-By enabling the `githubCheck` parameter, the orchestrator job will create a GitHub check for each
-step.
-
-The step will show useful details about the job. This is especially useful in combination with async
-mode, as you can run very long jobs.
diff --git a/docs/03-github-orchestrator/06-advanced-topics/10-github/_category_.yaml b/docs/03-github-orchestrator/06-advanced-topics/10-github/_category_.yaml
deleted file mode 100644
index ecbee03f..00000000
--- a/docs/03-github-orchestrator/06-advanced-topics/10-github/_category_.yaml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-position: 10.0
-label: GitHub Integration
-collapsible: true
-collapsed: true
diff --git a/docs/03-github-orchestrator/06-secrets.mdx b/docs/03-github-orchestrator/06-secrets.mdx
new file mode 100644
index 00000000..dc30c05c
--- /dev/null
+++ b/docs/03-github-orchestrator/06-secrets.mdx
@@ -0,0 +1,225 @@
+# Secrets
+
+Orchestrator supports multiple ways to pull secrets into your build environment. Secrets are
+transferred to workload containers as environment variables via the provider's native secret system.
+
+## Secret Sources
+
+Set `secretSource` to use a premade integration or custom command. This is the recommended
+approach: it replaces the older `inputPullCommand` mechanism with a cleaner API.
+
+### Premade Sources
+
+| Source | Value | CLI Required |
+| ----------------------- | ---------------------------- | ------------------------------------- |
+| AWS Secrets Manager | `aws-secrets-manager` | `aws` CLI |
+| AWS Parameter Store | `aws-parameter-store` | `aws` CLI |
+| GCP Secret Manager | `gcp-secret-manager` | `gcloud` CLI |
+| Azure Key Vault | `azure-key-vault` | `az` CLI + `AZURE_VAULT_NAME` env var |
+| HashiCorp Vault (KV v2) | `hashicorp-vault` or `vault` | `vault` CLI + `VAULT_ADDR` env var |
+| HashiCorp Vault (KV v1) | `hashicorp-vault-kv1` | `vault` CLI + `VAULT_ADDR` env var |
+| Environment Variables | `env` | None |
+
+### Usage
+
+Specify `secretSource` and `pullInputList` (comma-separated list of secret keys to fetch):
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL,UNITY_EMAIL,UNITY_PASSWORD
+ secretSource: aws-parameter-store
+ with:
+ targetPlatform: StandaloneLinux64
+ providerStrategy: aws
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+The orchestrator fetches each key from the specified source before the build starts. The values are
+injected as environment variables into the build container.
+
+:::tip AWS users: avoid the 8192-byte container override limit
+
+On AWS ECS/Fargate, all environment variables and secrets are sent in a `containerOverrides` payload
+that is limited to 8192 bytes. Using `secretSource` with `pullInputList` keeps secret values out of
+this payload, which is important for workflows with many custom parameters. See
+[Troubleshooting](/docs/troubleshooting/common-issues#container-overrides-length-must-be-at-most-8192-aws)
+for details.
+
+:::
+
+### AWS Secrets Manager
+
+Fetches secrets using `aws secretsmanager get-secret-value`. Your build environment needs AWS
+credentials configured (e.g., via `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or an IAM role).
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: aws-secrets-manager
+ AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ AWS_DEFAULT_REGION: us-east-1
+```
+
+### AWS Parameter Store
+
+Fetches parameters using `aws ssm get-parameter --with-decryption`. Supports SecureString
+parameters. Often cheaper and simpler than Secrets Manager for key-value secrets.
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: aws-parameter-store
+```
+
+### GCP Secret Manager
+
+Fetches secrets using `gcloud secrets versions access latest`. Requires `gcloud` CLI authenticated
+in the build environment.
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: gcp-secret-manager
+```
+
+### Azure Key Vault
+
+Fetches secrets using `az keyvault secret show`. Requires the `AZURE_VAULT_NAME` environment
+variable to specify which vault to use.
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: azure-key-vault
+ AZURE_VAULT_NAME: my-game-ci-vault
+```
+
+### HashiCorp Vault
+
+Fetches secrets using the `vault` CLI. Requires `VAULT_ADDR` to be set. Authentication is handled by
+standard Vault environment variables (`VAULT_TOKEN`, AppRole, Kubernetes auth, etc.).
+
+Use `hashicorp-vault` (or the `vault` shorthand) for **KV v2** secrets engines, or
+`hashicorp-vault-kv1` for **KV v1**.
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: vault
+ VAULT_ADDR: ${{ secrets.VAULT_ADDR }}
+ VAULT_TOKEN: ${{ secrets.VAULT_TOKEN }}
+```
+
+By default, secrets are read from the `secret` mount path. Override with `VAULT_MOUNT`:
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: hashicorp-vault
+ VAULT_ADDR: https://vault.example.com
+ VAULT_TOKEN: ${{ secrets.VAULT_TOKEN }}
+ VAULT_MOUNT: game-ci
+```
+
+For **KV v1** engines:
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: hashicorp-vault-kv1
+ VAULT_ADDR: https://vault.example.com
+ VAULT_TOKEN: ${{ secrets.VAULT_TOKEN }}
+```
+
+### Environment Variables
+
+Reads secrets directly from the process environment. Useful when secrets are already injected by the
+CI platform (e.g., GitHub Actions secrets).
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: env
+ UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
+ UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
+```
+
+## Custom Commands
+
+If the premade sources don't cover your setup, pass a shell command with `{0}` as a placeholder for
+the secret key:
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: 'vault kv get -field=value secret/game-ci/{0}'
+```
+
+The orchestrator runs this command once per key in `pullInputList`, replacing `{0}` with each key
+name.
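+
+The substitution itself is plain string replacement. As an illustrative sketch (not the
+orchestrator's actual implementation):
+
+```shell
+#!/bin/sh
+# Sketch: how a {0} template expands into one command per key in pullInputList
+template='vault kv get -field=value secret/game-ci/{0}'
+for key in UNITY_LICENSE UNITY_SERIAL; do
+  # Replace the {0} placeholder with the current key name
+  printf '%s\n' "$template" | sed "s/{0}/$key/"
+done
+```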
+
+## Custom YAML Definitions
+
+For complex setups, define secret sources in a YAML file:
+
+```yaml
+# .game-ci/secrets.yml
+sources:
+ - name: my-vault
+ command: 'vault kv get -field=value secret/{0}'
+ - name: my-api
+ command: 'curl -s https://secrets.internal/api/{0}'
+ parseOutput: json-field
+ jsonField: value
+```
+
+Reference the file path as the `secretSource`:
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ secretSource: .game-ci/secrets.yml
+```
+
+The `parseOutput` field controls how the command output is interpreted:
+
+- `raw` (default) - Use the output as-is
+- `json-field` - Parse the output as JSON and extract the field specified by `jsonField`
+
+## Legacy: inputPullCommand
+
+The older `inputPullCommand` mechanism is still supported for backward compatibility. If
+`secretSource` is set, it takes precedence.
+
+```yaml
+env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL
+ inputPullCommand: aws-secret-manager
+```
+
+The legacy mechanism supports two premade shortcuts:
+
+- `aws-secret-manager` - Expands to `aws secretsmanager get-secret-value --secret-id {0}`
+- `gcp-secret-manager` - Expands to `gcloud secrets versions access 1 --secret="{0}"`
+
+New projects should use `secretSource` instead, which provides more premade sources, better output
+parsing, and YAML file support.
+
+## Provider-Specific Secret Handling
+
+### Kubernetes
+
+Secrets are created as native Kubernetes Secret objects and mounted to job containers as environment
+variables. The orchestrator handles creation and cleanup automatically.
+
+### AWS (ECS/Fargate)
+
+Secrets are created as AWS Secrets Manager entries and mounted to Fargate tasks as environment
+variables via the ECS task definition's `secrets` configuration.
+
+### Local Docker
+
+Secrets are passed as environment variables to the Docker container via `-e` flags.
diff --git a/docs/03-github-orchestrator/07-advanced-topics/01-caching.mdx b/docs/03-github-orchestrator/07-advanced-topics/01-caching.mdx
new file mode 100644
index 00000000..c121d85f
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/01-caching.mdx
@@ -0,0 +1,76 @@
+# Caching
+
+Orchestrator supports two caching strategies. You can mix both by setting
+[`maxRetainedWorkspaces`](../api-reference#caching) - once the limit is reached, additional jobs
+fall back to standard caching.
+
+```mermaid
+flowchart LR
+ subgraph Standard Caching
+ A1["LFS files\nLibrary/\n\nSmaller storage\nGood for small projects"]
+ end
+ subgraph Retained Workspace
+ B1["Entire project folder\n\nFaster builds\nGood for large projects"]
+ end
+```
+
+## Standard Caching
+
+Caches the engine's asset folders and **LFS** files between builds. For Unity this is the `Library`
+folder. For other engines, the cached folders are defined by the [engine plugin](engine-plugins)
+(e.g. `.godot/imported` for Godot). Uses less storage but requires re-importing unchanged assets.
+
+- ✅ Minimum storage cost
+- ✅ Best for smaller projects
+- ⚠️ Slower rebuilds for large asset libraries
+
+## Build Caching
+
+Orchestrator automatically caches build output alongside the Library cache. After each build, the
+compiled output folder is archived and stored using the same cache key (branch name by default). On
+the next build with the same cache key, the previous build output is available at
+`/data/cache/{cacheKey}/build/`.
+
+This happens automatically - no configuration required. The cache key controls which builds share
+output:
+
+```yaml
+# Builds on the same branch share cached output (default behavior)
+cacheKey: ${{ github.ref_name }}
+
+# Or share across branches by using a fixed key
+cacheKey: shared-cache
+```
+
+Build caching uses the same compression and storage provider as Library caching. Archives are stored
+as `build-{buildGuid}.tar.lz4` (or `.tar` if compression is disabled). See [Storage](storage) for
+details on compression and storage backends.
+
+## Retained Workspace
+
+Caches the **entire project folder** between builds. Provides the fastest rebuilds but consumes more
+storage.
+
+- ✅ Maximum build speed
+- ✅ Best for large projects with long import times
+- ⚠️ Higher storage cost
+
+See [Retained Workspaces](retained-workspace) for configuration details.
+
+## 🗄️ Storage Providers
+
+| Provider | `storageProvider` | Description |
+| -------- | ----------------- | -------------------------------------------------------------------------------------------------------------------------------- |
+| S3 | `s3` (default) | AWS S3 storage. Works with both AWS and LocalStack. |
+| Rclone | `rclone` | Flexible cloud storage via [rclone](https://rclone.org). Supports 70+ backends (Google Cloud, Azure Blob, Backblaze, SFTP, etc). |
+
+Configure with the [`storageProvider`](../api-reference#storage) parameter. When using rclone, also
+set `rcloneRemote` to your configured remote endpoint.
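+
+For example, to store caches on any rclone-supported backend (the remote name is illustrative):
+
+```yaml
+storageProvider: rclone
+rcloneRemote: 'myremote:bucket/path'
+```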
+
+## 🔒 Workspace Locking
+
+When using retained workspaces, Orchestrator uses distributed locking (via S3 or rclone) to ensure
+only one build uses a workspace at a time. This enables safe concurrent builds that share and reuse
+workspaces without conflicts.
+
+Locking is managed automatically - no configuration required beyond setting `maxRetainedWorkspaces`.
diff --git a/docs/03-github-orchestrator/07-advanced-topics/02-retained-workspace.mdx b/docs/03-github-orchestrator/07-advanced-topics/02-retained-workspace.mdx
new file mode 100644
index 00000000..e7581b3f
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/02-retained-workspace.mdx
@@ -0,0 +1,44 @@
+# Retained Workspaces
+
+Retained workspaces cache the **entire project folder** between builds. This provides the fastest
+possible rebuilds at the cost of more cloud storage.
+
+## Configuration
+
+Set `maxRetainedWorkspaces` to control how many workspaces are kept:
+
+| Value | Behavior |
+| ----- | ------------------------------------------------------------------------- |
+| `0` | Unlimited retained workspaces (default). |
+| `> 0` | Keep at most N workspaces. Additional jobs fall back to standard caching. |
+
+Each retained workspace is locked during use - only one build can use a workspace at a time.
+Orchestrator handles locking automatically via S3 or rclone. See [Caching](caching) for storage
+provider details.
+
+```mermaid
+flowchart LR
+ subgraph "maxRetainedWorkspaces: 3"
+ W1["Workspace 1\n🔒 locked\nBuild A\nFull project"]
+ W2["Workspace 2\n🔒 locked\nBuild B\nFull project"]
+ W3["Workspace 3\n💤 idle\nFull project"]
+ end
+ C["Build C arrives"] --> W3
+ D["Build D arrives"] -.->|all locked| FB["Falls back to\nstandard caching"]
+```
+
+## Example
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ maxRetainedWorkspaces: 3
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+## ⚠️ Storage Considerations
+
+Each retained workspace stores a full copy of your project. For a 20 GB project with 3 retained
+workspaces, expect ~60 GB of cloud storage usage.
diff --git a/docs/03-github-orchestrator/07-advanced-topics/03-custom-job.mdx b/docs/03-github-orchestrator/07-advanced-topics/03-custom-job.mdx
new file mode 100644
index 00000000..4bcaddc8
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/03-custom-job.mdx
@@ -0,0 +1,19 @@
+# Custom Job
+
+Override the default build workflow entirely by specifying the `customJob` parameter with a YAML job
+definition.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ customJob: |
+ - name: my-custom-step
+ image: ubuntu
+ commands: |
+ echo "Running custom job"
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+This replaces the standard build steps with your own. Useful for running non-Unity workloads or
+fully custom pipelines through Orchestrator's cloud infrastructure.
diff --git a/docs/03-github-orchestrator/07-advanced-topics/04-garbage-collection.mdx b/docs/03-github-orchestrator/07-advanced-topics/04-garbage-collection.mdx
new file mode 100644
index 00000000..7097d86e
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/04-garbage-collection.mdx
@@ -0,0 +1,49 @@
+# Garbage Collection
+
+Orchestrator creates cloud resources (containers, stacks, volumes) for each build and cleans them up
+automatically. If a build fails or is interrupted, resources may be left behind.
+
+```mermaid
+flowchart LR
+ subgraph Normal Build
+ N1[Create resources] --> N2[Build] --> N3[Auto cleanup]
+ end
+ subgraph "Failed / Interrupted Build"
+ F1[Create resources] --> F2[Build] --> F3["crash ✕"]
+ F3 --> F4[Resources left behind]
+ F4 --> F5["garbage-collect\nremoves after\ngarbageMaxAge"]
+ end
+```
+
+Use garbage collection to clean up stale resources. See the
+[API Reference](../api-reference#garbage-collection) for all parameters.
+
+## Usage
+
+### GitHub Actions
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ mode: garbage-collect
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+### Command Line
+
+```bash
+yarn run cli -m garbage-collect --providerStrategy aws
+```
+
+## Parameters
+
+| Parameter | Default | Description |
+| --------------- | ------- | ----------------------------------------------------- |
+| `garbageMaxAge` | `24` | Maximum age in hours before resources are cleaned up. |
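+
+For example, to only remove resources older than 48 hours (the value is illustrative):
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws
+    mode: garbage-collect
+    # Clean up resources older than 48 hours instead of the 24-hour default
+    garbageMaxAge: 48
+    gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```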
+
+## 🔄 Automatic Cleanup
+
+When using the AWS provider, Orchestrator can create a CloudFormation-based cleanup cron job that
+automatically removes old ECS task definitions and resources. This is controlled by the
+`useCleanupCron` parameter (enabled by default).
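+
+A sketch of opting out of the cleanup cron, for example when managing cleanup yourself:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws
+    # Skip creating the CloudFormation cleanup cron job
+    useCleanupCron: false
+    targetPlatform: StandaloneLinux64
+    gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```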
diff --git a/docs/03-github-orchestrator/07-advanced-topics/05-hooks/03-command-hooks.mdx b/docs/03-github-orchestrator/07-advanced-topics/05-hooks/03-command-hooks.mdx
new file mode 100644
index 00000000..5e0a71cc
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/05-hooks/03-command-hooks.mdx
@@ -0,0 +1,40 @@
+# Command Hooks
+
+Inject custom shell commands into the standard build workflow at specific points.
+
+## Format
+
+```yaml
+- name: example hook
+ hook: |
+ step:
+ - before
+ commands: |
+ echo "hello world!"
+```
+
+## Hook Points
+
+| Step | When it runs |
+| -------- | ------------------------------- |
+| `before` | Before the build step starts. |
+| `after` | After the build step completes. |
+
+## Usage
+
+Pass hooks inline via the `commandHooks` parameter or reference files from the `.game-ci/hooks/`
+directory via `customHookFiles`.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ customHookFiles: my-hook
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+Place hook files at `.game-ci/hooks/my-hook.yaml` in your repository.
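+
+Such a file follows the format shown above; a sketch of `.game-ci/hooks/my-hook.yaml` (hook name
+and command are illustrative):
+
+```yaml
+- name: my-hook
+  hook: |
+    step:
+      - before
+  commands: |
+    echo "running my-hook before the build"
+```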
+
+For running Docker containers as build steps, see [Container Hooks](container-hooks). For
+ready-to-use hooks (S3, rclone, Steam), see [Built-In Hooks](built-in-hooks).
diff --git a/docs/03-github-orchestrator/07-advanced-topics/05-hooks/04-container-hooks.mdx b/docs/03-github-orchestrator/07-advanced-topics/05-hooks/04-container-hooks.mdx
new file mode 100644
index 00000000..67929a73
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/05-hooks/04-container-hooks.mdx
@@ -0,0 +1,47 @@
+# Container Hooks
+
+Run custom Docker containers as steps in the build workflow. Useful for uploading artifacts,
+deploying builds, or running additional tools. For inline shell commands instead, see
+[Command Hooks](command-hooks).
+
+```mermaid
+flowchart LR
+ subgraph preBuildContainerHooks
+ A["Pull cache\nSetup deps\n..."]
+ end
+ subgraph Unity Build
+ B["Build"]
+ end
+ subgraph postBuildContainerHooks
+ C["Upload build\nDeploy to Steam\n..."]
+ end
+ A -->|Runs before build| B -->|Core build step| C
+```
+
+## Format
+
+```yaml
+- name: upload
+ image: amazon/aws-cli
+ commands: |
+ echo "hello world!"
+```
+
+## Usage
+
+Define container hooks inline via `preBuildContainerHooks` / `postBuildContainerHooks`, or reference
+files from `.game-ci/container-hooks/` via `containerHookFiles`.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ containerHookFiles: aws-s3-upload-build
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+## Built-In Hooks
+
+Orchestrator ships with ready-to-use hooks for S3, rclone, and Steam. See
+[Built-In Hooks](built-in-hooks) for the full list.
diff --git a/docs/03-github-orchestrator/07-advanced-topics/05-hooks/05-built-in-hooks.mdx b/docs/03-github-orchestrator/07-advanced-topics/05-hooks/05-built-in-hooks.mdx
new file mode 100644
index 00000000..40b53c80
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/05-hooks/05-built-in-hooks.mdx
@@ -0,0 +1,46 @@
+# Built-In Hooks
+
+Orchestrator ships with pre-built container hooks for common tasks. Use them by name with the
+`containerHookFiles` parameter.
+
+```yaml
+containerHookFiles: aws-s3-upload-build
+```
+
+## 📦 AWS S3
+
+| Hook Name | Description |
+| --------------------- | ----------------------------------------- |
+| `aws-s3-upload-build` | Upload build artifacts to S3. |
+| `aws-s3-pull-build` | Pull cached build artifacts from S3. |
+| `aws-s3-upload-cache` | Upload Unity Library and LFS cache to S3. |
+| `aws-s3-pull-cache` | Pull Unity Library and LFS cache from S3. |
+
+Requires AWS credentials to be configured. Respects `useCompressionStrategy` for LZ4 compression.
+
+## 📂 Rclone
+
+| Hook Name | Description |
+| --------------------- | ---------------------------------------------- |
+| `rclone-upload-build` | Upload build artifacts via rclone. |
+| `rclone-pull-build` | Pull cached build artifacts via rclone. |
+| `rclone-upload-cache` | Upload Unity Library and LFS cache via rclone. |
+| `rclone-pull-cache` | Pull Unity Library and LFS cache via rclone. |
+
+Requires `storageProvider: rclone` and `rcloneRemote` to be configured. Uses the `rclone/rclone`
+Docker image. Respects `useCompressionStrategy` for LZ4 compression.
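+
+A sketch combining the rclone storage provider with the rclone upload hook (the remote name is
+illustrative):
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: k8s
+    storageProvider: rclone
+    rcloneRemote: 'myremote:bucket/path'
+    containerHookFiles: rclone-upload-build
+    targetPlatform: StandaloneLinux64
+    gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```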
+
+## 🎮 Steam
+
+| Hook Name | Description |
+| ---------------------- | --------------------------------------------- |
+| `steam-deploy-client` | Deploy a client build to Steam via SteamCMD. |
+| `steam-deploy-project` | Deploy a project build to Steam via SteamCMD. |
+
+Uses the `steamcmd/steamcmd` Docker image. Requires the following secrets to be configured:
+
+- `STEAM_USERNAME`, `STEAM_PASSWORD`
+- `STEAM_APPID`
+- `STEAM_SSFN_FILE_NAME`, `STEAM_SSFN_FILE_CONTENTS`
+- `STEAM_CONFIG_VDF_1` through `STEAM_CONFIG_VDF_4`
+- `BUILD_GUID_TARGET`, `RELEASE_BRANCH`
diff --git a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/_category_.yaml b/docs/03-github-orchestrator/07-advanced-topics/05-hooks/_category_.yaml
similarity index 72%
rename from docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/_category_.yaml
rename to docs/03-github-orchestrator/07-advanced-topics/05-hooks/_category_.yaml
index 445f7ab4..6983ed19 100644
--- a/docs/03-github-orchestrator/06-advanced-topics/04-custom-hooks/_category_.yaml
+++ b/docs/03-github-orchestrator/07-advanced-topics/05-hooks/_category_.yaml
@@ -1,5 +1,5 @@
---
position: 4.0
-label: Custom Hooks
+label: Hooks
collapsible: true
collapsed: true
diff --git a/docs/03-github-orchestrator/07-advanced-topics/06-logging.mdx b/docs/03-github-orchestrator/07-advanced-topics/06-logging.mdx
new file mode 100644
index 00000000..1d2c351c
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/06-logging.mdx
@@ -0,0 +1,31 @@
+# Logging
+
+Orchestrator streams logs from the remote build back to your CI runner in real time.
+
+```mermaid
+flowchart LR
+ A["Cloud Container\nBuild output"] --> B["Orchestrator\nLog stream"] --> C["Your CI\nConsole output"]
+```
+
+## Provider-Specific Log Transport
+
+### Kubernetes
+
+Uses the native Kubernetes logging API to stream pod logs directly.
+
+### AWS
+
+Logs flow through a CloudWatch → Kinesis pipeline:
+
+1. Orchestrator job (Fargate task) writes logs to **CloudWatch Logs**
+2. A **Kinesis** subscription forwards logs in real time
+3. Orchestrator consumes from the Kinesis stream
+
+## 🐛 Debug Logging
+
+Enable [`orchestratorDebug: true`](../api-reference#build-options) to get verbose output including:
+
+- Resource allocation summaries (CPU, memory, disk)
+- Directory structure via `tree`
+- Environment variable listing
+- Disk usage snapshots (`df -h`, `du -sh`)
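+
+Enabling it in a workflow is a one-line addition:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws
+    # Verbose output: resource summaries, tree listings, disk usage
+    orchestratorDebug: true
+    targetPlatform: StandaloneLinux64
+    gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```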
diff --git a/docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx b/docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx
new file mode 100644
index 00000000..79a774e1
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx
@@ -0,0 +1,533 @@
+# Load Balancing
+
+Orchestrator can intelligently route builds across providers. Built-in load balancing checks runner
+availability and routes to the best provider automatically - no custom scripting needed. For
+advanced scenarios, standard GitHub Actions scripting gives you full control.
+
+```mermaid
+flowchart TD
+ UB["unity-builder\n1. Check runner availability\n2. Route to best provider\n3. Build (sync or async)\n4. Retry on alternate if failed"]
+ UB --> LD["local-docker\n(self-hosted)"]
+ UB --> AWS["aws\n(scalable)"]
+ UB --> K8S["k8s\n(flexible)"]
+```
+
+## Built-in Load Balancing
+
+Set `fallbackProviderStrategy` to an alternate provider and the action handles routing
+automatically. Three mechanisms work together:
+
+- **Runner availability check** - Queries the GitHub Runners API. If your self-hosted runners are
+ busy or offline, routes to the alternate provider.
+- **Retry on alternate provider** - If the primary provider fails mid-build (transient cloud error,
+ quota limit), retries the entire build on the alternate provider.
+- **Provider init timeout** - If the primary provider is slow to spin up, switches to the alternate
+ after a configurable timeout.
+
+### Inputs
+
+| Input | Default | Description |
+| -------------------------- | ------- | --------------------------------------------------- |
+| `fallbackProviderStrategy` | `''` | Alternate provider for load balancing |
+| `runnerCheckEnabled` | `false` | Check runner availability before routing |
+| `runnerCheckLabels` | `''` | Filter runners by labels (e.g. `self-hosted,linux`) |
+| `runnerCheckMinAvailable` | `1` | Minimum idle runners before routing to alternate |
+| `retryOnFallback` | `false` | Retry failed builds on the alternate provider |
+| `providerInitTimeout` | `0` | Max seconds for provider startup (0 = no limit) |
+
+### Outputs
+
+| Output | Description |
+| ------------------------ | ----------------------------------------------- |
+| `providerFallbackUsed` | `true` if the build was routed to the alternate |
+| `providerFallbackReason` | Why the build was rerouted |
+
+### Route busy runners to cloud
+
+The most common pattern: prefer your self-hosted runner, but offload to the cloud when it's busy.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local-docker
+ fallbackProviderStrategy: aws
+ runnerCheckEnabled: true
+ runnerCheckLabels: self-hosted,linux
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+Runner idle? Build runs locally. Runner busy? Routes to AWS automatically.
+
+### Non-blocking with async mode
+
+For long Unity builds, combine load balancing with `asyncOrchestrator: true`. The build dispatches
+to the best available provider and returns immediately - the GitHub runner is freed in seconds
+regardless of which provider handles the build.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local-docker
+ fallbackProviderStrategy: aws
+ runnerCheckEnabled: true
+ runnerCheckLabels: self-hosted,linux
+ asyncOrchestrator: true # dispatch and return immediately
+ githubCheck: true # report progress via PR checks
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+Without async mode the routing still works correctly, but the build occupies the GitHub runner for
+the full duration. Async mode is the key to truly non-blocking load balancing.
+
+### Retry failed builds on alternate provider
+
+Enable `retryOnFallback` to automatically retry on the alternate provider when the primary fails.
+This is useful for long builds where transient cloud failures are common.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ fallbackProviderStrategy: local-docker
+ retryOnFallback: true
+ targetPlatform: StandaloneLinux64
+```
+
+If AWS fails (transient error, ECS quota exceeded, network issue), the build retries on the local
+Docker provider instead of failing the entire workflow.
+
+### Timeout slow provider startup
+
+Cloud providers sometimes take a long time to provision infrastructure. Set `providerInitTimeout` to
+swap to the alternate provider if startup takes too long.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: k8s
+ fallbackProviderStrategy: aws
+ providerInitTimeout: 120 # 2 minutes max for K8s to spin up
+ retryOnFallback: true
+ targetPlatform: StandaloneLinux64
+```
+
+### Graceful degradation
+
+The built-in load balancing is designed to never block a build:
+
+- **No token** - Skips the runner check, uses the primary provider.
+- **API error** (permissions, rate limit) - Logs a warning, uses the primary provider.
+- **No alternate set** - The runner check still runs for informational logging but never swaps
+  providers.
+
+### Log the routing decision
+
+Use the outputs to track which provider was selected and why:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ id: build
+ with:
+ providerStrategy: local-docker
+ fallbackProviderStrategy: aws
+ runnerCheckEnabled: true
+ targetPlatform: StandaloneLinux64
+
+- name: Log routing
+ if: always()
+ run: |
+ echo "Routed to alternate: ${{ steps.build.outputs.providerFallbackUsed }}"
+ echo "Reason: ${{ steps.build.outputs.providerFallbackReason }}"
+```
+
+## Script-Based Routing
+
+For scenarios that the built-in inputs don't cover, use GitHub Actions scripting. This gives you
+full control over routing logic.
+
+### Route by Platform
+
+```yaml
+name: Platform-Based Routing
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ build:
+ name: Build ${{ matrix.targetPlatform }}
+ runs-on: ubuntu-latest
+ strategy:
+ fail-fast: false
+ matrix:
+ include:
+ # Linux builds go to AWS (fast, scalable)
+ - targetPlatform: StandaloneLinux64
+ provider: aws
+ # Windows builds go to self-hosted runner
+ - targetPlatform: StandaloneWindows64
+ provider: local-docker
+ # WebGL builds go to Kubernetes
+ - targetPlatform: WebGL
+ provider: k8s
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: ${{ matrix.provider }}
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+### Route by Branch
+
+Send production builds to a high-resource cloud provider and development builds to a cheaper option.
+
+```yaml
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - name: Select provider
+ id: provider
+ run: |
+ if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
+ echo "strategy=aws" >> "$GITHUB_OUTPUT"
+ echo "cpu=4096" >> "$GITHUB_OUTPUT"
+ echo "memory=16384" >> "$GITHUB_OUTPUT"
+ else
+ echo "strategy=local-docker" >> "$GITHUB_OUTPUT"
+ echo "cpu=1024" >> "$GITHUB_OUTPUT"
+ echo "memory=3072" >> "$GITHUB_OUTPUT"
+ fi
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: ${{ steps.provider.outputs.strategy }}
+ targetPlatform: StandaloneLinux64
+ containerCpu: ${{ steps.provider.outputs.cpu }}
+ containerMemory: ${{ steps.provider.outputs.memory }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+### Manual Runner Check
+
+If you need custom runner check logic beyond what the built-in inputs support (e.g. checking runner
+groups, org-level runners, or external capacity APIs):
+
+```yaml
+jobs:
+ check-runner:
+ name: Check self-hosted availability
+ runs-on: ubuntu-latest
+ outputs:
+ provider: ${{ steps.pick.outputs.provider }}
+ steps:
+ - name: Check if self-hosted runner is available
+ id: pick
+ env:
+ GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ run: |
+ RUNNERS=$(gh api repos/${{ github.repository }}/actions/runners --jq '[.runners[] | select(.status == "online")] | length')
+ if [[ "$RUNNERS" -gt 0 ]]; then
+ echo "provider=local-docker" >> "$GITHUB_OUTPUT"
+ else
+ echo "provider=aws" >> "$GITHUB_OUTPUT"
+ fi
+
+ build:
+ name: Build
+ needs: check-runner
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: ${{ needs.check-runner.outputs.provider }}
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+### Weighted Distribution
+
+Distribute builds based on a ratio. This example sends 70% of builds to AWS and 30% to a local
+runner using a stable hash.
+
+```yaml
+- name: Weighted provider selection
+ id: provider
+ run: |
+ HASH=$(echo "${{ github.run_id }}" | md5sum | cut -c1-8)
+ DECIMAL=$((16#$HASH % 100))
+ if [[ $DECIMAL -lt 70 ]]; then
+ echo "strategy=aws" >> "$GITHUB_OUTPUT"
+ else
+ echo "strategy=local-docker" >> "$GITHUB_OUTPUT"
+ fi
+```
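+
+The selected strategy then feeds the build step, as in the branch-routing example:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: ${{ steps.provider.outputs.strategy }}
+    targetPlatform: StandaloneLinux64
+    gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```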
+
+### Route to alternate workflow when runners are busy
+
+Instead of switching providers within the same job, you can dispatch an entirely different workflow.
+This is useful when the alternate provider needs different runner labels, secrets, or setup steps
+that don't fit in the same job.
+
+The pattern uses two workflows: a primary workflow that checks runner availability and either builds
+locally or dispatches a cloud workflow, and a cloud workflow that handles the remote build
+independently.
+
+**Primary workflow** - checks runners and either builds or dispatches:
+
+```yaml
+name: Build (Primary)
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ route-and-build:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Check self-hosted runner availability
+ id: check
+ env:
+ GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ run: |
+ IDLE=$(gh api repos/${{ github.repository }}/actions/runners \
+ --jq '[.runners[] | select(.status == "online" and .busy == false)] | length')
+ echo "idle=$IDLE" >> "$GITHUB_OUTPUT"
+
+ # If no runners are idle, dispatch the cloud build workflow
+ - name: Dispatch cloud build
+ if: steps.check.outputs.idle == '0'
+ env:
+ GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ run: |
+ gh workflow run cloud-build.yml \
+ -f targetPlatform=StandaloneLinux64 \
+ -f ref=${{ github.sha }}
+ echo "Self-hosted runners busy - dispatched to cloud-build workflow"
+
+ # If runners are available, build locally
+ - uses: actions/checkout@v4
+ if: steps.check.outputs.idle != '0'
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ if: steps.check.outputs.idle != '0'
+ with:
+ providerStrategy: local-docker
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+**Cloud build workflow** - runs on `workflow_dispatch`, handles the remote build:
+
+```yaml
+name: Build (Cloud)
+
+on:
+ workflow_dispatch:
+ inputs:
+ targetPlatform:
+ description: Platform to build
+ required: true
+ ref:
+ description: Git ref to build
+ required: true
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ ref: ${{ inputs.ref }}
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: ${{ inputs.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+This pattern lets each workflow have completely different configurations - different runners,
+secrets, environment variables, and setup steps. The trade-off is that the dispatched workflow runs
+independently, so you need to check its status separately (via GitHub Actions UI or
+[`gh run list`](https://cli.github.com/manual/gh_run_list)).
+
+For a simpler approach that keeps everything in one workflow, use the
+[built-in load balancing](#built-in-load-balancing) or [reusable workflow](#reusable-workflow)
+patterns instead.
+
+### Reusable Workflow
+
+Extract the build step into a reusable workflow and call it with different provider settings based
+on runner availability. This keeps the routing decision and build execution in separate jobs while
+maintaining a single workflow run for visibility.
+
+```yaml
+# .github/workflows/unity-build-reusable.yml
+name: Unity Build (Reusable)
+
+on:
+ workflow_call:
+ inputs:
+ providerStrategy:
+ required: true
+ type: string
+ targetPlatform:
+ required: true
+ type: string
+ secrets:
+ GH_TOKEN:
+ required: true
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: ${{ inputs.providerStrategy }}
+ targetPlatform: ${{ inputs.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GH_TOKEN }}
+```
+
+```yaml
+# .github/workflows/build-on-push.yml
+name: Build on Push
+
+on:
+ push:
+ branches: [main]
+
+jobs:
+ route:
+ runs-on: ubuntu-latest
+ outputs:
+ provider: ${{ steps.check.outputs.provider }}
+ steps:
+ - name: Check runners and select provider
+ id: check
+ env:
+ GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ run: |
+ IDLE=$(gh api repos/${{ github.repository }}/actions/runners \
+ --jq '[.runners[] | select(.status == "online" and .busy == false)] | length')
+ if [[ "$IDLE" -gt 0 ]]; then
+ echo "provider=local-docker" >> "$GITHUB_OUTPUT"
+ else
+ echo "provider=aws" >> "$GITHUB_OUTPUT"
+ fi
+
+ build:
+ needs: route
+ uses: ./.github/workflows/unity-build-reusable.yml
+ with:
+ providerStrategy: ${{ needs.route.outputs.provider }}
+ targetPlatform: StandaloneLinux64
+ secrets:
+ GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+```
+
+Unlike the workflow dispatch pattern, this keeps everything in a single workflow run - you can see
+the routing decision and build result together in the GitHub Actions UI.
+
+## Async Mode and Load Balancing
+
+The [`asyncOrchestrator`](../api-reference#github-integration) parameter is essential for effective
+load balancing of long builds. When enabled, the action dispatches the build and returns
+immediately - no runner minutes wasted waiting.
+
+```mermaid
+flowchart TD
+ WF["Workflow Step (seconds)\n1. Check runners\n2. Route to provider\n3. Dispatch build\n4. Return (done)\n\nCompletes instantly"]
+ WF --> PA["Provider A\nBuilding...\nComplete"]
+ WF --> PB["Provider B\nBuilding...\nComplete"]
+ PA --> GH["GitHub Check updated\n(monitor from PR page)"]
+ PB --> GH
+```
+
+- **Without async** - The build occupies the runner. Routing still works, but you're paying for
+ runner time while the build runs remotely.
+- **With async** - The step finishes in seconds. The build continues on the selected provider and
+ reports status via GitHub Checks. This is the recommended approach for long builds.
+
+```yaml
+jobs:
+ build:
+ name: Build ${{ matrix.targetPlatform }}
+ runs-on: ubuntu-latest
+ strategy:
+ fail-fast: false
+ matrix:
+ include:
+ - targetPlatform: StandaloneLinux64
+ provider: aws
+ - targetPlatform: StandaloneWindows64
+ provider: k8s
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+
+ - uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: ${{ matrix.provider }}
+ targetPlatform: ${{ matrix.targetPlatform }}
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ asyncOrchestrator: true
+ githubCheck: true
+```
+
+## When to Use What
+
+| Scenario | Approach |
+| --------------------------------------------- | --------------------------------------- |
+| Runners busy → offload to cloud | Built-in (`runnerCheckEnabled` + async) |
+| Retry transient cloud failures | Built-in (`retryOnFallback`) |
+| Slow provider startup | Built-in (`providerInitTimeout`) |
+| Filter runners by labels | Built-in (`runnerCheckLabels`) |
+| Route by platform or branch | Matrix or script |
+| Custom capacity logic (org runners, external) | Script-based runner check |
+| Weighted distribution (70/30 split) | Script with hash |
+| Dispatch entirely different workflow | `workflow_dispatch` routing |
+| Shared build config, dynamic routing | Reusable workflow (`workflow_call`) |
+| Chained routing (A → B → C) | Script |
+
+## Tips
+
+- **Start with built-in** - For most teams, `runnerCheckEnabled` + `fallbackProviderStrategy` +
+ `asyncOrchestrator` covers the common case. Add script-based routing only when you need custom
+ logic.
+- **Always use async for long builds** - Combining `asyncOrchestrator: true` with
+ `githubCheck: true` keeps your routing step fast and gives you build status on the PR page.
+- **Cache keys are provider-independent** - The [`cacheKey`](../api-reference#caching) parameter
+ works the same across all providers, so builds routed to different providers can still share
+ caches if they use the same storage backend.
+- **Test routing logic** - Temporarily disable your self-hosted runner to verify that routing works
+ before you need it in production.
+- **Custom providers** - The same routing patterns work with
+ [custom providers](../providers/custom-providers). Set `providerStrategy` to a GitHub repo or NPM
+ package and Orchestrator loads it dynamically.
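+
+  For instance (the repository name is hypothetical):
+
+  ```yaml
+  providerStrategy: my-org/my-orchestrator-provider
+  ```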
diff --git a/docs/03-github-orchestrator/07-advanced-topics/08-storage.mdx b/docs/03-github-orchestrator/07-advanced-topics/08-storage.mdx
new file mode 100644
index 00000000..c2804ce0
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/08-storage.mdx
@@ -0,0 +1,200 @@
+# Storage
+
+Orchestrator manages three categories of files during a build: **project files** (your Unity project
+and Git LFS assets), **build output** (the compiled game), and **caches** (Unity Library folder and
+LFS files). This page explains how each category flows through the system and how to configure the
+storage backend.
+
+```mermaid
+flowchart TD
+ CI["CI / Local Machine\nGit repository\n+ LFS assets"]
+ CI -->|clone| BC
+ subgraph BC["Build Container"]
+ REPO["/data/{buildGuid}/repo/\n├── Assets/\n├── Library/ (cached)\n└── .git/lfs/ (cached)"]
+ CACHE["/data/cache/{cacheKey}/\n├── Library/ (tar.lz4)\n└── build/ (tar.lz4)"]
+ end
+ BC --> CS["Cloud Storage (S3 / rclone)\n├── Library cache archives\n├── Build artifacts\n└── Workspace locks"]
+```
+
+## File Categories
+
+### Project Files
+
+Your Unity project is cloned into the build container at `/data/{buildGuid}/repo/`. Orchestrator
+handles Git and LFS automatically:
+
+1. **Shallow clone** - The repository is cloned with `--depth` controlled by the
+ [`cloneDepth`](../api-reference#git-synchronization) parameter (default: 50).
+2. **LFS pull** - Git LFS is installed and configured inside the container. LFS files are pulled
+ after the clone completes.
+3. **LFS hashing** - Orchestrator generates `.lfs-assets-guid` and `.lfs-assets-guid-sum` files to
+ track LFS content for cache invalidation.
+
+For retained workspaces, the project folder persists between builds at
+`/data/{lockedWorkspace}/repo/` instead of being cloned fresh each time. See
+[Retained Workspaces](retained-workspace) for details.
+
+### Build Output
+
+After Unity finishes building, the compiled output lives at `/data/{buildGuid}/repo/{buildPath}/`.
+Orchestrator archives this folder as `build-{buildGuid}.tar` (or `.tar.lz4` with compression
+enabled).
+
+To export build artifacts out of the container, use [container hooks](hooks/container-hooks). The
+most common approach is the built-in S3 or rclone upload hooks:
+
+```yaml
+# Upload build artifacts to S3
+containerHookFiles: aws-s3-upload-build
+
+# Or upload via rclone to any backend
+containerHookFiles: rclone-upload-build
+```
+
+See [Built-In Hooks](hooks/built-in-hooks) for all available hooks.
+
+### Caches
+
+Orchestrator caches two things between builds to speed up subsequent runs:
+
+| Cache | Contents | Path in container |
+| ------------- | ------------------------------------------- | --------------------------------- |
+| Library cache | Unity's `Library/` folder (imported assets) | `/data/cache/{cacheKey}/Library/` |
+| Build cache | Previous build output | `/data/cache/{cacheKey}/build/` |
+
+Caches are scoped by the [`cacheKey`](../api-reference#caching) parameter, which defaults to the
+branch name. Builds on the same branch share a cache.
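+
+To share one cache across branches, you can set the key explicitly (the value here is illustrative):
+
+```yaml
+cacheKey: shared-cache
+```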
+
+After a build completes, Orchestrator runs `remote-cli-post-build` which:
+
+1. Archives the Library folder as `lib-{buildGuid}.tar` (or `.tar.lz4`).
+2. Archives the build output as `build-{buildGuid}.tar` (or `.tar.lz4`).
+3. Pushes both archives to cloud storage via the configured storage provider.
+
+Before the next build, `remote-cli-pre-build` pulls these archives and extracts them, so Unity can
+skip re-importing unchanged assets.
+
+## Storage Providers
+
+Orchestrator supports two storage backends for caches, artifacts, and workspace locks.
+
+```mermaid
+flowchart LR
+ subgraph "storageProvider: s3"
+ S3["AWS S3\n\n- Default backend\n- Works with LocalStack\n- Built-in lock support"]
+ end
+ subgraph "storageProvider: rclone"
+ RC["Rclone\n\n- 70+ backends\n- Google Cloud\n- Azure Blob\n- Backblaze B2\n- SFTP, FTP\n- Any rclone remote"]
+ end
+```
+
+### S3 (Default)
+
+S3 is the default storage backend. It works with AWS S3 and LocalStack (for local testing).
+
+No extra configuration is needed when using the `aws` provider - the S3 bucket is created
+automatically as part of the CloudFormation base stack. For other providers, ensure AWS credentials
+and region are set in the environment.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ # storageProvider defaults to "s3"
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+### Rclone
+
+[Rclone](https://rclone.org) is a command-line tool that supports 70+ cloud storage backends. Use it
+when you want to store caches and artifacts somewhere other than S3.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: k8s
+ storageProvider: rclone
+ rcloneRemote: 'myremote:bucket/path'
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+```
+
+Rclone hooks run in the `rclone/rclone` Docker image. Configure your rclone remote beforehand (see
+[rclone docs](https://rclone.org/docs/)).
+
+**Supported backends include:** Google Cloud Storage, Azure Blob, Backblaze B2, DigitalOcean Spaces,
+Wasabi, MinIO, SFTP, FTP, and [many more](https://rclone.org/overview/).
+
+## Compression
+
+Orchestrator uses **LZ4 compression** by default for all cache and build archives. LZ4 is optimized
+for speed over compression ratio, which is ideal for large Unity Library folders where fast
+decompression matters more than file size.
+
+| `useCompressionStrategy` | Archive format | Description |
+| ------------------------ | -------------- | ------------------------------------------------ |
+| `true` (default) | `.tar.lz4` | LZ4 compressed. Faster extract, ~30-50% smaller. |
+| `false` | `.tar` | Uncompressed. Slightly faster to create. |
+
+```yaml
+# Disable compression (not recommended for most projects)
+useCompressionStrategy: false
+```
+
+## Workspace Locking
+
+When using [retained workspaces](retained-workspace), Orchestrator uses distributed locking to
+ensure only one build occupies a workspace at a time. Locks are stored in the configured storage
+provider:
+
+- **S3**: Lock files at `s3://{awsStackName}/locks/{workspaceName}/{buildGuid}`
+- **Rclone**: Lock files at `{rcloneRemote}/locks/{workspaceName}/{buildGuid}`
+
+Locking is fully automatic. When a build starts, it acquires a lock. When it finishes (or fails),
+the lock is released. If all workspaces are locked, the build falls back to standard caching.
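+
+For example, enabling a pool of retained workspaces (and with it, locking) looks like:
+
+```yaml
+maxRetainedWorkspaces: 3
+```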
+
+## Large Packages
+
+The [`useLargePackages`](../api-reference#build-options) parameter optimizes storage for Unity
+packages containing "LargePackage" in `manifest.json`. When enabled, these packages are redirected
+to a shared folder so multiple builds with the same cache key share one copy instead of duplicating
+data.
+
+```yaml
+useLargePackages: true
+```
+
+## Container File System Layout
+
+Inside the build container, Orchestrator uses the `/data/` volume mount as the root for all project
+and cache data.
+
+```
+/data/
+├── {buildGuid}/ # Unique job folder (or {lockedWorkspace}/)
+│ ├── repo/ # Cloned repository
+│ │ ├── Assets/
+│ │ ├── Library/ # Unity Library (restored from cache)
+│ │ ├── .git/
+│ │ │ └── lfs/ # Git LFS objects
+│ │ └── {buildPath}/ # Build output
+│ └── builder/ # Cloned game-ci/unity-builder
+│ └── dist/
+│ └── index.js
+└── cache/
+ └── {cacheKey}/ # Scoped by branch name
+ ├── Library/
+ │ └── lib-{guid}.tar.lz4 # Library cache archive
+ └── build/
+ └── build-{guid}.tar.lz4 # Build output archive
+```
+
+## Storage Parameters
+
+For the full list of storage-related parameters, see the API Reference:
+
+- [Storage](../api-reference#storage) - `storageProvider`, `rcloneRemote`
+- [Caching](../api-reference#caching) - `cacheKey`, `maxRetainedWorkspaces`
+- [Build Options](../api-reference#build-options) - `useCompressionStrategy`, `useLargePackages`
+- [AWS](../api-reference#aws) - `awsStackName`, `awsS3Endpoint`
diff --git a/docs/03-github-orchestrator/07-advanced-topics/09-architecture.mdx b/docs/03-github-orchestrator/07-advanced-topics/09-architecture.mdx
new file mode 100644
index 00000000..26ee0bad
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/09-architecture.mdx
@@ -0,0 +1,264 @@
+# Architecture
+
+This page describes the internal architecture of Orchestrator - how the components fit together, how
+a build flows through the system, and where to look in the source code.
+
+```mermaid
+flowchart TD
+ EP["Entry Point\nGitHub Action / CLI"]
+ EP --> O["Orchestrator\n(orchestrator.ts)\n\n- setup()\n- run()\n- Provider selection"]
+ O --> P["Provider\n(pluggable)\n\n- aws\n- k8s\n- local-docker\n- local\n- custom plugin"]
+ O --> W["Workflow\nComposition Root\n\n- Standard\n- Async\n- Custom Job"]
+ O --> S["Services\n\n- Logger\n- Hooks\n- Caching\n- Locking\n- LFS\n- GitHub Checks"]
+```
+
+## Build Lifecycle
+
+A standard Orchestrator build follows these steps:
+
+```mermaid
+flowchart LR
+ S1["1. Initialize\nParse inputs\nSelect provider\nGenerate GUID\nCreate GH check"]
+ S2["2. Setup Provider\nCreate cloud resources\n(stacks, PVCs)"]
+ S3["3. Acquire Workspace\nLock retained workspace\n(if enabled)"]
+ S4["4. Build\nClone repo + LFS\nRestore cache\nRun Unity build\nRun pre/post\ncontainer hooks"]
+ S5["5. Post-Build\nPush Library\nand build cache\nRun post hooks"]
+ S6["6. Cleanup\nRelease workspace\nDelete cloud resources\nUpdate GH check"]
+ S1 --> S2 --> S3 --> S4 --> S5 --> S6
+```
+
+### Step-by-Step
+
+1. **Initialize** - `Orchestrator.setup()` parses inputs from GitHub Action `with`, CLI flags, or
+ environment variables. It selects a provider, generates a unique build GUID, and optionally
+ creates a GitHub Check.
+
+2. **Setup Provider** - `Provider.setupWorkflow()` provisions cloud resources. For AWS this means
+ creating a CloudFormation base stack (ECS cluster, S3 bucket, Kinesis stream). For Kubernetes it
+ creates a service account. Local providers skip this step.
+
+3. **Acquire Workspace** - If `maxRetainedWorkspaces` is set, `SharedWorkspaceLocking` acquires a
+ distributed lock on a workspace via S3 or rclone. If all workspaces are locked, the build falls
+ back to standard caching.
+
+4. **Build** - The workflow composition root selects a workflow (standard, async, or custom job).
+ The standard workflow clones your repo, restores caches, runs the Unity build, and executes
+ pre/post container hooks.
+
+5. **Post-Build** - `remote-cli-post-build` archives the Unity Library folder and build output,
+ pushes them to cloud storage, and runs any post-build hooks.
+
+6. **Cleanup** - The workspace lock is released, cloud resources are torn down
+ (`Provider.cleanupWorkflow()`), and the GitHub Check is updated with the final result.
+
+## Core Components
+
+### Orchestrator (`orchestrator.ts`)
+
+The static `Orchestrator` class is the central coordinator. It holds:
+
+- `Provider` - the selected provider implementation
+- `buildParameters` - all resolved configuration
+- `buildGuid` - unique identifier for this build
+- `lockedWorkspace` - retained workspace name (if any)
+
+`Orchestrator.run()` is the main entry point that drives the full lifecycle. Provider selection
+happens in `setupSelectedBuildPlatform()`, which handles LocalStack detection, `AWS_FORCE_PROVIDER`
+overrides, and fallback to the plugin loader for custom providers.
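+
+As an illustration only (a hypothetical sketch, not the actual `setupSelectedBuildPlatform()`
+code), the selection priority can be expressed as:
+
+```typescript
+// Hypothetical sketch: a force override wins, then built-in provider names,
+// then the plugin loader handles anything else.
+function selectProvider(requested: string, env: Record<string, string | undefined>): string {
+  const builtIns = ['aws', 'k8s', 'local-docker', 'local'];
+  if (env.AWS_FORCE_PROVIDER) return env.AWS_FORCE_PROVIDER; // explicit override
+  if (builtIns.includes(requested)) return requested; // built-in provider
+  return `plugin:${requested}`; // resolved by the provider loader
+}
+```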
+
+### Provider System
+
+Providers implement the `ProviderInterface`:
+
+```typescript
+// Signatures simplified - see provider-interface.ts for the exact parameter types.
+interface ProviderInterface {
+  setupWorkflow(buildGuid, buildParameters, branchName, secrets): Promise<void>;
+  runTaskInWorkflow(
+    buildGuid,
+    image,
+    commands,
+    mountdir,
+    workingdir,
+    env,
+    secrets,
+  ): Promise<string>;
+  cleanupWorkflow(buildParameters, branchName, secrets): Promise<void>;
+  garbageCollect(filter, previewOnly, olderThan, fullCache, baseDependencies): Promise<string>;
+  listResources(): Promise<ProviderResource[]>;
+  listWorkflow(): Promise<ProviderWorkflow[]>;
+  watchWorkflow(): Promise<string>;
+}
+```
+
+Each provider handles `runTaskInWorkflow()` differently:
+
+| Provider | How it runs the build |
+| -------------- | -------------------------------------------------------------- |
+| `aws` | Creates a CloudFormation job stack, starts an ECS Fargate task |
+| `k8s` | Creates a PVC, Kubernetes Secret, and Job; streams pod logs |
+| `local-docker` | Runs a Docker container with volume mounts |
+| `local` | Executes shell commands directly on the host |
+
+#### Custom Provider Loading
+
+When `providerStrategy` doesn't match a built-in name, the provider loader:
+
+1. Parses the source (GitHub URL, NPM package, or local path) via `ProviderUrlParser`
+2. Clones or installs the module via `ProviderGitManager`
+3. Validates that all 7 interface methods exist
+4. Falls back to the local provider if loading fails
+
+See [Custom Providers](../providers/custom-providers) for the user-facing guide.
+
+### Workflow System
+
+The `WorkflowCompositionRoot` selects which workflow to run:
+
+```mermaid
+flowchart TD
+ Q1{"asyncOrchestrator: true?"}
+ Q1 -->|Yes| AW["AsyncWorkflow\nDispatches build to cloud container\nand returns immediately"]
+ Q1 -->|No| Q2{"customJob set?"}
+ Q2 -->|Yes| CW["CustomWorkflow\nParses YAML job definition\nand runs container steps"]
+ Q2 -->|No| BAW["BuildAutomationWorkflow\nStandard build pipeline"]
+```
+
+**BuildAutomationWorkflow** generates a shell script that runs inside the container. The script:
+
+1. Installs toolchain (Node.js, npm, yarn, git-lfs) for remote providers
+2. Clones game-ci/unity-builder into the container
+3. Runs `remote-cli-pre-build` (restores caches, clones the target repo)
+4. Executes the Unity build via the standard Game CI entrypoint
+5. Runs `remote-cli-post-build` (pushes caches)
+6. Writes log markers for collection
+
+**AsyncWorkflow** runs the entire build inside a cloud container. It installs the AWS CLI, clones
+both the builder and target repos, and executes `index.js -m async-workflow`. The calling GitHub
+Action returns immediately. Progress is reported via
+[GitHub Checks](../providers/github-integration).
+
+### Hook System
+
+Orchestrator has two hook mechanisms:
+
+**Command Hooks** - Shell commands injected before or after the setup and build steps. Defined via
+the `customCommandHooks` YAML parameter or as files in `.game-ci/command-hooks/`.
+
+**Container Hooks** - Separate Docker containers that run before or after the build. Defined via
+`containerHookFiles` (built-in names like `aws-s3-upload-build`) or `preBuildSteps` /
+`postBuildSteps` YAML. Each hook specifies an image, commands, and optional secrets.
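+
+For example, an inline post-build container hook takes roughly this shape (the image and command
+here are illustrative; see the hooks guide for the authoritative schema):
+
+```yaml
+postBuildSteps:
+  - name: upload-artifacts
+    image: amazon/aws-cli
+    commands: echo "upload build output here"
+```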
+
+See [Hooks](hooks/container-hooks) for the full guide.
+
+### Configuration Resolution
+
+Orchestrator reads configuration from multiple sources with this priority:
+
+```mermaid
+flowchart TD
+ A["GitHub Action inputs\nwith: providerStrategy: aws"] -->|overrides| B["CLI flags\n--providerStrategy aws"]
+ B -->|overrides| C["Query overrides\nPull Secrets from external sources"]
+ C -->|overrides| D["Environment variables\nPROVIDER_STRATEGY=aws"]
+ style A fill:#4a9,color:#fff
+ style D fill:#a94,color:#fff
+```
+
+The `OrchestratorOptions` class handles this resolution. Environment variables accept both
+`camelCase` and `UPPER_SNAKE_CASE` formats.
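+
+A minimal sketch of how one parameter can be looked up in both spellings (hypothetical helper, not
+the actual `OrchestratorOptions` code):
+
+```typescript
+// Convert a camelCase parameter name to UPPER_SNAKE_CASE so both forms
+// of an environment variable can be checked.
+function toUpperSnake(name: string): string {
+  return name.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toUpperCase();
+}
+
+function readParam(name: string, env: Record<string, string | undefined>): string | undefined {
+  return env[name] ?? env[toUpperSnake(name)]; // camelCase first, then UPPER_SNAKE_CASE
+}
+```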
+
+### Remote Client
+
+The remote client runs **inside** the build container (not on the CI runner). It provides two CLI
+modes:
+
+- **`remote-cli-pre-build`** - Called before the Unity build. Handles git clone, LFS pull, cache
+ restoration, retained workspace setup, large package optimization, and custom hook execution.
+- **`remote-cli-post-build`** - Called after the Unity build. Pushes the Library and build caches to
+ cloud storage.
+
+### GitHub Integration
+
+The `GitHub` class manages GitHub Checks and async workflow dispatch:
+
+- **`createGitHubCheck()`** - Creates a check run on the commit via the GitHub API.
+- **`updateGitHubCheck()`** - Updates check status. In async environments, updates are routed
+ through the `Async Checks API` workflow (since containers can't call the Checks API directly).
+- **`runUpdateAsyncChecksWorkflow()`** - Triggers a GitHub Actions workflow that updates the check
+ run on behalf of the container.
+
+### Caching and Storage
+
+Caching is split between the remote client (push/pull logic) and the storage provider (S3 or
+rclone):
+
+- **Standard caching** - Archives the `Library/` folder and LFS files as `.tar.lz4` archives.
+- **Retained workspaces** - Keeps the entire project folder. Uses distributed locking via S3 or
+ rclone to prevent concurrent access.
+
+See [Storage](storage) for the full breakdown of file categories, compression, and storage backends.
+
+## CLI Modes
+
+The CLI system uses a decorator-based registry (`@CliFunction`). Each mode maps to a static method:
+
+| Mode | Description |
+| ----------------------- | ---------------------------------------------------- |
+| `cli-build` | Full build workflow (default) |
+| `async-workflow` | Async build execution (called from within container) |
+| `remote-cli-pre-build` | Pre-build setup (runs inside container) |
+| `remote-cli-post-build` | Post-build cache push (runs inside container) |
+| `remote-cli-log-stream` | Pipe and capture build logs |
+| `checks-update` | Update GitHub Checks from async container |
+| `cache-push` | Push a directory to cache storage |
+| `cache-pull` | Pull a directory from cache storage |
+| `garbage-collect` | Clean up old cloud resources |
+| `list-resources` | List active cloud resources |
+| `list-workflow` | List running workflows |
+| `watch` | Follow logs of a running workflow |
+| `hash` | Hash folder contents recursively |
+| `print-input` | Print all resolved parameters |
+
+## Source Code Map
+
+```
+src/model/orchestrator/
+├── orchestrator.ts # Main coordinator
+├── options/
+│ ├── orchestrator-options.ts # Configuration resolution
+│ └── orchestrator-folders.ts # Path management (/data/...)
+├── workflows/
+│ ├── workflow-composition-root.ts # Workflow selection
+│ ├── build-automation-workflow.ts # Standard build pipeline
+│ ├── async-workflow.ts # Async dispatch
+│ └── custom-workflow.ts # Custom job execution
+├── providers/
+│ ├── provider-interface.ts # 7-method contract
+│ ├── provider-loader.ts # Dynamic plugin loading
+│ ├── provider-url-parser.ts # GitHub/NPM/local parsing
+│ ├── provider-git-manager.ts # Clone and cache repos
+│ ├── aws/ # AWS Fargate provider
+│ ├── k8s/ # Kubernetes provider
+│ ├── docker/ # Local Docker provider
+│ └── local/ # Direct execution provider
+├── services/
+│ ├── core/
+│ │ ├── orchestrator-logger.ts
+│ │ ├── orchestrator-system.ts # Shell command execution
+│ │ ├── shared-workspace-locking.ts
+│ │ └── follow-log-stream-service.ts
+│ ├── hooks/
+│ │ ├── command-hook-service.ts
+│ │ └── container-hook-service.ts
+│ └── cache/
+│ └── local-cache-service.ts
+├── remote-client/
+│ ├── index.ts # Pre/post build logic
+│ └── caching.ts # Cache push/pull with LZ4
+└── tests/
+
+src/model/cli/
+├── cli.ts # CLI entry point
+└── cli-functions-repository.ts # @CliFunction registry
+
+src/model/github.ts # GitHub Checks + async dispatch
+```
diff --git a/docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx b/docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx
new file mode 100644
index 00000000..54d4dc44
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx
@@ -0,0 +1,198 @@
+# Build Services
+
+Build services run during the build lifecycle to handle submodule initialization, caching, LFS
+configuration, and git hooks. They work with any provider - local, AWS, Kubernetes, GCP Cloud Run,
+Azure ACI, or custom CLI providers.
+
+```mermaid
+flowchart TD
+ subgraph "Build Lifecycle"
+ S1["1. Submodule init (selective, from YAML profile)"]
+ S2["2. LFS agent config (custom transfer agent)"]
+ S3["3. Cache restore (Library + LFS from filesystem)"]
+ S4["4. Hook install (lefthook / husky)"]
+ S5["5. BUILD"]
+ S6["6. Cache save (Library + LFS to filesystem)"]
+ S1 --> S2 --> S3 --> S4 --> S5 --> S6
+ end
+```
+
+## Submodule Profiles
+
+Selectively initialize submodules from a YAML profile instead of cloning everything. Useful for
+monorepos where builds only need a subset of submodules.
+
+### Profile Format
+
+```yaml
+primary_submodule: MyGameFramework
+submodules:
+ - name: CoreFramework
+ branch: main # initialize this submodule
+ - name: OptionalModule
+ branch: empty # skip this submodule (empty branch)
+ - name: Plugins* # glob pattern - matches PluginsCore, PluginsAudio, etc.
+ branch: main
+```
+
+- `branch: main` - initialize the submodule on its configured branch
+- `branch: empty` - skip the submodule (checked out to an empty branch)
+- Trailing `*` enables glob matching against submodule names
+
+### Variant Overlays
+
+A variant file merges on top of the base profile for build-type or platform-specific overrides:
+
+```yaml
+# server-variant.yml
+submodules:
+ - name: ClientOnlyAssets
+ branch: empty # skip client assets for server builds
+ - name: ServerTools
+ branch: main # add server-only tools
+```
+
+### Inputs
+
+| Input | Default | Description |
+| ---------------------- | ------- | --------------------------------------- |
+| `submoduleProfilePath` | - | Path to YAML submodule profile |
+| `submoduleVariantPath` | - | Path to variant overlay (merged on top) |
+| `submoduleToken` | - | Auth token for private submodule clones |
+
+### How It Works
+
+1. Parses the profile YAML and optional variant overlay
+2. Reads `.gitmodules` to discover all submodules
+3. Matches each submodule against profile entries (exact name or glob)
+4. Initializes matched submodules; skips the rest
+5. If `submoduleToken` is set, configures git URL rewriting for auth
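+
+The name matching in step 3 can be sketched like this (hypothetical helper; the real service also
+carries each entry's `branch` field):
+
+```typescript
+// Match a submodule name against profile entries: exact names,
+// or a trailing "*" treated as a prefix glob.
+function matchesProfile(submodule: string, entries: string[]): boolean {
+  return entries.some((entry) =>
+    entry.endsWith('*') ? submodule.startsWith(entry.slice(0, -1)) : submodule === entry,
+  );
+}
+```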
+
+### Example
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local
+ submoduleProfilePath: config/submodule-profiles/game/client/profile.yml
+ submoduleVariantPath: config/submodule-profiles/game/client/server.yml
+ submoduleToken: ${{ secrets.SUBMODULE_TOKEN }}
+ targetPlatform: StandaloneLinux64
+```
+
+---
+
+## Local Build Caching
+
+Cache the Unity Library folder and LFS objects between local builds without external cache actions.
+Filesystem-based - works on self-hosted runners with persistent storage.
+
+### How It Works
+
+- **Cache key**: `{platform}-{version}-{branch}` (sanitized)
+- **Cache root**: `localCacheRoot` if set, else `$RUNNER_TEMP/game-ci-cache`, else `.game-ci/cache`
+- **Restore**: extracts `library-{key}.tar` / `lfs-{key}.tar` if they exist
+- **Save**: creates tar archives of the Library and LFS folders after the build
+- **Garbage collection**: removes cache entries that haven't been accessed recently
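+
+The cache key construction above can be sketched as follows (the exact sanitization rules are an
+assumption):
+
+```typescript
+// Build a filesystem-safe cache key from platform, Unity version, and branch.
+function cacheKey(platform: string, version: string, branch: string): string {
+  const sanitize = (s: string) => s.replace(/[^A-Za-z0-9._-]/g, '-');
+  return [platform, version, branch].map(sanitize).join('-');
+}
+```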
+
+### Inputs
+
+| Input | Default | Description |
+| ------------------- | ------- | -------------------------- |
+| `localCacheEnabled` | `false` | Enable filesystem caching |
+| `localCacheRoot` | - | Cache directory override |
+| `localCacheLibrary` | `true` | Cache Unity Library folder |
+| `localCacheLfs` | `true` | Cache LFS objects |
+
+### Example
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local
+ localCacheEnabled: true
+ localCacheRoot: /mnt/cache # persistent disk on self-hosted runner
+ targetPlatform: StandaloneLinux64
+```
+
+---
+
+## Custom LFS Transfer Agents
+
+Register external Git LFS transfer agents that handle LFS object storage via custom backends like
+[elastic-git-storage](https://github.com/frostebite/elastic-git-storage), S3-backed agents, or any
+custom transfer protocol.
+
+### How It Works
+
+Configures git to use a custom transfer agent:
+
+```
+git config lfs.customtransfer.{name}.path
+git config lfs.customtransfer.{name}.args
+git config lfs.standalonetransferagent {name}
+```
+
+The agent name is derived from the executable filename (e.g. `elastic-git-storage` from
+`./tools/elastic-git-storage`).
+
+### Inputs
+
+| Input | Default | Description |
+| ---------------------- | ------- | --------------------------------------------- |
+| `lfsTransferAgent` | - | Path to custom LFS agent executable |
+| `lfsTransferAgentArgs` | - | Arguments passed to the agent |
+| `lfsStoragePaths` | - | Sets `LFS_STORAGE_PATHS` environment variable |
+
+### Example
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local
+ lfsTransferAgent: ./tools/elastic-git-storage
+ lfsTransferAgentArgs: --config ./lfs-config.yml
+ lfsStoragePaths: /mnt/lfs-cache
+ targetPlatform: StandaloneLinux64
+```
+
+---
+
+## Git Hooks
+
+Detect and install lefthook or husky during builds. **Disabled by default** for build performance -
+enable when your build pipeline depends on hooks running.
+
+### How It Works
+
+1. **Detect**: looks for `lefthook.yml` / `.lefthook.yml` (lefthook) or `.husky/` directory (husky)
+2. **If enabled**: runs `npx lefthook install` or sets up husky
+3. **If disabled** (default): sets `core.hooksPath` to an empty directory to bypass all hooks
+4. **Skip list**: specific hooks can be skipped via environment variables:
+ - Lefthook: `LEFTHOOK_EXCLUDE=pre-commit,prepare-commit-msg`
+ - Husky: `HUSKY=0` disables all hooks
+
+### Inputs
+
+| Input | Default | Description |
+| ------------------ | ------- | -------------------------------------- |
+| `gitHooksEnabled` | `false` | Install and run git hooks during build |
+| `gitHooksSkipList` | - | Comma-separated hooks to skip |
+
+### Example
+
+```yaml
+# Enable hooks but skip pre-commit
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: local
+ gitHooksEnabled: true
+ gitHooksSkipList: pre-commit,prepare-commit-msg
+ targetPlatform: StandaloneLinux64
+```
+
+## Related
+
+- [CLI Provider Protocol](../providers/cli-provider-protocol) - Write providers in any language
+- [Cloud Providers](/docs/github-orchestrator/providers/gcp-cloud-run) - GCP Cloud Run and Azure ACI
+- [Caching](caching) - Orchestrator caching strategies
diff --git a/docs/03-github-orchestrator/07-advanced-topics/10-lfs-agents.mdx b/docs/03-github-orchestrator/07-advanced-topics/10-lfs-agents.mdx
new file mode 100644
index 00000000..6294b9ff
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/10-lfs-agents.mdx
@@ -0,0 +1,92 @@
+# Custom LFS Agents
+
+Orchestrator supports custom Git LFS transfer agents - external executables that handle LFS upload
+and download instead of the default HTTPS transport.
+
+## elastic-git-storage (Built-in)
+
+[elastic-git-storage](https://github.com/frostebite/elastic-git-storage) is a custom Git LFS
+transfer agent with first-class support in Orchestrator. It supports multiple storage backends:
+local filesystem, WebDAV, and rclone remotes.
+
+When you set `lfsTransferAgent: elastic-git-storage`, Orchestrator will:
+
+1. Search PATH and known locations for an existing installation
+2. If not found, download the correct platform binary from GitHub releases
+3. Configure it as the standalone LFS transfer agent via `git config`
+
+### Basic Usage
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ targetPlatform: StandaloneLinux64
+ lfsTransferAgent: elastic-git-storage
+ lfsStoragePaths: '/mnt/lfs-cache'
+```
+
+### Version Pinning
+
+Append `@version` to pin a specific release:
+
+```yaml
+lfsTransferAgent: elastic-git-storage@v1.0.0
+```
+
+Without a version suffix, the latest release is downloaded.
+
+### Multiple Storage Backends
+
+`lfsStoragePaths` accepts semicolon-separated paths. The agent tries each in order:
+
+```yaml
+lfsStoragePaths: '/mnt/fast-ssd;webdav://lfs.example.com/storage;rclone://remote:lfs-bucket'
+```
+
+### Agent Arguments
+
+Pass additional flags via `lfsTransferAgentArgs`:
+
+```yaml
+lfsTransferAgent: elastic-git-storage
+lfsTransferAgentArgs: '--verbose --concurrency 4'
+lfsStoragePaths: '/mnt/lfs-cache'
+```
+
+## Custom Agents
+
+Any Git LFS custom transfer agent can be used. Provide the path to the executable:
+
+```yaml
+lfsTransferAgent: /usr/local/bin/lfs-folderstore
+lfsTransferAgentArgs: '-dir /mnt/lfs-store'
+```
+
+Orchestrator configures the agent via:
+
+```
+git config lfs.customtransfer.{name}.path
+git config lfs.customtransfer.{name}.args
+git config lfs.standalonetransferagent {name}
+```
+
+The agent name is derived from the executable filename (without extension).
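+
+That derivation can be sketched as (assumed behavior):
+
+```typescript
+// Derive the LFS agent name from an executable path: basename minus extension.
+function agentName(executablePath: string): string {
+  const base = executablePath.split(/[\\/]/).pop() ?? '';
+  return base.replace(/\.[^.]+$/, '');
+}
+```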
+
+## Storage Paths
+
+The `lfsStoragePaths` input sets the `LFS_STORAGE_PATHS` environment variable. How these paths are
+interpreted depends on the agent:
+
+| Agent | Path format |
+| ------------------- | -------------------------------------------------- |
+| elastic-git-storage | Local paths, `webdav://` URLs, `rclone://` remotes |
+| lfs-folderstore | Local directory paths |
+| Custom | Agent-specific |
+
+## Inputs Reference
+
+| Input | Description |
+| ---------------------- | ------------------------------------------------------------------------------------------------------ |
+| `lfsTransferAgent` | Agent name (e.g., `elastic-git-storage`) or path to executable. Append `@version` for release pinning. |
+| `lfsTransferAgentArgs` | Additional arguments passed to the agent |
+| `lfsStoragePaths` | Semicolon-separated storage paths (set as `LFS_STORAGE_PATHS` env var) |
diff --git a/docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx b/docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx
new file mode 100644
index 00000000..c6790e26
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx
@@ -0,0 +1,176 @@
+---
+sidebar_position: 11
+---
+
+# Test Workflow Engine
+
+Orchestrator includes a test workflow engine that supports YAML-based test suite definitions,
+multi-dimensional taxonomy filtering, and structured test result reporting.
+
+## Overview
+
+Instead of running tests via a single `buildMethod`, the test workflow engine lets you define test
+suites as YAML configurations - specifying exactly which tests run for each CI event, filtered by
+taxonomy metadata, with sequential execution dependencies.
+
+## Test Suite Definitions
+
+Test suites are YAML files that define ordered runs with filters:
+
+```yaml
+name: pull-request
+description: Fast feedback for pull requests
+runs:
+ - name: fast
+ editMode: true
+ filters:
+ Maturity: Trusted
+ FeedbackSpeed: Fast,Moderate
+ Scope: Unit,Integration
+ timeout: 300
+
+ - name: basic
+ needs: [fast]
+ editMode: true
+ playMode: true
+ filters:
+ Maturity: Trusted,Stable
+ Scope: Unit,Integration,System
+ timeout: 600
+
+ - name: extended
+ needs: [basic]
+ playMode: true
+ filters:
+ Rigor: Strict
+ Scope: End To End
+ timeout: 1200
+```
+
+### Suite Fields
+
+| Field | Description |
+| ------------- | --------------------------------------------------- |
+| `name` | Suite identifier, used for cache keys and reporting |
+| `description` | Human-readable description |
+| `runs` | Ordered list of test runs |
+
+### Run Fields
+
+| Field | Description |
+| ------------- | ------------------------------------------------- |
+| `name` | Run identifier |
+| `needs` | List of run names that must complete first |
+| `editMode` | Run Unity EditMode tests (default: false) |
+| `playMode` | Run Unity PlayMode tests (default: false) |
+| `builtClient` | Run tests against a built client (default: false) |
+| `builtClientPath` | Path to the built client (used with `builtClient`) |
+| `filters` | Taxonomy filters to select tests |
+| `timeout` | Maximum run duration in seconds |
+
+## Taxonomy Filters
+
+Tests are categorized by multi-dimensional taxonomy metadata. Filters select tests by matching
+against these dimensions:
+
+### Example Dimensions
+
+The dimensions below are provided as a starting point. Projects can rename, remove, or replace any
+of these, and add entirely new dimensions; the taxonomy system is fully extensible.
+
+| Dimension | Values | Description |
+| -------------- | ------------------------------------- | ----------------------------- |
+| Scope | Unit, Integration, System, End To End | Test boundary |
+| Maturity | Trusted, Stable, Experimental | Test reliability |
+| FeedbackSpeed | Fast, Moderate, Slow | Expected execution time |
+| Execution | Synchronous, Asynchronous, Coroutine | Execution model |
+| Rigor | Strict, Normal, Relaxed | Assertion strictness |
+| Determinism | Deterministic, NonDeterministic | Repeatability |
+| IsolationLevel | Full, Partial, None | External dependency isolation |
+
+### Filter Syntax
+
+Filters accept comma-separated values, regex patterns, and hierarchical dot-notation:
+
+```yaml
+filters:
+ Scope: Unit,Integration # Match any of these values
+ Maturity: /Trusted|Stable/ # Regex pattern
+ Domain: Combat.Melee # Hierarchical match
+```
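+
+A hypothetical sketch of how a single filter could be evaluated (comma lists and `/regex/`
+patterns; hierarchical matching is omitted, and the real engine may differ):
+
+```typescript
+// Evaluate one taxonomy filter against a test's dimension value.
+function matchesFilter(value: string, filter: string): boolean {
+  if (filter.length > 1 && filter.startsWith('/') && filter.endsWith('/')) {
+    return new RegExp(filter.slice(1, -1)).test(value); // /regex/ pattern
+  }
+  return filter.split(',').map((v) => v.trim()).includes(value); // comma list
+}
+```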
+
+### Defining Your Own Dimensions
+
+Projects can define their own taxonomy dimensions (or override the examples above) via a taxonomy
+definition file:
+
+```yaml
+# .game-ci/taxonomy.yml
+extensible_groups:
+ - name: SubjectLevel
+ values: [Class, Feature, System, Product]
+ - name: DataScenario
+ values: [HappyPath, EdgeCase, BoundaryValue, ErrorPath]
+```
+
+## Test Execution
+
+### EditMode Tests
+
+Standard Unity Test Framework tests that run in the editor without entering play mode:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ testSuitePath: .game-ci/test-suites/pr-suite.yml
+ testSuiteEvent: pr
+ targetPlatform: StandaloneLinux64
+```
+
+### PlayMode Tests
+
+Unity tests that require entering play mode:
+
+```yaml
+runs:
+ - name: playmode-tests
+ playMode: true
+ filters:
+ Scope: Integration,System
+```
+
+### Built-Client Tests
+
+Run tests against a previously built game client. Requires a build step before the test step:
+
+```yaml
+runs:
+ - name: client-tests
+ builtClient: true
+ builtClientPath: ./Builds/StandaloneLinux64
+ filters:
+ Scope: End To End
+```
+
+## Structured Results
+
+Test results are output in machine-readable formats:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ testSuitePath: .game-ci/test-suites/pr-suite.yml
+ testResultFormat: junit # junit, json, or both
+ testResultPath: ./test-results/
+```
+
+Results integrate with GitHub Checks for inline failure reporting on pull requests.
+
+## Inputs Reference
+
+| Input | Description |
+| ------------------ | ----------------------------------------------------- |
+| `testSuitePath` | Path to YAML test suite definition file |
+| `testSuiteEvent` | CI event name for suite selection (pr, push, release) |
+| `testTaxonomyPath` | Path to custom taxonomy definition YAML |
+| `testResultFormat` | Output format: `junit`, `json`, or `both` |
+| `testResultPath` | Directory for structured result output |
diff --git a/docs/03-github-orchestrator/07-advanced-topics/12-hot-runner-protocol.mdx b/docs/03-github-orchestrator/07-advanced-topics/12-hot-runner-protocol.mdx
new file mode 100644
index 00000000..71b4fad8
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/12-hot-runner-protocol.mdx
@@ -0,0 +1,168 @@
+---
+sidebar_position: 12
+---
+
+# Hot Runner Protocol
+
+Orchestrator supports persistent, process-based build/test providers. Hot runners are long-lived
+Unity editor processes that accept jobs immediately without cold-start overhead.
+
+## Overview
+
+Traditional providers follow a cold-start model: provision → clone → cache restore → build → tear
+down. Hot runners eliminate startup latency by keeping editors warm and ready.
+
+A hot runner is an **actively running process** that:
+
+- Stays alive between jobs
+- Accepts work via a transport protocol
+- Returns structured results via the [artifact system](build-output-system)
+- Can run on any machine (local, server, cloud, container)
+
+## Runner Transports
+
+Runners connect via pluggable transport protocols:
+
+| Transport | Description | Use Case |
+| --------------- | -------------------------------------- | --------------------------------- |
+| `github` | Self-hosted GitHub Actions runner | Standard CI infrastructure |
+| `websocket` | WebSocket/SignalR real-time connection | Custom dashboards, game platforms |
+| `file` | Shared directory watch for job files | Simple setups, NAS-based farms |
+| `local-network` | mDNS/Bonjour discovery + HTTP API | LAN build farms, office setups |
+| `custom` | User-implemented runner interface | Any transport |
+
+### GitHub Runner Transport
+
+Register a hot runner as a self-hosted GitHub Actions runner:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ runnerTransport: github
+ runnerLabels: unity-2022,linux,hot
+ editorMode: persistent
+```
+
+### WebSocket Transport
+
+Connect to a coordinator via WebSocket or SignalR:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ runnerTransport: websocket
+ runnerEndpoint: wss://build.example.com/runners
+ runnerLabels: unity-2022,windows,gpu
+```
+
+### File-Based Transport
+
+Watch a shared directory for JSON job files - the simplest transport:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ runnerTransport: file
+ runnerEndpoint: /mnt/shared/build-jobs/
+ runnerLabels: unity-2022,linux
+```
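+
+To illustrate the mechanism, a file-transport runner's polling loop can be sketched in a few lines.
+This is a hypothetical sketch, not the shipped runner: the `.json`-to-`.claimed` rename convention
+is an assumption used to show how a runner can claim a job file exactly once:
+
+```python
+import json
+from pathlib import Path
+
+def poll_job_dir(job_dir):
+    """Pick up pending job files, parse them, and claim them.
+
+    Renaming a file is atomic on a local filesystem, so two runners
+    watching the same directory cannot both claim the same job.
+    """
+    jobs = []
+    for job_file in sorted(Path(job_dir).glob("*.json")):
+        job = json.loads(job_file.read_text())
+        # Claim the job by renaming it out of the *.json watch pattern.
+        job_file.rename(job_file.with_suffix(".claimed"))
+        jobs.append(job)
+    return jobs
+```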
+
+### Local Network Transport
+
+Discover runners on the local network via mDNS:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ runnerTransport: local-network
+ runnerLabels: unity-2022,macos,m1
+```
+
+## Editor Modes
+
+### Ephemeral (Default)
+
+Editor launches for each job and exits. This is the default behavior for all providers:
+
+```yaml
+editorMode: ephemeral
+```
+
+### Persistent
+
+Editor stays running between jobs. Combine with [incremental sync](incremental-sync-protocol) for
+fastest iteration:
+
+```yaml
+editorMode: persistent
+```
+
+Benefits:
+
+- No editor startup time (saves 30–120 seconds per build)
+- Unity Library folder stays warm - only changed assets reimport
+- Domain reload only when scripts change
+
+### Hybrid
+
+Pool of persistent editors, scale up ephemeral instances for burst load:
+
+```yaml
+editorMode: hybrid
+```
+
+## Runner Labels
+
+Runners self-describe with labels. Jobs are dispatched to runners matching required labels:
+
+```yaml
+runnerLabels: unity-2022,linux,gpu,hot
+```
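+
+Assuming matching follows superset semantics (modeled on how GitHub Actions routes jobs to
+self-hosted runners), a runner is eligible when its label set contains every label the job
+requires. A minimal sketch of that check:
+
+```python
+def runner_matches(runner_labels, required_labels):
+    """True if the runner advertises every label the job requires.
+
+    Extra runner labels are fine; only missing required labels disqualify.
+    """
+    have = {label.strip() for label in runner_labels.split(",")}
+    need = {label.strip() for label in required_labels.split(",")}
+    return need <= have  # set "is subset" comparison
+```
+
+Here `runner_matches("unity-2022,linux,gpu,hot", "unity-2022,gpu")` holds, while a runner missing
+`gpu` would be skipped.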
+
+## Runner Registration Protocol
+
+Runners communicate using a JSON protocol, regardless of transport:
+
+**Register:**
+
+```json
+{
+ "type": "register",
+ "labels": ["unity-2022", "linux", "gpu"],
+ "capabilities": {
+ "unityVersion": "2022.3.20f1",
+ "platform": "StandaloneLinux64",
+ "gpu": true
+ }
+}
+```
+
+**Heartbeat:**
+
+```json
+{
+ "type": "heartbeat",
+ "runnerId": "runner-abc-123",
+ "status": "idle"
+}
+```
+
+Job dispatch and results follow the same JSON protocol.
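+
+To make the message flow concrete, here is a minimal coordinator-side sketch that handles the two
+messages above. The runner-id format and the in-memory registry are illustrative assumptions, not
+part of the documented protocol:
+
+```python
+import json
+import uuid
+
+class Coordinator:
+    """Tracks registered runners and their latest heartbeat status."""
+
+    def __init__(self):
+        self.runners = {}
+
+    def handle(self, raw_message):
+        msg = json.loads(raw_message)
+        if msg["type"] == "register":
+            # Assign an id; the "runner-..." format is an assumption.
+            runner_id = "runner-" + uuid.uuid4().hex[:8]
+            self.runners[runner_id] = {
+                "labels": msg["labels"],
+                "capabilities": msg.get("capabilities", {}),
+                "status": "idle",
+            }
+            return {"type": "registered", "runnerId": runner_id}
+        if msg["type"] == "heartbeat":
+            self.runners[msg["runnerId"]]["status"] = msg["status"]
+        return None
+```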
+
+## Composability
+
+Hot runners compose with other orchestrator features:
+
+- **[Incremental Sync](incremental-sync-protocol)** - Receive workspace updates without full clone.
+ Hot runners + incremental sync = fastest possible iteration.
+- **[Artifact System](build-output-system)** - Return structured, typed results.
+- **[Test Workflow Engine](test-workflow-engine)** - Execute test suites with taxonomy filtering.
+
+## Inputs Reference
+
+| Input | Description |
+| ----------------- | ---------------------------------------------------------------------------- |
+| `runnerTransport` | Transport protocol: `github`, `websocket`, `file`, `local-network`, `custom` |
+| `runnerEndpoint` | Connection endpoint for the transport |
+| `runnerLabels` | Comma-separated runner labels for job routing |
+| `editorMode` | Editor lifecycle: `ephemeral`, `persistent`, `hybrid` |
diff --git a/docs/03-github-orchestrator/07-advanced-topics/13-build-output-system.mdx b/docs/03-github-orchestrator/07-advanced-topics/13-build-output-system.mdx
new file mode 100644
index 00000000..f845b7c6
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/13-build-output-system.mdx
@@ -0,0 +1,219 @@
+---
+sidebar_position: 13
+---
+
+# Structured Build Output System
+
+Orchestrator supports multiple build output types with structured manifests, per-type
+post-processing pipelines, and first-class GitHub integration.
+
+## Overview
+
+Production Unity workflows produce many output types beyond a single build artifact - test results,
+screenshots, server builds, exported data, metrics, and logs. Each type needs different handling:
+storage, processing, reporting, and retention. The output system provides a structured way to
+declare, collect, process, and report on all of these.
+
+## Declaring Output Types
+
+Specify which output types a build produces:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ targetPlatform: StandaloneLinux64
+ outputTypes: build,test-results,metrics,images
+```
+
+## Built-in Output Types
+
+| Type | Default Path | Description |
+| -------------- | ----------------------------- | -------------------------------------------- |
+| `build` | `./Builds/{platform}/` | Standard game build artifact |
+| `test-results` | `./TestResults/` | NUnit/JUnit XML test results |
+| `server-build` | `./Builds/{platform}-server/` | Dedicated server build |
+| `data-export` | `./Exports/` | Exported data files (CSV, JSON, binary) |
+| `images` | `./Captures/` | Screenshots, render captures, atlas previews |
+| `logs` | `./Logs/` | Structured build and test logs |
+| `metrics` | `./Metrics/` | Build performance metrics, asset statistics |
+| `coverage` | `./Coverage/` | Code coverage reports |
+
+## Output Manifest
+
+Every build produces a JSON manifest describing all outputs:
+
+```json
+{
+ "buildGuid": "abc-123",
+ "timestamp": "2024-01-15T10:30:00Z",
+ "outputs": [
+ {
+ "type": "build",
+ "path": "./Builds/StandaloneLinux64/",
+ "size": 524288000,
+ "hash": "sha256:abc...",
+ "metadata": {
+ "platform": "StandaloneLinux64",
+ "scripting": "IL2CPP"
+ }
+ },
+ {
+ "type": "test-results",
+ "path": "./TestResults/editmode-results.xml",
+ "format": "nunit3",
+ "summary": {
+ "total": 342,
+ "passed": 340,
+ "failed": 1,
+ "skipped": 1
+ }
+ },
+ {
+ "type": "images",
+ "path": "./Captures/",
+ "files": ["main-menu.png", "gameplay-01.png"]
+ },
+ {
+ "type": "metrics",
+ "path": "./Metrics/build-metrics.json",
+ "metadata": {
+ "compileTime": 45.2,
+ "assetCount": 12500,
+ "buildSize": 524288000
+ }
+ }
+ ]
+}
+```
+
+Control manifest output with:
+
+```yaml
+outputManifestPath: ./build-manifest.json
+```
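+
+Downstream steps can consume the manifest directly. As a sketch (assuming the field names shown in
+the example manifest above), a gate that counts test failures might look like:
+
+```python
+import json
+
+def count_failed_tests(manifest_path):
+    """Sum failed tests across all test-results entries in a manifest."""
+    with open(manifest_path) as f:
+        manifest = json.load(f)
+    return sum(
+        output.get("summary", {}).get("failed", 0)
+        for output in manifest["outputs"]
+        if output["type"] == "test-results"
+    )
+```
+
+A CI step could then exit non-zero whenever `count_failed_tests` returns anything above zero.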
+
+## Post-Processing Pipelines
+
+Each output type has a configurable processing pipeline that runs after the build:
+
+### Test Results
+
+Parse test result XML files and report to GitHub:
+
+```yaml
+outputPipelines:
+ test-results:
+ - parse: nunit3
+ - report: github-checks
+ - archive: s3
+```
+
+Test results appear as inline annotations on pull request files, showing exactly where failures
+occurred.
+
+### Image Captures
+
+Thumbnail, diff against baselines, and attach to PRs:
+
+```yaml
+outputPipelines:
+ images:
+ - thumbnail: { maxWidth: 256 }
+ - diff: { baseline: ./Baselines/ }
+ - report: github-pr-comment
+```
+
+Provide a baseline directory for visual regression testing:
+
+```yaml
+imageBaselinePath: ./Tests/Baselines/
+```
+
+### Build Metrics
+
+Aggregate metrics and track trends across builds:
+
+```yaml
+outputPipelines:
+ metrics:
+ - aggregate: { groupBy: platform }
+ - trend: { history: 30 }
+ - report: github-check-summary
+```
+
+Set the number of historical builds to retain for trend analysis:
+
+```yaml
+metricsHistory: 30
+```
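+
+The trend step compares the newest build against the retained history. A sketch of the delta
+calculation (the report shape here is an assumption for illustration):
+
+```python
+def metric_trend(history, key):
+    """Latest value of a metric plus its delta against the mean of
+    all earlier builds in the retained history."""
+    values = [build[key] for build in history]
+    latest = values[-1]
+    if len(values) > 1:
+        baseline = sum(values[:-1]) / len(values[:-1])
+    else:
+        baseline = latest  # first build: no history to compare against
+    return {"latest": latest, "delta": latest - baseline}
+```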
+
+## Custom Output Types
+
+Register project-specific output types:
+
+```yaml
+customOutputTypes:
+ - name: addressables-catalog
+ path: ./ServerData/
+ postProcess:
+ - validate: json-schema
+ - upload: cdn
+ - name: localization-export
+ path: ./Exports/Localization/
+ postProcess:
+ - validate: csv
+ - archive: s3
+```
+
+Custom types can also be defined in a file at `.game-ci/output-types.yml`.
+
+## GitHub Integration
+
+Output types automatically integrate with GitHub CI surfaces:
+
+| Output Type | GitHub Surface |
+| --------------- | ------------------------------------------------------ |
+| Test results | Check annotations - inline failure markers on PR diffs |
+| Images | PR comment - image grid with baseline diffs |
+| Metrics | Check summary - trend charts and delta tables |
+| Coverage | PR comment - coverage percentage and delta |
+| Build artifacts | Check run - download links |
+
+## Combining with Test Suites
+
+The output system works with the [Test Workflow Engine](test-workflow-engine) to provide structured
+test results per suite run:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ testSuitePath: .game-ci/test-suites/pr-suite.yml
+ testResultFormat: both # junit + json
+ testResultPath: ./TestResults/
+ outputTypes: test-results,metrics,coverage
+```
+
+Each test suite run produces its own output entries in the manifest.
+
+## Combining with Hot Runners
+
+[Hot runners](hot-runner-protocol) can produce multiple output types per job. The output manifest is
+returned as part of the job result:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ editorMode: persistent
+ outputTypes: build,test-results,images,metrics
+```
+
+## Inputs Reference
+
+| Input | Description |
+| -------------------- | ----------------------------------------------------- |
+| `outputTypes` | Comma-separated output types to collect |
+| `outputManifestPath` | Path for the output manifest JSON file |
+| `outputPipelines` | YAML defining per-type post-processing pipelines |
+| `customOutputTypes` | YAML defining project-specific output types |
+| `imageBaselinePath` | Path to baseline images for visual regression diffing |
+| `metricsHistory` | Number of historical builds for trend tracking |
diff --git a/docs/03-github-orchestrator/07-advanced-topics/14-incremental-sync-protocol.mdx b/docs/03-github-orchestrator/07-advanced-topics/14-incremental-sync-protocol.mdx
new file mode 100644
index 00000000..841eb67f
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/14-incremental-sync-protocol.mdx
@@ -0,0 +1,131 @@
+---
+sidebar_position: 14
+---
+
+# Incremental Sync Protocol
+
+The incremental sync protocol delivers workspace changes to build/test environments without
+traditional caching. Changes come from git deltas, direct input, or generic storage.
+
+## Overview
+
+Traditional caching archives the Library folder, pushes to storage, pulls on next build, and
+extracts. Even with cache hits, this takes minutes. The incremental sync protocol eliminates caching
+entirely for warm environments - instead of "restore the world," it says "here's what changed."
+
+## Sync Strategies
+
+| Strategy | Source | Use Case |
+| -------------- | -------------------------------- | ---------------------------------- |
+| `full` | Full clone + cache restore | Default, cold environments |
+| `git-delta` | Git diff since last sync | Standard CI, PR builds |
+| `direct-input` | File content passed as job input | No-push workflows, rapid iteration |
+| `storage-pull` | Changed files from rclone remote | Large inputs, binary assets |
+
+## Git Delta Sync
+
+For git-based workflows:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ syncStrategy: git-delta
+```
+
+The protocol:
+
+1. Runner tracks its last sync commit SHA
+2. On job dispatch, receives target commit SHA
+3. Fetches and diffs: `git diff --name-only <lastSyncSHA>..<targetSHA>`
+4. Checks out target commit
+5. Unity reimports only changed assets (Library stays warm)
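+
+Step 3 can be sketched with plain git commands. This assumes the target commit has already been
+fetched into the runner's workspace; `changed_files` is an illustrative helper name, not a
+documented API:
+
+```python
+import subprocess
+
+def changed_files(repo_dir, last_sha, target_sha):
+    """List files changed between the last synced commit and the target."""
+    result = subprocess.run(
+        ["git", "-C", repo_dir, "diff", "--name-only",
+         last_sha + ".." + target_sha],
+        check=True, capture_output=True, text=True,
+    )
+    return [line for line in result.stdout.splitlines() if line]
+```
+
+Only the returned paths need to be checked out; everything else keeps its Library import state.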
+
+## Direct Input Sync
+
+Trigger jobs **without pushing to git**. Changes are passed as input:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ syncStrategy: direct-input
+ syncInputRef: storage://my-remote/inputs/changes.tar.lz4
+```
+
+For small changes, content can be inline. For large inputs, content is pulled from generic storage -
+backed by rclone, so **any storage provider works**: S3, GCS, Azure Blob, local filesystem, WebDAV,
+SFTP, and dozens more.
+
+This enables powerful workflows:
+
+- "Run tests on these changes before I commit"
+- "Build with this asset override without touching the repo"
+- "Apply this config and validate"
+- "Test a hotfix without a PR"
+
+## Storage-Backed Input
+
+Large inputs are stored via rclone and referenced by URI:
+
+```yaml
+syncInputRef: storage://my-remote/job-inputs/changes.tar.lz4
+```
+
+The protocol:
+
+1. Job includes a storage URI reference
+2. Runner pulls the archive from the rclone remote
+3. Extracts and overlays onto workspace
+4. Executes the build/test
+5. Optionally reverts the overlay after completion
+
+Users don't need git push access to trigger builds - just write access to the storage backend.
+
+### Why rclone?
+
+rclone supports 70+ storage backends out of the box. By using rclone as the storage abstraction,
+any of those providers works without orchestrator-specific integration:
+
+```
+storage://s3:my-bucket/inputs/ # AWS S3
+storage://gcs:my-bucket/inputs/ # Google Cloud Storage
+storage://azure:container/inputs/ # Azure Blob
+storage:///mnt/shared/inputs/ # Local filesystem
+storage://webdav:server/inputs/ # WebDAV
+storage://sftp:server/inputs/ # SFTP
+```
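+
+The exact mapping is an inference from these examples, but a parser for this URI shape is
+straightforward. This sketch treats `storage:///path` as the local filesystem (naming that remote
+`local` is an assumption) and a `remote:` prefix as the rclone remote name:
+
+```python
+def parse_storage_uri(uri):
+    """Split a storage:// URI into (rclone remote, path)."""
+    body = uri[len("storage://"):]
+    if body.startswith("/"):
+        return ("local", body)  # storage:///mnt/shared/inputs/
+    head, sep, rest = body.partition("/")
+    if ":" in head:
+        remote, _, bucket = head.partition(":")
+        return (remote, bucket + sep + rest)  # storage://s3:my-bucket/...
+    return (head, rest)  # bare remote name, e.g. storage://my-remote/...
+```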
+
+## Composability
+
+The incremental sync protocol is **independent of hot runners**. It's a workspace update strategy
+that works with any provider:
+
+| Provider | Sync Strategy | How It Works |
+| ----------------------- | -------------- | ------------------------------------------- |
+| Hot runner (persistent) | `git-delta` | Fastest - warm editor, minimal changes |
+| Hot runner (persistent) | `direct-input` | No-push iteration |
+| Retained workspace | `git-delta` | Fast - workspace persists, no editor warmth |
+| Cold container | `storage-pull` | Pull overlay, apply to fresh clone |
+
+Hot runners + incremental sync = fastest possible iteration cycle.
+
+## Sync State
+
+Runners maintain sync state for delta calculations:
+
+```json
+{
+ "lastSyncCommit": "abc123def",
+ "lastSyncTimestamp": "2024-01-15T10:30:00Z",
+ "workspaceHash": "sha256:...",
+ "pendingOverlays": []
+}
+```
+
+## Inputs Reference
+
+| Input | Description |
+| ------------------- | -------------------------------------------------------------------- |
+| `syncStrategy` | Sync approach: `full`, `git-delta`, `direct-input`, `storage-pull` |
+| `syncInputRef` | URI for direct-input or storage-pull content |
+| `syncStorageRemote` | rclone remote for storage-backed inputs (defaults to `rcloneRemote`) |
+| `syncRevertAfter` | Revert overlaid changes after job (default: `true`) |
diff --git a/docs/03-github-orchestrator/07-advanced-topics/15-massive-projects.mdx b/docs/03-github-orchestrator/07-advanced-topics/15-massive-projects.mdx
new file mode 100644
index 00000000..6fb20ebf
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/15-massive-projects.mdx
@@ -0,0 +1,240 @@
+---
+sidebar_position: 15
+---
+
+# Massive Projects
+
+Orchestrator includes specific support for Unity projects at extreme scale - repositories exceeding
+100 GB, asset counts above 500,000 files, and Library folders that dwarf the source tree itself.
+
+## Overview
+
+Standard CI assumptions break down at this scale:
+
+- **Clone times** - A 100 GB repository with Git LFS takes 15–45 minutes to clone cold, consuming
+ most of a build budget before Unity opens.
+- **Library folder sizes** - Import caches for large projects routinely reach 30–80 GB. Tarring,
+ uploading, downloading, and extracting this on every build is impractical.
+- **LFS bandwidth** - Pulling all LFS objects for every build exhausts quotas and slows pipelines.
+ Most builds need only a fraction of the asset set.
+- **Cache inflation** - GitHub Actions cache and similar systems impose size limits (typically 10 GB
+ per entry) that Library folders quickly exceed.
+- **CI timeouts** - Default job timeouts of 6 hours are insufficient for cold clones followed by
+ full imports on large projects.
+
+Orchestrator addresses each of these with a two-level workspace architecture, move-centric caching,
+selective LFS hydration, and submodule profile filtering.
+
+## Two-Level Workspace Architecture
+
+Orchestrator manages workspaces at two levels:
+
+**Root workspace** - A lean, long-lived clone of the repository. It contains the full git history
+and index, a minimal set of LFS objects (only those needed for compilation), and no Unity Library
+folder. The root workspace is cached across builds and updated incrementally.
+
+**Child workspaces** - Per-build-target workspaces derived from the root. Each child is LFS-hydrated
+for its specific asset paths, contains the Library folder for its platform target, and is retained
+between builds of the same target.
+
+```
+root-workspace/
+ .git/ ← full history, minimal LFS
+ Assets/ ← source files only
+ Packages/
+
+child-workspaces/
+ StandaloneLinux64/
+ Assets/ ← LFS-hydrated for this target
+ Library/ ← warm, 40 GB, retained
+ StandaloneWindows64/
+ Assets/
+ Library/ ← warm, separate cache, retained
+ WebGL/
+ Assets/
+ Library/
+```
+
+The orchestrator manages this layout automatically when `retainedWorkspaces: true` is set. Child
+workspaces are created on first build and reused on subsequent builds of the same target. Only
+changed files from git delta sync are applied to each child.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ retainedWorkspaces: true
+ workspaceRoot: /mnt/build-storage/my-game
+ targetPlatform: StandaloneLinux64
+```
+
+## Move-Centric Caching
+
+Traditional caching copies files: archive → upload → download → extract. For a 50 GB Library folder
+this is a substantial I/O operation even with a cache hit.
+
+Orchestrator uses atomic folder moves on local storage instead. On NTFS and ext4, moving a
+directory within the same volume is an O(1) metadata operation regardless of how many files it
+contains. A 50 GB Library folder moves in milliseconds.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ localCacheRoot: /mnt/build-storage/cache
+ cacheStrategy: move
+```
+
+The cache lifecycle for a Library folder:
+
+1. Build starts - move Library from cache to workspace (instant)
+2. Unity runs - Library is warm, only changed assets reimport
+3. Build ends - move Library back to cache (instant)
+4. Next build - Library is already warm at cache location
+
+This eliminates the archive/upload/download/extract cycle entirely for builds running on retained
+storage. Remote cache fallback (S3, GCS, Azure Blob via rclone) is still available for cold runners
+that do not have local cache access.
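+
+The restore step of the lifecycle above can be sketched with a single `os.rename`, which is a
+metadata-only operation when the cache and workspace share a filesystem. The directory layout and
+helper name here are illustrative assumptions:
+
+```python
+import os
+
+def restore_library(cache_root, target, workspace):
+    """Move a cached Library folder into the workspace (step 1 above).
+
+    Returns False on a cold start, in which case Unity rebuilds the
+    Library from scratch and step 3 moves it into the cache afterwards.
+    """
+    cached = os.path.join(cache_root, target, "Library")
+    destination = os.path.join(workspace, "Library")
+    if not os.path.isdir(cached):
+        return False
+    # os.rename is O(1) within one filesystem, regardless of folder size.
+    os.rename(cached, destination)
+    return True
+```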
+
+| Cache Strategy | Library Move Time | Suitable For |
+| -------------- | ------------------------ | ------------------------------- |
+| `move` | Milliseconds | Retained storage, build farms |
+| `rclone` | Minutes (size-dependent) | Remote cache, ephemeral runners |
+| `github-cache` | Minutes (10 GB limit) | Small projects only |
+
+## Custom LFS Transfer Agents
+
+Standard Git LFS transfers every object through a single HTTP endpoint. For large projects this
+creates a bottleneck, especially when only a subset of assets is needed for a given build.
+
+Orchestrator supports alternative LFS transfer agents via the `lfsTransferAgent` input. A transfer
+agent is a binary that Git invokes in place of the standard LFS client. Agents can implement partial
+transfer, parallel streams, resumable downloads, and custom storage backends.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ lfsTransferAgent: elastic-git-storage
+ lfsTransferAgentArgs: '--verbose'
+ lfsStoragePaths: 'Assets/LargeAssets,Assets/Cinematics'
+```
+
+`lfsStoragePaths` limits LFS hydration to the specified asset directories. Files outside these paths
+are not downloaded, reducing transfer volume to only what the build target needs.
+
+**elastic-git-storage** is the recommended agent for large projects. It supports:
+
+- Parallel multi-stream transfers
+- Resumable downloads after network interruption
+- Content-addressed deduplication across workspaces
+- Direct object storage access (S3, GCS, Azure Blob) without a relay server
+
+**rclone-based agents** are an alternative when the LFS server cannot be replaced. They proxy
+transfers through any rclone-supported backend, enabling caching and bandwidth throttling.
+
+To use a custom agent, provide the agent binary path:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ lfsTransferAgent: /usr/local/bin/my-lfs-agent
+ lfsTransferAgentArgs: '--threads 8 --cache /mnt/lfs-cache'
+```
+
+## Submodule Profiles
+
+Monorepos with many submodules suffer initialization overhead proportional to the number of active
+submodules. A project with 30 submodules where any given build target needs 8 wastes time and
+bandwidth initializing the other 22.
+
+Submodule profiles define exactly which submodules to initialize for a given build context. See the
+[Monorepo Support](monorepo-support) page for full profile format and configuration.
+
+For massive projects, the key practice is keeping each build target's profile minimal:
+
+```yaml
+primary_submodule: CoreFramework
+submodules:
+ - name: CoreFramework
+ branch: main
+ - name: RenderPipeline
+ branch: main
+ - name: OptionalCinematicTools
+ branch: empty # skipped for standard builds
+ - name: Plugins*
+ branch: main
+```
+
+Profile initialization is atomic - skipped submodules are never touched, so build time scales with
+the active submodule set rather than the total repository size.
+
+## Incremental Sync
+
+For projects where even a targeted LFS pull is expensive, incremental sync avoids full re-clones
+entirely. See the [Incremental Sync Protocol](incremental-sync-protocol) page for the full protocol.
+
+The two strategies most relevant to massive projects:
+
+**git-delta** - The runner tracks its last sync commit SHA. On job dispatch it receives the target
+SHA, diffs the two, and checks out only the changed files. Assets that have not changed are not
+touched, and their Library import state remains valid.
+
+**storage-pull** - For assets that live outside git (too large for LFS, or managed by a separate
+pipeline), the runner pulls only the changed files from a generic storage remote. This combines with
+git-delta so that code changes and asset changes are both handled incrementally.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ syncStrategy: git-delta
+ retainedWorkspaces: true
+```
+
+Together, retained workspaces and git-delta sync deliver the minimal possible import work on every
+build: Unity sees only the files that actually changed.
+
+## Build Performance Tips
+
+**Parallelize platform builds.** Use a GitHub Actions matrix across platform targets. Each target
+maintains its own child workspace and Library folder, so builds do not interfere.
+
+```yaml
+strategy:
+ matrix:
+ targetPlatform:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ - WebGL
+```
+
+**Warm the cache before the sprint.** At the start of a sprint cycle, run a full import on each
+target platform during off-hours. Retained workspaces mean subsequent PR builds start with warm
+Library folders.
+
+**Use LFS partial clone patterns.** Structure the repository so assets are grouped by build target
+under predictable paths (`Assets/Platforms/Linux/`, `Assets/Platforms/WebGL/`). This makes
+`lfsStoragePaths` filtering straightforward and predictable.
+
+**Reserve timeouts generously.** Set `buildTimeout` to account for cold-start scenarios, even when
+warm builds are expected. The first build after a runner restart will be cold.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ buildTimeout: 360 # minutes
+```
+
+**Monitor Library folder health.** Occasional full reimports are necessary when Unity upgrades or
+large-scale asset reorganizations occur. Schedule these explicitly rather than discovering them
+mid-sprint when a runner's Library becomes stale.
+
+## Inputs Reference
+
+| Input | Description |
+| ---------------------- | ------------------------------------------------------- |
+| `retainedWorkspaces` | Keep child workspaces between builds (`true` / `false`) |
+| `workspaceRoot` | Base path for root and child workspace storage |
+| `localCacheRoot` | Local filesystem path for move-centric Library cache |
+| `cacheStrategy` | Cache approach: `move`, `rclone`, `github-cache` |
+| `lfsTransferAgent` | Name or path of a custom LFS transfer agent binary |
+| `lfsTransferAgentArgs` | Additional arguments passed to the LFS transfer agent |
+| `lfsStoragePaths` | Comma-separated asset paths to limit LFS hydration |
+| `buildTimeout` | Maximum build duration in minutes |
diff --git a/docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx b/docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx
new file mode 100644
index 00000000..b824a5d4
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx
@@ -0,0 +1,255 @@
+---
+sidebar_position: 16
+---
+
+# Monorepo Support
+
+Orchestrator provides first-class support for Unity monorepos - single repositories that contain
+multiple products sharing engine code, submodules, and build configuration.
+
+## Overview
+
+A Unity monorepo typically contains:
+
+- A shared engine layer (`Assets/_Engine/`) used by all products
+- Game-specific code separated into submodules (`Assets/_Game/Submodules/`)
+- Per-product build configuration (build methods, target platforms, Steam IDs)
+- Shared tooling, CI workflows, and automation scripts
+
+The challenge is that different products need different subsets of the repository. A build of
+Product A should not initialize submodules that only Product B needs. A server build should not pull
+in client-only assets. A CI run for a hotfix should not reimport assets belonging to an unrelated
+product.
+
+Orchestrator addresses this through a profile system that controls submodule initialization per
+product and build type, combined with per-product framework configuration.
+
+## Submodule Profile System
+
+Profiles are YAML files that declare which submodules to initialize for a given product and context.
+Each submodule entry specifies either `branch: main` (initialize) or `branch: empty` (skip).
+
+### Profile Format
+
+```yaml
+primary_submodule: MyGameFramework
+submodules:
+ - name: CoreFramework
+ branch: main
+ - name: OptionalModule
+ branch: empty
+ - name: Plugins*
+ branch: main
+```
+
+`primary_submodule` identifies the main game submodule - the orchestrator uses this as the root of
+the build context.
+
+Each entry under `submodules` names a submodule (or a glob pattern matching multiple submodules) and
+declares whether it should be initialized. Submodules marked `branch: empty` are never touched
+during initialization, regardless of what other profiles or defaults specify.
+
+### Glob Pattern Support
+
+Glob patterns match multiple submodules with a single entry. This is useful for plugin groups that
+always travel together:
+
+```yaml
+submodules:
+ - name: Plugins*
+ branch: main # initializes PluginsCore, PluginsAudio, PluginsLocalization, etc.
+ - name: ThirdParty*
+ branch: empty # skips all ThirdParty submodules
+```
+
+Patterns are matched in order. The first matching entry wins, so more specific entries should appear
+before broader patterns.
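+
+The first-match-wins rule maps naturally onto `fnmatch`-style globbing. A sketch of the resolution
+logic (treating submodules not listed in the profile as skipped, which is an assumption; the real
+default may differ):
+
+```python
+from fnmatch import fnmatch
+
+def should_initialize(submodule, entries):
+    """Resolve a submodule name against profile entries, in order.
+
+    The first entry whose glob matches decides; "empty" means skip.
+    """
+    for entry in entries:
+        if fnmatch(submodule, entry["name"]):
+            return entry["branch"] != "empty"
+    return False  # not listed in the profile at all
+```
+
+With the entries above, `PluginsAudio` resolves to initialize and `ThirdPartySDK` to skip; listing
+a specific entry before the `Plugins*` glob carves out a single exception.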
+
+### Variant Overlays
+
+Variants override specific submodule entries from a base profile. A common use case is a
+`server.yml` variant that skips client-only assets and adds server-specific submodules:
+
+Base profile (`config/submodule-profiles/my-game/ci/profile.yml`):
+
+```yaml
+primary_submodule: MyGameFramework
+submodules:
+ - name: CoreFramework
+ branch: main
+ - name: ClientUI
+ branch: main
+ - name: ServerRuntime
+ branch: empty
+```
+
+Server variant (`config/submodule-profiles/my-game/ci/server.yml`):
+
+```yaml
+overrides:
+ - name: ClientUI
+ branch: empty # not needed for server builds
+ - name: ServerRuntime
+ branch: main # required for server builds
+```
+
+The variant is merged on top of the base profile at initialization time. Only the listed entries are
+overridden; all others inherit from the base.
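+
+The merge itself is a name-keyed overlay. A minimal sketch (the dict shapes mirror the YAML above;
+the merge helper itself is illustrative):
+
+```python
+def apply_variant(base, variant):
+    """Overlay variant overrides onto a base profile, keyed by name."""
+    merged = {entry["name"]: dict(entry) for entry in base["submodules"]}
+    for override in variant.get("overrides", []):
+        # Add or replace only the overridden entries; all others inherit.
+        merged.setdefault(override["name"], {}).update(override)
+    return {
+        "primary_submodule": base["primary_submodule"],
+        "submodules": list(merged.values()),
+    }
+```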
+
+## Configuration
+
+Specify profiles and variants as action inputs:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ submoduleProfilePath: config/submodule-profiles/my-game/ci/profile.yml
+ submoduleVariantPath: config/submodule-profiles/my-game/ci/server.yml
+```
+
+When `submoduleProfilePath` is set, orchestrator reads the profile before any git operations and
+initializes only the listed submodules. Skipped submodules are never cloned, fetched, or touched.
+
+When `submoduleVariantPath` is also set, it is merged on top of the base profile before
+initialization begins.
+
+If neither input is set, orchestrator falls back to standard git submodule initialization (all
+submodules, no filtering).
+
+## Multi-Product CI Matrix
+
+A monorepo with multiple products can build all products in parallel using a GitHub Actions matrix.
+Each matrix entry specifies a different profile, producing independent builds from the same
+repository checkout.
+
+```yaml
+jobs:
+ build:
+ strategy:
+ matrix:
+ include:
+ - product: my-game
+ profile: config/submodule-profiles/my-game/ci/profile.yml
+ buildMethod: MyGame.BuildScripts.BuildGame
+ targetPlatform: StandaloneLinux64
+ - product: my-game-server
+ profile: config/submodule-profiles/my-game/ci/profile.yml
+ variant: config/submodule-profiles/my-game/ci/server.yml
+ buildMethod: MyGame.BuildScripts.BuildServer
+ targetPlatform: StandaloneLinux64
+ - product: my-other-game
+ profile: config/submodule-profiles/my-other-game/ci/profile.yml
+ buildMethod: MyOtherGame.BuildScripts.BuildGame
+ targetPlatform: StandaloneWindows64
+ steps:
+ - uses: game-ci/unity-builder@v4
+ with:
+ submoduleProfilePath: ${{ matrix.profile }}
+ submoduleVariantPath: ${{ matrix.variant }}
+ buildMethod: ${{ matrix.buildMethod }}
+ targetPlatform: ${{ matrix.targetPlatform }}
+```
+
+Each matrix job initializes only the submodules its profile declares. Products do not interfere with
+each other's initialization or build output.
+
+## Shared Code and Assembly Definitions
+
+Unity Assembly Definitions (`.asmdef` files) enforce code boundaries within the monorepo. The
+recommended layout separates shared engine code from game-specific code:
+
+```
+Assets/_Engine/ ← shared engine systems
+ Physics/
+ Engine.Physics.asmdef
+ Rendering/
+ Engine.Rendering.asmdef
+
+Assets/_Game/
+ Shared/Code/ ← shared game framework
+ Services/
+ Game.Services.asmdef
+ UI/
+ Game.UI.asmdef
+ Submodules/
+ MyGameFramework/ ← product-specific, in its own submodule
+ Code/
+ MyGame.asmdef
+```
+
+Assembly definitions in `Assets/_Engine/` reference only engine-level namespaces. Assembly
+definitions in `Assets/_Game/Shared/Code/` may reference engine assemblies. Product-specific
+assemblies reference both, but never each other across product boundaries.
+
+This structure means Unity can compile and test each product's assembly graph independently, and
+compile errors in one product do not block builds of another.
+
+## Framework Configuration
+
+Per-product build configuration belongs in a central frameworks file. This is the single source of
+truth for Steam App IDs, build methods, and supported platforms per product:
+
+```yaml
+# config/frameworks.yml
+frameworks:
+ my-game:
+ steamAppId: 123456
+ buildMethod: MyGame.BuildScripts.BuildGame
+ platforms:
+ - StandaloneLinux64
+ - StandaloneWindows64
+ - StandaloneOSX
+ my-game-server:
+ steamAppId: 123457
+ buildMethod: MyGame.BuildScripts.BuildServer
+ platforms:
+ - StandaloneLinux64
+ my-other-game:
+ steamAppId: 789012
+ buildMethod: MyOtherGame.BuildScripts.BuildGame
+ platforms:
+ - StandaloneWindows64
+ - WebGL
+```
+
+CI workflows read from this file to populate matrix entries and avoid duplicating product-specific
+values across workflow files. A single change to `frameworks.yml` propagates to all workflows that
+reference it.
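+
+The expansion from the frameworks file into matrix entries can be sketched as follows (a minimal
+illustration; an inline dict stands in for parsing `config/frameworks.yml` with a YAML library, and
+the field names mirror the file above):
+
+```python
+# One matrix entry per product/platform pair, read from the central
+# frameworks mapping so workflow files never duplicate these values.
+frameworks = {
+    "my-game": {
+        "buildMethod": "MyGame.BuildScripts.BuildGame",
+        "platforms": ["StandaloneLinux64", "StandaloneWindows64"],
+    },
+    "my-other-game": {
+        "buildMethod": "MyOtherGame.BuildScripts.BuildGame",
+        "platforms": ["StandaloneWindows64", "WebGL"],
+    },
+}
+
+matrix = [
+    {"product": name, "buildMethod": cfg["buildMethod"], "targetPlatform": platform}
+    for name, cfg in frameworks.items()
+    for platform in cfg["platforms"]
+]
+# 4 entries here: 2 platforms for my-game, 2 for my-other-game
+```
+
+The resulting list is what a `strategy.matrix.include` block would contain.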
+
+## Best Practices
+
+**Keep profiles small.** A profile should list only what a build actually requires. Err toward
+`branch: empty` for anything that is not clearly needed. Unnecessary submodule initialization adds
+clone time and increases the chance of conflicts.
+
+**Use glob patterns for plugin groups.** Plugins that always travel together (a vendor SDK split
+across three submodules, for example) should share a naming prefix so a single glob entry controls
+them all. This reduces profile maintenance as new plugins are added.
+
+**Use variant overlays for build-type differences.** Do not create separate base profiles for client
+and server builds. Start with a shared base profile and apply a server variant on top. This keeps
+the diff between build types explicit and minimizes duplication.
+
+**Validate profiles in CI.** Add a profile validation step that checks every submodule named in a
+profile actually exists in `.gitmodules`. This catches typos and stale entries before they cause
+initialization failures on build agents.
+
+```yaml
+- name: Validate submodule profiles
+ run: |
+    .\automation\ValidateSubmoduleProfiles.ps1 `
+      -ProfileDir config/submodule-profiles `
+      -GitmodulesPath .gitmodules
+ shell: powershell
+```
+
+**Document the profile per product.** Each product's profile directory should contain a brief README
+explaining which submodules are in scope, which are intentionally excluded, and which variant files
+exist. New contributors should not need to reverse-engineer profile intent from the YAML alone.
+
+## Inputs Reference
+
+| Input | Description |
+| ---------------------- | ------------------------------------------------------------------ |
+| `submoduleProfilePath` | Path to the profile YAML file controlling submodule initialization |
+| `submoduleVariantPath` | Path to a variant YAML file overlaid on top of the base profile |
diff --git a/docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx b/docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx
new file mode 100644
index 00000000..1a627c3c
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx
@@ -0,0 +1,210 @@
+---
+sidebar_position: 17
+---
+
+# Build Reliability
+
+Build reliability features harden CI builds against common failure modes: git corruption on
+persistent runners, Windows filesystem issues with cross-platform repositories, and build output
+management. All features are opt-in via action inputs and fail gracefully: a reliability check that
+encounters an error logs a warning rather than failing the build.
+
+## Git Integrity Checking
+
+Self-hosted runners with persistent workspaces accumulate state between builds. Aborted jobs, disk
+errors, and concurrent git operations can leave the repository in a corrupted state. When this
+happens, the next build fails with cryptic git errors that are difficult to diagnose.
+
+Git integrity checking catches corruption before it causes build failures.
+
+### What It Checks
+
+The integrity check runs three validations in sequence:
+
+1. **`git fsck --no-dangling`** - Detects broken links, missing objects, and corrupt pack data in
+ the local repository. The `--no-dangling` flag suppresses harmless warnings about unreachable
+ objects that are normal in CI environments.
+
+2. **Stale lock files** - Scans the `.git/` directory recursively for any files ending in `.lock`
+ (`index.lock`, `shallow.lock`, `config.lock`, `HEAD.lock`, `refs/**/*.lock`). These are left
+ behind by git processes that were killed mid-operation and prevent subsequent git commands from
+ running. All lock files found are removed.
+
+3. **Submodule backing stores** - For each submodule declared in `.gitmodules`, validates that the
+ `.git` file inside the submodule directory points to an existing backing store under
+ `.git/modules/`. A broken backing store reference means the submodule's history is inaccessible,
+ and any git operation inside it will fail.
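+
+The backing-store validation in step 3 can be sketched as follows (a simplified illustration of the
+check, not the orchestrator's actual implementation):
+
+```python
+import os
+
+def submodule_backing_store_ok(submodule_dir: str) -> bool:
+    # A full clone keeps a .git *directory*; a submodule keeps a .git *file*
+    # containing a "gitdir:" pointer to its backing store under .git/modules/.
+    gitfile = os.path.join(submodule_dir, ".git")
+    if os.path.isdir(gitfile):
+        return True  # full clone, nothing to validate
+    if not os.path.isfile(gitfile):
+        return False  # submodule never initialized
+    with open(gitfile) as fh:
+        first_line = fh.readline().strip()
+    if not first_line.startswith("gitdir:"):
+        return False
+    target = first_line[len("gitdir:"):].strip()
+    # The pointer is normally relative to the submodule directory
+    return os.path.isdir(os.path.join(submodule_dir, target))
+```
+
+A `False` result here means any git operation inside the submodule would fail, which is exactly the
+condition the integrity check reports.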
+
+### Configuration
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ gitIntegrityCheck: 'true'
+```
+
+### Automatic Recovery
+
+When `gitAutoRecover` is enabled (the default when `gitIntegrityCheck` is on) and corruption is
+detected, the service attempts recovery:
+
+1. Remove the corrupted `.git` directory entirely
+2. Re-initialize the repository with `git init`
+3. The checkout action completes the clone on the next step
+
+This is a last-resort recovery. It works because the orchestrator's checkout step will re-populate
+the repository from the remote after re-initialization.
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ gitIntegrityCheck: 'true'
+ gitAutoRecover: 'true' # this is the default when gitIntegrityCheck is enabled
+```
+
+To run integrity checks without automatic recovery (report-only mode), disable it explicitly:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ gitIntegrityCheck: 'true'
+ gitAutoRecover: 'false'
+```
+
+In report-only mode, detected corruption is logged as a warning and the build continues. This is
+useful for monitoring repository health without taking corrective action.
+
+## Reserved Filename Cleanup
+
+Windows has reserved device names (`CON`, `PRN`, `AUX`, `NUL`, `COM1`–`COM9`, `LPT1`–`LPT9`) that
+cannot be used as filenames. When a git repository created on macOS or Linux contains files with
+these names, checking them out on Windows causes problems.
+
+### The Problem
+
+Unity is particularly sensitive to reserved filenames. When the asset importer encounters a file
+named `aux.meta`, `nul.png`, or similar, it can enter an infinite reimport loop - detecting the
+file, failing to process it, detecting it again. This manifests as:
+
+- Unity hanging during asset import with no progress
+- 100% CPU usage from the asset import worker
+- Build jobs that run until they hit the timeout limit
+- Explorer crashes when navigating to affected directories
+
+These files are valid on macOS and Linux, so they can easily enter a repository through
+cross-platform contributions.
+
+### Solution
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ cleanReservedFilenames: 'true'
+```
+
+When enabled, the service scans the `Assets/` directory tree before Unity processes the project. Any
+file or directory whose name (without extension) matches a reserved device name is removed. Each
+removal is logged as a warning so the source of the problematic files can be traced.
+
+### Reserved Names
+
+The full list of reserved names checked (case-insensitive, with any file extension):
+
+`CON`, `PRN`, `AUX`, `NUL`, `COM1`, `COM2`, `COM3`, `COM4`, `COM5`, `COM6`, `COM7`, `COM8`, `COM9`,
+`LPT1`, `LPT2`, `LPT3`, `LPT4`, `LPT5`, `LPT6`, `LPT7`, `LPT8`, `LPT9`
+
+Examples of files that would be removed: `aux.meta`, `nul.png`, `CON.txt`, `com1.asset`.
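+
+The matching rule can be sketched in a few lines (a hypothetical helper illustrating the
+case-insensitive, extension-stripping comparison described above):
+
+```python
+import os
+
+# Windows reserved device names, matched on the filename stem only.
+RESERVED = {"CON", "PRN", "AUX", "NUL",
+            *(f"COM{i}" for i in range(1, 10)),
+            *(f"LPT{i}" for i in range(1, 10))}
+
+def is_reserved(filename: str) -> bool:
+    # Compare the name without its extension, case-insensitively.
+    stem, _ext = os.path.splitext(os.path.basename(filename))
+    return stem.upper() in RESERVED
+```
+
+So `aux.meta` and `com1.asset` match, while `auxiliary.meta` does not, since only the exact stem is
+reserved.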
+
+## Build Output Archival
+
+Automatically archive build outputs after successful builds. Archives are organized per platform and
+managed with a count-based retention policy.
+
+### Configuration
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ buildArchiveEnabled: 'true'
+ buildArchivePath: '/mnt/build-archives'
+ buildArchiveRetention: '5'
+```
+
+### How It Works
+
+1. After a successful build, the build output directory is moved (or copied, if a cross-device move
+ is not possible) to `{archivePath}/{platform}/build-{timestamp}`.
+2. Archives are organized by platform - each target platform gets its own subdirectory.
+3. The retention policy keeps the N most recent builds per platform. Older builds are automatically
+ removed.
+
+The archive path must be set when archival is enabled. This can be a local directory on the runner
+or a mounted network volume.
+
+### Retention Strategy
+
+Retention is count-based: the `buildArchiveRetention` value specifies how many builds to keep per
+platform. When a new build is archived and the total exceeds the retention count, the oldest
+archives are removed.
+
+- Default retention: **3** builds per platform
+- Set a higher value for release branches where rollback capability is important
+- Archives are sorted by modification time, so the most recent builds are always retained
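+
+The retention pass amounts to a sort-and-trim over the platform directory (a simplified sketch of
+the count-based policy; names and signature are illustrative):
+
+```python
+import os
+import shutil
+
+def prune_archives(platform_dir: str, retain: int) -> list[str]:
+    """Remove all but the `retain` most recent build-* archives,
+    judged by modification time; returns the removed paths."""
+    archives = [os.path.join(platform_dir, name)
+                for name in os.listdir(platform_dir)
+                if name.startswith("build-")]
+    archives.sort(key=os.path.getmtime, reverse=True)  # newest first
+    stale = archives[retain:]
+    for path in stale:
+        shutil.rmtree(path)
+    return stale
+```
+
+Run once per platform after each archival, this keeps disk usage bounded regardless of build
+frequency.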
+
+### Archive Layout
+
+```
+/mnt/build-archives/
+ StandaloneLinux64/
+ build-2025-01-15T10-30-00-000Z/
+ build-2025-01-16T14-22-00-000Z/
+ build-2025-01-17T09-15-00-000Z/
+ StandaloneWindows64/
+ build-2025-01-15T10-45-00-000Z/
+ build-2025-01-16T14-35-00-000Z/
+```
+
+## Git Environment Configuration
+
+The reliability service configures a git environment variable automatically:
+
+- `GIT_CONFIG_NOSYSTEM=1` - Bypasses the system-level git configuration file. This prevents
+ corrupted or misconfigured system git configs on self-hosted runners from affecting builds.
+
+This is applied automatically and does not require any configuration.
+
+## Inputs Reference
+
+| Input | Description | Default |
+| ------------------------ | -------------------------------------------------------------------------------- | --------- |
+| `gitIntegrityCheck` | Run git integrity checks before build | `'false'` |
+| `gitAutoRecover` | Attempt automatic recovery if corruption detected (requires `gitIntegrityCheck`) | `'true'` |
+| `cleanReservedFilenames` | Remove Windows reserved filenames from `Assets/` | `'false'` |
+| `buildArchiveEnabled` | Archive build output after successful build | `'false'` |
+| `buildArchivePath` | Path to store build archives (required when archival is enabled) | `''` |
+| `buildArchiveRetention` | Number of builds to retain per platform | `'3'` |
+
+## Recommended Configuration
+
+For self-hosted runners with persistent workspaces:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ gitIntegrityCheck: 'true'
+ gitAutoRecover: 'true'
+ cleanReservedFilenames: 'true'
+ buildArchiveEnabled: 'true'
+ buildArchivePath: '/mnt/build-archives'
+ buildArchiveRetention: '5'
+```
+
+For ephemeral runners (GitHub-hosted or fresh containers), git integrity checking is less valuable
+since the workspace is created fresh each time. Reserved filename cleanup is still useful if the
+repository contains cross-platform contributions:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ cleanReservedFilenames: 'true'
+```
diff --git a/docs/03-github-orchestrator/07-advanced-topics/18-engine-plugins.mdx b/docs/03-github-orchestrator/07-advanced-topics/18-engine-plugins.mdx
new file mode 100644
index 00000000..82edd541
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/18-engine-plugins.mdx
@@ -0,0 +1,203 @@
+---
+sidebar_position: 18
+---
+
+# Engine Plugins
+
+Orchestrator is game-engine agnostic. While Unity is the built-in default, you can plug in any game
+engine (Godot, Unreal, a custom engine, or anything else) without forking the orchestrator.
+
+An **engine plugin** tells the orchestrator how to handle engine-specific concerns like cache
+folders and container lifecycle hooks. Everything else (provisioning, git sync, logging, hooks,
+secrets) works the same regardless of engine.
+
+## How It Works
+
+The orchestrator only needs to know two things about your engine:
+
+1. **Which folders to cache** between builds (e.g. `Library` for Unity, `.godot/imported` for Godot)
+2. **What to run on container shutdown** (e.g. Unity needs to return its license)
+
+That's the entire `EnginePlugin` interface:
+
+```typescript
+interface EnginePlugin {
+ name: string; // 'unity', 'godot', 'unreal', etc.
+ cacheFolders: string[]; // folders to cache, relative to project root
+ preStopCommand?: string; // shell command for container shutdown (optional)
+}
+```
+
+## Built-in: Unity
+
+Unity is the default engine plugin. When you don't specify `--engine` or `--engine-plugin`, the
+orchestrator behaves exactly as it always has: caching the `Library` folder and returning the Unity
+license on container shutdown.
+
+No configuration needed. Existing workflows are unchanged.
+
+## Using a Different Engine
+
+Set `engine` and `enginePlugin` to load a community or custom engine plugin:
+
+```yaml
+# GitHub Actions
+- uses: game-ci/unity-builder@v4
+ with:
+ engine: godot
+ enginePlugin: '@game-ci/godot-engine'
+ targetPlatform: StandaloneLinux64
+```
+
+```bash
+# CLI
+game-ci build \
+ --engine godot \
+ --engine-plugin @game-ci/godot-engine \
+ --target-platform linux
+```
+
+## Plugin Sources
+
+Engine plugins can be loaded from three sources:
+
+| Source | Format | Example |
+| ------------------ | -------------------------- | ----------------------------------------- |
+| **NPM module** | Package name or local path | `@game-ci/godot-engine`, `./my-plugin.js` |
+| **CLI executable** | `cli:` prefix + path | `cli:/usr/local/bin/my-engine-plugin` |
+| **Docker image** | `docker:` prefix + image | `docker:gameci/godot-engine-plugin` |
+
+When no prefix is specified, the plugin is loaded as an NPM module.
+
+### NPM Module
+
+The simplest way to distribute an engine plugin. Publish an NPM package that exports an
+`EnginePlugin` object:
+
+```typescript
+// index.ts
+export default {
+ name: 'godot',
+ cacheFolders: ['.godot/imported', '.godot/shader_cache'],
+};
+```
+
+Supports default export, named `plugin` export, or `module.exports`:
+
+```javascript
+// CommonJS
+module.exports = {
+ name: 'godot',
+ cacheFolders: ['.godot/imported'],
+};
+```
+
+Install the package in your project, then reference it:
+
+```yaml
+enginePlugin: '@your-org/godot-engine'
+```
+
+Or point to a local file during development:
+
+```yaml
+enginePlugin: './my-engine-plugin.js'
+```
+
+### CLI Executable
+
+For plugins written in any language (Go, Python, Rust, shell, etc.). The executable receives a
+`get-engine-config` argument and must print a JSON config on stdout:
+
+```bash
+$ my-engine-plugin get-engine-config
+{"name": "godot", "cacheFolders": [".godot/imported"], "preStopCommand": ""}
+```
+
+Reference it with the `cli:` prefix:
+
+```yaml
+enginePlugin: 'cli:/usr/local/bin/my-engine-plugin'
+```
+
+### Docker Image
+
+For containerized plugin distribution. The image is run with
+`docker run --rm <image> get-engine-config` and must print the JSON config on stdout:
+
+```dockerfile
+FROM alpine
+COPY config.json /config.json
+ENTRYPOINT ["sh", "-c", "cat /config.json"]
+```
+
+Reference it with the `docker:` prefix:
+
+```yaml
+enginePlugin: 'docker:your-org/godot-engine-plugin'
+```
+
+## Writing an Engine Plugin
+
+To create a plugin for your engine, you need to answer two questions:
+
+1. **What folders should be cached?** These are directories that take a long time to regenerate but
+ don't change between builds. For Unity this is `Library`, for Godot it's `.godot/imported`.
+
+2. **Does your engine need cleanup on container shutdown?** Unity needs to return its license. Most
+   engines don't need anything here; just omit `preStopCommand`.
+
+### Example: Minimal Godot Plugin
+
+```typescript
+export default {
+ name: 'godot',
+ cacheFolders: ['.godot/imported', '.godot/shader_cache'],
+};
+```
+
+### Example: Engine with License Cleanup
+
+```typescript
+export default {
+ name: 'my-engine',
+ cacheFolders: ['Cache', 'Intermediate'],
+ preStopCommand: '/opt/my-engine/return-license.sh',
+};
+```
+
+## What the Plugin Controls
+
+The engine plugin **only** controls orchestrator-level behavior that varies by engine:
+
+| Behavior | Controlled by plugin | Notes |
+| ---------------------- | -------------------- | ----------------------------------------------------- |
+| Cache folders | Yes | Which project folders to persist between builds |
+| Container preStop hook | Yes | Shell command run on K8s container shutdown |
+| Docker image | No | Passed by the caller via `customImage` or `baseImage` |
+| Build scripts | No | Owned by the builder action (e.g. unity-builder) |
+| Version detection | No | Handled by the caller or builder action |
+| License activation | No | Handled by the builder action's entrypoint |
+
+This keeps plugins minimal. A complete engine plugin is typically 3-5 lines of config.
+
+## Programmatic Usage
+
+If you're building a custom integration, you can use the engine plugin API directly:
+
+```typescript
+import { setEngine, getEngine, loadEngineFromModule } from '@game-ci/orchestrator';
+
+// Load from an NPM package
+const plugin = loadEngineFromModule('@game-ci/godot-engine');
+setEngine(plugin);
+
+// Or set inline
+setEngine({
+ name: 'godot',
+ cacheFolders: ['.godot/imported'],
+});
+
+// Check current engine
+console.log(getEngine().name); // 'godot'
+```
diff --git a/docs/03-github-orchestrator/07-advanced-topics/19-unity-accelerator.mdx b/docs/03-github-orchestrator/07-advanced-topics/19-unity-accelerator.mdx
new file mode 100644
index 00000000..5d0a445d
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/19-unity-accelerator.mdx
@@ -0,0 +1,246 @@
+---
+sidebar_position: 19
+---
+
+# Unity Accelerator
+
+Unity Accelerator is a caching proxy that stores imported asset data. When Unity reimports an asset
+it has seen before, the Accelerator serves the cached result instead of recomputing it. This
+dramatically reduces import times for large projects — especially on cold builds where the Library
+folder is empty or missing.
+
+Orchestrator does not ship a built-in Accelerator integration, but the container hook system makes
+it straightforward to add one. There are two approaches depending on your infrastructure.
+
+## Architecture Options
+
+```mermaid
+flowchart TB
+ subgraph "Option A: Sidecar (per-build)"
+ A1[before hook] -->|start accelerator| A2[Unity Accelerator]
+ A2 --> A3[Unity Build]
+ A3 -->|after hook: upload cache| A4[S3]
+ end
+
+ subgraph "Option B: Persistent (shared)"
+ B1[Unity Build] -->|connects to| B2[Accelerator on EC2/ECS]
+ B3[Unity Build 2] -->|connects to| B2
+ end
+```
+
+| Approach | Pros | Cons |
+| -------------- | -------------------------------------------- | --------------------------------------- |
+| **Sidecar** | No infra to manage, works on any provider | Must persist cache to S3 between builds |
+| **Persistent** | Instant cache hits, shared across all builds | Requires always-on instance in same VPC |
+
+## Option A: Sidecar Accelerator (per-build)
+
+Run the Accelerator inside the build container using `before` / `after` hooks. The accelerator
+starts before Unity opens, and its cache is uploaded to S3 after the build completes.
+
+### 1. Create the hook files
+
+Create two files in your repository under `game-ci/container-hooks/`:
+
+**`game-ci/container-hooks/accelerator-start.yaml`**
+
+```yaml
+- name: accelerator-start
+ image: ubuntu
+ hook: before
+ commands: |
+ # Download Unity Accelerator
+ apt-get update -qq && apt-get install -y -qq curl tar > /dev/null 2>&1
+ ACCELERATOR_VERSION="${ACCELERATOR_VERSION:-1.0.1252+g42e1273}"
+ curl -fsSL "https://accelerator.cloud.unity3d.com/api/v1/downloads/unity-accelerator-linux-x86_64-${ACCELERATOR_VERSION}.tar.gz" \
+ -o /tmp/accelerator.tar.gz
+ mkdir -p /opt/unity-accelerator
+ tar -xzf /tmp/accelerator.tar.gz -C /opt/unity-accelerator --strip-components=1
+
+ # Restore accelerator cache from S3 if available
+ if command -v aws > /dev/null 2>&1 && [ -n "$AWS_ACCESS_KEY_ID" ]; then
+ ACCEL_CACHE_PATH="s3://${AWS_STACK_NAME}/orchestrator-cache/${CACHE_KEY}/accelerator"
+ aws s3 cp --recursive "$ACCEL_CACHE_PATH" /data/accelerator-cache/ 2>/dev/null || true
+ fi
+
+ # Start accelerator in background with persistent cache directory
+ /opt/unity-accelerator/unity-accelerator \
+ --persist-dir /data/accelerator-cache \
+ --log-stdout \
+ --listen-port 10080 &
+
+ # Wait for accelerator to be ready
+ for i in $(seq 1 10); do
+ curl -sf http://127.0.0.1:10080/api/health && break
+ sleep 1
+ done
+
+ # Export endpoint so Unity discovers it
+ echo "Unity Accelerator started on 127.0.0.1:10080"
+```
+
+**`game-ci/container-hooks/accelerator-upload.yaml`**
+
+```yaml
+- name: accelerator-upload
+ image: amazon/aws-cli
+ hook: after
+ commands: |
+ # Upload accelerator cache to S3 for next build
+ if [ -d "/data/accelerator-cache" ]; then
+ ACCEL_CACHE_PATH="s3://${AWS_STACK_NAME}/orchestrator-cache/${CACHE_KEY}/accelerator"
+ aws s3 cp --recursive /data/accelerator-cache/ "$ACCEL_CACHE_PATH" || true
+ echo "Accelerator cache uploaded to $ACCEL_CACHE_PATH"
+ fi
+ secrets:
+ - name: AWS_ACCESS_KEY_ID
+ value: ${process.env.AWS_ACCESS_KEY_ID || ``}
+ - name: AWS_SECRET_ACCESS_KEY
+ value: ${process.env.AWS_SECRET_ACCESS_KEY || ``}
+ - name: AWS_DEFAULT_REGION
+ value: ${process.env.AWS_REGION || ``}
+```
+
+### 2. Configure the workflow
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ env:
+ UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
+ UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
+ UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
+ UNITY_ACCELERATOR_ENDPOINT: '127.0.0.1:10080'
+ with:
+ providerStrategy: aws
+ awsStackName: ${{ secrets.AWS_STACK_NAME }}
+ targetPlatform: StandaloneLinux64
+ containerHookFiles: accelerator-start,aws-s3-upload-build,accelerator-upload
+ containerMemory: 16384
+ containerCpu: 4096
+```
+
+### 3. How it works
+
+1. **`accelerator-start`** (before hook) downloads, restores cache from S3, and starts the
+ accelerator on port 10080
+2. Unity reads `UNITY_ACCELERATOR_ENDPOINT` and routes all import requests through the local
+ accelerator
+3. First build: accelerator has an empty cache, imports run normally but results are cached
+4. **`accelerator-upload`** (after hook) pushes the accelerator cache to S3
+5. Next build: accelerator cache is restored from S3, previously imported assets resolve instantly
+
+### Performance characteristics
+
+| Build | Library behaviour | Accelerator behaviour |
+| ----------------------------- | --------------------------------- | -------------------------------------------------- |
+| First (cold) | Full reimport | Cache miss — stores results |
+| Second onward | Partial reimport (changed assets) | Cache hit — serves stored results |
+| After OOM / interrupted build | Library lost | Accelerator cache survives (separate from Library) |
+
+The key advantage over Library caching alone: even if a build is OOM-killed and the Library cache is
+lost, the accelerator cache on S3 survives. The next build still benefits from cached imports.
+
+## Option B: Persistent Accelerator (shared instance)
+
+For teams running frequent builds, a persistent accelerator instance provides instant cache hits
+without the S3 upload/download cycle on every build.
+
+### 1. Deploy the accelerator
+
+Run Unity Accelerator on a small EC2 instance or ECS service in the same VPC as your Fargate tasks:
+
+```bash
+# On an EC2 instance in your build VPC
+docker run -d \
+ --name unity-accelerator \
+ -p 10080:10080 \
+ -v /data/accelerator:/persist \
+ --restart unless-stopped \
+ unitytechnologies/unity-accelerator:latest
+```
+
+Ensure the security group allows inbound TCP 10080 from your Fargate task security group.
+
+### 2. Configure the workflow
+
+Pass the accelerator's private IP or DNS name as an environment variable:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ env:
+ UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
+ UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
+ UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
+ UNITY_ACCELERATOR_ENDPOINT: '10.0.1.50:10080'
+ with:
+ providerStrategy: aws
+ awsStackName: ${{ secrets.AWS_STACK_NAME }}
+ targetPlatform: StandaloneLinux64
+ containerHookFiles: aws-s3-upload-build
+ containerMemory: 16384
+ containerCpu: 4096
+```
+
+No hooks required — Unity connects directly to the persistent accelerator.
+
+### 3. Considerations
+
+- **Cost**: A `t3.medium` instance (~$30/month) is sufficient for most teams
+- **Storage**: Accelerator cache grows with your asset count; use gp3 EBS with enough capacity
+- **Multi-region**: Deploy one accelerator per region where your Fargate tasks run
+- **Cache invalidation**: The accelerator handles this automatically based on asset hashes
+
+## Combining with Library caching
+
+The accelerator and Library caching are complementary:
+
+- **Library cache** stores the fully-imported asset database — avoids reimport entirely
+- **Accelerator** caches individual import results — speeds up reimports when Library is missing
+
+For maximum performance, use both:
+
+```yaml
+containerHookFiles: accelerator-start,aws-s3-pull-cache,aws-s3-upload-build,aws-s3-upload-cache,accelerator-upload
+```
+
+This gives you:
+
+1. Library cache restored from S3 (fast path — no reimport needed)
+2. Accelerator as safety net (if Library is stale or missing, imports still resolve from cache)
+3. Both caches updated after a successful build
+
+## Troubleshooting
+
+### Unity ignores the accelerator
+
+Verify the environment variable name is exactly `UNITY_ACCELERATOR_ENDPOINT` (not a URL — no
+`http://` prefix). The value should be `host:port` format:
+
+```
+UNITY_ACCELERATOR_ENDPOINT=127.0.0.1:10080 # correct
+UNITY_ACCELERATOR_ENDPOINT=http://127.0.0.1:10080 # wrong
+```
+
+### Accelerator not reachable from Fargate
+
+For persistent accelerator setups, ensure:
+
+- The accelerator EC2/ECS instance is in the same VPC as your Fargate tasks
+- The accelerator's security group allows inbound on port 10080 from the Fargate security group
+- The Fargate task's security group allows outbound to port 10080
+
+### Cache not persisting between builds (sidecar)
+
+Check that the `accelerator-upload` hook runs after the build. Verify S3 permissions allow writing
+to the cache path. Check the build logs for upload errors.
+
+### Large accelerator cache slowing builds
+
+The accelerator cache grows over time. Set a maximum cache size in the accelerator config:
+
+```bash
+/opt/unity-accelerator/unity-accelerator \
+ --persist-dir /data/accelerator-cache \
+ --cache-max-size 10g \
+ --listen-port 10080 &
+```
diff --git a/docs/03-github-orchestrator/07-advanced-topics/20-cache-checkpointing.mdx b/docs/03-github-orchestrator/07-advanced-topics/20-cache-checkpointing.mdx
new file mode 100644
index 00000000..4b911b36
--- /dev/null
+++ b/docs/03-github-orchestrator/07-advanced-topics/20-cache-checkpointing.mdx
@@ -0,0 +1,257 @@
+---
+sidebar_position: 20
+---
+
+# Cache Checkpointing & Survival
+
+For large Unity projects where the initial Library import exceeds available build time (e.g. the
+6-hour GitHub Actions limit), standard caching fails — the cache is only saved after a successful
+build, so interrupted builds lose all progress and start from zero every time.
+
+Cache checkpointing solves this by saving Library state periodically **during** the build and on
+failure, so each attempt makes forward progress.
+
+## The Problem
+
+```mermaid
+flowchart LR
+ A[Build starts] --> B[Library import: 8 hours]
+ B -->|6h timeout| C[Task killed]
+ C --> D[Cache lost — no save happened]
+ D -->|Next build| A
+```
+
+Without checkpointing, a project that needs 8 hours of import time will **never** succeed on a
+6-hour runner — it's stuck in an infinite loop of starting from scratch.
+
+## Cache Checkpointing
+
+Set `cacheCheckpointInterval` to save the Library folder at regular intervals during the build:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ cacheCheckpointInterval: 30
+ containerMemory: 16384
+```
+
+This starts a background process that tars the Library folder every 30 minutes. If the build is
+interrupted at any point, the latest checkpoint is available for the S3 upload hook to push.
+
+```mermaid
+flowchart LR
+ A[Build starts] --> B[Import begins]
+ B -->|30 min| C[Checkpoint 1 saved]
+ C -->|30 min| D[Checkpoint 2 saved]
+ D -->|6h timeout| E[Task killed]
+ E --> F[Checkpoint 2 uploaded to S3]
+ F -->|Next build| G[Restore from checkpoint 2]
+ G --> H[Continue importing from 5h mark]
+```
+
+### How it works
+
+1. A background shell process runs alongside Unity
+2. Every N minutes, it tars the Library folder to `/data/cache/$CACHE_KEY/Library/`
+3. Only the latest 2 checkpoints are kept on disk (older ones deleted to save space)
+4. If the task is killed, the `aws-s3-upload-cache` hook uploads whatever checkpoints exist
+5. Next build restores from the latest checkpoint — Unity only reimports what changed since then
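+
+The checkpoint step itself (steps 2-3 above) can be sketched as follows (a simplified stand-in for
+the background process; function and path names are illustrative):
+
+```python
+import glob
+import os
+import tarfile
+import time
+
+def save_checkpoint(library_dir: str, cache_dir: str, keep: int = 2) -> str:
+    """Tar the Library folder into cache_dir, then prune so only the
+    `keep` most recent checkpoints remain on disk."""
+    path = os.path.join(cache_dir, f"checkpoint-{time.time_ns()}.tar")
+    with tarfile.open(path, "w") as tar:
+        tar.add(library_dir, arcname="Library")
+    # Fixed-width nanosecond timestamps sort lexicographically, oldest first.
+    checkpoints = sorted(glob.glob(os.path.join(cache_dir, "checkpoint-*.tar")))
+    for old in checkpoints[:-keep]:
+        os.remove(old)
+    return path
+```
+
+In the real service this runs on a timer alongside Unity; the upload hook then pushes whatever the
+latest checkpoint is.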
+
+### Choosing an interval
+
+| Project size | Recommended interval | Rationale |
+| ---------------- | -------------------- | ----------------------------------------------- |
+| < 10 GB Library | Not needed | Build likely completes within timeout |
+| 10-30 GB Library | 30 minutes | Balance between I/O overhead and progress saved |
+| 30-80 GB Library | 60 minutes | Large tars take time; less frequent = less I/O |
+| > 80 GB Library | 90 minutes | Use with retained workspaces instead |
+
+## Save on Failure
+
+For builds that might OOM or crash (rather than timeout), `cacheSaveOnFailure` installs a shell trap
+that captures partial Library state on any non-zero exit:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ cacheSaveOnFailure: true
+ containerMemory: 8192
+```
+
+When the build process exits with a non-zero code (OOM, crash, assertion failure), the trap:
+
+1. Checks if a Library folder with content exists
+2. Tars it to the cache directory as `recovery-partial.tar[.lz4]`
+3. The upload hook then pushes it to S3
+
+Next build restores from this partial cache. Unity skips assets that were already imported.
+
+### When to use each
+
+| Scenario | Use `cacheCheckpointInterval` | Use `cacheSaveOnFailure` |
+| ------------------ | -------------------------------- | -------------------------------- |
+| Timeout (6h limit) | Yes — saves progress before kill | May not trigger (SIGKILL) |
+| OOM kill | Checkpoints already saved | Yes — trap catches EXIT |
+| Crash / assertion | Checkpoints already saved | Yes — trap catches non-zero exit |
+| Normal build | Overhead but harmless | No-op (exit 0 skips trap) |
+
+**Recommendation:** Use both together for maximum resilience:
+
+```yaml
+cacheCheckpointInterval: 30
+cacheSaveOnFailure: true
+```
+
+## Cache Retention
+
+Control how long old cache entries persist on S3 with `cacheRetentionDays`:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ providerStrategy: aws
+ cacheRetentionDays: 14
+```
+
+After each successful cache upload, entries older than 14 days are automatically removed from the S3
+bucket. This prevents unbounded cache growth for projects with many branches.
+
+| Setting | Effect |
+| ------------- | ---------------------------------------------- |
+| `0` (default) | Keep cache forever (manual cleanup only) |
+| `7` | One week retention — good for feature branches |
+| `30` | One month — good for main/develop branches |
+| `90` | Three months — good for release branches |
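+
+The cleanup pass amounts to a cutoff on entry age (an illustrative sketch against a local
+directory; the real cleanup targets S3 objects, and the names here are assumptions):
+
+```python
+import os
+import shutil
+import time
+
+def prune_by_age(cache_dir: str, retention_days: int) -> list[str]:
+    """Delete cache entries older than the retention window.
+    A value of 0 keeps everything, matching the default above."""
+    if retention_days <= 0:
+        return []
+    cutoff = time.time() - retention_days * 86_400
+    removed = []
+    for name in os.listdir(cache_dir):
+        path = os.path.join(cache_dir, name)
+        if os.path.getmtime(path) < cutoff:
+            shutil.rmtree(path, ignore_errors=True)
+            removed.append(path)
+    return removed
+```
+
+Because the pass runs after each successful upload, stale entries disappear without any separate
+maintenance job.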
+
+### Per-branch retention
+
+Use different `cacheRetentionDays` values per workflow trigger:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ with:
+ cacheRetentionDays: ${{ github.ref == 'refs/heads/main' && '90' || '14' }}
+```
+
+## Combining with Unity Accelerator
+
+Cache checkpointing and [Unity Accelerator](unity-accelerator) are complementary:
+
+- **Checkpointing** saves the entire Library folder state periodically
+- **Accelerator** caches individual asset import results
+
+Together, they provide the fastest recovery from interrupted builds:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+ env:
+ UNITY_ACCELERATOR_ENDPOINT: '127.0.0.1:10080'
+ with:
+ providerStrategy: aws
+ containerHookFiles: accelerator-start,aws-s3-upload-build,aws-s3-upload-cache,accelerator-upload
+ cacheCheckpointInterval: 30
+ cacheSaveOnFailure: true
+ cacheRetentionDays: 30
+ containerMemory: 16384
+```
+
+This gives you:
+
+1. **Checkpoint every 30 min** — progress saved even on timeout
+2. **Failure trap** — partial save on OOM/crash
+3. **Accelerator** — individual imports cached and survive independently
+4. **Retention** — old entries cleaned after 30 days
+
+## Inputs Reference
+
+| Input | Type | Default | Description |
+| ------------------------- | ------- | ------------------ | ---------------------------------------- |
+| `cacheCheckpointInterval` | number | `0` (disabled) | Minutes between Library checkpoints |
+| `cacheSaveOnFailure` | boolean | `false` | Save partial cache on build failure |
+| `cacheRetentionDays` | number | `0` (keep forever) | Auto-delete S3 entries older than N days |
+| `skipCache` | boolean | `false` | Skip cache restore entirely |
+| `useCompressionStrategy` | boolean | `false` | Use LZ4 compression for cache archives |
+| `cacheKey` | string | branch name | Override cache key for isolation |
+
+## Pre-warming the Cache
+
+For projects where even the first checkpoint cycle takes too long, you can pre-warm the cache by
+uploading a Library folder from your local machine. This skips the cold-start entirely — the first
+cloud build pulls the warm cache and only reimports assets that changed since.
+
+### From a local Unity project
+
+```bash
+# 1. Tar your local Library folder
+cd /path/to/your/unity-project
+tar -cf library-warm.tar Library
+
+# 2. Upload to S3 at the path the orchestrator expects
+aws s3 cp library-warm.tar \
+  s3://<stack-name>/orchestrator-cache/<cache-key>/Library/library-warm.tar
+```
+
+Where:
+
+- `<stack-name>` is the `awsStackName` input from your workflow
+- `<cache-key>` is either your branch name (default) or the `cacheKey` input value
+
+### Using the CLI
+
+If you have the orchestrator CLI installed, you can push directly:
+
+```bash
+game-ci cache-push \
+ --cachePushFrom ./Library \
+ --cachePushTo /tmp/cache-upload/Library \
+ --artifactName library-warm
+
+# Then upload the generated tar to S3
+aws s3 cp /tmp/cache-upload/Library/ \
+  s3://<stack-name>/orchestrator-cache/<cache-key>/Library/ \
+ --recursive
+```
+
+### From a self-hosted runner or EC2 instance
+
+Run the full build once on a machine without time limits (self-hosted runner, EC2 instance, or local
+Docker). The build completes, cache is pushed to S3, and all subsequent GitHub Actions builds get a
+warm cache hit:
+
+```bash
+game-ci build \
+ --providerStrategy local-docker \
+ --targetPlatform StandaloneLinux64 \
+ --cacheKey main \
+ --awsStackName my-stack \
+ --containerHookFiles aws-s3-upload-cache
+```
+
+After this one-time run, all cloud builds on the `main` branch restore from the warm cache.
+
+## Troubleshooting
+
+### Checkpoints not being saved
+
+- Verify `cacheCheckpointInterval` is set to a value > 0
+- Check container logs for `[Cache Checkpoint]` messages
+- Ensure the Library folder exists at `$GITHUB_WORKSPACE/Library` (Unity must have started
+ importing)
+- Check disk space — checkpoints are skipped if disk is full
+
+### Partial cache not restoring
+
+- The upload hook must run after the task is killed. For OOM kills on AWS Fargate, the task stops
+ and the hook container runs separately. For hard SIGKILL, the checkpoint files must already exist
+ in `/data/cache/` from a previous checkpoint interval.
+- Verify S3 contains checkpoint files:
+  `aws s3 ls s3://<stack-name>/orchestrator-cache/<cache-key>/Library/`
+
+### Cache growing too large
+
+- Set `cacheRetentionDays` to automatically purge old entries
+- Use `useCompressionStrategy: true` to compress archives with LZ4 (~50% size reduction)
+- Use branch-specific `cacheKey` values to isolate feature branch caches from main
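+
+A sketch combining all three mitigations, using the GitHub `ref_name` context to scope the cache
+key per branch:
+
+```yaml
+- uses: game-ci/unity-builder@v4
+  with:
+    providerStrategy: aws
+    cacheRetentionDays: 14
+    useCompressionStrategy: true
+    cacheKey: ${{ github.ref_name }}
+```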
diff --git a/docs/03-github-orchestrator/06-advanced-topics/_category_.yaml b/docs/03-github-orchestrator/07-advanced-topics/_category_.yaml
similarity index 81%
rename from docs/03-github-orchestrator/06-advanced-topics/_category_.yaml
rename to docs/03-github-orchestrator/07-advanced-topics/_category_.yaml
index da411968..24f7ad99 100644
--- a/docs/03-github-orchestrator/06-advanced-topics/_category_.yaml
+++ b/docs/03-github-orchestrator/07-advanced-topics/_category_.yaml
@@ -1,5 +1,5 @@
---
-position: 5.0
+position: 7.0
label: Advanced topics
collapsible: true
collapsed: true
diff --git a/docs/03-github-orchestrator/08-cli/01-getting-started.mdx b/docs/03-github-orchestrator/08-cli/01-getting-started.mdx
new file mode 100644
index 00000000..c9b7c22e
--- /dev/null
+++ b/docs/03-github-orchestrator/08-cli/01-getting-started.mdx
@@ -0,0 +1,143 @@
+---
+sidebar_position: 1
+---
+
+# Getting Started with the CLI
+
+The `game-ci` CLI lets you run Unity builds, activate licenses, and manage caches directly from your
+terminal - no GitHub Actions or CI platform required.
+
+## Installation
+
+### Linux / macOS
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/game-ci/orchestrator/main/install.sh | sh
+```
+
+### Windows (PowerShell)
+
+```powershell
+irm https://raw.githubusercontent.com/game-ci/orchestrator/main/install.ps1 | iex
+```
+
+### Options
+
+| Environment variable | Description |
+| -------------------- | ----------------------------------------------------------- |
+| `GAME_CI_VERSION` | Pin a specific release (e.g. `v2.0.0`). Defaults to latest. |
+| `GAME_CI_INSTALL` | Override install directory. Defaults to `~/.game-ci/bin`. |
+
+```bash
+# Example: install a specific version
+GAME_CI_VERSION=v2.0.0 curl -fsSL https://raw.githubusercontent.com/game-ci/orchestrator/main/install.sh | sh
+```
+
+### Manual Download
+
+Pre-built binaries for every platform (Linux x64/arm64, macOS x64/arm64, Windows x64) are available
+on the [GitHub Releases](https://github.com/game-ci/orchestrator/releases) page. Download the binary
+for your OS and architecture, make it executable, and place it on your `PATH`.
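+
+For example, on Linux x64 — the asset filename below is illustrative, so check the release assets
+for the exact name for your platform:
+
+```bash
+# Download, make executable, and place in the default install directory
+curl -fsSL -o game-ci \
+  https://github.com/game-ci/orchestrator/releases/latest/download/game-ci-linux-x64
+chmod +x game-ci
+mv game-ci ~/.game-ci/bin/   # any directory on your PATH works
+```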
+
+## Unity License Activation
+
+Before building, you need a Unity license. Set one of the following environment variables:
+
+| Variable | Description |
+| --------------- | ----------------------------------------------------------- |
+| `UNITY_SERIAL` | Unity serial key (Professional/Plus licenses) |
+| `UNITY_LICENSE` | Contents of a Unity `.ulf` license file (base64 or raw XML) |
+
+You can verify your license is detected by running:
+
+```bash
+game-ci activate
+```
+
+This checks for valid license credentials and reports whether activation will succeed. If using a
+floating license server, pass the `--unity-licensing-server` flag instead:
+
+```bash
+game-ci activate --unity-licensing-server http://license-server:8080
+```
+
+### Environment Variables
+
+Set these in your shell profile or pass them inline:
+
+```bash
+export UNITY_SERIAL="XX-XXXX-XXXX-XXXX-XXXX-XXXX"
+```
+
+Or for personal licenses:
+
+```bash
+export UNITY_LICENSE="$(cat ~/Unity_v2022.x.ulf)"
+```
+
+> **AWS SSO / credential chain:** If you don't set `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`, the
+> orchestrator falls back to the AWS SDK default credential chain — SSO profiles, IAM roles, ECS
+> task roles, etc. For SSO: `aws sso login --profile my-profile` then set `AWS_PROFILE=my-profile`.
+
+## Your First Build
+
+Run a build by specifying your target platform. The Unity version is auto-detected from your
+project's `ProjectSettings/ProjectVersion.txt` by default:
+
+```bash
+game-ci build \
+ --target-platform StandaloneLinux64
+```
+
+You can also specify the Unity version explicitly:
+
+```bash
+game-ci build \
+ --unity-version 2022.3.56f1 \
+ --target-platform StandaloneLinux64
+```
+
+The CLI will:
+
+1. Pull the matching Unity Docker image
+2. Mount your project into a container
+3. Run the Unity build
+4. Output build artifacts to the `build/` directory
+
+### Specifying a Project Path
+
+If your Unity project is not in the current directory, use `--project-path`:
+
+```bash
+game-ci build \
+ --target-platform StandaloneWindows64 \
+ --project-path ./my-unity-project
+```
+
+### Common Target Platforms
+
+| Platform | Value |
+| ---------------- | --------------------- |
+| Linux (64-bit) | `StandaloneLinux64` |
+| Windows (64-bit) | `StandaloneWindows64` |
+| macOS | `StandaloneOSX` |
+| WebGL | `WebGL` |
+| Android | `Android` |
+| iOS | `iOS` |
+
+## Checking Your Setup
+
+Use the `status` command to verify your environment:
+
+```bash
+game-ci status
+```
+
+This reports whether a Unity project is detected, the Unity version, Library cache status, Docker
+availability, and which license environment variables are set.
+
+## Next Steps
+
+- [Build Command Reference](build-command) - full list of build flags and options
+- [Orchestrate Command](orchestrate-command) - run builds on cloud infrastructure
+- [Other Commands](other-commands) - license activation, cache management, and more
diff --git a/docs/03-github-orchestrator/08-cli/02-build-command.mdx b/docs/03-github-orchestrator/08-cli/02-build-command.mdx
new file mode 100644
index 00000000..6181ad3a
--- /dev/null
+++ b/docs/03-github-orchestrator/08-cli/02-build-command.mdx
@@ -0,0 +1,176 @@
+---
+sidebar_position: 2
+---
+
+# Build Command
+
+The `build` command runs a Unity build inside a Docker container (or natively on macOS). It accepts
+the same parameters as the `game-ci/unity-builder` GitHub Action, translated to CLI flags. The CLI
+is provided by the [`@game-ci/orchestrator`](https://github.com/game-ci/orchestrator) package.
+
+```bash
+game-ci build [options]
+```
+
+## Required Options
+
+| Flag | Description |
+| ------------------- | ------------------------------------------------ |
+| `--target-platform` | Build target platform (e.g. `StandaloneLinux64`) |
+
+## Project Options
+
+| Flag | Default | Description |
+| --------------------- | --------- | ----------------------------------------------------------------------------------------------- |
+| `--unity-version` | `auto` | Unity Editor version to use. Set to `auto` to detect from `ProjectSettings/ProjectVersion.txt`. |
+| `--project-path` | `.` | Path to the Unity project directory |
+| `--build-name` | _(empty)_ | Name of the build output file (no file extension) |
+| `--builds-path` | `build` | Output directory for build artifacts |
+| `--build-method` | _(empty)_ | Custom static C# build method to invoke (e.g. `MyBuild.PerformBuild`) |
+| `--build-profile` | _(empty)_ | Path to the build profile to activate, relative to the project root |
+| `--custom-parameters` | _(empty)_ | Additional parameters appended to the Unity command line |
+
+## Versioning Options
+
+| Flag | Default | Description |
+| -------------- | ---------- | --------------------------------------------------------- |
+| `--versioning` | `Semantic` | Versioning strategy: `Semantic`, `Tag`, `Custom`, `None` |
+| `--version` | _(empty)_ | Explicit version string (used with `--versioning Custom`) |
+
+## Unity Options
+
+| Flag | Default | Description |
+| -------------------------- | --------- | --------------------------------------------------------------------------------------------------- |
+| `--manual-exit` | `false` | Suppresses the `-quit` flag. Use when your build method calls `EditorApplication.Exit(0)` manually. |
+| `--enable-gpu` | `false` | Launches Unity without specifying `-nographics` |
+| `--skip-activation` | `false` | Skip Unity license activation/deactivation |
+| `--unity-licensing-server` | _(empty)_ | Unity floating license server address |
+
+## Engine Options
+
+| Flag | Default | Description |
+| ----------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------ |
+| `--engine` | `unity` | Game engine name (`unity`, `godot`, `unreal`, etc.) |
+| `--engine-plugin` | _(empty)_ | Engine plugin source: NPM package, `cli:`, or `docker:`. See [Engine Plugins](../advanced-topics/engine-plugins). |
+
+### Non-Unity Engine Example
+
+```bash
+game-ci build \
+ --engine godot \
+ --engine-plugin @game-ci/godot-engine \
+ --target-platform linux \
+ --custom-image my-godot-image:4.2 \
+ --provider-strategy aws
+```
+
+## Custom Build Parameters
+
+Pass arbitrary parameters to the Unity build process:
+
+```bash
+game-ci build \
+ --target-platform StandaloneLinux64 \
+ --custom-parameters "-myFlag -myKey myValue"
+```
+
+The `--custom-parameters` string is appended to the Unity command line arguments.
+
+## Android Options
+
+| Flag | Default | Description |
+| ------------------------------ | ---------------- | ---------------------------------------------------------------------------------------- |
+| `--android-version-code` | _(empty)_ | Android `versionCode` override |
+| `--android-export-type` | `androidPackage` | Export type: `androidPackage` (APK), `androidAppBundle` (AAB), or `androidStudioProject` |
+| `--android-keystore-name` | _(empty)_ | Filename of the keystore |
+| `--android-keystore-base64` | _(empty)_ | Base64-encoded keystore file contents |
+| `--android-keystore-pass` | _(empty)_ | Keystore password |
+| `--android-keyalias-name` | _(empty)_ | Key alias name within the keystore |
+| `--android-keyalias-pass` | _(empty)_ | Key alias password |
+| `--android-target-sdk-version` | _(empty)_ | Target Android SDK version (e.g. `AndroidApiLevel31`) |
+| `--android-symbol-type` | `none` | Android symbol type to export: `none`, `public`, or `debugging` |
+
+### Android Build Example
+
+```bash
+game-ci build \
+ --target-platform Android \
+ --android-keystore-base64 "$(base64 -w 0 release.keystore)" \
+ --android-keystore-pass "$KEYSTORE_PASS" \
+ --android-keyalias-name "release" \
+ --android-keyalias-pass "$KEY_PASS" \
+ --android-target-sdk-version AndroidApiLevel31 \
+ --android-export-type androidAppBundle
+```
+
+## Docker Options
+
+Control the Docker container used for the build:
+
+| Flag | Default | Description |
+| ------------------------- | ------------------- | ----------------------------------------------------------------------------------------------- |
+| `--custom-image` | _(empty)_ | Override the Docker image (defaults to `unityci/editor` with the detected version and platform) |
+| `--docker-cpu-limit` | _(empty)_ | CPU limit for the container (e.g. `4` for 4 cores) |
+| `--docker-memory-limit` | _(empty)_ | Memory limit for the container (e.g. `8g` for 8 GB) |
+| `--docker-workspace-path` | `/github/workspace` | Path where the workspace is mounted inside the container |
+| `--run-as-host-user` | `false` | Run as a user that matches the host system |
+| `--chown-files-to` | _(empty)_ | User and optionally group to give ownership of build artifacts (e.g. `1000:1000`) |
+
+### Custom Docker Image Example
+
+```bash
+game-ci build \
+ --target-platform StandaloneLinux64 \
+ --custom-image my-registry.com/unity-editor:2022.3.56f1-linux
+```
+
+## Authentication Options
+
+| Flag | Default | Description |
+| --------------------- | --------- | ---------------------------------------------------------- |
+| `--ssh-agent` | _(empty)_ | SSH Agent path to forward to the container |
+| `--git-private-token` | _(empty)_ | GitHub private token for pulling from private repositories |
+
+## Provider Strategy
+
+By default, the `build` command runs locally. You can redirect execution to a remote orchestrator
+provider:
+
+| Flag | Default | Description |
+| --------------------- | ------- | -------------------------------------------- |
+| `--provider-strategy` | `local` | Execution strategy: `local`, `k8s`, or `aws` |
+
+When set to anything other than `local`, the build is handed off to the orchestrator. See
+[Orchestrate Command](orchestrate-command) for cloud-specific options.
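+
+For example, the following runs the same build on a Kubernetes cluster instead of in local Docker:
+
+```bash
+game-ci build \
+  --target-platform StandaloneLinux64 \
+  --provider-strategy k8s
+```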
+
+## Output
+
+Build artifacts are written to the path specified by `--builds-path` (default `build/`). The
+directory structure is:
+
+```
+build/
+  <target-platform>/
+    <build-name>
+```
+
+## Full Example
+
+```bash
+game-ci build \
+ --target-platform StandaloneLinux64 \
+ --unity-version 2022.3.56f1 \
+ --project-path ./my-project \
+ --build-name MyGame \
+ --builds-path ./dist \
+ --versioning Semantic \
+ --custom-parameters "-enableAnalytics" \
+ --docker-cpu-limit 4 \
+ --docker-memory-limit 8g
+```
+
+## See Also
+
+- [Getting Started](getting-started) - installation and first build
+- [Orchestrate Command](orchestrate-command) - run builds on cloud infrastructure
+- [API Reference](../api-reference) - full parameter reference for the GitHub Action
diff --git a/docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx b/docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx
new file mode 100644
index 00000000..91b6d0fc
--- /dev/null
+++ b/docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx
@@ -0,0 +1,149 @@
+---
+sidebar_position: 3
+---
+
+# Orchestrate Command
+
+The `orchestrate` command runs Unity builds on cloud infrastructure instead of the local machine. It
+provisions a remote environment, syncs your project, runs the build, and retrieves artifacts.
+
+```bash
+game-ci orchestrate [options]
+```
+
+This is the CLI equivalent of using `game-ci/unity-builder` with a `providerStrategy` in GitHub
+Actions.
+
+## Required Options
+
+| Flag | Description |
+| ------------------- | ------------------------------------------------ |
+| `--target-platform` | Build target platform (e.g. `StandaloneLinux64`) |
+
+## Provider Selection
+
+Choose where your build runs with the `--provider-strategy` flag:
+
+| Provider | Value | Description |
+| ------------ | -------------- | -------------------------------------- |
+| AWS Fargate | `aws` | Serverless containers on AWS (default) |
+| Kubernetes | `k8s` | Run on any Kubernetes cluster |
+| Local Docker | `local-docker` | Run in a local Docker container |
+| Local System | `local-system` | Run directly on the local system |
+
+```bash
+game-ci orchestrate \
+ --target-platform StandaloneLinux64 \
+ --provider-strategy aws
+```
+
+## Build Options
+
+| Flag | Default | Description |
+| --------------------- | --------- | ------------------------------------------------------------------ |
+| `--unity-version` | `auto` | Unity Editor version to use. Set to `auto` to detect from project. |
+| `--project-path` | `.` | Path to the Unity project directory |
+| `--build-name` | _(empty)_ | Name of the build |
+| `--builds-path` | `build` | Path where build artifacts are stored |
+| `--build-method` | _(empty)_ | Custom static C# build method to invoke |
+| `--custom-parameters` | _(empty)_ | Additional parameters for the Unity build |
+| `--versioning` | `None` | Versioning strategy: `None`, `Semantic`, `Tag`, `Custom` |
+
+## AWS Options
+
+| Flag | Default | Description |
+| -------------------- | --------- | ---------------------------------------------- |
+| `--aws-stack-name` | `game-ci` | CloudFormation stack name for shared resources |
+| `--container-cpu` | `1024` | CPU units for the Fargate task (1024 = 1 vCPU) |
+| `--container-memory` | `3072` | Memory in MB for the Fargate task |
+
+AWS credentials are read from the standard AWS environment variables or credential files:
+
+```bash
+export AWS_ACCESS_KEY_ID="your-key"
+export AWS_SECRET_ACCESS_KEY="your-secret"
+export AWS_DEFAULT_REGION="us-east-1"
+
+game-ci orchestrate \
+ --target-platform StandaloneLinux64 \
+ --provider-strategy aws \
+ --container-cpu 2048 \
+ --container-memory 8192
+```
+
+## Kubernetes Options
+
+| Flag | Default | Description |
+| -------------------- | --------- | --------------------------------------- |
+| `--kube-config` | _(empty)_ | Base64-encoded kubeconfig file contents |
+| `--kube-volume` | _(empty)_ | Name of the persistent volume to use |
+| `--kube-volume-size` | `5Gi` | Size of the persistent volume |
+
+```bash
+game-ci orchestrate \
+ --target-platform StandaloneLinux64 \
+ --provider-strategy k8s \
+ --kube-config "$(base64 -w 0 ~/.kube/config)" \
+ --kube-volume-size 25Gi
+```
+
+## Cache Options
+
+| Flag | Default | Description |
+| ------------- | --------- | ------------------------------- |
+| `--cache-key` | _(empty)_ | Key used to scope cache entries |
+
+## Workspace Options
+
+| Flag | Default | Description |
+| --------------- | ------- | ---------------------------------------- |
+| `--clone-depth` | `50` | Git clone depth (use `0` for full clone) |
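+
+A full clone can be needed when the versioning strategy derives the version from git history, such
+as `Tag` (a sketch):
+
+```bash
+game-ci orchestrate \
+  --target-platform StandaloneLinux64 \
+  --versioning Tag \
+  --clone-depth 0
+```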
+
+## Execution Options
+
+| Flag | Default | Description |
+| --------------------- | ------- | ------------------------------------------------------ |
+| `--watch-to-end` | `true` | Follow build logs until completion |
+| `--allow-dirty-build` | `false` | Allow builds from a dirty working tree (uncommitted changes) |
+| `--skip-activation` | `false` | Skip Unity license activation/deactivation |
+
+## Git Authentication
+
+The remote environment needs access to your repository. Provide a token with repo access:
+
+```bash
+game-ci orchestrate \
+ --target-platform StandaloneLinux64 \
+ --provider-strategy aws \
+ --git-private-token "$GITHUB_TOKEN"
+```
+
+| Flag | Default | Description |
+| --------------------- | --------- | --------------------------------------------------- |
+| `--git-private-token` | _(empty)_ | GitHub access token with repository read permission |
+
+If not provided, the CLI attempts to read the token from the `GITHUB_TOKEN` environment variable.
+
+## Full Example
+
+```bash
+export AWS_ACCESS_KEY_ID="AKIA..."
+export AWS_SECRET_ACCESS_KEY="..."
+export AWS_DEFAULT_REGION="us-east-1"
+
+game-ci orchestrate \
+ --target-platform StandaloneLinux64 \
+ --provider-strategy aws \
+ --git-private-token "$GITHUB_TOKEN" \
+ --container-cpu 2048 \
+ --container-memory 8192 \
+ --cache-key "main" \
+ --builds-path ./dist
+```
+
+## See Also
+
+- [Getting Started](getting-started) - installation and first build
+- [Build Command](build-command) - local Docker builds
+- [API Reference](../api-reference) - full parameter reference
+- [Providers](../providers/overview) - provider-specific setup guides
diff --git a/docs/03-github-orchestrator/08-cli/04-other-commands.mdx b/docs/03-github-orchestrator/08-cli/04-other-commands.mdx
new file mode 100644
index 00000000..656bb2af
--- /dev/null
+++ b/docs/03-github-orchestrator/08-cli/04-other-commands.mdx
@@ -0,0 +1,185 @@
+---
+sidebar_position: 4
+---
+
+# Other Commands
+
+Beyond `build` and `orchestrate`, the CLI provides commands for license management, cache
+operations, environment diagnostics, version information, and self-updating.
+
+## activate
+
+Verify and prepare Unity license activation. Checks that valid license credentials are available
+before running builds.
+
+```bash
+game-ci activate
+```
+
+The `activate` command checks for license credentials from environment variables and reports whether
+activation will succeed. License activation itself is handled automatically when a build runs.
+
+### License Methods
+
+Provide credentials via environment variables:
+
+| Variable | Description |
+| --------------- | ----------------------------------------------------- |
+| `UNITY_SERIAL` | Serial key (Professional/Plus licenses) |
+| `UNITY_LICENSE` | Contents of a `.ulf` license file (base64 or raw XML) |
+
+```bash
+# Using a serial key
+UNITY_SERIAL=XX-XXXX-XXXX-XXXX-XXXX-XXXX game-ci activate
+
+# Using a license file
+export UNITY_LICENSE="$(cat ~/Unity_v2022.x.ulf)"
+game-ci activate
+```
+
+### Flags
+
+| Flag | Default | Description |
+| -------------------------- | --------- | ------------------------------------- |
+| `--unity-version` | `auto` | Version of Unity to activate |
+| `--unity-licensing-server` | _(empty)_ | Unity floating license server address |
+
+```bash
+# Using a floating license server
+game-ci activate --unity-licensing-server http://license-server:8080
+```
+
+## cache
+
+Manage build caches. Caches store the Unity Library folder and other intermediate artifacts to speed
+up subsequent builds.
+
+```bash
+game-ci cache <action> [options]
+```
+
+The `<action>` positional argument is required and must be one of: `list`, `restore`, or `clear`.
+
+### cache list
+
+List cache status, including Library folder presence and any cache archives:
+
+```bash
+game-ci cache list
+```
+
+Output includes Library folder path, entry count, last-modified timestamp, and key subdirectory
+status (PackageCache, ScriptAssemblies, ShaderCache, Bee).
+
+### cache restore
+
+Check for available cache archives to restore:
+
+```bash
+game-ci cache restore --cache-dir ./my-cache
+```
+
+### cache clear
+
+Remove cache archive files:
+
+```bash
+game-ci cache clear
+```
+
+### Cache Flags
+
+| Flag | Default | Description |
+| ---------------- | --------- | ------------------------------------------------------------------ |
+| `--cache-dir`    | _(empty)_ | Path to the cache directory (defaults to `<project-path>/Library`) |
+| `--project-path` | `.` | Path to the Unity project |
+
+## status
+
+Display information about the current environment, useful for debugging setup issues:
+
+```bash
+game-ci status
+```
+
+Reports:
+
+- **Project** - detected Unity project path and whether a project was found
+- **Unity Version** - version detected from `ProjectSettings/ProjectVersion.txt`
+- **Library Cache** - whether the Library folder is present and when it was last modified
+- **Build Outputs** - any existing build output directories
+- **Environment** - platform, Node.js version, and which license environment variables are set
+ (`UNITY_SERIAL`, `UNITY_LICENSE`, `UNITY_EMAIL`, `UNITY_PASSWORD` - shown as Set/Not set)
+- **Docker** - whether Docker is available and its version
+
+### Flags
+
+| Flag | Default | Description |
+| ---------------- | ------- | ------------------------- |
+| `--project-path` | `.` | Path to the Unity project |
+
+## version
+
+Print the installed CLI version, Node.js version, and platform:
+
+```bash
+game-ci version
+```
+
+```
+game-ci (@game-ci/orchestrator) v3.0.0
+Node.js v20.5.1
+Platform: win32 x64
+```
+
+## update
+
+Update the `game-ci` CLI to the latest version. Downloads the appropriate binary from the
+[orchestrator GitHub Releases](https://github.com/game-ci/orchestrator/releases) for your platform
+and architecture.
+
+```bash
+game-ci update
+```
+
+### Flags
+
+| Flag | Default | Description |
+| --------------- | --------- | -------------------------------------------------- |
+| `--force`, `-f` | `false` | Force update even if already on the latest version |
+| `--version` | _(empty)_ | Update to a specific version (e.g. `v2.1.0`) |
+
+```bash
+# Update to the latest version
+game-ci update
+
+# Update to a specific version
+game-ci update --version v2.1.0
+
+# Force reinstall of the current version
+game-ci update --force
+```
+
+If running via Node.js (not as a standalone binary), the command will print instructions for
+updating via npm instead.
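+
+Assuming the npm install path, the manual update would look like this (a sketch — the exact
+instructions printed by the command take precedence):
+
+```bash
+npm install -g @game-ci/orchestrator@latest
+```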
+
+## Global Flags
+
+These flags are available on all commands:
+
+| Flag | Description |
+| -------------- | ------------------------- |
+| `--help`, `-h` | Show help for any command |
+
+```bash
+# Get help for any command
+game-ci build --help
+game-ci orchestrate --help
+game-ci cache --help
+```
+
+## See Also
+
+- [Getting Started](getting-started) - installation and first build
+- [Build Command](build-command) - full build reference
+- [Orchestrate Command](orchestrate-command) - cloud builds
diff --git a/docs/03-github-orchestrator/08-cli/_category_.yaml b/docs/03-github-orchestrator/08-cli/_category_.yaml
new file mode 100644
index 00000000..4939b69d
--- /dev/null
+++ b/docs/03-github-orchestrator/08-cli/_category_.yaml
@@ -0,0 +1,4 @@
+position: 8.0
+label: CLI
+collapsible: true
+collapsed: true
diff --git a/docs/09-troubleshooting/common-issues.mdx b/docs/09-troubleshooting/common-issues.mdx
index d6a6779c..6c248808 100644
--- a/docs/09-troubleshooting/common-issues.mdx
+++ b/docs/09-troubleshooting/common-issues.mdx
@@ -306,7 +306,7 @@ Error: Global Illumination requires a graphics device to render albedo.
This error occurs when Unity fails to find a suitable OpenCL device for the GPU Lightmapper. This
can be due to a variety of reasons, such as:
-- The GPU Lightmapper is not supported on the current hardware.
+- The GPU Lightmapper is not supported on the current machine.
- The GPU Lightmapper is not supported on the current operating system.
- The GPU Lightmapper is not supported on the current Unity version.
- The GPU Lightmapper is not supported on the current graphics driver.
@@ -349,6 +349,55 @@ Here’s how to find and change the Lightmapper setting in Unity:
Once the lightmapper is set to `Progressive CPU`, you should commit the changes to your project and
re-run the pipeline to see if the error persists.
+### Container Overrides length must be at most 8192 (AWS)
+
+#### Error
+
+```plaintext
+Container Overrides length must be at most 8192
+```
+
+#### Explanation
+
+AWS ECS/Fargate imposes a hard 8192-byte limit on the `containerOverrides` JSON payload sent when
+starting a task. The orchestrator passes all build environment variables, secrets, and the build
+command in this payload. Workflows with many custom parameters, long file paths, base64-encoded
+values, or large numbers of secrets can exceed this limit.
+
+The main contributors to payload size are:
+
+- **Built-in environment variables** (~60+ variables set by the orchestrator for every build)
+- **Secrets passed as environment variables** (Unity license, serial, email, password, custom
+ secrets)
+- **The build command** (including any command hooks)
+
+#### Solution
+
+1. **Use secret pulling instead of inline secrets.** The orchestrator can fetch secrets at runtime
+ from your cloud provider's secret manager, keeping them out of the container override payload.
+ Set `secretSource` and `pullInputList` to move secrets to a pull-based model:
+
+ ```yaml
+ - uses: game-ci/unity-builder@v4
+ env:
+ pullInputList: UNITY_LICENSE,UNITY_SERIAL,UNITY_EMAIL,UNITY_PASSWORD
+ secretSource: aws-secrets-manager
+ with:
+ providerStrategy: aws
+ targetPlatform: StandaloneLinux64
+ gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
+ ```
+
+ See [Secrets](/docs/github-orchestrator/secrets) for all available secret sources (AWS Secrets
+ Manager, AWS Parameter Store, GCP, Azure Key Vault, HashiCorp Vault).
+
+2. **Shorten custom environment variable values.** Long file paths, base64-encoded content, and
+ verbose custom parameters are common contributors. Where possible, store large values in a secret
+ manager and pull them at runtime.
+
+3. **Reduce the number of custom environment variables.** Each additional `env` variable in your
+ workflow adds to the payload. Only pass what the build actually needs.
+
## General tips
These are tips that are usually applicable to problems you may encounter while working with a game
diff --git a/docs/11-circleci/05-executors.mdx b/docs/11-circleci/05-executors.mdx
index e802e505..bd0086b5 100644
--- a/docs/11-circleci/05-executors.mdx
+++ b/docs/11-circleci/05-executors.mdx
@@ -108,7 +108,7 @@ unsure how to create or configure it, follow
The `macos-runner` is an excellent alternative over [macos](#macos) to build macOS IL2CPP with
faster build times. Its non-ephemeral nature saves times on installing external dependencies after
-the first run. And in addition, you are free to use an agent with hardware exceeding the
+the first run. And in addition, you are free to use an agent with specs exceeding the
[available](https://circleci.com/docs/configuration-reference#macos-execution-environment) for the
web macOS executor.
@@ -372,7 +372,7 @@ unsure how to create or configure it, follow
The `windows-runner` is an excellent alternative over [windows](#windows) to build Windows IL2CPP
with faster build times. Its non-ephemeral nature saves times on installing external dependencies
-after the first run. And in addition, you are free to use an agent with hardware exceeding the
+after the first run. And in addition, you are free to use an agent with specs exceeding the
[available](https://circleci.com/docs/configuration-reference#windows-execution) for the web Windows
executor.
diff --git a/docs/12-self-hosting/01-overview.mdx b/docs/12-self-hosting/01-overview.mdx
index d8b0af61..e93272ac 100644
--- a/docs/12-self-hosting/01-overview.mdx
+++ b/docs/12-self-hosting/01-overview.mdx
@@ -34,8 +34,8 @@ included some resources for you to start your learning journey below.
There are many ways to self-host CI/CD runners, and which one is best for you will depend on your
own situation and constraints. For the purpose of this guide we will make the following assumptions:
-- 💻 User already has their own hardware
-- 💸 Budget for new hardware, software, or services is $0
+- 💻 User already has their own machines
+- 💸 Budget for new machines, software, or services is $0
- 🛠️ FOSS tools should be prioritized where possible
- 📜 We define `Self-Hosting` in this context to refer to a user taking responsibility for the
operating-system level configuration and life-cycle-management of a given compute resource (metal,
@@ -44,7 +44,7 @@ own situation and constraints. For the purpose of this guide we will make the fo
## ⚠️ Security Disclaimer
This guide strives to maintain a balance between convenience and security for the sake of usability.
-The examples included in this guide are intended for use with on-prem hardware without public IP
+The examples included in this guide are intended for use with on-prem machines without public IP
addresses accessible from external networks. Security is a constantly moving target which requires
continuous effort to maintain. Users should conduct their own security review before using the
following techniques on production or public systems.
@@ -52,7 +52,7 @@ following techniques on production or public systems.
## ⚡️ Power Costs
Hosting your own runners also comes with an increase in power consumption. This will vary based on
-the hardware you use and the prices of energy in your area. Below are some useful resources for
+the machines you use and the prices of energy in your area. Below are some useful resources for
discovering the potential energy costs of self-hosting.
- https://outervision.com/power-supply-calculator
@@ -72,7 +72,7 @@ This guide is tested on devices which meet the following requirements:
### Host Creation
"Host Creation" in this context is the process of installing an operating system onto a piece of
-physical hardware, or the creation and configuration of virtualised compute resources.
+physical equipment, or the creation and configuration of virtualised compute resources.
- [Bare-Metal](./03-host-creation/02-bare-metal.mdx)
- [Virtual Machines using Multipass](./03-host-creation/02-multipass.mdx)
diff --git a/docs/12-self-hosting/02-host-types.mdx b/docs/12-self-hosting/02-host-types.mdx
index 43a696c9..e4af49cf 100644
--- a/docs/12-self-hosting/02-host-types.mdx
+++ b/docs/12-self-hosting/02-host-types.mdx
@@ -13,7 +13,7 @@ import Layers012 from '/assets/images/k8s-layers012.drawio.png';
## Bare-Metal
-"Bare Metal" means that your host OS is running directly on a piece of hardware without any
+"Bare Metal" means that your host OS is running directly on a physical machine without any
virtualisation. This reduces the complexity of deployment at the cost of increased time and effort
for re-provisioning the host.
@@ -91,7 +91,7 @@ as many 5,000 per cluster.
Once installed, Kubernetes creates
-[standardised interfaces](https://matt-rickard.com/kubernetes-interfaces) to control the hardware &
+[standardised interfaces](https://matt-rickard.com/kubernetes-interfaces) to control the machine &
software components of the underlying nodes (networking, storage, GPUs, CPU cores etc...) as well as
a distributed key-value store which facilitates communication between all nodes in the cluster.
@@ -104,7 +104,7 @@ a distributed key-value store which facilitates communication between all nodes
-With the underlying hardware abstracted into a generic pool of resources, Kubernetes is then able to
+With the underlying machines abstracted into a generic pool of resources, Kubernetes is then able to
re-compose those assets into isolated environments called "Namespaces" where it deploys
containerised workloads in groups called "Pods". This layer of Kubernetes is very similar to a
typical container host but with many more features for multi-tenancy, security, and life-cycle
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/04-windows.mdx b/docs/12-self-hosting/03-host-creation/03-QEMU/04-windows.mdx
index 2c6ac1f0..a724c2b3 100644
--- a/docs/12-self-hosting/03-host-creation/03-QEMU/04-windows.mdx
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/04-windows.mdx
@@ -25,7 +25,7 @@ import DiskmarkSata from '/assets/images/diskmark-sata.png';
# Windows
Windows VMs on QEMU + KVM work very well provided that you have tailored your VM to match your
-needs. There are multiple possible combinations of images, hardware types, and installation methods
+needs. There are multiple possible combinations of images, machine types, and installation methods
that each have their own benefits and drawbacks.
## Choosing an Image
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/06-configuration.mdx b/docs/12-self-hosting/03-host-creation/03-QEMU/06-configuration.mdx
index f902a3a3..85d630a3 100644
--- a/docs/12-self-hosting/03-host-creation/03-QEMU/06-configuration.mdx
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/06-configuration.mdx
@@ -2,7 +2,7 @@
The following are advanced configuration options for QEMU Virtual Machines that users may find
useful but carry significant risks for system corruption and data-loss. Many of the processes below
-are hardware-level options which will differ greatly across hardware types and vendors. Users are
+are low-level options which will differ greatly across machine types and vendors. Users are
advised to proceed with caution and the understanding that support is provided on a 'best-effort'
basis only.
diff --git a/docusaurus.config.js b/docusaurus.config.js
index fd432e6c..e32eee4c 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -17,6 +17,10 @@ const config = {
favicon: 'icons/favicon.ico',
organizationName: 'game-ci', // Usually your GitHub org/user name.
projectName: 'documentation', // Usually your repo name.
+ markdown: {
+ mermaid: true,
+ },
+ themes: ['@docusaurus/theme-mermaid'],
plugins: [
['docusaurus-plugin-sass', {}],
@@ -38,6 +42,24 @@ const config = {
},
};
},
+ function cytoscapeWebpackFix() {
+ return {
+ name: 'cytoscape-webpack-fix',
+ configureWebpack() {
+ return {
+ resolve: {
+ alias: {
+ // cytoscape 3.33+ only exports UMD under "require" condition,
+ // but mermaid imports it in a webpack "import" context.
+ 'cytoscape/dist/cytoscape.umd.js': require.resolve(
+ 'cytoscape/dist/cytoscape.cjs.js',
+ ),
+ },
+ },
+ };
+ },
+ };
+ },
],
presets: [
diff --git a/package.json b/package.json
index 4df3f01c..a0ba8732 100644
--- a/package.json
+++ b/package.json
@@ -36,6 +36,7 @@
"@docusaurus/module-type-aliases": "^2.4.1",
"@docusaurus/preset-classic": "^2.4.1",
"@docusaurus/theme-common": "^2.4.1",
+ "@docusaurus/theme-mermaid": "2.4.1",
"@docusaurus/tsconfig": "^3.0.0-alpha.0",
"@mdx-js/react": "^1.6.22",
"@reduxjs/toolkit": "^1.8.1",
@@ -92,6 +93,11 @@
"prettier": "^2.6.2",
"tsc-files": "^1.1.3"
},
+ "resolutions": {
+ "mermaid": "11.14.0",
+ "dompurify": "3.4.2",
+ "uuid@npm:^11.1.0": "14.0.0"
+ },
"browserslist": {
"production": [
">0.5%",
diff --git a/versioned_docs/version-1/04-github/02-activation.mdx b/versioned_docs/version-1/04-github/02-activation.mdx
index 135715fb..63e9a06d 100644
--- a/versioned_docs/version-1/04-github/02-activation.mdx
+++ b/versioned_docs/version-1/04-github/02-activation.mdx
@@ -19,7 +19,7 @@ You may use the
action using below instructions.
The activation file uses machine identifiers and the Unity version number. All github virtual
-machines emit the same hardware ID. You cannot perform this step locally.
+machines emit the same machine ID. You cannot perform this step locally.
Let's go!
diff --git a/versioned_docs/version-2/03-github-orchestrator/01-introduction.mdx b/versioned_docs/version-2/03-github-orchestrator/01-introduction.mdx
index 3ae13efb..e723c376 100644
--- a/versioned_docs/version-2/03-github-orchestrator/01-introduction.mdx
+++ b/versioned_docs/version-2/03-github-orchestrator/01-introduction.mdx
@@ -40,7 +40,7 @@ for game development pipelines.
This solution prefers convenience, ease of use, scalability, throughput and flexibility.
-Faster solutions exist, but would all involve self-hosted hardware with an immediate local cache of
+Faster solutions exist, but would all involve self-hosted machines with an immediate local cache of
the large project files and working directory and a dedicated server.
# Orchestrator Release Status
diff --git a/versioned_docs/version-2/11-circleci/05-executors.mdx b/versioned_docs/version-2/11-circleci/05-executors.mdx
index e802e505..bd0086b5 100644
--- a/versioned_docs/version-2/11-circleci/05-executors.mdx
+++ b/versioned_docs/version-2/11-circleci/05-executors.mdx
@@ -108,7 +108,7 @@ unsure how to create or configure it, follow
The `macos-runner` is an excellent alternative over [macos](#macos) to build macOS IL2CPP with
faster build times. Its non-ephemeral nature saves times on installing external dependencies after
-the first run. And in addition, you are free to use an agent with hardware exceeding the
+the first run. In addition, you are free to use an agent with specs exceeding those
[available](https://circleci.com/docs/configuration-reference#macos-execution-environment) for the
web macOS executor.
@@ -372,7 +372,7 @@ unsure how to create or configure it, follow
The `windows-runner` is an excellent alternative over [windows](#windows) to build Windows IL2CPP
with faster build times. Its non-ephemeral nature saves times on installing external dependencies
-after the first run. And in addition, you are free to use an agent with hardware exceeding the
+after the first run. In addition, you are free to use an agent with specs exceeding those
[available](https://circleci.com/docs/configuration-reference#windows-execution) for the web Windows
executor.
diff --git a/versioned_docs/version-3/03-github-orchestrator/01-introduction.mdx b/versioned_docs/version-3/03-github-orchestrator/01-introduction.mdx
index 3ae13efb..e723c376 100644
--- a/versioned_docs/version-3/03-github-orchestrator/01-introduction.mdx
+++ b/versioned_docs/version-3/03-github-orchestrator/01-introduction.mdx
@@ -40,7 +40,7 @@ for game development pipelines.
This solution prefers convenience, ease of use, scalability, throughput and flexibility.
-Faster solutions exist, but would all involve self-hosted hardware with an immediate local cache of
+Faster solutions exist, but would all involve self-hosted machines with an immediate local cache of
the large project files and working directory and a dedicated server.
# Orchestrator Release Status
diff --git a/versioned_docs/version-3/11-circleci/05-executors.mdx b/versioned_docs/version-3/11-circleci/05-executors.mdx
index e802e505..bd0086b5 100644
--- a/versioned_docs/version-3/11-circleci/05-executors.mdx
+++ b/versioned_docs/version-3/11-circleci/05-executors.mdx
@@ -108,7 +108,7 @@ unsure how to create or configure it, follow
The `macos-runner` is an excellent alternative over [macos](#macos) to build macOS IL2CPP with
faster build times. Its non-ephemeral nature saves times on installing external dependencies after
-the first run. And in addition, you are free to use an agent with hardware exceeding the
+the first run. In addition, you are free to use an agent with specs exceeding those
[available](https://circleci.com/docs/configuration-reference#macos-execution-environment) for the
web macOS executor.
@@ -372,7 +372,7 @@ unsure how to create or configure it, follow
The `windows-runner` is an excellent alternative over [windows](#windows) to build Windows IL2CPP
with faster build times. Its non-ephemeral nature saves times on installing external dependencies
-after the first run. And in addition, you are free to use an agent with hardware exceeding the
+after the first run. In addition, you are free to use an agent with specs exceeding those
[available](https://circleci.com/docs/configuration-reference#windows-execution) for the web Windows
executor.
diff --git a/yarn.lock b/yarn.lock
index be0823be..6b081977 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -259,6 +259,16 @@ __metadata:
languageName: node
linkType: hard
+"@antfu/install-pkg@npm:^1.1.0":
+ version: 1.1.0
+ resolution: "@antfu/install-pkg@npm:1.1.0"
+ dependencies:
+ package-manager-detector: "npm:^1.3.0"
+ tinyexec: "npm:^1.0.1"
+ checksum: 10/e20b7cd1c37eff832cc878cddd794f8c3779175681cf6d75c4cc1ae1475526126a4c1f71fa027161aa1ee35a8850782be9ca0ec01b621893defebe97ba9dc70e
+ languageName: node
+ linkType: hard
+
"@babel/code-frame@npm:7.12.11":
version: 7.12.11
resolution: "@babel/code-frame@npm:7.12.11"
@@ -1824,6 +1834,53 @@ __metadata:
languageName: node
linkType: hard
+"@braintree/sanitize-url@npm:^7.1.1":
+ version: 7.1.2
+ resolution: "@braintree/sanitize-url@npm:7.1.2"
+ checksum: 10/d9626ff8f8eb5e192cd055e6e743449c21102c76bb59e405b7028fe56230fa080bfcc80dfb1e21850a6876e75adda9f7b3c888cf0685942bb74da4d2866d6ec3
+ languageName: node
+ linkType: hard
+
+"@chevrotain/cst-dts-gen@npm:12.0.0":
+ version: 12.0.0
+ resolution: "@chevrotain/cst-dts-gen@npm:12.0.0"
+ dependencies:
+ "@chevrotain/gast": "npm:12.0.0"
+ "@chevrotain/types": "npm:12.0.0"
+ checksum: 10/d5af4f2580613501eb02cd68ceb93463e47c5d2103226d698efb6bd1c01864543fff729984961b403ce7627e2268352abec76d854122aa3731432baef5f6ceec
+ languageName: node
+ linkType: hard
+
+"@chevrotain/gast@npm:12.0.0":
+ version: 12.0.0
+ resolution: "@chevrotain/gast@npm:12.0.0"
+ dependencies:
+ "@chevrotain/types": "npm:12.0.0"
+ checksum: 10/113dd2830b7fa738da56913102e52f682604675a22e88798e16e892fa0f4838fd17260090d225c332e513ec4e4caa6da0161e44f3ee8a7d7eb6dbfee4af6196b
+ languageName: node
+ linkType: hard
+
+"@chevrotain/regexp-to-ast@npm:12.0.0, @chevrotain/regexp-to-ast@npm:~12.0.0":
+ version: 12.0.0
+ resolution: "@chevrotain/regexp-to-ast@npm:12.0.0"
+ checksum: 10/61cbd3a064c71b5deb1d53bc652db5cfb6f7c162720874e28919edf07ac52f097bcd68982d9acd24a45bcb49f0217b5472876cd2abeb3c1ad40997d765ad08b1
+ languageName: node
+ linkType: hard
+
+"@chevrotain/types@npm:12.0.0":
+ version: 12.0.0
+ resolution: "@chevrotain/types@npm:12.0.0"
+ checksum: 10/a2033266ef6cef121a4787b7bc9b62c653527b91061539458249bea1574a4a752d31c270d4d78af292eb9e4b939cec35600f6729adb3c923b369a87c6ac2b994
+ languageName: node
+ linkType: hard
+
+"@chevrotain/utils@npm:12.0.0":
+ version: 12.0.0
+ resolution: "@chevrotain/utils@npm:12.0.0"
+ checksum: 10/c72ba1e953df4a9196af374966f67588e585b4700b5dfea661eae4cb3730d78e97690b48ab4d9e8db7478c74fd4f0deaa71eb5a0b9767483db9a2f7e02230a31
+ languageName: node
+ linkType: hard
+
"@colors/colors@npm:1.5.0":
version: 1.5.0
resolution: "@colors/colors@npm:1.5.0"
@@ -2282,6 +2339,25 @@ __metadata:
languageName: node
linkType: hard
+"@docusaurus/theme-mermaid@npm:2.4.1":
+ version: 2.4.1
+ resolution: "@docusaurus/theme-mermaid@npm:2.4.1"
+ dependencies:
+ "@docusaurus/core": "npm:2.4.1"
+ "@docusaurus/module-type-aliases": "npm:2.4.1"
+ "@docusaurus/theme-common": "npm:2.4.1"
+ "@docusaurus/types": "npm:2.4.1"
+ "@docusaurus/utils-validation": "npm:2.4.1"
+ "@mdx-js/react": "npm:^1.6.22"
+ mermaid: "npm:^9.2.2"
+ tslib: "npm:^2.4.0"
+ peerDependencies:
+ react: ^16.8.4 || ^17.0.0
+ react-dom: ^16.8.4 || ^17.0.0
+ checksum: 10/75b7dcac2cb4b62fc21c83c587514e97316bce235b40da77ca1fafc3226b5d2383f42f9c7570e7e1ea79176018212514e05f642b7649206c76bd112b5085bc62
+ languageName: node
+ linkType: hard
+
"@docusaurus/theme-search-algolia@npm:2.4.1":
version: 2.4.1
resolution: "@docusaurus/theme-search-algolia@npm:2.4.1"
@@ -2982,6 +3058,24 @@ __metadata:
languageName: node
linkType: hard
+"@iconify/types@npm:^2.0.0":
+ version: 2.0.0
+ resolution: "@iconify/types@npm:2.0.0"
+ checksum: 10/1b3425ecbc0eef44f23d3f27355ae7ef306d5119c566f013ef1849995b016e1fdcc5af6b74c3bc0554485d70cf5179cb9c1095b14d662a55abcae1148e1a13c9
+ languageName: node
+ linkType: hard
+
+"@iconify/utils@npm:^3.0.2":
+ version: 3.1.1
+ resolution: "@iconify/utils@npm:3.1.1"
+ dependencies:
+ "@antfu/install-pkg": "npm:^1.1.0"
+ "@iconify/types": "npm:^2.0.0"
+ mlly: "npm:^1.8.2"
+ checksum: 10/e0cdb485c0479bd0bfaa6801c46a95c90586a33334e29d11ac753ed6fcafec3125bc316185f484ae29f630482f81d018e6eecad877b2a1def8525fcae13bdf59
+ languageName: node
+ linkType: hard
+
"@isaacs/cliui@npm:^8.0.2":
version: 8.0.2
resolution: "@isaacs/cliui@npm:8.0.2"
@@ -3402,6 +3496,15 @@ __metadata:
languageName: node
linkType: hard
+"@mermaid-js/parser@npm:^1.1.0":
+ version: 1.1.0
+ resolution: "@mermaid-js/parser@npm:1.1.0"
+ dependencies:
+ langium: "npm:^4.0.0"
+ checksum: 10/845bafce5d2be94a8104112e2197ebd925e58b3c9aae5b3e4b7795e308b0bc7c8016a5c955974da956cf7351a5572e4797ad5c991ecbb771326ffa18d1db2074
+ languageName: node
+ linkType: hard
+
"@nicolo-ribaudo/eslint-scope-5-internals@npm:5.1.1-v1":
version: 5.1.1-v1
resolution: "@nicolo-ribaudo/eslint-scope-5-internals@npm:5.1.1-v1"
@@ -4082,6 +4185,278 @@ __metadata:
languageName: node
linkType: hard
+"@types/d3-array@npm:*":
+ version: 3.2.2
+ resolution: "@types/d3-array@npm:3.2.2"
+ checksum: 10/1afebd05b688cafaaea295f765b409789f088b274b8a7ca40a4bc2b79760044a898e06a915f40bbc59cf39eabdd2b5d32e960b136fc025fd05c9a9d4435614c6
+ languageName: node
+ linkType: hard
+
+"@types/d3-axis@npm:*":
+ version: 3.0.6
+ resolution: "@types/d3-axis@npm:3.0.6"
+ dependencies:
+ "@types/d3-selection": "npm:*"
+ checksum: 10/8af56b629a0597ac8ef5051b6ad5390818462d8e588e1b52fb181808b1c0525d12a658730fad757e1ae256d0db170a0e29076acdef21acc98b954608d1c37b84
+ languageName: node
+ linkType: hard
+
+"@types/d3-brush@npm:*":
+ version: 3.0.6
+ resolution: "@types/d3-brush@npm:3.0.6"
+ dependencies:
+ "@types/d3-selection": "npm:*"
+ checksum: 10/4095cee2512d965732147493c471a8dd97dfb5967479d9aef43397f8b0e074b03296302423b8379c4274f9249b52bd1d74cc021f98d4f64b5a8a4a7e6fe48335
+ languageName: node
+ linkType: hard
+
+"@types/d3-chord@npm:*":
+ version: 3.0.6
+ resolution: "@types/d3-chord@npm:3.0.6"
+ checksum: 10/ca9ba8b00debd24a2b51527b9c3db63eafa5541c08dc721d1c52ca19960c5cec93a7b1acfc0ec072dbca31d134924299755e20a4d1d4ee04b961fc0de841b418
+ languageName: node
+ linkType: hard
+
+"@types/d3-color@npm:*":
+ version: 3.1.3
+ resolution: "@types/d3-color@npm:3.1.3"
+ checksum: 10/1cf0f512c09357b25d644ab01b54200be7c9b15c808333b0ccacf767fff36f17520b2fcde9dad45e1bd7ce84befad39b43da42b4fded57680fa2127006ca3ece
+ languageName: node
+ linkType: hard
+
+"@types/d3-contour@npm:*":
+ version: 3.0.6
+ resolution: "@types/d3-contour@npm:3.0.6"
+ dependencies:
+ "@types/d3-array": "npm:*"
+ "@types/geojson": "npm:*"
+ checksum: 10/e7b7e3972aa71003c21f2c864116ffb95a9175a62ec56ec656a855e5198a66a0830b2ad7fc26811214cfa8c98cdf4190d7d351913ca0913f799fbcf2a4c99b2d
+ languageName: node
+ linkType: hard
+
+"@types/d3-delaunay@npm:*":
+ version: 6.0.4
+ resolution: "@types/d3-delaunay@npm:6.0.4"
+ checksum: 10/cb8d2c9ed0b39ade3107b9792544a745b2de3811a6bd054813e9dc708b1132fbacd796e54c0602c11b3a14458d14487c5276c1affb7c2b9f25fe55fff88d6d25
+ languageName: node
+ linkType: hard
+
+"@types/d3-dispatch@npm:*":
+ version: 3.0.7
+ resolution: "@types/d3-dispatch@npm:3.0.7"
+ checksum: 10/ce7ab5a7d5c64aacf563797c0c61f3862b9ff687cb35470fe462219f09e402185646f51707339beede616586d92ded6974c3958dbeb15e35a85b1ecfafdf13a8
+ languageName: node
+ linkType: hard
+
+"@types/d3-drag@npm:*":
+ version: 3.0.7
+ resolution: "@types/d3-drag@npm:3.0.7"
+ dependencies:
+ "@types/d3-selection": "npm:*"
+ checksum: 10/93aba299c3a8d41ee326c5304ab694ceea135ed115c3b2ccab727a5d9bfc935f7f36d3fc416c013010eb755ac536c52adfcb15c195f241dc61f62650cc95088e
+ languageName: node
+ linkType: hard
+
+"@types/d3-dsv@npm:*":
+ version: 3.0.7
+ resolution: "@types/d3-dsv@npm:3.0.7"
+ checksum: 10/8507f542135cae472781dff1c3b391eceedad0f2032d24ac4a0814e72e2f6877e4ddcb66f44627069977ee61029dc0a729edf659ed73cbf1040f55a7451f05ef
+ languageName: node
+ linkType: hard
+
+"@types/d3-ease@npm:*":
+ version: 3.0.2
+ resolution: "@types/d3-ease@npm:3.0.2"
+ checksum: 10/d8f92a8a7a008da71f847a16227fdcb53a8938200ecdf8d831ab6b49aba91e8921769761d3bfa7e7191b28f62783bfd8b0937e66bae39d4dd7fb0b63b50d4a94
+ languageName: node
+ linkType: hard
+
+"@types/d3-fetch@npm:*":
+ version: 3.0.7
+ resolution: "@types/d3-fetch@npm:3.0.7"
+ dependencies:
+ "@types/d3-dsv": "npm:*"
+ checksum: 10/d496475cec7750f75740936e750a0150ca45e924a4f4697ad2c564f3a8f6c4ebc1b1edf8e081936e896532516731dbbaf2efd4890d53274a8eae13f51f821557
+ languageName: node
+ linkType: hard
+
+"@types/d3-force@npm:*":
+ version: 3.0.10
+ resolution: "@types/d3-force@npm:3.0.10"
+ checksum: 10/9c35abed2af91b94fc72d6b477188626e628ed89a01016437502c1deaf558da934b5d0cc808c2f2979ac853b6302b3d6ef763eddaff3a55552a55c0be710d5ca
+ languageName: node
+ linkType: hard
+
+"@types/d3-format@npm:*":
+ version: 3.0.4
+ resolution: "@types/d3-format@npm:3.0.4"
+ checksum: 10/b937ecd2712d4aa38d5b4f5daab9cc8a576383868be1809e046aec99eeb1f1798c139f2e862dc400a82494c763be46087d154891773417f8eb53c73762ba3eb8
+ languageName: node
+ linkType: hard
+
+"@types/d3-geo@npm:*":
+ version: 3.1.0
+ resolution: "@types/d3-geo@npm:3.1.0"
+ dependencies:
+ "@types/geojson": "npm:*"
+ checksum: 10/e759d98470fe605ff0088247af81c3197cefce72b16eafe8acae606216c3e0a9f908df4e7cd5005ecfe13b8ac8396a51aaa0d282f3ca7d1c3850313a13fac905
+ languageName: node
+ linkType: hard
+
+"@types/d3-hierarchy@npm:*":
+ version: 3.1.7
+ resolution: "@types/d3-hierarchy@npm:3.1.7"
+ checksum: 10/9ff6cdedf5557ef9e1e7a65ca3c6846c895c84c1184e11ec6fa48565e96ebf5482d8be5cc791a8bc7f7debbd0e62604ee3da3ddca4f9d58bf6c8b4030567c6c6
+ languageName: node
+ linkType: hard
+
+"@types/d3-interpolate@npm:*":
+ version: 3.0.4
+ resolution: "@types/d3-interpolate@npm:3.0.4"
+ dependencies:
+ "@types/d3-color": "npm:*"
+ checksum: 10/72a883afd52c91132598b02a8cdfced9e783c54ca7e4459f9e29d5f45d11fb33f2cabc844e42fd65ba6e28f2a931dcce1add8607d2f02ef6fb8ea5b83ae84127
+ languageName: node
+ linkType: hard
+
+"@types/d3-path@npm:*":
+ version: 3.1.1
+ resolution: "@types/d3-path@npm:3.1.1"
+ checksum: 10/0437994d45d852ecbe9c4484e5abe504cd48751796d23798b6d829503a15563fdd348d93ac44489ba9c656992d16157f695eb889d9ce1198963f8e1dbabb1266
+ languageName: node
+ linkType: hard
+
+"@types/d3-polygon@npm:*":
+ version: 3.0.2
+ resolution: "@types/d3-polygon@npm:3.0.2"
+ checksum: 10/7cf1eadb54f02dd3617512b558f4c0f3811f8a6a8c887d9886981c3cc251db28b68329b2b0707d9f517231a72060adbb08855227f89bef6ef30caedc0a67cab2
+ languageName: node
+ linkType: hard
+
+"@types/d3-quadtree@npm:*":
+ version: 3.0.6
+ resolution: "@types/d3-quadtree@npm:3.0.6"
+ checksum: 10/4c260c9857d496b7f112cf57680c411c1912cc72538a5846c401429e3ed89a097c66410cfd38b394bfb4733ec2cb47d345b4eb5e202cbfb8e78ab044b535be02
+ languageName: node
+ linkType: hard
+
+"@types/d3-random@npm:*":
+ version: 3.0.3
+ resolution: "@types/d3-random@npm:3.0.3"
+ checksum: 10/2c126dda6846f6c7e02c9123a30b4cdf27f3655d19b78456bbb330fbac27acceeeb987318055d3964dba8e6450377ff737db91d81f27c81ca6f4522c9b994ef2
+ languageName: node
+ linkType: hard
+
+"@types/d3-scale-chromatic@npm:*":
+ version: 3.1.0
+ resolution: "@types/d3-scale-chromatic@npm:3.1.0"
+ checksum: 10/6b04af931b7cd4aa09f21519970cab44aaae181faf076013ab93ccb0d550ec16f4c8d444c1e9dee1493be4261a8a8bb6f8e6356e6f4c6ba0650011b1e8a38aef
+ languageName: node
+ linkType: hard
+
+"@types/d3-scale@npm:*":
+ version: 4.0.9
+ resolution: "@types/d3-scale@npm:4.0.9"
+ dependencies:
+ "@types/d3-time": "npm:*"
+ checksum: 10/2cae90a5e39252ae51388f3909ffb7009178582990462838a4edd53dd7e2e08121b38f0d2e1ac0e28e41167e88dea5b99e064ca139ba917b900a8020cf85362f
+ languageName: node
+ linkType: hard
+
+"@types/d3-selection@npm:*":
+ version: 3.0.11
+ resolution: "@types/d3-selection@npm:3.0.11"
+ checksum: 10/2d2d993b9e9553d066566cb22916c632e5911090db99e247bd8c32855a344e6b7c25b674f3c27956c367a6b3b1214b09931ce854788c3be2072003e01f2c75d7
+ languageName: node
+ linkType: hard
+
+"@types/d3-shape@npm:*":
+ version: 3.1.8
+ resolution: "@types/d3-shape@npm:3.1.8"
+ dependencies:
+ "@types/d3-path": "npm:*"
+ checksum: 10/ebc161d49101d84409829fea516ba7ec71ad51a1e97438ca0fafc1c30b56b3feae802d220375323632723a338dda7237c652e831e0b53527a6222ab0d1bb7809
+ languageName: node
+ linkType: hard
+
+"@types/d3-time-format@npm:*":
+ version: 4.0.3
+ resolution: "@types/d3-time-format@npm:4.0.3"
+ checksum: 10/9dfc1516502ac1c657d6024bdb88b6dc7e21dd7bff88f6187616cf9a0108250f63507a2004901ece4f97cc46602005a2ca2d05c6dbe53e8a0f6899bd60d4ff7a
+ languageName: node
+ linkType: hard
+
+"@types/d3-time@npm:*":
+ version: 3.0.4
+ resolution: "@types/d3-time@npm:3.0.4"
+ checksum: 10/b1eb4255066da56023ad243fd4ae5a20462d73bd087a0297c7d49ece42b2304a4a04297568c604a38541019885b2bc35a9e0fd704fad218e9bc9c5f07dc685ce
+ languageName: node
+ linkType: hard
+
+"@types/d3-timer@npm:*":
+ version: 3.0.2
+ resolution: "@types/d3-timer@npm:3.0.2"
+ checksum: 10/1643eebfa5f4ae3eb00b556bbc509444d88078208ec2589ddd8e4a24f230dd4cf2301e9365947e70b1bee33f63aaefab84cd907822aae812b9bc4871b98ab0e1
+ languageName: node
+ linkType: hard
+
+"@types/d3-transition@npm:*":
+ version: 3.0.9
+ resolution: "@types/d3-transition@npm:3.0.9"
+ dependencies:
+ "@types/d3-selection": "npm:*"
+ checksum: 10/dad647c485440f176117e8a45f31aee9427d8d4dfa174eaa2f01e702641db53ad0f752a144b20987c7189723c4f0afe0bf0f16d95b2a91aa28937eee4339c161
+ languageName: node
+ linkType: hard
+
+"@types/d3-zoom@npm:*":
+ version: 3.0.8
+ resolution: "@types/d3-zoom@npm:3.0.8"
+ dependencies:
+ "@types/d3-interpolate": "npm:*"
+ "@types/d3-selection": "npm:*"
+ checksum: 10/cc6ba975cf4f55f94933413954d81b87feb1ee8b8cee8f2202cf526f218dcb3ba240cbeb04ed80522416201c4a7394b37de3eb695d840a36d190dfb2d3e62cb5
+ languageName: node
+ linkType: hard
+
+"@types/d3@npm:^7.4.3":
+ version: 7.4.3
+ resolution: "@types/d3@npm:7.4.3"
+ dependencies:
+ "@types/d3-array": "npm:*"
+ "@types/d3-axis": "npm:*"
+ "@types/d3-brush": "npm:*"
+ "@types/d3-chord": "npm:*"
+ "@types/d3-color": "npm:*"
+ "@types/d3-contour": "npm:*"
+ "@types/d3-delaunay": "npm:*"
+ "@types/d3-dispatch": "npm:*"
+ "@types/d3-drag": "npm:*"
+ "@types/d3-dsv": "npm:*"
+ "@types/d3-ease": "npm:*"
+ "@types/d3-fetch": "npm:*"
+ "@types/d3-force": "npm:*"
+ "@types/d3-format": "npm:*"
+ "@types/d3-geo": "npm:*"
+ "@types/d3-hierarchy": "npm:*"
+ "@types/d3-interpolate": "npm:*"
+ "@types/d3-path": "npm:*"
+ "@types/d3-polygon": "npm:*"
+ "@types/d3-quadtree": "npm:*"
+ "@types/d3-random": "npm:*"
+ "@types/d3-scale": "npm:*"
+ "@types/d3-scale-chromatic": "npm:*"
+ "@types/d3-selection": "npm:*"
+ "@types/d3-shape": "npm:*"
+ "@types/d3-time": "npm:*"
+ "@types/d3-time-format": "npm:*"
+ "@types/d3-timer": "npm:*"
+ "@types/d3-transition": "npm:*"
+ "@types/d3-zoom": "npm:*"
+ checksum: 10/12234aa093c8661546168becdd8956e892b276f525d96f65a7b32fed886fc6a569fe5a1171bff26fef2a5663960635f460c9504a6f2d242ba281a2b6c8c6465c
+ languageName: node
+ linkType: hard
+
"@types/eslint-scope@npm:^3.7.3":
version: 3.7.4
resolution: "@types/eslint-scope@npm:3.7.4"
@@ -4133,6 +4508,13 @@ __metadata:
languageName: node
linkType: hard
+"@types/geojson@npm:*":
+ version: 7946.0.16
+ resolution: "@types/geojson@npm:7946.0.16"
+ checksum: 10/34d07421bdd60e7b99fa265441d17ac6e9aef48e3ce22d04324127d0de1daf7fbaa0bd3be1cece2092eb6995f21da84afa5231e24621a2910ff7340bc98f496f
+ languageName: node
+ linkType: hard
+
"@types/graceful-fs@npm:^4.1.3":
version: 4.1.6
resolution: "@types/graceful-fs@npm:4.1.6"
@@ -4504,6 +4886,13 @@ __metadata:
languageName: node
linkType: hard
+"@types/trusted-types@npm:^2.0.7":
+ version: 2.0.7
+ resolution: "@types/trusted-types@npm:2.0.7"
+ checksum: 10/8e4202766a65877efcf5d5a41b7dd458480b36195e580a3b1085ad21e948bc417d55d6f8af1fd2a7ad008015d4117d5fdfe432731157da3c68678487174e4ba3
+ languageName: node
+ linkType: hard
+
"@types/unist@npm:^2, @types/unist@npm:^2.0.0, @types/unist@npm:^2.0.2, @types/unist@npm:^2.0.3":
version: 2.0.7
resolution: "@types/unist@npm:2.0.7"
@@ -4790,6 +5179,21 @@ __metadata:
languageName: node
linkType: hard
+"@upsetjs/venn.js@npm:^2.0.0":
+ version: 2.0.0
+ resolution: "@upsetjs/venn.js@npm:2.0.0"
+ dependencies:
+ d3-selection: "npm:^3.0.0"
+ d3-transition: "npm:^3.0.1"
+ dependenciesMeta:
+ d3-selection:
+ optional: true
+ d3-transition:
+ optional: true
+ checksum: 10/84505be440490666566a6f59e765e1f24b154e4ca45cfed9102f02261be45912149f706e21b3cc7a6247816a8703e1f1b667561295b587c6dcc3162a9a48a5f2
+ languageName: node
+ linkType: hard
+
"@webassemblyjs/ast@npm:1.11.6, @webassemblyjs/ast@npm:^1.11.5":
version: 1.11.6
resolution: "@webassemblyjs/ast@npm:1.11.6"
@@ -5039,6 +5443,15 @@ __metadata:
languageName: node
linkType: hard
+"acorn@npm:^8.16.0":
+ version: 8.16.0
+ resolution: "acorn@npm:8.16.0"
+ bin:
+ acorn: bin/acorn
+ checksum: 10/690c673bb4d61b38ef82795fab58526471ad7f7e67c0e40c4ff1e10ecd80ce5312554ef633c9995bfc4e6d170cef165711f9ca9e49040b62c0c66fbf2dd3df2b
+ languageName: node
+ linkType: hard
+
"address@npm:^1.0.1, address@npm:^1.1.2":
version: 1.2.2
resolution: "address@npm:1.2.2"
@@ -6133,6 +6546,30 @@ __metadata:
languageName: node
linkType: hard
+"chevrotain-allstar@npm:~0.4.3":
+ version: 0.4.3
+ resolution: "chevrotain-allstar@npm:0.4.3"
+ dependencies:
+ lodash-es: "npm:^4.18.1"
+ peerDependencies:
+ chevrotain: ^12.0.0
+ checksum: 10/a2498dfa48e8138a5caa515013f864530f72bf28ffd4385f597b0c6dfbd8b00eb71c2b21b6ba30a8f2f38ec5efbb78b5eb931900d7bf33be3f0cf6bce0342e7f
+ languageName: node
+ linkType: hard
+
+"chevrotain@npm:~12.0.0":
+ version: 12.0.0
+ resolution: "chevrotain@npm:12.0.0"
+ dependencies:
+ "@chevrotain/cst-dts-gen": "npm:12.0.0"
+ "@chevrotain/gast": "npm:12.0.0"
+ "@chevrotain/regexp-to-ast": "npm:12.0.0"
+ "@chevrotain/types": "npm:12.0.0"
+ "@chevrotain/utils": "npm:12.0.0"
+ checksum: 10/6100982d3e0278ef62b5322a38ade4e99226e025a9d203c1e7dd53ad73974b18f190000f222a721db2cb4879e869eb25979cd28ca05756f4f3bbf00ffab57049
+ languageName: node
+ linkType: hard
+
"chokidar@npm:>=3.0.0 <4.0.0, chokidar@npm:^3.4.2":
version: 3.5.3
resolution: "chokidar@npm:3.5.3"
@@ -6404,6 +6841,13 @@ __metadata:
languageName: node
linkType: hard
+"commander@npm:7, commander@npm:^7.2.0":
+ version: 7.2.0
+ resolution: "commander@npm:7.2.0"
+ checksum: 10/9973af10727ad4b44f26703bf3e9fdc323528660a7590efe3aa9ad5042b4584c0deed84ba443f61c9d6f02dade54a5a5d3c95e306a1e1630f8374ae6db16c06d
+ languageName: node
+ linkType: hard
+
"commander@npm:^2.20.0":
version: 2.20.3
resolution: "commander@npm:2.20.3"
@@ -6425,13 +6869,6 @@ __metadata:
languageName: node
linkType: hard
-"commander@npm:^7.2.0":
- version: 7.2.0
- resolution: "commander@npm:7.2.0"
- checksum: 10/9973af10727ad4b44f26703bf3e9fdc323528660a7590efe3aa9ad5042b4584c0deed84ba443f61c9d6f02dade54a5a5d3c95e306a1e1630f8374ae6db16c06d
- languageName: node
- linkType: hard
-
"commander@npm:^8.3.0":
version: 8.3.0
resolution: "commander@npm:8.3.0"
@@ -6504,6 +6941,13 @@ __metadata:
languageName: node
linkType: hard
+"confbox@npm:^0.1.8":
+ version: 0.1.8
+ resolution: "confbox@npm:0.1.8"
+ checksum: 10/4ebcfb1c6a3b25276734ec5722e88768eb61fc02f98e11960b845c5c62bc27fd05f493d2a8244d9675b24ef95afe4c0d511cdcad02c72f5eeea463cc26687999
+ languageName: node
+ linkType: hard
+
"configstore@npm:^5.0.1":
version: 5.0.1
resolution: "configstore@npm:5.0.1"
@@ -6652,6 +7096,24 @@ __metadata:
languageName: node
linkType: hard
+"cose-base@npm:^1.0.0":
+ version: 1.0.3
+ resolution: "cose-base@npm:1.0.3"
+ dependencies:
+ layout-base: "npm:^1.0.0"
+ checksum: 10/52e1f4ae173738aebe14395e3f865dc10ce430156554bab52f4b8ef0c583375644348c2a226b83d97eebc7d35340919e7bc10d23a3e2fe51b853bf56f27b5da7
+ languageName: node
+ linkType: hard
+
+"cose-base@npm:^2.2.0":
+ version: 2.2.0
+ resolution: "cose-base@npm:2.2.0"
+ dependencies:
+ layout-base: "npm:^2.0.0"
+ checksum: 10/4d4b16a84188b8f9419d9dbaffca62561f0e0ee125569339782141111aaf2bec1d180270bbaf5a13ac956f6a8c6b74ab2431e456da239982046b9ddb612bde6a
+ languageName: node
+ linkType: hard
+
"cosmiconfig@npm:^6.0.0":
version: 6.0.0
resolution: "cosmiconfig@npm:6.0.0"
@@ -6809,135 +7271,527 @@ __metadata:
languageName: node
linkType: hard
-"css-what@npm:^6.0.1, css-what@npm:^6.1.0":
- version: 6.1.0
- resolution: "css-what@npm:6.1.0"
- checksum: 10/c67a3a2d0d81843af87f8bf0a4d0845b0f952377714abbb2884e48942409d57a2110eabee003609d02ee487b054614bdfcfc59ee265728ff105bd5aa221c1d0e
+"css-what@npm:^6.0.1, css-what@npm:^6.1.0":
+ version: 6.1.0
+ resolution: "css-what@npm:6.1.0"
+ checksum: 10/c67a3a2d0d81843af87f8bf0a4d0845b0f952377714abbb2884e48942409d57a2110eabee003609d02ee487b054614bdfcfc59ee265728ff105bd5aa221c1d0e
+ languageName: node
+ linkType: hard
+
+"cssesc@npm:^3.0.0":
+ version: 3.0.0
+ resolution: "cssesc@npm:3.0.0"
+ bin:
+ cssesc: bin/cssesc
+ checksum: 10/0e161912c1306861d8f46e1883be1cbc8b1b2879f0f509287c0db71796e4ddfb97ac96bdfca38f77f452e2c10554e1bb5678c99b07a5cf947a12778f73e47e12
+ languageName: node
+ linkType: hard
+
+"cssnano-preset-advanced@npm:^5.3.8":
+ version: 5.3.10
+ resolution: "cssnano-preset-advanced@npm:5.3.10"
+ dependencies:
+ autoprefixer: "npm:^10.4.12"
+ cssnano-preset-default: "npm:^5.2.14"
+ postcss-discard-unused: "npm:^5.1.0"
+ postcss-merge-idents: "npm:^5.1.1"
+ postcss-reduce-idents: "npm:^5.2.0"
+ postcss-zindex: "npm:^5.1.0"
+ peerDependencies:
+ postcss: ^8.2.15
+ checksum: 10/6196ee1f81ef9d26fecb45ade9f965bf706ae3ac3d7eee4fa39e68ea5c4ff6a81937cd19baf2406a9db26046193d5c20cde11126e9dc7fbb93b736dbd5c4b776
+ languageName: node
+ linkType: hard
+
+"cssnano-preset-default@npm:^5.2.14":
+ version: 5.2.14
+ resolution: "cssnano-preset-default@npm:5.2.14"
+ dependencies:
+ css-declaration-sorter: "npm:^6.3.1"
+ cssnano-utils: "npm:^3.1.0"
+ postcss-calc: "npm:^8.2.3"
+ postcss-colormin: "npm:^5.3.1"
+ postcss-convert-values: "npm:^5.1.3"
+ postcss-discard-comments: "npm:^5.1.2"
+ postcss-discard-duplicates: "npm:^5.1.0"
+ postcss-discard-empty: "npm:^5.1.1"
+ postcss-discard-overridden: "npm:^5.1.0"
+ postcss-merge-longhand: "npm:^5.1.7"
+ postcss-merge-rules: "npm:^5.1.4"
+ postcss-minify-font-values: "npm:^5.1.0"
+ postcss-minify-gradients: "npm:^5.1.1"
+ postcss-minify-params: "npm:^5.1.4"
+ postcss-minify-selectors: "npm:^5.2.1"
+ postcss-normalize-charset: "npm:^5.1.0"
+ postcss-normalize-display-values: "npm:^5.1.0"
+ postcss-normalize-positions: "npm:^5.1.1"
+ postcss-normalize-repeat-style: "npm:^5.1.1"
+ postcss-normalize-string: "npm:^5.1.0"
+ postcss-normalize-timing-functions: "npm:^5.1.0"
+ postcss-normalize-unicode: "npm:^5.1.1"
+ postcss-normalize-url: "npm:^5.1.0"
+ postcss-normalize-whitespace: "npm:^5.1.1"
+ postcss-ordered-values: "npm:^5.1.3"
+ postcss-reduce-initial: "npm:^5.1.2"
+ postcss-reduce-transforms: "npm:^5.1.0"
+ postcss-svgo: "npm:^5.1.0"
+ postcss-unique-selectors: "npm:^5.1.1"
+ peerDependencies:
+ postcss: ^8.2.15
+ checksum: 10/4103f879a594e24eef7b2f175cd46b59d777982be23f0d1b84e962d044e0bea2f26aa107dea59a711e6394fdd77faf313cee6ae4be61d34656fdf33ff278f69d
+ languageName: node
+ linkType: hard
+
+"cssnano-utils@npm:^3.1.0":
+ version: 3.1.0
+ resolution: "cssnano-utils@npm:3.1.0"
+ peerDependencies:
+ postcss: ^8.2.15
+ checksum: 10/975c84ce9174cf23bb1da1e9faed8421954607e9ea76440cd3bb0c1bea7e17e490d800fca5ae2812d1d9e9d5524eef23ede0a3f52497d7ccc628e5d7321536f2
+ languageName: node
+ linkType: hard
+
+"cssnano@npm:^5.1.12, cssnano@npm:^5.1.8":
+ version: 5.1.15
+ resolution: "cssnano@npm:5.1.15"
+ dependencies:
+ cssnano-preset-default: "npm:^5.2.14"
+ lilconfig: "npm:^2.0.3"
+ yaml: "npm:^1.10.2"
+ peerDependencies:
+ postcss: ^8.2.15
+ checksum: 10/8c5acbeabd10ffc05d01c63d3a82dcd8742299ead3f6da4016c853548b687d9b392de43e6d0f682dad1c2200d577c9360d8e709711c23721509aa4e55e052fb3
+ languageName: node
+ linkType: hard
+
+"csso@npm:^4.2.0":
+ version: 4.2.0
+ resolution: "csso@npm:4.2.0"
+ dependencies:
+ css-tree: "npm:^1.1.2"
+ checksum: 10/8b6a2dc687f2a8165dde13f67999d5afec63cb07a00ab100fbb41e4e8b28d986cfa0bc466b4f5ba5de7260c2448a64e6ad26ec718dd204d3a7d109982f0bf1aa
+ languageName: node
+ linkType: hard
+
+"cssom@npm:^0.5.0":
+ version: 0.5.0
+ resolution: "cssom@npm:0.5.0"
+ checksum: 10/b502a315b1ce020a692036cc38cb36afa44157219b80deadfa040ab800aa9321fcfbecf02fd2e6ec87db169715e27978b4ab3701f916461e9cf7808899f23b54
+ languageName: node
+ linkType: hard
+
+"cssom@npm:~0.3.6":
+ version: 0.3.8
+ resolution: "cssom@npm:0.3.8"
+ checksum: 10/49eacc88077555e419646c0ea84ddc73c97e3a346ad7cb95e22f9413a9722d8964b91d781ce21d378bd5ae058af9a745402383fa4e35e9cdfd19654b63f892a9
+ languageName: node
+ linkType: hard
+
+"cssstyle@npm:^2.3.0":
+ version: 2.3.0
+ resolution: "cssstyle@npm:2.3.0"
+ dependencies:
+ cssom: "npm:~0.3.6"
+ checksum: 10/46f7f05a153446c4018b0454ee1464b50f606cb1803c90d203524834b7438eb52f3b173ba0891c618f380ced34ee12020675dc0052a7f1be755fe4ebc27ee977
+ languageName: node
+ linkType: hard
+
+"csstype@npm:^3.0.2":
+ version: 3.1.2
+ resolution: "csstype@npm:3.1.2"
+ checksum: 10/1f39c541e9acd9562996d88bc9fb62d1cb234786ef11ed275567d4b2bd82e1ceacde25debc8de3d3b4871ae02c2933fa02614004c97190711caebad6347debc2
+ languageName: node
+ linkType: hard
+
+"cytoscape-cose-bilkent@npm:^4.1.0":
+ version: 4.1.0
+ resolution: "cytoscape-cose-bilkent@npm:4.1.0"
+ dependencies:
+ cose-base: "npm:^1.0.0"
+ peerDependencies:
+ cytoscape: ^3.2.0
+ checksum: 10/9ec2999159af62f1a251bf1e146a9a779085c4fdb1b8146596208f0097c0512fc4bffda53d3b00c87a1e8ae5024db3ebfb97162115216f5b4d024e314f4a03bb
+ languageName: node
+ linkType: hard
+
+"cytoscape-fcose@npm:^2.2.0":
+ version: 2.2.0
+ resolution: "cytoscape-fcose@npm:2.2.0"
+ dependencies:
+ cose-base: "npm:^2.2.0"
+ peerDependencies:
+ cytoscape: ^3.2.0
+ checksum: 10/927aa3b29c1d514c6513c5a785d7af7a8d0499eb166de1f42b958ef20d264ef9cbe238da0b65ae01860424972dce1c73017cf2afdae4f02f9a247f7031b00de3
+ languageName: node
+ linkType: hard
+
+"cytoscape@npm:^3.33.1":
+ version: 3.33.3
+ resolution: "cytoscape@npm:3.33.3"
+ checksum: 10/1a5aa931936bf7afb055ea0e26905b1fa7062fc34011f8da44d64514cc7b8142a3c6a944cf382e11950ac32d288bdadb85cce5bede45cf620da5be53424018b0
+ languageName: node
+ linkType: hard
+
+"d3-array@npm:1 - 2":
+ version: 2.12.1
+ resolution: "d3-array@npm:2.12.1"
+ dependencies:
+ internmap: "npm:^1.0.0"
+ checksum: 10/9fdfb91f428915006e126090fe9aa9d5fcbecc78e925eceb32de9dfb989135f6ad940a8f1b086d0b569523679f85453c5335772aa9e6d5d41b480c2610857c7f
+ languageName: node
+ linkType: hard
+
+"d3-array@npm:2 - 3, d3-array@npm:2.10.0 - 3, d3-array@npm:2.5.0 - 3, d3-array@npm:3, d3-array@npm:^3.2.0":
+ version: 3.2.4
+ resolution: "d3-array@npm:3.2.4"
+ dependencies:
+ internmap: "npm:1 - 2"
+ checksum: 10/5800c467f89634776a5977f6dae3f4e127d91be80f1d07e3e6e35303f9de93e6636d014b234838eea620f7469688d191b3f41207a30040aab750a63c97ec1d7c
+ languageName: node
+ linkType: hard
+
+"d3-axis@npm:3":
+ version: 3.0.0
+ resolution: "d3-axis@npm:3.0.0"
+ checksum: 10/15ec43ecbd4e7b606fcda60f67a522e45576dfd6aa83dff47f3e91ef6c8448841a09cd91f630b492250dcec67c6ea64463510ead5e632ff6b827aeefae1d42ad
+ languageName: node
+ linkType: hard
+
+"d3-brush@npm:3":
+ version: 3.0.0
+ resolution: "d3-brush@npm:3.0.0"
+ dependencies:
+ d3-dispatch: "npm:1 - 3"
+ d3-drag: "npm:2 - 3"
+ d3-interpolate: "npm:1 - 3"
+ d3-selection: "npm:3"
+ d3-transition: "npm:3"
+ checksum: 10/fa3a461b62f0f0ee6fe41f5babf45535a0a8f6d4999f675fb1dce932ee02eff72dec14c7296af31ca15998dc0141ccf5d02aa6499363f8bf2941d90688a1d644
+ languageName: node
+ linkType: hard
+
+"d3-chord@npm:3":
+ version: 3.0.1
+ resolution: "d3-chord@npm:3.0.1"
+ dependencies:
+ d3-path: "npm:1 - 3"
+ checksum: 10/4febcdca4fdc8ba91fc4f7545f4b6321c440150dff80c1ebef887db07bb4200395dfebede63b257393259de07f914da10842da5ab3135e1e281e33ad153e0849
+ languageName: node
+ linkType: hard
+
+"d3-color@npm:1 - 3, d3-color@npm:3":
+ version: 3.1.0
+ resolution: "d3-color@npm:3.1.0"
+ checksum: 10/536ba05bfd9f4fcd6fa289b5974f5c846b21d186875684637e22bf6855e6aba93e24a2eb3712985c6af3f502fbbfa03708edb72f58142f626241a8a17258e545
+ languageName: node
+ linkType: hard
+
+"d3-contour@npm:4":
+ version: 4.0.2
+ resolution: "d3-contour@npm:4.0.2"
+ dependencies:
+ d3-array: "npm:^3.2.0"
+ checksum: 10/0b252267e0c3c5e97d7e0c720bd35654de99f981199f7240d7dd1acfd4e2d5bf1638829f6db486452eff9c38608efa4a6ab5a0d1525131735c011ee7be3cb4ba
+ languageName: node
+ linkType: hard
+
+"d3-delaunay@npm:6":
+ version: 6.0.4
+ resolution: "d3-delaunay@npm:6.0.4"
+ dependencies:
+ delaunator: "npm:5"
+ checksum: 10/4588e2872d4154daaf2c3f34fefe74e43b909cc460238a7b02823907ca6dd109f2c488c57c8551f1a2607fe4b44fdf24e3a190cea29bca70ef5606678dd9e2de
+ languageName: node
+ linkType: hard
+
+"d3-dispatch@npm:1 - 3, d3-dispatch@npm:3":
+ version: 3.0.1
+ resolution: "d3-dispatch@npm:3.0.1"
+ checksum: 10/2b82f41bf4ef88c2f9033dfe32815b67e2ef1c5754a74137a74c7d44d6f0d6ecfa934ac56ed8afe358f6c1f06462e8aa42ca0a388397b5b77a42721570e80487
+ languageName: node
+ linkType: hard
+
+"d3-drag@npm:2 - 3, d3-drag@npm:3":
+ version: 3.0.0
+ resolution: "d3-drag@npm:3.0.0"
+ dependencies:
+ d3-dispatch: "npm:1 - 3"
+ d3-selection: "npm:3"
+ checksum: 10/80bc689935e5a46ee92b2d7f71e1c792279382affed9fbcf46034bff3ff7d3f50cf61a874da4bdf331037292b9e7dca5c6401a605d4bb699fdcb4e0c87e176ec
+ languageName: node
+ linkType: hard
+
+"d3-dsv@npm:1 - 3, d3-dsv@npm:3":
+ version: 3.0.1
+ resolution: "d3-dsv@npm:3.0.1"
+ dependencies:
+ commander: "npm:7"
+ iconv-lite: "npm:0.6"
+ rw: "npm:1"
+ bin:
+ csv2json: bin/dsv2json.js
+ csv2tsv: bin/dsv2dsv.js
+ dsv2dsv: bin/dsv2dsv.js
+ dsv2json: bin/dsv2json.js
+ json2csv: bin/json2dsv.js
+ json2dsv: bin/json2dsv.js
+ json2tsv: bin/json2dsv.js
+ tsv2csv: bin/dsv2dsv.js
+ tsv2json: bin/dsv2json.js
+ checksum: 10/a628ac42a272466940f713f310db2e5246690b22035121dc1230077070c9135fb7c9b4d260f093fcadf63b0528202a1953107448a4be3a860c4f42f50d09504d
+ languageName: node
+ linkType: hard
+
+"d3-ease@npm:1 - 3, d3-ease@npm:3":
+ version: 3.0.1
+ resolution: "d3-ease@npm:3.0.1"
+ checksum: 10/985d46e868494e9e6806fedd20bad712a50dcf98f357bf604a843a9f6bc17714a657c83dd762f183173dcde983a3570fa679b2bc40017d40b24163cdc4167796
+ languageName: node
+ linkType: hard
+
+"d3-fetch@npm:3":
+ version: 3.0.1
+ resolution: "d3-fetch@npm:3.0.1"
+ dependencies:
+ d3-dsv: "npm:1 - 3"
+ checksum: 10/cd35d55f8fbb1ea1e37be362a575bb0161449957133aa5b45b9891889b2aca1dc0769c240a236736e33cd823e820a0e73fb3744582307a5d26d1df7bed0ccecb
+ languageName: node
+ linkType: hard
+
+"d3-force@npm:3":
+ version: 3.0.0
+ resolution: "d3-force@npm:3.0.0"
+ dependencies:
+ d3-dispatch: "npm:1 - 3"
+ d3-quadtree: "npm:1 - 3"
+ d3-timer: "npm:1 - 3"
+ checksum: 10/85945f8d444d78567009518f0ab54c0f0c8873eb8eb9a2ff0ab667b0f81b419e101a411415d4a2c752547ec7143f89675e8c33b8f111e55e5579a04cb7f4591c
+ languageName: node
+ linkType: hard
+
+"d3-format@npm:1 - 3, d3-format@npm:3":
+ version: 3.1.2
+ resolution: "d3-format@npm:3.1.2"
+ checksum: 10/811d913c2c7624cb0d2a8f0ccd7964c50945b3de3c7f7aa14c309fba7266a3ec53cbee8c05f6ad61b2b65b93e157c55a0e07db59bc3180c39dac52be8e841ab1
+ languageName: node
+ linkType: hard
+
+"d3-geo@npm:3":
+ version: 3.1.1
+ resolution: "d3-geo@npm:3.1.1"
+ dependencies:
+ d3-array: "npm:2.5.0 - 3"
+ checksum: 10/dc5e980330d891dabf92869b98871b05ca2021c64d7ef253bcfd4f2348839ad33576fba474baecc2def86ebd3d943a11d93c0af26be0a2694f5bd59824838133
+ languageName: node
+ linkType: hard
+
+"d3-hierarchy@npm:3":
+ version: 3.1.2
+ resolution: "d3-hierarchy@npm:3.1.2"
+ checksum: 10/497b79dc6c35e28b21e8a7b94db92876abd1d4ec082d9803a07ea8964e55b0e71c511a21489363a36f1456f069adb8ff7d33c633678730d6ae961ed350b27733
+ languageName: node
+ linkType: hard
+
+"d3-interpolate@npm:1 - 3, d3-interpolate@npm:1.2.0 - 3, d3-interpolate@npm:3":
+ version: 3.0.1
+ resolution: "d3-interpolate@npm:3.0.1"
+ dependencies:
+ d3-color: "npm:1 - 3"
+ checksum: 10/988d66497ef5c190cf64f8c80cd66e1e9a58c4d1f8932d776a8e3ae59330291795d5a342f5a97602782ccbef21a5df73bc7faf1f0dc46a5145ba6243a82a0f0e
+ languageName: node
+ linkType: hard
+
+"d3-path@npm:1":
+ version: 1.0.9
+ resolution: "d3-path@npm:1.0.9"
+ checksum: 10/6ce1747837ea2a449d9ea32e169a382978ab09a4805eb408feb6bbc12cb5f5f6ce29aefc252dd9a815d420f4813d672f75578b78b3bbaf7811f54d8c7f93fd11
+ languageName: node
+ linkType: hard
+
+"d3-path@npm:1 - 3, d3-path@npm:3, d3-path@npm:^3.1.0":
+ version: 3.1.0
+ resolution: "d3-path@npm:3.1.0"
+ checksum: 10/8e97a9ab4930a05b18adda64cf4929219bac913a5506cf8585631020253b39309549632a5cbeac778c0077994442ddaaee8316ee3f380e7baf7566321b84e76a
+ languageName: node
+ linkType: hard
+
+"d3-polygon@npm:3":
+ version: 3.0.1
+ resolution: "d3-polygon@npm:3.0.1"
+ checksum: 10/c4fa2ed19dcba13fd341815361d27e64597aa0d38d377e401e1353c4acbe8bd73c0afb3e49a1cf4119fadc3651ec8073d06aa6d0e34e664c868d071e58912cd1
+ languageName: node
+ linkType: hard
+
+"d3-quadtree@npm:1 - 3, d3-quadtree@npm:3":
+ version: 3.0.1
+ resolution: "d3-quadtree@npm:3.0.1"
+ checksum: 10/1915b6a7b031fc312f9af61947072db9468c5a2b03837f6a90b38fdaebcd0ea17a883bffd94d16b8a6848e81711a06222f7d39f129386ef1850297219b8d32ba
+ languageName: node
+ linkType: hard
+
+"d3-random@npm:3":
+ version: 3.0.1
+ resolution: "d3-random@npm:3.0.1"
+ checksum: 10/9f41d6ca3a1826cea8d88392917b5039504337d442a4d1357c870fa3031701e60209a2689a6ddae7df8fca824383d038c957eb545bc49a7428c71aaf3b11f56f
+ languageName: node
+ linkType: hard
+
+"d3-sankey@npm:^0.12.3":
+ version: 0.12.3
+ resolution: "d3-sankey@npm:0.12.3"
+ dependencies:
+ d3-array: "npm:1 - 2"
+ d3-shape: "npm:^1.2.0"
+ checksum: 10/d5c679135a26d435e9970de3fc0778c6ef5c911f0c878b246939517b57a8daa2e2db6ef99318a0dad16e6079e4b89ef9166f1f661d8d247637875b764628094d
languageName: node
linkType: hard
-"cssesc@npm:^3.0.0":
- version: 3.0.0
- resolution: "cssesc@npm:3.0.0"
- bin:
- cssesc: bin/cssesc
- checksum: 10/0e161912c1306861d8f46e1883be1cbc8b1b2879f0f509287c0db71796e4ddfb97ac96bdfca38f77f452e2c10554e1bb5678c99b07a5cf947a12778f73e47e12
+"d3-scale-chromatic@npm:3":
+ version: 3.1.0
+ resolution: "d3-scale-chromatic@npm:3.1.0"
+ dependencies:
+ d3-color: "npm:1 - 3"
+ d3-interpolate: "npm:1 - 3"
+ checksum: 10/25df6a7c621b9171df8b2225e98e41c0a6bcac4de02deb4807280b31116e8f495c5ac93301796098ee5b698cb690154e8138d90d72fd1fe36744c60e02a3d8c4
languageName: node
linkType: hard
-"cssnano-preset-advanced@npm:^5.3.8":
- version: 5.3.10
- resolution: "cssnano-preset-advanced@npm:5.3.10"
+"d3-scale@npm:4":
+ version: 4.0.2
+ resolution: "d3-scale@npm:4.0.2"
dependencies:
- autoprefixer: "npm:^10.4.12"
- cssnano-preset-default: "npm:^5.2.14"
- postcss-discard-unused: "npm:^5.1.0"
- postcss-merge-idents: "npm:^5.1.1"
- postcss-reduce-idents: "npm:^5.2.0"
- postcss-zindex: "npm:^5.1.0"
- peerDependencies:
- postcss: ^8.2.15
- checksum: 10/6196ee1f81ef9d26fecb45ade9f965bf706ae3ac3d7eee4fa39e68ea5c4ff6a81937cd19baf2406a9db26046193d5c20cde11126e9dc7fbb93b736dbd5c4b776
+ d3-array: "npm:2.10.0 - 3"
+ d3-format: "npm:1 - 3"
+ d3-interpolate: "npm:1.2.0 - 3"
+ d3-time: "npm:2.1.1 - 3"
+ d3-time-format: "npm:2 - 4"
+ checksum: 10/e2dc4243586eae2a0fdf91de1df1a90d51dfacb295933f0ca7e9184c31203b01436bef69906ad40f1100173a5e6197ae753cb7b8a1a8fcfda43194ea9cad6493
languageName: node
linkType: hard
-"cssnano-preset-default@npm:^5.2.14":
- version: 5.2.14
- resolution: "cssnano-preset-default@npm:5.2.14"
- dependencies:
- css-declaration-sorter: "npm:^6.3.1"
- cssnano-utils: "npm:^3.1.0"
- postcss-calc: "npm:^8.2.3"
- postcss-colormin: "npm:^5.3.1"
- postcss-convert-values: "npm:^5.1.3"
- postcss-discard-comments: "npm:^5.1.2"
- postcss-discard-duplicates: "npm:^5.1.0"
- postcss-discard-empty: "npm:^5.1.1"
- postcss-discard-overridden: "npm:^5.1.0"
- postcss-merge-longhand: "npm:^5.1.7"
- postcss-merge-rules: "npm:^5.1.4"
- postcss-minify-font-values: "npm:^5.1.0"
- postcss-minify-gradients: "npm:^5.1.1"
- postcss-minify-params: "npm:^5.1.4"
- postcss-minify-selectors: "npm:^5.2.1"
- postcss-normalize-charset: "npm:^5.1.0"
- postcss-normalize-display-values: "npm:^5.1.0"
- postcss-normalize-positions: "npm:^5.1.1"
- postcss-normalize-repeat-style: "npm:^5.1.1"
- postcss-normalize-string: "npm:^5.1.0"
- postcss-normalize-timing-functions: "npm:^5.1.0"
- postcss-normalize-unicode: "npm:^5.1.1"
- postcss-normalize-url: "npm:^5.1.0"
- postcss-normalize-whitespace: "npm:^5.1.1"
- postcss-ordered-values: "npm:^5.1.3"
- postcss-reduce-initial: "npm:^5.1.2"
- postcss-reduce-transforms: "npm:^5.1.0"
- postcss-svgo: "npm:^5.1.0"
- postcss-unique-selectors: "npm:^5.1.1"
- peerDependencies:
- postcss: ^8.2.15
- checksum: 10/4103f879a594e24eef7b2f175cd46b59d777982be23f0d1b84e962d044e0bea2f26aa107dea59a711e6394fdd77faf313cee6ae4be61d34656fdf33ff278f69d
+"d3-selection@npm:2 - 3, d3-selection@npm:3, d3-selection@npm:^3.0.0":
+ version: 3.0.0
+ resolution: "d3-selection@npm:3.0.0"
+ checksum: 10/0e5acfd305b31628b7be5009ba7303d84bb34817a88ed4dde9c8bd9c23528573fc5272f89fc04e5be03d2cbf5441a248d7274aaf55a8ef3dad46e16333d72298
languageName: node
linkType: hard
-"cssnano-utils@npm:^3.1.0":
- version: 3.1.0
- resolution: "cssnano-utils@npm:3.1.0"
- peerDependencies:
- postcss: ^8.2.15
- checksum: 10/975c84ce9174cf23bb1da1e9faed8421954607e9ea76440cd3bb0c1bea7e17e490d800fca5ae2812d1d9e9d5524eef23ede0a3f52497d7ccc628e5d7321536f2
+"d3-shape@npm:3":
+ version: 3.2.0
+ resolution: "d3-shape@npm:3.2.0"
+ dependencies:
+ d3-path: "npm:^3.1.0"
+ checksum: 10/2e861f4d4781ee8abd85d2b435f848d667479dcf01a4e0db3a06600a5bdeddedb240f88229ec7b3bf7fa300c2b3526faeaf7e75f9a24dbf4396d3cc5358ff39d
languageName: node
linkType: hard
-"cssnano@npm:^5.1.12, cssnano@npm:^5.1.8":
- version: 5.1.15
- resolution: "cssnano@npm:5.1.15"
+"d3-shape@npm:^1.2.0":
+ version: 1.3.7
+ resolution: "d3-shape@npm:1.3.7"
dependencies:
- cssnano-preset-default: "npm:^5.2.14"
- lilconfig: "npm:^2.0.3"
- yaml: "npm:^1.10.2"
- peerDependencies:
- postcss: ^8.2.15
- checksum: 10/8c5acbeabd10ffc05d01c63d3a82dcd8742299ead3f6da4016c853548b687d9b392de43e6d0f682dad1c2200d577c9360d8e709711c23721509aa4e55e052fb3
+ d3-path: "npm:1"
+ checksum: 10/1e40fdcfdc8edc9c53a77a6aaea2dbf31bf06df12ebd66cc8d91f76bbde753049ad21dfee0577f7dc5d0a4468554ede4783f6df7d809e291745334dba977c09e
languageName: node
linkType: hard
-"csso@npm:^4.2.0":
- version: 4.2.0
- resolution: "csso@npm:4.2.0"
+"d3-time-format@npm:2 - 4, d3-time-format@npm:4":
+ version: 4.1.0
+ resolution: "d3-time-format@npm:4.1.0"
dependencies:
- css-tree: "npm:^1.1.2"
- checksum: 10/8b6a2dc687f2a8165dde13f67999d5afec63cb07a00ab100fbb41e4e8b28d986cfa0bc466b4f5ba5de7260c2448a64e6ad26ec718dd204d3a7d109982f0bf1aa
+ d3-time: "npm:1 - 3"
+ checksum: 10/ffc0959258fbb90e3890bfb31b43b764f51502b575e87d0af2c85b85ac379120d246914d07fca9f533d1bcedc27b2841d308a00fd64848c3e2cad9eff5c9a0aa
languageName: node
linkType: hard
-"cssom@npm:^0.5.0":
- version: 0.5.0
- resolution: "cssom@npm:0.5.0"
- checksum: 10/b502a315b1ce020a692036cc38cb36afa44157219b80deadfa040ab800aa9321fcfbecf02fd2e6ec87db169715e27978b4ab3701f916461e9cf7808899f23b54
+"d3-time@npm:1 - 3, d3-time@npm:2.1.1 - 3, d3-time@npm:3":
+ version: 3.1.0
+ resolution: "d3-time@npm:3.1.0"
+ dependencies:
+ d3-array: "npm:2 - 3"
+ checksum: 10/c110bed295ce63e8180e45b82a9b0ba114d5f33ff315871878f209c1a6d821caa505739a2b07f38d1396637155b8e7372632dacc018e11fbe8ceef58f6af806d
languageName: node
linkType: hard
-"cssom@npm:~0.3.6":
- version: 0.3.8
- resolution: "cssom@npm:0.3.8"
- checksum: 10/49eacc88077555e419646c0ea84ddc73c97e3a346ad7cb95e22f9413a9722d8964b91d781ce21d378bd5ae058af9a745402383fa4e35e9cdfd19654b63f892a9
+"d3-timer@npm:1 - 3, d3-timer@npm:3":
+ version: 3.0.1
+ resolution: "d3-timer@npm:3.0.1"
+ checksum: 10/004128602bb187948d72c7dc153f0f063f38ac7a584171de0b45e3a841ad2e17f1e40ad396a4af9cce5551b6ab4a838d5246d23492553843d9da4a4050a911e2
languageName: node
linkType: hard
-"cssstyle@npm:^2.3.0":
- version: 2.3.0
- resolution: "cssstyle@npm:2.3.0"
+"d3-transition@npm:2 - 3, d3-transition@npm:3, d3-transition@npm:^3.0.1":
+ version: 3.0.1
+ resolution: "d3-transition@npm:3.0.1"
dependencies:
- cssom: "npm:~0.3.6"
- checksum: 10/46f7f05a153446c4018b0454ee1464b50f606cb1803c90d203524834b7438eb52f3b173ba0891c618f380ced34ee12020675dc0052a7f1be755fe4ebc27ee977
+ d3-color: "npm:1 - 3"
+ d3-dispatch: "npm:1 - 3"
+ d3-ease: "npm:1 - 3"
+ d3-interpolate: "npm:1 - 3"
+ d3-timer: "npm:1 - 3"
+ peerDependencies:
+ d3-selection: 2 - 3
+ checksum: 10/02571636acb82f5532117928a87fe25de68f088c38ab4a8b16e495f0f2d08a3fd2937eaebdefdfcf7f1461545524927d2632d795839b88d2e4c71e387aaaffac
languageName: node
linkType: hard
-"csstype@npm:^3.0.2":
- version: 3.1.2
- resolution: "csstype@npm:3.1.2"
- checksum: 10/1f39c541e9acd9562996d88bc9fb62d1cb234786ef11ed275567d4b2bd82e1ceacde25debc8de3d3b4871ae02c2933fa02614004c97190711caebad6347debc2
+"d3-zoom@npm:3":
+ version: 3.0.0
+ resolution: "d3-zoom@npm:3.0.0"
+ dependencies:
+ d3-dispatch: "npm:1 - 3"
+ d3-drag: "npm:2 - 3"
+ d3-interpolate: "npm:1 - 3"
+ d3-selection: "npm:2 - 3"
+ d3-transition: "npm:2 - 3"
+ checksum: 10/0e6e5c14e33c4ecdff311a900dd037dea407734f2dd2818988ed6eae342c1799e8605824523678bd404f81e37824cc588f62dbde46912444c89acc7888036c6b
+ languageName: node
+ linkType: hard
+
+"d3@npm:^7.9.0":
+ version: 7.9.0
+ resolution: "d3@npm:7.9.0"
+ dependencies:
+ d3-array: "npm:3"
+ d3-axis: "npm:3"
+ d3-brush: "npm:3"
+ d3-chord: "npm:3"
+ d3-color: "npm:3"
+ d3-contour: "npm:4"
+ d3-delaunay: "npm:6"
+ d3-dispatch: "npm:3"
+ d3-drag: "npm:3"
+ d3-dsv: "npm:3"
+ d3-ease: "npm:3"
+ d3-fetch: "npm:3"
+ d3-force: "npm:3"
+ d3-format: "npm:3"
+ d3-geo: "npm:3"
+ d3-hierarchy: "npm:3"
+ d3-interpolate: "npm:3"
+ d3-path: "npm:3"
+ d3-polygon: "npm:3"
+ d3-quadtree: "npm:3"
+ d3-random: "npm:3"
+ d3-scale: "npm:4"
+ d3-scale-chromatic: "npm:3"
+ d3-selection: "npm:3"
+ d3-shape: "npm:3"
+ d3-time: "npm:3"
+ d3-time-format: "npm:4"
+ d3-timer: "npm:3"
+ d3-transition: "npm:3"
+ d3-zoom: "npm:3"
+ checksum: 10/b0b418996bdf279b01f5c7a0117927f9ad3e833c9ce4657550ce6f6ace70b70cf829c4144b01df0be5a0f716d4e5f15ab0cadc5ff1ce1561d7be29ac86493d83
+ languageName: node
+ linkType: hard
+
+"dagre-d3-es@npm:7.0.14":
+ version: 7.0.14
+ resolution: "dagre-d3-es@npm:7.0.14"
+ dependencies:
+ d3: "npm:^7.9.0"
+ lodash-es: "npm:^4.17.21"
+ checksum: 10/f2787049ae2684de27950dfc61eb23437cbb5c3ca7ec7f58620e19f16059465b6d23324ca861961353a60bb4cdaa5c66edfa9bbe44ac2304b72dd00ab4199714
languageName: node
linkType: hard
@@ -6975,6 +7829,13 @@ __metadata:
languageName: node
linkType: hard
+"dayjs@npm:^1.11.19":
+ version: 1.11.20
+ resolution: "dayjs@npm:1.11.20"
+ checksum: 10/5347533f21a55b8bb1b1ef559be9b805514c3a8fb7e68b75fb7e73808131c59e70909c073aa44ce8a0d159195cd110cdd4081cf87ab96cb06fee3edacae791c6
+ languageName: node
+ linkType: hard
+
"debug@npm:2.6.9, debug@npm:^2.6.0":
version: 2.6.9
resolution: "debug@npm:2.6.9"
@@ -7124,6 +7985,15 @@ __metadata:
languageName: node
linkType: hard
+"delaunator@npm:5":
+ version: 5.1.0
+ resolution: "delaunator@npm:5.1.0"
+ dependencies:
+ robust-predicates: "npm:^3.0.2"
+ checksum: 10/ede01ddbb69c2d12672e1fd334e4265099d73c082a10fc2d33b4e527c61e0221b25011e9d49a701941211bbedd9d39b9eccac232803edf22d6308ad08ef61a98
+ languageName: node
+ linkType: hard
+
"delayed-stream@npm:~1.0.0":
version: 1.0.0
resolution: "delayed-stream@npm:1.0.0"
@@ -7384,6 +8254,18 @@ __metadata:
languageName: node
linkType: hard
+"dompurify@npm:3.4.2":
+ version: 3.4.2
+ resolution: "dompurify@npm:3.4.2"
+ dependencies:
+ "@types/trusted-types": "npm:^2.0.7"
+ dependenciesMeta:
+ "@types/trusted-types":
+ optional: true
+ checksum: 10/8681a27f17412e754b38d080e39abb36fb14dbf58194d520eedd45b47986323966a0391df540a2011bce72dd37d67f2a015246a58f39d48e3e8f6051274fb974
+ languageName: node
+ linkType: hard
+
"domutils@npm:^2.5.2, domutils@npm:^2.8.0":
version: 2.8.0
resolution: "domutils@npm:2.8.0"
@@ -8814,6 +9696,7 @@ __metadata:
"@docusaurus/module-type-aliases": "npm:^2.4.1"
"@docusaurus/preset-classic": "npm:^2.4.1"
"@docusaurus/theme-common": "npm:^2.4.1"
+ "@docusaurus/theme-mermaid": "npm:2.4.1"
"@docusaurus/tsconfig": "npm:^3.0.0-alpha.0"
"@mdx-js/react": "npm:^1.6.22"
"@playwright/test": "npm:^1.58.2"
@@ -9179,6 +10062,13 @@ __metadata:
languageName: node
linkType: hard
+"hachure-fill@npm:^0.5.2":
+ version: 0.5.2
+ resolution: "hachure-fill@npm:0.5.2"
+ checksum: 10/d78f1b992d1c8951a4fc893bf32045748132a8b481c15d6d31c77c05557f5fa86913a2b66b3c3a3c8ce46ca8e0a46b0b2aa11f979bc804d8edba77b8c30eb1ca
+ languageName: node
+ linkType: hard
+
"handle-thing@npm:^2.0.0":
version: 2.0.1
resolution: "handle-thing@npm:2.0.1"
@@ -9625,7 +10515,7 @@ __metadata:
languageName: node
linkType: hard
-"iconv-lite@npm:0.6.3, iconv-lite@npm:^0.6.2":
+"iconv-lite@npm:0.6, iconv-lite@npm:0.6.3, iconv-lite@npm:^0.6.2":
version: 0.6.3
resolution: "iconv-lite@npm:0.6.3"
dependencies:
@@ -9802,6 +10692,20 @@ __metadata:
languageName: node
linkType: hard
+"internmap@npm:1 - 2":
+ version: 2.0.3
+ resolution: "internmap@npm:2.0.3"
+ checksum: 10/873e0e7fcfe32f999aa0997a0b648b1244508e56e3ea6b8259b5245b50b5eeb3853fba221f96692bd6d1def501da76c32d64a5cb22a0b26cdd9b445664f805e0
+ languageName: node
+ linkType: hard
+
+"internmap@npm:^1.0.0":
+ version: 1.0.1
+ resolution: "internmap@npm:1.0.1"
+ checksum: 10/429cb9e28f393f10c73a826d71ba9e359711b7e42345bd684aba708f43b8139ce90f09b15abbf977a981474ac61615294854e5b9520d3f65187d0f6a2ff27665
+ languageName: node
+ linkType: hard
+
"interpret@npm:^1.0.0":
version: 1.4.0
resolution: "interpret@npm:1.4.0"
@@ -11135,6 +12039,17 @@ __metadata:
languageName: node
linkType: hard
+"katex@npm:^0.16.25":
+ version: 0.16.45
+ resolution: "katex@npm:0.16.45"
+ dependencies:
+ commander: "npm:^8.3.0"
+ bin:
+ katex: cli.js
+ checksum: 10/8c82f9651c3615459722166a6ccb16f23ecd8323850a956568b69b1f641331a57679aa9f3ad4903c4be26089434cbbde850c0384b9b0dff022e4357b84a20ebc
+ languageName: node
+ linkType: hard
+
"keyv@npm:^3.0.0":
version: 3.1.0
resolution: "keyv@npm:3.1.0"
@@ -11144,6 +12059,13 @@ __metadata:
languageName: node
linkType: hard
+"khroma@npm:^2.1.0":
+ version: 2.1.0
+ resolution: "khroma@npm:2.1.0"
+ checksum: 10/a195e317bf6f3a1cba98df2677bf9bf6d14195ee0b1c3e5bc20a542cd99652682f290c196a8963956d87aed4ad65ac0bc8a15d75cddf00801fdafd148e01a5d2
+ languageName: node
+ linkType: hard
+
"kind-of@npm:^6.0.0, kind-of@npm:^6.0.2":
version: 6.0.3
resolution: "kind-of@npm:6.0.3"
@@ -11165,6 +12087,20 @@ __metadata:
languageName: node
linkType: hard
+"langium@npm:^4.0.0":
+ version: 4.2.3
+ resolution: "langium@npm:4.2.3"
+ dependencies:
+ "@chevrotain/regexp-to-ast": "npm:~12.0.0"
+ chevrotain: "npm:~12.0.0"
+ chevrotain-allstar: "npm:~0.4.3"
+ vscode-languageserver: "npm:~9.0.1"
+ vscode-languageserver-textdocument: "npm:~1.0.11"
+ vscode-uri: "npm:~3.1.0"
+ checksum: 10/64f777ad6f28f74e7c933c26f2bffbb7fe299a5f35296b5f23be5a798e0e65f77597ae1936add1cd19ad7f3ec51121e2ae5f6d12c8f469c15dbebb3a312003d5
+ languageName: node
+ linkType: hard
+
"language-subtag-registry@npm:~0.3.2":
version: 0.3.22
resolution: "language-subtag-registry@npm:0.3.22"
@@ -11200,6 +12136,20 @@ __metadata:
languageName: node
linkType: hard
+"layout-base@npm:^1.0.0":
+ version: 1.0.2
+ resolution: "layout-base@npm:1.0.2"
+ checksum: 10/34504e61e4770e563cf49d4a56c8c10f1da0fb452cff89a652118783189c642ebc86a300d97cbc247e59a9c1eb06a2d419982f7dd10e8eedcab2414bc46d32f8
+ languageName: node
+ linkType: hard
+
+"layout-base@npm:^2.0.0":
+ version: 2.0.1
+ resolution: "layout-base@npm:2.0.1"
+ checksum: 10/b5cca04a2e327ea16374a0058f73544291aeb0026972677a128594aca3b627d26949140ab7d275798c7d39193a33b41c5a856d4509c1518f49c9a5f1dad39a20
+ languageName: node
+ linkType: hard
+
"lefthook-darwin-arm64@npm:1.6.1":
version: 1.6.1
resolution: "lefthook-darwin-arm64@npm:1.6.1"
@@ -11389,6 +12339,13 @@ __metadata:
languageName: node
linkType: hard
+"lodash-es@npm:^4.17.21, lodash-es@npm:^4.17.23, lodash-es@npm:^4.18.1":
+ version: 4.18.1
+ resolution: "lodash-es@npm:4.18.1"
+ checksum: 10/8bfad225ef09ef42b04283cdaf7830efcc2ba29ae41b56501c74422155ee1ccaa1f0f6e8319def3451a1fe54dec501c8e4bee622bae2b2d98ac993731e0a5cce
+ languageName: node
+ linkType: hard
+
"lodash.camelcase@npm:^4.3.0":
version: 4.3.0
resolution: "lodash.camelcase@npm:4.3.0"
@@ -11594,6 +12551,15 @@ __metadata:
languageName: node
linkType: hard
+"marked@npm:^16.3.0":
+ version: 16.4.2
+ resolution: "marked@npm:16.4.2"
+ bin:
+ marked: bin/marked.js
+ checksum: 10/6e40e40661dce97e271198daa2054fc31e6445892a735e416c248fba046bdfa4573cafa08dc254529f105e7178a34485eb7f82573979cfb377a4530f66e79187
+ languageName: node
+ linkType: hard
+
"mdast-squeeze-paragraphs@npm:^4.0.0":
version: 4.0.0
resolution: "mdast-squeeze-paragraphs@npm:4.0.0"
@@ -11693,6 +12659,35 @@ __metadata:
languageName: node
linkType: hard
+"mermaid@npm:11.14.0":
+ version: 11.14.0
+ resolution: "mermaid@npm:11.14.0"
+ dependencies:
+ "@braintree/sanitize-url": "npm:^7.1.1"
+ "@iconify/utils": "npm:^3.0.2"
+ "@mermaid-js/parser": "npm:^1.1.0"
+ "@types/d3": "npm:^7.4.3"
+ "@upsetjs/venn.js": "npm:^2.0.0"
+ cytoscape: "npm:^3.33.1"
+ cytoscape-cose-bilkent: "npm:^4.1.0"
+ cytoscape-fcose: "npm:^2.2.0"
+ d3: "npm:^7.9.0"
+ d3-sankey: "npm:^0.12.3"
+ dagre-d3-es: "npm:7.0.14"
+ dayjs: "npm:^1.11.19"
+ dompurify: "npm:^3.3.1"
+ katex: "npm:^0.16.25"
+ khroma: "npm:^2.1.0"
+ lodash-es: "npm:^4.17.23"
+ marked: "npm:^16.3.0"
+ roughjs: "npm:^4.6.6"
+ stylis: "npm:^4.3.6"
+ ts-dedent: "npm:^2.2.0"
+ uuid: "npm:^11.1.0"
+ checksum: 10/44a4d1884702956b61e99a54a7a8f369749c65da4c2278d6aa6dd2c4a6904f9950ce9925b310c37fc9f71c7684c38f5853995e4c0002256e7d887621bb24a4ec
+ languageName: node
+ linkType: hard
+
"methods@npm:~1.1.2":
version: 1.1.2
resolution: "methods@npm:1.1.2"
@@ -11900,6 +12895,18 @@ __metadata:
languageName: node
linkType: hard
+"mlly@npm:^1.7.4, mlly@npm:^1.8.2":
+ version: 1.8.2
+ resolution: "mlly@npm:1.8.2"
+ dependencies:
+ acorn: "npm:^8.16.0"
+ pathe: "npm:^2.0.3"
+ pkg-types: "npm:^1.3.1"
+ ufo: "npm:^1.6.3"
+ checksum: 10/e13b79edb113ac9d3ce8b5998d490cd979e907d31b562b9c6630e59623d32710cc83be1da46755ccd3143c57d50debcf98a9903d55e6e07e57910dc3369d96c1
+ languageName: node
+ linkType: hard
+
"moment@npm:^2.24.0, moment@npm:^2.29.2":
version: 2.29.4
resolution: "moment@npm:2.29.4"
@@ -12450,6 +13457,13 @@ __metadata:
languageName: node
linkType: hard
+"package-manager-detector@npm:^1.3.0":
+ version: 1.6.0
+ resolution: "package-manager-detector@npm:1.6.0"
+ checksum: 10/b38a9532198cefdb98a1b7131c42cbffa55d8b997d6117811cf83f00079fd57a572db2aa5e3db5e36bcd0af84d0bec5a7d6251142427314390ed99a3d76cd0a0
+ languageName: node
+ linkType: hard
+
"param-case@npm:^3.0.4":
version: 3.0.4
resolution: "param-case@npm:3.0.4"
@@ -12545,6 +13559,13 @@ __metadata:
languageName: node
linkType: hard
+"path-data-parser@npm:0.1.0, path-data-parser@npm:^0.1.0":
+ version: 0.1.0
+ resolution: "path-data-parser@npm:0.1.0"
+ checksum: 10/a23a214adb38074576a8873d25e8dea7e090b8396d86f58f83f3f6c6298ff56b06adc694147b67f0ed22f14dc478efa1d525710d3ec7b2d7b1efbac57e3fafe6
+ languageName: node
+ linkType: hard
+
"path-exists@npm:^3.0.0":
version: 3.0.0
resolution: "path-exists@npm:3.0.0"
@@ -12637,6 +13658,13 @@ __metadata:
languageName: node
linkType: hard
+"pathe@npm:^2.0.1, pathe@npm:^2.0.3":
+ version: 2.0.3
+ resolution: "pathe@npm:2.0.3"
+ checksum: 10/01e9a69928f39087d96e1751ce7d6d50da8c39abf9a12e0ac2389c42c83bc76f78c45a475bd9026a02e6a6f79be63acc75667df855862fe567d99a00a540d23d
+ languageName: node
+ linkType: hard
+
"picocolors@npm:^1.0.0":
version: 1.0.0
resolution: "picocolors@npm:1.0.0"
@@ -12681,6 +13709,17 @@ __metadata:
languageName: node
linkType: hard
+"pkg-types@npm:^1.3.1":
+ version: 1.3.1
+ resolution: "pkg-types@npm:1.3.1"
+ dependencies:
+ confbox: "npm:^0.1.8"
+ mlly: "npm:^1.7.4"
+ pathe: "npm:^2.0.1"
+ checksum: 10/6d491f2244597b24fb59a50e3c258f27da3839555d2a4e112b31bcf536e9359fc4edc98639cd74d2cf16fcd4269e5a09d99fc05d89e2acc896a2f027c2f6ec44
+ languageName: node
+ linkType: hard
+
"pkg-up@npm:^3.1.0":
version: 3.1.0
resolution: "pkg-up@npm:3.1.0"
@@ -12721,6 +13760,23 @@ __metadata:
languageName: node
linkType: hard
+"points-on-curve@npm:0.2.0, points-on-curve@npm:^0.2.0":
+ version: 0.2.0
+ resolution: "points-on-curve@npm:0.2.0"
+ checksum: 10/3f9a4a9f5a624bb307a72f5cdf1f7c29bedc546716664a2cfd7228085308575e63b461a3e64a88d3b451031655714eb49469d2ced392ee014b709132cd59be93
+ languageName: node
+ linkType: hard
+
+"points-on-path@npm:^0.2.1":
+ version: 0.2.1
+ resolution: "points-on-path@npm:0.2.1"
+ dependencies:
+ path-data-parser: "npm:0.1.0"
+ points-on-curve: "npm:0.2.0"
+ checksum: 10/8b3f42feb24433b4a3e0b1c1f951340f06f523b26ed4d87446829f500f1468ad1484a6bf7fedf076ff4b492ae6b1daa7ffc07c7a8f7c00f4d072f17f79fe9ed0
+ languageName: node
+ linkType: hard
+
"postcss-calc@npm:^8.2.3":
version: 8.2.4
resolution: "postcss-calc@npm:8.2.4"
@@ -15048,6 +16104,25 @@ __metadata:
languageName: node
linkType: hard
+"robust-predicates@npm:^3.0.2":
+ version: 3.0.3
+ resolution: "robust-predicates@npm:3.0.3"
+ checksum: 10/38464ec7a839b366e039410fa375ec9ea6d365f30eb38dab1c33c0269160ffa682c552a4c2f0098d89aff085fd51023db0cf039e7e6d24f87b4fc4b74ac5e89b
+ languageName: node
+ linkType: hard
+
+"roughjs@npm:^4.6.6":
+ version: 4.6.6
+ resolution: "roughjs@npm:4.6.6"
+ dependencies:
+ hachure-fill: "npm:^0.5.2"
+ path-data-parser: "npm:^0.1.0"
+ points-on-curve: "npm:^0.2.0"
+ points-on-path: "npm:^0.2.1"
+ checksum: 10/76bd1e892d79b002dbc0591a28442462e027a77edfcdcd3dbbd2e404fa6d248891ade84ca656b24b1d40a29e3a9df5831633b7a7bb5c8551adcdac480a3dce79
+ languageName: node
+ linkType: hard
+
"rtl-detect@npm:^1.0.4":
version: 1.0.4
resolution: "rtl-detect@npm:1.0.4"
@@ -15078,6 +16153,13 @@ __metadata:
languageName: node
linkType: hard
+"rw@npm:1":
+ version: 1.3.3
+ resolution: "rw@npm:1.3.3"
+ checksum: 10/e90985d64777a00f4ab5f8c0bfea2fb5645c6bda5238840afa339c8a4f86f776e8ce83731155643a7425a0b27ce89077dab27b2f57519996ba4d2fe54cac1941
+ languageName: node
+ linkType: hard
+
"rxfire@npm:5.0.0-rc.3":
version: 5.0.0-rc.3
resolution: "rxfire@npm:5.0.0-rc.3"
@@ -16012,6 +17094,13 @@ __metadata:
languageName: node
linkType: hard
+"stylis@npm:^4.3.6":
+ version: 4.4.0
+ resolution: "stylis@npm:4.4.0"
+ checksum: 10/e5a149b571ea88b801b43deb6f6a9f5a5711b7ce501540ac2be18ca63a4cfb3fab86309c661a2f88491a5b98f27f95ef3eb397d412fba532c50f13964d9a1013
+ languageName: node
+ linkType: hard
+
"sucrase@npm:^3.32.0":
version: 3.35.0
resolution: "sucrase@npm:3.35.0"
@@ -16288,6 +17377,13 @@ __metadata:
languageName: node
linkType: hard
+"tinyexec@npm:^1.0.1":
+ version: 1.1.2
+ resolution: "tinyexec@npm:1.1.2"
+ checksum: 10/2bbe37f9001c6f5723ab39eb8dc1e88f77e830d7cf2e8f34bb75019eb505fcfe3b061b4799c502ff31fa63aa1a9adc649add5ff1e17b7fbd8c16e1afb75d0b9e
+ languageName: node
+ linkType: hard
+
"tinyglobby@npm:^0.2.12":
version: 0.2.15
resolution: "tinyglobby@npm:0.2.15"
@@ -16416,6 +17512,13 @@ __metadata:
languageName: node
linkType: hard
+"ts-dedent@npm:^2.2.0":
+ version: 2.2.0
+ resolution: "ts-dedent@npm:2.2.0"
+ checksum: 10/93ed8f7878b6d5ed3c08d99b740010eede6bccfe64bce61c5a4da06a2c17d6ddbb80a8c49c2d15251de7594a4f93ffa21dd10e7be75ef66a4dc9951b4a94e2af
+ languageName: node
+ linkType: hard
+
"ts-interface-checker@npm:^0.1.9":
version: 0.1.13
resolution: "ts-interface-checker@npm:0.1.13"
@@ -16622,6 +17725,13 @@ __metadata:
languageName: node
linkType: hard
+"ufo@npm:^1.6.3":
+ version: 1.6.4
+ resolution: "ufo@npm:1.6.4"
+ checksum: 10/dbf85425e00dd106abb852c0ea4cef6e58b395b9a43858049a8be0b0825e5cc4b53cf58a41da695c3c2a9ab4f8605923b64812be1358c39a56b3920504759d3a
+ languageName: node
+ linkType: hard
+
"unbox-primitive@npm:^1.0.2":
version: 1.0.2
resolution: "unbox-primitive@npm:1.0.2"
@@ -16980,6 +18090,15 @@ __metadata:
languageName: node
linkType: hard
+"uuid@npm:14.0.0":
+ version: 14.0.0
+ resolution: "uuid@npm:14.0.0"
+ bin:
+ uuid: dist-node/bin/uuid
+ checksum: 10/8ee9b98f9650e25555515f7a28d3c3ae9364e72f7bb19b9e08b681bc135338beba5509b2830f6ae1cfaba4d45401da0d16d4d109b977097bc3d6ba0c5583341b
+ languageName: node
+ linkType: hard
+
"uuid@npm:^8.3.2":
version: 8.3.2
resolution: "uuid@npm:8.3.2"
@@ -17060,6 +18179,55 @@ __metadata:
languageName: node
linkType: hard
+"vscode-jsonrpc@npm:8.2.0":
+ version: 8.2.0
+ resolution: "vscode-jsonrpc@npm:8.2.0"
+ checksum: 10/6d57c3aed591d0bc89d1c226061d265b04de528582bef183f5998cac5de78a736887e5238fe48b9f6a14ec32f05d8fda71599f92862ac5dacc7f26bf7399b532
+ languageName: node
+ linkType: hard
+
+"vscode-languageserver-protocol@npm:3.17.5":
+ version: 3.17.5
+ resolution: "vscode-languageserver-protocol@npm:3.17.5"
+ dependencies:
+ vscode-jsonrpc: "npm:8.2.0"
+ vscode-languageserver-types: "npm:3.17.5"
+ checksum: 10/aeb9c190184c365fa6b835e5aa7574c86cb3ecb2789386bcff76a09b22bc8b8e0d5da47c28193a9c73cfb32c10a12a91191779280324a38efb401e3ef7bad294
+ languageName: node
+ linkType: hard
+
+"vscode-languageserver-textdocument@npm:~1.0.11":
+ version: 1.0.12
+ resolution: "vscode-languageserver-textdocument@npm:1.0.12"
+ checksum: 10/2bc0fde952d40f35a31179623d1491b0fafdee156aaf58557f40f5d394a25fc84826763cdde55fa6ce2ed9cd35a931355ad6dd7fe5db82e7f21e5d865f0af8c6
+ languageName: node
+ linkType: hard
+
+"vscode-languageserver-types@npm:3.17.5":
+ version: 3.17.5
+ resolution: "vscode-languageserver-types@npm:3.17.5"
+ checksum: 10/900d0b81df5bef8d90933e75be089142f6989cc70fdb2d5a3a5f11fa20feb396aaea23ccffc8fbcc83a2f0e1b13c6ee48ff8151f236cbd6e61a4f856efac1a58
+ languageName: node
+ linkType: hard
+
+"vscode-languageserver@npm:~9.0.1":
+ version: 9.0.1
+ resolution: "vscode-languageserver@npm:9.0.1"
+ dependencies:
+ vscode-languageserver-protocol: "npm:3.17.5"
+ bin:
+ installServerIntoExtension: bin/installServerIntoExtension
+ checksum: 10/1cb643b1b1f41a620aaf4a62e152acad694c22b4d98de73fa614a0bddf3b4b4832460465bdbc43f27ba23dad7e61aba533e77b8bfac74cc8de310c39623a7ba1
+ languageName: node
+ linkType: hard
+
+"vscode-uri@npm:~3.1.0":
+ version: 3.1.0
+ resolution: "vscode-uri@npm:3.1.0"
+ checksum: 10/80c2a2421f44b64008ef1f91dfa52a2d68105cbb4dcea197dbf5b00c65ccaccf218b615e93ec587f26fc3ba04796898f3631a9406e3b04cda970c3ca8eadf646
+ languageName: node
+ linkType: hard
+
"w3c-hr-time@npm:^1.0.2":
version: 1.0.2
resolution: "w3c-hr-time@npm:1.0.2"