Commit cc81a21

Add Advanced Vulkan Compute tutorial sections on memory models, OpenCL, and conclusion

Add comprehensive documentation covering the Vulkan Memory Model (availability/visibility/domain operations), shared memory (LDS) with bank-conflict details, memory consistency with GroupMemoryBarrierWithGroupSync, the OpenCL C to SPIR-V pipeline (clspv), kernel portability guidelines, clvk layering, and the tutorial conclusion. Includes navigation entries for all new compute architecture sections.

1 parent 1778b46 commit cc81a21

46 files changed: 2843 additions & 0 deletions
antora/modules/ROOT/nav.adoc

Lines changed: 57 additions & 0 deletions

@@ -149,3 +149,60 @@
*** xref:Building_a_Simple_Engine/Advanced_Topics/Robustness2.adoc[Robustness2]
** Appendix
*** xref:Building_a_Simple_Engine/Appendix/appendix.adoc[Appendix]
* Advanced Vulkan Compute
** xref:Advanced_Vulkan_Compute/introduction.adoc[Introduction]
** The Compute Architecture and Execution Model
*** xref:Advanced_Vulkan_Compute/02_Compute_Architecture/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/02_Compute_Architecture/02_workgroups_and_invocations.adoc[Workgroups and Invocations]
*** xref:Advanced_Vulkan_Compute/02_Compute_Architecture/03_occupancy_and_latency_hiding.adoc[Occupancy and Latency Hiding]
*** xref:Advanced_Vulkan_Compute/02_Compute_Architecture/04_vulkan_1_4_scalar_layouts.adoc[Vulkan 1.4 Scalar Layouts]
** Memory Models and Consistency
*** xref:Advanced_Vulkan_Compute/03_Memory_Models/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/03_Memory_Models/02_vulkan_memory_model.adoc[The Vulkan Memory Model]
*** xref:Advanced_Vulkan_Compute/03_Memory_Models/03_shared_memory_lds.adoc[Shared Memory (LDS)]
*** xref:Advanced_Vulkan_Compute/03_Memory_Models/04_memory_consistency.adoc[Memory Consistency]
** Subgroup Operations: The Hidden Power
*** xref:Advanced_Vulkan_Compute/04_Subgroup_Operations/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/04_Subgroup_Operations/02_cross_invocation_communication.adoc[Cross-Invocation Communication]
*** xref:Advanced_Vulkan_Compute/04_Subgroup_Operations/03_subgroup_partitioning.adoc[Subgroup Partitioning]
*** xref:Advanced_Vulkan_Compute/04_Subgroup_Operations/04_non_uniform_indexing.adoc[Non-Uniform Indexing]
** Heterogeneous Ecosystem: OpenCL on Vulkan
*** xref:Advanced_Vulkan_Compute/05_OpenCL_on_Vulkan/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/05_OpenCL_on_Vulkan/02_setup_and_installation.adoc[Setup and Installation]
*** xref:Advanced_Vulkan_Compute/05_OpenCL_on_Vulkan/03_clspv_pipeline.adoc[The clspv Pipeline]
*** xref:Advanced_Vulkan_Compute/05_OpenCL_on_Vulkan/04_kernel_portability.adoc[Kernel Portability]
*** xref:Advanced_Vulkan_Compute/05_OpenCL_on_Vulkan/05_clvk_and_layering.adoc[clvk and Layering]
** High-Level Abstraction: SYCL and Single-Source C++
*** xref:Advanced_Vulkan_Compute/06_SYCL_and_Single_Source_CPP/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/06_SYCL_and_Single_Source_CPP/02_setup_and_installation.adoc[Setup and Installation]
*** xref:Advanced_Vulkan_Compute/06_SYCL_and_Single_Source_CPP/03_single_source_gpgpu.adoc[Single-Source GPGPU]
*** xref:Advanced_Vulkan_Compute/06_SYCL_and_Single_Source_CPP/04_vulkan_interoperability.adoc[Vulkan Interoperability]
*** xref:Advanced_Vulkan_Compute/06_SYCL_and_Single_Source_CPP/05_unified_shared_memory_usm.adoc[Unified Shared Memory (USM)]
** Advanced Data Structures on the GPU
*** xref:Advanced_Vulkan_Compute/07_Advanced_Data_Structures/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/07_Advanced_Data_Structures/02_gpu_resident_trees.adoc[GPU-Resident Trees]
*** xref:Advanced_Vulkan_Compute/07_Advanced_Data_Structures/03_global_atomic_management.adoc[Global Atomic Management]
*** xref:Advanced_Vulkan_Compute/07_Advanced_Data_Structures/04_device_addressable_buffers.adoc[Device-Addressable Buffers]
** Indirect Dispatch and GPU-Driven Pipelines
*** xref:Advanced_Vulkan_Compute/08_GPU_Driven_Pipelines/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/08_GPU_Driven_Pipelines/02_indirect_dispatch.adoc[Indirect Dispatch]
*** xref:Advanced_Vulkan_Compute/08_GPU_Driven_Pipelines/03_gpu_side_command_generation.adoc[GPU-Side Command Generation]
*** xref:Advanced_Vulkan_Compute/08_GPU_Driven_Pipelines/04_multi_draw_indirect_mdi.adoc[Multi-Draw Indirect (MDI)]
** Asynchronous Compute Orchestration
*** xref:Advanced_Vulkan_Compute/09_Asynchronous_Compute/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/09_Asynchronous_Compute/02_concurrent_execution.adoc[Concurrent Execution]
*** xref:Advanced_Vulkan_Compute/09_Asynchronous_Compute/03_timeline_semaphores.adoc[Timeline Semaphores]
*** xref:Advanced_Vulkan_Compute/09_Asynchronous_Compute/04_queue_priority.adoc[Queue Priority]
** Cooperative Matrices and Specialized Math
*** xref:Advanced_Vulkan_Compute/10_Specialized_Math/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/10_Specialized_Math/02_cooperative_matrices.adoc[Cooperative Matrices]
*** xref:Advanced_Vulkan_Compute/10_Specialized_Math/03_mixed_precision.adoc[Mixed Precision]
** Performance Auditing and Optimization
*** xref:Advanced_Vulkan_Compute/11_Performance_Optimization/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/11_Performance_Optimization/02_instruction_throughput.adoc[Instruction Throughput Analysis]
*** xref:Advanced_Vulkan_Compute/11_Performance_Optimization/03_divergence_audit.adoc[The "Divergence" Audit]
** Diagnostics and AI-Assisted Compute Refinement
*** xref:Advanced_Vulkan_Compute/12_Diagnostics_and_Refinement/01_introduction.adoc[Introduction]
*** xref:Advanced_Vulkan_Compute/12_Diagnostics_and_Refinement/02_compute_validation.adoc[Compute Validation]
*** xref:Advanced_Vulkan_Compute/12_Diagnostics_and_Refinement/03_assistant_led_optimization.adoc[Assistant-Led Optimization]
** xref:Advanced_Vulkan_Compute/conclusion.adoc[Conclusion]
Lines changed: 45 additions & 0 deletions

@@ -0,0 +1,45 @@
:pp: {plus}{plus}

= The Compute Architecture and Execution Model: Introduction

== Overview

To write efficient compute kernels, you must look beyond the abstract execution model of "workgroups" and "invocations" and understand how these concepts map to the physical hardware. While Vulkan provides a cross-vendor API, the silicon beneath it, whether from AMD, NVIDIA, or Intel, has its own ways of handling your data.

In this chapter, we will bridge the gap between your shader code and the silicon. We'll explore how the 3D grid system you define in `vkCmdDispatch` is sliced, diced, and distributed across the GPU's **Compute Units (CU)** or **Streaming Multiprocessors (SM)**.

=== The Language of Silicon

Before we dive in, let's align our vocabulary. Different vendors use different names for the same concepts:

* **Workgroups** (Vulkan/OpenCL) are often mapped to **Thread Blocks** (CUDA).
* **Invocations** (Vulkan) are simply **Threads**.
* **Subgroups** (Vulkan) are called **Wavefronts** (AMD) or **Warps** (NVIDIA).
* **Compute Units** (AMD) are equivalent to **Streaming Multiprocessors** (NVIDIA).

Understanding these mappings allows you to read hardware-specific documentation and performance guides regardless of which GPU you are targeting.

== Hardware Mapping

When you dispatch a workload, the GPU's hardware command processor breaks the global grid into individual workgroups. These workgroups are the fundamental unit of scheduling.

A critical rule of the GPU execution model is **workgroup atomicity**: once a workgroup is assigned to a physical compute unit, all its invocations stay on that unit until the workgroup completes. A workgroup cannot be split across multiple units. This locality is what enables **Shared Memory (LDS, Local Data Store)**: since all threads in a workgroup are physically on the same hardware block, they can share a dedicated, ultra-fast scratchpad memory.

=== Invocations and SIMD

While workgroups are the scheduling unit, the **invocation** is the smallest unit of execution. However, GPUs are **SIMD (Single Instruction, Multiple Data)** machines. They don't execute invocations one by one; instead, they group them into small bundles (Subgroups).

In these bundles, every invocation executes the exact same instruction at the same time, but on different data. This is incredibly efficient for math, but it introduces a major pitfall: **Branch Divergence**. If your code contains an `if` statement where some threads go left and others go right, the hardware must execute *both* paths, masking out the inactive threads for each.
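This masking behavior can be illustrated on the CPU. The following is a minimal sketch, not real GPU code: the 8-lane bundle width, the `divergentBranch` helper, and both branch bodies are invented for illustration. It shows how both sides of a divergent `if` execute for every lane, with a mask selecting which results survive:

[source,cpp]
----
#include <array>
#include <cstdint>

// Hypothetical 8-lane bundle: every lane runs BOTH sides of the branch;
// a per-lane mask decides which result each lane keeps.
constexpr int kLanes = 8;

std::array<int, kLanes> divergentBranch(const std::array<int, kLanes>& data) {
    std::array<bool, kLanes> mask{};
    for (int lane = 0; lane < kLanes; ++lane)
        mask[lane] = (data[lane] % 2 == 0);   // the `if` condition, per lane

    std::array<int, kLanes> result{};
    // Path A executes for ALL lanes; only masked-in lanes keep its value.
    for (int lane = 0; lane < kLanes; ++lane)
        if (mask[lane]) result[lane] = data[lane] * 2;
    // Path B also executes for ALL lanes; the inverse mask keeps its value.
    for (int lane = 0; lane < kLanes; ++lane)
        if (!mask[lane]) result[lane] = data[lane] + 1;
    return result;
}
----

The cost model is the point: a fully divergent branch pays for the instructions of both paths, even though each lane only keeps one result.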
== Performance Metrics

Throughout this section, we will focus on two key metrics that determine how well you're utilizing the hardware:

1. **Occupancy**: This is the "concurrency" metric. It represents how many active workgroups are residing on a compute unit compared to its theoretical maximum. High occupancy helps **hide latency**: if one bundle is waiting for a memory fetch from slow VRAM, the scheduler can instantly switch to another bundle that's ready to do math.
2. **Bandwidth Efficiency**: This is the "throughput" metric. Modern GPUs have massive memory bandwidth, but it's easily wasted by poor data alignment. We'll see how Vulkan 1.4's **Scalar Layouts** allow us to pack data tightly, ensuring that the shader actually uses every byte fetched from VRAM.

== What's Next?

We'll start by diving into the 3D grid system and seeing exactly how it maps to physical hardware. From there, we'll learn how to calculate theoretical occupancy and use engine tools to monitor real-world utilization. Finally, we'll master scalar block layouts to maximize your data throughput.

xref:../introduction.adoc[Previous: Introduction] | xref:02_workgroups_and_invocations.adoc[Next: Workgroups and Invocations]
Lines changed: 83 additions & 0 deletions

@@ -0,0 +1,83 @@
:pp: {plus}{plus}

= Workgroups and Invocations: The 3D Lattice

== Introduction

In the basic compute tutorial, we used a simple one-dimensional dispatch. While that works for simple tasks, it doesn't represent how the GPU actually schedules work. To write high-performance kernels, you need to understand how Vulkan's 3D grid system maps to the physical silicon of the GPU.

The grid system is more than just a convenient way to index into textures; it defines how your workload is subdivided and scheduled across the hardware.

== The Three-Tier Hierarchy

When you define a compute dispatch, you are working with a hierarchy of units. Getting these dimensions right is the first step toward high performance.

1. **Global Dispatch Grid**: This is the entire workload, defined in `vkCmdDispatch(x, y, z)`.
2. **Workgroups**: The global grid is subdivided into workgroups. The GPU's hardware scheduler assigns these workgroups to physical compute units.
3. **Invocations**: Each workgroup contains multiple individual threads, defined by the `local_size` in your shader.

=== Workgroup Locality

In the previous section, we mentioned that a workgroup cannot be split across multiple physical **Compute Units** (CU, on AMD/Intel) or **Streaming Multiprocessors** (SM, on NVIDIA). This means that all invocations within a workgroup are physically executed on the same hardware block.

This locality is a key design constraint. It allows invocations in the same workgroup to share a fast, local memory known as **LDS** (Local Data Store) or **groupshared** memory, but it also means that the size of your workgroup is limited by the physical resources of a single CU/SM. If your workgroup size is too large, the GPU simply won't be able to schedule it.

== The Math of Indexing

Vulkan provides several built-in variables to help you find your place in the grid. In Slang, these are typically passed as parameters to the entry point using semantics like `SV_DispatchThreadID`, `SV_GroupThreadID`, and `SV_GroupID`.

Let's look at how these relate in a typical shader:

[source,slang]
----
[shader("compute")]
[numthreads(16, 16, 1)]
void main(
    uint3 groupID  : SV_GroupID,          // gl_WorkGroupID
    uint3 localID  : SV_GroupThreadID,    // gl_LocalInvocationID
    uint3 globalID : SV_DispatchThreadID  // gl_GlobalInvocationID
) {
    // globalID: The unique index for this thread in the entire grid
    // Formula: globalID = groupID * numthreads + localID
    uint x = globalID.x;
    uint y = globalID.y;

    // Process pixel (x, y)
}
----

Using a 2D or 3D grid makes spatial tasks (like image processing or physics simulations) much cleaner. Instead of manually calculating a 1D index, you can use `.xy` or `.xyz` coordinates that match your data structure.
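The formula in the shader comment can be sanity-checked with plain host-side arithmetic. A small sketch (the `uint3` struct and `globalId` helper here are invented stand-ins for illustration):

[source,cpp]
----
#include <cstdint>

// Minimal stand-in for the shader's uint3 vector type.
struct uint3 { uint32_t x, y, z; };

// globalID = groupID * numthreads + localID, applied per component.
uint3 globalId(uint3 groupID, uint3 localID, uint3 numthreads) {
    return { groupID.x * numthreads.x + localID.x,
             groupID.y * numthreads.y + localID.y,
             groupID.z * numthreads.z + localID.z };
}
----

For the 16x16 workgroup above, the invocation at localID (5, 7) inside workgroup (2, 3) lands on pixel (2*16+5, 3*16+7) = (37, 55).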
== Choosing Optimal Sizes

A common mistake is choosing workgroup sizes based solely on what "fits" your data. For example, if you're processing a 10x10 image, you might choose a workgroup size of (10, 10, 1).

However, GPUs execute invocations in bundles of 32 or 64—known as **Subgroups**, **Warps** (NVIDIA), or **Wavefronts** (AMD). If your workgroup size is not a multiple of the hardware's native bundle size, you are leaving silicon idle. This is called **internal fragmentation**: a (10, 10, 1) workgroup has 100 invocations, which on 32-wide hardware occupies four bundles (128 lanes) and wastes 28 of them.

=== The Rule of 32/64

* **NVIDIA** GPUs typically prefer multiples of **32** (Warps).
* **AMD** GPUs typically prefer multiples of **64** (Wavefronts), though modern RDNA architectures can also handle 32.
* **Intel** GPUs have variable sizes (8, 16, 32).

A safe, portable choice for many workloads is a total workgroup size of **64** (e.g., `8x8`) or **256** (e.g., `16x16` or `8x8x4`). This ensures that most hardware can keep its **SIMD** (Single Instruction, Multiple Data) lanes full.
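The fragmentation rule above reduces to a one-line helper. A sketch: the `paddedWorkgroupSize` name is invented, and real code would query the subgroup width via `VkPhysicalDeviceSubgroupProperties::subgroupSize` rather than hard-code it:

[source,cpp]
----
#include <cstdint>

// Round a requested workgroup size up to a multiple of the subgroup size,
// so no SIMD lanes are wasted to internal fragmentation.
uint32_t paddedWorkgroupSize(uint32_t requested, uint32_t subgroupSize) {
    return ((requested + subgroupSize - 1) / subgroupSize) * subgroupSize;
}
----

A 100-invocation request rounds up to 128 on both 32-wide and 64-wide hardware, while 64 and 256 pass through unchanged on both — which is exactly why they are the portable choices.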
== Dispatching the Work

When you call `vkCmdDispatch(groupCountX, groupCountY, groupCountZ)`, you are defining how many times the `local_size` block is repeated.

If you have an image of size `width` x `height` and a workgroup size of `16x16`, your dispatch would look like this:

[source,cpp]
----
uint32_t groupCountX = (width + 15) / 16;
uint32_t groupCountY = (height + 15) / 16;
commandBuffer.dispatch(groupCountX, groupCountY, 1);
----

Note the use of "rounding up" (`(width + 15) / 16`). This ensures that if your image size isn't a perfect multiple of 16, you don't miss the last few pixels. Inside the shader, you would then use a bounds check: `if (x < width && y < height)`.
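To see the rounding and the bounds check working together, here is a host-side sketch with hypothetical dimensions (a 1920x1081 image, deliberately not a multiple of 16 in height; the `groupCount` helper is invented for illustration):

[source,cpp]
----
#include <cstdint>

// Ceil-division: how many groups of `localSize` cover `extent` pixels.
uint32_t groupCount(uint32_t extent, uint32_t localSize) {
    return (extent + localSize - 1) / localSize;
}
----

For 1920x1081 this yields a 120x68 grid of 16x16 workgroups, i.e. 1920x1088 invocations. The 7 extra rows (13,440 invocations) fall outside the image, which is precisely what the `if (x < width && y < height)` check in the shader filters out.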
== What's Next?

Understanding how workgroups map to hardware is the foundation of GPU compute. But mapping work to hardware is only part of the story; we also need to keep that hardware busy. In the next section, we'll talk about **Occupancy** and how to hide the massive latency of VRAM.

xref:01_introduction.adoc[Previous: Introduction] | xref:03_occupancy_and_latency_hiding.adoc[Next: Occupancy and Latency Hiding]
Lines changed: 70 additions & 0 deletions

@@ -0,0 +1,70 @@
:pp: {plus}{plus}

= Occupancy and Latency Hiding: Keeping the GPU Busy

== Introduction

In the previous section, we learned how workgroups are mapped to the GPU's factory floor (the Compute Units or SMs). But simply getting a workgroup onto a CU is only half the battle. If that workgroup is poorly designed, it might only use a fraction of the hardware's potential, leaving expensive silicon sitting idle.

To understand why this happens, we must talk about **Latency** and **Occupancy**.

== The Latency Gap

GPU compute is usually memory-bound. While a modern GPU can perform trillions of floating-point operations per second (**TFLOPS**), fetching a single piece of data from **VRAM** (Video Random Access Memory) can take hundreds or even thousands of clock cycles.

If a bundle of invocations (a warp or wavefront) needs to read from memory, it has to wait. If the CU only has one bundle to run, the entire CU goes silent until the data arrives. This stall, caused by **memory latency**, is a disaster for performance.

The GPU's solution is **Concurrency**. Instead of waiting for one bundle, the CU switches to another bundle that is ready to execute. The more bundles you have "in flight" on a single CU, the better you can hide the latency of memory fetches.

== Defining Occupancy

**Occupancy** is a measure of how many bundles are active on a CU compared to the theoretical maximum. It's often expressed as a percentage.

* **100% Occupancy**: The CU is completely packed with bundles. Whenever one waits for memory, there's almost certainly another one ready to go.
* **Low Occupancy**: Only a few bundles are active. If they all hit a memory fetch at the same time, the CU will stall.

=== The Resource Tug-of-War

You might wonder: "Why not just always dispatch thousands of threads?" The problem is that each Compute Unit has a fixed pool of physical resources. Every thread you add consumes a portion of that pool.

The three primary limiters of occupancy are:

1. **Registers**: Each thread needs a set of registers to store its variables. If your shader uses 128 registers, you can fit fewer threads than if it used 32.
2. **Shared Memory (LDS)**: This memory is shared by the whole workgroup. If your workgroup uses 32KB of LDS and the CU only has 64KB, you can only fit two workgroups on that CU, regardless of how many threads they have.
3. **Thread/Warp Slots**: There is a hard limit on how many threads the hardware scheduler can track at once (e.g., 2048 threads per CU).

|===
| Resource Usage | Impact on Occupancy | Result

| High Register Count
| **Negative**
| Fewer bundles per CU; harder to hide latency.

| High LDS Usage
| **Negative**
| Fewer workgroups per CU; limited concurrency.

| Small Workgroup Size
| **Neutral/Negative**
| May not fill all warp slots; scheduling overhead.
|===

== Calculating Theoretical Occupancy

Most GPU vendors provide tools (like NVIDIA's Nsight or AMD's RGP) that calculate occupancy for you. However, you can estimate it yourself by looking at your shader's resource usage.

If a CU has 64KB of shared memory and your workgroup uses 32KB, your CU can only ever host two workgroups at a time. If your workgroup size is small (say, 64 threads), you'll have 128 threads resident per CU. If the hardware can track 2048 threads, your occupancy is only 6.25%.
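That estimate generalizes to a min-over-limiters calculation. A simplified sketch, ignoring the register limiter for brevity; the `theoreticalOccupancy` name and its parameters are invented, and real limits come from vendor tools rather than hard-coded constants:

[source,cpp]
----
#include <algorithm>
#include <cstdint>

// Theoretical occupancy estimate: the scheduler fits as many workgroups as
// the scarcest resource allows, then resident threads are compared to the
// hardware's tracking limit.
double theoreticalOccupancy(uint32_t workgroupThreads, uint32_t workgroupLdsBytes,
                            uint32_t cuLdsBytes, uint32_t cuMaxThreads) {
    uint32_t byLds     = cuLdsBytes / workgroupLdsBytes;   // LDS-limited groups
    uint32_t byThreads = cuMaxThreads / workgroupThreads;  // slot-limited groups
    uint32_t resident  = std::min(byLds, byThreads) * workgroupThreads;
    return 100.0 * resident / cuMaxThreads;
}
----

Plugging in the worked example (64-thread workgroups, 32KB of LDS each, a CU with 64KB of LDS and 2048 thread slots) reproduces the 6.25% figure: LDS caps residency at two workgroups long before the thread slots run out.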
This is why "fat" shaders (those that use lots of registers or shared memory) often perform poorly unless they are carefully tuned.

== Monitoring Utilization

In a real engine, you don't just want to guess. Modern Vulkan engines use performance counters (via the `VK_KHR_performance_query` extension) to monitor hardware utilization in real time.

By tracking metrics like **ValuUtilization** (AMD) or **SM Active** (NVIDIA), you can see if your kernels are actually keeping the hardware busy. If you see high memory latency but low occupancy, you know you need to optimize your register usage or shared memory footprint.

== What's Next?

Now that we know how to keep the GPU busy, we need to make sure that when it *is* busy, it's being efficient. In the final section of this chapter, we'll look at **Scalar Layouts**—a Vulkan 1.4 feature that allows us to pack our data tightly and maximize the bandwidth we've worked so hard to hide.

xref:02_workgroups_and_invocations.adoc[Previous: Workgroups and Invocations] | xref:04_vulkan_1_4_scalar_layouts.adoc[Next: Vulkan 1.4 Scalar Layouts]
