diff --git a/docs/cloud/guides/microbatch-compiled-code.mdx b/docs/cloud/guides/microbatch-compiled-code.mdx
new file mode 100644
index 000000000..6831795d8
--- /dev/null
+++ b/docs/cloud/guides/microbatch-compiled-code.mdx
@@ -0,0 +1,8 @@
+---
+title: "Capture compiled code for microbatch models"
+sidebarTitle: "Microbatch compiled code"
+---
+
+import MicrobatchCompiledCode from '/snippets/guides/microbatch-compiled-code.mdx';
+
+<MicrobatchCompiledCode />
diff --git a/docs/data-tests/dbt/package-models.mdx b/docs/data-tests/dbt/package-models.mdx
index 65f204688..66e6e77a4 100644
--- a/docs/data-tests/dbt/package-models.mdx
+++ b/docs/data-tests/dbt/package-models.mdx
@@ -29,7 +29,7 @@ New data is loaded to this model on an on-run-end hook named `elementary.upload_
- `compile_completed_at` (string) - End time of resource compile action.
- `rows_affected` (int) - Number of rows affected by the execution.
- `full_refresh` (boolean) - Whether this was a full refresh execution.
-- `compiled_code` (string) - The compiled code (SQL / Python) executed against the database.
+- `compiled_code` (string) - The compiled code (SQL / Python) executed against the database. For microbatch incremental models, this column requires [extra setup](/cloud/guides/microbatch-compiled-code).
- `failures` (int) - Number of failures in this run.
- `query_id` (string) - Query ID in the data warehouse, if returned by the adapter (currently only supported in Snowflake, is null for any other adapter).
- `thread_id` (string) - Id of the thread of this resource run.
diff --git a/docs/docs.json b/docs/docs.json
index ae7a5552b..6b396a7a2 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -187,6 +187,7 @@
"cloud/guides/reduce-on-run-end-time",
"cloud/guides/collect-job-data",
"cloud/guides/collect-source-freshness",
+ "cloud/guides/microbatch-compiled-code",
"cloud/guides/troubleshoot"
]
},
@@ -521,7 +522,8 @@
"oss/guides/collect-job-data",
"oss/guides/collect-dbt-source-freshness",
"oss/guides/reduce-on-run-end-time",
- "oss/guides/performance-alerts"
+ "oss/guides/performance-alerts",
+ "oss/guides/microbatch-compiled-code"
]
},
{
diff --git a/docs/oss/guides/microbatch-compiled-code.mdx b/docs/oss/guides/microbatch-compiled-code.mdx
new file mode 100644
index 000000000..6831795d8
--- /dev/null
+++ b/docs/oss/guides/microbatch-compiled-code.mdx
@@ -0,0 +1,8 @@
+---
+title: "Capture compiled code for microbatch models"
+sidebarTitle: "Microbatch compiled code"
+---
+
+import MicrobatchCompiledCode from '/snippets/guides/microbatch-compiled-code.mdx';
+
+<MicrobatchCompiledCode />
diff --git a/docs/snippets/guides/microbatch-compiled-code.mdx b/docs/snippets/guides/microbatch-compiled-code.mdx
new file mode 100644
index 000000000..f769404ba
--- /dev/null
+++ b/docs/snippets/guides/microbatch-compiled-code.mdx
@@ -0,0 +1,51 @@
+Elementary can capture and store the compiled SQL of [dbt microbatch incremental models](https://docs.getdbt.com/docs/build/incremental-microbatch) in `dbt_run_results.compiled_code`. By default, dbt does not surface compiled code for the microbatch strategy, so this column stays empty for microbatch models until you complete the setup below.
+
+## How it works
+
+Elementary provides an override macro for dbt's `get_incremental_microbatch_sql` that captures the compiled SQL of each batch as it runs. The captured code is cached during the invocation and later written to `dbt_run_results.compiled_code`, so microbatch models populate this column the same way other incremental strategies do.
+
+## Enabling microbatch compiled code capture
+
+<Steps>
+<Step title="Add an override macro">
+ Add a macro that delegates to Elementary's implementation. Place it under your project's `macros/` directory:
+
+ ```sql filename="macros/get_incremental_microbatch_sql.sql"
+ {% macro get_incremental_microbatch_sql(arg_dict) %}
+   {{ return(elementary.get_incremental_microbatch_sql(arg_dict)) }}
+ {% endmacro %}
+ ```
+
+</Step>
+<Step title="Enable batched execution for the custom strategy">
+ Add the `require_batched_execution_for_custom_microbatch_strategy` flag to your `dbt_project.yml`:
+
+ ```yaml filename="dbt_project.yml"
+ flags:
+   require_batched_execution_for_custom_microbatch_strategy: True
+ ```
+
+ This flag tells dbt to use your project-level override of the microbatch strategy with batched execution.
+
+</Step>
+<Step title="Run your project">
+ On the next `dbt run` or `dbt build`, Elementary captures the compiled SQL of each microbatch model and writes it to `dbt_run_results.compiled_code`.
+</Step>
+</Steps>
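+
+After a run completes, you can verify the capture with a query along these lines. This is an illustrative sketch: `<elementary_schema>` is a placeholder for the schema where your Elementary models are materialized, and the filter assumes you only want rows where compiled code was captured.
+
+```sql
+-- Replace <elementary_schema> with the schema of your Elementary models.
+select compiled_code, rows_affected, full_refresh
+from <elementary_schema>.dbt_run_results
+where compiled_code is not null
+order by compile_completed_at desc
+```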
+## Unsupported configurations
+
+The override flow is currently not supported on the following adapters:
+
+- Spark
+- BigQuery
+- Athena
+- ClickHouse
+- Dremio
+- Vertica
+
+It is also not supported on dbt Fusion.
+
+On unsupported adapters and on Fusion, microbatch models continue to run normally, but `dbt_run_results.compiled_code` remains empty for them.
+