Merged
8 changes: 8 additions & 0 deletions docs/cloud/guides/microbatch-compiled-code.mdx
@@ -0,0 +1,8 @@
---
title: "Capture compiled code for microbatch models"
sidebarTitle: "Microbatch compiled code"
---

import MicrobatchCompiledCode from '/snippets/guides/microbatch-compiled-code.mdx';

<MicrobatchCompiledCode />
2 changes: 1 addition & 1 deletion docs/data-tests/dbt/package-models.mdx
@@ -29,7 +29,7 @@ New data is loaded to this model on an on-run-end hook named `elementary.upload_
- `compile_completed_at` (string) - End time of resource compile action.
- `rows_affected` (int) - Number of rows affected by the execution.
- `full_refresh` (boolean) - Whether this was a full refresh execution.
- `compiled_code` (string) - The compiled code (SQL / Python) executed against the database.
- `compiled_code` (string) - The compiled code (SQL / Python) executed against the database. For microbatch incremental models, this column requires [extra setup](/cloud/guides/microbatch-compiled-code).
- `failures` (int) - Number of failures in this run.
- `query_id` (string) - Query ID in the data warehouse, if returned by the adapter (currently only supported in Snowflake, is null for any other adapter).
- `thread_id` (string) - Id of the thread of this resource run.
4 changes: 3 additions & 1 deletion docs/docs.json
Expand Up @@ -187,6 +187,7 @@
"cloud/guides/reduce-on-run-end-time",
"cloud/guides/collect-job-data",
"cloud/guides/collect-source-freshness",
"cloud/guides/microbatch-compiled-code",
"cloud/guides/troubleshoot"
]
},
@@ -521,7 +522,8 @@
"oss/guides/collect-job-data",
"oss/guides/collect-dbt-source-freshness",
"oss/guides/reduce-on-run-end-time",
"oss/guides/performance-alerts"
"oss/guides/performance-alerts",
"oss/guides/microbatch-compiled-code"
]
},
{
8 changes: 8 additions & 0 deletions docs/oss/guides/microbatch-compiled-code.mdx
@@ -0,0 +1,8 @@
---
title: "Capture compiled code for microbatch models"
sidebarTitle: "Microbatch compiled code"
---

import MicrobatchCompiledCode from '/snippets/guides/microbatch-compiled-code.mdx';

<MicrobatchCompiledCode />
51 changes: 51 additions & 0 deletions docs/snippets/guides/microbatch-compiled-code.mdx
@@ -0,0 +1,51 @@
Elementary can capture and store the compiled SQL of [dbt microbatch incremental models](https://docs.getdbt.com/docs/build/incremental-microbatch) in `dbt_run_results.compiled_code`. By default, dbt does not surface compiled code for the microbatch strategy, so this column stays empty for microbatch models until you enable the setup below.

## How it works

Elementary provides an override macro for dbt's `get_incremental_microbatch_sql` that captures the compiled SQL of each batch as it runs. The captured code is cached during the invocation and later written to `dbt_run_results.compiled_code`, so microbatch models populate this column the same way other incremental strategies do.

## Enabling microbatch compiled code capture

<Steps>
<Step title="Override the microbatch strategy macro in your project">
Add a macro that delegates to Elementary's implementation. Place it under your project's `macros/` directory:

```sql filename="macros/get_incremental_microbatch_sql.sql"
{% macro get_incremental_microbatch_sql(arg_dict) %}
  {{ return(elementary.get_incremental_microbatch_sql(arg_dict)) }}
{% endmacro %}
```
</Step>

<Step title="Enable the dbt behavior flag">
Add the `require_batched_execution_for_custom_microbatch_strategy` flag to your `dbt_project.yml`:

```yaml filename="dbt_project.yml"
flags:
  require_batched_execution_for_custom_microbatch_strategy: True
```

This flag tells dbt to use your project-level override of the microbatch strategy with batched execution.
</Step>

<Step title="Run your microbatch models">
On the next `dbt run` or `dbt build`, Elementary captures the compiled SQL of each microbatch model and writes it to `dbt_run_results.compiled_code`.
</Step>
</Steps>
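
After a run completes, you can confirm the capture with a quick query against the run results table. This is an illustrative sketch only: the schema `analytics.elementary` and the model name `my_microbatch_model` are placeholders for your own Elementary schema and microbatch model.

```sql
-- Illustrative check: substitute your own Elementary schema and model name.
select
  name,
  generated_at,
  compiled_code
from analytics.elementary.dbt_run_results
where name = 'my_microbatch_model'
  and compiled_code is not null
order by generated_at desc
limit 1
```

If the query returns a row with a non-null `compiled_code`, the override macro and behavior flag are working as expected.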

## Unsupported configurations

<Warning>
The override flow is currently not supported on the following adapters:

- Spark
- BigQuery
- Athena
- ClickHouse
- Dremio
- Vertica

It is also not supported on dbt Fusion.

On unsupported adapters and on Fusion, microbatch models continue to run normally but `dbt_run_results.compiled_code` remains empty for them.
</Warning>