updates for v5.0 #2541
`installation/upgrade-notes.md` (@@ -16,6 +16,32 @@)
If you have Prometheus dashboards or alerting rules that reference `fluentbit_hot_reloaded_times`, update them to use counter-appropriate PromQL functions (for example, `rate()` or `increase()` instead of gauge-specific functions like `delta()`).

### Shared HTTP listener settings for HTTP-based inputs

The HTTP-based input plugins now use a shared HTTP listener configuration model. In Fluent Bit `v5.0`, the canonical setting names are:

- `http_server.http2`
- `http_server.buffer_chunk_size`
- `http_server.buffer_max_size`
- `http_server.max_connections`
- `http_server.workers`

Legacy per-plugin names such as `http2`, `buffer_chunk_size`, and `buffer_max_size` are still accepted as compatibility aliases, but new configurations should use the `http_server.*` names.

If you tune `http`, `splunk`, `elasticsearch`, `opentelemetry`, or `prometheus_remote_write` inputs, review those sections and migrate to the shared naming so future upgrades are clearer.
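As an illustrative sketch (the port and buffer sizes are placeholder values, not recommendations), an `http` input migrated to the shared names might look like:

```yaml
pipeline:
  inputs:
    - name: http
      listen: 0.0.0.0
      port: 8888
      # Shared listener settings (v5.0 canonical names); previously
      # these were the bare aliases http2, buffer_chunk_size, and
      # buffer_max_size.
      http_server.http2: on
      http_server.buffer_chunk_size: 512K
      http_server.buffer_max_size: 4M
      http_server.max_connections: 512
      http_server.workers: 2
```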
### Mutual TLS for input plugins

Input plugins that support TLS now also support `tls.verify_client_cert`. Enable this option to require and validate the client certificate presented by the sender.

If you run TLS directly in Fluent Bit and need mutual TLS (`mTLS`), add `tls.verify_client_cert on` together with the usual `tls.crt_file` and `tls.key_file` settings.
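A minimal YAML sketch, assuming the conventional `tls.ca_file` option for the client CA; the certificate paths and port are placeholders:

```yaml
pipeline:
  inputs:
    - name: http
      listen: 0.0.0.0
      port: 9443
      tls: on
      tls.crt_file: /etc/fluent-bit/certs/server.crt
      tls.key_file: /etc/fluent-bit/certs/server.key
      # Require and validate the sender's client certificate (mTLS)
      tls.verify_client_cert: on
      # CA bundle used to validate client certificates (path is illustrative)
      tls.ca_file: /etc/fluent-bit/certs/ca.crt
```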
### New internal logs input

Fluent Bit `v5.0` adds the `fluentbit_logs` input plugin, which mirrors the internal log stream of Fluent Bit back into the data pipeline as structured log records.
Use this input if you want to forward Fluent Bit diagnostics to another destination, filter them, or store them alongside the rest of your telemetry.
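A minimal sketch of wiring the new input to a destination (the tag value is illustrative):

```yaml
pipeline:
  inputs:
    - name: fluentbit_logs
      tag: fb.internal
  outputs:
    - name: stdout
      match: fb.internal
```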
### Emitter backpressure with filesystem storage

The internal emitter plugin, used by filters such as `rewrite_tag`, now automatically enables `storage.pause_on_chunks_overlimit` when filesystem storage is in use and that option hasn't been explicitly configured.
@@ -24,6 +50,17 @@
If you rely on the previous unlimited accumulation behavior, explicitly set `storage.pause_on_chunks_overlimit off` on the relevant input. Otherwise, review your `storage.max_chunks_up` value to ensure it's tuned for your expected throughput.
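A hedged sketch showing both choices on a filesystem-backed input (the port and chunk count are placeholders):

```yaml
pipeline:
  inputs:
    - name: http
      port: 8888
      storage.type: filesystem
      # Opt out of the new v5.0 default and keep the previous
      # unlimited accumulation behavior:
      storage.pause_on_chunks_overlimit: off
      # Or keep the new default and instead size the in-memory window:
      # storage.max_chunks_up: 128
```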
### More `OAuth 2.0` coverage

Fluent Bit `v5.0` expands `OAuth 2.0` support in both directions:

- HTTP-based inputs can validate incoming bearer tokens using `oauth2.validate`, `oauth2.issuer`, and `oauth2.jwks_url`.
- The HTTP output can acquire access tokens with `oauth2.enable` and supports `basic`, `post`, and `private_key_jwt` client authentication.
If you previously handled authentication outside Fluent Bit for these cases, review the plugin pages for the new built-in options.

For a broader overview of user-visible additions in this release, see [What's new in Fluent Bit v5.0](whats-new-in-fluent-bit-v5.0.md).

## Fluent Bit v4.2

### Vivo exporter output plugin
`installation/whats-new-in-fluent-bit-v5.0.md` (@@ -0,0 +1,161 @@)
# What's new in Fluent Bit v5.0

Fluent Bit `v5.0` adds new inputs and processors, expands authentication and TLS options, and standardizes configuration for HTTP-based plugins. It also delivers an important round of performance and scalability work, especially for pipelines that ingest logs, metrics, and traces through HTTP-based protocols. This page gives a quick, user-focused overview of the main changes since Fluent Bit `v4.2`.

For migration-impacting changes, see [Upgrade notes](upgrade-notes.md).
## Performance and scalability

### Unified processing and delivery model

Fluent Bit `v5.0` continues the move toward a more unified runtime for logs, metrics, and traces. In practice, this means the same core engine improvements benefit more of the pipeline, instead of individual signal paths evolving separately.

For end users, the result is more consistent behavior across telemetry types and a better base for high-throughput pipelines that mix logs, metrics, and traces in the same deployment.

### Refactored HTTP stack

One of the most important `v5.0` changes is the refactoring of the HTTP listener stack used by several input plugins. Fluent Bit now uses a shared HTTP server implementation across the major HTTP-based receivers instead of maintaining separate code paths.

This work improves:

- concurrency through shared listener worker support
- consistency of request handling across HTTP-based inputs
- buffer enforcement and connection handling
- maintainability, which reduces drift between plugin implementations

The biggest user-facing beneficiaries are:

- [HTTP input](../pipeline/inputs/http.md)
- [Splunk input](../pipeline/inputs/splunk.md)
- [Elasticsearch input](../pipeline/inputs/elasticsearch.md)
- [OpenTelemetry input](../pipeline/inputs/opentelemetry.md)
- [Prometheus remote write input](../pipeline/inputs/prometheus-remote-write.md)

If you run large HTTP or OTLP ingestion workloads, `v5.0` is not only a feature release but also a meaningful runtime improvement.
## Configuration and operations

### Shared HTTP listener settings

HTTP-based inputs now use a shared listener configuration model. The preferred setting names are:

- `http_server.http2`
- `http_server.buffer_chunk_size`
- `http_server.buffer_max_size`
- `http_server.max_connections`
- `http_server.workers`

Legacy aliases such as `http2`, `buffer_chunk_size`, and `buffer_max_size` still work, but new configurations should use the `http_server.*` names.

Affected plugin families include:

- [HTTP input](../pipeline/inputs/http.md)
- [Splunk input](../pipeline/inputs/splunk.md)
- [Elasticsearch input](../pipeline/inputs/elasticsearch.md)
- [OpenTelemetry input](../pipeline/inputs/opentelemetry.md)
- [Prometheus remote write input](../pipeline/inputs/prometheus-remote-write.md)
### Mutual TLS for inputs

Input plugins that support TLS can now require client certificate verification with `tls.verify_client_cert`. This makes it easier to run mutual TLS (`mTLS`) directly on Fluent Bit listeners.

See [TLS](../administration/transport-security.md).
### JSON health endpoint in API v2

The built-in HTTP server exposes `/api/v2/health`, which returns health status as JSON and uses the HTTP status code to indicate a healthy (`200`) or unhealthy (`500`) state.

See [Monitoring](../administration/monitoring.md).
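A sketch of enabling the built-in server and probing the endpoint; the listener address and port shown are the conventional defaults and should be treated as assumptions:

```yaml
service:
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
# Then probe health status, for example from a liveness check:
#   curl -i http://127.0.0.1:2020/api/v2/health
# A 200 response indicates healthy; 500 indicates unhealthy.
```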
## Inputs

### New `fluentbit_logs` input

The [Fluent Bit logs input](../pipeline/inputs/fluentbit-logs.md) routes Fluent Bit internal logs back into the pipeline as structured records. This lets you forward agent diagnostics to any supported destination.
### HTTP input remote address capture

The [HTTP input](../pipeline/inputs/http.md) adds:

- `add_remote_addr`
- `remote_addr_key`

These settings let you attach the client address from `X-Forwarded-For` to each ingested record.
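A minimal sketch (the port and key name are illustrative):

```yaml
pipeline:
  inputs:
    - name: http
      port: 8888
      # Attach the client address to each ingested record
      add_remote_addr: on
      # Record key to store the address under (name is a placeholder)
      remote_addr_key: client_ip
```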
### `OAuth 2.0` bearer token validation on HTTP-based inputs

HTTP-based receivers can validate incoming bearer tokens with:

- `oauth2.validate`
- `oauth2.issuer`
- `oauth2.jwks_url`
- `oauth2.allowed_audience`
- `oauth2.allowed_clients`
- `oauth2.jwks_refresh_interval`

This is available on the relevant input plugins, including [HTTP](../pipeline/inputs/http.md) and [OpenTelemetry](../pipeline/inputs/opentelemetry.md).
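A hedged sketch using the settings listed above; the issuer, JWKS URL, audience, and refresh interval are placeholder values:

```yaml
pipeline:
  inputs:
    - name: opentelemetry
      port: 4318
      # Validate incoming bearer tokens against the issuer's JWKS
      oauth2.validate: on
      oauth2.issuer: https://auth.example.com/
      oauth2.jwks_url: https://auth.example.com/.well-known/jwks.json
      oauth2.allowed_audience: fluent-bit-ingest
      oauth2.jwks_refresh_interval: 300
```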
### OpenTelemetry input improvements

The [OpenTelemetry input](../pipeline/inputs/opentelemetry.md) in `v5.0` expands user-visible behavior with:

- shared HTTP listener worker support
- `OAuth 2.0` bearer token validation
- stable JSON metrics ingestion over `OTLP/HTTP`
- improved JSON trace validation and error reporting
### Kubernetes events state database controls

The [Kubernetes events input](../pipeline/inputs/kubernetes-events.md) documents additional SQLite controls:

- `db.journal_mode`
- `db.locking`

These settings help tune event cursor persistence and database access behavior.
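A hedged sketch (the database path and option values are illustrative; check the plugin page for the accepted values):

```yaml
pipeline:
  inputs:
    - name: kubernetes_events
      db: /var/log/flb_kube_events.db
      # SQLite tuning controls documented in v5.0
      db.journal_mode: WAL
      db.locking: true
```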
## Processors

### New cumulative-to-delta processor

The [cumulative-to-delta processor](../pipeline/processors/cumulative-to-delta.md) converts cumulative monotonic metrics to delta values for deployments that scrape Prometheus-style metrics but export to backends that expect deltas.
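As a sketch, such a processor could be attached to a metrics input like this; the processor name `cumulative_to_delta` is inferred from the page title and should be verified against the processor page:

```yaml
pipeline:
  inputs:
    - name: prometheus_scrape
      host: 127.0.0.1
      port: 9100
      processors:
        metrics:
          # Convert cumulative monotonic metrics to deltas before export
          - name: cumulative_to_delta
```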
### New topological data analysis processor

The [topological data analysis processor](../pipeline/processors/tda.md) adds a metrics processor for shape-based analysis workflows.
### Sampling processor updates

The [sampling processor](../pipeline/processors/sampling.md) adds `legacy_reconcile` for tail sampling, which helps compare the optimized reconciler with the previous behavior when validating upgrades.
## Outputs

### HTTP output `OAuth 2.0` client credentials

The [HTTP output](../pipeline/outputs/http.md) now supports built-in `OAuth 2.0` client credentials with:

- `basic`
- `post`
- `private_key_jwt`

You can configure token acquisition directly in Fluent Bit with the `oauth2.*` settings.
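A hedged sketch of the shape of an `oauth2.*` block: `oauth2.enable` is documented above, while the token URL, client-credential, and authentication-method property names shown here are assumptions for illustration, not confirmed option names:

```yaml
pipeline:
  outputs:
    - name: http
      match: '*'
      host: collector.example.com
      port: 443
      tls: on
      # Acquire an access token before sending
      oauth2.enable: on
      # The following names are assumed; verify against the plugin page
      oauth2.token_url: https://auth.example.com/oauth/token
      oauth2.client_id: fluent-bit
      oauth2.client_secret: ${OAUTH_CLIENT_SECRET}
      # One of: basic, post, private_key_jwt
      oauth2.auth_type: post
```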
### More compression options for cloud outputs

Several outputs gained additional compression support in the `v4.2` to `v5.0` range:

- [Amazon Kinesis Data Streams](../pipeline/outputs/kinesis.md): `gzip`, `zstd`, `snappy`
- [Amazon Kinesis Data Firehose](../pipeline/outputs/firehose.md): `snappy` added alongside existing compression algorithms
- [Amazon S3](../pipeline/outputs/s3.md): `snappy` added alongside existing compression algorithms
- [Azure Blob](../pipeline/outputs/azure_blob.md): `zstd` support for transfer compression
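For example, for the S3 output (the bucket and region are placeholders; `compression` is the plugin's usual option name):

```yaml
pipeline:
  outputs:
    - name: s3
      match: '*'
      bucket: my-log-bucket
      region: us-east-1
      # New in this range: snappy, alongside the existing algorithms
      compression: snappy
```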
## Monitoring changes

### `fluentbit_hot_reloaded_times` is now a counter

The `fluentbit_hot_reloaded_times` metric changed from a gauge to a counter, which makes it safe to use with PromQL functions such as `rate()` and `increase()`.
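For example, a dashboard panel can now track reload frequency with counter-safe expressions (the time windows are illustrative):

```promql
# Hot reloads per second, averaged over 5 minutes
rate(fluentbit_hot_reloaded_times[5m])

# Total reloads over the last hour
increase(fluentbit_hot_reloaded_times[1h])
```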
### New output backpressure visibility

`v5.0` adds output backpressure duration metrics so you can observe time spent waiting because of downstream pressure.

See [Monitoring](../administration/monitoring.md).