diff --git a/README.md b/README.md index 7927a15c9..e4bf816c1 100644 --- a/README.md +++ b/README.md @@ -37,6 +37,8 @@ description: High Performance Telemetry Agent for Logs, Metrics and Traces For more details about changes in each release, refer to the [official release notes](https://fluentbit.io/announcements/). +If you are upgrading from the Fluent Bit `4.2` series, start with [What's new in Fluent Bit v5.0](installation/whats-new-in-fluent-bit-v5.0.md) and [Upgrade notes](installation/upgrade-notes.md). + ## Fluent Bit, Fluentd, and CNCF Fluent Bit is a [CNCF](https://www.cncf.io/) graduated sub-project under the umbrella of [Fluentd](https://www.fluentd.org). diff --git a/SUMMARY.md b/SUMMARY.md index 83773871c..18d8948ee 100644 --- a/SUMMARY.md +++ b/SUMMARY.md @@ -33,6 +33,7 @@ * [Kubernetes](installation/downloads/kubernetes.md) * [macOS](installation/downloads/macos.md) * [Windows](installation/downloads/windows.md) +* [What's new in Fluent Bit v5.0](installation/whats-new-in-fluent-bit-v5.0.md) * [Upgrade notes](installation/upgrade-notes.md) ## Administration diff --git a/administration/configuring-fluent-bit/yaml/pipeline-section.md b/administration/configuring-fluent-bit/yaml/pipeline-section.md index 39689190d..ce2c7a0b9 100644 --- a/administration/configuring-fluent-bit/yaml/pipeline-section.md +++ b/administration/configuring-fluent-bit/yaml/pipeline-section.md @@ -77,6 +77,35 @@ The `inputs` section defines one or more [input plugins](../../../pipeline/input The `name` parameter is required and defines for Fluent Bit which input plugin should be loaded. The `tag` parameter is required for all plugins except for the `forward` plugin, which provides dynamic tags. 
+### Shared HTTP listener settings for inputs
+
+Some HTTP-based input plugins share the same listener implementation and support the following common settings in addition to their plugin-specific parameters:
+
+| Key | Description | Default |
+| --- | ----------- | ------- |
+| `http_server.http2` | Enable HTTP/2 support for the input listener. | `true` |
+| `http_server.buffer_max_size` | Set the maximum size of the HTTP request buffer. | `4M` |
+| `http_server.buffer_chunk_size` | Set the allocation chunk size used for the HTTP request buffer. | `512K` |
+| `http_server.max_connections` | Set the maximum number of concurrent active HTTP connections. `0` means unlimited. | `0` |
+| `http_server.workers` | Set the number of HTTP listener worker threads. | `1` |
+
+For backward compatibility, some plugins also accept the legacy aliases `http2`, `buffer_max_size`, `buffer_chunk_size`, `max_connections`, and `workers`.
+
+### Incoming `OAuth 2.0` `JWT` validation settings
+
+The HTTP-based input plugins that support bearer token validation share the following `oauth2.*` settings:
+
+| Key | Description | Default |
+| --- | ----------- | ------- |
+| `oauth2.validate` | Enable `OAuth 2.0` `JWT` validation for incoming requests. | `false` |
+| `oauth2.issuer` | Expected issuer (`iss`) claim. Required when `oauth2.validate` is `true`. | _none_ |
+| `oauth2.jwks_url` | `JWKS` endpoint URL used to fetch public keys for token validation. Required when `oauth2.validate` is `true`. | _none_ |
+| `oauth2.allowed_audience` | Audience claim to enforce when validating tokens. | _none_ |
+| `oauth2.allowed_clients` | Authorized `client_id` or `azp` claim values. This key can be specified multiple times. | _none_ |
+| `oauth2.jwks_refresh_interval` | How often in seconds to refresh cached `JWKS` keys. | `300` |
+
+When validation is enabled, requests without a valid `Authorization: Bearer <token>` header are rejected.
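+
+Taken together, these settings can be combined on a single HTTP-based input. The following is a hedged sketch; the port, issuer, and `JWKS` URLs are placeholder values:
+
+```yaml
+pipeline:
+  inputs:
+    - name: http
+      listen: 0.0.0.0
+      port: 8888
+      # Shared listener tuning
+      http_server.workers: 2
+      http_server.max_connections: 1024
+      # Incoming bearer token validation (placeholder endpoints)
+      oauth2.validate: true
+      oauth2.issuer: https://idp.example.com/
+      oauth2.jwks_url: https://idp.example.com/.well-known/jwks.json
+      oauth2.allowed_audience: fluent-bit
+```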
+ ### Example input configuration The following is an example of an `inputs` section that contains a `cpu` plugin. @@ -127,6 +156,29 @@ The `outputs` section defines one or more [output plugins](../../../pipeline/out Fluent Bit can route up to 256 output plugins. +### Outgoing `OAuth 2.0` client credentials settings + +Output plugins that support outgoing `OAuth 2.0` authentication can expose the following shared `oauth2.*` settings: + +| Key | Description | Default | +| --- | ----------- | ------- | +| `oauth2.enable` | Enable `OAuth 2.0` client credentials for outgoing requests. | `false` | +| `oauth2.token_url` | Token endpoint URL. | _none_ | +| `oauth2.client_id` | Client ID. | _none_ | +| `oauth2.client_secret` | Client secret. | _none_ | +| `oauth2.scope` | Optional scope parameter. | _none_ | +| `oauth2.audience` | Optional audience parameter. | _none_ | +| `oauth2.resource` | Optional resource parameter. | _none_ | +| `oauth2.auth_method` | Client authentication method. Supported values: `basic`, `post`, `private_key_jwt`. | `basic` | +| `oauth2.jwt_key_file` | PEM private key file used with `private_key_jwt`. | _none_ | +| `oauth2.jwt_cert_file` | Certificate file used to derive the `kid` or `x5t` header value for `private_key_jwt`. | _none_ | +| `oauth2.jwt_aud` | Audience to use in `private_key_jwt` assertions. Defaults to `oauth2.token_url` when unset. | _none_ | +| `oauth2.jwt_header` | JWT header claim name used for the thumbprint. Supported values: `kid`, `x5t`. | `kid` | +| `oauth2.jwt_ttl_seconds` | Lifetime in seconds for `private_key_jwt` client assertions. | `300` | +| `oauth2.refresh_skew_seconds` | Seconds before expiry at which to refresh the access token. | `60` | +| `oauth2.timeout` | Timeout for token requests. | `0s` | +| `oauth2.connect_timeout` | Connect timeout for token requests. 
| `0s` | + ### Example output configuration The following is an example of an `outputs` section that contains a `stdout` plugin: diff --git a/administration/configuring-fluent-bit/yaml/service-section.md b/administration/configuring-fluent-bit/yaml/service-section.md index 37842a084..2a1e6f557 100644 --- a/administration/configuring-fluent-bit/yaml/service-section.md +++ b/administration/configuring-fluent-bit/yaml/service-section.md @@ -36,6 +36,8 @@ The `service` section of YAML configuration files defines global properties of t | `streams_file` | Path for the [stream processor](../../../stream-processing/overview.md) configuration file. This file defines the rules and operations for stream processing in Fluent Bit. Stream processor configurations can also be defined directly in the `streams` section of YAML configuration files. | _none_ | | `windows.maxstdio` | If specified, adjusts the limit of `stdio`. Only provided for Windows. Values from `512` to `2048` are allowed. | `512` | +The `service` section only controls the built-in monitoring and control HTTP server. Plugin-specific HTTP listener settings such as `http_server.http2`, `http_server.buffer_max_size`, `http_server.buffer_chunk_size`, `http_server.max_connections`, and `http_server.workers` are configured on the relevant input plugin in the [`pipeline.inputs`](../yaml/pipeline-section.md#shared-http-listener-settings-for-inputs) section. 
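+
+For example, the control server and a plugin listener are tuned in different places. The ports below are illustrative:
+
+```yaml
+service:
+  # Built-in monitoring and control HTTP server
+  http_server: on
+  http_listen: 0.0.0.0
+  http_port: 2020
+
+pipeline:
+  inputs:
+    # Plugin-specific listener settings live on the input itself
+    - name: http
+      port: 8888
+      http_server.workers: 2
+```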
+ ## Storage configuration The following storage-related keys can be set as children to the `storage` key: diff --git a/installation/upgrade-notes.md b/installation/upgrade-notes.md index 4f099aadf..c866c80b1 100644 --- a/installation/upgrade-notes.md +++ b/installation/upgrade-notes.md @@ -16,6 +16,32 @@ The internal metric `fluentbit_hot_reloaded_times` has changed from a gauge to a If you have Prometheus dashboards or alerting rules that reference `fluentbit_hot_reloaded_times`, update them to use counter-appropriate PromQL functions (for example, `rate()` or `increase()` instead of gauge-specific functions like `delta()`). +### Shared HTTP listener settings for HTTP-based inputs + +The HTTP-based input plugins now use a shared HTTP listener configuration model. In Fluent Bit `v5.0`, the canonical setting names are: + +- `http_server.http2` +- `http_server.buffer_chunk_size` +- `http_server.buffer_max_size` +- `http_server.max_connections` +- `http_server.workers` + +Legacy per-plugin names such as `http2`, `buffer_chunk_size`, and `buffer_max_size` are still accepted as compatibility aliases, but new configurations should use the `http_server.*` names. + +If you tune `http`, `splunk`, `elasticsearch`, `opentelemetry`, or `prometheus_remote_write` inputs, review those sections and migrate to the shared naming so future upgrades are clearer. + +### Mutual TLS for input plugins + +Input plugins that support TLS now also support `tls.verify_client_cert`. Enable this option to require and validate the client certificate presented by the sender. + +If you terminate TLS directly in Fluent Bit and need mutual TLS (`mTLS`), add `tls.verify_client_cert on` together with the usual `tls.crt_file` and `tls.key_file` settings. + +### New internal logs input + +Fluent Bit `v5.0` adds the `fluentbit_logs` input plugin, which mirrors Fluent Bit's own internal log stream back into the data pipeline as structured log records. 
+ +Use this input if you want to forward Fluent Bit diagnostics to another destination, filter them, or store them alongside the rest of your telemetry. + ### Emitter backpressure with filesystem storage The internal emitter plugin, used by filters such as `rewrite_tag`, now automatically enables `storage.pause_on_chunks_overlimit` when filesystem storage is in use and that option hasn't been explicitly configured. @@ -24,6 +50,17 @@ Previously, the emitter could accumulate chunks beyond the `storage.max_chunks_u If you rely on the previous unlimited accumulation behavior, explicitly set `storage.pause_on_chunks_overlimit off` on the relevant input. Otherwise, review your `storage.max_chunks_up` value to ensure it's tuned for your expected throughput. +### More `OAuth 2.0` coverage + +Fluent Bit `v5.0` expands `OAuth 2.0` support in both directions: + +- HTTP-based inputs can validate incoming bearer tokens using `oauth2.validate`, `oauth2.issuer`, and `oauth2.jwks_url`. +- The HTTP output can acquire access tokens with `oauth2.enable` and supports `basic`, `post`, and `private_key_jwt` client authentication. + +If you previously handled authentication outside Fluent Bit for these cases, review the plugin pages for the new built-in options. + +For a broader overview of user-visible additions in this release, see [What's new in Fluent Bit v5.0](whats-new-in-fluent-bit-v5.0.md). + ## Fluent Bit v4.2 ### Vivo exporter output plugin diff --git a/installation/whats-new-in-fluent-bit-v5.0.md b/installation/whats-new-in-fluent-bit-v5.0.md new file mode 100644 index 000000000..63eb30666 --- /dev/null +++ b/installation/whats-new-in-fluent-bit-v5.0.md @@ -0,0 +1,161 @@ +# What's new in Fluent Bit v5.0 + +Fluent Bit `v5.0` adds new inputs and processors, expands authentication and TLS options, and standardizes configuration for HTTP-based plugins. 
It also delivers an important round of performance and scalability work, especially for pipelines that ingest logs, metrics, and traces through HTTP-based protocols. This page gives a quick user-focused overview of the main changes since Fluent Bit `v4.2`. + +For migration-impacting changes, see [Upgrade notes](upgrade-notes.md). + +## Performance and scalability + +### Unified processing and delivery model + +Fluent Bit `v5.0` continues the move toward a more unified runtime for logs, metrics, and traces. In practice, this means the same core engine improvements benefit more of the pipeline, instead of individual signal paths evolving separately. + +For end users, the result is a more consistent behavior across telemetry types and a better base for high-throughput pipelines that mix logs, metrics, and traces in the same deployment. + +### Refactored HTTP stack + +One of the most important `v5.0` changes is the refactoring of the HTTP listener stack used by several input plugins. Fluent Bit now uses a shared HTTP server implementation across the major HTTP-based receivers instead of maintaining separate code paths. + +This work improves: + +- concurrency through shared listener worker support +- consistency of request handling across HTTP-based inputs +- buffer enforcement and connection handling +- maintainability, which reduces drift between plugin implementations + +The biggest user-facing beneficiaries are: + +- [HTTP input](../pipeline/inputs/http.md) +- [Splunk input](../pipeline/inputs/splunk.md) +- [Elasticsearch input](../pipeline/inputs/elasticsearch.md) +- [OpenTelemetry input](../pipeline/inputs/opentelemetry.md) +- [Prometheus remote write input](../pipeline/inputs/prometheus-remote-write.md) + +If you run large HTTP or OTLP ingestion workloads, `v5.0` is not only a feature release. It is also a meaningful runtime improvement. 
+ +## Configuration and operations + +### Shared HTTP listener settings + +HTTP-based inputs now use a shared listener configuration model. The preferred setting names are: + +- `http_server.http2` +- `http_server.buffer_chunk_size` +- `http_server.buffer_max_size` +- `http_server.max_connections` +- `http_server.workers` + +Legacy aliases such as `http2`, `buffer_chunk_size`, and `buffer_max_size` still work, but new configurations should use the `http_server.*` names. + +Affected plugin families include: + +- [HTTP input](../pipeline/inputs/http.md) +- [Splunk input](../pipeline/inputs/splunk.md) +- [Elasticsearch input](../pipeline/inputs/elasticsearch.md) +- [OpenTelemetry input](../pipeline/inputs/opentelemetry.md) +- [Prometheus remote write input](../pipeline/inputs/prometheus-remote-write.md) + +### Mutual TLS for inputs + +Input plugins that support TLS can now require client certificate verification with `tls.verify_client_cert`. This makes it easier to run mutual TLS (`mTLS`) directly on Fluent Bit listeners. + +See [TLS](../administration/transport-security.md). + +### JSON health endpoint in API v2 + +The built-in HTTP server exposes `/api/v2/health`, which returns health status as JSON and uses the HTTP status code to indicate healthy (`200`) or unhealthy (`500`) state. + +See [Monitoring](../administration/monitoring.md). + +## Inputs + +### New `fluentbit_logs` input + +The [Fluent Bit logs input](../pipeline/inputs/fluentbit-logs.md) routes Fluent Bit internal logs back into the pipeline as structured records. This lets you forward agent diagnostics to any supported destination. + +### HTTP input remote address capture + +The [HTTP input](../pipeline/inputs/http.md) adds: + +- `add_remote_addr` +- `remote_addr_key` + +These settings let you attach the client address from `X-Forwarded-For` to each ingested record. 
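+
+A minimal sketch using both keys on the HTTP input (the target key name `client_ip` is illustrative):
+
+```yaml
+pipeline:
+  inputs:
+    - name: http
+      port: 8888
+      # Attach the client address to each ingested record
+      add_remote_addr: true
+      remote_addr_key: client_ip
+```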
+ +### `OAuth 2.0` bearer token validation on HTTP-based inputs + +HTTP-based receivers can validate incoming bearer tokens with: + +- `oauth2.validate` +- `oauth2.issuer` +- `oauth2.jwks_url` +- `oauth2.allowed_audience` +- `oauth2.allowed_clients` +- `oauth2.jwks_refresh_interval` + +This is available on the relevant input plugins, including [HTTP](../pipeline/inputs/http.md) and [OpenTelemetry](../pipeline/inputs/opentelemetry.md). + +### OpenTelemetry input improvements + +The [OpenTelemetry input](../pipeline/inputs/opentelemetry.md) in `v5.0` expands user-visible behavior with: + +- shared HTTP listener worker support +- `OAuth 2.0` bearer token validation +- stable JSON metrics ingestion over `OTLP/HTTP` +- improved JSON trace validation and error reporting + +### Kubernetes events state database controls + +The [Kubernetes events input](../pipeline/inputs/kubernetes-events.md) documents additional SQLite controls: + +- `db.journal_mode` +- `db.locking` + +These settings help tune event cursor persistence and database access behavior. + +## Processors + +### New cumulative-to-delta processor + +The [cumulative to delta processor](../pipeline/processors/cumulative-to-delta.md) converts cumulative monotonic metrics to delta values, which is useful when scraping Prometheus-style metrics but exporting to backends that expect deltas. + +### New topological data analysis processor + +The [topological data analysis processor](../pipeline/processors/tda.md) adds a metrics processor for topology-based analysis workflows. + +### Sampling processor updates + +The [sampling processor](../pipeline/processors/sampling.md) adds `legacy_reconcile` for tail sampling, which helps compare the optimized reconciler with the previous behavior when validating upgrades. 
+ +## Outputs + +### HTTP output `OAuth 2.0` client credentials + +The [HTTP output](../pipeline/outputs/http.md) now supports built-in `OAuth 2.0` client credentials with: + +- `basic` +- `post` +- `private_key_jwt` + +You can configure token acquisition directly in Fluent Bit with the `oauth2.*` settings. + +### More compression options for cloud outputs + +Several outputs gained additional compression support in the `v4.2` to `v5.0` range: + +- [Amazon Kinesis Data Streams](../pipeline/outputs/kinesis.md): `gzip`, `zstd`, `snappy` +- [Amazon Kinesis Data Firehose](../pipeline/outputs/firehose.md): `snappy` added alongside existing codecs +- [Amazon S3](../pipeline/outputs/s3.md): `snappy` added alongside existing codecs +- [Azure Blob](../pipeline/outputs/azure_blob.md): `zstd` support for transfer compression + +## Monitoring changes + +### `fluentbit_hot_reloaded_times` is now a counter + +The `fluentbit_hot_reloaded_times` metric changed from a gauge to a counter, which makes it safe to use with PromQL functions such as `rate()` and `increase()`. + +### New output backpressure visibility + +`v5.0` adds output backpressure duration metrics so you can observe time spent waiting because of downstream pressure. + +See [Monitoring](../administration/monitoring.md). diff --git a/pipeline/filters/modify.md b/pipeline/filters/modify.md index 3d35debf2..5ac198e44 100644 --- a/pipeline/filters/modify.md +++ b/pipeline/filters/modify.md @@ -49,10 +49,13 @@ The plugin supports the following rules: | `Remove_regex` | `REGEXP:KEY` | _none_ | Remove all key/value pairs with key matching regexp `KEY`. | | `Rename` | `STRING:KEY` | `STRING:RENAMED_KEY` | Rename a key/value pair with key `KEY` to `RENAMED_KEY` if `KEY` exists and `RENAMED_KEY` doesn't exist. | | `Hard_rename` | `STRING:KEY` | `STRING:RENAMED_KEY` | Rename a key/value pair with key `KEY` to `RENAMED_KEY` if `KEY` exists. If `RENAMED_KEY` already exists, this field is overwritten. 
| +| `Hard_Rename` | `STRING:KEY` | `STRING:RENAMED_KEY` | Equivalent to `Hard_rename`. This spelling is also accepted by the plugin. | | `Copy` | `STRING:KEY` | `STRING:COPIED_KEY` | Copy a key/value pair with key `KEY` to `COPIED_KEY` if `KEY` exists and `COPIED_KEY` doesn't exist. | | `Hard_copy` | `STRING:KEY` | `STRING:COPIED_KEY` | Copy a key/value pair with key `KEY` to `COPIED_KEY` if `KEY` exists. If `COPIED_KEY` already exists, this field is overwritten. | | `Move_to_start` | `WILDCARD:KEY` | _none_ | Move key/value pairs with keys matching `KEY` to the start of the message. | +| `Move_To_Start` | `WILDCARD:KEY` | _none_ | Equivalent to `Move_to_start`. This spelling is also accepted by the plugin. | | `Move_to_end` | `WILDCARD:KEY` | _none_ | Move key/value pairs with keys matching `KEY` to the end of the message. | +| `Move_To_End` | `WILDCARD:KEY` | _none_ | Equivalent to `Move_to_end`. This spelling is also accepted by the plugin. | - Rules are case-insensitive, but parameters aren't. - Any number of rules can be set in a filter instance. diff --git a/pipeline/filters/parser.md b/pipeline/filters/parser.md index 4d37a13d0..1db516612 100644 --- a/pipeline/filters/parser.md +++ b/pipeline/filters/parser.md @@ -16,6 +16,7 @@ The plugin supports the following configuration parameters: | `parser` | Specify the parser name to interpret the field. Multiple parser entries are allowed (one per line). | _none_ | | `preserve_key` | Keep the original `key_name` field in the parsed result. If false, the field will be removed. | `false` | | `reserve_data` | Keep all other original fields in the parsed result. If false, all other original fields will be removed. | `false` | +| `Unescape_key` | Deprecated. This option is retained only for backward compatibility and should not be used in new configurations. 
| _deprecated_ | ## Get started diff --git a/pipeline/inputs/dummy.md b/pipeline/inputs/dummy.md index 62b7efed0..8cb21f649 100644 --- a/pipeline/inputs/dummy.md +++ b/pipeline/inputs/dummy.md @@ -23,6 +23,7 @@ The plugin supports the following configuration parameters: | `samples` | Limit the number of events generated. For example, if `samples=3`, the plugin generates only three events and stops. `0` means no limit. | `0` | | `start_time_nsec` | Set a dummy base timestamp, in nanoseconds. If set to `-1`, the current time is used. | `-1` | | `start_time_sec` | Set a dummy base timestamp, in seconds. If set to `-1`, the current time is used. | `-1` | +| `test_hang_on_exit` | Test-only option that simulates a hang during shutdown for hot reload watchdog testing. Don't use this in production configurations. | `false` | | `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` | ## Get started diff --git a/pipeline/inputs/elasticsearch.md b/pipeline/inputs/elasticsearch.md index 93ba82988..b11e7462f 100644 --- a/pipeline/inputs/elasticsearch.md +++ b/pipeline/inputs/elasticsearch.md @@ -16,6 +16,7 @@ The plugin supports the following configuration parameters: | `buffer_max_size` | Set the maximum size of buffer. Compatibility alias for `http_server.buffer_max_size`. | `4M` | | `hostname` | Specify hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | `localhost` | | `http2` | Enable HTTP/2 support. Compatibility alias for `http_server.http2`. | `true` | +| `http_server.max_connections` | Maximum number of concurrent active HTTP connections. `0` means unlimited. | `0` | | `http_server.workers` | Number of HTTP listener worker threads. | `1` | | `listen` | The address to listen on. | `0.0.0.0` | | `meta_key` | Specify a key name for meta information. 
| `@meta` | diff --git a/pipeline/inputs/http.md b/pipeline/inputs/http.md index 8fdb2c274..c7edfc735 100644 --- a/pipeline/inputs/http.md +++ b/pipeline/inputs/http.md @@ -15,6 +15,7 @@ The _HTTP_ input plugin lets Fluent Bit open an HTTP port that you can then rout | `buffer_chunk_size` | This sets the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by `buffer_max_size`. Compatibility alias for `http_server.buffer_chunk_size`. | `512K` | | `buffer_max_size` | Specify the maximum buffer size to receive a JSON message. Compatibility alias for `http_server.buffer_max_size`. | `4M` | | `http2` | Enable HTTP/2 support. Compatibility alias for `http_server.http2`. | `true` | +| `http_server.max_connections` | Maximum number of concurrent active HTTP connections. `0` means unlimited. | `0` | | `http_server.workers` | Number of HTTP listener worker threads. | `1` | | `listen` | The address to listen on. | `0.0.0.0` | | `oauth2.allowed_audience` | Audience claim to enforce when validating incoming `OAuth 2.0` `JWT` tokens. | _none_ | diff --git a/pipeline/inputs/opentelemetry.md b/pipeline/inputs/opentelemetry.md index bb8f19f4a..cdc108287 100644 --- a/pipeline/inputs/opentelemetry.md +++ b/pipeline/inputs/opentelemetry.md @@ -22,6 +22,7 @@ Fluent Bit has a compliant implementation which fully supports `OTLP/HTTP` and ` | `encode_profiles_as_log` | Encode profiles received as text and ingest them in the logging pipeline. | `true` | | `host` | The hostname. | `localhost` | | `http2` | Enable HTTP/2 protocol support for the OpenTelemetry receiver. | `true` | +| `http_server.max_connections` | Maximum number of concurrent active HTTP connections. `0` means unlimited. | `0` | | `http_server.workers` | Number of HTTP listener worker threads. | `1` | | `listen` | The network address to listen on. | `0.0.0.0` | | `log_level` | Specifies the log level for this plugin. 
If not set here, the plugin uses the global log level specified in the `service` section of your configuration file. | `info` | diff --git a/pipeline/inputs/prometheus-remote-write.md b/pipeline/inputs/prometheus-remote-write.md index 67ecd3c10..5e7109c78 100644 --- a/pipeline/inputs/prometheus-remote-write.md +++ b/pipeline/inputs/prometheus-remote-write.md @@ -17,6 +17,7 @@ The _Prometheus remote write_ input plugin lets you ingest a payload in the Prom | `buffer_chunk_size` | Sets the chunk size for incoming data. These chunks are then stored and managed in the space specified by `buffer_max_size`. Compatibility alias for `http_server.buffer_chunk_size`. | `512K` | | `buffer_max_size` | Specifies the maximum buffer size to receive a request. Compatibility alias for `http_server.buffer_max_size`. | `4M` | | `http2` | Enable HTTP/2 support. Compatibility alias for `http_server.http2`. | `true` | +| `http_server.max_connections` | Maximum number of concurrent active HTTP connections. `0` means unlimited. | `0` | | `http_server.workers` | Number of HTTP listener worker threads. | `1` | | `listen` | The address to listen on. | `0.0.0.0` | | `port` | The port to listen on. | `8080` | @@ -105,4 +106,4 @@ pipeline: {% endtab %} {% endtabs %} -Now, you should be able to send data over TLS to the remote-write input. \ No newline at end of file +Now, you should be able to send data over TLS to the remote-write input. diff --git a/pipeline/inputs/splunk.md b/pipeline/inputs/splunk.md index c913a586e..82c4943b9 100644 --- a/pipeline/inputs/splunk.md +++ b/pipeline/inputs/splunk.md @@ -16,6 +16,7 @@ This plugin uses the following configuration parameters: | `buffer_chunk_size` | Set the chunk size for incoming JSON messages. These chunks are then stored and managed in the space available by `buffer_max_size`. Compatibility alias for `http_server.buffer_chunk_size`. | `512K` | | `buffer_max_size` | Set the maximum buffer size to receive a JSON message. 
Compatibility alias for `http_server.buffer_max_size`. | `4M` | | `http2` | Enable HTTP/2 support. Compatibility alias for `http_server.http2`. | `true` | +| `http_server.max_connections` | Maximum number of concurrent active HTTP connections. `0` means unlimited. | `0` | | `http_server.workers` | Number of HTTP listener worker threads. | `1` | | `listen` | The address to listen on. | `0.0.0.0` | | `port` | The port for Fluent Bit to listen on. | `8088` | diff --git a/pipeline/inputs/windows-event-log-winevtlog.md b/pipeline/inputs/windows-event-log-winevtlog.md index eb3491624..6bdb32cc9 100644 --- a/pipeline/inputs/windows-event-log-winevtlog.md +++ b/pipeline/inputs/windows-event-log-winevtlog.md @@ -24,6 +24,11 @@ The plugin supports the following configuration parameters: | `remote.password` | Specify password of remote access for Windows EventLog. | _none_ | | `remote.server` | Specify server name of remote access for Windows EventLog. | _none_ | | `remote.username` | Specify user name of remote access for Windows EventLog. | _none_ | +| `reconnect.base_ms` | Base reconnect delay in milliseconds after a subscription failure. | `500` | +| `reconnect.max_ms` | Maximum reconnect delay in milliseconds. | `30000` | +| `reconnect.multiplier` | Backoff multiplier applied between reconnect attempts. | `2.0` | +| `reconnect.jitter_pct` | Jitter percentage applied to the reconnect delay to avoid synchronized retries. | `20` | +| `reconnect.max_retries` | Maximum number of reconnect attempts before the channel stops retrying. | `8` | | `render_event_as_text` | Optional. Render the Windows EventLog event as newline-separated `key=value` text. Mutually exclusive with `render_event_as_xml`. | `false` | | `render_event_as_xml` | Optional. Render the Windows EventLog event as XML, including the System and Message fields. Mutually exclusive with `render_event_as_text`. | `false` | | `render_event_text_key` | Optional. 
Record key name used to store the rendered text when `render_event_as_text` is enabled. | `log` | diff --git a/pipeline/inputs/windows-event-log.md b/pipeline/inputs/windows-event-log.md index ac83d5bf1..98ae2d161 100644 --- a/pipeline/inputs/windows-event-log.md +++ b/pipeline/inputs/windows-event-log.md @@ -12,10 +12,13 @@ The plugin supports the following configuration parameters: | Key | Description | Default | |----------------|---------------------------------------------------------------------------------------------------------|---------| -| `channels` | A comma-separated list of channels to read from. | _none_ | -| `db` | Set the path to save the read offsets. (optional) | _none_ | -| `interval_sec` | Set the polling interval for each channel. (optional) | `1` | -| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` | +| `channels` | A comma-separated list of channels to read from. | _none_ | +| `db` | Set the path to save the read offsets. (optional) | _none_ | +| `interval_sec` | Set the polling interval for each channel. (optional) | `1` | +| `interval_nsec` | Set the polling interval for each channel in nanoseconds. (optional) | `0` | +| `string_inserts` | Whether to include string inserts in output records. | `true` | +| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` | +| `use_ansi` | Use ANSI encoding for Event Log messages. This can help on older Windows versions that return blank strings with wide-character decoding. | `false` | If `db` isn't set, the plugin will read channels from the beginning on each startup. 
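+
+A minimal configuration using these parameters might look like the following sketch; the channel list and database path are placeholders:
+
+```yaml
+pipeline:
+  inputs:
+    - name: winlog
+      # Placeholder channels; adjust to the Event Log channels you need
+      channels: Setup,Windows PowerShell
+      # Persist read offsets so restarts resume where they left off
+      db: winlog.sqlite
+      interval_sec: 1
+```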
diff --git a/pipeline/inputs/windows-exporter-metrics.md b/pipeline/inputs/windows-exporter-metrics.md
index 86a37bb5b..95384d1db 100644
--- a/pipeline/inputs/windows-exporter-metrics.md
+++ b/pipeline/inputs/windows-exporter-metrics.md
@@ -28,13 +28,14 @@ Each `collector.xxx.scrape_interval` option only overrides the interval for that

 Overridden intervals only change the collection interval, not the interval for publishing the metrics, which is taken from the global setting.

-For example, if the global interval is set to `5` and an override interval of `60` is used, the published metrics will be reported every five seconds. However, the specific collector will stay the same for 60 seconds until it's collected again.
+For example, if the global interval is set to `1` and an override interval of `60` is used, the published metrics will be reported every second. However, the metrics from that specific collector stay the same for 60 seconds, until the collector runs again.

 This helps with down-sampling when collecting metrics.

 | Key | Description | Default |
 |------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------|
-| `scrape_interval` | The rate in seconds at which metrics are collected from the Windows host. | `5` |
+| `scrape_interval` | The rate in seconds at which metrics are collected from the Windows host. | `1` |
+| `enable_collector` | Enable one collector by name. This key can be specified multiple times to build an allow-list of collectors to run. | _none_ |
 | `we.logical_disk.allow_disk_regex` | Specify the regular expression for logical disk metrics to allow collection of. | `"/.+/"` (all) |
 | `we.logical_disk.deny_disk_regex` | Specify the regular expression for logical disk metrics to prevent collection of or ignore. | `NULL` (all) |
 | `we.net.allow_nic_regex` | Specify the regular expression for network metrics captured by the name of the NIC. | `"/.+/"` (all) |
@@ -58,7 +59,7 @@ This helps with down-sampling when collecting metrics.
 | `collector.process.scrape_interval` | The rate in seconds at which `process` metrics are collected. Values greater than `0` override the global default. Otherwise, the global default is used. | `0` |
 | `collector.tcp.scrape_interval` | The rate in seconds at which `tcp` metrics are collected. Values greater than `0` override the global default. Otherwise, the global default is used. | `0` |
 | `collector.cache.scrape_interval` | The rate in seconds at which `cache` metrics are collected. Values greater than `0` override the global default. Otherwise, the global default is used. | `0` |
-| `metrics` | Specify which metrics are collected. Comma-separated list of collector names. | `"cpu,cpu_info,os,net,logical_disk,cs,cache,thermalzone,logon,system,service,memory,paging_file,process,tcp"` |
+| `metrics` | Specify which metrics are collected. Comma-separated list of collector names. | `"cpu,cpu_info,os,net,logical_disk,cs,cache,thermalzone,logon,system,service,tcp"` |

 ## Collectors available

diff --git a/pipeline/outputs/forward.md b/pipeline/outputs/forward.md
index 2af2d35dd..1abb42afa 100644
--- a/pipeline/outputs/forward.md
+++ b/pipeline/outputs/forward.md
@@ -19,16 +19,19 @@ The following parameters are mandatory for both Forward and Secure Forward modes

 | Key | Description | Default |
 | --- | ------------ | --------- |
-| `Host` | Target host where Fluent Bit or Fluentd are listening for Forward messages. | `127.0.0.1` |
-| `Port` | TCP Port of the target service. | `24224` |
-| `Time_as_Integer` | Set timestamps in integer format, it enables compatibility mode for Fluentd v0.12 series. | `False` |
-| `Upstream` | If Forward will connect to an `Upstream` instead of a basic host, this property defines the absolute path for the Upstream configuration file, for more details about this, see [Upstream Servers ](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md). | _none_ |
-| `Unix_Path` | Specify the path to a Unix socket to send a Forward message. If set, `Upstream` is ignored. | _none_ |
-| `Tag` | Overwrite the tag as Fluent Bit transmits. This allows the receiving pipeline start fresh, or to attribute a source. | _none_ |
-| `Send_options` | Always send options (with `"size"=count of messages`) | `False` |
-| `Require_ack_response` | Send `chunk` option and wait for an `ack` response from the server. Enables at-least-once and receiving server can control rate of traffic. Requires Fluentd v0.14.0+ or later | `False` |
-| `Compress` | Set to `gzip` to enable gzip compression. Incompatible with `Time_as_Integer=True` and tags set dynamically using the [Rewrite Tag](../filters/rewrite-tag.md) filter. Requires Fluentd server v0.14.7 or later. | _none_ |
-| `Workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
+| `host` | Target host where Fluent Bit or Fluentd are listening for Forward messages. | `127.0.0.1` |
+| `port` | TCP port of the target service. | `24224` |
+| `time_as_integer` | Set timestamps in integer format. This enables compatibility mode for Fluentd `v0.12`. | `false` |
+| `upstream` | If Forward connects to an `upstream` definition instead of a basic host, this property defines the absolute path for the upstream configuration file. See [Upstream Servers](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md). | _none_ |
+| `unix_path` | Specify the path to a Unix socket to send a Forward message. If set, `upstream` is ignored. | _none_ |
+| `tag` | Overwrite the tag that Fluent Bit transmits. This lets the receiving pipeline start fresh or attribute a source. | _none_ |
+| `send_options` | Always send Forward protocol options, including `"size"` (the count of messages). | `false` |
+| `require_ack_response` | Send the `chunk` option and wait for an `ack` response from the server. This enables at-least-once delivery and lets the receiving server control traffic rate. Requires Fluentd `v0.14.0` or later. | `false` |
+| `compress` | Set to `gzip` to enable gzip compression. Incompatible with `time_as_integer true` and tags set dynamically using the [Rewrite Tag](../filters/rewrite-tag.md) filter. Requires Fluentd server `v0.14.7` or later. | _none_ |
+| `fluentd_compat` | Send metrics and traces using a Fluentd-compatible format. | `false` |
+| `retain_metadata_in_forward_mode` | Retain metadata when operating in forward mode. | `false` |
+| `add_option` | Add an extra Forward protocol option. This is an advanced setting and can be specified multiple times. Setting it also enables `send_options`. | _none_ |
+| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |

 ## Secure Forward mode configuration parameters

@@ -36,11 +39,11 @@ When using Secure Forward mode, the [TLS](../../administration/transport-securit

 | Key | Description | Default |
 | --- | ----------- | ------- |
-| `Shared_Key` | A key string known by the remote Fluentd used for authorization. | _none_ |
-| `Empty_Shared_Key` | Use this option to connect to Fluentd with a zero-length secret. | `False` |
-| `Username` | Specify the username to present to a Fluentd server that enables `user_auth`. | _none_ |
-| `Password` | Specify the password corresponding to the username. | _none_ |
-| `Self_Hostname` | Default value of the auto-generated certificate common name (CN). | `localhost` |
+| `shared_key` | A key string known by the remote Fluentd used for authorization. | _none_ |
+| `empty_shared_key` | Connect to Fluentd with a zero-length shared secret. | `false` |
+| `username` | Specify the username to present to a Fluentd server that enables `user_auth`. | _none_ |
+| `password` | Specify the password corresponding to the username. | _none_ |
+| `self_hostname` | Default value of the auto-generated certificate common name (CN). | `localhost` |
 | `tls` | Enable or disable TLS support. | `Off` |
 | `tls.verify` | Force certificate validation. | `On` |
 | `tls.debug` | Set TLS debug verbosity level. Allowed values: `0` (No debug), `1` (Error), `2` (State change), `3` (Informational), and `4` (Verbose). | `1` |
diff --git a/pipeline/outputs/stackdriver.md b/pipeline/outputs/stackdriver.md
index af81f403a..b92e901d2 100644
--- a/pipeline/outputs/stackdriver.md
+++ b/pipeline/outputs/stackdriver.md
@@ -34,8 +34,10 @@ This plugin uses the following configuration parameters. For more details about
 | `service_account_secret` | Private key content associated with the service account. Only available if no credentials file has been provided. | Value of environment variable `$SERVICE_ACCOUNT_SECRET` |
 | `severity_key` | The name of the key from the original record that contains the severity. | `logging.googleapis.com/severity` |
 | `span_id_key` | The name of the key from the original record that contains the span ID. | `logging.googleapis.com/spanId` |
+| `stackdriver_agent` | Set a custom `User-Agent` header value for requests sent to Cloud Logging. | _none_ |
 | `tag_prefix` | Set the `tag_prefix` used to validate the tag of logs with Kubernetes resource type. Without this option, the tag of the log must be in format of `k8s_container(pod/node).*` to use the `k8s_container` resource type. Now the tag prefix is configurable by this option (note the ending dot). | `k8s_container.`, `k8s_pod.`, `k8s_node.` |
 | `task_id` | A unique identifier for the task within the namespace and job, such as a replica index identifying the task within the job. If the resource type is `generic_task`, this field is required. | _none_ |
+| `test_log_entry_format` | Test-only option that prints the generated Cloud Logging payload without sending it. Intended for validation and debugging. | `false` |
 | `text_payload_key` | Set the key from the record to use as the `textPayload` field in the log entry. | _none_ |
 | `trace_key` | The name of the key from the original record that contains the trace value. | `logging.googleapis.com/trace` |
 | `trace_sampled_key` | The name of the key from the original record that contains the trace sampled flag. | `logging.googleapis.com/traceSampled` |
diff --git a/pipeline/outputs/treasure-data.md b/pipeline/outputs/treasure-data.md
index b6fa740d2..f6c729f52 100644
--- a/pipeline/outputs/treasure-data.md
+++ b/pipeline/outputs/treasure-data.md
@@ -11,6 +11,7 @@ The plugin supports the following configuration parameters:
 | `api` | The Treasure Data API key. To obtain it, log into the [Console](https://console.treasuredata.com/users/sign_in) and in the API keys box, copy the API key hash. | _none_ |
 | `database` | Specify the name of your target database. | _none_ |
 | `region` | Set the service region. Allowed values: `US`, `JP`. | `US` |
+| `Region` | Classic-mode spelling of `region`. Allowed values: `US`, `JP`. | `US` |
 | `table` | Specify the name of your target table where the records will be stored. | _none_ |
 | `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |

diff --git a/pipeline/processors/sampling.md b/pipeline/processors/sampling.md
index 56daccf0c..4c1663fc5 100644
--- a/pipeline/processors/sampling.md
+++ b/pipeline/processors/sampling.md
@@ -76,6 +76,7 @@ Tail sampling uses the following `sampling_settings` configuration parameters:
 | Key | Description | Default |
 | --- | :---------- | ------- |
 | `decision_wait` | Specifies how long to buffer spans before making a sampling decision, allowing full trace evaluation. | `30s` |
+| `legacy_reconcile` | Uses the legacy tail-sampling reconciliation path instead of the optimized reconciler. Keep this disabled unless you need behavior parity while comparing results with older deployments. | `false` |
 | `max_traces` | Specifies the maximum number of traces that can be held in memory. When the limit is reached, the oldest trace is deleted. | _none_ |

 ### Conditions