fix: raise default --kube-api-qps/--kube-api-burst to 50/100#7615

Open
Fedosin wants to merge 1 commit into kedacore:main from Fedosin:raise-default-api-rate-limits

Conversation


@Fedosin Fedosin commented Apr 7, 2026

Summary

  • Raise the default --kube-api-qps from 20 to 50 and --kube-api-burst from 30 to 100 across all three components (operator, metrics adapter, webhooks)
  • The previous defaults cause client-side throttling with as few as 100 ScaledObjects, adding ~5s delay per status PATCH request and making scale-to-zero take 15–20 minutes instead of ~5.5 minutes
  • The new values (50/100) match what other large-scale controllers use (e.g. ArgoCD) while remaining well below API server capacity

Relates to #7613

Context

The current QPS=20 / Burst=30 defaults override controller-runtime's default behavior of disabling the client-side rate limiter entirely (QPS=-1), which relies on API Priority and Fairness (APF) for server-side throttling. This means KEDA self-throttles more aggressively than a standard controller-runtime controller would.

With 100 ScaledObjects, each reconciliation cycle generates multiple status PATCH calls. Once the burst budget is exhausted, every subsequent request is throttled to the QPS rate, producing consistent ~5s delays visible in operator logs:

"delay": "4.972807726s",
"reason": "client-side throttling, not priority and fairness"

Raising to 50/100 provides 2.5× the throughput headroom, supporting 200–300 ScaledObjects without throttling under typical workloads.
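The delay arithmetic can be sketched with a simple token-bucket model (an illustrative approximation of client-side throttling, not client-go's actual limiter implementation; the figure of 130 queued requests is hypothetical):

```go
package main

import "fmt"

// delayForRequest returns the approximate client-side throttling delay
// (in seconds) a token-bucket limiter imposes on the n-th request,
// assuming all n requests arrive at once against a full burst budget.
func delayForRequest(n, burst int, qps float64) float64 {
	if n <= burst {
		return 0 // served immediately from the burst budget
	}
	// requests past the burst budget drain at the QPS rate
	return float64(n-burst) / qps
}

func main() {
	// Old defaults (QPS=20, Burst=30): with ~130 PATCHes queued, the
	// last one waits (130-30)/20 = 5s -- matching the ~5s delays
	// reported in the operator logs.
	fmt.Printf("old defaults, request 130: %.1fs\n", delayForRequest(130, 30, 20))

	// New defaults (QPS=50, Burst=100): the same backlog is absorbed
	// almost entirely by the burst budget, (130-100)/50 = 0.6s.
	fmt.Printf("new defaults, request 130: %.1fs\n", delayForRequest(130, 100, 50))
}
```

Under this model the old defaults push the tail of even a modest reconciliation backlog into multi-second waits, while 50/100 keeps the same backlog sub-second.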

Test plan

  • All three components build successfully (go build ./cmd/operator/ ./cmd/adapter/ ./cmd/webhooks/)
  • Verify with 100+ ScaledObjects that throttling delays are eliminated under the new defaults

@Fedosin Fedosin requested a review from a team as a code owner April 7, 2026 16:29

github-actions Bot commented Apr 7, 2026

Thank you for your contribution! 🙏

Please understand that we will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.

While you are waiting, make sure to:

  • Add an entry in our changelog in alphabetical order and link related issue
  • Update the documentation, if needed
  • Add unit & e2e tests for your changes
  • GitHub checks are passing
  • Is the DCO check failing? Here is how you can fix DCO issues

Once the initial tests are successful, a KEDA member will ensure that the e2e tests are run. Once the e2e tests have been successfully completed, the PR may be merged at a later date. Please be patient.

Learn more about our contribution guide.


snyk-io Bot commented Apr 7, 2026

Snyk checks have passed. No issues have been found so far.

| Status | Scan Engine | Critical | High | Medium | Low | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Passed | Open Source Security | 0 | 0 | 0 | 0 | 0 issues |


@keda-automation keda-automation requested a review from a team April 7, 2026 16:29
The previous defaults (QPS=20, Burst=30) cause client-side throttling
with as few as 100 ScaledObjects, adding ~5s delay per status PATCH and
making scale-to-zero take 15-20 minutes instead of ~5.5 minutes.

Raise all three components (operator, metrics adapter, webhooks) to
QPS=50 / Burst=100 to match what other large-scale controllers use
(e.g. ArgoCD 50/100) while remaining well below API server capacity.

Relates to kedacore#7613

Signed-off-by: Mikhail Fedosin <mfedosin@redhat.com>
@Fedosin Fedosin force-pushed the raise-default-api-rate-limits branch from eab4de5 to 7770299 on April 7, 2026 at 16:31