
[Proposal] Binding preemption Design for Karmada scheduler#7327

Open
seanlaii wants to merge 1 commit into karmada-io:master from seanlaii:binding-preemption

Conversation

@seanlaii
Contributor

What type of PR is this?
/kind documentation
/kind feature

What this PR does / why we need it:
This PR adds a design proposal for binding-level preemption in the Karmada scheduler.
Currently, priority-based scheduling only orders the queue — a high-priority binding that arrives after cluster resources are consumed by lower-priority bindings remains pending indefinitely. This proposal introduces preemption so the scheduler can evict lower-priority bindings to make room, scoped to single-cluster Divided scheduling in Phase 1 with estimator-based node-level simulation planned for Phase 2.

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


@karmada-bot karmada-bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/documentation Categorizes issue or PR as related to documentation. kind/feature Categorizes issue or PR as related to a new feature. labels Mar 25, 2026
@karmada-bot karmada-bot requested review from Tingtal and zach593 March 25, 2026 03:57
@karmada-bot karmada-bot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Mar 25, 2026
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive design proposal for binding-level preemption within the Karmada scheduler. The core purpose is to resolve resource contention by allowing higher-priority workloads to preempt lower-priority ones, preventing high-priority tasks from being indefinitely blocked. The proposal outlines a strategic, phased approach, beginning with a foundational summary-based preemption for single-cluster environments and planning for a more advanced estimator-based system in the future. This enhancement significantly improves resource utilization and fairness for critical workloads.

Highlights

  • Binding-Level Preemption Introduced: Introduced binding-level preemption to the Karmada scheduler, enabling high-priority ResourceBindings to evict lower-priority ones when cluster resources are insufficient, initially scoped to single-cluster scenarios.
  • Phased Implementation Strategy: Designed a two-phase implementation, starting with summary-based preemption (Phase 1) and evolving to estimator-based node-level simulation for more precise victim selection (Phase 2).
  • API Extension for Preemption Policy: Added a PreemptionPolicy field to ResourceBindingSpec.SchedulePriority, which defaults to Never and requires explicit opt-in for PreemptLowerPriority.
  • In-Memory Preemption Claims Mechanism: Implemented an in-memory preemption claims store to coordinate preemption, prevent repeated preemption attempts, and reserve cluster capacity for the preemptor.
  • Integration with Existing Karmada Features: Reused the existing SchedulePriority and PriorityClass mechanisms for preemption ordering and integrated the eviction process with GracefulEviction.
  • Feature Gating: Gated the new preemption functionality behind the PriorityBasedPreemptiveScheduling alpha feature gate, ensuring it is opt-in and can be safely rolled out.
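Based on the highlights above, the `PreemptionPolicy` addition might look roughly like the following Go sketch. The type and field shapes here are inferred from this summary, not copied from the actual API change in the proposal:

```go
package main

import "fmt"

// PreemptionPolicy describes whether a binding may trigger preemption.
// The values mirror the Kubernetes pod preemptionPolicy convention.
// NOTE: illustrative sketch only, not the real Karmada API definition.
type PreemptionPolicy string

const (
	// PreemptNever is the default: the binding never triggers preemption.
	PreemptNever PreemptionPolicy = "Never"
	// PreemptLowerPriority allows evicting lower-priority bindings.
	PreemptLowerPriority PreemptionPolicy = "PreemptLowerPriority"
)

// SchedulePriority is a simplified stand-in for the field on
// ResourceBindingSpec that would carry the new PreemptionPolicy.
type SchedulePriority struct {
	Priority int32
	// PreemptionPolicy defaults to Never and requires explicit opt-in.
	PreemptionPolicy PreemptionPolicy
}

func main() {
	sp := SchedulePriority{Priority: 1000, PreemptionPolicy: PreemptLowerPriority}
	fmt.Println(sp.PreemptionPolicy) // prints PreemptLowerPriority
}
```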


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a comprehensive proposal for binding-level preemption within the Karmada scheduler, specifically for single-cluster scheduling. The feature aims to allow high-priority ResourceBindings to evict lower-priority ones when cluster resources are insufficient. The proposal details a two-phase implementation, starting with summary-based preemption and progressing to estimator-based preemption for more precise victim selection. A review comment suggests enhancing the clarity of eviction messages for victim bindings by including the workload's kind, namespace, and name for better user understanding.

Comment thread docs/proposals/scheduling/binding-preemption/binding-preemption.md Outdated
@codecov-commenter

codecov-commenter commented Mar 25, 2026

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 41.88%. Comparing base (6012227) to head (93cb0db).
⚠️ Report is 72 commits behind head on master.
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #7327      +/-   ##
==========================================
- Coverage   42.04%   41.88%   -0.17%     
==========================================
  Files         874      879       +5     
  Lines       53544    54285     +741     
==========================================
+ Hits        22515    22738     +223     
- Misses      29341    29826     +485     
- Partials     1688     1721      +33     
Flag | Coverage Δ
unittests | 41.88% <ø> (-0.17%) ⬇️

Flags with carried forward coverage won't be shown.

@seanlaii seanlaii force-pushed the binding-preemption branch 2 times, most recently from 16f153a to 97d4a47 Compare March 31, 2026 22:21
@seanlaii seanlaii force-pushed the binding-preemption branch from 97d4a47 to 0618be4 Compare April 6, 2026 18:47
@RainbowMango RainbowMango added this to the v1.18 milestone Apr 11, 2026
@RainbowMango
Member

@seanlaii, I see this PR marked with Draft. Is there anything you want to update? Please also cc @whitewindmills once it's ready for review.

PS: I can get back to this next week.

@seanlaii seanlaii marked this pull request as ready for review April 11, 2026 05:04
Copilot AI review requested due to automatic review settings April 11, 2026 05:04
@karmada-bot karmada-bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 11, 2026
@karmada-bot karmada-bot requested a review from mszacillo April 11, 2026 05:04
@seanlaii
Contributor Author

seanlaii commented Apr 11, 2026

@seanlaii, I see this PR marked with Draft. Is there anything you want to update? Please also cc @whitewindmills once it's ready for review.

PS: I can get back to this next week.

Thanks for the info! @whitewindmills @RainbowMango Please help review when you have a chance.
Also cc @shellfish007 to take a look if interested

Contributor

Copilot AI left a comment


Pull request overview

Note

Copilot was unable to run its full agentic suite in this review.

Adds a design proposal for binding-level preemption in the Karmada scheduler, focused on enabling higher-priority ResourceBindings to reclaim cluster capacity from lower-priority ones in single-cluster scenarios.

Changes:

  • Introduces a phased design (Phase 1 summary-based; Phase 2 estimator/node-simulation) for binding preemption.
  • Proposes API additions (PreemptionPolicy under SchedulePriority) and a new feature gate (PriorityBasedPreemptiveScheduling).
  • Describes scheduler flow changes, victim selection, and an in-memory “preemption claim” coordination mechanism.


Comment thread docs/proposals/scheduling/binding-preemption/binding-preemption.md Outdated
Comment thread docs/proposals/scheduling/binding-preemption/binding-preemption.md Outdated
Comment thread docs/proposals/scheduling/binding-preemption/binding-preemption.md Outdated
Comment thread docs/proposals/scheduling/binding-preemption/binding-preemption.md Outdated
Comment thread docs/proposals/scheduling/binding-preemption/binding-preemption.md Outdated
@seanlaii seanlaii force-pushed the binding-preemption branch 2 times, most recently from 95d2c8c to 0baf151 Compare April 13, 2026 00:56
@seanlaii seanlaii changed the title Binding preemption Design for Karmada scheduler [Proposal] Binding preemption Design for Karmada scheduler Apr 14, 2026
@seanlaii seanlaii force-pushed the binding-preemption branch from 0baf151 to f7bced4 Compare April 14, 2026 01:58
@shellfish007

shellfish007 commented Apr 14, 2026

Thanks for the proposal! I'm still getting familiar with the codebase, so apologies if any of this is already covered somewhere.
A couple of questions:

  1. Cross-namespace preemption: Is it guaranteed that preemption victims are always ResourceBindings in the same namespace as the preemptor, or could a higher-priority workload preempt jobs from a different namespace? If cross-namespace preemption is possible, would it make sense to have a configurable option to restrict preemption within namespace boundaries? Thinking about multi-tenancy scenarios where teams could unintentionally (or intentionally) affect each other's workloads by simply raising their job priorities.
  2. Namespace quota as a preemption trigger: It looks like preemption is only triggered when cluster-level resource assignment fails (assignReplicas). Would namespace quota exhaustion also be considered as a trigger? A higher-priority workload could remain pending not because the cluster is full, but because its namespace quota is exhausted — in that case preemption would never fire even if it should. Is this a known limitation or out of scope for now?

Happy to be corrected if either of these is already handled!

@seanlaii
Contributor Author

Thanks for the proposal! I'm still getting familiar with the codebase, so apologies if any of this is already covered somewhere. A couple of questions:

  1. Cross-namespace preemption: Is it guaranteed that preemption victims are always ResourceBindings in the same namespace as the preemptor, or could a higher-priority workload preempt jobs from a different namespace? If cross-namespace preemption is possible, would it make sense to have a configurable option to restrict preemption within namespace boundaries? Thinking about multi-tenancy scenarios where teams could unintentionally (or intentionally) affect each other's workloads by simply raising their job priorities.
  2. Namespace quota as a preemption trigger: It looks like preemption is only triggered when cluster-level resource assignment fails (assignReplicas). Would namespace quota exhaustion also be considered as a trigger? A higher-priority workload could remain pending not because the cluster is full, but because its namespace quota is exhausted — in that case preemption would never fire even if it should. Is this a known limitation or out of scope for now?

Happy to be corrected if either of these is already handled!

Thanks for the great questions! These are both important considerations.

Cross-namespace preemption

In the current design, Preemption is cross-namespace, consistent with Kubernetes. Also, PriorityClass is a cluster-scoped resource, so priority is not namespace-bounded. In Kubernetes, pod preemption can evict pods from any namespace on the same node, and the community's recommended approach for multi-tenancy protection is access control (e.g., admission webhooks restricting which PriorityClasses a namespace can use), rather than restricting preemption scope in the scheduler.

We follow the same approach here. That said, if namespace-scoped restriction becomes important based on real-world user feedback, it can be added as a configurable option (e.g., preemptionScope: Namespace | Cluster) in a follow-up without breaking changes. The current cross-namespace behavior would remain the default. I've added a section in the design doc to make this rationale explicit.

Namespace quota as a preemption trigger

This is also a very good question! You're right, in the current design, preemption is only triggered by cluster-level resource exhaustion (assignReplicas failure), not by namespace quota exhaustion. This is consistent with Kubernetes, where ResourceQuota is enforced by an admission controller that is independent of the scheduler's preemption logic.

However, there is a subtle architectural difference worth noting:

  • In Kubernetes: Quota is consumed at pod creation time (before scheduling), so the preemptor's quota is already reserved before preemption runs — there is no conflict between preemption and quota.
  • In Karmada: FederatedResourceQuota is checked at binding patch time (during scheduling), so a theoretical edge case exists where preemption frees cluster resources but the preemptor's namespace quota is still insufficient.

This might be rare in practice (requires both cluster capacity and namespace quota to be exhausted simultaneously) and the system is self-healing (evicted victims reschedule, preemptor retries). Properly addressing it would require changes to the quota architecture (e.g., quota reservation at binding creation time), which is beyond the scope of this proposal.

I've added both topics to the Risks and Limitations section. Thanks again for the thoughtful review!

@seanlaii seanlaii force-pushed the binding-preemption branch 2 times, most recently from 2d252f4 to f2cf982 Compare April 15, 2026 05:58
@whitewindmills
Member

/assign

@whitewindmills
Member

@seanlaii Thanks for the detailed proposal. Overall, I think the direction makes sense: extending priority-based scheduling from queue ordering to binding-level preemption is a real gap for scarce-resource workloads, especially GPU/batch scenarios.

Before moving this forward, I think we should clarify and tighten several points.

Major Design Risks / Gaps

  1. Phase 1 victim accounting may be inaccurate.

    The proposal currently selects victims mostly by replica count. However, AllocatableReplicas is calculated against the preemptor's ReplicaRequirements, while each victim may have very different per-replica resource requirements. Evicting 10 small victim replicas does not necessarily free enough resources for 10 preemptor replicas.

    I think Phase 1 should either:

    • convert each victim's ReplicaRequirements * replicasOnCluster into an equivalent number of preemptor replicas, or
    • explicitly document that Phase 1 only works reliably when victims and preemptors have comparable per-replica requirements.
  2. Claim accounting for equal-priority bindings needs clarification.

    The current claim deduction only affects lower-priority bindings. This allows another binding with the same priority to consume the claimed capacity before the original preemptor is retried, especially because the preemptor may stay in unschedulableBindings until the periodic flush.

    I suggest claim deductions should apply to priority <= claim.priority, except for the claim owner itself. Otherwise, the claim does not fully protect the preemptor's nominated capacity.

  3. The mayPreempt() retry path may change behavior for unsupported scenarios.

    Retrying SelectBestClusters(..., InvalidReplicas) before knowing whether the final selection is truly single-cluster may affect multi-cluster workloads that will later fail preemptionEnabled(). This may change error paths and diagnostics even when preemption is not actually supported.

    Can we make mayPreempt() stricter, for example by requiring MaxGroups == 1 or len(feasibleClusters) == 1 before bypassing the capacity check?

  4. The 5-minute preemption delay should probably be handled in Phase 1.

    The proposal mentions that ResourceSummary changes do not currently requeue unschedulable bindings, so the preemptor may wait up to the 5-minute unschedulable flush interval. For preemption, especially GPU/batch workloads, this latency is quite visible.

    I would prefer adding ResourceSummary-change requeue as part of Phase 1, or at least making it an explicit required follow-up before enabling this feature beyond alpha.

  5. Partial-cluster eviction blast radius should be documented clearly.

    GracefulEvictCluster removes the victim's assignment from the target cluster. If the preemptor only needs a small amount of capacity, we may still evict all victim replicas from that cluster. That is acceptable as a Phase 1 tradeoff, but it should be clearly documented as a limitation.

  6. Claim identity should use a stable binding identity.

    The proposal uses Kind/Namespace/Name as the preemptor claim key. This may be insufficient for recreated workloads or same kind/name across API groups. I suggest using the ResourceBinding namespace/name plus UID, or the permanent-id label, as the claim identity.

  7. Preemption execution should handle patch failures more explicitly.

    If all victim patches fail, the claim is still set and capacity is reserved for up to the TTL even though no victim is actually being evicted. I think executePreemption should return the number of successfully patched victims and an error. If zero victim patches succeed, the scheduler should clear the claim and treat the scheduling attempt as failed.
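Point 1 above (resource-aware victim accounting) can be sketched as a bottleneck-resource conversion: sum the resources freed by the victims, then divide by the preemptor's per-replica requirements, taking the floor and the minimum across resource types. The function name and the plain `int64` resource maps below are illustrative stand-ins, not Karmada code:

```go
package main

import "fmt"

// fittingReplicas converts freed victim resources into the number of
// preemptor replicas that actually fit. The bottleneck resource determines
// the result: floor(freed/perReplica), minimized across resource types.
// Illustrative sketch only; the real scheduler works on resource lists.
func fittingReplicas(freed, perReplica map[string]int64) int64 {
	fit := int64(-1)
	for res, need := range perReplica {
		if need <= 0 {
			continue
		}
		n := freed[res] / need // integer division == floor for non-negative values
		if fit < 0 || n < fit {
			fit = n
		}
	}
	if fit < 0 {
		return 0
	}
	return fit
}

func main() {
	// Evicting 10 small victim replicas frees 10 CPU / 20Gi memory, but each
	// preemptor replica needs 4 CPU / 2Gi: only 2 preemptor replicas fit,
	// because CPU is the bottleneck resource.
	freed := map[string]int64{"cpu": 10, "memoryGi": 20}
	perReplica := map[string]int64{"cpu": 4, "memoryGi": 2}
	fmt.Println(fittingReplicas(freed, perReplica)) // prints 2
}
```

This illustrates why raw replica counts mislead: the victims contributed 10 replicas, yet only 2 preemptor replicas fit.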

Development Plan Suggestions

I suggest splitting the implementation into smaller, reviewable pieces:

  1. API and feature gate:

    • add PreemptionPolicy
    • add generated artifacts
    • add PriorityBasedPreemptiveScheduling
    • clearly define dependency on PriorityBasedScheduling
  2. Priority resolution:

    • map Kubernetes PriorityClass.preemptionPolicy
    • confirm whether PropagationPolicy.spec.schedulePriority should allow an explicit override
  3. Scheduler Phase 1:

    • cluster-to-bindings index
    • applicability checks
    • victim filtering and selection
    • preemption claim store
    • claim-aware capacity calculation
    • ScheduleResult.PreemptionResult
    • scheduler main-loop handling and conditions
  4. Eviction and observability:

    • GracefulEviction integration
    • events on preemptor and victims
    • scheduler metrics
    • clear handling of partial patch failures
  5. Queue wake-up:

    • requeue relevant unschedulable bindings when Cluster.Status.ResourceSummary changes
  6. Phase 2 should be a separate implementation milestone:

    • estimator SelectVictims API
    • pod-to-binding index
    • pod template label injection
    • estimator-side snapshot simulation

Phase 2 touches the estimator, interpreter path, and member-cluster pod indexing, so I would not bundle it with Phase 1 implementation.

Test Plan Feedback

The proposed test plan is a good start. I think we should add coverage for these cases before considering the design complete:

  1. Heterogeneous resources:

    • victim replicas require fewer resources than preemptor replicas
    • victim replicas require more resources than preemptor replicas
    • victim selection should not rely on raw replica count alone
  2. Claim behavior:

    • lower-priority binding is blocked from consuming claimed capacity
    • equal-priority binding behavior is explicitly tested
    • higher-priority binding can override an existing claim
    • claim is cleared on successful scheduling
    • claim is cleared when the preemptor binding is deleted
    • claim replacement when the target cluster changes
  3. Failure handling:

    • all victim patches fail
    • partial victim patch success
    • scheduler restart after preemption
    • claim TTL expiry and retry behavior
  4. Unsupported scenarios:

    • no preemption for Duplicated
    • no preemption for StaticWeight
    • no preemption for ClusterAffinities
    • no preemption for multi-component preemptors
    • no preemption for ClusterResourceBinding
    • no preemption when feature gate is disabled
    • no preemption when PreemptionPolicy is Never
  5. Queue and latency:

    • preemptor is requeued when victim resources are reflected in ResourceSummary
    • preemptor does not have to wait for the full unschedulable flush interval when resources are freed
  6. Multi-tenancy and quota:

    • cross-namespace preemption behavior is covered by e2e or integration tests
    • quota rejection after preemption is documented and, if possible, covered by a recovery test
  7. Observability:

    • event on preemptor when preemption starts
    • event on each victim
    • final successful scheduling event after resources are freed
    • metrics for attempts and victim count
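As a sketch of how the "unsupported scenarios" cases above might be exercised, here is a table-driven check against a simplified stand-in for the preemption gate. The `applicable` function and its parameters are hypothetical; the real gate lives in the scheduler and inspects far more of the binding spec:

```go
package main

import "fmt"

// applicable is a hypothetical stand-in for the scheduler's preemption gate:
// preemption only applies with the feature gate on, explicit opt-in, and a
// Divided replica scheduling strategy.
func applicable(strategy string, gateOn bool, policy string) bool {
	return gateOn && policy == "PreemptLowerPriority" && strategy == "Divided"
}

func main() {
	// Table-driven coverage in the style of the unsupported-scenario cases.
	cases := []struct {
		name     string
		strategy string
		gateOn   bool
		policy   string
		want     bool
	}{
		{"Duplicated placement", "Duplicated", true, "PreemptLowerPriority", false},
		{"feature gate disabled", "Divided", false, "PreemptLowerPriority", false},
		{"PreemptionPolicy Never", "Divided", true, "Never", false},
		{"supported baseline", "Divided", true, "PreemptLowerPriority", true},
	}
	for _, c := range cases {
		got := applicable(c.strategy, c.gateOn, c.policy)
		fmt.Printf("%s: got=%v want=%v\n", c.name, got, c.want)
	}
}
```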

I am supportive of the overall direction, but I think the Phase 1 resource accounting, claim semantics, requeue behavior, and patch-failure handling should be clarified before we proceed to implementation.

@whitewindmills
Member

This is a complex and extensive proposal. Breaking it down into smaller, more manageable parts would likely expedite the review process.

PS: To be honest, I often forget the previous context halfway through reading it.

Member

@RainbowMango RainbowMango left a comment


Start working on it.
/assign

PS:
I'm not sure if we can reach a consensus in release 1.18 (DL: May 31st), but I do want to have it as soon as possible.

@karmada-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from rainbowmango. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@seanlaii
Contributor Author

@whitewindmills Thank you for the thorough review! I've addressed all seven points in the design document.

  1. Phase 1 victim accounting
    Good catch. The victim selection algorithm now operates on resource quantities instead of raw replica counts. The design sums freed resources across victims (victim.ReplicaRequirements * victim.replicas) and converts to preemptor-equivalent replicas using calculateFittingReplicas — which uses floor and min across resource types (the bottleneck resource determines how many preemptor replicas can fit). This is distinct from calculateReplicasFromResources (used for claim deductions, which uses ceil and max). The reprieve loop also operates on resource quantities, ensuring correct accounting when victims and preemptors have different per-replica sizes.

  2. Equal-priority claim deductions
    Agreed. I verified the Kubernetes source — RunFilterPluginsWithNominatedPods uses priority >= nominatedPodPriority && UID != nominatedPodUID. The design now aligns with this: claim.priority >= currentPriority && claim.bindingKey != currentBindingKey. This protects the preemptor's nominated capacity from both lower-priority and equal-priority bindings, while the self-exception ensures the preemptor is not penalized by its own claim when retrying.

  3. mayPreempt() requires explicit MaxGroups == 1
    Good suggestion — adopted. Both mayPreempt() (the SelectBestClusters retry gate) and isPreemptionApplicable() (the preemption gate) now require explicit MaxGroups == 1 in SpreadConstraints. The runtime selectedClusters == 1 fallback has been removed. Workloads must declare single-cluster intent explicitly to opt into preemption, consistent with component scheduling. Multiple clusters can still pass filters (the preemptor can have multiple candidates), but preemption is only attempted on the single cluster selected by SelectBestClusters. This avoids changing error paths for multi-cluster workloads.

  4. 5-minute preemption delay
    Agreed. This is being addressed in scheduler: requeue unschedulable bindings on cluster status changes #7369, which adds ResourceSummary change detection to requeue preemptors immediately when victim resources are freed. Updated the design document to reference this PR.

  5. Per-cluster eviction blast radius
    Added a "Per-cluster eviction granularity" paragraph documenting that GracefulEvictCluster removes all victim replicas on the target cluster, even if the preemptor only needs partial capacity. This is consistent with Kubernetes where pod preemption is all-or-nothing per pod. Partial-replica eviction would require:
    (1) a new API operation for graceful replica reduction,
    (2) GracefulEviction controller support for scaling down, and
    (3) a mechanism to prevent the victim's scheduler from scaling back up during the preemption window. This is deferred to a future proposal.

  6. Claim identity
    The claim key uses Kind/Namespace/Name from spec.Resource because Schedule() operates on ResourceBindingSpec, which does not carry binding metadata. Could you provide a specific scenario where this would be insufficient?
    The cross-API-group collision (same Kind name in different API groups) is already prevented at the ResourceBinding layer — GenerateBindingName(kind, name) produces the same binding name for both, so the detector cannot create separate bindings for them. If they can't have separate bindings, they can't have separate claims. Using the binding's ResourceBindingPermanentIDLabel would require passing binding metadata into Schedule(), which is a larger refactor. I've added a "Claim identity" paragraph in the design document explaining this rationale.

  7. Patch failure handling
    Agreed. The design now specifies that executePreemption returns the eviction count. When zero victims are successfully evicted, the caller clears the preemption claim immediately to avoid holding a stale reservation for the full 10-minute TTL. The preemptor still enters unschedulableBindings and retries normally.
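The equal-priority claim rule settled on in point 2 (`claim.priority >= currentPriority && claim.bindingKey != currentBindingKey`) can be sketched as a small predicate; the `claim` struct below is a minimal illustrative stand-in, not the actual claims store:

```go
package main

import "fmt"

// claim is a minimal stand-in for an in-memory preemption claim. A real
// claim would also carry the target cluster, reserved capacity, and a TTL.
type claim struct {
	priority   int32
	bindingKey string
}

// deductsFor reports whether the claimed capacity should be deducted when
// computing available capacity for a given binding. It mirrors the
// Kubernetes nominated-pod rule: equal- and lower-priority bindings are
// blocked from the claimed capacity, except the claim owner itself.
func (c claim) deductsFor(bindingPriority int32, bindingKey string) bool {
	return c.priority >= bindingPriority && c.bindingKey != bindingKey
}

func main() {
	c := claim{priority: 100, bindingKey: "ns/high-prio-rb"}
	fmt.Println(c.deductsFor(100, "ns/other-rb"))     // equal priority blocked: true
	fmt.Println(c.deductsFor(100, "ns/high-prio-rb")) // owner exempt: false
	fmt.Println(c.deductsFor(200, "ns/vip-rb"))       // higher priority unaffected: false
}
```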

@seanlaii
Contributor Author

This is a complex and extensive proposal. Breaking it down into smaller, more manageable parts would likely expedite the review process.

PS: To be honest, I often forget the previous context halfway through reading it.

Thanks for the feedback. I will try to break it down.

@seanlaii
Contributor Author

Start working on it. /assign

PS: I'm not sure if we can reach a consensus in release 1.18(DL: May 31st), but I do want to have it as soon as possible.

Thanks for the info and time! I have moved it to 1.19.

@seanlaii seanlaii force-pushed the binding-preemption branch 2 times, most recently from 3c06090 to 220d1fd Compare April 17, 2026 15:46

## Motivation

Karmada v1.13 introduced priority-based scheduling (`PriorityBasedScheduling` feature gate), which orders the scheduling queue by priority. However, a high-priority binding that arrives after cluster resources are consumed by low-priority bindings will remain pending indefinitely. This is especially painful for batch/AI workloads where GPU resources are scarce. Binding preemption closes this gap.
Member


By the way, are we ready to move this PriorityBasedScheduling to beta now?

If not, what else do we need to do?
@seanlaii @whitewindmills

Member


I think it’s up to user feedback.
I don't know if anyone has already applied it to production.

Member


@seanlaii is using it, but I don't know if he uses the same code as the upstream.

Contributor Author


We are using this feature in production. But also open to feedback from other users.

Member

@RainbowMango RainbowMango left a comment


Review is still ongoing.


Phase 1 preemption is attempted only when all of the following hold:
1. The binding is a workload (`IsWorkload() == true`).
2. The binding is single-component (`len(Components) <= 1`).
Member


Would you mind explaining why we restrict the number of components to 1 here?

I remember from your use case that you don't need it to support multiple components, but this might be a limitation in the future. Personally, I don't want the following implementation to be based on this forever. For instance, what if a user runs PyTorch workloads with multiple components?

Contributor Author


The main reason is that I want to focus on replica scheduling in this proposal while maintaining the extensibility to support component scheduling. The framework can be extended to support component scheduling, and I am happy to work on that, but I would prefer to put it in a separate proposal, as the current one is already fairly complex.
I have also updated the doc with more information in [Component Scheduling Path]: https://github.com/karmada-io/karmada/pull/7327/changes#:~:text=%23%23%23-,Component,-Scheduling%20Path.

Phase 1 preemption is attempted only when all of the following hold:
1. The binding is a workload (`IsWorkload() == true`).
2. The binding is single-component (`len(Components) <= 1`).
3. The binding uses `ClusterAffinity` (single affinity), not `ClusterAffinities` (preference-ordered list).
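Taken together with the `MaxGroups == 1` requirement agreed earlier in the thread, these conditions amount to a guard roughly like the following sketch. All field and function names here are illustrative stand-ins, not the real Karmada types:

```go
package main

import "fmt"

// bindingSpec is a simplified stand-in for the fields the applicability
// check would inspect; names are illustrative, not the real API.
type bindingSpec struct {
	isWorkload            bool
	componentCount        int
	usesClusterAffinities bool // preference-ordered list, unsupported in Phase 1
	maxGroups             int  // from SpreadConstraints; must be exactly 1
	preemptionPolicy      string
}

// preemptionApplicable gates Phase 1 preemption on the documented
// conditions: feature gate on, explicit opt-in, single-component workload,
// single ClusterAffinity, and explicit single-cluster spread intent.
func preemptionApplicable(s bindingSpec, featureGateEnabled bool) bool {
	return featureGateEnabled &&
		s.preemptionPolicy == "PreemptLowerPriority" &&
		s.isWorkload &&
		s.componentCount <= 1 &&
		!s.usesClusterAffinities &&
		s.maxGroups == 1
}

func main() {
	ok := preemptionApplicable(bindingSpec{
		isWorkload:       true,
		componentCount:   1,
		maxGroups:        1,
		preemptionPolicy: "PreemptLowerPriority",
	}, true)
	fmt.Println(ok) // prints true
}
```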
Member


Again, I wonder to know the reason why it can't support ClusterAffinities.

Contributor Author


Comment thread docs/proposals/scheduling/binding-preemption/binding-preemption.md Outdated
@seanlaii seanlaii force-pushed the binding-preemption branch from 220d1fd to 2168a11 Compare April 23, 2026 18:21
@seanlaii
Contributor Author

@RainbowMango @whitewindmills Thanks for your time reviewing. I have updated the document to make it shorter by removing some unnecessary information and implementation details.

Signed-off-by: seanlaii <qazwsx0939059006@gmail.com>
@seanlaii seanlaii force-pushed the binding-preemption branch from 2168a11 to 93cb0db Compare April 28, 2026 00:39