[Proposal] Binding preemption Design for Karmada scheduler#7327
seanlaii wants to merge 1 commit into karmada-io:master
Conversation
Summary of Changes (Gemini Code Assist)

This pull request introduces a comprehensive design proposal for binding-level preemption within the Karmada scheduler. The core purpose is to resolve resource contention by allowing higher-priority workloads to preempt lower-priority ones, preventing high-priority tasks from being indefinitely blocked. The proposal outlines a phased approach, beginning with a foundational summary-based preemption for single-cluster environments and planning for a more advanced estimator-based system in the future. This enhancement improves resource utilization and fairness for critical workloads.
Code Review
This pull request introduces a comprehensive proposal for binding-level preemption within the Karmada scheduler, specifically for single-cluster scheduling. The feature aims to allow high-priority ResourceBindings to evict lower-priority ones when cluster resources are insufficient. The proposal details a two-phase implementation, starting with summary-based preemption and progressing to estimator-based preemption for more precise victim selection. A review comment suggests enhancing the clarity of eviction messages for victim bindings by including the workload's kind, namespace, and name for better user understanding.
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##           master    #7327      +/-  ##
==========================================
- Coverage   42.04%   41.88%    -0.17%
==========================================
  Files         874      879        +5
  Lines       53544    54285      +741
==========================================
+ Hits        22515    22738      +223
- Misses      29341    29826      +485
- Partials     1688     1721       +33
```
@seanlaii, I see this PR marked with

PS: I can get back to this next week.
Thanks for the info! @whitewindmills @RainbowMango Please help review when you have a chance.
Pull request overview
Adds a design proposal for binding-level preemption in the Karmada scheduler, focused on enabling higher-priority ResourceBindings to reclaim cluster capacity from lower-priority ones in single-cluster scenarios.
Changes:
- Introduces a phased design (Phase 1 summary-based; Phase 2 estimator/node-simulation) for binding preemption.
- Proposes API additions (`PreemptionPolicy` under `SchedulePriority`) and a new feature gate (`PriorityBasedPreemptiveScheduling`).
- Describes scheduler flow changes, victim selection, and an in-memory "preemption claim" coordination mechanism.
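As a rough illustration of the proposed API shape, the sketch below shows how a `PreemptionPolicy` field might sit under `SchedulePriority`. The field names, value names, and types here are assumptions inferred from this summary, not the final API:

```go
package main

import "fmt"

// PreemptionPolicy describes whether a binding may preempt lower-priority
// bindings. The value names below are illustrative assumptions.
type PreemptionPolicy string

const (
	// PreemptLowerPriority allows evicting lower-priority bindings when
	// cluster resources are insufficient.
	PreemptLowerPriority PreemptionPolicy = "PreemptLowerPriority"
	// PreemptNever disables preemption for this binding.
	PreemptNever PreemptionPolicy = "Never"
)

// SchedulePriority holds the priority used for queue ordering, extended here
// with the proposed PreemptionPolicy field.
type SchedulePriority struct {
	Priority         int32
	PreemptionPolicy PreemptionPolicy
}

func main() {
	sp := SchedulePriority{Priority: 1000, PreemptionPolicy: PreemptLowerPriority}
	fmt.Printf("priority=%d policy=%s\n", sp.Priority, sp.PreemptionPolicy)
}
```

A binding whose policy is `Never` would still benefit from queue ordering but would not trigger evictions, mirroring the split Kubernetes makes for Pods.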
Thanks for the proposal! I'm still getting familiar with the codebase, so apologies if any of this is already covered somewhere.
Happy to be corrected if either of these is already handled!
Thanks for the great questions! These are both important considerations.

**Cross-namespace preemption**

In the current design, we follow the same approach here. That said, if namespace-scoped restriction becomes important based on real-world user feedback, it can be added as a configurable option.

**Namespace quota as a preemption trigger**

This is also a very good question! You're right: in the current design, preemption is only triggered by insufficient cluster resources, not by exhausted namespace quota. However, there is a subtle architectural difference worth noting:

This might be rare in practice (it requires both cluster capacity and namespace quota to be exhausted simultaneously), and the system is self-healing (evicted victims reschedule, the preemptor retries). Properly addressing it would require changes to the quota architecture (e.g., quota reservation at binding creation time), which is beyond the scope of this proposal. I've added both topics to the Risks and Limitations section. Thanks again for the thoughtful review!
/assign
@seanlaii Thanks for the detailed proposal. Overall, I think the direction makes sense: extending priority-based scheduling from queue ordering to binding-level preemption closes a real gap for scarce-resource workloads, especially GPU/batch scenarios. Before moving this forward, I think we should clarify and tighten several points.

**Major Design Risks / Gaps**

**Development Plan Suggestions**

I suggest splitting the implementation into smaller, reviewable pieces:

Phase 2 touches the estimator, interpreter path, and member-cluster pod indexing, so I would not bundle it with the Phase 1 implementation.

**Test Plan Feedback**

The proposed test plan is a good start. I think we should add coverage for these cases before considering the design complete:

I am supportive of the overall direction, but I think the Phase 1 resource accounting, claim semantics, requeue behavior, and patch-failure handling should be clarified before we proceed to implementation.
This is a complex and extensive proposal. Breaking it down into smaller, more manageable parts would likely expedite the review process.

PS: To be honest, I often forget the previous context halfway through reading it.
RainbowMango
left a comment
Start working on it.
/assign
PS:
I'm not sure if we can reach a consensus in release 1.18 (DL: May 31st), but I do want to have it as soon as possible.
[APPROVAL NOTIFIER] This PR is NOT APPROVED.
@whitewindmills Thank you for the thorough review! I've addressed all seven points in the design document.
Thanks for the feedback. I will try to break it down.
Thanks for the info and time! I have moved it to 1.19.
## Motivation
Karmada v1.13 introduced priority-based scheduling (`PriorityBasedScheduling` feature gate), which orders the scheduling queue by priority. However, a high-priority binding that arrives after cluster resources are consumed by low-priority bindings will remain pending indefinitely. This is especially painful for batch/AI workloads where GPU resources are scarce. Binding preemption closes this gap.
By the way, are we ready to move this PriorityBasedScheduling to beta now?
If not, what else do we need to do?
@seanlaii @whitewindmills
I think it’s up to user feedback.
I don't know if anyone has already applied it to production.
@seanlaii is using it, but I don't know if he uses the same code as the upstream.
We are using this feature in production, but we're also open to feedback from other users.
RainbowMango
left a comment
Review is still ongoing.
Phase 1 preemption is attempted only when all of the following hold:
1. The binding is a workload (`IsWorkload() == true`).
2. The binding is single-component (`len(Components) <= 1`).
Would you mind explaining why we restrict the number of components to 1 here?
I remember from your use case that you don't need support for multiple components, but this might become a limitation in the future. Personally, I don't want the implementation to be based on this restriction forever. For instance, what if a user runs PyTorch workloads with multiple components?
The main reason is that I want to focus on replica scheduling in this proposal while maintaining extensibility toward component scheduling. The framework can be extended to support component scheduling, and I am happy to work on it, but I would prefer to cover it in a separate proposal, as the current one is already fairly complex.
I have also updated the doc with more information in [Component Scheduling Path]: https://github.com/karmada-io/karmada/pull/7327/changes#:~:text=%23%23%23-,Component,-Scheduling%20Path.
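The Phase 1 eligibility gate discussed in this thread could be sketched as below. The `Binding` and `Placement` types and the helper name are simplified stand-ins for illustration, not Karmada's actual API:

```go
package main

import "fmt"

// ClusterAffinity selects a single set of candidate clusters.
type ClusterAffinity struct{ ClusterNames []string }

// Placement mirrors the single-affinity vs preference-ordered-list split.
type Placement struct {
	ClusterAffinity   *ClusterAffinity
	ClusterAffinities []ClusterAffinity
}

// Binding is a simplified stand-in for a ResourceBinding.
type Binding struct {
	Workload   bool
	Components []string
	Placement  Placement
}

func (b Binding) IsWorkload() bool { return b.Workload }

// eligibleForPhase1Preemption returns true only when all three Phase 1
// conditions hold: the binding is a workload, has at most one component,
// and uses a single ClusterAffinity rather than ClusterAffinities.
func eligibleForPhase1Preemption(b Binding) bool {
	if !b.IsWorkload() {
		return false
	}
	if len(b.Components) > 1 {
		return false
	}
	if b.Placement.ClusterAffinity == nil || len(b.Placement.ClusterAffinities) > 0 {
		return false
	}
	return true
}

func main() {
	b := Binding{
		Workload:   true,
		Components: []string{"main"},
		Placement:  Placement{ClusterAffinity: &ClusterAffinity{ClusterNames: []string{"member1"}}},
	}
	fmt.Println(eligibleForPhase1Preemption(b))
}
```

Bindings that fail any check would simply skip preemption and stay in the queue, so the gate narrows scope without changing behavior for existing workloads.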
Phase 1 preemption is attempted only when all of the following hold:
1. The binding is a workload (`IsWorkload() == true`).
2. The binding is single-component (`len(Components) <= 1`).
3. The binding uses `ClusterAffinity` (single affinity), not `ClusterAffinities` (preference-ordered list).
Again, I'd like to know the reason why it can't support ClusterAffinities.
@RainbowMango @whitewindmills Thanks for taking the time to review. I have updated the document to make it shorter by removing some unnecessary information and implementation details.
Signed-off-by: seanlaii <qazwsx0939059006@gmail.com>
What type of PR is this?
/kind documentation
/kind feature
What this PR does / why we need it:
This PR adds a design proposal for binding-level preemption in the Karmada scheduler.
Currently, priority-based scheduling only orders the queue — a high-priority binding that arrives after cluster resources are consumed by lower-priority bindings remains pending indefinitely. This proposal introduces preemption so the scheduler can evict lower-priority bindings to make room, scoped to single-cluster Divided scheduling in Phase 1, with estimator-based node-level simulation planned for Phase 2.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: