
Kubeflow Trainer ROADMAP

2026

  • Scheduling & Scalability
    • Workload-Aware Scheduling for TrainJobs: #3015
    • KAI Scheduler Integrations: #2628
    • Support Multi-Node NVLink (MNNVL) for TrainJob: #3264
    • First-Class Integration with Kueue for multi-cluster job dispatching, topology-aware scheduling, and other features.
    • Enhanced Scalability for Massively Distributed TrainJobs: #2318
  • MPI and HPC on Kubernetes
    • Flux Integration for MPI and HPC workloads: #2841
    • IntelMPI Support: #1807
    • PMIx Investigation with Flux or Slurm plugins
    • Enhance MPI Orchestration: #2751
  • Observability & Reliability
    • TrainJob Progress Tracking & Metrics Exposure: #2779
    • Transparent Checkpoint/Restore for GPU-Accelerated TrainJobs: #2245
    • TTLs and ActiveDeadlineSeconds for TrainJobs: #2899
    • Elastic TrainJobs: #2903
    • Add controller-level Prometheus metrics and ServiceMonitor: #3429
    • Default Grafana dashboard for Kubeflow Trainer: #3430
  • Distributed Data Cache
    • Tensor caching to accelerate GPU workloads: #3173
    • Integration with OptimizationJob
    • Explore RDMA with AI Schedulers and Data Cache
  • LLM Fine-Tuning Enhancements
    • Automatic configuration of GPU requests for TrainJobs: #3328
    • Build Dynamic BuiltinTrainers and LLM Fine-Tuning Blueprints: #2839
  • New Kubeflow Trainer Runtimes
    • Distributed JAX: #2442
    • Distributed XGBoost: #2598
    • Tensor Parallelism with Megatron-LM: #3178
    • Slurm Runtime Integration: #2249
  • Implement registration mechanism in the Pipeline Framework to extend plugins and supported ML frameworks in the Kubeflow Trainer: #2750
  • Kubeflow Trainer UI and TrainJob History Server: #2648
  • Integration with Kubeflow MCP Server: kubeflow/sdk#238
  • Enhance lifecycle management and mutability of Runtimes: #3428
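Several of the scheduling items above amount to letting an external scheduler such as Kueue or KAI Scheduler admit a TrainJob as a gang. As a minimal sketch, assuming the Kubeflow Trainer v2 TrainJob API, a runtime named `torch-distributed`, and a hypothetical LocalQueue named `team-a-queue` (none of these names are prescribed by this roadmap):

```yaml
# Illustrative only: runtime and queue names are assumptions.
apiVersion: trainer.kubeflow.org/v1alpha1
kind: TrainJob
metadata:
  name: llama-finetune
  labels:
    # Kueue admits the whole job against this LocalQueue.
    kueue.x-k8s.io/queue-name: team-a-queue
spec:
  runtimeRef:
    name: torch-distributed  # a ClusterTrainingRuntime installed by the admin
  trainer:
    numNodes: 4              # gang of 4 training nodes
```

With a first-class integration like the one proposed above, the scheduler can hold the TrainJob until all four nodes can be placed together, which is what topology-aware and workload-aware scheduling require.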

2025

  • Kubeflow Trainer v2 general availability: #2170
  • Local execution for Kubeflow Python SDK: kubeflow/sdk#22
  • Distributed in-memory data cache powered by Apache Arrow and Apache DataFusion: #2655
  • BuiltinTrainers for LLM Fine-Tuning
  • Training Runtime support
  • Elastic PyTorch training jobs: kubernetes-sigs/jobset#463
  • Gang-scheduling capability for TrainJob
  • Multi-cluster TrainJob dispatching with MultiKueue.
  • Topology-aware scheduling with Kueue.
  • Implement registration mechanism in the Pipeline Framework to extend plugins and supported ML frameworks in the Kubeflow Trainer: #2750
  • Enhanced MPI orchestration with SSH-based node communication: #2751
  • GPU testing infrastructure: #2432
  • Automated checkpointing for GPU-accelerated TrainJobs: #2245
  • Automation of Kubeflow Trainer releases: #2155
  • Kubeflow Trainer UI and TrainJob History Server: #2648

2023/2024

  • Training Operator V2
  • Enhance JobSet APIs for distributed training and fine-tuning
  • Kubeflow Training SDK improvements
  • Support for distributed JAX
  • Support for LLM Training runtimes
  • Python APIs for LLM fine-tuning
  • Consolidate MPI Operator V2 into Training Operator

2022

  • Release training-operator v1.4 for inclusion in the Kubeflow v1.5 release.
  • Migrate v2 MPI operator to unified operator.
  • Migrate PaddlePaddle operator to unified operator.
  • Support elastic training for additional frameworks besides PyTorch.
  • Support different gang scheduling definitions.
  • Improve test coverage.

2020 and 2021

Maintenance and reliability

We will continue to improve the reliability, scalability, and maintainability of the production distributed training experience provided by the operators.

  • Enhance maintainability of operator common module. Related issue: #54.
  • Migrate operators to use kubeflow/common APIs. Related issue: #64.
  • Graduate MPI Operator, MXNet Operator and XGBoost Operator to v1. Related issue: #65.

Features

To take advantage of the capabilities of job scheduler components, the operators will expose more APIs for advanced scheduling. More features will be added to simplify usage, such as dynamic volume support and GitOps experiences. To make the operators easier to adopt within the Kubeflow ecosystem, we will also add more KFP launcher components.

  • Support dynamic volume provisioning for distributed training jobs. Related issue: #19.
  • MLOps - Allow users to submit jobs using a Git repo without building container images. Related issue: #66.
  • Add Job priority and Queue in SchedulingPolicy for advanced scheduling in common operator. Related issue: #46.
  • Add pipeline launcher components for different training jobs. Related issue: pipeline#3445.
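The SchedulingPolicy item above (issue #46) concerns the `runPolicy.schedulingPolicy` block shared across the operators. As a minimal sketch of how a queue and priority could be attached to a job in that common block, assuming a PyTorchJob and hypothetical queue and PriorityClass names:

```yaml
# Illustrative only: queue and priorityClass values are assumptions.
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: example-job
spec:
  runPolicy:
    schedulingPolicy:
      minAvailable: 3          # gang size: schedule only when 3 pods fit
      queue: team-queue        # hypothetical scheduler (e.g. Volcano) queue
      priorityClass: high-prio # hypothetical PriorityClass name
  # pytorchReplicaSpecs omitted for brevity
```

The point of the common block is that priority and queue are expressed once, in `runPolicy`, rather than in per-framework fields.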

Monitoring

  • Provide a standardized logging interface. Related issue: #60.
  • Expose generic Prometheus metrics in the common operators. Related issue: #22.
  • Centralized Job Dashboard for training jobs (Add metadata graph, model artifacts later). Related issue: #67.

Performance

Continue to optimize reconciler performance and reduce the latency of reacting to CR events.

Quarterly Goals

Q1 & Q2

  • Better log support
    • Support log levels #1132
    • Log errors in events
  • Validating webhook #1016

Q3 & Q4

  • Better Volcano support
    • Support queue #916