
Commit 78e312a

NengXu001 and yifeililn committed
feat: integrate RL training with vLLM inference backend
Co-authored-by: yifeililn <yifeilin1202@qq.com>
1 parent 87e50ab commit 78e312a

File tree

3 files changed: +422 −50 lines


xtuner/v1/ray/config/worker.py

Lines changed: 7 additions & 0 deletions

@@ -143,6 +143,13 @@ class RolloutConfig(BaseModel):
             help="Number of GPUs allocated for each inference engine in the rollout worker.",
         ),
     ] = 1
+    data_parallel_size: Annotated[
+        int,
+        Parameter(
+            group=infer_group,
+            help="Number of GPUs allocated for processing data batches in parallel (Data Parallelism).",
+        ),
+    ] = 1
     expert_parallel_size: Annotated[
         int,
         Parameter(
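The hunk above adds a `data_parallel_size` field alongside the existing per-engine GPU count and `expert_parallel_size` on `RolloutConfig`. A minimal sketch of how such a rollout config might be sized, using stdlib dataclasses instead of the repo's pydantic `Annotated[..., Parameter]` pattern so it runs standalone; `gpus_per_engine` is a hypothetical stand-in for the truncated field that precedes `data_parallel_size` in the diff, and the GPU-sizing rule is an assumption, not taken from this commit:

```python
from dataclasses import dataclass

# Hedged sketch of the RolloutConfig fields touched by this commit.
# Field names other than data_parallel_size / expert_parallel_size
# are illustrative placeholders.
@dataclass
class RolloutConfigSketch:
    gpus_per_engine: int = 1        # GPUs allocated per inference engine (hypothetical name)
    data_parallel_size: int = 1     # parallel engine replicas over data batches
    expert_parallel_size: int = 1   # expert-parallel degree (MoE inference)

    def total_gpus(self) -> int:
        # A common sizing rule for data-parallel inference:
        # number of replicas times GPUs per replica.
        return self.data_parallel_size * self.gpus_per_engine


cfg = RolloutConfigSketch(gpus_per_engine=2, data_parallel_size=4)
print(cfg.total_gpus())  # 8
```

With `data_parallel_size > 1`, each replica serves an independent slice of the rollout batch, which is why the field is grouped with the other inference-parallelism knobs.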

0 commit comments
