Merge branch 'main' into add/llm-compressor-fp8-inference (fae346b)
Open

Make FP8 models quantized by llm-compressor inferable with TurboMind #4509
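For context on what this PR enables: llm-compressor typically produces an FP8 checkpoint from a quantization recipe. The fragment below is a minimal sketch of such a recipe; the stage/modifier layout, the `FP8_DYNAMIC` scheme name, and the layer lists are assumptions based on llm-compressor's general recipe format, not taken from this PR, so verify them against the llm-compressor documentation before use.

```yaml
# Hypothetical llm-compressor recipe for FP8 quantization (sketch only).
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      targets: ["Linear"]   # quantize Linear layers
      scheme: FP8_DYNAMIC   # FP8 weights, dynamic per-token activation scales
      ignore: ["lm_head"]   # keep the output head in higher precision
```

A checkpoint produced from a recipe like this is the kind of model this PR teaches the TurboMind backend to load and run.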


Annotations

1 warning
pr_functions_test: succeeded Apr 8, 2026 in 39m 15s