[PyTorch] Support scaled + clamped SwiGLU in te.ops and enable fused MXFP8 grouped MLP #15896

Annotations: 1 warning

Check: succeeded Apr 8, 2026 in 6s
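The operation named in the PR title, a SwiGLU activation with scaling and clamping, can be sketched as below. This is a minimal illustration, not the Transformer Engine implementation: the function name `scaled_clamped_swiglu` and the `scale`/`limit` parameters are hypothetical, and the exact clamping convention (which half is clamped, and to what range) is an assumption. Clamping the pre-activations bounds the dynamic range, which is helpful when the result is quantized to a low-precision format such as MXFP8.

```python
import torch
import torch.nn.functional as F

def scaled_clamped_swiglu(x: torch.Tensor, scale: float = 1.0, limit: float = 7.0) -> torch.Tensor:
    """Illustrative SwiGLU with clamped pre-activations and an output scale.

    Hypothetical sketch: the real fused kernel's clamping convention and
    parameter names may differ.
    """
    # Standard SwiGLU layout: split the last dim into gate and linear halves.
    gate, linear = x.chunk(2, dim=-1)
    # Clamp pre-activations to a bounded range (assumed convention; keeps
    # values representable in narrow floating-point formats).
    gate = gate.clamp(max=limit)
    linear = linear.clamp(min=-limit, max=limit)
    # SwiGLU: SiLU(gate) * linear, with an optional output scale.
    return scale * F.silu(gate) * linear
```

With this layout, an input of hidden size `2h` produces an output of hidden size `h`, since the gate and linear halves are multiplied elementwise.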