
Add MLX op handler for aten.isinf #18988

Open

Jah-yee wants to merge 1 commit into pytorch:main from Jah-yee:add-isinf-mlx-op-handler

Conversation


Jah-yee commented Apr 18, 2026

Good day

Summary

This PR adds a decomposed MLX op handler for aten.isinf to the pytorch/executorch project.

Motivation

The MLX delegate converts PyTorch aten ops into MLX graph nodes during export. When an aten op has no handler, it falls back to CPU execution, breaking the GPU acceleration pipeline. Adding a handler for aten.isinf enables it to run on Metal GPU via MLX.

Implementation

The handler uses a decomposed approach:

isinf(x) = abs(x) == inf

This reuses AbsNode and EqualNode, which the delegate already supports, avoiding the need for a dedicated MLX isinf op.
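
For reference, a condensed sketch of the handler. Helper names such as require_args, require_kwargs, make_tmp_slot, and emit_lifted_constant mirror utilities referenced in the review below; their exact signatures here are approximations, not the actual diff:

    # Sketch only: helper signatures approximate existing
    # backends/mlx/ops.py utilities and may not match exactly.
    def _isinf_handler(P, node):
        (x,) = require_args(node, 1)      # aten.isinf takes a single tensor
        require_kwargs(node, [])          # and no keyword arguments

        abs_slot = make_tmp_slot(P)       # intermediate slot for abs(x)
        P.nodes.append(AbsNode(input=x, output=abs_slot))

        # abs() maps -inf to +inf, so one comparison catches both signs.
        inf_slot = emit_lifted_constant(P, float("inf"), torch.float32)

        out_slot = make_tmp_slot(P)
        P.nodes.append(EqualNode(lhs=abs_slot, rhs=inf_slot, output=out_slot))
        return out_slot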

Changes

  • backends/mlx/ops.py: Added _isinf_handler function registered for torch.ops.aten.isinf.default
  • backends/mlx/test/test_ops.py: Added isinf to _UNARY_OP_TESTS with the standard test configuration (sketched below)
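
The test entry is along these lines; the field names are guesses based on the review below, where _SHAPES_3 and _inf_input_fn are cited:

    # Approximate shape of the new _UNARY_OP_TESTS entry:
    {
        "op_name": "isinf",
        "op": torch.ops.aten.isinf.default,
        "shapes": _SHAPES_3,
        "dtypes": [torch.float32, torch.float16, torch.bfloat16],
        "input_fn": _inf_input_fn,   # produces tensors containing +/-inf
    },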

Testing

The handler can be tested with:

python -m executorch.backends.mlx.test.run_all_tests -k isinf

Thank you for your attention. If there are any issues or suggestions, please leave a comment and I will address them promptly.

Warmly,
RoomWithOutRoof


pytorch-bot Bot commented Apr 18, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18988

Note: Links to docs will display an error until the docs builds have been completed.

⚠️ 12 Awaiting Approval

As of commit 810bc80 with merge base 490ec5c:

AWAITING APPROVAL - The following workflows need approval before CI can run:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Apr 18, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Commit 810bc80: Implement isinf op handler for the MLX delegate backend.

isinf(x) is decomposed as abs(x) == inf, using existing AbsNode
and EqualNode which are already supported in the MLX graph schema.

The handler also includes a corresponding test case with _inf_input_fn
that generates inputs containing both positive and negative infinity.

Fixes: pytorch#18922
@nil-is-all (Contributor) commented

@claude review


claude Bot commented Apr 20, 2026

Claude finished @nil-is-all's task in 1m 31s.


Reviewing PR #18988: Add MLX op handler for aten.isinf

  • Gather context and read changed files
  • Review backends/mlx/ops.py changes
  • Review backends/mlx/test/test_ops.py changes
  • Post review feedback

Review Summary

The decomposition approach (isinf(x) = abs(x) == inf) is sound and follows the same pattern as the existing _isnan_handler. The implementation correctly uses AbsNode and EqualNode, and the test coverage is appropriate. A few items to address:


Issues

1. Formatting: # activations comment merged onto the isinf line (backends/mlx/test/test_ops.py:4130)

The isinf entry was inserted before the # activations comment, but the comment ended up appended to the end of the isinf line instead of staying on its own line:

# Before (line 4129-4131 in original):
    {"op_name": "isnan", ...},
    # activations
    {"op_name": "relu", ...},

# After (current):
    {"op_name": "isnan", ...},
    {"op_name": "isinf", ...},    # activations
    {"op_name": "relu", ...},

The # activations section comment should remain on its own line to preserve the visual grouping:

    {"op_name": "isnan", ...},
    {"op_name": "isinf", ...},
    # activations
    {"op_name": "relu", ...},


2. Hardcoded torch.float32 for the inf constant (backends/mlx/ops.py:468)

inf_slot = emit_lifted_constant(P, float('inf'), torch.float32)

The inf constant is always created as float32, even when the input tensor x may be float16 or bfloat16. This works because EqualNode likely handles type promotion internally, but it's worth verifying this is intentional and doesn't cause issues for half-precision inputs. The test does cover float16 and bfloat16, so if CI passes this is likely fine — but a comment noting the intentional choice or using the input's dtype would make this clearer.
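
A possible tweak, assuming the input slot exposes its dtype (a sketch, not part of the current diff):

    # Match the constant's dtype to the input instead of hardcoding float32
    # (assumes x carries a dtype attribute; not confirmed by the diff):
    inf_slot = emit_lifted_constant(P, float("inf"), x.dtype)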

3. Minor: _inf_input_fn overwrites some +inf with -inf (backends/mlx/test/test_ops.py:4029-4032)

mask_pos = torch.rand(shape) > 0.8   # ~20% set to +inf
mask_neg = torch.rand(shape) > 0.9   # ~10% set to -inf (may overwrite +inf)
x[mask_pos] = float('inf')
x[mask_neg] = float('-inf')

Since mask_neg is applied after mask_pos, some positions marked +inf will be overwritten to -inf. This isn't a bug per se (both infinities should be detected), but the effective distribution is slightly different from what the thresholds suggest. The existing _nan_input_fn uses a single mask, which is cleaner. Consider using non-overlapping masks if the intent is to have distinct proportions, or just note that overlap is acceptable.
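
For instance, a single random draw with disjoint thresholds keeps the proportions exact (a sketch of the alternative, not the current code):

    r = torch.rand(shape)
    x[r > 0.9] = float("-inf")                 # top ~10% become -inf
    x[(r > 0.8) & (r <= 0.9)] = float("inf")   # next ~10% become +inf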


Looks Good

  • The decomposition abs(x) == inf is mathematically correct and handles both +inf and -inf.
  • Properly uses make_tmp_slot() for the intermediate abs result, matching codebase conventions.
  • Correctly validates args/kwargs with require_args and require_kwargs.
  • Good placement under the "Numerical checks" section alongside _isnan_handler.
  • Test configuration with _SHAPES_3 and multiple float dtypes provides reasonable coverage.

Issue (1) is the only one I'd consider a must-fix before merge. The # activations comment being on its own line is part of the existing code organization pattern in that file, and the current diff breaks that grouping.
