
Add MLX op handler for aten.hardtanh #18986

Closed
Jah-yee wants to merge 1 commit into pytorch:main from Jah-yee:roomwithoutroof/add-mlx-hardtanh

Conversation


@Jah-yee Jah-yee commented Apr 18, 2026

Good day

This PR adds support for aten.hardtanh in the MLX delegate backend, addressing issue #18921.

Summary

Add a decomposed handler for aten.hardtanh that uses the existing ClipNode to implement bounded activation on Metal GPU.

Changes

  • Add _hardtanh_handler in backends/mlx/ops.py using the decomposition: hardtanh(x, min_val, max_val) = clamp(x, min_val, max_val)
  • Default values: min_val=-1, max_val=1
  • Add test coverage for hardtanh operation
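The decomposition is simple enough to sanity-check outside the backend. The following is a pure-Python sketch of the intended semantics only; the actual handler wires up ClipNode, whose API is not shown in this PR:

```python
def hardtanh_ref(x, min_val=-1.0, max_val=1.0):
    """Reference semantics: hardtanh(x) = clamp(x, min_val, max_val)."""
    return max(min_val, min(x, max_val))

# Default bounds clip to [-1, 1]
print([hardtanh_ref(v) for v in (-2.0, -0.5, 0.0, 0.5, 2.0)])
# -> [-1.0, -0.5, 0.0, 0.5, 1.0]

# Custom bounds behave like clamp with those bounds
print(hardtanh_ref(5.0, min_val=0.0, max_val=4.0))
# -> 4.0
```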

Testing

python -m executorch.backends.mlx.test.run_all_tests -k hardtanh

References

Thank you for your attention. If there are any issues or suggestions, please leave a comment and I will address them promptly.

Warmly,
RoomWithOutRoof

cc @metascroy

Adds a decomposed handler for aten.hardtanh that uses the existing
ClipNode to implement bounded activation on Metal GPU.

hardtanh(x, min_val=-1, max_val=1) = clamp(x, min_val, max_val)

Fixes: pytorch#18921

pytorch-bot Bot commented Apr 18, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18986

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 19 New Failures, 2 Cancelled Jobs, 1 Pending

As of commit bed3054 with merge base 9ca0ff1:

NEW FAILURES - The following jobs have failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 18, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@nil-is-all nil-is-all requested a review from metascroy April 20, 2026 18:55
@nil-is-all nil-is-all added the module: mlx Issues related to MLX Backend: Metal-accelerated inference on Apple Silicon label Apr 20, 2026
@nil-is-all
Contributor

@claude review


claude Bot commented Apr 20, 2026

Claude finished @nil-is-all's task in 3m 26s


Review of PR #18986: Add MLX op handler for aten.hardtanh

  • Read changed files and gather context
  • Review existing op handler patterns in MLX backend
  • Provide review feedback

Issue: Unrelated bitwise_or change included

The diff includes an unrelated addition at backends/mlx/ops.py:457:

([torch.ops.aten.bitwise_or.default], BitwiseOrNode, "aten.bitwise_or", False),

This will fail at import time because:

  1. BitwiseOrNode is not imported in the import block (lines 32-153)
  2. BitwiseOrNode does not exist in the FlatBuffers schema (backends/mlx/serialization/schema.fbs)
  3. No corresponding handler or schema definition has been added

This line must be removed from this PR. If bitwise_or support is desired, it needs its own PR with the full schema definition, code generation, import, and tests.



_hardtanh_handler review (backends/mlx/ops.py:2855-2885)

Correctness: The decomposition hardtanh(x, min_val, max_val) = clamp(x, min_val, max_val) is correct, and reusing ClipNode is the right approach.

Issue 1 — Silent fallback on missing metadata (line 2870)

dtype = x_meta.dtype if x_meta is not None else torch.float32

The adjacent _clamp_handler (lines 2824-2825) and _relu_handler (lines 2684-2685) both raise ValueError when metadata is missing rather than silently defaulting. This handler should follow the same pattern for consistency and to avoid subtle bugs if metadata is unexpectedly absent:

x_meta = n.args[0].meta.get("val")
if x_meta is None:
    raise ValueError("Input tensor metadata not found for hardtanh")
dtype = x_meta.dtype


Issue 2 — Verbose docstring (lines 2857-2861)

Per the repo's CLAUDE.md, code should be self-documenting with minimal comments. The single-line decorator + function name already conveys what this does. The existing _clamp_handler docstring is somewhat longer, but its semantics are less obvious (optional min/max, None handling). For hardtanh, a one-liner would suffice or the docstring could be removed entirely since the decomposition is trivial.

This is a nit — not blocking.


Test coverage (backends/mlx/test/test_ops.py:4103)

The test uses the unary op test harness, which exercises the default min_val=-1, max_val=1 behavior via torch.nn.functional.hardtanh (which uses default arguments). This is reasonable for a first pass, but note that custom min_val/max_val values are not tested. If the handler is intended to support non-default bounds (which it is, given the argument parsing), a dedicated test case with custom bounds would strengthen coverage.

This is minor — not blocking.
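A custom-bounds case could be sketched roughly as follows. This is a hypothetical, harness-free sketch (the repo's MLX test utilities are not shown in this diff), using a plain-Python reference for the decomposition rather than the real backend:

```python
def hardtanh(x, min_val=-1.0, max_val=1.0):
    # Reference decomposition under test: clamp(x, min_val, max_val)
    return max(min_val, min(x, max_val))

def test_hardtanh_custom_bounds():
    cases = [
        # (input, min_val, max_val, expected)
        (-3.0, -2.0, 2.0, -2.0),  # clipped at custom lower bound
        (0.5, -2.0, 2.0, 0.5),    # passes through unchanged
        (3.0, 0.0, 6.0, 3.0),     # inside custom range
        (7.0, 0.0, 6.0, 6.0),     # clipped at custom upper bound
    ]
    for x, lo, hi, expected in cases:
        assert hardtanh(x, lo, hi) == expected

test_hardtanh_custom_bounds()
```

In the real suite, the same case table would drive the MLX handler through the backend's test harness instead of the reference function.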


Summary

| Item | Severity |
| --- | --- |
| BitwiseOrNode line must be removed (breaks import) | Blocking |
| Missing metadata should raise, not silently default | Medium |
| Verbose docstring | Nit |
| No test for custom min/max bounds | Minor |

The core hardtanh implementation is correct and well-structured. The main blocker is the unrelated bitwise_or line, which will fail at import time.

@metascroy
Contributor

Overall looks great @Jah-yee! Can you address the comment on testing the min/max value bounds?

Also copy/paste the output of your test run (python -m executorch.backends.mlx.test.run_all_tests -k hardtanh) in PR summary.


@metascroy metascroy left a comment


Add test cases for min/max value, and paste test output in PR summary :)

@Jah-yee Jah-yee closed this Apr 24, 2026
@Jah-yee Jah-yee deleted the roomwithoutroof/add-mlx-hardtanh branch April 24, 2026 07:51
@nil-is-all
Contributor

@Jah-yee, are you opening another PR to address this?



Development

Successfully merging this pull request may close these issues.

Good First Issue: Add MLX Op Handler for aten.hardtanh

3 participants