Generic numeric debugging (#19317)
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19317.
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV.
❌ 4 New Failures, 2 Unrelated Failures as of commit cc5b5a0 with merge base bd5752a.
- NEW FAILURES: the following jobs have failed.
- FLAKY: the following job failed but was likely due to flakiness present on trunk.
- BROKEN TRUNK: the following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@metascroy has exported this pull request. If you are a Meta employee, you can view the originating Diff in D103956056.
@metascroy has imported this pull request. If you are a Meta employee, you can view this in D103956056.
```python
specs: list[TapSpec] = []
new_tap_nodes: list[fx.Node] = []

for node in candidate_nodes:
```
What happens if the delegate has to fuse some candidate nodes? Try lowering conv2d --> batch_norm, where XNNPACK doesn't support standalone batch_norm IIRC.
Also, how is it better than forced single-op partitions?
This is primarily to help RL in a numeric debugging investigation. I may clean it up to make it a generic utility if they find it useful, but will likely go through design review to get input from others if I do that.
To answer your question on fused candidates: this is tested with CoreML's quantized linear pattern [dequantize -> linear]. In that case, we tap the intermediate output after the linear node, which is actually a quantized linear in both eager (from the QDQ pattern) and CoreML (from its internal fusion). In the [conv2d --> batch_norm] case, I'd have to check. Tapping batch_norm should mean we want the output of batch_norm, i.e., the intermediate output after batch_norm, which can be the result of a fused [conv2d --> batch_norm] op. If we did forced single-op partitions on conv2d and batch_norm separately, we wouldn't get the fusion.
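For illustration, a minimal sketch of the tapping idea (not the PR's actual implementation; the helper name is assumed, while `candidate_nodes` comes from the diff context above): each candidate node's value is appended to the graph outputs, so the post-fusion intermediate survives delegation as part of the same graph and can later be compared against eager mode.

```python
# Minimal sketch, assuming an fx.GraphModule and a precomputed list of
# candidate nodes; not the PR's implementation. Tapped intermediates are
# exposed as extra graph outputs so they remain observable after lowering.
from torch import fx

def tap_intermediates(gm: fx.GraphModule, candidate_nodes: list[fx.Node]) -> fx.GraphModule:
    output_node = next(n for n in gm.graph.nodes if n.op == "output")
    original_outputs = output_node.args[0]
    if not isinstance(original_outputs, (tuple, list)):
        original_outputs = (original_outputs,)
    # Append each tapped intermediate after the model's real outputs.
    output_node.args = ((*original_outputs, *candidate_nodes),)
    gm.graph.lint()
    gm.recompile()
    return gm
```

In this sketch the delegate still owns the whole partition; the extra outputs just make post-fusion intermediates (e.g., the output after the fused quantized linear) observable.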
> how is it better than forced single-op partitions?
One reason is that single-op partitions destroy the fusions that backends perform. Here we tap intermediates, so as long as we tap the final intermediate after a fusion pattern, we should be good.
A second reason is that this approach keeps the same big delegate blob, just with extra outputs. In CoreML's case that means the partition will still be routed to the ANE, whereas if you break it up into single-op partitions, it will very likely be rerouted to the CPU because the partitions are small.
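To make the comparison step concrete, a hedged sketch follows. It assumes the tapped outputs are collected into dictionaries keyed by node name (an assumption, not the PR's API) and uses an SQNR-style metric per tap to localize where delegate numerics diverge from eager.

```python
# Illustrative comparison step (names and data layout assumed, not the PR's API):
# compute SQNR between each eager intermediate and the corresponding tapped
# delegate output; higher SQNR means closer numerical agreement.
import torch

def sqnr(reference: torch.Tensor, candidate: torch.Tensor) -> float:
    signal = torch.mean(reference.float() ** 2)
    noise = torch.mean((reference.float() - candidate.float()) ** 2)
    return float(10 * torch.log10(signal / (noise + 1e-12)))

def compare_taps(eager_taps: dict[str, torch.Tensor],
                 delegate_taps: dict[str, torch.Tensor]) -> dict[str, float]:
    return {name: sqnr(eager_taps[name], delegate_taps[name]) for name in eager_taps}
```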
metascroy force-pushed the branch: 7ad210a → f8d3715 → c4fc2e5 → 74941da → cc5b5a0. Each push carried the same summary: Pull Request resolved: pytorch#19317; Differential Revision: D103956056; Pulled By: metascroy.
Summary: Pull Request resolved: #19317
Differential Revision: D103956056
Pulled By: metascroy