ARROW-2917: [Python] Use detach() to avoid PyTorch gradient errors #2311
alok wants to merge 2 commits into apache:master
Conversation
`detach()` doesn't copy data unless it has to, and PyTorch raises a RuntimeError if `numpy()` is called on a tensor whose gradient still needs to be calculated, so the conversion must go through `detach()` first.
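As a minimal illustration of the failure mode this one-word patch avoids (a sketch, not code from the PR; the exact error message wording varies across PyTorch versions):

```python
import torch

t = torch.ones(3, requires_grad=True)

try:
    # Raises something like: "RuntimeError: Can't call numpy() on
    # Variable that requires grad. Use var.detach().numpy() instead."
    t.numpy()
except RuntimeError as e:
    print(e)

# detach() returns a tensor that shares storage with `t` but is cut
# off from the autograd graph, so numpy() succeeds without copying.
arr = t.detach().numpy()
```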
Thanks for your contribution. See https://github.com/apache/arrow/blob/master/.github/CONTRIBUTING.md#how-to-contribute-patches for how to contribute patches.
Codecov Report
@@ Coverage Diff @@
## master #2311 +/- ##
==========================================
+ Coverage 84.38% 86.76% +2.38%
==========================================
Files 293 237 -56
Lines 44786 42045 -2741
==========================================
- Hits 37791 36482 -1309
+ Misses 6964 5563 -1401
+ Partials 31 0 -31

Continue to review the full report at Codecov.
@kou I'd rather not make a JIRA account for a one-word change. Would you mind adding the patch? It does pass the tests.
Is there any way to trigger the problem that this fixes? Having an ASF JIRA account is a good idea; at some point you may find yourself contributing a larger patch to an Apache project.
@wesm Here's a script that triggers it for me: https://gist.github.com/c683d79cbc4b5c51282df983d81856ae
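The gist itself isn't reproduced here, but a trigger along these lines hits the error. The registration call is assumed from the era's `pyarrow.serialization` module and may differ by version:

```python
import pyarrow as pa
import torch

# Assumption: pyarrow's optional torch serialization handlers must be
# registered on a serialization context before torch tensors roundtrip.
context = pa.default_serialization_context()
pa.register_torch_serialization_handlers(context)

t = torch.ones(5, requires_grad=True)

# Before this patch the torch handler called obj.numpy() directly,
# which raises RuntimeError for tensors that require grad.
pa.serialize(t, context=context)
```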
Commit: …s on master (Change-Id: I48a985a9661949f852d8d7951564b742682b8c2b)
I added a unit test; all you had to do was try serializing a tensor with `requires_grad=True`.
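A hypothetical shape for such a test (the name and assertions are assumptions, not copied from the PR):

```python
import numpy as np
import pyarrow as pa
import torch

def test_serialize_tensor_requiring_grad():
    # Hypothetical test along the lines of the one added in this PR.
    context = pa.default_serialization_context()
    pa.register_torch_serialization_handlers(context)

    t = torch.ones(5, requires_grad=True)
    buf = pa.serialize(t, context=context).to_buffer()
    result = pa.deserialize(buf, context=context)

    # The roundtripped tensor no longer carries autograd state,
    # but the values must survive intact.
    np.testing.assert_array_equal(t.detach().numpy(), result.numpy())
```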
@wesm Thanks for adding the test that I really should've added.