Conversation

chrisrosner

Addresses #10399

This was the minimal set of changes I could figure out to get HypergraphConv to compile with TorchScript.

Included are unit tests demonstrating the problems.

The following issues were resolved (a sketch of the corresponding fixes follows the list):
- the type of alpha changing from None to Tensor
- TorchScript not understanding the import of functional as F
- the non-attention branch not defining some member variables
- the default value for dropout being the wrong type
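
To make these concrete, here is a minimal sketch of the TorchScript-friendly patterns the fixes correspond to. It is not the actual PR diff: MyConv is an illustrative stand-in for HypergraphConv, and its members are hypothetical.

from typing import Optional

import torch
import torch.nn.functional as F  # module-level alias that TorchScript can resolve
from torch import Tensor


class MyConv(torch.nn.Module):  # illustrative stand-in, not the real HypergraphConv
    def __init__(self, use_attention: bool = False, dropout: float = 0.0):
        super().__init__()
        self.use_attention = use_attention
        self.dropout = dropout  # float default rather than an int like 0
        # Defined regardless of branch so TorchScript always sees these attributes:
        self.heads = 1
        self.concat = True

    def forward(self, x: Tensor) -> Tensor:
        # Annotated as Optional because its type changes from None to Tensor:
        alpha: Optional[Tensor] = None
        if self.use_attention:
            alpha = F.softmax(x, dim=-1)
        if alpha is not None:
            x = F.dropout(x * alpha, p=self.dropout, training=self.training)
        return x

With these annotations, torch.jit.script(MyConv()) should compile the sketch, since every attribute and the possible types of alpha are statically known.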

Test Plan

pytest -k hyper
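
For context, the kind of check such a test performs might look like the following hedged sketch; the real tests live in PyG's test suite and their exact contents may differ, and the channel sizes here are arbitrary.

import torch
from torch_geometric.nn import HypergraphConv

in_channels, out_channels = 16, 32
conv = HypergraphConv(in_channels, out_channels)
script = torch.jit.script(conv)  # fails before this PR, should compile with it

x = torch.randn(4, in_channels)
hyperedge_index = torch.tensor([[0, 1, 2], [0, 0, 1]])
assert script(x, hyperedge_index).size() == (4, out_channels)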


codecov bot commented Aug 11, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 85.94%. Comparing base (c211214) to head (a6bf19f).
⚠️ Report is 94 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #10400      +/-   ##
==========================================
- Coverage   86.11%   85.94%   -0.18%     
==========================================
  Files         496      502       +6     
  Lines       33655    35130    +1475     
==========================================
+ Hits        28981    30191    +1210     
- Misses       4674     4939     +265     

☔ View full report in Codecov by Sentry.

Member

wsad1 commented Aug 28, 2025

@chrisrosner thanks for the work.
Could you enable "Maintainers are allowed to edit this pull request" for this PR? We need to merge master so that the tests pass.

@akihironitta changed the title from "HypergraphConv TorchScript Compilation Compatibility #10399" to "HypergraphConv TorchScript Compilation Compatibility" on Aug 30, 2025
Comment on lines +55 to +58
output = script(torch.randn(4, in_channels),
                torch.tensor([[0, 1, 2], [0, 0, 1]]),
                hyperedge_attr=torch.randn(2, in_channels))
assert output.size() == (4, out_channels * conv.heads)
Member

Suggested change
- output = script(torch.randn(4, in_channels),
-                 torch.tensor([[0, 1, 2], [0, 0, 1]]),
-                 hyperedge_attr=torch.randn(2, in_channels))
- assert output.size() == (4, out_channels * conv.heads)
+ x = torch.randn(4, in_channels)
+ out_1 = conv(x, torch.tensor([[0, 1, 2], [0, 0, 1]]),
+              hyperedge_attr=torch.randn(2, in_channels))
+ out_2 = script(x,
+                torch.tensor([[0, 1, 2], [0, 0, 1]]),
+                hyperedge_attr=torch.randn(2, in_channels))
+ assert torch.allclose(out_1, out_2)

Also add the onlyFullTest decorator; a hedged sketch of what that could look like follows below.

Good to merge after this.
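
For reference, a hedged sketch of the decorated comparison test; the test name is hypothetical, and the import assumes onlyFullTest comes from torch_geometric.testing as in other PyG tests.

import torch
from torch_geometric.nn import HypergraphConv
from torch_geometric.testing import onlyFullTest


@onlyFullTest
def test_hypergraph_conv_torchscript():
    in_channels, out_channels = 16, 32
    conv = HypergraphConv(in_channels, out_channels)
    script = torch.jit.script(conv)

    x = torch.randn(4, in_channels)
    hyperedge_index = torch.tensor([[0, 1, 2], [0, 0, 1]])
    assert torch.allclose(conv(x, hyperedge_index), script(x, hyperedge_index))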
