Add support for closestPointToPoint in WebGPU #761
base: master
Conversation
Introduces a new WebGPU generation mode for SDFs, alongside the existing CPU and WebGL options. The actual shader code is a stub for now.
Adds a compute shader to generate a signed distance field for a sphere. The SDF is currently a simple sphere distance, but this lays the groundwork for more complex BVH-based SDF generation.
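For illustration, a minimal WGSL sketch of what such a sphere-distance compute shader could look like; the binding layout, struct, and names (`params`, `sdfTex`, `main`) are assumptions for this sketch, not the PR's actual code:

```wgsl
struct Params {
	radius : f32,
	resolution : u32,
}

@group( 0 ) @binding( 0 ) var<uniform> params : Params;
@group( 0 ) @binding( 1 ) var sdfTex : texture_storage_3d<r32float, write>;

@compute @workgroup_size( 8, 8, 1 )
fn main( @builtin( global_invocation_id ) id : vec3u ) {

	if ( any( id >= vec3u( params.resolution ) ) ) {

		return;

	}

	// map the voxel index to a point in [ -1, 1 ]^3
	let uvw = ( vec3f( id ) + 0.5 ) / f32( params.resolution );
	let point = uvw * 2.0 - 1.0;

	// signed distance to a sphere centered at the origin:
	// negative inside the sphere, positive outside
	let dist = length( point ) - params.radius;
	textureStore( sdfTex, id, vec4f( dist, 0.0, 0.0, 0.0 ) );

}
```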
Integrates BVH-based closest-point-to-point search into the WebGPU SDF generation example. This includes:
- Adding new storage buffer bindings for BVH nodes, triangle indices, and vertex positions.
- Porting the `_bvhClosestPointToPoint`, `distanceSqToBounds`, and `closestPointToTriangle` functions to WGSL (a sketch of one such helper follows below).
- Updating the SDF compute shader to use the BVH for distance calculation.
- Populating the new buffers with BVH and mesh data.
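To give a flavor of the WGSL port, here is a hedged sketch of a squared-distance-to-bounds helper; the signature and the idea of passing the node bounds as explicit min/max vectors are illustrative assumptions, not the actual ported code:

```wgsl
// Sketch only: squared distance from a point to an axis-aligned
// bounding box, zero when the point is inside the box. The real port
// would read the node bounds out of the BVH storage buffer instead of
// taking them as parameters.
fn distanceSqToBounds( point : vec3f, boundsMin : vec3f, boundsMax : vec3f ) -> f32 {

	// clamp the point into the box and measure the squared
	// distance to the clamped point
	let clamped = clamp( point, boundsMin, boundsMax );
	let delta = point - clamped;
	return dot( delta, delta );

}
```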
…WebGPU

# Conflicts:
#	package-lock.json
#	package.json
Amazing work, thanks! To your questions:
I think for the sake of readability and demonstration to new users it would be best to keep them separate. I'm imagining an example with the "webgpu_" prefix similar to the path tracing example (we can leave the CPU-generation method off).
Can you explain the motivation? Are you building compute shaders without nodes? I'm wondering if this is a feature that would be better requested in three.js nodes, instead?
On my machine I'm seeing a ~3-6x speedup: ~740ms -> ~120ms with default parameters and ~3600ms -> ~1230ms with max resolution. The timing method looks right - the WebGL version requires generating each 3d texture layer separately and then copying it into the 3d texture due to some errors when writing to 3d textures (see #720), so it's expected that there will be some overhead. Though I agree the amount is quite high.
Yes, it's expected. There were some changes made to three.js compute in service of #756 that were first published with r179.
Got it fixed! There were some typing fixes that were needed - I've made an issue in the types repo to ask whether this is something that can be addressed automatically: three-types/three-ts-types#1812.
```wgsl
let splitAxis = boundsInfo.x & 0x0000ffffu;
let rightIndex = 4u * boundsInfo.y / 32u;
let leftToRight = distanceSqToBVHNodeBoundsPoint( point, bvh, leftIndex ) < distanceSqToBVHNodeBoundsPoint( point, bvh, rightIndex );//rayDirection[ splitAxis ] >= 0.0;
```
Is this comment just left over? Or is it still needed?
//rayDirection[ splitAxis ] >= 0.0;
```js
const MAX_RESOLUTION = 200;
const MIN_RESOLUTION = 10;
const WORKGROUP_SIZE = [ 16, 16, 1 ];
```
It will take some tuning but it may be worth adjusting these values to find the best combination for performance. The "best" values can differ per hardware, but in the WebGPU ray tracing demo, for example, 8x8x1 was more performant for me than 16x16x1. And given that we're generating a 3d texture in this case it may be best to dispatch a 3d workgroup - e.g. something like 4x4x4 or 8x8x4. Though maybe you've already tried this and landed on these values 😁
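For reference, a small sketch of how a 3d workgroup size pairs with the shader side and the host-side dispatch count; the entry point name is illustrative, not taken from the PR:

```wgsl
// Sketch: a 4x4x4 workgroup means each workgroup covers a 4^3 block of
// voxels, so the host dispatches ceil( resolution / 4 ) workgroups per
// axis, e.g. ceil( 200 / 4 ) = 50 at the max resolution above.
@compute @workgroup_size( 4, 4, 4 )
fn main( @builtin( global_invocation_id ) id : vec3u ) {

	// one invocation per voxel of the 3d SDF texture

}
```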
I've thought about it for a while, and maybe it's not a good idea to add the ability to extract shader code out of nodes. If you can do anything in three.js with compute nodes that raw WebGPU can do, then there is no point in using raw WebGPU. With WebGL this is different: at work I often need to use the raw context to achieve some complex effects.
Ok, will extract it into a new example then.
I think one reason to do this would be to enable non-three.js platforms to more easily use the compute code without having to import a full three.js system. The particular cases we're talking about here aren't using the three.js Nodes system fully, so it may be more reasonable to separate them (though Nodes does make things like managing dependencies between snippets significantly more ergonomic). Still, I think it would be nice to explore being able to "extract" some of the code from three.js nodes, at least as a first step. As an example, an improved compute-based path tracer being able to support Babylon etc would be a nice bonus. I'm thinking we should revisit this when there's a more concrete use case for it, though.
Proof of concept for the closestPointToPoint function being supported in WebGPU. To test that the function works I extended the sdfGeneration example with a third way to generate the SDF.
@gkjohnson There are a couple of questions to discuss here:
On my machine generating the SDF with WebGPU is ~10x faster than with WebGL (~230ms -> ~25ms with default parameters), which I find quite insane. Maybe there is an issue with how the time is measured? It would be interesting to see how the numbers change on other machines.
Also, the WebGPU example did not work until I upgraded three.js to r179. Is this expected?
That upgrade seems to have broken CI, should I just revert it?