
Conversation

TheBlek
Contributor

@TheBlek TheBlek commented Aug 11, 2025

Proof of concept for the closestPointToPoint function being supported in WebGPU. To test that the function works, I extended the sdfGeneration example with a third way to generate the SDF.

@gkjohnson There are a couple of questions to discuss here:

  1. Should there be a separate sdfGeneration example that generates with CPU/WebGPU, or is leaving the three options in one example fine? The former is more work and doesn't allow directly comparing the WebGL/WebGPU results without switching examples.
  2. Should the library have a way to import WGSL code without using three.js' nodes? Personally, I think there should be. I couldn't find a way to extract the function code from the node and resorted to copy-pasting when prototyping with pure WebGPU (earlier commits).

On my machine, generating the SDF with WebGPU is ~10x faster than with WebGL (~230ms -> ~25ms with default parameters), which I find quite insane. Maybe there is an issue with how the time is measured? It would be interesting to see how the numbers change on other machines.

Also, the webgpu example did not work until I upgraded threejs to r179. Is this expected?
That upgrade seems to have broken CI, should I just revert it?

Introduces a new WebGPU generation mode for SDFs, alongside the existing CPU and WebGL options. Actual shader code is a stub for now.
Adds a compute shader to generate a signed distance field for a sphere.
The SDF is currently a simple sphere distance, but lays the groundwork for more complex BVH-based SDF generation.
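
For illustration, here is a minimal WGSL sketch of what such a stub kernel could look like - the binding names, uniform layout, and workgroup size are assumptions for this sketch and not the PR's actual code:

```wgsl
// Hypothetical sketch: one invocation per voxel, writing the signed distance
// from the voxel center to a sphere into a 3D storage texture.
struct Params {
  resolution : vec3<u32>,
  radius : f32,
};

@group( 0 ) @binding( 0 ) var sdfTex : texture_storage_3d<r32float, write>;
@group( 0 ) @binding( 1 ) var<uniform> params : Params;

@compute @workgroup_size( 16, 16, 1 )
fn main( @builtin( global_invocation_id ) id : vec3<u32> ) {

  if ( any( id >= params.resolution ) ) {
    return;
  }

  // map the voxel index to a point in the [ -1, 1 ] cube
  let p = ( vec3<f32>( id ) + 0.5 ) / vec3<f32>( params.resolution ) * 2.0 - 1.0;

  // signed distance to a sphere centered at the origin
  let dist = length( p ) - params.radius;
  textureStore( sdfTex, id, vec4<f32>( dist, 0.0, 0.0, 0.0 ) );

}
```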
Integrates BVH-based closest point to point search into the WebGPU SDF generation example.
This includes:
- Adding new storage buffer bindings for BVH nodes, triangle indices, and vertex positions.
- Porting the `_bvhClosestPointToPoint`, `distanceSqToBounds`, and `closestPointToTriangle` functions to WGSL (a rough sketch of the idea follows after this list).
- Updating the SDF compute shader to use the BVH for distance calculation.
- Populating the new buffers with BVH and mesh data.
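
For context, a minimal sketch (not the PR's actual WGSL) of one of the pieces listed above: the squared point-to-bounds distance used to decide which BVH child to visit first, plus a commented outline of how the closest-point query feeds the stored distance. The names `bvhClosestPointToPoint`, `sdfTex`, and `voxelCenter` are illustrative assumptions.

```wgsl
// Hypothetical sketch of the point-to-AABB squared distance used for traversal ordering.
fn distanceSqToBounds( point : vec3<f32>, boundsMin : vec3<f32>, boundsMax : vec3<f32> ) -> f32 {

  // clamp the point into the box; what remains is the offset to the closest point on it
  let clamped = clamp( point, boundsMin, boundsMax );
  let delta = point - clamped;
  return dot( delta, delta );

}

// Inside the SDF kernel the BVH query would then drive the stored value, roughly:
//
//   let distSq = bvhClosestPointToPoint( voxelCenter );
//   let dist = sqrt( distSq );
//   textureStore( sdfTex, id, vec4<f32>( dist, 0.0, 0.0, 0.0 ) );
//
// where bvhClosestPointToPoint stands in for the ported _bvhClosestPointToPoint traversal.
```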
@TheBlek TheBlek marked this pull request as ready for review August 12, 2025 06:45
…WebGPU

# Conflicts:
#	package-lock.json
#	package.json
@gkjohnson
Owner

Amazing work, thanks!

To your questions:

Should there be a separate sdfGeneration example that generates with CPU/WebGPU, or is leaving the three options in one example fine? The former is more work and doesn't allow directly comparing the WebGL/WebGPU results without switching examples.

I think for the sake of readability and demonstration to new users it would be best to keep them separate. I'm imagining an example with the "webgpu_" prefix similar to the path tracing example (we can leave the CPU-generation method off).

Should the library have a way to import WGSL code without using three.js' nodes? Personally, I think there should be. I couldn't find a way to extract the function code from the node and resorted to copy-pasting when prototyping with pure WebGPU (earlier commits).

Can you explain the motivation? Are you building compute shaders without nodes? I'm wondering if this is a feature request that's better requested for three.js nodes, instead?

On my machine, generating the SDF with WebGPU is ~10x faster than with WebGL (~230ms -> ~25ms with default parameters), which I find quite insane. Maybe there is an issue with how the time is measured? It would be interesting to see how the numbers change on other machines.

On my machine I'm seeing ~3-6x speed up. ~740ms > ~120ms with default parameters and ~3600ms > ~1230ms with max resolution. The timing method looks right - the WebGL version requires generating each 3d texture layer separately and then copying it into the 3d texture due to some errors when writing to 3d textures (see #720), so it's expected that there will be some overhead. Though I agree the amount is quite high.

Also, the webgpu example did not work until I upgraded threejs to r179. Is this expected?

Yes it's expected. There were some changes made to three.js compute in service of #756 that were first published with 179.

That upgrade seems to have broken CI, should I just revert it?

Got it fixed! There were some typing fixes that were needed - I've made an issue in the types repo to ask whether this is something that can be addressed automatically: three-types/three-ts-types#1812.

@gkjohnson gkjohnson added this to the v0.9.2 milestone Sep 6, 2025
@gkjohnson gkjohnson linked an issue Sep 6, 2025 that may be closed by this pull request: Add support for WebGPU compute shaders
let splitAxis = boundsInfo.x & 0x0000ffffu;
let rightIndex = 4u * boundsInfo.y / 32u;
let leftToRight = distanceSqToBVHNodeBoundsPoint( point, bvh, leftIndex ) < distanceSqToBVHNodeBoundsPoint( point, bvh, rightIndex );//rayDirection[ splitAxis ] >= 0.0;
Owner


Is this comment just left over? Or is it still needed?

//rayDirection[ splitAxis ] >= 0.0;


const MAX_RESOLUTION = 200;
const MIN_RESOLUTION = 10;
const WORKGROUP_SIZE = [ 16, 16, 1 ];
Owner


It will take some tuning but it may be worth adjusting these values to find what the best combination is for performance. The "best" values can be different per hardware but in the WebGPU ray tracing demo, for example, 8x8x1 was more performant for me than 16x16x1. And given that we're generating a 3d texture in this case it may be best to dispatch a 3d work group - e.g. something like 4x4x4 or 8x8x4. Though maybe you've already tried this and landed on these values 😁
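
As a rough sketch of what trying a 3D workgroup shape could look like on the WGSL side (the values and the use of override constants here are illustrative assumptions, not a recommendation):

```wgsl
// Hypothetical: a 3D workgroup shape for a 3D texture dispatch. Override
// constants let the host side try different shapes without editing the shader.
override WG_X : u32 = 8u;
override WG_Y : u32 = 8u;
override WG_Z : u32 = 4u;

@compute @workgroup_size( WG_X, WG_Y, WG_Z )
fn generateSDF( @builtin( global_invocation_id ) id : vec3<u32> ) {
  // one voxel per invocation
}
```

Whether overrides are practical depends on how the pipeline is created; hard-coding the shape and rebuilding the shader for each candidate works just as well for benchmarking.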

@TheBlek
Contributor Author

TheBlek commented Sep 11, 2025

Can you explain the motivation? Are you building compute shaders without nodes? I'm wondering if this is a feature request that's better requested for three.js nodes, instead?

I've thought about it for a while, and maybe adding the ability to extract shader code out of nodes is not a good idea. If you are able to do anything in three.js with compute nodes that raw WebGPU can do, then there is no point in using raw WebGPU. With WebGL this is different: at work I often need to use the raw context to achieve some complex effects.

I think for the sake of readability and demonstration to new users it would be best to keep them separate. I'm imagining an example with the "webgpu_" prefix similar to the path tracing example (we can leave the CPU-generation method off).

Ok, will extract it into a new example then.

@gkjohnson
Owner

gkjohnson commented Sep 11, 2025

If you are able to do anything in three.js with compute nodes that raw WebGPU can do, then there is no point in using raw WebGPU.

I think one reason to do this would be to enable non-three.js platforms to more easily use the compute code without having to import a full three.js system. The particular cases we're talking about here don't use the three.js Nodes system so fully, so it may be more reasonable to separate them (though Nodes does make things like managing dependencies between snippets significantly more ergonomic). Still, I think it would be nice to explore being able to "extract" some of the code from three.js nodes, at least as a first step. As an example, an improved compute-based path tracer being able to support Babylon etc. would be a nice bonus.

I'm thinking we should revisit this when there's a more concrete use case for it, though.
