Fix WebGPU Shadow Bugs & Boost FPS: 3 Pro Tips for 2025
Struggling with flickering shadows or low frame rates in WebGPU? Learn 3 pro tips to fix common shadow bugs and significantly boost your FPS in 2025.
Alex Ivanov
Lead graphics engineer specializing in real-time rendering and WebGPU performance optimization.
You’ve done it. After wrestling with pipelines, buffers, and bind groups, you finally have shadows in your WebGPU application. It’s a magical moment. Then, you move the camera and reality hits. Jagged edges, shimmering surfaces, and a frame rate that’s suddenly taken a nosedive. Welcome to the subtle, often maddening art of shadow mapping.
For years, high-quality, performant shadows were the domain of native game engines. But as we head into 2025, WebGPU has matured, giving us the power and low-level control to implement AAA-quality techniques directly in the browser. The problem is, many online tutorials still teach the basics, leaving you to fight the really tough bugs on your own.
Forget the simple stuff. Let's dive into three professional-grade tips that will help you squash common shadow artifacts and claw back precious milliseconds from your render time. These aren't just fixes; they're upgrades to your entire rendering philosophy.
Tip 1: Go Beyond Standard Depth Bias to Slay Shadow Acne
First up is the most common and visually jarring shadow bug: shadow acne. This happens when a surface incorrectly shadows itself due to the limited precision and resolution of the shadow map. The result is a noisy, flickering pattern on surfaces that should be fully lit. The classic quick fix is a small depth bias: offset the depths used in the shadow comparison so that a surface sits just in front of its own shadow-map record.
However, a constant bias is a blunt instrument. If you set it too low, the acne remains. Set it too high, and you get an equally ugly problem called Peter Panning, where shadows appear disconnected from the objects casting them, as if they're floating.
The Pro Solution: Normal Offset Bias
Instead of pushing every vertex by a constant amount, a more intelligent approach is to push each vertex along its normal. Think about it: a surface directly facing a light (high dot product between normal and light direction) needs very little bias. A surface nearly parallel to the light direction (a grazing angle) is extremely prone to self-shadowing and needs a much larger offset. This is exactly what Normal Offset Bias does.
This technique is more geometrically correct and adapts to the model's shape, drastically reducing both acne and Peter Panning simultaneously. Here's a conceptual look at how you'd implement this in your vertex shader when rendering to the shadow map:
// WGSL example for the shadow pass vertex shader
struct ShadowUniforms {
    model_matrix: mat4x4<f32>,
    light_view_proj_matrix: mat4x4<f32>,
    light_direction: vec3<f32>, // normalized, pointing from the surface toward the light
};
@group(0) @binding(0) var<uniform> u: ShadowUniforms;

struct VertexOutput {
    @builtin(position) position: vec4<f32>,
};

@vertex
fn vs_main(@location(0) in_pos: vec3<f32>, @location(1) in_normal: vec3<f32>) -> VertexOutput {
    // -- Bias tuning values --
    let slope_scale_bias = 2.5;
    let constant_bias = 0.005; // a small base value

    // Bring the normal into world space to match light_direction (assumes a
    // rotation + uniform-scale model matrix; otherwise use the inverse-transpose).
    let world_normal = normalize((u.model_matrix * vec4<f32>(in_normal, 0.0)).xyz);

    // -- Normal Offset Calculation --
    // Angle of the surface relative to the light. Clamp to keep back faces
    // sane and to stop the offset exploding at grazing angles.
    let cos_theta = clamp(dot(world_normal, u.light_direction), 0.05, 1.0);
    // tan(acos(x)) = sqrt(1 - x^2) / x: small when the surface faces the
    // light, large at grazing angles -- exactly where more offset is needed.
    let normal_offset = (sqrt(1.0 - cos_theta * cos_theta) / cos_theta) * slope_scale_bias;

    var world_pos = u.model_matrix * vec4<f32>(in_pos, 1.0);
    // Apply the offset along the world-space normal (WGSL forbids assigning
    // to a swizzle like world_pos.xyz, so rebuild the vec4)
    world_pos = vec4<f32>(world_pos.xyz + world_normal * normal_offset, 1.0);

    var output: VertexOutput;
    // Project the biased position from the light's point of view
    output.position = u.light_view_proj_matrix * world_pos;
    // Apply a small constant bias to handle flat surfaces and precision issues
    output.position.z -= constant_bias;
    return output;
}
By combining a slope-scaled normal offset with a tiny constant bias, you get a robust solution that handles both complex curved surfaces and flat planes with grace. It’s a little more work, but the visual payoff is immense.
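Beyond the shader-side offset, WebGPU also exposes the rasterizer's hardware depth bias on the pipeline's depth-stencil state, which you can layer underneath the normal offset. A minimal sketch of a shadow-pass depth-stencil descriptor; the format and the specific bias values here are placeholders you would tune per scene:

```typescript
// Conforms to WebGPU's GPUDepthStencilState; values are illustrative.
const shadowDepthStencil = {
  format: "depth32float",
  depthWriteEnabled: true,
  depthCompare: "less",
  // Hardware rasterizer bias, applied on top of the shader-side normal offset.
  depthBias: 2,             // constant bias, in units of the smallest depth step
  depthBiasSlopeScale: 2.0, // extra bias proportional to the polygon's depth slope
  depthBiasClamp: 0.0,      // 0 disables clamping of the total bias
};
```

You would pass this object as the `depthStencil` field of the shadow pipeline's `GPURenderPipelineDescriptor`; the hardware bias is free, so it is worth enabling even when the normal offset does most of the work.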
Tip 2: Smarter Filtering for Fast, Believable Soft Shadows
Hard, aliased shadow edges scream "computer graphics." The go-to solution for soft shadows is Percentage-Closer Filtering (PCF), where you take multiple samples from the shadow map around the pixel's location and average the results. A simple 3x3 PCF kernel (9 samples) is a huge improvement over a single tap, but it can also be expensive, especially as you increase the kernel size for softer shadows.
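To make the sampling pattern concrete, here is a CPU-side sketch of a 3x3 PCF tap in TypeScript. The function name and the flat `Float32Array` layout are illustrative; in a real shader this logic runs per fragment, ideally through a comparison sampler so each tap is hardware-filtered:

```typescript
// 3x3 PCF: average of 9 binary shadow tests around the receiver's texel.
function pcf3x3(
  shadowMap: Float32Array, // square shadow map, row-major depths
  size: number,            // shadow map width/height in texels
  u: number, v: number,    // texel coordinates of the receiver
  receiverDepth: number,   // receiver depth in light space
  bias = 0.005,
): number {
  let lit = 0;
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      // Clamp to the map edge (equivalent to clamp-to-edge addressing).
      const x = Math.min(size - 1, Math.max(0, u + dx));
      const y = Math.min(size - 1, Math.max(0, v + dy));
      // Each tap is a binary test: is the receiver in front of the occluder?
      if (receiverDepth - bias <= shadowMap[y * size + x]) lit++;
    }
  }
  // The averaged result is a lighting factor in [0, 1].
  return lit / 9;
}
```

Pixels near a shadow edge get fractional values, which is what produces the soft penumbra; the cost is those 9 (or 25, for 5x5) texture reads per pixel.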
The Pro Solution: Variance Shadow Maps (VSM)
What if you could get beautiful, soft shadows with just a single texture lookup? That's the promise of Variance Shadow Maps (VSM). Instead of just storing the depth (z) in your shadow map, you store two values: the depth and the depth squared (z²). These are the first two "moments" of the depth distribution.
With these two values, you can use a powerful statistical formula called Chebyshev's inequality to estimate the probability that a pixel is in shadow. The magic of VSM is that you can pre-filter the shadow map with a hardware-accelerated blur (like a Gaussian blur). This means you blur the z and z² values *once*, and then your main shader only needs to perform a single, filtered lookup into this blurred map to get a smoothly interpolated result.
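The shadow estimate itself is only a few lines. Here is a sketch of the Chebyshev test, assuming `m1` and `m2` are the blurred moments fetched from the map (names are illustrative; the variance floor is a tuning value):

```typescript
// VSM shadow test via Chebyshev's inequality.
function chebyshevShadow(m1: number, m2: number, receiverDepth: number): number {
  // If the receiver is at or in front of the mean occluder depth, it is fully lit.
  if (receiverDepth <= m1) return 1.0;
  // Variance of the occluder depth distribution, floored to avoid blow-ups
  // when the blurred moments are nearly degenerate.
  const variance = Math.max(m2 - m1 * m1, 1e-5);
  const d = receiverDepth - m1;
  // Upper bound on P(occluder depth >= receiverDepth), used as the light factor.
  return variance / (variance + d * d);
}
```

Because the moments interpolate linearly, this single lookup into the pre-blurred map replaces the entire PCF kernel.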
VSM vs. PCF: A Quick Comparison
Technique | Performance Cost | Quality & Artifacts
---|---|---
PCF (e.g., 5x5) | High (25 texture samples per pixel) | Accurate, but can show banding with too few samples.
VSM | Low (1 blurred texture sample per pixel) | Very smooth, but can suffer from "light bleeding" on overlapping objects.
The main drawback of VSM is light bleeding, where light appears to leak through occluders that sit close together in depth. For many scenes this is a minor issue, and it can be mitigated by remapping the Chebyshev bound to clamp small probabilities to zero, or by moving to exponential variants (EVSM). For a massive FPS boost, especially when targeting mid-range hardware, VSM is an incredible tool to have in your arsenal.
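One widely used mitigation, from Andrew Lauritzen's VSM work, is to remap the Chebyshev bound so the small probabilities where bleeding lives clamp to fully shadowed. A sketch, where the `amount` threshold is a scene-tuned assumption:

```typescript
// Light-bleeding reduction: treat probabilities below `amount` as fully
// shadowed and rescale the remaining [amount, 1] range back to [0, 1].
function reduceLightBleeding(pMax: number, amount: number): number {
  const t = (pMax - amount) / (1 - amount); // linstep
  return Math.min(1, Math.max(0, t));
}
```

Typical `amount` values sit around 0.1-0.3; push it too high and penumbrae visibly darken and harden, so tune it against your worst-bleeding scene.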
Tip 3: Dynamic Cascades for Open-World Scale & Performance
A single shadow map for a large scene is a recipe for disaster. If it covers the whole area, the resolution will be far too low for objects near the camera. If you make it high-resolution for nearby objects, you'll either have no shadows in the distance or need a texture so massive it will cripple your GPU.
The standard solution is Cascaded Shadow Maps (CSM). You split the camera's view frustum into several sections (cascades) and render a separate, high-quality shadow map for each one. This is a fantastic start, but we can do better.
The Pro Solution: Dynamically Fit Cascades to the Scene
The "pro" move is to not just use fixed cascade splits, but to calculate the tightest possible bounding box for each cascade's light-space view *every single frame*. Why? Because the geometry visible in each slice of your frustum changes constantly as the player moves and looks around. By fitting the shadow map projection tightly to only the visible geometry, you maximize every single pixel of your shadow map.
Furthermore, instead of linear splits, use a logarithmic or a hybrid split scheme. This allocates exponentially more shadow map resolution to the cascades closer to the camera, which is where players will notice detail the most.
Here’s the process:
- Split the Frustum: Divide your camera frustum into 3 or 4 cascades using a logarithmic scheme.
- Find Visible Geometry: For each cascade, determine the objects that fall within it.
- Calculate Bounding Sphere: Compute a tight bounding sphere that encloses all visible objects within that cascade.
- Create Light View/Projection: Create an orthographic projection for the light that tightly frames this bounding sphere. This becomes the `light_view_proj_matrix` for that cascade.
- Render and Sample: Render the shadow map for each cascade. In the main pass, determine which cascade a pixel belongs to and sample from the correct map.
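The split step above can be sketched as follows. This is the common hybrid ("practical") split scheme; `cascadeSplits` is an illustrative name, and `lambda` blends from uniform splits (0) toward fully logarithmic splits (1):

```typescript
// Compute the far distance of each cascade between the camera near/far planes.
function cascadeSplits(near: number, far: number, count: number, lambda = 0.75): number[] {
  const splits: number[] = [];
  for (let i = 1; i <= count; i++) {
    const p = i / count;
    const logSplit = near * Math.pow(far / near, p);     // exponential spacing
    const uniSplit = near + (far - near) * p;            // linear spacing
    splits.push(lambda * logSplit + (1 - lambda) * uniSplit);
  }
  return splits; // the last entry equals `far`
}
```

Each returned distance becomes the far plane of one frustum slice, which you then enclose in a bounding sphere and frame with the light's orthographic projection as described above.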
This dynamic approach ensures you're never wasting resolution. You get crisp, detailed shadows up close and stable, efficient shadows in the distance, providing a massive performance and quality uplift for any scene larger than a single room.
Bringing It All Together
Shadows are a deep and complex topic, but you don't have to settle for buggy, slow results in WebGPU. By moving beyond the textbook implementations, you can achieve a level of quality and performance that was previously unthinkable on the web.
To recap, your new pro toolkit for 2025 includes:
- Normal Offset Bias: For clean, artifact-free shadow lines on any surface.
- Variance Shadow Maps: For blazing-fast, beautifully soft shadows with a single texture fetch.
- Dynamic Cascaded Shadow Maps: For scaling your shadow quality to massive, open-world scenes without killing your frame rate.
Implementing these techniques requires a solid understanding of the rendering pipeline, but the payoff is a dynamic, immersive experience that will make your users forget they're even in a browser. Happy coding!