Note that the first image is an interactive demo. Click or touch it. (It's not obvious from the text at the time of writing)
The demo at the top has some bad noise issues when the light is in small gaps, at least on my phone (which I don't think the article acknowledges).
The demo at the end has bad banding issues (which the article does acknowledge).
It seems like a cheat-ish improvement to both of these would be a blur applied at the end.
However I don't have any issues with the demo in the middle (the hard shadows). So the artifacting has to be from the soft shadow rules, or from the "few extra tweaks".
The primary force behind real soft shadows is obviously that real lights are not point sources. I wonder how much worse the performance would be if, instead of the first two (kinda hacky) tricks, we replaced the light with maybe five lights representing random points on a small circular light. Maybe you'd get too much banding unless you used a much higher number of light sources, but at the very least it would be an interesting comparison to justify using the approximation.
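A very rough TypeScript sketch of the comparison I have in mind (CPU-side pseudocode, not the article's shader; `hardShadowTest` and the vector type are made-up stand-ins):

    // Approximate an area light by averaging hard-shadow tests against
    // N random points on a small disc around the light's centre.
    interface Vec2 { x: number; y: number; }

    function areaLightVisibility(
      pixel: Vec2,
      lightCenter: Vec2,
      lightRadius: number,
      // Assumed: returns 1 if `pixel` can see `samplePos`, else 0.
      hardShadowTest: (pixel: Vec2, samplePos: Vec2) => number,
      samples = 5,
    ): number {
      let visible = 0;
      for (let i = 0; i < samples; i++) {
        const angle = Math.random() * 2 * Math.PI;
        const r = lightRadius * Math.sqrt(Math.random()); // sqrt => uniform over the disc
        const samplePos = {
          x: lightCenter.x + r * Math.cos(angle),
          y: lightCenter.y + r * Math.sin(angle),
        };
        visible += hardShadowTest(pixel, samplePos);
      }
      // Cost grows linearly with `samples`, which is where the
      // banding-vs-performance trade-off comes from.
      return visible / samples;
    }

Five samples means five full marches per pixel, so I'd expect roughly 5x the marching cost of the single-light version.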
> The demo at the top has some bad noise issues when the light is in small gaps, at least on my phone (which I don't think the article acknowledges).
Right at the end:
> The random jitter ensures that pixels next to each other don’t end up in the same band. This makes the result a little grainy which isn’t great. But I think looks better than banding… This is an aspect of the demo that I’m still not satisfied with, so if you have ideas for how to improve it please tell me!
Ah I missed that, thanks. More than a little grainy for me but that might be a resolution/pixel ratio thing on my phone that could be tweaked out.
AFAIK (I have a similar soft-shadow system based on SDFs) the noise issues occur in small gaps because the distance values become small there, so the steps become small and you end up in artifact land. The workaround is to enforce a minimum step size of perhaps 0.5 - 2.0 pixels (depending on the quality of your SDF) so you don't get trapped like that (rough sketch below). The author probably knows this, but it's not done in their sample code.
Small step sizes are doubly bad because low-spec shader models like WebGL and D3D9 have a limitation on the number of loop iterations, so no matter how powerful your GPU is the step loop will terminate somewhat early and produce results that don't resemble the ground truth.
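For reference, a minimal sketch of that workaround in TypeScript rather than GLSL (the `sdf` function and coordinate conventions are assumptions, not the article's actual code):

    interface Vec2 { x: number; y: number; }

    function inShadow(
      pixel: Vec2,
      light: Vec2,
      sdf: (p: Vec2) => number, // signed distance to the nearest occluder, in pixels
      minStep = 1.0,            // ~0.5-2.0 px depending on SDF quality
      maxSteps = 64,            // low-spec shader models force a fixed cap anyway
    ): boolean {
      const dx = light.x - pixel.x;
      const dy = light.y - pixel.y;
      const distToLight = Math.hypot(dx, dy);
      const dir = { x: dx / distToLight, y: dy / distToLight };

      let t = 0;
      for (let i = 0; i < maxSteps; i++) {
        const p = { x: pixel.x + dir.x * t, y: pixel.y + dir.y * t };
        const d = sdf(p);
        if (d <= 0) return true;            // hit an occluder
        t += Math.max(d, minStep);          // clamp so narrow gaps can't stall the march
        if (t >= distToLight) return false; // reached the light unobstructed
      }
      // Ran out of iterations: the march stalled, which is exactly the
      // artifact the minimum step size is meant to prevent.
      return true;
    }

The trade-off is that a clamped step can tunnel through occluders thinner than minStep, which is why the usable range depends on how clean your SDF is.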
Same goes for a few of the other images too, but not all of them.
The article would probably benefit from having figure captions below each image stating whether the image is interactive or not.
Or, as an alternative to figure captions about interactivity, show some kind of symbol in one corner of each interactive image. In that case, the intro should also explain that symbol before the first image that uses it.
I wonder if it would help if you looked at the gradient of the SDF as well – maybe you could safely walk further if you're not moving in the same direction as the gradient?
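Something like this, maybe (hedged sketch with made-up names; it's only a heuristic, since a different occluder can still sit ahead of you even when you're grazing the nearest one, so it needs a conservative clamp):

    interface Vec2 { x: number; y: number; }

    // Returns a step length that stretches the usual sphere-tracing step
    // when the ray runs roughly parallel to the nearest surface.
    function gradientAwareStep(
      p: Vec2,
      dir: Vec2,                  // unit ray direction
      sdf: (q: Vec2) => number,
      maxStretch = 4.0,           // never extend the step by more than this factor
    ): number {
      const d = sdf(p);
      const eps = 0.5;            // finite-difference spacing, in pixels
      // Central differences approximate the SDF gradient.
      const gx = (sdf({ x: p.x + eps, y: p.y }) - sdf({ x: p.x - eps, y: p.y })) / (2 * eps);
      const gy = (sdf({ x: p.x, y: p.y + eps }) - sdf({ x: p.x, y: p.y - eps })) / (2 * eps);
      const glen = Math.hypot(gx, gy) || 1;
      // Rate at which the distance shrinks per unit travelled along `dir`
      // (1 = heading straight at the surface, 0 = moving parallel to it).
      const closingRate = Math.max(-(dir.x * gx + dir.y * gy) / glen, 1 / maxStretch);
      return d / closingRate;     // plain sphere tracing when closingRate == 1
    }

The four extra SDF taps per step aren't free, though, so whether it beats simply clamping the step size probably depends on the scene.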
Probably not that related, but the article reminded me of a shadow casting implementation on the PICO-8: https://medium.com/hackernoon/lighting-by-hand-4-into-the-sh...
This is really cool! If I were to work on it, I would make the light source a bouncing ball or something similar (maybe even a fish or a bird) via some 2D physics next.
It's always impressive to see a live demo in a technical blog post like this, especially one that runs so fast and slick on mobile. Kudos.
In relative terms, your mobile is a superb computer compared to 20 years ago, and it's rendering at a small resolution.
The iPhone 17 Pro is faster in quite a few benchmarks than the standard HP Intel notebook my company provides, if you prefer Windows over macOS.
The fact that this runs butter-smooth on WebGL while my company's 'enterprise dashboard' struggles to render 50 divs says everything about how much performance we leave on the table with bad abstractions.
This is truly a very clever series of calculations, a really cool effect, and a great explanation of what went into it. I'll admit that I skimmed over some of the technical details because I want to try it myself from scratch... but the distance map is a great clue.
This looks great, but is there no demo link? Maybe I'm blind and missed it?
They are embedded in the blog. Just click around on the images.
oops - thanks
This sounds similar to radiance cascades:
https://mini.gmshaders.com/p/radiance-cascades
https://youtube.com/watch?v=3so7xdZHKxw
While the methods are similar in that they both ray-march through the scene to compute per-pixel fluence, the algorithm presented in the blog post scales linearly with the number of light sources, whereas Radiance Cascades can handle an arbitrary distribution of light sources in constant time by exploiting geometric properties of lighting. Radiance Cascades are also not limited to SDFs for smooth shadows.