csmoak 2 hours ago

The only applications I'm aware of that currently do spectral rendering on the fly are painting apps.

I have one called Brushwork ( https://brushworkvr.com ) that upsamples RGB from 3 samples to a larger number spread across visible light, mixes paint in the upsampled space, and then downsamples for rendering. (The upsampling approach that app uses is from Scott Burns: http://scottburns.us/color-science-projects/ .) FocalPaint on iOS does something similar ( https://apps.apple.com/us/app/focalpaint/id1532688085 ).
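That upsample -> mix -> downsample pipeline can be sketched in a few lines. To be clear, this toy version is not Burns' actual algorithm (his method solves for a smooth reflectance spectrum matching a given RGB); every number below is made up, and the geometric-mean mix is just a cheap stand-in for real pigment mixing:

```python
import numpy as np

# Toy illustration of the upsample -> mix -> downsample pipeline.
# NOT Scott Burns' actual algorithm; all numbers are invented.

# Hypothetical "upsampled" reflectance spectra for two paints
# (8 samples from ~400nm to ~700nm). In a real app these would come
# from RGB-to-spectrum upsampling.
yellow_paint = np.array([0.05, 0.05, 0.10, 0.60, 0.90, 0.90, 0.85, 0.80])
blue_paint   = np.array([0.70, 0.85, 0.80, 0.40, 0.10, 0.05, 0.05, 0.05])

# Mix in spectral space. A geometric mean is a common cheap stand-in
# for subtractive pigment mixing (full Kubelka-Munk is more accurate).
mixed = np.sqrt(yellow_paint * blue_paint)

# Downsample back to 3 channels with toy sensitivity curves (rows: R, G, B).
# Real code would integrate against CIE colour matching functions.
rgb_basis = np.array([
    [0.0, 0.0, 0.0, 0.1, 0.3, 0.3, 0.2, 0.1],  # R-ish
    [0.0, 0.1, 0.2, 0.3, 0.3, 0.1, 0.0, 0.0],  # G-ish
    [0.3, 0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0],  # B-ish
])
rgb = rgb_basis @ mixed
print(rgb)  # the G channel dominates, as yellow + blue paint should
```

A naive RGB average of yellow (1,1,0) and blue (0,0,1) would give gray; mixing in the upsampled space is what lets yellow + blue come out green, which is why painting apps bother.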

I'm happy that tech like this will open up more apps to use spectral rendering.

Tooster 2 hours ago

I was sure it must have been invented already! I'd been trying to find this idea without knowing it's called "spectral rendering", searching for "absorptive rendering" and similar terms instead, which led to dead ends. The technique is very interesting, and I'd love to see it combined with semi-transparent materials; I've suspected for some time that a method like this could allow cheap OIT out of the box.

  • dahart 34 minutes ago

    I’m not sure carrying wavelength or spectral info changes anything with respect to order of transparency.

    It seems like OIT is kind of a misnomer when people are talking about deferred compositing. Storing data and sorting later isn't exactly order independent: you still have to compute the color contributions in depth order, since transparency is fundamentally non-commutative, right?

    The main benefit of spectral transparency is what happens when multiple different transparent colors overlap: because the transmission color is computed per wavelength, you can get out a different color than you would using RGB or any 3 fixed primaries.
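    The non-commutativity point can be checked directly. A minimal sketch with the standard premultiplied-alpha "over" operator (toy colours, not tied to any particular renderer):

```python
# The standard "over" operator for premultiplied-alpha colours (r, g, b, a).
def over(front, back):
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    k = 1.0 - fa
    return (fr + k * br, fg + k * bg, fb + k * bb, fa + k * ba)

red_glass  = (0.5, 0.0, 0.0, 0.5)   # premultiplied: 50% opaque red
blue_glass = (0.0, 0.0, 0.5, 0.5)   # premultiplied: 50% opaque blue

print(over(red_glass, blue_glass))  # red in front of blue
print(over(blue_glass, red_glass))  # blue in front of red: a different colour
```

    Swapping the order swaps which layer's colour dominates, so a deferred scheme still has to establish depth order before compositing.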

  • zokier an hour ago

    Conventional RGB path tracing already handles basic transparency, you don't need spectral rendering for that.

    • pixelesque 41 minutes ago

      Not exactly what the parent poster was saying (I think?), but absorption and scattering coefficients for volumes, together with the mean free path, are very wavelength-specific, so using spectral rendering there (and for hair as well, though that's normally handled via special BSDFs) generally models volume scattering more accurately, provided you model the properties correctly.

      Very helpful for things like skin, and for light diffusion through skin with brute-force (e.g. Woodcock tracking) volume light transport.
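      A quick sketch of the wavelength dependence being described, using Beer-Lambert transmittance with made-up absorption coefficients (loosely in the spirit of skin, where short wavelengths are absorbed much more strongly than long ones):

```python
import numpy as np

# Beer-Lambert transmittance through a homogeneous medium:
#   T(lambda) = exp(-sigma_t(lambda) * distance)
# Coefficients are invented for illustration, not measured data.
wavelengths = np.array([450.0, 550.0, 650.0])  # nm (B, G, R samples)
sigma_t     = np.array([2.0,   1.0,   0.3])    # extinction, 1/mm

for depth in (0.5, 2.0):  # mm
    T = np.exp(-sigma_t * depth)
    print(depth, T)  # red survives far better than blue at depth

# The mean free path 1/sigma_t is also per-wavelength, so red photons
# travel further between scattering events than blue ones -- the effect
# a 3-channel renderer has to approximate with averaged coefficients.
mfp = 1.0 / sigma_t
print(mfp)
```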

cubefox 2 hours ago

Apparently (from a layman's perspective) the difference between conventional RGB ray tracing and spectral ray tracing is this:

RGB assumes all light sources consist of three RGB lights, where the brightness of red, green, and blue varies. E.g. a yellow light is always represented as a red light plus a green light.

In contrast, spectral rendering allows light sources with arbitrary spectra. A pure yellow light (~580 nm) is different from a red+green light.

The physical difference is this: If you shine, for example, a pure yellow light on a scene, everything looks yellow, just more or less dark. But if you shine a red+green (impure yellow) light on a scene, green objects will be green and red objects will be red. Not everything will appear as a shade of yellow. Conventional RGB rendering can only model the latter case.

This means some light sources, like high-pressure sodium lamps, cannot be accurately rendered with RGB rendering: red and green surfaces would look too bright.
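The pure-yellow versus red+green case can be made concrete with toy spectra. All numbers below are invented purely to illustrate the argument (8 bands standing in for ~400-700nm, brightness taken as summed reflected energy):

```python
import numpy as np

# Band centres (nm):       400  440  480  520  560  600  640  680
pure_yellow   = np.array([0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.0, 0.0])  # ~580nm line
red_green     = np.array([0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.5, 0.0])  # two peaks

red_surface   = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.9, 0.9])
green_surface = np.array([0.0, 0.0, 0.2, 0.9, 0.8, 0.2, 0.0, 0.0])

def reflected_energy(light, surface):
    # Per-band reflected radiance, summed into an overall brightness.
    return float(np.sum(light * surface))

# Under the pure ~580nm light the red surface reflects almost nothing,
# while under the red+green "yellow" it reflects its red peak strongly.
print(reflected_energy(pure_yellow, red_surface))
print(reflected_energy(red_green, red_surface))
print(reflected_energy(pure_yellow, green_surface))
print(reflected_energy(red_green, green_surface))
```

An RGB renderer collapses both lights to the same yellow triple, so it cannot distinguish these two cases.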

(Also note that the linked post also has a part 1 and a part 3, accessible via the "next/previous post" links at the bottom.)

  • dahart 43 minutes ago

    > RGB assumes all light sources consist of three RGB lights

    Another way to say this is that conventional 3 channel renderers pre-integrate the spectrum of lights down to 3-channel colors. They don’t necessarily assume three lights, but it’s accurate to say that’s the net effect.

    It’s mostly just about when you integrate, and what you have to do when you delay the integration. It’s kind of a subtle distinction, really, but rendering with spectral light and material data and integrating down to RGB at the end more closely mimics reality; the cones in our eyes are the integrators, and before that everything is spectral. Or more accurately, individual photons have wavelength. A spectrum is inherently a statistical summary of the behavior of many photons.
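    The "when you integrate" point can be shown with a toy sketch: made-up response curves stand in for the cones, and pre-integrating light and reflectance down to 3 channels before multiplying gives a different answer than multiplying the spectra first and integrating at the end.

```python
import numpy as np

# Three toy response curves over 6 bands (rows play the role of the
# eye's cone responses). All values invented for illustration.
sensor = np.array([
    [0.0, 0.0, 0.1, 0.4, 0.4, 0.1],   # "R"
    [0.1, 0.3, 0.4, 0.2, 0.0, 0.0],   # "G"
    [0.4, 0.4, 0.2, 0.0, 0.0, 0.0],   # "B"
])
light       = np.array([0.1, 0.2, 0.9, 0.9, 0.2, 0.1])  # light spectrum
reflectance = np.array([0.9, 0.1, 0.1, 0.1, 0.1, 0.9])  # surface spectrum

# Spectral rendering: multiply spectra, integrate at the end.
spectral_first = sensor @ (light * reflectance)

# Conventional rendering: pre-integrate each to 3 channels, then multiply.
rgb_first = (sensor @ light) * (sensor @ reflectance)

# Compare hues (normalised), since the two results have different scales.
print(spectral_first / spectral_first.sum())
print(rgb_first / rgb_first.sum())
```

    The normalised triples disagree: integration and per-band multiplication don't commute, which is exactly the error a spectral renderer avoids.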

  • turnsout an hour ago

    It also becomes important for rendering glass and other highly refractive substances. Some conventional RGB rendering engines can mimic dispersion, but with spectral rendering you get it "for free."

    • pixelesque an hour ago

      One issue with Hero Wavelength sampling (mentioned in the article) is that because IOR is wavelength-dependent, after a refraction event you basically lose the non-hero wavelengths, so you get the colour noise back through refraction.

    • rf15 an hour ago

      You would still need to provide extra logic and per-material data for dispersion/refraction curves, so it's hardly "for free".
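      The per-material data in question is typically a dispersion fit such as Cauchy's equation, n(λ) = A + B/λ². A small sketch using the commonly quoted approximate Cauchy coefficients for BK7 glass (values assumed here, wavelength in micrometres):

```python
import math

# Cauchy's equation: n(lambda) = A + B / lambda^2.
# A, B below are the commonly quoted approximate values for BK7 glass.
A, B = 1.5046, 0.00420  # B in um^2

def ior(wavelength_nm):
    lam_um = wavelength_nm / 1000.0
    return A + B / lam_um**2

for nm in (450, 550, 650):
    print(nm, round(ior(nm), 4))  # blue has the higher index

# Snell's law then bends each wavelength by a slightly different angle,
# which is where dispersion (and per-wavelength noise after refraction)
# comes from.
theta_i = math.radians(30.0)
for nm in (450, 650):
    theta_t = math.asin(math.sin(theta_i) / ior(nm))
    print(nm, round(math.degrees(theta_t), 3))
```

      So the renderer gets the mechanism for free, but someone still has to supply A and B (or Sellmeier coefficients) for every dispersive material.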