  Native dispersion v. spectral render (Message 1 to 4 of 4)  
From: Cousin Ricky
Subject: Native dispersion v. spectral render
Date: 28 Dec 2013 20:49:08
Message: <52bf7f94@news.povray.org>
Since RGB does not preserve spectral information, POV-Ray naturally has 
to guess how a colored light might disperse.  I got curious as to how 
POV-Ray's dispersion of black body emissions would differ from a 
spectral render.

These scenes show a glowing rod viewed through a BK7 glass filter. 
Image prism_whites-rgb.png is the native dispersion render, and 
prism_whites-spectral.png is the spectral render.  The geometry of both 
scenes is identical.

The RGB render shows less refraction than the spectral render, so I 
#debugged some values; a short sketch that plugs them into a scene 
follows the listings below.

IOR_Glass_BK7 from spectral_glasses.inc:
   IOR at 380 nm = 1.533745 (SpectralRender shortest)
   IOR at 580 nm = 1.517122 (near Fraunhofer line D)
   IOR at 730 nm = 1.512304 (SpectralRender longest)

Native dispersion from ior.inc:
   iorCrownGlassBK7 = 1.516730 (presumably at line D, 589.29 nm)
   dispCrownGlassBK7 = 1.010552
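
For context, here's a minimal sketch (not the scenes rendered here) of 
where those native constants go; the geometry and the sample count are 
arbitrary choices, and ior.inc is assumed to declare the two identifiers 
just quoted:

   // Sketch only: a clear BK7 slab using POV-Ray's native dispersion.
   #include "ior.inc"  // assumed to declare iorCrownGlassBK7, dispCrownGlassBK7
   box
   {  <-1, -1, -0.05>, <1, 1, 0.05>
      pigment { rgbf 1 }  // fully transparent; the interior does the work
      interior
      {  ior iorCrownGlassBK7          // 1.516730
         dispersion dispCrownGlassBK7  // 1.010552
         dispersion_samples 30         // more samples = smoother spectrum, slower trace
      }
   }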

What surprised me is that POV-Ray's native dispersion changes the hue of 
each spectral color.  For example, the violet extreme is ultraviolet 
blue for the high temperature white, and reddish purple for the low 
temperature white.  (It's still a whole lot better than POV-Ray 3.6 
dispersion.)  The spectral render merely adjusts the intensity of each 
band without changing its hue, as expected.


Attachments: prism_whites-rgb.png (32 KB), prism_whites-spectral.png (35 KB)

From: clipka
Subject: Re: Native dispersion v. spectral render
Date: 4 Jan 2014 09:04:15
Message: <52c814df$1@news.povray.org>
On 29.12.2013 02:49, Cousin Ricky wrote:

> What surprised me is that POV-Ray's native dispersion changes the hue of
> each spectral color.  For example, the violet extreme is ultraviolet
> blue for the high temperature white, and reddish purple for the low
> temperature white.  (It's still a whole lot better than POV-Ray 3.6
> dispersion.)  The spectral render merely adjusts the intensity of each
> band without changing its hue, as expected.

This effect is due to official POV-Ray doing all colour math in RGB 
space. The problem is that the human eye's sensors for long wavelengths 
(red) are also sensitive to very short wavelengths (violet); thus, when 
translating a very short wavelength to an RGB colour you do get a deal 
of R in there (about 24% of the B channel at 400 nm, at least with the 
table POV-Ray is currently using).

Now in official POV-Ray, the resulting colour in dispersion is computed 
as the component-wise product of the refracted light's RGB colour and 
the RGB colour corresponding to the given wavelength.

When the light has a high colour temperature, there is only little R in 
the light's RGB colour; thus, at e.g. 400 nm, the resulting colour will 
be R = 24% * little, G = 0% * some, B = 100% * plenty, resulting in a 
very blue colour hue.

On the other hand, when the light has a low colour temperature, there is 
only little B in the light's RGB colour, but plenty of R and G: thus, 
again at the very same wavelength of 400 nm, the resulting colour will 
be R = 24% * plenty, G = 0% * plenty, B = 100% * little, resulting in a 
very violet colour hue.
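
To make that arithmetic concrete, here's a small scene-language sketch of 
the component-wise product. The light colours and the RGB assigned to the 
400 nm band are made-up round numbers (only the 24% R figure comes from 
the text above), so it illustrates the mechanism rather than POV-Ray's 
actual table:

   // Illustration only: round-number stand-ins, not POV-Ray's internal values.
   #declare Band_400nm = <0.24, 0.00, 1.00>;  // RGB assigned to the 400 nm band
   #declare Hot_White  = <0.75, 0.85, 1.00>;  // high colour temperature light
   #declare Warm_White = <1.00, 0.85, 0.55>;  // low colour temperature light

   // Dispersion scales the light by the band colour, channel by channel:
   #declare Hot_Result  = Hot_White  * Band_400nm;  // <0.18, 0.00, 1.00> - blue hue
   #declare Warm_Result = Warm_White * Band_400nm;  // <0.24, 0.00, 0.55> - purplish hue
   #debug concat ("hot:  <", vstr (3, Hot_Result,  ", ", 0, 2), ">\n")
   #debug concat ("warm: <", vstr (3, Warm_Result, ", ", 0, 2), ">\n")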



From: Cousin Ricky
Subject: Re: Native dispersion v. spectral render
Date: 4 Jan 2014 13:50:01
Message: <web.52c85731937dbf81306548240@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
>  The problem is that the human eye's sensors for long wavelengths
> (red) are also sensitive to very short wavelengths (violet); thus, when
> translating a very short wavelength to an RGB colour you do get a deal
> of R in there (about 24% of the B channel at 400 nm, at least with the
> table POV-Ray is currently using).

Is this really the eye's sensors?  It seems obvious to me that POV-Ray uses the
color matching function, where the red component does show up in the violet
region.  But this doesn't appear to be reflected in our eyes' cone cells;
rather, it seems that the CMF is an outcome of the brain's /interpretation/ of
the signals from the cone cells, or perhaps some preprocessing done in the
retina.

I made this graph a few years ago.  Unfortunately, the source of the data is not
recorded in the scene file; there is a rudimentary comment, but I must have
gotten distracted before completing it.  (Now having flashbacks of a certain
POVer who once posted some of Warp's code without giving him credit.)  I'm
pretty sure it's on my hard disk somewhere, and in any case, such data is
readily available on the Web, at least in normalized form.




Attachments: cones-linear.png (34 KB)

From: clipka
Subject: Re: Native dispersion v. spectral render
Date: 4 Jan 2014 14:49:34
Message: <52c865ce@news.povray.org>
On 04.01.2014 19:47, Cousin Ricky wrote:
> clipka <ano### [at] anonymousorg> wrote:
>>   The problem is that the human eye's sensors for long wavelengths
>> (red) are also sensitive to very short wavelengths (violet); thus, when
>> translating a very short wavelength to an RGB colour you do get a deal
>> of R in there (about 24% of the B channel at 400 nm, at least with the
>> table POV-Ray is currently using).
>
> Is this really the eye's sensors?  It seems obvious to me that POV-Ray uses the
> color matching function, where the red component does show up in the violet
> region.  But this doesn't appear to be reflected in our eyes' cone cells;
> rather, it seems that the CMF is an outcome of the brain's /interpretation/ of
> the signals from the cone cells, or perhaps some preprocessing done in the
> retina.

The retina does indeed perform some comparatively heavy preprocessing of 
the colour stimulus, and may play an important role in the mangling of 
the R and B channels. Unfortunately that knowledge doesn't help us solve 
the underlying problem.

Using a colour space with a blue primary closer to the violet end of the 
spectrum might make the problem less prominent. However, it should be 
noted that similar shifts in hue happen at all wavelengths, and what 
happens at the purple line is only the tip of the iceberg. The only 
solution is to use more colour channels. Enter spectral rendering.
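
As a sketch of what "more colour channels" buys for dispersion in 
particular: in a spectral render each band can carry its own index of 
refraction, e.g. from the published Sellmeier fit for Schott BK7. The 
coefficients below are the standard data-sheet values; the macro name is 
illustrative, not something from spectral_glasses.inc:

   // Per-wavelength IOR for Schott BK7 via the Sellmeier equation.
   // Lambda is in nanometres; the fit itself works in micrometres squared.
   #macro BK7_IOR (Lambda)
      #local L2 = pow (Lambda / 1000, 2);
      sqrt (1 + 1.03961212  * L2 / (L2 - 0.00600069867)
              + 0.231792344 * L2 / (L2 - 0.0200179144)
              + 1.01046945  * L2 / (L2 - 103.560653))
   #end

   // Reproduces the values quoted at the top of the thread:
   #debug concat ("n(380 nm) = ", str (BK7_IOR (380), 0, 6), "\n")  // ~1.533745
   #debug concat ("n(580 nm) = ", str (BK7_IOR (580), 0, 6), "\n")  // ~1.517122
   #debug concat ("n(730 nm) = ", str (BK7_IOR (730), 0, 6), "\n")  // ~1.512304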

