I'm just wondering: is there a plan to extend SDL to allow more than just rgb
color spaces?
The color specification syntax is already perfectly designed to allow addition
of different colorspaces. So, instead of using macros from colors.inc, syntax
could be made more user-friendly, like
pigment {
  color hsl <0.5,0.3,1>
}
This is straightforward; the srgb/rgb difference already requires some conversion
anyway. This could be expanded to other color spaces: YUV, CIE XYZ, Lab and so on.
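For illustration, the conversion a parser would have to do for an `hsl` keyword is exactly what Python's stdlib colorsys module implements (under the name HLS, with arguments in h, l, s order). This is just a sketch of the mapping, not actual POV-Ray code:

```python
import colorsys

def hsl_to_rgb(h, s, l):
    """Sketch of what a parser could do for an 'hsl' colour keyword:
    map an <h, s, l> vector (components in 0..1) to rgb.  Python's
    stdlib calls the model HLS and takes arguments as (h, l, s)."""
    return colorsys.hls_to_rgb(h, l, s)

# <0.5, 0.3, 1>: lightness 1 washes out hue and saturation -> white
print(hsl_to_rgb(0.5, 0.3, 1.0))  # (1.0, 1.0, 1.0)
```

Note that lightness 1 always gives white regardless of hue and saturation, which is one reason the example vector above renders as plain white.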
On a similar note, if povray4 goes natively spectral, how will this be handled?
In my mind, I see a block in global_settings that specifies the color mode:
choose rgb or spectral, possibly with a number specifying how many samples to
use, and of course, a mapping function to get back from spectrum to rgb
(nontrivial, there are several different spectral response function models that
try to approximate the human eye, or a display device). In the spectral mode, I
think that colors specified as rgb could be modeled using a default response
function for r,g,b (let's say, the response functions of the eye), but one could
also specify colors as functions of the wavelength: point-wise specified spectra
(which could be specified in a big .inc file). Something like
pigment {
  gradient z
  color_map {
    [0 color rgb <1,0.5,0>]  // actually 1*CIE_red + 0.5*CIE_green
    [1 color spectral <440:0.1,550:0.4,600:0.6,700:1>]  // linear interpolation between specified wavelengths
  }
}
Is it reasonable to expect something like that?
On 21.03.2013 02:40, Simon Copar wrote:
> I'm just wondering: is there a plan to extend SDL to allow more than just rgb
> color spaces?
Not at present.
> On a similar note, if povray4 goes natively spectral, how will this be handled?
I'm not sure whether spectral rendering will be implemented in mainstream
POV-Ray at all. It probably depends on how well spectral rendering will
perform in a patched version of 3.7 created by some 3rd party (which
might eventually be me).
> In my mind, I see a block in global_settings that specifies the color mode:
> choose rgb or spectral, possibly with a number specifying how many samples to
> use,
Yes, that does seem sensible. Of course for the render core it all boils
down to N-channel rendering, but rgb mode needs some special treatment
at the boundaries of the core, since it's not a spectral model.
It might also make sense to provide a way to specify the frequencies of
the individual channels; this way, a user could e.g. have POV-Ray use
more channels in the yellow region for a scene with sodium lamps.
> and of course, a mapping function to get back from spectrum to rgb
> (nontrivial, there are several different spectral response function models that
> try to approximate the human eye, or a display device).
Until now I'd have expected that to be the least problem: Just choose
one of the two CIE standard observers, use the response functions to
convert to XYZ, and from there to whatever color space is chosen for output.
> In the spectral mode, I
> think that colors specified as rgb could be modeled using a default response
> function for r,g,b (let's say, the response functions of the eye),
I guess this is actually the hardest nut to crack.
The most straightforward approach would of course be to pick a set of
real-life or synthesized phosphors with known emission curves; we'd then
first convert from scRGB to the RGB color space defined by those
phosphors, and then multiply each color component by the corresponding
phosphor's spectrum.
However, I don't like this approach, as it would still be strongly
biased towards red, green and blue hues, in the sense that filters of
those hues attenuate light of the same hue less than would be the case
with yellow, cyan or purple hues. I'd rather prefer a spectrum synthesis
algorithm where the attenuation of a given filter depends only on
brightness and saturation.
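The phosphor-based synthesis described above might look roughly like this sketch. The Gaussian "phosphor" curves and the wavelength grid are made up for illustration (real phosphors would be measured data), and the scRGB-to-phosphor conversion, which is just a 3x3 matrix multiply, is assumed to have happened already:

```python
import math

# Hypothetical phosphor emission curves, modelled as Gaussians on a
# coarse wavelength grid (nm); real phosphors would be measured data.
WAVELENGTHS = list(range(400, 701, 20))

def gaussian(center, width):
    return [math.exp(-((w - center) / width) ** 2) for w in WAVELENGTHS]

PHOSPHORS = {
    "r": gaussian(610, 40),
    "g": gaussian(545, 40),
    "b": gaussian(465, 40),
}

def rgb_to_spectrum(r, g, b):
    """Synthesize a spectrum as a weighted sum of the phosphor spectra.
    Assumes (r, g, b) is already expressed in the phosphor RGB basis;
    converting from scRGB first would be a 3x3 matrix multiply."""
    return [r * pr + g * pg + b * pb
            for pr, pg, pb in zip(PHOSPHORS["r"], PHOSPHORS["g"],
                                  PHOSPHORS["b"])]

spec = rgb_to_spectrum(1.0, 0.5, 0.0)
```

The red/green/blue bias discussed next is visible here: a yellow such as (1, 0.5, 0) is forced to be a sum of a red and a green hump rather than a single broad band.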
> but one could
> also specify colors as functions of the wavelength: point-wise specified spectra
> (which could be specified in a big .inc file). Something like
>
> pigment {
>   gradient z
>   color_map {
>     [0 color rgb <1,0.5,0>]  // actually 1*CIE_red + 0.5*CIE_green
>     [1 color spectral <440:0.1,550:0.4,600:0.6,700:1>]  // linear interpolation between specified wavelengths
>   }
> }
I haven't spent much thought on the syntax yet, but I do like this one.
> Is it reasonable to expect something like that?
Yup. If you have any additional ideas along these lines, feel free to
let your thoughts go wild.
> It might also make sense to provide a way to specify the frequencies of
> the individual channels; this way, a user could e.g. have POV-Ray use
> more channels in the yellow region for a scene with sodium lamps.
I like this idea, this way you could accurately raytrace even scenes with
monochromatic (laser-like) illumination, or use it to simulate actual display
devices and optics.
> > and of course, a mapping function to get back from spectrum to rgb
> > (nontrivial, there are several different spectral response function models that
> > try to approximate the human eye, or a display device).
>
> Until now I'd have expected that to be the least problem: Just choose
> one of the two CIE standard observers, use the response functions to
> convert to XYZ, and from there to whatever color space is chosen for output.
I have to say that I never really liked XYZ color space. Mapping from XYZ to RGB
is linear, so there is no difference between converting to XYZ and then to a
chosen RGB (+gamma correction after that), or precomputing RGB response
functions and mapping directly to that. There is no additional information in
XYZ color space; it's just defined to avoid negative values in the standard
(1931, not very representative) rgb physiological response functions. We could
remove the intermediate step and go directly to the target color space; we lose
nothing.
Sorry for the rant; I got very frustrated recently about the arbitrary
definition of XYZ space and the difference between physiological and
standardized color spaces (CIE RGB, scRGB, ...).
I think that it would make the most sense to let the user override the rgb
response functions, because this opens the door for color profile freaks that
are never satisfied with the default settings. You could use the spectral
response of a specific display device for maximum accuracy (for instance, if
you know exactly what projector will display your movie). And for normal users,
you would just keep scRGB as the default.
> I guess this is actually the hardest nut to crack.
>
> The most straightforward approach would of course be to pick a set of
> real-life or synthesized phosphors with known emission curves; we'd then
> first convert from scRGB to the RGB color space defined by those
> phosphors, and then multiply each color component by the corresponding
> phosphor's spectrum.
>
> However, I don't like this approach, as it would still be strongly
> biased towards red, green and blue hues, in the sense that filters of
> those hues attenuate light of the same hue less than would be the case
> with yellow, cyan or purple hues. I'd rather prefer a spectrum synthesis
> algorithm where the attenuation of a given filter depends only on
> brightness and saturation.
Right, I see it's much worse than I imagined. The main problem is that standard
rgb responses are not orthogonal. If you start with <1,0,0>, map to CIE_red and
then project back, you will probably get something like <0.97,0.02,-0.03>.
On one hand, this avoids the problem that most "pure" rgb colors look
unrealistic, especially with radiosity and photons turned on, but it also makes
the results behave in an unexpected way. Still, in the spectral model, it's
mostly impossible to find spectra that would give pure colors, because the
target color space response functions overlap.
However, I see one well-defined (and probably mathematically optimal) solution.
What you want is a closest match of rgb colors in the target color space. Let's
say you choose 4 sample wavelengths. You are looking for a vector (a,b,c,d) that
satisfies
(a,b,c,d)*red_response=r
(a,b,c,d)*green_response=g
(a,b,c,d)*blue_response=b
where red_response, green_response and blue_response are (in this example,
4-component) vectors that approximate the spectral response functions. This is
simply a linear system of equations (3 equations, N variables for N samples)
that can be solved to give a least-squares fit to the desired rgb color (the
matrix pseudoinverse solves this trivially).
This approach:
* is specifically designed for the chosen wavelength samples
* treats all hues equally
* gets mathematically as close as possible to the non-spectral solution with the
same colors
Disadvantages:
* the spectral result will depend more on the chosen wavelength samples than it
would if you used default response functions
* the resulting (a,b,c,d) may include negative values, because it tries to
remove the response overlap. I think for a reasonable sample selection it
wouldn't be too bad.
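The pseudoinverse fit described above can be sketched in pure Python (the response curves below are made up, not real CIE data). With 3 equations and N >= 3 samples the system is underdetermined, so the minimum-norm solution x = A^T (A A^T)^{-1} rgb reproduces the target colour exactly, possibly with negative components:

```python
def solve_spectrum(responses, rgb):
    """Minimum-norm exact solution of A x = rgb, where the rows of A
    are the sampled r/g/b response functions (3 rows, N >= 3 columns).
    This is what the Moore-Penrose pseudoinverse of A gives for an
    underdetermined system: x = A^T (A A^T)^{-1} rgb.  Hand-rolled 3x3
    inverse; a real implementation would use a linear-algebra library."""
    A = responses
    n = len(A[0])
    # Gram matrix G = A * A^T (3x3, symmetric)
    G = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(3)]
         for i in range(3)]
    # 3x3 inverse via the adjugate (cofactor) formula
    det = (G[0][0] * (G[1][1] * G[2][2] - G[1][2] * G[2][1])
         - G[0][1] * (G[1][0] * G[2][2] - G[1][2] * G[2][0])
         + G[0][2] * (G[1][0] * G[2][1] - G[1][1] * G[2][0]))
    Ginv = [[(G[(j + 1) % 3][(k + 1) % 3] * G[(j + 2) % 3][(k + 2) % 3]
            - G[(j + 1) % 3][(k + 2) % 3] * G[(j + 2) % 3][(k + 1) % 3]) / det
             for j in range(3)] for k in range(3)]
    # y = G^{-1} * rgb, then x = A^T * y
    y = [sum(Ginv[i][j] * rgb[j] for j in range(3)) for i in range(3)]
    return [sum(A[i][k] * y[i] for i in range(3)) for k in range(n)]

# illustrative (made-up) response curves sampled at 4 wavelengths
responses = [[0.9, 0.2, 0.0, 0.0],
             [0.1, 0.8, 0.3, 0.1],
             [0.0, 0.1, 0.7, 0.9]]
spectrum = solve_spectrum(responses, [1.0, 0.5, 0.25])
```

Feeding the solved spectrum back through the response curves returns the requested rgb triple, which is exactly the "closest match" property claimed above.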
A related solution that also solves these problems is to take the same set of
linear equations described above, but instead of just taking the pseudoinverse
(and getting the least-squares solution), you can minimize
abs(r-r0)+abs(g-g0)+abs(b-b0) subject to the constraints a>0, b>0, c>0, d>0.
This is a linear programming problem [the algorithm can be taken straight from
Numerical Recipes], which is also mathematically well-defined and will give you
an optimal and physically meaningful result. This would of course all be done at
parse time.
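For reference, the constrained fit can be written as a standard linear program by introducing one slack variable per colour channel for the absolute values (a sketch; R denotes the 3 x N matrix of sampled response functions):

```latex
% Sketch: the constrained colour fit as a standard linear program.
% R is the 3 x N matrix of sampled response functions, x the spectrum
% sought, (r_0, g_0, b_0) the target colour; u_r, u_g, u_b are slack
% variables standing in for the absolute values.
\begin{aligned}
\min_{x,\,u}\quad & u_r + u_g + u_b\\
\text{s.t.}\quad  & -u_c \le (Rx)_c - c_0 \le u_c \qquad (c \in \{r, g, b\}),\\
                  & x_k \ge 0 \quad (k = 1,\dots,N).
\end{aligned}
```

At the optimum each u_c equals the channel error, so the objective is exactly the sum of absolute deviations mentioned above.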
I vote for the second solution, I think it's as good as you can get :)
> Yup. If you have any additional ideas along these lines, feel free to
> let your thoughts go wild.
Just one thought. Now we have filter & transmit options. This works quite well
for most cases, but is still restrictive. Basically, the light that hits the
surface can be split into reflected (seen by the camera), transmitted and
absorbed components in any possible way. Physically, energy conservation only
requires reflected_color + transmitted_color <= 1. Right now, "transmit" gives
you transmitted_color = constant and "filter" gives you
transmitted_color = constant*reflected_color, and a combination of filter and
transmit gives you something in between.
In rgb space, this is not so bad, but when you go spectral, you lose a lot of
options.
Instead of transmit & filter we could have a single color parameter
pigment {
  color rgb [mycolor] transmission [othercolor]
}
and a fallback with old syntax that generates the transmission color according
to above model.
I would suggest this even without the spectral model. It enables you to do with
normal textures what you can already do with media - simulation of materials
that don't just absorb, but scatter (the shadow gets the opposite color to the
material, like
http://nanocomposix.com/sites/default/files/images/technology/plasmonics/_MG_3049_1.jpg).
If the material is thin enough, you could avoid media by using this model on the
surface texture. And that's just the extreme scenario. Most of the paints and
finishes have at least some of this effect.
> However, I see one well-defined (and probably mathematically optimal) solution.
> What you want is a closest match of rgb colors in the target color space.
That could be the default (I'm not sure what sort of spectra that would
generate when you had a lot more wavelengths than 4), but how about
having another parameter (like CRI is used for white light) to determine
the "width" of the spectrum created? That way, if you had said "cri 0"
after the rgb, POV would use the minimum number of non-zero monochromatic
sources to create the colour (as if the light was created by combining
laser light). Or if you put "cri 100" it would create the "widest"
spectrum (maybe by minimising RMS value of the spectrum?) to better
simulate things like paint and incandescent light.
On 25.03.2013 16:55, Simon Copar wrote:
>>> and of course, a mapping function to get back from spectrum to rgb
>>> (nontrivial, there are several different spectral response function models that
>>> try to approximate the human eye, or a display device).
>>
>> Until now I'd have expected that to be the least problem: Just choose
>> one of the two CIE standard observers, use the response functions to
>> convert to XYZ, and from there to whatever color space is chosen for output.
>
> I have to say that I never really liked XYZ color space. Mapping from XYZ to RGB
> is linear, so there is no difference between converting to XYZ and then to a
> chosen RGB (+gamma correction after that), or precomputing RGB response
> functions and mapping directly to that. There is no additional information in
> XYZ color space; it's just defined to avoid negative values in the standard
> (1931, not very representative) rgb physiological response functions. We could
> remove the intermediate step and go directly to the target color space; we lose
> nothing.
Of course that intermediate step can be skipped by just pre-computing
RGB response curves; but I consider that an implementation detail rather
than part of the big picture.
> Sorry for the rant, I got very frustrated recently about the arbitrary
> definition of XYZ space and the difference between physiological and
> standardized color spaces (CIE rgb, scrgb,...).
Well, there's nothing wrong with XYZ as opposed to any other variation
on the same theme (such as the various RGB color spaces), as long as you
don't try to compute color "distances" or do light computations in that
color space.
> I think that it would make the most sense to let the user override the rgb
> response functions, because this opens the door for color profile freaks that
> are never satisfied with the default settings. You could use the spectral
> response of a specific display device for maximum accuracy (for instance, if
> you know exactly what projector will display your movie). And for normal
> users, you would just keep scRGB as the default.
Nah, color profile freaks wouldn't benefit from custom response
functions; for that use case, a final conversion step from spectrum to
XYZ (based on the standard observer) to whatever color profile they
choose (based on a user-supplied CIE color profile) would be the way to
go. In this respect, sRGB output would be just a special case of this
final step.
Custom response functions might still be handy for creating false-color
renderings of colors outside the visible spectrum, but I'd consider that
pretty low priority.
> A related solution that also solves these problems is to take the same set of
> linear equations described above, but instead of just taking the pseudoinverse
> (and getting the least-squares solution), you can minimize
> abs(r-r0)+abs(g-g0)+abs(b-b0) subject to the constraints a>0, b>0, c>0, d>0.
> This is a linear programming problem [the algorithm can be taken straight from
> Numerical Recipes], which is also mathematically well-defined and will give you
> an optimal and physically meaningful result.
Not exactly what I was trying to achieve, but might be worth
consideration nonetheless.
> This would of course all be done at parse
> time.
Nope, it's not an option for input images. But some pre-computed values
and interpolation might do the trick.
> I vote for the second solution, I think it's as good as you can get :)
Sounds like this will cause the resulting spectrum to be governed by the choice
of sampling wavelengths.
> Just one thought. Now we have filter & transmit options. This works quite well
> for most cases, but is still restrictive. Basically, the light that hits the
> surface can be split into reflected (seen by the camera), transmitted and
> absorbed components in any possible way. Physically, energy conservation only
> requires reflected_color + transmitted_color <= 1. Right now, "transmit" gives
> you transmitted_color = constant and "filter" gives you
> transmitted_color = constant*reflected_color, and a combination of filter and
> transmit gives you something in between.
> In rgb space, this is not so bad, but when you go spectral, you lose a lot of
> options.
>
> Instead of transmit & filter we could have a single color parameter
> pigment {
>   color rgb [mycolor] transmission [othercolor]
> }
> and a fallback with old syntax that generates the transmission color according
> to above model.
Absolutely. Internal representation of transmit+filter as full-fledged
colors is a must for spectral rendering, and exposing this to the user
is only natural and not a big deal.
The syntax
color rgb COLOR transmit COLOR
would be perfect, as COLOR could still be a single numeric value,
automatically giving you the old behaviour of plain white transmission.
For backward compatibility, "filter" could also be retained, but would
always expect a single numeric value.
Huh, I'm thinking now that if you have a lot of wavelengths, you can have both
cases: non-existence of a positive-spectrum solution for an rgb color, or many
optimal solutions with different spectra (it depends on the input color). So,
following what scott wrote, you could then additionally minimize for example
a+b+c+d (maximize spectral efficiency) to avoid stuff like strong infrared
components for a red spectrum (the result should then be pretty smooth).
I realize that the result is governed by the choice of sampling wavelengths. If
you dislike that, you can perform the optimization for a fixed set of
wavelengths (something like 5-10 evenly spaced samples) and interpolate the
result. It doesn't make any sense to make it more precise than that, because
your input only has 3 components.
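The interpolation step could be as simple as piecewise-linear resampling, sketched here in plain Python with made-up numbers (numpy.interp does the same job in one call):

```python
def resample(grid_wl, grid_vals, target_wl):
    """Piecewise-linear resampling of a spectrum solved on a coarse,
    fixed wavelength grid onto whatever wavelengths a scene actually
    samples; values are clamped outside the grid.  (numpy.interp does
    the same job; this is just a dependency-free sketch.)"""
    out = []
    for w in target_wl:
        if w <= grid_wl[0]:
            out.append(grid_vals[0])
        elif w >= grid_wl[-1]:
            out.append(grid_vals[-1])
        else:
            # index of the grid point at or just below w
            i = max(j for j in range(len(grid_wl)) if grid_wl[j] <= w)
            t = (w - grid_wl[i]) / (grid_wl[i + 1] - grid_wl[i])
            out.append(grid_vals[i] + t * (grid_vals[i + 1] - grid_vals[i]))
    return out

# a 5-sample solution stretched onto three scene wavelengths
fine = resample([400, 475, 550, 625, 700],
                [0.0, 0.2, 0.8, 0.4, 0.1], [400, 512, 700])
```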
Just giving ideas, until someone starts writing this, details probably don't
matter much :)
> Or if you put "cri 100" it would create the "widest"
> spectrum (maybe by minimising RMS value of the spectrum?) to better
> simulate things like paint and incandescent light.
In any case, this would only be used if you specify rgb components (just a
shortcut if you don't put any effort into specifying the full spectral
response). If you want good effects, you should provide spectral colors anyway.
If this is implemented, you would certainly get macros for generating a light
spectrum of chosen color temperature (plus probably ionized gas colors and CIE
standard illuminants, such as D65), and a database of paint colors and natural
material spectra that you can then also mix in a scene as you wish.
> > of a specific display device for maximum accuracy (for instance, if you know
> > exactly what projector will display your movie). And for normal users, you would
> > just keep scRGB as the default.
>
> Nah, color profile freaks wouldn't benefit from custom response
> functions; for that use case, a final conversion step from spectrum to
> XYZ (based on the standard observer) to whatever color profile they
> choose (based on a user-supplied CIE color profile) would be the way to
> go. In this respect, sRGB output would be just a special case of this
> final step.
>
> Custom response functions might still be handy for creating false-color
> renderings of colors outside the visible spectrum, but I'd consider that
> pretty low priority.
Sure, but it's straightforward, because your final projection (before nonlinear
gamma-like conversions) is just a dot product like
red = red_response * spectrum
...
so changing the basis vectors is no big deal.
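That projection really is just three dot products. A minimal sketch, with made-up response curves and any quadrature weights assumed to be folded into them:

```python
def spectrum_to_rgb(spectrum, responses):
    """Project an N-sample spectrum onto three response curves; each
    output channel is a single dot product.  The curves are
    placeholders here (real code would bake standard-observer data,
    plus any quadrature weights, into 'responses')."""
    return tuple(sum(r * s for r, s in zip(resp, spectrum))
                 for resp in responses)

# two-sample toy spectrum against trivial response curves
rgb = spectrum_to_rgb([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
```

Swapping in a different set of basis vectors (a specific device, or a standard observer) changes only the `responses` data, not the code.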
Using exact device spectrum is more precise than XYZ adjustment. After
projection to 3 components, all you can do is adjust the whitepoint and that
hides all the spectral details. But the freaks might not care about that :)
> Nope, it's not an option for input images. But some pre-computed values
> and interpolation might do the trick.
Well yes, but for literals in the scene precomputation will be faster (and
simpler, because the internal colors that stand as object and pigment properties
in the code can be in the same format without overloading). That's a small
implementation detail anyway :)
May I, as a beginner in PovRay but a long-time worker in colour technology, post
a word of caution?
Most people do not understand colour. A straightforward 3D system like rgb is
quite complicated enough for them.
Having said that, I will draw a parallel with 'RealBasic' (now Real Studio)
which I use to create my own software. The point is that every new version
becomes more complex and farther removed from the easy-to-use system that it
used to be. The simple methods that a beginner can put to immediate use are
still there, but increasingly hidden in a mass of clever-clever stuff.
It is possible for RealStudio (and PovRay) to become so clever that only its
already expert users know where to start.
I looked briefly at PovRay about 20 years ago. I see that it has developed
considerably. It occurs to me that if you spend much of a lifetime building a
system it might take a newcomer much of a lifetime to understand it.
On 15.04.2013 14:57, Bernard wrote:
> May I, as a beginner in PovRay but a long-time worker in colour technology, post
> a word of caution?
> Most people do not understand colour. A straightforward 3D system like rgb is
> quite complicated enough for them.
This is one of multiple reasons why I'm trying to figure out a good way
to convert from RGB to some synthesized spectrum: it'll allow us to use
spectral math inside the render engine, but still use the RGB colour
model in scenes.
But you're making an interesting point there: Maybe we should also
provide for more intuitive color models to be used in scenes.
I think it is obvious that one of the three parameters for such a color
model would need to govern (1) the hue, while the other two should not
influence it in any way.
For easy use with light sources, it should be possible to adjust the
brightness in the most simple manner possible, i.e. by tweaking just one
single parameter. Therefore, for any given hue the second parameter
should govern (2) the saturation, and for any given combination of hue
and saturation the third parameter should govern (3) the brightness.
There are two other properties that would both be desirable but can't be
fulfilled together: It would be nice for the third parameter to directly
specify luminance (i.e. what the .grey component returns). On the other
hand this would make it extremely difficult to specify a pigment with
maximum chroma for any given hue, which I'd personally consider a no-go.
The above constraints rule out both chroma- and lightness-based models,
even though they would have merits of their own. I guess in the end the
model will inevitably have to be a variation of HSV (aka HSB).
A major open question would be whether to specify the brightness in
terms of a perceptual or physical scale.
As for saturation, with RGB-based color math it should probably
represent saturation with respect to the gamut of the internal color
space, while with spectral-based color math I guess it would make sense
to have it directly represent excitation purity.
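For what it's worth, the stdlib colorsys HSV conversion already has the property argued for above: for a fixed hue and saturation, the rgb result scales linearly with the third parameter, so brightness is controlled by a single number, and s = 1, v = 1 always yields the maximum-chroma colour for the hue. A quick check:

```python
import colorsys

# For fixed hue and saturation, HSV's value parameter scales the rgb
# result linearly: halving v halves every channel.
full = colorsys.hsv_to_rgb(0.1, 0.8, 1.0)
half = colorsys.hsv_to_rgb(0.1, 0.8, 0.5)
```

Whether v should be a perceptual or a physical scale (the open question above) is orthogonal to this parameterization; it only changes how v maps to channel magnitude.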
Thinking about it, this type of parameterization might also make it
easier to convert from user-specified colour to a synthetic spectrum.
clipka <ano### [at] anonymousorg> wrote:
>
> This is one of multiple reasons why I'm trying to figure out a good way
> to convert from RGB to some synthesized spectrum: it'll allow us to use
> spectral math inside the render engine, but still use the RGB colour
> model in scenes.
I am not sure why there should be any requirement for spectral ray tracing in
PovRay, except perhaps as a secondary specialist choice. Colour and the spectrum
are different entities. Colour does not exist in the physical world, only as a
physiological phenomenon in our eye/head, and in this respect there are only 3
colours. We cannot respond to every wavelength in the light spectrum except by
relegating it to one of, or a mixture of, our 3 responses. It would, no doubt,
make refraction more interesting, but this is bound to be of minority interest
among those who wish to make 3D scenes in 'real' colour. But probably there is
more involved than I realise.
>
> But you're making an interesting point there: Maybe we should also
> provide for more intuitive color models to be used in scenes.
>
> I think it is obvious that one of the three parameters for such a color
> model would need to govern (1) the hue, while the other two should not
> influence it in any way.
I suspect that what you are looking for here does not exist. Among the many ways
of representing colour numerically, RGB has the advantage (if it is an
advantage) of being the closest parallel to our colour vision mechanism. It is
certainly difficult for the novice to visualise the colour represented by RGB
parameters but only to the extent that I have difficulty in visualising the
effect of many other PovRay function parameters.
>
> For easy use with light sources, it should be possible to adjust the
> brightness in the most simple manner possible, i.e. by tweaking just one
> single parameter. Therefore, for any given hue the second parameter
> should govern (2) the saturation, and for any given combination of hue
> and saturation the third parameter should govern (3) the brightness.
>
> There are two other properties that would both be desirable but can't be
> fulfilled together: It would be nice for the third parameter to directly
> specify luminance (i.e. what the .grey component returns). On the other
> hand this would make it extremely difficult to specify a pigment with
> maximum chroma for any given hue, which I'd personally consider a no-go.
>
> The above constraints rule out both chroma- and lightness-based models,
> even though they would have merits of their own. I guess in the end the
> model will inevitably have to be a variation of HSV (aka HSB).
>
> A major open question would be whether to specify the brightness in
> terms of a perceptual or physical scale.
>
The only way for anyone to be sure is to see the colour in front of you. I have
two tools which I use, a colour picker and one which facilitates comparison
with colours already entered (e.g. in an .inc colour map). I can make these
available if anyone is interested and after making them more user-friendly.
> Thinking about it, this type of parameterization might also make it
> easier to convert from user-specified colour to a synthetic spectrum.
Please don't!
I now retreat in the face of greater mathematical minds.
On 16.04.2013 14:52, Bernard wrote:
> clipka <ano### [at] anonymousorg> wrote:
>
>>
>> This is one of multiple reasons why I'm trying to figure out a good way
>> to convert from RGB to some synthesized spectrum: it'll allow us to use
>> spectral math inside the render engine, but still use the RGB colour
>> model in scenes.
>
> I am not sure why there should be any requirement for spectral ray tracing in
> PovRay, except perhaps as a secondary specialist choice. Colour and the spectrum
> are different entities. Colour does not exist in the physical world, only as a
> physiological phenomenon in our eye/head, and in this respect there are only 3
> colours. We cannot respond to every wavelength in the light spectrum except by
> relegating it to one of, or a mixture of, our 3 responses. It would, no doubt,
> make refraction more interesting, but this is bound to be of minority interest
> among those who wish to make 3D scenes in 'real' colour. But probably there is
> more involved than I realise.
I do expect benefits in all areas that include colored distance-based
attenuation, such as with fog, absorbing media (including the extinction
effect of scattering media) and interior fading. With an RGB colour
model, these effects invariably exhibit a shift of hue towards the
primaries of the working colour space with increasing density and/or
distance.
That said, as a matter of fact I do plan to implement this feature in an
unofficial version first; whether it'll make its way into official
POV-Ray will depend on how much it will impact render performance, how
much it will actually improve render quality, and how many people will
actually use it.