Ive wrote:
> But again, all this is not relevant when using s(c)RGB as a working
> color space, and something like the Bradford chromatic adaptation has
> to be applied for given xyY values to make e.g. POV-Ray calculate with
> "good" RGB values.
I don't get it. On one hand, you are saying that reference white does
not matter for sRGB as a working color space (unless I misunderstand
you, which I guess I do); on the other hand you're saying some adaptation
/must/ be applied?
I'm pretty sure somehow that the /spectrum/ of the whitepoint should
matter for such things as dispersion, but where and why adaptation comes
into play still eludes me.
Let me think aloud for a moment to try to sort this out, and kick me
where I'm wrong:
So the starting point is the /tristimulus/, which (basically) models how
strongly the three different color receptors ("cones") in the human eye
react to different wavelengths. (To my knowledge we can count the "rod"
receptors out, probably because they only contribute in dim conditions,
right? Otherwise we should be able to distinguish four different
primaries, as the rods' spectral response is yet again different from the
cones'.)
Experiments were conducted to measure this per-wavelength response (a
bit indirectly) in a manner that, to my understanding, only yielded
/relative/ results: conclusions could be drawn about how much more
strongly a particular cone type is stimulated by wavelength A compared
to some other wavelength B, but there was no way to infer how much more
strongly a particular wavelength stimulated cone type A as compared to
cone type B.
Thus, the immediate conclusions drawn from these experiments left open
the question of what "white" is (which comes as no surprise, given that
it depends on the viewing conditions, i.e. the eye's "calibration").
However, I'm a bit worried at this point already: I guess the
wavelengths to test were generated from a "white" light source by means
of a prism; did they actually measure the physical light intensity of
the light source at that particular wavelength, to compensate for any
nonlinearities in intensity in their results? Or is there a hidden
"whitepoint" in the original data already, due to the light source used?
Duh - I haven't even really /started/ thinking about the problem of
whitepoint, and it gets in my way already...
On 11/12/2009 8:49 AM, scott wrote:
> Ah ok - I see where the confusion is now between us. You are assuming
> that the OP wanted to reproduce the reflective surface that those xyY
> values were measured from, whereas I was just reproducing those exact
> xyY colours on the monitor. FWIW my colour meter just measures the
> incoming spectrum, it has no light source, in fact we usually use it in
> a totally dark room.
Actually, the object I am rendering also has its ambient level set to 1,
so it is *both* an emissive *and* a reflective surface.
Mike
> I'm pretty sure somehow that the /spectrum/ of the whitepoint should
> matter for such things as dispersion, but where and why adaptation comes
> into play still eludes me.
They were talking about (which I didn't get initially) how to make a certain
*surface* look correct on a monitor, not a particular spectrum/colour.
Obviously the colour a surface appears to your eye depends on what colour
light you use to illuminate it. Adaptation comes into play when someone
else has measured the "apparent" surface colour using one type of
illuminant, but you want to know what colour it will look like when lit
with another illuminant. If you know the viewing conditions under which
you are looking at your monitor (eg a D50 illuminant) then you can work
out what to display on your monitor to make it look identical to having
the actual surface next to your monitor.
> So the starting point is the /tristimulus/, which (basically) models how
> strongly the three different color receptors ("cones") in the human eye
> react to different wavelengths.
Tristimulus values can be any 3 parameters that you decide to use, a bit
like how you can use (almost) any set of 3 vectors to describe all points
in 3D space. The most commonly used are XYZ, and they don't really match
the cone response curves; they're just 3 parameters.
> Experiments were conducted to measure this per-wavelength response (a bit
> indirectly) in a manner that, to my understanding, only yielded /relative/
> results:
No, I think they were able to generate pretty accurate colour matching
functions, which mapped exactly how the intensity of each wavelength
corresponded to the tristimulus values. You end up with a chart like the
one on wikipedia:
http://en.wikipedia.org/wiki/File:CIE_1931_XYZ_Color_Matching_Functions.svg
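To make that conversion step concrete: computing XYZ from a measured spectrum is just a weighted sum (an integral) of the spectrum against the three colour matching functions. A minimal sketch follows - note that the Gaussian curves below are crude made-up stand-ins for the real tabulated CMFs (which are published as sampled data), used only to show the machinery:

```python
import math

# Crude Gaussian stand-ins for the CIE 1931 colour matching functions.
# NOT the real CMF tables -- just toy curves with roughly the right peaks.
def xbar(l):
    return math.exp(-((l - 600) ** 2) / (2 * 50 ** 2)) \
         + 0.35 * math.exp(-((l - 450) ** 2) / (2 * 30 ** 2))

def ybar(l):
    return math.exp(-((l - 555) ** 2) / (2 * 50 ** 2))

def zbar(l):
    return 1.8 * math.exp(-((l - 450) ** 2) / (2 * 30 ** 2))

def spectrum_to_xyz(spd, lo=380, hi=780, step=5):
    """Riemann-sum a spectral power distribution against the CMFs."""
    X = Y = Z = 0.0
    for l in range(lo, hi + 1, step):
        p = spd(l)  # spectral power at wavelength l (nm)
        X += p * xbar(l) * step
        Y += p * ybar(l) * step
        Z += p * zbar(l) * step
    return X, Y, Z

# Equal-energy spectrum: every wavelength contributes the same power.
X, Y, Z = spectrum_to_xyz(lambda l: 1.0)
```

Given a spectrum, this produces exactly one XYZ triple, with no notion of "white" anywhere in the computation.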
> Thus, the immediate conclusions drawn from these experiments left open the
> question what "white" is
The only universal "scientific" definition of "white" I can think of is
equal energy at all wavelengths, which gives XYZ=(1,1,1). Other whites
are usually related to the colour of a hot object, eg D65 approximates
the colour of a black body at around 6500K.
clipka wrote:
> Ive wrote:
>
> I don't get it. On one hand, you are saying that reference white does
> not matter for sRGB as a working color space (unless I misunderstand
> you, which I guess I do), on the other hand you're saying some adaptation
> /must/ be applied?
>
Three cases for given xyY values.
The first case: the values refer to a reflective spectrum (as is the
case with Munsell color definitions or one of the many "real world"
material databases out there; one of them is e.g. the Aster spectral
library at http://speclib.jpl.nasa.gov/ containing, among other things,
data for all moon stones collected at the various Apollo mission landing
sites).
All these databases, and all measurement hardware that I'm aware of,
use D50 as reference white (but as scott pointed out there are also
others).
Now, assuming we are using scRGB (sRGB primaries and whitepoint but
without gamma correction) as the POV-Ray internal RGB *working* color
space, we have to apply chromatic adaptation to these xyY values to make
them consistent with RGB values from other sources, and especially with
the RGB values for the light sources that are used within POV-Ray to
illuminate them.
Now, speaking of light sources within POV-Ray: the second case is xyY
values that refer to those, and then *no* chromatic adaptation must be
applied.
The third case (IMO not relevant for POV-Ray anyway) is the usage of
sRGB as an output device color space (as opposed to a working color
space), where the sRGB standard implies an environment lighting/viewing
condition; adaptation to it is assumed to be done by the human visual
system, and therefore no chromatic adaptation has to be applied when
calculating RGB values from given xyY. Within the business I'm working
in this case is ignored, and e.g. Adobe ignores it as well - not that
I'm saying what they are doing is always right ;)
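The first case above can be sketched in code: Bradford adaptation from D50-referenced data to the sRGB/D65 whitepoint. The Bradford matrix and the two white points below are the commonly published values; the input xyY colour is purely illustrative:

```python
import numpy as np

# Bradford "cone response" matrix (commonly published values).
M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])

# Reference whites as XYZ (Y normalised to 1): D50 (source), D65 (sRGB).
D50 = np.array([0.96422, 1.00000, 0.82521])
D65 = np.array([0.95047, 1.00000, 1.08883])

def bradford_adapt(xyz, src=D50, dst=D65):
    """Adapt an XYZ colour measured under `src` white to `dst` white."""
    scale = (M @ dst) / (M @ src)          # per-"cone" gain factors
    return np.linalg.inv(M) @ (np.diag(scale) @ (M @ xyz))

def xyY_to_XYZ(x, y, Y):
    return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

# A D50-referenced surface colour (illustrative numbers), moved to D65.
xyz_d65 = bradford_adapt(xyY_to_XYZ(0.35, 0.36, 0.40))
```

By construction, adapting the source white itself yields exactly the destination white, which is a handy sanity check for any implementation.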
> I'm pretty sure somehow that the /spectrum/ of the whitepoint should
> matter for such things as dispersion, but where and why adaption come
> into play still eludes me.
>
Well, in fact there is no chromatic adaptation needed, and I think that
the whitepoint of the POV-Ray internal RGB working color space shouldn't
matter at all for calculating dispersion samples (besides it being needed
for the xyz->rgb conversion). I seem to remember that in some quick
response I stated otherwise; in case that's true, I'm sorry for the
confusion it might have caused.
> Let me think aloud for a moment to try to sort this out, and kick me
> where I'm wrong:
>
> So the starting point is the /tristimulus/, which (basically) models how
> strongly the three different color receptors ("cones") in the human eye
> react to different wavelengths. (To my knowledge we can count the "rod"
> receptors out, probably because they only contribute in dim conditions,
> right? Otherwise we should be able to distinguish four different
> primaries, as the rods' spectral response is yet again different from the
> cones'.)
>
Within the CIE standard observer experiments the rods are just ignored.
> Experiments were conducted to measure this per-wavelength response (a
> bit indirectly) in a manner that, to my understanding, only yielded
> /relative/ results: Conclusion could be drawn how much stronger a
> particular cone type is stimulated by wavelength A compared to some
> other wavelength B, but there was no way to infer how much stronger a
> particular wavelength stimulated cone type A as compared to cone type B.
> Thus, the immediate conclusions drawn from these experiments left open
> the question what "white" is (which comes as no surprise, given that it
> depends on the viewing conditions, i.e. the eye's "calibration").
>
There is such a thing as Grassmann's law. Google it for more details.
> However, I'm a bit worried at this point already: I guess the
> wavelengths to test were generated from a "white" light source by means
> of a prism; did they actually measure the physical light intensity of
> the light source at that particular wavelength, to compensate for any
> nonlinearities in intensity in their results? Or is there a hidden
> "whitepoint" in the original data already, due to the light source used?
>
No. They used mercury (and other metal) vapor lamps to produce 3
monochromatic light beams at three different wavelengths. From memory
these were around 435nm, 545nm and 700nm, and as a side note these
values were not chosen completely freely; they also had to deal with the
kind of vapor lamps that were available in 1931.
The "observer" could then adjust the intensity of the three beams until
the color resulting from the mixture matched a given one. So there is no
"hidden" whitepoint and no dealing with "what is white" at all.
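In linear-colour terms, the knob-turning described above amounts to solving a 3x3 linear system: find the three primary intensities whose mixture equals the target colour. A sketch with purely illustrative primary coordinates (not the actual 1931 instrument values):

```python
import numpy as np

# Columns: tristimulus coordinates of three monochromatic primaries
# (illustrative numbers only).
P = np.array([[0.18, 0.35, 0.70],
              [0.02, 0.90, 0.30],
              [0.95, 0.05, 0.00]])

target = np.array([0.5, 0.4, 0.3])   # the colour to be matched

# Intensities the "observer" would end up dialling in: solve P @ w = target.
w = np.linalg.solve(P, target)

# Negative weights correspond to adding a primary to the *target* side of
# the apparatus instead -- which is why the raw r,g,b colour matching
# functions have negative lobes.
```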
> Duh - I haven't even really /started/ thinking about the problem of
> whitepoint, and it gets in my way already...
Don't worry ;)
-Ive
scott wrote:
>> Experiments were conducted to measure this per-wavelength response (a
>> bit indirectly) in a manner that, to my understanding, only yielded
>> /relative/ results:
>
> No, I think they were able to generate pretty accurate colour matching
> functions, which mapped exactly how the intensity of each wavelength
> corresponded to the tristimulus values. You end up with a chart like
> the one on wikipedia:
Not really: they used various other assumptions to get to the XYZ color
model, which were not part of the original tristimulus experiments (for
instance, results from an experiment that tested how people perceive the
brightness of spectral colors in relation to one another).
> http://en.wikipedia.org/wiki/File:CIE_1931_XYZ_Color_Matching_Functions.svg
>
>> Thus, the immediate conclusions drawn from these experiments left open
>> the question what "white" is
>
> The only universal "scientific" definition of "white" I can think of is
> equal energy at all wavelengths, this gives XYZ=(1,1,1). Other whites
> are usually related to the colour of a hot object, eg D65 is the colour
> of an object at 6500K.
Given that "white" is what we humans /perceive/ as "white", that
equal-energy definition is not really reliable.
Ive wrote:
> Now, assuming we are using scRGB (sRGB primaries and whitepoint but
> without gamma correction) as the POV-Ray internal RGB *working* color
> space we have to apply chromatic adaption for these xyY values to make
> them consistent with RGB values from other sources and especially with
> the RGB values for light sources that are used within POV-Ray to
> illuminate them.
Wouldn't it be more logical in the case of reflective surfaces to apply
chromatic adaptation to the /light source/ in the scene?
>> I'm pretty sure somehow that the /spectrum/ of the whitepoint should
>> matter for such things as dispersion, but where and why adaption come
>> into play still eludes me.
>>
> Well, in fact there is no chromatic adaption needed and I think that the
> whitepoint of the POV-Ray internal RGB working color space shouldn't
> matter at all for calculating dispersion samples (besides that it is
> needed for the xyz->rgb conversion). I seem to remember in some quick
> response I did state otherwise and in case this is true I'm sorry for
> the confusion this might have caused.
Yes, you mentioned some chromatic adaptation. Apologies accepted, but I
still seem to be confused :-)
>> Experiments were conducted to measure this per-wavelength response (a
>> bit indirectly) in a manner that, to my understanding, only yielded
>> /relative/ results: Conclusion could be drawn how much stronger a
>> particular cone type is stimulated by wavelength A compared to some
>> other wavelength B, but there was no way to infer how much stronger a
>> particular wavelength stimulated cone type A as compared to cone type
>> B. Thus, the immediate conclusions drawn from these experiments left
>> open the question what "white" is (which comes as no surprise, given
>> that it depends on the viewing conditions, i.e. the eye's "calibration").
>>
> There is such a thing as the Grassmann law. Google it for more details.
Hum... okay... so I googled it. But I have no idea how it fits into
the whole smash...
> No. They used mercury (and other metal) vapor lamps to produce 3
> monochromatic light beams at three different wavelengths. From memory
> these were around 435nm, 545nm and 700nm, and as a side note these
> values were not chosen completely freely; they also had to deal with
> the kind of vapor lamps that were available in 1931.
Yes, I read that.
> The "observer" could then adjust the intensity of the three beams until
> the color that did result from the mixture did match a given one. So
> there is no "hidden" whitepoint and no dealing with "what is white" at all.
Well, this is exactly the point I'm after here: This "given [color]"
must have come from somewhere. From the experiment description, it was
this very color (a spectral one, I presume) that they "measured" with
the experiment.
This color must have had some intensity, and I guess the test persons
did not just try to match the color, but the apparent brightness as
well. I mean, after all, for instance both 700 nm and 740 nm are
perceived as pretty much the same hue of red, except that one is
perceived as brighter than the other.
So, was the intensity deliberately "normalized" to equal physical
brightness?
> Not really: they used various other assumptions to get to the XYZ color
> model, which were not part of the original tristimulus experiments (for
> instance, results from an experiment that tested how people perceive
> the brightness of spectral colors in relation to one another).
Well, I don't know the details of exactly what experiments and
calculations they conducted to end up with the colour matching functions,
but today they are a standard way to convert from a spectrum (which you
can measure scientifically) to XYZ, which is the basis for all colour
spaces. If you are given a spectrum, you can get out only one possible
XYZ colour; there is no reliance on any concept of "white" to make that
conversion step.
> Given that "white" is what we humans /percieve/ as "white", that
> equal-energy definition is not really reliable.
That isn't very scientific though, usually "white" in a strictly scientific
way means equal energy across all relevant wavelengths (see "white noise" in
audio). The CIE colour matching functions were designed to give equal XYZ
values when presented with a spectrum of "white" light such as this.
Obviously the human perception of what is "white" light varies greatly
with the surrounding illumination (which often comes from the sun/sky),
our psychological concept of what "should" be white, etc; this is why
various other whites are defined as standards.
I guess you could think up an experiment in a completely dark room where
you show a subject two near-white coloured light sources and ask them
which is "whiter" than the other. Repeat until you find "white". Someone
has probably done that already...
clipka wrote:
> Wouldn't it be more logical in the case of reflective surfaces to apply
> chromatic adaptation to the /light source/ in the scene?
>
Err, no. Let me put it this way: we use chromatic adaptation for the
reflective surface because we want to rule out the influence that the
hardware used to measure its reflectivity had. From this point of view,
raytracing is about having some objective surface color that is
illuminated by some freely chosen colored light source, and we are after
the reflected color - the same as if the real-world thing were
illuminated by our light source. We are definitely no longer interested
in e.g. what kind of lamp was used to *measure* reflectivity, and so
(again) in this case chromatic adaptation has to be applied to the given
spectral data or xyY value or whatever.
>>> Experiments were conducted to measure this per-wavelength response (a
>>> bit indirectly) in a manner that, to my understanding, only yielded
>>> /relative/ results: Conclusion could be drawn how much stronger a
>>> particular cone type is stimulated by wavelength A compared to some
>>> other wavelength B, but there was no way to infer how much stronger a
>>> particular wavelength stimulated cone type A as compared to cone type
>>> B. Thus, the immediate conclusions drawn from these experiments left
>>> open the question what "white" is (which comes as no surprise, given
>>> that it depends on the viewing conditions, i.e. the eye's
>>> "calibration").
>>>
>> There is such a thing as the Grassmann law. Google it for more details.
>
> Hum... okay... so I googled it up. But I have no idea how it fits into
> the whole smash...
>
I was referring to the linearity of human color perception as stated by
Grassmann, and to one of the major problems with the CIE rgb color
space: various vectors of *equal* length within the CIE xy diagram
represent *different* perceived color differences (or DeltaE, as this is
called).
But I might be getting you completely wrong about what you are after here.
>> No. They used mercury (and other metal) vapor lamps to produce 3
>> monochromatic light beams at three different wavelengths. From memory
>> these were around 435nm, 545nm and 700nm, and as a side note these
>> values were not chosen completely freely; they also had to deal with
>> the kind of vapor lamps that were available in 1931.
>
> Yes, I read that.
>
>> The "observer" could then adjust the intensity of the three beams
>> until the color that did result from the mixture did match a given
>> one. So there is no "hidden" whitepoint and no dealing with "what is
>> white" at all.
>
> Well, this is exactly the point I'm after here: This "given [color]"
> must have come from somewhere. From the experiment description, it was
> this very color (a spectral one, I presume) that they "measured" with
> the experiment.
>
AFAIR, the test colors were also produced by metal vapor lamps of known
wavelength, but those were easier to build at the time as they did not
have to be made adjustable in brightness. I have a book at work with an
exact description of the Wright/Guild experiments, and in case I'm wrong
here I will correct myself next week ;)
> This color must have had some intensity, and I guess the test persons
> did not just try to match the color, but the apparent brightness as
> well. I mean, after all, for instance both 700 nm and 740 nm are
> perceived as pretty much the same hue of red, except that one is
> perceived as brighter than the other.
>
> So, was the intensity deliberately "normalized" to equal physical
> brightness?
There was not a *single* color matching experiment; it was performed
over almost 10 years with different people, different lamps and even
slightly different setups. The resulting data was then assembled into
the standard observer data. As for the 'brightness' issues, I'm guessing
they were quite happy to make the lamps work at all.
-Ive
scott wrote:
> That isn't very scientific though, usually "white" in a strictly
> scientific way means equal energy across all relevant wavelengths (see
> "white noise" in audio). The CIE colour matching functions were
> designed to give equal XYZ values when presented with a spectrum of
> "white" light such as this.
Okay, I think I get that.
So in /that/ context, there is no "whitepoint" involved, right? Or am I
getting something wrong here?
And when you add two spectral colors, the whitepoint would still not
come into play - presuming that color perception is linear, which seems
to be the case from all that scientists have found out. (Stop me when
I'm talking nonsense.)
And with the sRGB color model, the same principle applies, because it is
just another choice of coordinate axes in 3D color space (leaving the
transfer function aside for now).
So as long as we're talking about some light color which we intend to
convert from XYZ to sRGB, we can happily forget about whitepoint: if we
shove an XYZ color into the transformation matrix that represents "white"
in the physical sense, then the sRGB color we get will just as well
represent "white" in the physical sense - right?
Thus, in order to "render" light of a certain color with known XYZ
coordinates on an sRGB "output channel" (be it a device, a file, or
whatever), we should just take the XYZ value we have, shove it through
the transformation matrix as defined in the sRGB standard, and live
happily ever after. As for viewing conditions, I would expect these to
be taken care of automatically by a properly calibrated display.
I just toyed around with the sRGB transformation matrix, leading me to
the conclusion that the XYZ color model must be using illuminant *E* (!)
(that is, equal physical light intensity) as its native "white" (i.e.
<1,1,1> - heck, I could have guessed that from the x,y coordinates of
the various illuminants), while "the" sRGB "white" (again <1,1,1>)
matches D65. (Ah-hah! So that's what the "display whitepoint" is
denoting in the sRGB specs.)
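That back-of-the-envelope check is easy to reproduce: push sRGB <1,1,1> through the linear-sRGB-to-XYZ matrix from the sRGB specification and read off the chromaticity:

```python
import numpy as np

# Linear-sRGB -> XYZ matrix from the sRGB specification (D65 white).
RGB_to_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

# Map sRGB "white" <1,1,1> into XYZ, then project to xy chromaticity.
X, Y, Z = RGB_to_XYZ @ np.array([1.0, 1.0, 1.0])
x, y = X / (X + Y + Z), Y / (X + Y + Z)

# x,y come out as roughly (0.3127, 0.3290) -- the D65 chromaticity,
# confirming that sRGB <1,1,1> is D65, not illuminant E.
```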
Okay, so I think I got it, as far as light goes. Now for the color of
surfaces:
In my naive mind, I would have presumed that to specify the color of a
surface via the CIE XYZ color system, one would specify the XYZ
coordinates of the diffusely reflected light when the surface in
question is subject to physically white light (i.e. illuminant E);
however, from your explanations I gather that this is /not/ the case, right?
Which raises the (possibly trivial) question, what exactly /is/ then
specified for color pigments? Is it, as I now tend to presume, the XYZ
coordinates of the diffusely reflected light when subject to illuminant
D65 instead?
Gee, that doesn't make it easier to come up with a sensible way of
proper color handling in POV-Ray...
> So in /that/ context, there is no "whitepoint" involved, right? Or am I
> getting something wrong here?
You're right, "whitepoint" has no relevance or meaning when you're talking
solely about a tristimulus value or a specific spectrum.
> And when you add two spectral colors, still the whitepoint would not come
> into play
Nope.
> And with the sRGB color model, the same principle applies, because it is
> just another choice of the coordinate axes in 3D color space (leaving the
> transfer function aside for now).
You have to deal with linear sRGB to be able to "add" the values together,
but yes it's just the same because XYZ -> (linear)sRGB is just a linear
relationship.
> So as long as we're talking about some light color which we intend to
> convert from XYZ to sRGB, we can happily forget about whitepoint: If we
> shove a XYZ color into the transformation matrix that represents "white"
> in the physical sense, then the sRGB color we get will just as well
> represent "white" in the physical sense - right?
Yes, XYZ <-> sRGB is a 1-1 fixed mapping that has no dependence on any white
point.
> Thus, in order to "render" light of a certain color with known XYZ
> coordinates on an sRGB "output channel" (be it a device, a file, or
> whatever), we should just take the XYZ value we have, shove it through the
> transformation matrix as defined in the sRGB standard, and live happily
> ever after. As for viewing conditions, I would expect these to be taken
> care of automatically by a properly calibrated display.
Yes exactly. This is what I assumed the OP wanted at the beginning of this
thread. But it turns out the "XYZ" they "knew" was actually measured from a
reflective surface under a certain light source. They wanted to display
"the surface" on their monitor as if it was lit from a different light
source. In order to achieve this you need to calculate a new XYZ value
first to account for the different illuminant, then convert to sRGB. I
think it's confusing to call the illuminant a "reference white", but hey.
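The final conversion step (XYZ to display-ready sRGB, including the transfer function) can be sketched as follows. The matrix and encoding constants are the published sRGB values; the sample input is simply the D65 white chromaticity at full luminance:

```python
import numpy as np

# XYZ -> linear sRGB matrix (inverse of the one in the sRGB spec).
XYZ_to_RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def encode(c):
    """sRGB transfer function: linear light -> display-encoded values."""
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308,
                    12.92 * c,
                    1.055 * c ** (1 / 2.4) - 0.055)

def xyY_to_srgb(x, y, Y):
    # xyY -> XYZ, then the fixed linear map, then gamma encoding.
    XYZ = np.array([x * Y / y, Y, (1 - x - y) * Y / y])
    return encode(XYZ_to_RGB @ XYZ)

# The D65 chromaticity at full luminance encodes to (almost exactly) white.
rgb = xyY_to_srgb(0.3127, 0.3290, 1.0)
```

Any illuminant-dependent adaptation (the Bradford step discussed earlier) would happen on the XYZ side, before this fixed mapping.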
> I just toyed around with the sRGB transformation matrix, leading me to the
> conclusion that the XYZ color model must be using illuminant *E* (!) (that
> is, equal physical light intensity) as its native "white" (i.e. <1,1,1> -
> heck, I could have guessed that from the x,y coordinates of the various
> illuminants), while "the" sRGB "white" (again <1,1,1>) matches D65.
> (Ah-hah! So that's what the "display whitepoint" is denoting in the sRGB
> specs.)
:-) I was going to say earlier you could start with sRGB=<1,1,1> and work
back to XYZ to find out what "white" meant in sRGB space.
> Okay, so I think I got it, as far as light goes. Now for the color of
> surfaces:
>
> In my naive mind, I would have presumed that to specify the color of a
> surface via the CIE XYZ color system, one would specify the XYZ
> coordinates of the diffusely reflected light when the surface in question
> is subject to physically white light (i.e. illuminant E); however, from
> your explanations I gather that this is /not/ the case, right?
That is one option, but generally illuminant E is not available or not
desired (for whatever reason), so D50, D65 or some other illuminant is
used. So long as you specify the XYZ value *and* the illuminant used,
it's enough to define the absolute reflective colour of the surface and
how it will appear under any other illuminant.
> Which raises the (possibly trivial) question, what exactly /is/ then
> specified for color pigments? Is it, as I now tend to presume, the XYZ
> coordinates of the diffusely reflected light when subject to illuminant
> D65 instead?
We only use D65 in the display industry because the reflective colour of
LCDs only becomes visible under very bright conditions (eg outside when it's
sunny) - then D65 is a good approximation of the illuminant and thus allows
direct comparison of the transmissive and reflective xyY values (obviously
you want them to match as closely as possible so things don't change colour
when the sun comes out!).
If your reflective surfaces will mostly be viewed under some other lighting
conditions then it probably makes sense to use that illuminant as your
reference.