Ive <"ive### [at] lilysoft org"> wrote:
> MessyBlob wrote:
> > There are two ways of looking at this 'out of gamut' question: wavelengths, and
> > brightness (amplitudes). It's possible to have wavelengths that don't exist in
> > POV-Ray, and amplitudes that are too bright for integer (8-bit or 16-bit)
> > output from POV-Ray.
> >
> >
> > First, the frequency perspective:
> > [snip...]
> This is all true but has nothing to do with the color space gamut.
>
> > The brightness (amplitude) perspective is a bit easier to understand:
> > [snip...]
>
> Again, true but not related to the color space gamut.
Not directly, but you ultimately have to map the RGB range to some other range,
and depending on that mapping, you'll either need to keep the RGB colours
within the target gamut, or intelligently map the out-of-gamut colours.
If the problem is that you have out-of-gamut colours within the RGB
representation, then it can only be because colour element values are out of
range (<0.0 or >1.0), which can be fixed by scaling, shifting, or clipping the
colour values.
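The three fixes mentioned (scaling, shifting, clipping) can be sketched roughly as follows. This is Python for illustration only, not POV-Ray code, and the function names are made up:

```python
def clip_channels(rgb, lo=0.0, hi=1.0):
    """Hard-clip each channel into [lo, hi]. Simple, but because each
    channel is treated independently it can shift the hue."""
    return tuple(min(max(c, lo), hi) for c in rgb)

def scale_channels(rgb):
    """Uniformly scale an over-bright colour so its largest channel
    becomes 1.0, preserving the ratios between channels."""
    peak = max(rgb)
    return tuple(c / peak for c in rgb) if peak > 1.0 else rgb

def shift_channels(rgb):
    """Shift all channels up by the most negative one, so the smallest
    channel becomes 0.0; this moves the colour toward white
    (desaturating it) rather than clipping information away."""
    low = min(rgb)
    return tuple(c - low for c in rgb) if low < 0.0 else rgb
```

Each option trades away something different: clipping loses hue fidelity, scaling loses absolute brightness, and shifting loses saturation.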
I'm not sure how relevant gamut control is to POV-Ray, as standard colour
management software can handle conversions from RGB. I think it's really a
question of 'rendering intent' (a colour management phrase, not a ray tracing
term) after POV-Ray output is generated, in that you can either (a) map the
limits of one device to another (relative colorimetric) so that a full-gamut
image in RGB will extend to the limits of the target gamut, or (b) you can
equate both gamuts to an absolute scale (absolute colorimetric) so that images
will look the same when arranged (in the real world) next to each other, even
when the image media are different.
If the above is waffle, then maybe we've not understood the problem correctly
:o)
---
>> Wrong. Out of gamut values are indicated by one or more RGB components
>> with *negative* values. It has nothing to do with HDR. And IIRC POV-Ray
>> does clip negative values - but it is a long time since I looked at the
>> source - so you might know better.
>
> Well, if RGB components can be negative, then obviously we're not talking
> about a straightforward 3-band spectrum (or am I missing something here?)
> as simulated by POV-Ray.
RGB are just 3 numbers that tell you how much of three actual colours to mix
together. The actual 3 colours that are mixed will depend on your monitor,
or what colour space you are working in (if you are not displaying the
results directly).
It makes no difference what the maximum limit of each channel is, whether
it's 1, 255 or 34723489; you are still only going to end up with colours
inside that colour space (OK, they may represent a really bright colour,
but the colour will still be inside the colour space).
The only way to generate colours outside of the colour space is to use
negative values. If, for example, you want to model a laser light in POV,
you are not going to be able to describe the colour using only positive
values of RGB, you would have to use a negative somewhere because
monochromatic light sources are outside of most colour spaces.
If POV doesn't clip negative values, then the final rendered result should
be physically accurate, even if un-displayable on most hardware. Some
colour-shift algorithm would usually be needed to map the entire visual
colour space onto displayable RGB; just clamping negative values to 0 is a
very poor one, as the hue is not preserved. A better one is to reduce the
saturation of the colour, keeping the hue, until it is on the border of the
RGB colour space.
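That hue-preserving desaturation can be sketched as follows. Python, purely illustrative: it assumes linear-light RGB and borrows the Rec. 709 luminance weights, neither of which the post specifies:

```python
def desaturate_into_gamut(rgb):
    """Pull an out-of-gamut colour toward its own luminance-matched grey
    just far enough that no channel is negative, keeping the hue.
    Assumes linear-light RGB with positive luminance."""
    r, g, b = rgb
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights (assumed)
    # Find the smallest blend factor t in [0, 1] such that
    # lum + t * (c - lum) >= 0 holds for every channel c.
    t = 1.0
    for c in rgb:
        if c < 0.0:
            t = min(t, lum / (lum - c))
    if t == 1.0:
        return rgb  # no negative channels; nothing to do
    return tuple(lum + t * (c - lum) for c in rgb)
```

For an input like (-0.2, 1.0, 0.0) this returns a less saturated green whose red channel sits on the gamut border at 0.0 (up to rounding).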
---
"scott" <sco### [at] scott com> wrote:
> [...] you would have to use a negative somewhere because
> monochromatic light sources are outside of most colour spaces.
Values for RGB are about responses to light in three frequency ranges.
You can't have negative responses.
What you're looking for falls in the 'frequency' gamut problem (see above), and
demands a spectral approach (see above), which is beyond the capabilities of
POV-Ray.
---
"MessyBlob" <nomail@nomail> wrote:
> What you're looking for falls in the 'frequency' gamut problem (see above), and
> demands a spectral approach (see above), which is beyond the capabilities of
> POV-Ray.
Is it?
SDL can do quite a lot of stuff; e.g. you could render an "animation" cycling
through narrow spectral bands, using a smart macro to pick different colors
based on spectra you define. Something like:
#if (frame_number < 3)
// in frames 0-2, we render the scene using different discrete spectral bands
// (imagine us illuminating the scene with e.g. a Hg vapor lamp, a genuine
// neon lamp, or whatever lamp you have to give you narrow spectral emission,
// and changing light sources from frame to frame)
// building what would basically be false-color images of the scene
#macro ColorFromSpectrum(q1,q2,q3,q4,q5,q6,q7,q8,q9)
  #switch (frame_number)
    #case (0)
      rgb <q1,q4,q7>
      #break
    #case (1)
      rgb <q2,q5,q8>
      #break
    #case (2)
      rgb <q3,q6,q9>
      #break
  #end // switch
#end // macro
....
sphere { <0,0,0>, 1
pigment { color ColorFromSpectrum(0.2,0.3,0.7,0.5,0.2,0.2,0.1,0.3,0.0) }
finish { ...}
}
....
#else
// now, in the final frame, we build an orthographic scene,
// using the previously rendered false-color frames as input for some
// smart functions to transform the results into a color space
// of our liking
// I'll leave the exact details out here because I'm not a color space
// transformation expert, and too lazy to look up the exact syntax;
// The basic procedure would be to make pigment functions from the
// rendered frames to retrieve the individual values,
// compose functions from these that each compute a result color channel,
// and use some pigment averaging to compose the color channels together.
#end
---
"clipka" <nomail@nomail> wrote:
> "MessyBlob" <nomail@nomail> wrote:
> SDL can do quite a lot of stuff; e.g. you could render an "animation" cycling
> through narrow spectral bands, using a smart macro to pick different colors
> based on spectra you define. Something like:
> [...code...]
OK, I'll give you that one, but I was thinking more about frequency-specific
effects like dispersion, refraction, etc.
---
> Values for RGB are about responses to light in three frequency ranges.
Yes, that's true, but the frequency ranges used for RGB are different from
the response curves of the cones in your eyes (which overlap significantly).
You cannot describe all visible colours using only positive RGB values. To
describe the ones outside of positive RGB you can either use negative values
or introduce different primary colours.
> You can't have negative responses.
Nope, in the same way that you can't possess a negative number of any
physical objects, or you can't have a complex-valued voltage, but they are
still useful concepts with many uses.
Some visible colours that you can see with your eyes are simply not possible
to represent as positive RGB, allowing negative RGB allows these colours to
be correctly represented within the RGB system. They might not be
displayable on normal monitors, but the data is there and complete, should
anyone wish to use it for some other purpose (eg calculating the dominant
wavelength, converting it to another colour space, displaying it on
specialised hardware, etc).
Note that allowing negative values in RGB is not the same as allowing a
negative brightness. RGB of (-1,0,-1) doesn't make much physical sense, but
(-5,100,-5) might correctly describe the colour of a green LED.
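The green-LED point can be checked numerically. Below, a monochromatic ~520 nm green is converted to linear sRGB using the CIE 1931 2° colour-matching values and the commonly published XYZ-to-linear-sRGB matrix (both taken from standard tables; treat the exact digits as approximate):

```python
# CIE 1931 2-degree colour-matching values at 520 nm (a monochromatic green)
X, Y, Z = 0.06327, 0.71000, 0.07825

# Standard XYZ -> linear sRGB conversion (D65 white point)
R = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
G = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
B = 0.0557 * X - 0.2040 * Y + 1.0570 * Z

# The red and blue channels come out negative: the colour lies outside
# the sRGB gamut, exactly the (-5,100,-5)-style situation described above.
print(R, G, B)
```

No positive combination of sRGB primaries can reproduce this colour; the negative channels are what make the description complete rather than wrong.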
---
"scott" <sco### [at] scott com> wrote:
> > Values for RGB are about responses to light in three frequency ranges.
> Note that allowing negative values in RGB is not the same as allowing a
> negative brightness. RGB of (-1,0,-1) doesn't make much physical sense, but
> (-5,100,-5) might correctly describe the colour of a green LED.
It's possible that overlaps between different colours' response profiles
might leave one feeling that a source peak has been represented with
inappropriate responses in one of the targets. I can see you wanting a
negative value to compensate for that. A green LED might be a good example
because, despite being monochromatic, it might attract weak responses from
red or blue; in which case, you'd change the response profiles rather than
mess about with negative numbers. (?)
---
> compensate for that. Green LED might be a good example, because, despite
> being monochrome, it might attract weak responses from red or blue, in
> which case, you change the response profiles, rather than mess about with
> negative numbers. (?)
Sure, for that you can use the CIE 1931 XYZ standard observer responses,
X,Y,Z are guaranteed to be positive for all visible colours (but the
downside is you also get some XYZ combinations that are not visible). But
usually when creating images you are using some other space, like sRGB (for
web), adobeRGB (from digital cameras), the NTSC standard (for TV work) etc.
These don't cover the whole CIE gamut so negative values must be used if you
want to describe and preserve some colours (eg the green LED).
For example if I take a photo on my camera it will produce an image using
the adobeRGB colour space. If I convert that to sRGB (to display on a web
page) or to some other space (eg to submit to a printer, or for use on a
certain display system) it is possible that some resulting RGB values might
be negative. It's not wrong or an error, it's just what is needed to
represent those colours. What you do with negative values is up to you, but
if you really only need positive values (eg to display on a monitor) then
just clipping them to zero is almost the worst thing to do.
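The adobeRGB-to-sRGB case can be illustrated the same way. A sketch in Python: the matrices below are the commonly published linear Adobe RGB (1998)-to-XYZ and XYZ-to-linear-sRGB matrices for D65, quoted from memory, so treat the digits as approximate:

```python
def adobe_to_srgb_linear(r, g, b):
    """Convert linear Adobe RGB (1998) to linear sRGB via CIE XYZ (D65).
    Matrix coefficients are approximate values from standard references."""
    # Adobe RGB (1998) -> XYZ
    X = 0.5767 * r + 0.1856 * g + 0.1882 * b
    Y = 0.2974 * r + 0.6273 * g + 0.0753 * b
    Z = 0.0270 * r + 0.0707 * g + 0.9911 * b
    # XYZ -> linear sRGB
    return (
        3.2406 * X - 1.5372 * Y - 0.4986 * Z,
        -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
        0.0557 * X - 0.2040 * Y + 1.0570 * Z,
    )

# A fully saturated Adobe-RGB green lands outside the sRGB gamut:
# the red and blue channels of the result come out negative.
print(adobe_to_srgb_linear(0.0, 1.0, 0.0))
```

Those negative channels are the honest description of the colour; what to do with them (clip, desaturate, hand off to a colour management engine) is a separate decision.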
Going back to what POV does, the best thing POV could do is just to preserve
the negative-ness of colour components and produce an output file (if using
a suitable format) that reflects this. That way if the POV output file is
converted to some other colour space further down the workflow line, the
results will be as accurate as possible. For immediate display on a monitor
or print, maybe an option to clip negative values to zero, or to use a more
sophisticated algorithm that preserves hue, could be included.
---
On 15.04.2009 at 15:52, scott wrote:
> Going back to what POV does, the best thing POV could do is just to
> preserve the negative-ness of colour components and produce an output
> file (if using a suitable format) that reflects this.
Here lies your main issue: the suitable format, for POV.
If you really want to work in another colour space, you might want to specify
some information as a spectral sampling (or whatever your colour vector is
based on) instead of some rgb, but you would also want to specify the
interaction of matter with a ray as a matrix (and not only as rgbtf).
Of course, depending on both your scene's elements and the output format,
another adaptation of the final ray to the colour space of the output image
would be needed.
Which leads to another option: outputting the same render in different file
formats at once (one render, ten files! OK... maybe only two: a classical png
and a fancy spectral format).
The interest of the transformation matrix for the interaction of a ray with a
"pigment" is the modelling of real effects, such as black light (the
frequency of the UV light gets divided by 2, reaching the visible spectrum...
the classical usage is a "white UV", but you can consider fancier filters),
or some stone with a variable IOR according to the light's frequency... or
worse.
---
> Here lies your main issue. The suitable format. For Pov.
For BMP, JPEG, PNG etc I don't think you are allowed negative values so POV
would have to clip them before writing. I don't know if the HDR formats
allow negative colour values or not.
> If you really want to work in another colorspace, you might want to
> specify some
> information as a spectral sampling
Most functions in POV don't need that information; they don't need to know
what R, G and B actually are at all. It would be a huge effort and would make
rendering much slower if you were to work on spectral information, the only
gain being things like dispersion being slightly more accurate and being able
to model fluorescent materials correctly. I don't think it is worth it.