I forgot that there was also this post:
https://news.povray.org/povray.text.scene-files/thread/%3C58da4065%241%40news.povray.org%3E/
From: Thomas de Groot
Subject: Re: Gamma and the sRGB Keywords in POV-Ray 3.7: a Tutorial
Date: 23 Apr 2024 08:14:40
Message: <6627a630$1@news.povray.org>
On 23-4-2024 at 11:16, jr wrote:
> hi,
>
> Thomas de Groot <tho### [at] degrootorg> wrote:
>> On 22/04/2024 at 19:18, Bald Eagle wrote:
>>> ...
>>> Oh please oh please oh please..... ;)
>>
>> <large grin>
>>
>> I imagine that hell is paved ...
>
> Hell yes !! (I only got as far as tarring & feathering) </grin>
>
>
> regards, jr.
>
That is bad enough already ;-)
--
Thomas
So, I did a little spreadsheet dabbling, and the srgb-to-rgb conversion and its
inverse both work perfectly and as expected.
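For reference, here are the two transfer functions being checked, written out as
SDL macros. This is just a minimal sketch using the standard sRGB
(IEC 61966-2-1) formulas, not code taken from the spreadsheet:

// sRGB -> linear ("decoding"), standard piecewise formula
#macro sRGB_to_Linear(C)
  #local R = 0;
  #if (C <= 0.04045)
    #local R = C/12.92;
  #else
    #local R = pow((C + 0.055)/1.055, 2.4);
  #end
  R
#end

// linear -> sRGB ("encoding"), the inverse of the above
#macro Linear_to_sRGB(C)
  #local R = 0;
  #if (C <= 0.0031308)
    #local R = C*12.92;
  #else
    #local R = 1.055*pow(C, 1/2.4) - 0.055;
  #end
  R
#end

// round trip check: should print 0.750 in the message pane
#debug concat(str(Linear_to_sRGB(sRGB_to_Linear(0.75)), 0, 3), "\n")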
So the real question at this point is: what do people want to know, have
available to them, and see in a documentation render, animation, macro,
function, etc.?
It should unambiguously answer any questions that you have, provide any visual
explanations that you might need, and give you data that you might want to
copy/paste into a scene to solve whatever problem you might be experiencing.
- BW
"Bald Eagle" <cre### [at] netscapenet> wrote:
> I do believe it was stated somewhere above that POV-Ray takes into account the
> effective scene gamma... so I would recommend digging around in the source code.
https://news.povray.org/web.5f8799f076c60ba860e0cc3d0%40news.povray.org
"The srgb keyword always returns a color that *looks* the same across all
assumed_gamma settings. To take your example, srgb 0.75 will *look* the same
whether your scene uses assumed_gamma 1, assumed_gamma 2.2, or assumed_gamma
srgb. But this implies that, internally, it will evaluate to different rgb
values depending on the assumed_gamma setting.
When you use assumed_gamma srgb, the scene's nonlinearity aligns with the color
definition, which is why rgb and srgb return the same value. And since sRGB is
close to gamma 2.2, rgb and srgb return close, but not identical, values under
assumed_gamma 2.2."
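A quick way to see that internal difference for yourself (my own sketch, not
part of the quoted post) is to let the parser print what an srgb colour
evaluates to:

#version 3.7;
global_settings { assumed_gamma 1.0 }
#declare C = color srgb <0.75, 0.75, 0.75>;
#debug concat("internal value: ", str(C.red, 0, 3), "\n")
// with assumed_gamma 1.0 this should print about 0.522
// ( = pow((0.75 + 0.055)/1.055, 2.4) ); with assumed_gamma srgb it
// should print 0.750, matching the explanation above
sphere { 0, 1 pigment { color C } }  // dummy object so the scene renders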
"Bald Eagle" <cre### [at] netscapenet> wrote:
> "The srgb keyword [...]"
The problem is that I don't understand the problem.
There are three ways of dealing with colour:
1. the easy way, everything in linear space (as POV-Ray does)
2. the proper way, CIE colour space.
3. the messy way (srgb et al.)
In 1 & 2 every operation is done within the same colour space, so the results
are always the same. In linear colour space operations are easy; in CIE they are
hard(er), as it is a curved space. Only at the very end of the chain of
operations is the result adapted to the presentation medium: screen (nowadays
srgb), print on paper, print on slide, print on film, carve in wood, etch in
zinc, etc.
The messy way (3) kind of starts at the output, and you have to adapt the input;
that's a strange way around. The output isn't a fixed thing.
I kind of understand why the srgb colour was introduced in POV-Ray, but it feels
very wrong to me. A built-in function, fromsrgb(), would have been fine and
explicit. Maybe it can still be changed?
ingo
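For what it's worth, a user-level macro along the lines ingo suggests is easy to
sketch in SDL. FromSRGB below is hypothetical, not an existing POV-Ray keyword;
it just applies the standard piecewise sRGB decode per channel:

// decode one sRGB channel to linear (standard piecewise formula)
#macro FromSRGB_c(C)
  #local R = 0;
  #if (C <= 0.04045)
    #local R = C/12.92;
  #else
    #local R = pow((C + 0.055)/1.055, 2.4);
  #end
  R
#end

// decode an <r,g,b> triplet to linear
#macro FromSRGB(V)
  <FromSRGB_c(V.x), FromSRGB_c(V.y), FromSRGB_c(V.z)>
#end

// usage: pigment { rgb FromSRGB(<0.75, 0.30, 0.10>) }

That keeps everything in plain rgb and makes the conversion explicit at the
point of use.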
"ingo" <nomail@nomail> wrote:
> The problem is I don't understand the problem,
So, it looks to me to be like this:
When POV-Ray uses an assumed_gamma that is <> 1, it funnels everything through a
function that translates all of the colors from a line to a curve.
Also, when images are encoded using a gamma <> 1, the same thing happens.
So if you're going to use byte-encoded (s)rgb values from a color picker, then
you're not going to be funneling them through any kind of software that "reads
the image in" - you're manually bypassing that. So you wind up having to
manually do the correction yourself with the srgb keyword to pull all of the
colors back into the linear space _of the pre-rendered SDL file_ before then
rendering your image with what may be a(nother) non-linear gamma <>1.
So if you just used the byte-encoded color values, you'd wind up applying gamma
adjustments twice - the one that is inherent in the sampled image color values,
and then the one that POV-Ray applies when you render the image with a gamma <>
1.
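To put rough numbers on that double adjustment (my own back-of-the-envelope
arithmetic, using the standard sRGB and 2.2 formulas, drop into any scene and
check the message pane):

#declare Picked  = 191/255;                           // ~0.75, an sRGB byte value from a colour picker
#declare Linear  = pow((Picked + 0.055)/1.055, 2.4);  // ~0.52, what srgb 191/255 stores internally,
                                                      //        so the output encoding brings it back to ~0.75
#declare Doubled = pow(Picked, 1/2.2);                // ~0.88, the same byte fed in as plain rgb and then
                                                      //        gamma-encoded a second time on output
#debug concat("decoded once: ", str(Linear, 0, 3),
              "   encoded twice: ", str(Doubled, 0, 3), "\n")
// a colour picked as 0.75 ends up displayed near 0.88: brighter and "washed out"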
Now, anyone familiar with film gamma knows that higher gamma values give you a
more "contrasty" look than a gamma=1 image. So think about what happens when
you think you're using srgb values, but you're using rgb values, and then
correct for a higher gamma to translate that into linear space --- you bend that
gamma curve in the _opposite_ direction - giving rise to that "washed out" look,
because you've _decreased_ the contrast of the color space.
So, I think what we may want to see are several things.
An RGB cube viewed from a corner to give that nice RGB hexagon.
(I think that having a function {} using hexagonal coordinates to just color a
hexagonal prism would be a nice little tool.)
A single image rendered as strips of increasing gamma, to show the differences.
Color strips and graphs to show the change in hue and brightness.
Do we have a mathematical way to express "gamma" or "contrast", given the full
range of color values in an image?
It's early, and the coffee is still sinking in, so I don't fully understand why
gamma gets applied to an image in the first place - unless it's just a way to
preserve the original gamma=1 color values for image-editing purposes.
- BW
"Bald Eagle" <cre### [at] netscapenet> wrote:
> It's early, and the coffee is still sinking in, so I don't fully understand why
> gamma gets applied to an image in the first place - unless it's just a way to
> preserve the original gamma=1 color values for image-editing purposes.
>
That is the core, or origin, of the problem. In the early days it seemed wise to
save files "gamma encoded" so they didn't need that processing every time an
image was put on screen. It saved clock cycles.
But when you operate on them you first have to go back to linear. But, but, we
didn't even know from what gamma we had to go back to linear... as no format
stored any metadata and Apple did it differently anyway. (PNM uses BT.709, but
now often srgb...). Even Adobe (Photoshop) did it wrong and happily operated on
non-linear data.
For me, in POV-Ray, for a single data point this srgb is a non-issue: "one
tweaks a colour anyway". For importing images as textures it is a different
issue, but there too, tweak the image data to your desire to fit the scene or a
proper reference. I wouldn't miss srgb.
ingo
Thanks Cousin Ricky, that's a really good and useful distillation of the rgb vs.
srgb multiplication rules. I shall refer to it often. Those rules are easy to
forget :-(
> [jr wrote:]
> so, the "take away" is to not mix RGB types and 's' variants in the
> same scene ?
I don't think it causes any technical problems 'under the hood'-- I sometimes
mix the two 'flavors' in *object* colors, depending on...whim ;-) But as a
*general* rule, it is probably not a good idea to mix one flavor in LIGHTS with
another flavor on objects-- only because the resulting object colors would be
somewhat unexpected. Consider the following combinations (while using
assumed_gamma 1.0):
the usual standard or typical way, no srgb:
light_source{ rgb <.3,.5,.7> }
object{ ... pigment{ rgb <.5,.3,.1> } }
vs.
light_source{ srgb <.3,.5,.7> } // srgb now
object{ ... pigment{ rgb <.5,.3,.1> } }
vs.
light_source{ srgb <.3,.5,.7> } // srgb now
object{ ... pigment{ srgb <.5,.3,.1> } } // srgb now
The object in each case will appear with a different brightness and a slightly
different hue...possibly not what you expect, given the unchanging triplet
values that you choose.
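Along those lines, here is a minimal sketch of a side-by-side test scene (my own
addition, not taken from the post above); it uses light_group so that each
sphere is lit only by its own light:

#version 3.7;
global_settings { assumed_gamma 1.0 }
camera { location <0, 1, -6> look_at <0, 1, 0> }

// 1: rgb light + rgb pigment
light_group {
  light_source { <-10, 10, -10>, rgb <.3,.5,.7> }
  sphere { <-2.5, 1, 0>, 1 pigment { rgb <.5,.3,.1> } }
}
// 2: srgb light + rgb pigment
light_group {
  light_source { <-10, 10, -10>, srgb <.3,.5,.7> }
  sphere { <0, 1, 0>, 1 pigment { rgb <.5,.3,.1> } }
}
// 3: srgb light + srgb pigment
light_group {
  light_source { <-10, 10, -10>, srgb <.3,.5,.7> }
  sphere { <2.5, 1, 0>, 1 pigment { srgb <.5,.3,.1> } }
}

One render then shows the brightness and hue shifts of all three combinations at
once.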
Personally, I like to use the 3rd combination...just a personal choice due to
familiarity with how colors in other graphics apps appear to me. No 'washed-out'
colors, in other words. I have never been very good at choosing plain rgb
triplets to get what I want.
From: Cousin Ricky
Subject: Re: Gamma and the sRGB Keywords in POV-Ray 3.7: a Tutorial
Date: 26 Apr 2024 13:55:04
Message: <662bea78$1@news.povray.org>
On 4/25/24 08:40 (-4), Bald Eagle wrote:
>
> So, it looks to me to be like this:
>
> When POV-Ray uses an assumed_gamma that is <> 1, it funnels everything through a
> function that translates all of the colors from a line to a curve.
>
> Also, when images are encoded using a gamma <> 1, the same thing happens.
>
> So if you're going to use byte-encoded (s)rgb values from a color picker, then
> you're not going to be funneling them through any kind of software that "reads
> the image in" - you're manually bypassing that. So you wind up having to
> manually do the correction yourself with the srgb keyword to pull all of the
> colors back into the linear space _of the pre-rendered SDL file_ before then
> rendering your image with what may be a(nother) non-linear gamma <>1.
> So if you just used the byte-encoded color values, you'd wind up applying gamma
> adjustments twice - the one that is inherent in the sampled image color values,
> and then the one that POV-Ray applies when you render the image with a gamma <>
> 1.
Sounds like you've got it.
> It's early, and the coffee is still sinking in, so I don't fully understand why
> gamma gets applied to an image in the first place - unless it's just a way to
> preserve the original gamma=1 color values for image-editing purposes.
I can't speak for the old pre-standardization days, but nowadays I
believe it's to minimize storage requirements. If the image is stored
with a linear format, our non-linear perception would lead to banding in
darker areas of the image, and storage overkill in the lighter areas.
Eliminating the banding while keeping a linear format would require
significantly more storage. By using a gamma that approximates our
perception, we shift the resolution balance towards the darker end, thus
reducing banding without increasing storage requirements.
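A rough way to put numbers on that (my own back-of-the-envelope sketch): compare
how much of the linear range the first 8-bit code value covers in a linear file
versus an sRGB-encoded one.

#declare V        = 1/255;        // first non-zero 8-bit code value
#declare LinStep  = V;            // linear file: first step spans ~0.0039 of linear range
#declare SrgbStep = V/12.92;      // sRGB file: that code value decodes to only ~0.0003 linear
#debug concat("linear-file step: ", str(LinStep, 0, 5),
              "   sRGB-file step: ", str(SrgbStep, 0, 5), "\n")
// the sRGB encoding spends roughly 13x finer steps near black,
// which is exactly where banding would otherwise show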
Note that HDR formats such as EXR can get away with linear storage,
because they use floating point rather than fixed-point integers.
hi,
"Kenneth" <kdw### [at] gmailcom> wrote:
> ...
> > so, the "take away" is to not mix RGB types and 's' variants in the
> > same scene ?
>
> I don't think it causes any technical problems 'under the hood'-- I sometimes
> mix the two 'flavors' in *object* colors, depending on...whim ;-) But as a
> *general* rule, it is probably not a good idea to mix one flavor in LIGHTS with
> another flavor on objects-- only because the resulting object colors would be
> somewhat unexpected.
ouch. exactly the habit I've fallen into, srgb light + rgb all else. </grin>
> Consider the following combinations ...
thank you very much for that. simple and effective.
> Personally, I like to use the 3rd combination...just a personal choice due to
> familiarity with how colors in other graphics apps appear to me. No 'washed-out'
> colors, in other words. I have never been very good at choosing plain rgb
> triplets to get what I want.
same here, I need a visual of the colour, usually. may try and address "the
habit".
@ingo, "I wouldn't miss srgb."
thanks for providing a "sideways perspective", appreciated.
regards, jr.