So, I'm tired, and my eyeballs hurt, so I sought out some
isobutylphenylpropionic acid and a gin & tonic.
Warp and clipka always went at it with gusto! :D
We should email Warp and drag him back in....
I think, if we're going to play with these things, and be able to reliably and
usefully understand WTH is going on, then some sort of visual correspondence
roadmap ought to be made.
Consider:
A box has a pigment with 2 image_maps, blended in a pigment_map, illuminated by
a light source, rendered by POV-Ray, displayed on a monitor, and seen by you.
Each of those things has a gamma associated with it.
My idea is to start with a point on the light source graph, draw a line to the
object graph, then the monitor, then the observer.
I'm almost over here crying with laughter, because I can just hear my girlfriend
now:
"And this.... ***THIS*** is what you do ..... for "fun" ???!"
Let the games begin.
=================================================================
The new tone-adjusting formulas are:
#declare SRGB_Encode = function (C, M) {
select (C-0.0031308, C*12.92*M, C*12.92*M, (1.055 * pow (C, 1/2.4) - 0.055)*M)
}
#declare SRGB_Decode = function (C, M) {
select (C-0.040449936, C/12.92, pow ((C+0.055)/1.055, 2.4)*M)
}
using M as a multiplier. Looks like it works correctly, even with M > 1.
Perhaps it's not "right", but it's the way I envisioned its usage.
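For anyone who wants to sanity-check these outside POV-Ray, here is a quick sketch of the same piecewise curves in Python (my own translation, not part of the scene code). The M handling mirrors the functions above, including the quirk that the decode only applies M on its power-law branch:

```python
def srgb_encode(c, m=1.0):
    # Linear channel value -> sRGB display signal, with the post-hoc multiplier M.
    if c <= 0.0031308:
        return c * 12.92 * m
    return (1.055 * c ** (1 / 2.4) - 0.055) * m

def srgb_decode(c, m=1.0):
    # sRGB display signal -> linear. Note: as in the POV-Ray version,
    # M is only applied on the power-law branch here.
    if c <= 0.040449936:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4 * m

# With M = 1 the pair should round-trip a channel value,
# on both the linear segment and the power-law segment.
for c in (0.001, 0.2, 0.5, 0.9):
    assert abs(srgb_decode(srgb_encode(c)) - c) < 1e-9
```

Note the decode threshold 0.040449936 is just the encode threshold 0.0031308 scaled by 12.92, so the two branch cuts line up.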
The gradients are made with the following syntax:
Note the gray in the interpolation across the 0 saturation region.
This is what Jerome was pointing out in that thread.
box {
    <0, 0, 0>, <1, 0.25, 0.01>
    pigment {
        gradient x
        pigment_map {
            blend_gamma 1
            blend_mode 2
            [0 rgb <1, 0, 1>]
            [1 rgb <0, 1, 0>]
        }
    }
    scale <6.5, 1, 1>
}
Attachments: 'colorconversionformulas_fromsource.png' (115 KB)
"Bald Eagle" <cre### [at] netscapenet> wrote:
>
> I'm almost over here crying with laughter, because I can just hear my girlfriend
> now:
> "And this.... ***THIS*** is what you do ..... for "fun" ???!"
>
Ha! I know the feeling; *everything* takes a back seat when I'm hunkered down
with POV-ray. As I keep explaining to friends, it's the only way to *learn*!
>
> The new tone-adjusting formulas are:
>
> #declare SRGB_Encode = function (C, M) {
> select (C-0.0031308, C*12.92*M, C*12.92*M, (1.055 * pow (C, 1/2.4) - 0.055)*M)
> }
>
> #declare SRGB_Decode = function (C, M) {
> select (C-0.040449936, C/12.92, pow ((C+0.055)/1.055, 2.4)*M)
> }
>
> using M as a multiplier. Looks like it works correctly, even with M > 1.
> Perhaps it's not "right", but it's the way I envisioned its usage.
>
That's...brilliant. As are your graphing results. Congrats on the hard work!
I've been re-reading that NVIDIA article that I mentioned; it has some really
astute observations to make. I'm paraphrasing quite a lot here, to try and make
it applicable to POV-ray; and I borrowed another illustration from there (posted
below):
"If the linear lighting values as seen in POV-ray's assumed_gamma 1.0
environment are properly encoded for the saved file, when you view the final
image file it should again look like a real object with those reflective
properties." In other words, by using the true RGB to SRGB conversion formula to
'encode' the file to preliminarily 'brighten' the image even more, before
sending it to the 'reverse' 2.2-gamma monitor, the end result will be linear
lighting again. Absolute realism (but only as good as it can be, of course, when
perceived on a typical non-HDR monitor.)
As in Image A, with its sharp shadow-terminator line.
"But if you were to display the assumed_gamma 1.0 image file on a gamma 2.2
monitor *without* pre-encoding the file with the formula, the image will
actually look darker." Or rather, with incorrect gamma interpretation.
As in Image B.
(And as *I* used to do in a different way in v3.6xx days-- using 'plain' RGB
colors and assumed_gamma 2.2; essentially the same result.)
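The 'cancellation' the article describes is easy to check numerically. A small Python sketch (my own illustration, with a plain 2.2 power standing in for the monitor):

```python
def srgb_encode(c):
    # Piecewise sRGB transfer (linear -> display signal).
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

# A gamma-2.2 monitor effectively raises the signal to the 2.2 power.
# Since the sRGB curve approximates the inverse (roughly gamma 1/2.2),
# the encode-then-display chain lands close to the original linear value.
for c in (0.05, 0.18, 0.5, 0.8):
    displayed = srgb_encode(c) ** 2.2
    assert abs(displayed - c) < 0.01  # near-linear, not exact
```

The residual error is small but nonzero, because sRGB is only approximately a 2.2 power law.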
"With more 'advanced' lighting techniques that you use (such as HDR, global
illumination of any kind, and subsurface scattering), the more critical it will
become to stick to a linear color space to match the linear calculations of your
sophisticated lighting."
Apparently, Image A *is* how we see things in the real world, with our eyes.
Which is of course combined with real-world 'radiosity' and fill light, so we
probably don't perceive things *exactly* that way, but close to it. Whereas,
Image B is what many of us *think* we should see-- probably based on how photos
and films of the real world look, at least with older film technology. I think
that was part of Warp's fundamental argument with Clipka, back in 2010 (and my
own argument then too.)
But digressing a bit, and getting back to fundamentals:
It seems to me that there are two 'schools of thought' when rendering in
POV-ray: the *absolute lighting realism* of the assumed_gamma 1.0 way-- Image
A-- and the 'photographically pleasing or expected' look of the assumed_gamma
2.2 (or gamma srgb) way, Image B...even with its 'incorrect' lighting
computations and gamma. (Incorrect as to absolute realism.) Such images
generally have higher contrast and richer colors. Did old photos and films
reproduce absolute 'linear light' realism? No. But they looked nice anyway. ;-)
I wouldn't call this an artistic choice; it's more of a visual expectation.
Personally, I'm *still* straddling the fence as to which scheme I personally
like, visually speaking; but I'll stick to assumed_gamma 1.0 and
'realistic/correct' lighting for now.
Btw, this is interesting:
In professional CGI environments, artists apparently work in a 'complete' and
rather austere assumed_gamma 1.0 world-- even when using image_maps and the
like:
"Any input textures that are already gamma-corrected [like JPEGs] need to be
brought back to a linear color space before they can be used for shading or
compositing. Ordinary JPEG files viewed with Web browsers and the like will look
washed out. Film studios don't care if random images on the Internet look wrong."
Wow.
Attachments: 'gamma_moon_images.jpg' (18 KB)
"Kenneth" <kdw### [at] gmailcom> wrote:
> Btw, this is interesting:
> In professional CGI environments, artists apparently work in a 'complete' and
> rather austere assumed_gamma 1.0 world-- even when using image_maps and the
> like:
Yes, and that's the gist of what I'm trying to figure out.
Because with multiple encodings and decodings, it's next to impossible to figure
out where some fundamental thing went wrong or how to fix it. Therefore you're
forever doing all of that artistic fiddling.
If you get a chance, you should check out Ansel Adams' 3-part series on The
Print, The Negative, and The Camera. Or just look up a good explanation of the
zone system. It uses film gamma as an explanation, but it's where I started.
I also forgot to throw assumed_gamma into the mix.
So there are all of those things playing off one another, and they all add up to
a result. The idea is to 1:1 graph it, or come up with some sort of test suite
that allows one to use an image or pattern and unambiguously verify if what one
is using is in linear color space. Then if the monitor converts it to sRGB to
display it, who cares - that's the expected end result anyway.
But we should be careful to not "double-convert" one way or the other.
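A double conversion is also easy to detect numerically, which might be a useful piece of such a test suite. A hypothetical Python check (mine, purely for illustration):

```python
def srgb_encode(c):
    # Piecewise sRGB transfer (linear -> display signal).
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Encoding an already-encoded value brightens midtones dramatically,
# so rendering a known reference value (18% gray here) and comparing
# against the expected single-encoded result flags a double conversion.
once = srgb_encode(0.18)   # roughly 0.46
twice = srgb_encode(once)  # roughly 0.71 -- visibly washed out
assert twice - once > 0.2
```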
From: Alain Martel
Subject: Re: Stock colors and assumed_gamma 1 in POV-Ray 3.6
Date: 20 Oct 2020 14:32:20
Message: <5f8f2d34$1@news.povray.org>
> Apparently, Image A *is* how we see things in the real world, with our eyes.
> Which is of course combined with real-world 'radiosity' and fill light, so we
> probably don't perceive things *exactly* that way, but close to it. Whereas,
> Image B is what many of us *think* we should see-- probably based on how photos
> and films of the real world look, at least with older film technology. I think
> that was part of Warp's fundamental argument with Clipka, back in 2010 (and my
> own argument then too.)
>
When looking at the Moon, what I see is more like A, but with the dark
part similar to B.
When the object is MUCH closer, like a nearby concrete ball, then it's A
all the way.
"Bald Eagle" <cre### [at] netscapenet> wrote:
> So, I'm tired, and my eyeballs hurt, so I sought out some
> isobutylphenylpropionic acid and a gin & tonic.
>
> Warp and clipka always went at it with gusto! :D
> We should email Warp and drag him back in....
>
> I think, if we're going to play with these things, and be able to reliably and
> usefully understand WTH is going on, then some sort of visual correspondence
> roadmap ought to be made.
>
> Consider:
>
> A box has a pigment with 2 image_maps, blended in a pigment_map, illuminated by
> a light source, rendered by POV-Ray, displayed on a monitor, and seen by you.
> ...
I'm afraid that we also need to take into account any changes made by the driver
for the graphics card, the driver for the monitor, and perhaps the operating
system.
--
Tor Olav
http://subcube.com
https://github.com/t-o-k
From: Cousin Ricky
Subject: Re: Stock colors and assumed_gamma 1 in POV-Ray 3.6
Date: 20 Oct 2020 20:26:46
Message: <5f8f8046$1@news.povray.org>
On 2020-10-20 2:32 PM (-4), Alain Martel wrote:
>>
> When looking at the Moon, what I see is more like A, but with the dark
> part similar to B.
> When the object is MUCH closer, like a close by concrete ball, then,
> it's A all the way.
That's because there is very little back lighting in outer space, unless
the Moon is in a crescent phase.
"Tor Olav Kristensen" <tor### [at] TOBEREMOVEDgmailcom> wrote:
[Bald Eagle wrote:]
> >
> > Consider:
> >
> > A box has a pigment with 2 image_maps, blended in a pigment_map, illuminated by
> > a light source, rendered by POV-Ray, displayed on a monitor, and seen by you.
> > ...
>
> I'm afraid that we also need to take into account any changes made by the driver
> for the graphics card, the driver for the monitor, and perhaps the operating
> system.
>
Yes-- and I am currently grappling with such changes, on my Win 7 computer
(built-in video card) and my cheap-o 'TV monitor': basically, increased
orange/yellow color-intensity in any 'saved' image file, from any source. And
slightly darkened image files from POV-ray. It is either due to the monitor
itself, or I am using the wrong ICC profile in the computer (currently sRGB,
which I thought would be correct.) In any case, the only *reliable* viewing
environments that I completely trust are 1) the POV-ray preview window, and 2)
Ive's IC/Lilysoft image-viewing app. Everything else is skewed.
A new *real* computer monitor would certainly help...unless the problem is
somewhere in the computer itself.
I miss my old and trustworthy CRT monitor.
"Bald Eagle" <cre### [at] netscapenet> wrote:
> In the same way that the graph shows arrows representing conversions between
> the different curves, I'd like it if we could have a collection macros that
> do the same thing - with a switch to turn on some text output to the debug
> stream describing the inputs and results. "Converting linear rgb <r, g, b>
> to sRGB <sr, sg, sb>..." or some such thing.
....
> I would also like to have a way to do what clipka was cautioning me
> about - where a palette of srgb colors could be lightened or darkened using
> a multiplier, and the math would be correct to do it in the proper color
> space.
>
> The new tone-adjusting formulas are:
>
> #declare SRGB_Encode = function (C, M) {
> select (C-0.0031308, C*12.92*M, C*12.92*M, (1.055 * pow (C, 1/2.4) - 0.055)*M)
> }
>
> #declare SRGB_Decode = function (C, M) {
> select (C-0.040449936, C/12.92, pow ((C+0.055)/1.055, 2.4)*M)
> }
>
> using M as a multiplier. Looks like it works correctly, even with M > 1.
> Perhaps it's not "right", but it's the way I envisioned its usage.
>
I took your advice-- I'm currently working on a complex test scene, taking into
account assumed_gamma, and rgb/srgb colors for both objects AND light sources--
because I personally want to form an opinion about whether or not to use srgb
colors for lights as well, simply to get the color I 'visually' expect as
opposed to 'linear' rgb colors that are intrinsically washed-out in an
assumed_gamma 1.0 environment. I know that Clipka recommended we stick with
'rgb' there, but my test scene will compare the difference.
And I plan to use your excellent conversion functions; it looks like you figured
out how to 'correctly' multiply srgb colors with an M, which is a more complex
situation than multiplying simple rgb colors. Your coding skills are more
sophisticated than mine, so I'll take your word for it ;-)
Can I assume that, if I choose *not* to use the M-multiplier in your functions
(for my simpler test scene), I could just change the function like so? and with
C being a typical 3-part color vector like <.3,.6,.9>)?:
#declare SRGB_Encode = function (C) {
select (C-0.0031308, C*12.92, C*12.92, (1.055 * pow (C, 1/2.4) - 0.055))
The reason being, that I might restrict any color values to between 0.0 and 1.0,
for a better 'basic' test of things. (Although, I might change my mind.)
When my test scene is complete and working properly, I'll post it elsewhere.
"Kenneth" <kdw### [at] gmailcom> wrote:
> I took your advice-- I'm currently working on a complex test scene, taking into
> account assumed_gamma, and rgb/srgb colors for both objects AND light sources--
> because I personally want to form an opinion about whether or not to use srgb
> colors for lights as well, simply to get the color I 'visually' expect as
> opposed to 'linear' rgb colors that are intrinsically washed-out in an
> assumed_gamma 1.0 environment. I know that Clipka recommended we stick with
> 'rgb' there, but my test scene will compare the difference.
Yes, the washed-out thing is very off-putting.
In situations where there is no readily-available and easily-understood
documentation that adequately explains how all of the pieces influence the final
result, I think it's just as important to understand what _doesn't_ work (and
why ) as it is to be aware of what does work.
In the absence of any "proper" way to achieve an end result, the Warp et al
approach of using assumed_gamma 2.2 is both understandable and unavoidable.
However, if there _exists_ a "proper" method by which to achieve the exact same
results, then it's important to people who spend a lot of time using the system
to be proficient with that, even though there may be specific instances where
they purposefully don't. (In this instance, using the "proper" gamma for all of
the reasons that clipka outlined involving lighting, monitors, highlights and
shadows, etc.)
> Can I assume that, if I choose *not* to use the M-multiplier in your functions
> (for my simpler test scene), I could just change the function like so? and with
> C being a typical 3-part color vector like <.3,.6,.9>)?:
A multiplier of 1 is ... no multiplier at all. ;)
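One caveat worth adding (my reading, so treat it as an assumption): POV-Ray float functions take scalar arguments, so C would be a single channel value rather than a 3-part vector like <.3,.6,.9>, and the function gets applied once per channel. In Python terms:

```python
def srgb_encode(c, m=1.0):
    # Same curve as the functions above; with M = 1 it reduces
    # to the plain sRGB encode.
    if c <= 0.0031308:
        return c * 12.92 * m
    return (1.055 * c ** (1 / 2.4) - 0.055) * m

# A color is converted one channel at a time, not as a whole vector.
color = (0.3, 0.6, 0.9)
encoded = tuple(srgb_encode(c) for c in color)
```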
> "Bald Eagle" <cre### [at] netscapenet> wrote:
> >
> > The new tone-adjusting formulas are:
> >
> > #declare SRGB_Encode = function (C, M) {
> > select (C-0.0031308, C*12.92*M, C*12.92*M, (1.055 * pow (C, 1/2.4) - 0.055)*M)
> > }
> >
> > #declare SRGB_Decode = function (C, M) {
> > select (C-0.040449936, C/12.92, pow ((C+0.055)/1.055, 2.4)*M)
> > }
> >
> > using M as a multiplier. Looks like it works correctly, even with M > 1.
> > Perhaps it's not "right", but it's the way I envisioned its usage.
> >
>
[off-topic, sort of...]
Here's some food for thought, which just occurred to me-- not about your
functions, but about the conversion formulae themselves (Wikipedia and
elsewhere):
The conversion formulae use a "2.4 gamma" in the computations-- whereas, in
POV-ray, and running antialiasing in a scene, Clipka chose a "2.5 gamma" for the
antialiasing gamma (this in v3.8xx). Maybe I'm cluelessly 'comparing apples to
oranges', but I wonder why Clipka didn't choose "2.4 gamma" instead? I suppose
there *is* a reason, but... :-O
Just something to keep you and me up at nights, wondering about the
difference... ;-)