If I remember correctly, Nathan said that the rendered image exists in
floating point format before it is converted to the final image
format. Would it be possible to gain access to the floating point
values? I'd like to be able to specify my own formula for converting
the floating point value to an 8-bit integer.
Why? I want to alter the linearity of the translation from floating
point to integer. If you look at a plot of photographic film's response
to light, it isn't linear. There is a certain threshold that must be
achieved before any image is recorded. Above that, the response is
somewhat linear, until we get to the brightest exposure levels. In the
highlights, increasing the exposure has less and less of an effect.
In other words, the darker picture elements are expanded in dynamic
range (higher contrast), and the brighter elements are compressed in
dynamic range (lower contrast). One of the effects of this would be
the ability to render light sources and not have them "wash out", or
lose detail, quite so easily. I think that adjusting the response
curve to approximate film would also make the image look more
realistic in general.
I'm not sure yet if the human eye responds to light intensity values
in a similar non-linear way, but I think it does. I know that human
hearing is non-linear in its perception of dynamic range, but I can't
say for sure about human vision yet.
If anyone needs more of an explanation of what I'm talking about, I'll
be more than happy to scan some charts for you and explain further.
Thanks,
Glen Berry
I really like this kind of idea. Nicely explained.
This sort of thing is what makes getting great skies with foreground
subjects so difficult when taking a photograph. Fill lighting and filters
are what help to do that instead of being able to just snap a picture.
I would think human vision does some of this too.
Even though the pupil controls most of the light reaching the "emulsion"
of the retina, I'd guess the eye also shuts down or boosts its sensitivity
at the back of the eye to some extent.
Speaking of this, the place I went today for the Easter picnic was a campground
which has a large cave. At one point the tour guide shut off all the lights,
and you couldn't see anything but the remnants of what was previously seen.
Just a little after the lights were out I was waving my hand in front of my face
proclaiming I could see it and I was promptly ignored.
Really though, I saw not a dim hand in front of my eyes but instead a totally
black silhouette of it, as though the cave had light in it at imperceptible
levels. I turned around just afterward and noticed an ever-brightening bluish
glow from a small spot. It was the place we exited to after the lights were
back on.
Anyway, back to POV ;-) the way a person gets non-linear changes now
is to use ambient and diffuse, pigment color and gamma in various
relations, I suppose. The way you suggest sounds intriguing, even if I can't
understand how anyone would go about it.
Bob
"Glen Berry" <7no### [at] ezwvcom> wrote in message
news:TUgDOeHcwtbl7VCZwbG2XphrPOQ2@4ax.com...
In article <TUgDOeHcwtbl7VCZwbG2XphrPOQ2@4ax.com>, Glen Berry
<7no### [at] ezwvcom> wrote:
> If I remember correctly, Nathan said that the rendered image exists in
> floating point format before it is converted to the final image
> format. Would it be possible to gain access to the floating point
> values? I'd like to be able to specify my own formula for converting
> the floating point value to an 8bit integer.
This shouldn't be hard... I think all of the post-processing is done
before the final clipping to the 0-1 range. (Don't take that as reliable
information, though; I am just guessing.)
> Why? I want to alter the linearity of the translation from floating
> point to integer. If you look at a plot of photographic film's response
> to light, it isn't linear. There is a certain threshold that must be
> achieved before any image is recorded. Above that, the response is
> somewhat linear, until we get to the brightest exposure levels. In the
> highlights, increasing the exposure has less and less of an effect.
>
> In other words, the darker picture elements are expanded in dynamic
> range (higher contrast), and the brighter elements are compressed in
> dynamic range (lower contrast). One of the effects of this would be
> the ability to render light sources and not have them "wash out", or
> lose detail, quite so easily. I think that adjusting the response
> curve to approximate film would also make the image look more
> realistic in general.
I think a better way would be to implement one or several "film type"
post-processing filters which cover physical models as well as computer
effects... you could have "standard computer graphics" (the current type),
various color films, black and white, antique sepia tone, light
intensity, etc.
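To make the preset idea concrete, here is one hypothetical way such "film types" could be organized internally: a dispatch table of per-pixel transfer functions on the floating-point color. Everything here (the names, the enum, the curves) is invented for illustration and is not the actual post_process syntax or code:

```c
/* Hypothetical "film type" presets as a dispatch table -- illustrative
 * only, not the actual post_process implementation. Each preset is a
 * per-pixel transfer function applied to floating-point color. */
typedef struct { double r, g, b; } Color;
typedef Color (*FilmCurve)(Color);

static double clamp01(double v) { return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v); }

/* "standard computer graphics": plain clipping, the current behavior */
static Color film_standard(Color c)
{
    return (Color){ clamp01(c.r), clamp01(c.g), clamp01(c.b) };
}

/* black and white: weighted luminance copied to all three channels */
static Color film_bw(Color c)
{
    double y = clamp01(0.299 * c.r + 0.587 * c.g + 0.114 * c.b);
    return (Color){ y, y, y };
}

/* negative: invert each clipped channel */
static Color film_negative(Color c)
{
    return (Color){ 1.0 - clamp01(c.r), 1.0 - clamp01(c.g), 1.0 - clamp01(c.b) };
}

enum FilmType { FILM_STANDARD, FILM_BW, FILM_NEGATIVE, FILM_COUNT };
static const FilmCurve film_presets[FILM_COUNT] = {
    film_standard, film_bw, film_negative
};
```

Adding a new preset would then just mean writing one more curve function and extending the table, which fits the "one or several film types" framing above.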
Also, I think the post_process stuff really belongs in the camera, not
global_settings. This is just personal preference, though...
--
Christopher James Huff - Personal e-mail: chr### [at] yahoocom
TAG(Technical Assistance Group) e-mail: chr### [at] tagpovrayorg
Personal Web page: http://chrishuff.dhs.org/
TAG Web page: http://tag.povray.org/
Chris Huff wrote:
> I think a better way would be to implement one or several "film type"
> post processing filters which cover physical models as well as computer
> effects...you could have "standard computer graphics"(the current type),
> various color films, black and white, antique sepia tone, light
> intensity, etc.
Don't you think that encapsulating the code (behind a system call) would be
better?
And what if I want to program in BASIC or Pascal or whatever...
I think that you might want to add some features directly in MegaPOV's
code, but still offer the opportunity to use your own post-processing
stuff on a single image, instead of writing a patch for one job...
> Also, I think the post_process stuff really belongs in the camera, not
> global_settings. This is just personal preference, though...
Well... it'd fit better there in most ways of thinking, but if you see
it in a hierarchical way, where the camera is part of the scene, it'd
belong in the global settings, I think...
--
AKA paul_virak_khuong at yahoo.com, pkhuong at deja.com, pkhuong at
crosswinds.net and pkhuong at technologist.com(list not complete)...
On Sun, 23 Apr 2000 15:18:32 -0400, Glen Berry <7no### [at] ezwvcom>
wrote:
>In other words, the darker picture elements are expanded in dynamic
>range (higher contrast), and the brighter elements are compressed in
>dynamic range (lower contrast).
Sorry, I made a mistake there. Both ends of the response curve are
typically compressed, or lower in contrast. Black and near-black
values are brought closer together. Likewise, white and near-white
values are also brought closer together.
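The corrected description — lower contrast at both ends of the range, higher in the midtones — is exactly what a classic S-curve does. A tiny illustrative sketch (the smoothstep polynomial, chosen here as an example, not taken from any patch):

```c
/* Illustrative S-curve (the smoothstep polynomial): contrast is reduced
 * near 0.0 and near 1.0 and increased in the midtones, so both ends of
 * the range are compressed, as described above.
 * Input is assumed to be pre-clipped to [0, 1]. */
double s_curve(double v)
{
    return v * v * (3.0 - 2.0 * v);
}
```

The slope of this curve is 6v(1 - v): it falls to zero at both ends (near-black and near-white values bunch together) and peaks at 1.5 at v = 0.5 (midtone contrast expands).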
Later,
Glen Berry
On Sun, 23 Apr 2000 15:18:32 -0400, Glen Berry <7no### [at] ezwvcom>
wrote:
>If anyone needs more of an explanation of what I'm talking about, I'll
>be more than happy to scan some charts for you and explain further.
Glen,
there is a patch by Darren Scott Wilson which does just what you
suggest. He calls it "The Infinite Light Patch". You can find Darren's
addy in the MegaPOV docs (he also wrote the dispersion patch). Perhaps
he can tell you more on the subject.
Peter Popov ICQ : 15002700
Personal e-mail : pet### [at] usanet
TAG e-mail : pet### [at] tagpovrayorg
On Mon, 24 Apr 2000 03:11:16 +0300, Peter Popov <pet### [at] usanet>
wrote:
>there is a patch by Darren Scott Wilson which does just what you
>suggest. He calls it "The Infinite Light Patch". You can find Darren's
>addy in the MegaPOV docs (he also wrote the dispersion patch). Perhaps
>he can tell you more on the subject.
Unfortunately, Darren's webpage detailing the "Infinite Light Patch"
and his "Dispersion Patch" is no longer online. He has two websites,
and I visited them both. However, the link to the aforementioned web
page does not work.
I had seen the "Infinite Light Patch" in the past, and recall that it
attempted to solve at least some of the same issues, but I don't
remember it in enough detail to comment on it properly. At the time, I
had the impression that his implementation wasn't modeled in a
physically accurate manner, but I could easily be mistaken. I'm also
not sure if he had effectively modeled a film response curve, or if he
was simply working on preventing highlights from "blowing out." I
suppose I'll be emailing him soon to find out more.
If taken to its limit, I think my idea is a bit more ambitious and
versatile.
Later,
Glen Berry
On Sun, 23 Apr 2000 17:39:23 -0500, Chris Huff
<chr### [at] yahoocom> wrote:
>I think a better way would be to implement one or several "film type"
>post processing filters which cover physical models as well as computer
>effects...you could have "standard computer graphics"(the current type),
>various color films, black and white, antique sepia tone, light
>intensity, etc.
This sounds like the idea I have, except you are suggesting creating
some "presets" to cover the more common applications.
My primary wish was to roughly simulate a film's response curve,
ignoring color values. All three color channels would be manipulated
according to the same formula.
If we want to get more advanced (and I think we should), we could
process each channel independently, and achieve a much broader range
of effects, including:

1. Blue, Gold, Selenium, Sepia, or Split Toning (and many others)
2. Reciprocity Failure Simulation
3. Solarization
4. Posterization
5. Conversion to a Negative
6. Simulation of Antiquated or Alternative Photographic Processes
7. Simulation of Specific Film Stocks (the most ambitious goal)
Note: To obtain the toning effects (normally a B&W photographic
process), we would need to first convert the full-color RGB image into
a monochromatic RGB image. Then we simply alter the curves of the RGB
channels independently. While a typical POV image can currently be
manipulated like this in an image editor, it would be better to
perform such level manipulation on the raw floating point values, to
better preserve the dynamic range of the original scene.
As always, I'll be more than happy to provide more details,
explanations, and even some images to illustrate any point that might
be unclear to anyone.
Later,
Glen Berry
On Sun, 23 Apr 2000 20:42:17 -0400, Glen Berry <7no### [at] ezwvcom>
wrote:
>If taken to its limit, I think my idea is a bit more ambitious and
>versatile.
This I admit. Good luck in this endeavor!
Peter Popov ICQ : 15002700
Personal e-mail : pet### [at] usanet
TAG e-mail : pet### [at] tagpovrayorg
In article <390380D7.3DF36E34@videotron.ca>, pk <thi### [at] videotronca>
wrote:
> Don't you think that encapsulating(system call) the code would be
> better?
> And, what if i want to program in basic or pascal or whatever...
> I think that you might want to add some feature directly in MegaPOV's
> code, but still offer the opportunity to use your own post-processing
> stuff on a single image, instead of doing a patch for one job...
Uh, what? I don't understand what you mean... I don't think you are
saying POV should have a built-in C/Pascal/BASIC interpreter/compiler so
you could write post-process filters with those, but I don't see what
else you could mean...
POV is written in C, but that language doesn't have anything to do with
POV syntax. The syntax of these "film types" has already been designed
and implemented, it would just be another post_process filter. And the
code is probably pretty well encapsulated already, I think most of the
post_process stuff is in postproc.c and postproc.h. And what do you mean
by "system call"?
Are you trying to say that these filters should be implemented as
separate programs written in any language that are called by POV? I
doubt this is possible to do in a cross-platform way... and how would
it be an improvement over the current post_process patch? The
source is available already; you could just modify that. The C code
should be easy enough to copy and modify, even if you don't completely
understand it. And if you insist on writing a filter in a language other
than C, or don't want to put it in the POV-Ray program, you might as
well make it a stand-alone utility.
> > Also, I think the post_process stuff really belongs in the camera, not
> > global_settings. This is just personal preference, though...
> Well... it'd fit better there in most ways of thinking, but if you see
> it in a hierarchical way, where the cam is part of the scene, it'd
> belong in the global settings, i think....
If you viewed the scene that way, the camera should also be in
global_settings... which, of course, it isn't.
It just seems more logical to make things that directly affect the
output image part of the camera statement, rather than global_settings
(which seems to be intended more for settings than for specification of
special effects).
--
Christopher James Huff - Personal e-mail: chr### [at] yahoocom
TAG(Technical Assistance Group) e-mail: chr### [at] tagpovrayorg
Personal Web page: http://chrishuff.dhs.org/
TAG Web page: http://tag.povray.org/