"pierre" <pie### [at] efpg inpg fr> wrote in message
news:web.44eeab388da26955ad334ed60@news.povray.org...
> ... snip ...
> It looks very strange to me, that it is not possible to re-create a
> virtual goniometre.
Hi again Pierre,
It's because there's no real light in a raytracer. The image is produced in
two steps: the scene file is first parsed and then rendered. Only when the
scene is rendered are 'rays' sent out from the camera to work out what colour
and intensity each pixel should be. The renderer therefore couldn't work out
how much 'light' would enter a theoretical pinhole camera until the render is
complete, which would be too late for you to use that information in any
calculations that take place while the scene file is being parsed.
> ... snip ...
> As I am working on textile texture and gloss, I want to compare
> measurements to a simulated surface. I hope it is gonna be possible using
> povray.
>
POV-Ray uses a conceptual pinhole camera, so it doesn't emulate all of the
complex effects of having a big, heavy array of lenses on the front of a real
camera (such as lens diameter, which affects the amount of light hitting the
film, or arm ache, which affects the jitter), so you won't necessarily be able
to get precise matches to photos or to light-measuring devices in the real
world. That's where the artistic side comes in: POV-Ray artists simulate
effects that improve the realism of the rendered image (like lens flare from
extremely bright points).
> ->to Trevor
> I have no clue on Megapov. I am gonna look at it. But perhaps you could
> explain the main advantage to use HDR image output/input?
>
Trevor pointed out that the pixel values stored in an image generated by
POV-Ray are clipped, so very strong points of light (such as the intense glow
you may get off a small area of glossy fibre) would be reduced to a plain
white pixel. This would mean that averaging the pixels would give you an
artificially low value. MegaPOV is a build of POV-Ray that includes a lot of
extra code that people have contributed, some of which may find its way into
POV-Ray in time and some of which is experimental in nature. High Dynamic
Range (HDR) images can store a much wider range of values for each pixel, so
by generating this type of image you would reduce the clipping problem that
would otherwise affect the value you calculate for the total light in your
image.
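To see why clipping skews an average, here's a small sketch (plain Python, not POV-Ray code) with made-up pixel values in linear light units; the value 6.0 stands in for a bright specular glint such as the glow off a glossy fibre:

```python
# Hypothetical linear pixel values; 6.0 is a strong highlight.
pixels = [0.2, 0.3, 0.25, 6.0, 0.28]

# True average brightness, as an HDR image could preserve it:
true_mean = sum(pixels) / len(pixels)

# A clipped (low-dynamic-range) image stores at most 1.0 per channel,
# so the highlight is flattened to plain white:
clipped = [min(p, 1.0) for p in pixels]
clipped_mean = sum(clipped) / len(clipped)

print(true_mean)     # 1.406
print(clipped_mean)  # 0.406 -- artificially low, as described above
```

The single clipped highlight drags the measured average well below the true one, which is exactly the error that would creep into a total-light calculation done from a clipped image.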
Regards,
Chris B.