POV-Ray : Newsgroups : povray.advanced-users : how can i do this?
  how can i do this? (Message 1 to 9 of 9)  
From: amar
Subject: how can i do this?
Date: 18 Mar 2006 19:00:00
Message: <web.441c9e8c6ad875f6c46bf1ef0@news.povray.org>
Hello everybody, I have a question: if my camera is located at, say,
C[x,y,z], is it possible to measure the amount of light that is falling on
the camera? Is there any way to calculate it based on the pixels within the
camera's view?

I would be very much thankful to anyone who could help me out.

Thank you in advance.

Cheers,
amar



From: Tek
Subject: Re: how can i do this?
Date: 21 Mar 2006 15:31:31
Message: <442062a3$1@news.povray.org>
Can't you just render a 360-degree view and average that? You'd need to
scale the brightness of your scene so that none of the pixels saturate, and
you'd need to compensate for the distortion of the lens when you average the
colour (otherwise the stretched areas will contribute more to the average
than the less-stretched ones), so weight each pixel's colour by the angular
area (solid angle) that pixel represents...
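The solid-angle weighting described above can be sketched in a few lines of Python/NumPy. This is a sketch under assumptions, not anything from the thread: it assumes the 360-degree render is an equirectangular (latitude/longitude) image already loaded as a float array, where each pixel's solid angle is proportional to the cosine of its latitude:

```python
import numpy as np

def average_radiance(img):
    """Solid-angle-weighted mean colour of an equirectangular 360-degree
    render.  img is a float array of shape (H, W, 3) whose rows run from
    latitude +90 degrees (top) to -90 degrees (bottom).  A pixel's solid
    angle is proportional to cos(latitude), so the stretched polar rows
    are weighted down instead of dominating a plain pixel mean."""
    h, w, _ = img.shape
    # latitude at each row centre: +pi/2 near the top edge, -pi/2 near the bottom
    lat = (0.5 - (np.arange(h) + 0.5) / h) * np.pi
    weights = np.cos(lat)                        # per-row solid-angle weight
    weighted = (img * weights[:, None, None]).sum(axis=(0, 1))
    return weighted / (weights.sum() * w)
```

With this weighting, a uniform grey sphere averages to exactly its grey value, and a bright band at the equator contributes far more than an equally bright band at a pole, as it should.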

An alternative scheme would be to use MegaPOV's function camera and set up
a function so that each pixel is sampled in a random direction, then just
find the average colour of that.

But it does beg the question: why do you want to do that?

-- 
Tek
http://evilsuperbrain.com

<amar> wrote in message 
news:web.441c9e8c6ad875f6c46bf1ef0@news.povray.org...
> ... snip ...


Post a reply to this message

From: amar
Subject: Re: how can i do this?
Date: 23 Mar 2006 15:45:01
Message: <web.4423084f8da26955c46bf1ef0@news.povray.org>
"Tek" <tek### [at] evilsuperbraincom> wrote:
> ... snip ...

Hi... thanks a lot for answering my question. I need to do this for
illumination analysis: most of the software packages meant for it are
pretty costly, so I thought of doing it this way. Ultimately I want to
measure the light intensity at any given point in my scene.



From: pierre
Subject: Re: how can i do this?
Date: 24 Aug 2006 09:45:00
Message: <web.44edac778da26955ad334ed60@news.povray.org>
"amar" <amar> wrote:
> ... snip ...

Hi everybody,

I have just posted a message asking exactly the same question. I hope that
somebody will answer...
Or perhaps amar found a way to do it?
Thanks
pierre



From: Chris B
Subject: Re: how can i do this?
Date: 24 Aug 2006 12:09:33
Message: <44edcf3d$1@news.povray.org>
"pierre" <pie### [at] efpginpgfr> wrote in message 
news:web.44edac778da26955ad334ed60@news.povray.org...
> ... snip ...

Hi Pierre,

It doesn't look like anybody rushed to answer it when it was posted, so I'll 
take a pop at it. Maybe that will help stimulate ideas or debate.

I don't know of any way of measuring the light that would be recorded by the 
camera during the render.
However, once you have the rendered bitmap you could read the values for 
each pixel and average them, which would give you an approximation of the 
amount of light that the camera captured.

You could probably do this post-rendering step using POV-Ray, by defining 
the image as a pigment and using the eval_pigment function to find the 
values of each pixel. You could then display the total or average value in 
the message stream or you could use the value on a second pass at rendering 
the scene (e.g. turn on another light if the first pass was too dull).
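The post-rendering averaging step itself is tiny in any array language. As a minimal sketch (an alternative to looping over eval_pigment samples inside POV-Ray; the function name and the assumption that the frame is already loaded as an H x W x 3 float array in 0..1 are mine, not from the thread):

```python
import numpy as np

def mean_brightness(pixels):
    """Average the pixel values of a rendered frame, given as an
    H x W x 3 float array with values in 0..1.  Returns the per-channel
    means plus a single scalar 'light captured' value.  Note that if
    anything in the render clipped at 1.0, this underestimates."""
    mean_rgb = pixels.mean(axis=(0, 1))
    return mean_rgb, float(mean_rgb.mean())

# e.g. a frame that is half black, half white averages to 0.5 grey
frame = np.zeros((64, 64, 3))
frame[:, 32:] = 1.0
rgb, scalar = mean_brightness(frame)
```

The scalar could then drive a second rendering pass, e.g. switching on an extra light source when it falls below some threshold.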

Regards,
Chris B.



From: Trevor G Quayle
Subject: Re: how can i do this?
Date: 24 Aug 2006 14:55:00
Message: <web.44edf5f18da26955c150d4c10@news.povray.org>
"Chris B" <c_b### [at] btconnectcomnospam> wrote:
> I don't know of any way of measuring the light that would be recorded by the
> camera during the render.
> However, once you have the rendered bitmap you could read the values for
> each pixel and average them, which would give you an approximation of the
> amount of light that the camera captured.
> ... snip ...

This may be the best approach; however, the resulting image will be
low dynamic range, i.e., values greater than 1 will be clipped, giving you
a false result.  Perhaps use this approach, but with the HDR image
output/input feature of MegaPOV.  (I do something along these lines,
somewhat successfully, for the HDRI environment-lighting macro I've been
working on for a while.)

-tgq



From: pierre
Subject: Re: how can i do this?
Date: 25 Aug 2006 03:50:00
Message: <web.44eeab388da26955ad334ed60@news.povray.org>
Thanks for your answers! It is very nice to meet helpful people!

I was thinking of using the method proposed by Chris. There are still
technical problems that I have to overcome, but it is doable! I guess it
would be faster to create the image in POV-Ray and to analyse each pixel in
Matlab.

It seems very strange to me that it is not possible to re-create a virtual
goniometer. As I am working on textile texture and gloss, I want to compare
measurements to a simulated surface. I hope it is going to be possible
using POV-Ray.

-> to Trevor
I have no clue about MegaPOV. I am going to look at it. But perhaps you
could explain the main advantage of using HDR image output/input?

Anyway, I will try to find something and I will let you know the result.
Greetings,
Pierre


"Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> ... snip ...


From: Chris B
Subject: Re: how can i do this?
Date: 25 Aug 2006 06:53:48
Message: <44eed6bc$1@news.povray.org>
"pierre" <pie### [at] efpginpgfr> wrote in message 
news:web.44eeab388da26955ad334ed60@news.povray.org...
> ... snip ...
> It looks very strange to me that it is not possible to re-create a virtual
> goniometer.

Hi again Pierre,

It's because there's no real light. It's simulated in two steps: the scene
file is first parsed and then rendered. It's only when the scene gets
rendered that the 'rays' are sent out from the camera to work out what
colour and intensity each pixel should be. The renderer therefore couldn't
work out how much 'light' would enter the theoretical pinhole camera until
the render is complete, which would be too late for you to use that
information in any calculations that take place during the parsing of the
scene file.

> ... snip ...
> As I am working on textile texture and gloss, I want to compare
> measurements to a simulated surface. I hope it is gonna be possible using
> povray.
>

POV-Ray uses a conceptual pinhole camera, so it doesn't emulate all of the
complex effects of having a big heavy array of lenses on the front of a
real camera (like lens diameter, which affects the amount of light hitting
the film, and arm ache, which affects the jitter), so you won't necessarily
be able to get precise matches to photos or to light-measuring devices in
the real world. That's where the artistic side comes in: POV-Ray artists
are able to simulate effects that improve the realism of the rendered image
(like lens flare from extremely bright points).

> -> to Trevor
> I have no clue about MegaPOV. I am going to look at it. But perhaps you
> could explain the main advantage of using HDR image output/input?
>

Trevor pointed out that the pixel values stored in an image generated by
POV-Ray are clipped, so very strong points of light (such as the intense
glow you may get off a small area of glossy fibre) would be reduced to a
plain white pixel. This would mean that averaging the pixels would give
you an artificially low value. MegaPOV is a build of POV-Ray that includes
a lot of extra code that people have contributed, some of which may find
its way into POV-Ray in time and some of which is experimental in nature.
High dynamic range images can store a much wider range of values for each
pixel, and by generating this type of image you would reduce the clipping
problem that would affect the value you calculate for the total light in
your image.

Regards,
Chris B.



From: Trevor G Quayle
Subject: Re: how can i do this?
Date: 25 Aug 2006 08:45:01
Message: <web.44eef0ad8da26955c150d4c10@news.povray.org>
"pierre" <pie### [at] efpginpgfr> wrote:
> -> to Trevor
> I have no clue about MegaPOV. I am going to look at it. But perhaps you
> could explain the main advantage of using HDR image output/input?

When most image types are written/displayed, the colour values range from 0
to 1. However, in real life, light brightness is by no means limited to 1
(compare the sun to a light bulb to a white piece of paper); in fact it can
be substantially more, relatively speaking.  So when photos (or, equally,
rendered images) are written, the colours of super-bright objects get
clipped at 1 because of the image file's limitations, and the true
brightness of the scene gets lost.  This is generally fine for an image
meant for direct viewing; however, if we want to use the image for
something where the brightness is required (e.g. reflection maps,
environment lighting, or, as you want to do, scene lighting evaluation)
then we end up getting false results, because the true brightness of the
>1 pixels is lost and they are treated the same as pixels with an actual
value of 1 (i.e., in the image the sun, the light bulb and the white piece
of paper will all be treated as having the same brightness, even though
this is clearly untrue).  HDR (high dynamic range, as opposed to LDR or
low dynamic range) formats save pixel values as floating-point numbers and
can store the actual value of each pixel, no matter what the value, thus
preserving the true brightness or dynamic range of the image.
Unfortunately POV-Ray 3.6 does not support HDR (it is being implemented in
the 3.7 beta, however), but MegaPOV does (http://megapov.inetart.net/).
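The sun/bulb/paper comparison above can be made concrete with a tiny numerical sketch; the radiance numbers below are invented for illustration, not photometric data:

```python
import numpy as np

# Toy 'scene': a sheet of paper, a light bulb and the sun, with made-up
# relative radiances.  An LDR image file clips everything above 1.0.
true_radiance = np.array([0.8, 50.0, 5000.0])
ldr_stored = np.clip(true_radiance, 0.0, 1.0)   # bulb and sun both become 'white'

true_mean = true_radiance.mean()   # dominated by the sun, as it should be
ldr_mean = ldr_stored.mean()       # drastically lower: the clipping hides it
```

The clipped average is off by three orders of magnitude here, which is exactly why a lighting-evaluation pipeline needs floating-point (HDR) output rather than a conventional 8-bit image.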

I hope this has been more helpful than confusing.  If you have any more
questions or want to see some examples of how it can work, feel free to
ask.  Also, have a look at this page, as it has some good information and
illustrations of HDR vs LDR:
http://www.highpoly3d.com/writer/tutorials/hdri/hdri.htm

-tgq


Post a reply to this message

Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.