POV-Ray : Newsgroups : povray.binaries.images : Re: Regarding the defocus module
Re: Regarding the defocus module (Message 7 to 16 of 16)
From: clipka
Subject: Re: Regarding the defocus module
Date: 8 Aug 2016 10:20:45
Message: <57a8953d$1@news.povray.org>
On 08.08.2016 at 09:32, pkrskr wrote:

>> In a scenario with a diaphragm in a different plane than the lens, this
>> simplification does not hold, as we would have to place the virtual
>> aperture at different X/Y coordinates depending on the point P (compare
>> the diagram, in which the red ray -- which should be at the center of
>> the virtual aperture -- passes through the lens at an offset from the
>> camera axis).
> 
> This is the most interesting point. In your first explanation, you mentioned
> that the red ray is originally drawn through the center of the nominal camera
> location (which is the virtual aperture), and I thus presumed that the ray went
> through the pole of this virtual aperture (since we were tracing through a
> pinhole). However, now that you mention it, I see that the pinhole ray was
> traced through the center of the real aperture and thus happens to be shifted in
> X/Y on the plane of the virtual aperture and the jitter is applied across this
> shifted point. This is indeed different from the above scenario where the lens
> and the diaphragm are at the same plane.

No, that's a misunderstanding there. The pinhole ray is also traced
through the center of the /virtual/ aperture.

> However, could you clarify how the locations of the real aperture and the
> virtual aperture are decided? And does this mean that the nominal camera
> location is the position of the real aperture?

As I said before:

"(1) The nominal camera location always corresponds to the center of the
virtual aperture."

As a matter of fact, POV-Ray /never/ actually "thinks" of the real
camera geometry in the first place, since the geometry is only specified
indirectly. When you set up a camera in POV-Ray, you are essentially
specifying:

- The camera location = center of the virtual aperture.
- The direction of the camera axis (via `direction` or `look_at`).
- The distance between the virtual aperture and the plane in focus
(again via `direction` or `look_at`).
- The effective horizontal and vertical opening angle of the camera
(either via `up`/`right` and `direction`, or via `angle`).

There is an infinite number of physical cameras that satisfy any given
combination of these parameters.
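
For concreteness, here is a minimal camera block touching each of these
parameters (the vectors and numbers are arbitrary, for illustration only):

camera {
  location <0, 1, -5>                // center of the virtual aperture
  direction <0, 0, 1>                // camera axis; with up/right it also
                                     // fixes the opening angles
  right x*image_width/image_height   // horizontal extent (aspect-correct)
  up y                               // vertical extent
  look_at <0, 1, 0>                  // re-aims the axis; keep it last
}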



From: pkrskr
Subject: Re: Regarding the defocus module
Date: 9 Aug 2016 06:00:00
Message: <web.57a9a8db29233825f2ed086a0@news.povray.org>
>
> No, that's a misunderstanding there. The pinhole ray is also traced
> through the center of the /virtual/ aperture.
>

Okay. So it seems like in the illustration you provided earlier, the red ray
should be corrected to pass through the center of the virtual aperture and not
the real aperture, since it represents the pinhole ray. And then it seems like
the jitter is applied around the location of the pinhole?

>
> As I said before:
>
> "(1) The nominal camera location always corresponds to the center of the
> virtual aperture."
>
> As a matter of fact, POV-Ray /never/ actually "thinks" of the real
> camera geometry in the first place, since the geometry is only specified
> indirectly. When you set up a camera in POV-Ray, you are essentially
> specifying:
>
> - The camera location = center of the virtual aperture.
> - The direction of the camera axis (via `direction` or `look_at`).
> - The distance between the virtual aperture and the plane in focus
> (again via `direction` or `look_at`).
> - The effective horizontal and vertical opening angle of the camera
> (either via `up`/`right` and `direction`, or via `angle`).
>
> There is an infinite number of physical cameras that satisfy any given
> combination of these parameters.

Okay. I think I understand what you are saying here. You are right about the
existence of several cameras for which the same parameter combination holds.



From: clipka
Subject: Re: Regarding the defocus module
Date: 9 Aug 2016 07:21:18
Message: <57a9bcae$1@news.povray.org>
On 09.08.2016 at 11:56, pkrskr wrote:
>>
>> No, that's a misunderstanding there. The pinhole ray is also traced
>> through the center of the /virtual/ aperture.
> 
> Okay. So it seems like in the illustration you provided earlier, the red ray
> should be corrected to pass through the center of the virtual aperture and not
> the real aperture, since it represents the pinhole ray.

No, the illustration is correct in that the _refracted_ red ray passes
through the center of the _real_ aperture, while the _unrefracted_ red
ray (shown semi-transparent) passes through the center of the _virtual_
aperture.

> And then it seems like
> the jitter is applied around the location of the pinhole?

Yes.



From: pkrskr
Subject: Re: Regarding the defocus module
Date: 9 Aug 2016 07:55:00
Message: <web.57a9c41729233825f2ed086a0@news.povray.org>
>
> No, the illustration is correct in that the _refracted_ red ray passes
> through the center of the _real_ aperture, while the _unrefracted_ red
> ray (shown semi-transparent) passes through the center of the _virtual_
> aperture.
>

I see the semi-transparent ray and that it originates at the center of the
virtual aperture. This precisely clarifies the method you detailed in elaborate
steps earlier.

Can you elaborate on how the nominal pinhole camera location is computed as
mentioned in step 2?

>
> (2) From the camera settings, POV-Ray computes the parameters for a
> simple pinhole camera equivalent, with the pinhole at the nominal camera
> location.
>



From: clipka
Subject: Re: Regarding the defocus module
Date: 9 Aug 2016 13:49:28
Message: <57aa17a8$1@news.povray.org>
On 09.08.2016 at 13:52, pkrskr wrote:

> I see the semi-transparent ray and that it originates at the center of the
> virtual aperture. This precisely clarifies the method you detailed in elaborate
> steps earlier.
> 
> Can you elaborate on how the nominal pinhole camera location is computed as
> mentioned in step 2?
>>
>> (2) From the camera settings, POV-Ray computes the parameters for a
>> simple pinhole camera equivalent, with the pinhole at the nominal camera
>> location.

I'll reiterate:

"As a matter of fact, POV-Ray /never/ actually "thinks" of the real
camera geometry in the first place, since the geometry is only specified
indirectly. [...]"

Both the virtual aperture center and the pinhole are placed at the
nominal camera location, as per the camera block.



From: udyank
Subject: Re: Regarding the defocus module
Date: 13 Sep 2016 09:00:02
Message: <web.57d7f7f829233825f2ed086a0@news.povray.org>
Hey!
@clipka: Thanks for your earlier clarifications. They really helped!

I need a little more help. I am trying out a sample scene with 3 horizontal bars
in the front and 3 vertical ones at the back. I have an image of this scene I
want to compare POV-Ray's result to, but I'm having problems setting up a few
parameters.
- First of all, in the picture you attached before (aperture.png) with the
standard lens diagram, we had an image plane along with the lens and an object.
My understanding was that the camera parameters in POV-Ray are for the 'lens'
part. I set the focal length using the 'direction' variable (I hope that's
correct). So how do I specify where the image plane is to be?
- Also regarding the aperture, the official doc says "while this behaves as a
real camera does, the values for aperture are purely arbitrary and are not
related to f-stops." If I want to specify the aperture value as 'f/X' as in a
camera (i.e. with respect to focal length), how can I specify that?
- Regarding the confidence value: the doc says "The confidence value is used to
determine when the samples seem to be close enough to the correct color." How
does POV-Ray 'know' the correct color? Suppose I have a point (like P in the
image) from which I shoot a number of rays, slightly deviated from each other,
going to the lens and converging at one point P' on the other side. Suppose
some rays hit an object before P' and others hit different objects beyond it.
How do you know the correct color in such a case, and subsequently, when to
stop tracing more samples because the color is 'close enough'?
- Finally, I have a different setup where I've configured my rays to just pick
up the color value of the object each ray hits and add it to the target pixel
the ray comes from. Is there a way to bypass the shading calculations (the
diffuse/ambient/specular etc. values) and just make POV-Ray add the object's
color value to the pixel it contributes to?

I'm attaching the files I'm using in the setup, along with the 'aperture.png'
used for reference. The testing1.png is when I'm using the 'focal_point'
parameter, and testing2.png is when that is removed. Why is such an effect
happening? From my understanding, the f-value = 1 unit (as set by
'direction').
Thanks in advance!




Attachments: 'files.zip' (940 KB)

From: Alain
Subject: Re: Regarding the defocus module
Date: 13 Sep 2016 12:41:42
Message: <57d82c46$1@news.povray.org>

> Hey!
> @clipka: Thanks for your earlier clarifications. They really helped!
>
> I need a little more help. I am trying out a sample scene with 3 horizontal bars
> in the front and 3 vertical ones at the back. I have an image of this scene I
> want to compare POV-Ray's result to, but I'm having problems setting up a few
> parameters.
> - First of all, in the picture you attached before (aperture.png) with the
> standard lens diagram, we had an image plane along with the lens and an object.
> My understanding was that the camera parameters in POV-Ray are for the 'lens'
> part. I set the focal length using the 'direction' variable (I hope that's
> correct). So how do I specify where the image plane is to be?

The direction vector sets the reference plane of the image. Objects in
front of that plane appear larger and those beyond it appear smaller. Along
with the up and right vectors, it determines the field of view, not a
focal length.
The image plane passes through the point computed by adding the direction
vector to the camera's location, and it is perpendicular to the direction
vector. It's further modified if you use look_at or any transformation
on the camera.

> - Also regarding the aperture, the official doc says "while this behaves as a
> real camera does, the values for aperture are purely arbitrary and are not
> related to f-stops." If I want to specify the aperture value as 'f/X' as in a
> camera (i.e. with respect to focal length), how can I specify that?

The f/X notation is a ratio. Say the focal_point is 100 units in front of the
camera and you want f/100: divide the distance between the camera and
focal_point by X (here 100), which gives an aperture of 1.
Formula:
aperture = vlength(Camera_Location - focal_point) / X
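
Spelled out in SDL with invented values (camera at the origin, focal_point
100 units away, aiming for f/100):

#declare Cam_Loc  = <0, 0, 0>;
#declare Focal_Pt = <0, 0, 100>;   // 100 units in front of the camera
#declare F_Number = 100;           // the X in f/X

camera {
  location Cam_Loc
  look_at Focal_Pt
  focal_point Focal_Pt
  aperture vlength(Cam_Loc - Focal_Pt) / F_Number  // 100 / 100 = 1
  blur_samples 50
}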

> - Regarding the confidence value: the doc says "The confidence value is used to
> determine when the samples seem to be close enough to the correct color." How
> does POV-Ray 'know' the correct color? Suppose I have a point (like P in the
> image) from which I shoot a number of rays, slightly deviated from each other,
> going to the lens and converging at one point P' on the other side. Suppose
> some rays hit an object before P' and others hit different objects beyond it.
> How do you know the correct color in such a case, and subsequently, when to
> stop tracing more samples because the color is 'close enough'?

confidence is the probability that the resulting colour is correct. 
It's a statistical thing. It's recommended to always use a value smaller 
than 1.
variance is how much you are willing to deviate from the exact colour. 
Using a value of zero is not recommended. Instead, use a very small 
value like 1e-6 or smaller.
After each sample is taken, the samples are averaged and compared with the
average of the previous samples. If the change after adding a new sample
is small enough, you can say that you are close enough. That evaluation
also depends on variance and adc_bailout from global_settings.
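
All of these are camera-block settings; a sketch with plausible values (only
"confidence below 1" and "small nonzero variance" come from the advice above,
the rest is illustrative):

camera {
  // ... location, direction, up, right, look_at ...
  aperture 0.5
  blur_samples 20, 200  // minimum, maximum samples per pixel
  confidence 0.95       // keep below 1
  variance 1e-6         // small but nonzero
}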

> - Finally, I have a different setup where I've configured my rays to just pick
> up the color value of the object each ray hits and add it to the target pixel
> the ray comes from. Is there a way to bypass the shading calculations (the
> diffuse/ambient/specular etc. values) and just make POV-Ray add the object's
> color value to the pixel it contributes to?

Remove all lights.
Set every finish as:
finish{ambient 1 diffuse 0 reflection 0 specular 0 phong 0}
Alternatively, but without focal blur, use +q0 on the command line.
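
To apply that finish to every object at once, assuming no texture later
overrides it, SDL's #default mechanism is one option:

// Every object now renders with its plain pigment colour,
// independent of any lighting.
#default {
  finish { ambient 1 diffuse 0 reflection 0 specular 0 phong 0 }
}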

>
> I'm attaching the files I'm using in the setup, along with the 'aperture.png'
> used for reference. The testing1.png is when I'm using the 'focal_point'
> parameter, and testing2.png is when that is removed. Why is such an effect
> happening? From my understanding, the f-value = 1 unit (as set by
> 'direction').
> Thanks in advance!
>
You need to set focal_point.
If left undefined, it may get located at the same place as the camera, 
or extremely far away.

A warning: You must use look_at AFTER you set the direction vector.
Normally, look_at is the last item in a standard camera block, and you only
set the focal blur parameters after it.
Good: direction <0,0,1>  look_at <0.0, 1.0, 0.0>
Bad:  look_at <0.0, 1.0, 0.0>  direction <0,0,1>



From: udyank
Subject: Re: Regarding the defocus module
Date: 13 Sep 2016 13:30:01
Message: <web.57d836ac29233825f2ed086a0@news.povray.org>
> The direction vector sets the reference plane of the image. Objects in
> front of that plane appear larger and those beyond it appear smaller. Along
> with the up and right vectors, it determines the field of view, not a
> focal length.
> The image plane passes through the point computed by adding the direction
> vector to the camera's location, and it is perpendicular to the direction
> vector. It's further modified if you use look_at or any transformation
> on the camera.

So if the direction vector is used to determine the field of view, how do I set
the 'f' value exactly? Suppose I've made a scene where I assume focal length = 2
'units' and all objects, lens and image plane are set according to that.
How/where do I set that in POV-Ray? As you must have seen in the .pov file, I
say "X units" from the camera. What I really want is to express that distance
as "Y*f", i.e. as a multiple of the f-value. How can I do that?

> The f/X notation is a ratio. Say the focal_point is 100 units in front of the
> camera and you want f/100: divide the distance between the camera and
> focal_point by X (here 100), which gives an aperture of 1.
> Formula:
> aperture = vlength(Camera_Location - focal_point) / X

Here the f/X I meant was like the f/1.4 or f/1.7 in real cameras. Supposing the
focal_point is X*f from the camera, and I want an aperture of f/2.0, what should
I set the 'aperture' parameter as? [Given that I know how to set the 'f' value
first?]

> confidence is the probability that the resulting colour is correct.
> It's a statistical thing. It's recommended to always use a value smaller
> than 1.
> variance is how much you are willing to deviate from the exact colour.

I still didn't get how you can know the 'correct' color. For computing
probabilities, that knowledge is required. Is it approximated such that you
keep adding the colors to the target pixel, and when that pixel's color stops
changing much, you say you don't need more rays?

> You need to set focal_point.
> If left undefined, it may get located at the same place as the camera,
> or extremely far away.

But what does focal_point really do in the context of the tracing done?



From: Alain
Subject: Re: Regarding the defocus module
Date: 14 Sep 2016 14:47:40
Message: <57d99b4c$1@news.povray.org>

>> The direction vector sets the reference plane of the image. Objects in
>> front of that plane appear larger and those beyond it appear smaller. Along
>> with the up and right vectors, it determines the field of view, not a
>> focal length.
>> The image plane passes through the point computed by adding the direction
>> vector to the camera's location, and it is perpendicular to the direction
>> vector. It's further modified if you use look_at or any transformation
>> on the camera.
>
> So if the direction vector is used to determine the field of view, how do I set
> the 'f' value exactly? Suppose I've made a scene where I assume focal length = 2
> 'units' and all objects, lens and image plane are set according to that.
> How/where do I set that in POV-Ray? As you must have seen in the .pov file, I
> say "X units" from the camera. What I really want is to express that distance
> as "Y*f", i.e. as a multiple of the f-value. How can I do that?
>
>> The f/X notation is a ratio. Say the focal_point is 100 units in front of the
>> camera and you want f/100: divide the distance between the camera and
>> focal_point by X (here 100), which gives an aperture of 1.
>> Formula:
>> aperture = vlength(Camera_Location - focal_point) / X
>
> Here the f/X I meant was like the f/1.4 or f/1.7 in real cameras. Supposing the
> focal_point is X*f from the camera, and I want an aperture of f/2.0, what should
> I set the 'aperture' parameter as? [Given that I know how to set the 'f' value
> first?]

f/2 means that the aperture is exactly half the distance between the
optical center of the lens and the film/detector.
In POV-Ray terms, that should be about half the length of the direction vector.
Do some testing using a fraction of the direction vector and of the
distance between the camera and focal_point.
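
For example, one such test setup (numbers invented, treating the direction
length as the focal length f and aiming for f/2):

camera {
  location <0, 0, 0>
  direction <0, 0, 2>      // |direction| = 2, taken here as f
  focal_point <0, 0, 20>   // plane of sharpness at 10*f
  aperture 1               // f/2: aperture = 2/2 = 1
  blur_samples 100
}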

>
>> confidence is the probability that the resulting colour is correct.
>> It's a statistical thing. It's recommended to always use a value smaller
>> than 1.
>> variance is how much you are willing to deviate from the exact colour.
>
> I still didn't get how you can know the 'correct' color. For computing
> probabilities, that knowledge is required. Is it approximated such that you
> keep adding the colors to the target pixel, and when that pixel's color stops
> changing much, you say you don't need more rays?
Yes. When the change from a new sample gets smaller than a threshold value,
you say: this is close enough, and I don't need any more samples.

>
>> You need to set focal_point.
>> If left undefined, it may get located at the same place as the camera,
>> or extremely far away.
>
> But what does focal_point really do in the context of the tracing done?
>
It defines the plane of sharpness: every ray shot toward any given point
on that plane contributes to exactly the same pixel in the final image.



From: udyank
Subject: Re: Regarding the defocus module
Date: 15 Sep 2016 03:40:00
Message: <web.57da4f2b29233825f2ed086a0@news.povray.org>
> f/2 means that the aperture is exactly half the distance between the
> optical center of the lens and the film/detector.
> In POV-Ray terms, that should be about half the length of the direction vector.
> Do some testing using a fraction of the direction vector and of the
> distance between the camera and focal_point.

> > But what does focal_point really do in the context of the tracing done?
> >
> It defines the plane of sharpness: every ray shot toward any given point
> on that plane contributes to exactly the same pixel in the final image.

It seems from what you are saying that the distance between the camera
location set by the 'location' parameter and the 'focal_point' equals the
focal length of the lens. But wouldn't that mean that all rays always come
from infinity? If my 'plane of sharpness' always sits at the focal plane of
the lens, rays always come from infinity, which would mean I cannot test any
other in-focus pair. For example, by the simple thin-lens formula, if v=1.1f,
then u=11f. That means for an object placed 11f on one side of the lens, I
should be able to place a film 1.1f on the other side and see at least the
object at 11f in focus, with other objects blurred according to their
distance from the 'plane of sharpness' (which would now be around 11f).
How can I see such an effect in POV-Ray?
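
(For reference, the thin-lens arithmetic behind those numbers:

  1/u + 1/v = 1/f, with v = 1.1f:
  1/u = 1/f - 1/(1.1f) = 0.1/(1.1f),  so  u = 11f.)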

> Yes. When the change from a new sample gets smaller than a threshold value,
> you say: this is close enough, and I don't need any more samples.
Understood!

One more request: would it be possible to have a chat with you somehow? I would
like to have a bit more back-and-forth on this, if that's okay with you. Should
I mail you, or is another platform more suitable?

Thanks for all your clarifications regarding this!


