Obtaining Depth while using Anti-aliasing

From: handos
Subject: Obtaining Depth while using Anti-aliasing
Date: 9 Jan 2013 15:30:01
Message: <web.50edd2ee984bc5d2d7ae32040@news.povray.org>
Hi everyone,

I modified the source/backend/render/tracetask.cpp file to obtain the depth
map. Every call of the form

From: trace(DBL(x), DBL(rect.top) - 1.0, GetViewData()->GetWidth(),
GetViewData()->GetHeight(), pixels(x, rect.top - 1));

was changed to call my modified trace function with an extra depth output:

To: trace(DBL(x), DBL(rect.top) - 1.0, GetViewData()->GetWidth(),
GetViewData()->GetHeight(), pixels(x, rect.top - 1), depthVal);

This call appears in three places inside
void TraceTask::NonAdaptiveSupersamplingM1():

(a) in the for loop commented // sample line above current block;

(b) in the for loop just after that, which starts with
    for(int y = rect.top; y <= rect.bottom; y++)
and contains
    trace(DBL(rect.left) - 1.0, y, GetViewData()->GetWidth(),
    GetViewData()->GetHeight(), pixels(rect.left - 1, y), depthVal); // sample pixel left of current line in block

(c) in the for loop over x that immediately follows, which contains the same trace call.

My modified trace function (defined in tracepixel.cpp in the same directory)
collects a depth value for each sub-sample inside the
if (CreateCameraRay(ray, x, y, width, height, rayno) == true) branch,
sums them, and then averages the depth in the same way the colour value is
averaged.
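
In case the description is unclear, here is a minimal standalone sketch of what
the averaging is meant to do (my own illustration with hypothetical names, not
the actual POV-Ray 3.7 code):

// Sketch only: hypothetical types, not the real POV-Ray classes.
// The depth of a supersampled pixel is accumulated and averaged over the
// sub-rays exactly like the colour.
#include <vector>

struct Sample { double r, g, b; double depth; };   // colour + ray-hit distance of one sub-ray

Sample AverageSupersamples(const std::vector<Sample>& subs)
{
    Sample out{0.0, 0.0, 0.0, 0.0};
    for (const Sample& s : subs) {
        out.r += s.r;  out.g += s.g;  out.b += s.b;
        out.depth += s.depth;                      // accumulate depth alongside colour
    }
    const double n = static_cast<double>(subs.size());
    out.r /= n;  out.g /= n;  out.b /= n;
    out.depth /= n;                                // average depth the same way as colour
    return out;
}

Note that at object silhouettes this mixes the depths of different surfaces,
just as anti-aliasing mixes their colours, so edge pixels can end up with a
depth that belongs to neither surface.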

I think something is going wrong here, because when I render two images with
depth-maps and try to align the images based on the depth-maps, I get something
really peculiar.

I have the images shown here at this web-page:
http://www.doc.ic.ac.uk/~ahanda/anti_alias_depth.html

The first image (1,1) shows the registration of the second image (called I_ref)
with the image at the bottom (called I_test). The black areas mark pixels where
the registration of I_ref with I_test (forward registration) and the
registration of I_test with I_ref (backward registration) do not agree - these
could be pixels that are occluded, etc. But I can't see why the plane at the
right is blacked out! Doesn't that indicate an error in the depth values? I am
wondering if someone can help me fix the depth-map code in the tracetask.cpp /
tracepixel.cpp files so that I get the expected results.

Apologies if this is hard to understand on a first reading. I can re-explain
it!

Thank you,
Ankur.



From: clipka
Subject: Re: Obtaining Depth while using Anti-aliasing
Date: 10 Jan 2013 02:17:37
Message: <50ee6b11@news.povray.org>
On 09.01.2013 21:28, handos wrote:

> The first image (1,1) shows the registration of the second image (called I_ref)
> with the image at the bottom (called I_test). The black areas mark pixels where
> the registration of I_ref with I_test (forward registration) and the
> registration of I_test with I_ref (backward registration) do not agree - these
> could be pixels that are occluded, etc. But I can't see why the plane at the
> right is blacked out! Doesn't that indicate an error in the depth values? I am
> wondering if someone can help me fix the depth-map code in the tracetask.cpp /
> tracepixel.cpp files so that I get the expected results.
>
> Apologies if this is hard to understand on a first reading. I can re-explain
> it!

Please do, as I have no idea what that "registration" thing is you're 
talking about.

Does "That Thing You Are Doing" work if you're not using anti-aliasing? 
Maybe the problem is in "That Thing" rather than in your modified 
version of POV-Ray.

Posting the depth maps associated with the ref & test images might also 
help.

(BTW, is there a specific reason you're limiting yourself to B&W with 
those images? I mean, you could make the non-agreeing regions stand out 
better if you used red, for instance. Or show the ref image in the green 
channel and the test image in the red channel. Fun stuff like that, 
which might help get a clearer picture - literally - of what is 
happening here.)


Staring at those images I get a hunch that you're doing something 
fundamentally wrong with "That Thing You Are Doing". From the pictures I 
gather that they are shot in the same direction but with some lateral 
translation of the camera, i.e. there's something going on with a 
parallax; I have a gut feeling that you're not taking that parallax into 
account the way you would need to.

(You /are/ aware that the depth computed during POV-Ray's tracing is 
/not/ some kind of "Z coordinate in camera space", but the raw distance 
to the camera, right?)



From: handos
Subject: Re: Obtaining Depth while using Anti-aliasing
Date: 10 Jan 2013 06:25:01
Message: <web.50eea45ce780593ad7ae32040@news.povray.org>
Hi Clipka,

Thanks for spending time on this. Registration here means aligning one image
with the other. For example, if we have the pose and depth-map for a given
image (in my case I_ref) and another pose (the pose of I_test in this case), we
can project the points given in the I_ref frame into the I_test frame and pick
the texture from I_ref to obtain an estimate of how I_test would have looked.
Since we also have the I_test render obtained directly from POV-Ray, we can
visually compare the I_test predicted from the 3D points of I_ref with the
I_test given by POV-Ray. This is a sanity check that the depth-maps and camera
poses are correct: ideally the predicted I_test and the POV-Ray I_test should
look the same. That is what I mean by registration. I call it forward
registration because I am registering I_ref to I_test.

Another sanity test was to align the images the other way around, swapping the
roles of I_ref and I_test (my modified POV-Ray gives me the depth-map and pose
for I_test too), which allows us to reason about occlusion. If there were no
occlusion, points from I_ref registered to I_test should have the same
correspondences as points from I_test registered back to I_ref. Black regions
mark points flagged as occluded. If you look at the images at
http://www.doc.ic.ac.uk/~ahanda/anti_alias_depth.html, the right wall is
completely black, which means something is wrong: that wall is clearly visible
in both images, so how can it be occluded?
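
To make the forward registration concrete, here is a minimal sketch of the
reprojection step (my own illustration, with hypothetical names, assuming
pinhole intrinsics K and depth already converted to camera-space Z; R and T are
the camera-to-world rotation and translation):

#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

static Vec3 mul(const Mat3& M, const Vec3& v)
{
    return { M[0][0]*v[0] + M[0][1]*v[1] + M[0][2]*v[2],
             M[1][0]*v[0] + M[1][1]*v[1] + M[1][2]*v[2],
             M[2][0]*v[0] + M[2][1]*v[1] + M[2][2]*v[2] };
}

// Lift pixel (u, v) of I_ref with camera-space depth z_ref to a 3D world point
// and reproject it into I_test; the caller compares I_test at (u', v') with the
// texture fetched from I_ref at (u, v).
static Vec3 reprojectRefToTest(double u, double v, double z_ref,
                               double fx, double fy, double cx, double cy, // entries of K
                               const Mat3& R_ref,  const Vec3& T_ref,      // pose of I_ref (camera -> world)
                               const Mat3& R_test_inv, const Vec3& T_test) // inverse rotation + position of I_test
{
    // back-project: pixel -> 3D point in the reference camera frame
    Vec3 p_ref = { (u - cx) / fx * z_ref, (v - cy) / fy * z_ref, z_ref };

    // reference camera frame -> world frame
    Vec3 p_w = mul(R_ref, p_ref);
    p_w = { p_w[0] + T_ref[0], p_w[1] + T_ref[1], p_w[2] + T_ref[2] };

    // world frame -> test camera frame
    Vec3 d = { p_w[0] - T_test[0], p_w[1] - T_test[1], p_w[2] - T_test[2] };
    Vec3 p_test = mul(R_test_inv, d);

    // project with K; result is (u', v', depth in the test view)
    return { fx * p_test[0] / p_test[2] + cx,
             fy * p_test[1] / p_test[2] + cy,
             p_test[2] };
}

The backward check simply swaps the roles of I_ref and I_test; a pixel is
marked black (potentially occluded) when the round trip does not land back near
the starting pixel, or when the reprojected depth disagrees with the depth-map
of the other view.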

Yes, the depth is the raw Euclidean distance, but I have code to convert it to
the real Z-coordinate. It is available in MATLAB / C++ at
http://www.doc.ic.ac.uk/~ahanda/HighFrameRateTracking/downloads.html under the
section "Ground Truth Data Parsing (Source Code)".

I have updated the images at http://www.doc.ic.ac.uk/~ahanda/anti_alias_depth.html
with red-green channels, and I have also shown the depth-maps next to both
images. They all seem OK to me (or maybe not!), but let me know if there is
something wrong with the camera matrix.

Thanks,
Ankur.

From: clipka
Subject: Re: Obtaining Depth while using Anti-aliasing
Date: 10 Jan 2013 18:31:54
Message: <50ef4f6a$1@news.povray.org>
On 10.01.2013 12:22, handos wrote:

> I have updated the images at http://www.doc.ic.ac.uk/~ahanda/anti_alias_depth.html
> with red-green channels, and I have also shown the depth-maps next to both
> images. They all seem OK to me (or maybe not!), but let me know if there is
> something wrong with the camera matrix.

Unfortunately I'm not familiar with the data formats you're using in 
your project, or what a "K-matrix" is, nor do I have Matlab available to 
review your math.

All I can tell is that yes, there seems to be /some/ discrepancy between 
what your algorithm /thinks/ the camera parameters were and what the 
/actual/ camera parameters were. Relevant parameters that spring to my 
mind are:

- camera position (seems easy enough, but you never know...)
- camera direction
- camera field of view (horizontally in this case)

It might be interesting to see in comparison what happens if the camera 
is moved in a different manner; from the image material it seems that 
you're just rotating the camera horizontally around its location. What 
happens if you rotate it vertically instead? What if you change the 
camera location?

Your algorithm's reaction to these different scenarios might help figure 
out what exactly is going wrong.


(BTW, your original suspected culprit, the anti-aliasing algorithm, is 
probably fine.)



From: handos
Subject: Re: Obtaining Depth while using Anti-aliasing
Date: 11 Jan 2013 06:05:01
Message: <web.50eff1d5e780593ad7ae32040@news.povray.org>
Hi Clipka,

Great! Thanks for letting me know that the anti-aliasing depth handling is fine.
So it is now back to recovering the camera parameters, i.e. the rotation and
translation of the camera with respect to the world frame of reference.

% cam_dir, cam_up and cam_pos are taken straight from the camera parameters below
z = cam_dir / norm(cam_dir);

x = cross(cam_up, z);   % cross product of cam_up and z
x = x / norm(x);

y = cross(z, x);        % cross product of z and x

R = [x y z];            % columns are the camera axes expressed in world coordinates
T = cam_pos;

For instance

cam_pos      = [-0.892, 1.3, 2.814]';
cam_dir      = [-0.0217671, 0, -0.999763]';
cam_up       = [0, 1, 0]';
cam_lookat   = [-0.992, 1.3, -1.779]';
cam_sky      = [0, 1, 0]';
cam_right    = [-1.33302, 0, 0.0290228]';
cam_fpoint   = [0, 0, 1]';
cam_angle    = 90;

I get the R and T as:

R =

   -0.9998         0   -0.0218
         0    1.0000         0
    0.0218         0   -0.9998

T =

   -0.8920
    1.3000
    2.8140

That is how I obtain the R and T of the camera. Do you think it could go wrong
anywhere? Is that how you would obtain them? If that all seems OK to you, then
it has to be the calibration matrix "K", which projects the 3D points onto the
image plane, that is going wrong.

My camera is initialised in the POV-Ray script as

camera {
  perspective
  location   < 0.108+val01,1.3, 2.814>
  look_at  < 0.008+val01,1.3,-1.779>
}

where val01 is -1.0.
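
In case it helps to pin down K: assuming the usual pinhole convention with the
principal point at the image centre, the intrinsics follow from POV-Ray's
horizontal angle and the render resolution roughly like this (a sketch with my
own names, not anything from the parser):

#include <cmath>

struct Intrinsics { double fx, fy, cx, cy; };

// angle_deg is POV-Ray's horizontal field of view ("angle 90" here);
// width/height are the render resolution in pixels.
Intrinsics intrinsicsFromPovCamera(double angle_deg, int width, int height)
{
    const double pi   = 3.14159265358979323846;
    const double half = angle_deg * pi / 360.0;
    const double fx   = (width / 2.0) / std::tan(half); // horizontal focal length in pixels
    const double fy   = fx;                             // square pixels if right/up match the aspect ratio
    return { fx, fy, width / 2.0, height / 2.0 };
}

With a left-handed POV-Ray camera one of the image axes may come out mirrored
relative to the usual convention, so the sign of the first column of K (or of R)
is worth double-checking.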

Kind Regards,
Ankur.



From: clipka
Subject: Re: Obtaining Depth while using Anti-aliasing
Date: 11 Jan 2013 16:57:27
Message: <50f08ac7@news.povray.org>
On 11.01.2013 12:04, handos wrote:

> Great! Thanks for letting me know that the anti-aliasing depth handling is fine.
> So it is now back to recovering the camera parameters, i.e. the rotation and
> translation of the camera with respect to the world frame of reference.

You've given only one set of camera parameters, but you have two images 
- can you also post the other set of camera parameters? (Please make 
sure to identify which is which.)

The camera translation T looks perfectly fine. I'm not so sure about the 
rotation matrix R: it may or may not need flipping about the diagonal, 
depending on whether you consider position vectors to be 1x3 or 3x1 
matrices. (Classic vector arithmetic typically uses the former 
convention, while computer graphics often uses the latter.)

Another thing to beware of is that POV-Ray by default uses a left-handed 
coordinate system. Did you account for that?

You can force POV-Ray to use a right-handed coordinate system by 
specifying the camera as

camera {
   right      x*ASPECT_RATIO
   up         y
   direction -z
   location <...>
   look_at <...>
   angle ...
   ...
}



From: handos
Subject: Re: Obtaining Depth while using Anti-aliasing
Date: 12 Jan 2013 11:05:02
Message: <web.50f1889be780593ad7ae32040@news.povray.org>
Hi Clipka,

I am giving the camera parameters for both images. The rotation matrices now
have only diagonal elements non-zero, to avoid any confusion with flipping etc.
My first set of camera parameters [R1 T1], corresponding to I_ref, is:

cam_pos      = [-0.892, 1.3, 2.814]';
cam_dir      = [0, 0, -1]';
cam_up       = [0, 1, 0]';
cam_lookat   = [0.108, 1.3, -1.779]';
cam_sky      = [0, 1, 0]';
cam_right    = [-1.33, 0, 0]';
cam_fpoint   = [0, 0, 1]';
cam_angle    = 90;

R1 =

    -1     0     0
     0     1     0
     0     0    -1

T1 =
   -0.8920
    1.3000
    2.8140

and the parameters of the second camera (this corresponds to I_test) are:
cam_pos      = [-0.712, 1.3, 2.814]';
cam_dir      = [0, 0, -1]';
cam_up       = [0, 1, 0]';
cam_lookat   = [0.108, 1.3, -1.779]';
cam_sky      = [0, 1, 0]';
cam_right    = [-1.33, 0, 0]';
cam_fpoint   = [0, 0, 1]';
cam_angle    = 90;

R2 =

    -1     0     0
     0     1     0
     0     0    -1

T2 =
   -0.7120
    1.3000
    2.8140


I_ref and I_test correspond to the red and green images respectively on this
web-page: http://www.doc.ic.ac.uk/~ahanda/anti_alias_depth.html .

Another thing I'd like to mention is that this scene is made of meshes (the obj
file made in Wings3D was converted to POV-Ray code via PoseRay; this is Jaime's
living room scene rendered via texture baking in POV-Ray). Do you think that
could be an issue? I am not sure, but I wanted to mention it in case it suggests
something.

I tried inverting the z direction (direction -z in the POV-Ray code) but I
observed the same thing. I can't quite figure out why the depth values on
surfaces parallel to the optical axis seem to register the images properly,
but not the ones along the Y axis. I'm even surprised that the ceiling is
aligned.

Thank you very much for your time.

Kind Regards,
Ankur.



From: clipka
Subject: Re: Obtaining Depth while using Anti-aliasing
Date: 12 Jan 2013 11:42:15
Message: <50f19267@news.povray.org>
On 12.01.2013 17:00, handos wrote:

> I tried inverting the z direction (direction -z in the POV-Ray code) but I
> observed the same thing. I can't quite figure out why the depth values on
> surfaces parallel to the optical axis seem to register the images properly,
> but not the ones along the Y axis. I'm even surprised that the ceiling is
> aligned.

Your camera movement in this example is (more or less) limited to the 
left/right direction, i.e. parallel to the unaffected walls. So if your 
algorithm makes some mistakes in calculating the absolute position of 
the ceiling or rear wall, it will make the same mistake in both images. 
It may actually match the wrong points, but those points will be roughly 
at the same distance and hence the mismatch won't be detected.

This is why I suggest repeating the experiment with (2) an up/down 
camera movement, and (3) a front/back movement, instead of the 
left/right movement. Your algorithm's reaction to those different 
scenarios might give important clues where to look for the error. 
Likewise, repeating those three experiments with the camera facing in 
the +X direction might also give valuable insight.



From: Le Forgeron
Subject: Re: Obtaining Depth while using Anti-aliasing
Date: 12 Jan 2013 12:23:47
Message: <50f19c23$1@news.povray.org>
On 12/01/2013 17:00, handos wrote:
> Hi Clipka,
> 
> I am giving camera parameters for both images. I have the rotation matrices now
> with only diagonal elements non-zero to avoid any confusion with flipping etc.
> My first camera parameters [R1 T1] corresponding to I_ref are:
> 
> cam_pos      = [-0.892, 1.3, 2.814]';
> cam_dir      = [0, 0, -1]';
> cam_up       = [0, 1, 0]';
> cam_lookat   = [0.108, 1.3, -1.779]';
> cam_sky      = [0, 1, 0]';
> cam_right    = [-1.33, 0, 0]';
> cam_fpoint   = [0, 0, 1]';
> cam_angle    = 90;
> 

> and the parameters of the second camera (this corresponds to I_test) are:
> cam_pos      = [-0.712, 1.3, 2.814]';
> cam_dir      = [0, 0, -1]';
> cam_up       = [0, 1, 0]';
> cam_lookat   = [0.108, 1.3, -1.779]';
> cam_sky      = [0, 1, 0]';
> cam_right    = [-1.33, 0, 0]';
> cam_fpoint   = [0, 0, 1]';
> cam_angle    = 90;

Hi, in case it helps in any way, here are the actual parameters of the
relevant perspective cameras once updated by look_at & angle processing,
in POV-Ray 3.7:

Pos.: -0.8920000, 1.3000000, 2.8140000
Up  : 0.0000000, 1.0000000, 0.0000000
Rig.: -1.2995551, 0.0000000, -0.2829425
Dir.: 0.1414713, 0.0000000, -0.6497776

Pos.: -0.7120000, 1.3000000, 2.8140000
Up  : 0.0000000, 1.0000000, 0.0000000
Rig.: -1.3092975, 0.0000000, -0.2337522
Dir.: 0.1168761, 0.0000000, -0.6546487


The cameras were declared with your parameters as:

camera { perspective
  location cam_pos
  direction cam_dir
  look_at cam_lookat
  up cam_up
  sky cam_sky
  right cam_right
  angle cam_angle
}
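
For what it's worth, those numbers follow directly from location, look_at, sky
and angle. A minimal sketch (my own reconstruction, not the actual parser code)
that reproduces the first camera:

#include <array>
#include <cmath>
#include <cstdio>

using V = std::array<double, 3>;
static V      sub(V a, V b)        { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static V      scale(V a, double s) { return {a[0]*s, a[1]*s, a[2]*s}; }
static double len(V a)             { return std::sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]); }
static V      cross(V a, V b)      { return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]}; }

int main()
{
    const V location = {-0.892, 1.3,  2.814};
    const V look_at  = { 0.108, 1.3, -1.779};
    const V sky      = { 0.0,   1.0,  0.0  };
    const double right_len = 1.33;                 // 4:3 aspect ratio
    const double angle_deg = 90.0;
    const double pi        = 3.14159265358979323846;
    const double dir_len   = right_len / (2.0 * std::tan(angle_deg * pi / 360.0));

    const V to_target = sub(look_at, location);
    const V dir   = scale(to_target, dir_len / len(to_target));              // direction length fixed by angle
    const V right = scale(cross(sky, dir), right_len / len(cross(sky, dir)));
    const V up    = scale(cross(dir, right), 1.0 / len(cross(dir, right)));

    std::printf("Dir : %.7f, %.7f, %.7f\n", dir[0], dir[1], dir[2]);       // ~  0.1414713, 0, -0.6497776
    std::printf("Rig.: %.7f, %.7f, %.7f\n", right[0], right[1], right[2]); // ~ -1.2995551, 0, -0.2829425
    std::printf("Up  : %.7f, %.7f, %.7f\n", up[0], up[1], up[2]);          // ~  0, 1, 0
}

The second camera only differs in its location, so it works out the same way.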

