POV-Ray : Newsgroups : povray.binaries.images : "Position Finder" results
  Re: "Position Finder" results  
From: Bald Eagle
Date: 2 Oct 2018 12:35:00
Message: <web.5bb39e20813dde7ac437ac910@news.povray.org>
"Bald Eagle" <cre### [at] netscapenet> wrote:
> clipka <ano### [at] anonymousorg> wrote:
>
> > Or, maybe that's actually the problem?
>
> He's just not seeing it embedded in
> transform{TEXT_OBJECT_TRANSFORM}



So, I've just been dabbling with this in the odd 5-10 minute blocks of free time
that I have, and trying to see what I did with modeling the camera's view
frustum.

Let's suppose we're using the default camera: camera at the origin, the
"canvas"/image plane at z=1, right x, up y, sky y, direction z, look_at z.

If you're going to select pixels, then you want to translate those values so
that you get the +/- 0.5 range for x and y, with the center being <0,0>

Just divide the pixel value by the width and subtract 0.5 to give a range of
-0.5 to 0.5

Do the same for y and height, and flip the sign
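A minimal sketch of that remapping in Python (the function name and the exact pixel-center convention are my own, not from the post):

```python
def pixel_to_canvas(px, py, width, height):
    """Map a pixel coordinate to the default camera's canvas plane.

    x runs -0.5..+0.5 left to right; y's sign is flipped because
    image rows count downward while the camera's y axis points up.
    """
    x = px / width - 0.5
    y = -(py / height - 0.5)
    return x, y
```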

Then to project that point "out" into the 3D scene, you just multiply all those
values by the z value returned by trace() - because similar triangles: the
canvas point <x, y, 1> and the scene point lie on the same ray through the
camera, so the scene point is <x*z, y*z, z>.
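In code, the similar-triangles step is just a scale (a sketch; `depth` stands in for the z distance that trace() hands back):

```python
def canvas_to_world(x, y, depth):
    # The ray through canvas point <x, y, 1> hits the scene at
    # depth * <x, y, 1> -- similar triangles against the z=1 plane.
    return (x * depth, y * depth, depth)
```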


This is not the hard part - the hard part is doing this when the basis vectors
are different - when the camera location, look_at, etc. aren't axis-aligned.


So for envisioning what to do:
1. Take the scene's camera values and rotate everything so that it's in the
"default" frame.   (more to do if there's shear, etc.)
2. Do the simple math.
3. Rotate everything back by doing exactly the opposite of what you did in step
1.
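One way to sketch steps 1-3 (helper names are my own; this assumes an orthonormal camera with no shear, and uses the standard cross-product formula, which in POV-Ray's left-handed frame gives right = sky x forward):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def camera_basis(location, look_at, sky=(0, 1, 0)):
    # Step 1: build the camera's right/up/forward axes.
    forward = normalize(sub(look_at, location))
    right = normalize(cross(sky, forward))
    up = cross(forward, right)
    return right, up, forward

def world_to_camera(p, location, basis):
    # Rotate into the "default" frame: for a pure rotation the
    # inverse is the transpose, i.e. dot against each axis.
    d = sub(p, location)
    return tuple(dot(d, axis) for axis in basis)

def camera_to_world(p, location, basis):
    # Step 3: exactly the opposite -- recombine along the axes.
    r, u, f = basis
    return tuple(location[i] + p[0]*r[i] + p[1]*u[i] + p[2]*f[i]
                 for i in range(3))
```

With the point rotated back into the default frame, the simple math of step 2 applies unchanged; camera_to_world then undoes step 1 - which is exactly what a transform matrix would bundle into a single operation.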


And this is where I think the transform matrix approach will really shine.

I just need enough uninterrupted round-tuits to get oriented and forge far
enough ahead.

[highly] Suggested reading:
http://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points


