Subject: Re: Any thoughts on this...
From: scott
Date: 26 May 2008 02:53:31
Message: <483a5e6b@news.povray.org>
> Actually, that *won't* work as I intended. In most cases you are going
> to be dealing with something that only has *one* side to it, like a
> HUD.

How is a HUD with one side different to any other mesh in the algorithm 
Darren described?  It should just work.

> In other cases, you may be dealing with something that has a
> complex surface, but you don't want/need to know which "triangle" was
> touched, you need to know where on the "texture" was touched,

For example, DirectX provides a function called D3DXIntersect which takes a 
mesh and a ray, then returns which face was hit and the barycentric hit 
coordinates within that face.  You can then do *very* simple math with the 
texture coordinates of that face to work out exactly which texture 
coordinates the pointer is over.
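
Roughly like this (a minimal D3D9-era sketch; pMesh and a ray already 
transformed into the mesh's model space are assumed, and error handling 
is omitted):

#include <d3dx9.h>

// Exact per-triangle pick against an ID3DXMesh (legacy D3D9/D3DX).
// rayPos/rayDir must be in the mesh's model space.
BOOL  hit       = FALSE;
DWORD faceIndex = 0;
FLOAT baryU = 0.0f, baryV = 0.0f;   // barycentric coords within the hit face
FLOAT dist  = 0.0f;                 // distance along the ray to the hit

D3DXIntersect(pMesh, &rayPos, &rayDir,
              &hit, &faceIndex, &baryU, &baryV, &dist,
              NULL, NULL);          // don't need the full list of hits

if (hit)
{
    // faceIndex identifies the triangle; baryU/baryV locate the hit point
    // inside it, so interpolating the three vertices' texture coordinates
    // gives the exact texel under the pointer (see below).
}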

> which
> "may" overlap several such triangles, if its a complex shape. Using
> tests against which face was touched just dumps you right back in the
> same "Oh, prim xyz was touched, now what?", case you had before, since
> you then would have to figure out if the texture is "on" that triangle,
> which "part" of it is on that triangle, and "if" that somehow connected
> to what you are looking for in the script. I.e., a total mess.

No, that seems like the logical way to do it to me.  First convert the 
pointer screen position to a ray (i.e. a start point and direction) in 3D.  
Then make a list of candidate objects the ray intersects (using simple 
bounding spheres or boxes), test each of those with a function like 
D3DXIntersect, and finally compute the texture coordinates of the hit.
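
As a sketch of that whole pipeline (again D3D9/D3DX terms; the viewport, 
the view/projection matrices and the per-object bounding data are 
assumptions):

#include <d3dx9.h>

// Unproject the mouse position at the near and far planes to get a ray.
D3DXVECTOR3 screenNear((float)mouseX, (float)mouseY, 0.0f);  // z=0: near plane
D3DXVECTOR3 screenFar ((float)mouseX, (float)mouseY, 1.0f);  // z=1: far plane
D3DXVECTOR3 nearPt, farPt;
D3DXMATRIX  identity;
D3DXMatrixIdentity(&identity);

D3DXVec3Unproject(&nearPt, &screenNear, &viewport, &proj, &view, &identity);
D3DXVec3Unproject(&farPt,  &screenFar,  &viewport, &proj, &view, &identity);

D3DXVECTOR3 rayPos = nearPt;
D3DXVECTOR3 rayDir = farPt - nearPt;
D3DXVec3Normalize(&rayDir, &rayDir);

// Broad phase: cheap bounding-sphere rejection before the exact test.
for (Object& obj : objects)   // hypothetical scene list
{
    if (!D3DXSphereBoundProbe(&obj.boundCenter, obj.boundRadius,
                              &rayPos, &rayDir))
        continue;

    // Narrow phase: transform the ray into obj's model space, then call
    // D3DXIntersect as above and convert the hit to texture coordinates.
}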

Note, however, that your system *might* allow the same part of the texture 
to be used on more than one part of the mesh; the GPU and APIs certainly 
allow this.

> What I am looking at, instead, is much simpler. You have some complex
> object that "happens" to fit in a box that is 5x3x1, and a texture on
> *one* of the 5x3 faces, that is towards you. You touch it. The
> client/server determines that you "would have" also touched that face,
> of that virtual "box", and based on that, returns a set of coordinates
> xp and yp, which range between 0 and 1

That is going to fail and annoy users: even slight deviations of the real 
surface from the bounding box face produce parallax errors.  If the user 
tries to click on a small area, your algorithm will report the wrong place 
and the user won't be able to click on buttons etc.

> Your method would require something a lot more complicated, involving
> determining if the right texture is on that part of the object,

You should have the information already if you are drawing the object on the 
screen.

> if it
> is, where that is in relationship "to" that texture,

Again, you should already have that information in the vertex list that you 
pass to the GPU.  Each vertex has x and y texture coordinates passed to it, 
and if you use something like D3DXIntersect that returns barycentric hit 
coordinates, the math is extremely simple.
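
Concretely, once you have fetched the three vertices of the reported face 
(the index/vertex buffer lookup via faceIndex is elided here), it is one 
line:

// uv0, uv1, uv2: texture coordinates of the hit face's three vertices.
// D3DXIntersect's barycentric (u, v) weight vertices 1 and 2 relative to 0:
D3DXVECTOR2 hitUV = uv0 + baryU * (uv1 - uv0) + baryV * (uv2 - uv0);
// hitUV.x and hitUV.y are the texture coordinates under the pointer.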

