Subject: Any thoughts on this...
From: Patrick Elliott
Date: 22 May 2008 01:43:18
Message: <MPG.229eb5e5ce6c245c98a159@news.povray.org>
Let's say, for the sake of argument, that you have a way to *detect* 
when an avatar in a game world has "touched" an object. Fine so far, 
but let's say the object can be any imaginable shape, and you *need* 
to be able to tell *where* they touched it, without breaking the 
object down into separate objects, each with its own detection. My 
thought is to use something like a bounding box. You can determine 
which "side" of the box you are touching it from, and where on that 
"side" you made contact, relative to its top-left corner, by 
extrapolating the contact point onto the box. There are obvious 
limitations to this method, but it may be good enough for most 
applications it's likely to be used for. So... the question becomes: 
knowing what I need, how the bloody heck do you implement it?
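
To make the geometry half concrete, here is a rough C sketch of what 
I mean, using the standard "slab" ray/box test. The touch is treated 
as a ray from the avatar toward the object; vec3, BoxFace, and 
ray_box_touch are just names I made up for illustration, not any 
existing engine's API:

#include <math.h>

typedef struct { double x, y, z; } vec3;

/* Which side S of the box was entered; the names are my own
 * convention, not anything a platform specifies. */
typedef enum {
    FACE_NONE = -1,
    FACE_X_NEG, FACE_X_POS,
    FACE_Y_NEG, FACE_Y_POS,
    FACE_Z_NEG, FACE_Z_POS
} BoxFace;

/* Slab-method ray/AABB test.  On a hit, reports which side the ray
 * entered first, plus the hit point's offset (u, v) on that side,
 * normalized to 0..1 against the box extents.  Assumes IEEE math,
 * so a zero direction component divides to +/-infinity harmlessly. */
static BoxFace ray_box_touch(vec3 origin, vec3 dir, vec3 bmin,
                             vec3 bmax, double *u, double *v)
{
    double o[3]  = { origin.x, origin.y, origin.z };
    double d[3]  = { dir.x, dir.y, dir.z };
    double lo[3] = { bmin.x, bmin.y, bmin.z };
    double hi[3] = { bmax.x, bmax.y, bmax.z };
    double tnear = -INFINITY, tfar = INFINITY;
    int axis = -1, neg = 1;

    for (int i = 0; i < 3; i++) {
        double t1 = (lo[i] - o[i]) / d[i];  /* crossing the low plane  */
        double t2 = (hi[i] - o[i]) / d[i];  /* crossing the high plane */
        int entered_low = (t1 < t2);        /* which plane we enter at */
        if (!entered_low) { double t = t1; t1 = t2; t2 = t; }
        if (t1 > tnear) { tnear = t1; axis = i; neg = entered_low; }
        if (t2 < tfar) tfar = t2;
    }
    if (axis < 0 || tnear > tfar || tfar < 0.0)
        return FACE_NONE;  /* ray misses the box entirely */

    /* Hit point, then drop the hit axis: the other two axes give
     * the 2D position on that side. */
    double p[3] = { o[0] + tnear * d[0],
                    o[1] + tnear * d[1],
                    o[2] + tnear * d[2] };
    int a = (axis + 1) % 3, b = (axis + 2) % 3;
    *u = (p[a] - lo[a]) / (hi[a] - lo[a]);
    *v = (p[b] - lo[b]) / (hi[b] - lo[b]);
    return (BoxFace)(axis * 2 + (neg ? 0 : 1));
}

Which corner of a given side counts as "top-left" is the one 
convention a spec like this would have to pin down.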

Basically, I am going to propose an extension to the "touch" 
functions in a virtual environment, one that works like the image 
maps used in HTML pages (so you can make images/buttons on the 
texture act as target points). I would like to have a clear 
description of what I am proposing and how it would work, so that 
the odds of it being adopted are a tad higher than if I just made 
the suggestion. Note: due to the way objects are handled, to 
maintain ownership of them, the script and the functions that 
determine what has been touched, and thus also where on the bounding 
box, run on the server end, not in the client. I figured that since 
the math to detect a collision with a bounding box is fast, as is 
the general means of deriving one, the only real issue is figuring 
out where something got touched. That part may have to be done 
client side (and getting that data to the server isn't a big deal if 
it has to be), but in essence, I figure you would be generating 
three pieces of information:

1. I am facing side S of the bounding box.
2. I touched the bounding box's side at X,Y.
3. I touched object O, whose bounding box is the one in #1 and #2.

It's already generating #3, but I don't see it being practical to 
detect where on the object the touch lands directly, since a) the 
objects are meshes, and b) even if you detected a face, that face 
could be far bigger than the part of the texture you *want* to 
detect the touch on.
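
And here is a rough sketch, again in C, of what the server-side 
script would do with those three pieces of information: resolve the 
X,Y against a list of rectangular hot spots on the touched side, the 
same way an HTML image map resolves a click against its <area> 
rectangles. TouchEvent, HotSpot, and every name in here are purely 
illustrative:

#include <stdio.h>

/* #1..#3 above, bundled as the report the client (or the server's
 * own collision pass) would hand to the script. */
typedef struct {
    int    object_id;  /* #3: which object O was touched          */
    int    side;       /* #1: which side S of its bounding box    */
    double x, y;       /* #2: where on that side, normalized 0..1 */
} TouchEvent;

/* A rectangular "hot spot" on one side, like an <area> tag. */
typedef struct {
    int         side;
    double      x0, y0, x1, y1;
    const char *name;  /* the target, like an image map's href */
} HotSpot;

static const char *resolve_touch(const TouchEvent *ev,
                                 const HotSpot *spots, int nspots)
{
    for (int i = 0; i < nspots; i++) {
        const HotSpot *h = &spots[i];
        if (ev->side == h->side &&
            ev->x >= h->x0 && ev->x <= h->x1 &&
            ev->y >= h->y0 && ev->y <= h->y1)
            return h->name;  /* first matching region wins */
    }
    return NULL;             /* touched outside every region */
}

int main(void)
{
    /* Two buttons painted on side 0 of some object's texture. */
    HotSpot spots[] = {
        { 0, 0.10, 0.10, 0.45, 0.30, "button_ok"     },
        { 0, 0.55, 0.10, 0.90, 0.30, "button_cancel" },
    };
    TouchEvent ev = { 42, 0, 0.60, 0.20 };  /* a sample report */

    const char *hit = resolve_touch(&ev, spots, 2);
    printf("touched: %s\n", hit ? hit : "(no region)");
    return 0;
}

The point being that the script author only has to lay out 
rectangles against the texture, exactly as they would for an HTML 
image map, and never has to care about the mesh itself.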

-- 
void main () {
    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
      call functional_code();
    }
    else
      call crash_windows();
}

Get 3D Models, 3D Content, and 3D Software at DAZ3D!
<http://www.daz3d.com/index.php?refid=16130551>

