  Any thoughts on this... (Message 1 to 10 of 16)  
From: Patrick Elliott
Subject: Any thoughts on this...
Date: 22 May 2008 01:43:18
Message: <MPG.229eb5e5ce6c245c98a159@news.povray.org>
Let's say, for the sake of argument, that you have a way to *detect* when 
an avatar in a game world has "touched" an object. Fine so far, but 
let's say the object can be any imaginable shape, and you "need" to be 
able to tell *where* they touched it, without breaking the object down 
into separate sub-objects, each with their own detection. My thought is to use 
something like a bounding box. You can determine which "side" of the box 
you are touching it from, and where on that "side", relative to its 
top-left corner, the contact happened, by sort of extrapolating where on 
that "side" you contacted it. There are obvious limitations to this 
method, but it may be good enough for most applications it's likely to 
be used for. So.. The question becomes, knowing what I need, how the 
bloody heck do you implement it?

Basically, I am going to try to propose an extension to the "touch" 
functions in a virtual environment, which works similar to the image 
maps used in HTML pages (so you can make images/buttons on the texture 
as target points), but I would like to have a clear description of what 
I am proposing and how it would work, so that the odds of it being 
adopted are a tad higher than if I just made the suggestion. Note - Due to 
the way objects are handled, to maintain ownership of them, the script 
and the functions that determine what has been touched, and thus also 
where on the bounding box, run on the server end, not in the client. I 
figured that since the math to detect a collision with a bounding box is 
fast, as is the general means to derive one, the only real issue is 
figuring out where something got touched. This may have to 
be done client side (and getting that data to pass to the server isn't a 
big deal if it has to be), but in essence, I figure you would be 
generating three pieces of information:

1. I am facing side S of the bounding box.
2. I touched the bounding box's side at X,Y.
3. I touched object O, which is bound by #2.

It's already generating #3, but I don't see it being practical to detect 
where on an object it touches directly, since a) the objects are meshes, 
and b) even if you detected a face, that face could be far bigger than 
the part of the texture that you *want* to detect the touch on.
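
Just to make that concrete, here is a rough sketch, in C, of how those 
pieces of information could be derived, assuming an axis-aligned bounding box 
and a "touch ray" shot from the camera through the click point. Every name in 
it (FaceHit, ray_box_face, and so on) is made up for the sketch; it is just the 
standard slab-style ray/box test, plus reading off where on the entry face the 
hit landed:

/* Sketch only: slab-method ray/AABB test that reports which face of the
   box the ray enters and where on that face, measured from the face's
   minimum corner (standing in for the "top-left" of that side). */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 min, max; } AABB;

/* side: 0/1 = -X/+X, 2/3 = -Y/+Y, 4/5 = -Z/+Z */
typedef struct { int hit; int side; double x, y; } FaceHit;

static FaceHit ray_box_face(Vec3 origin, Vec3 dir, AABB box)
{
    FaceHit result = { 0, -1, 0.0, 0.0 };
    double o[3]  = { origin.x, origin.y, origin.z };
    double d[3]  = { dir.x, dir.y, dir.z };
    double lo[3] = { box.min.x, box.min.y, box.min.z };
    double hi[3] = { box.max.x, box.max.y, box.max.z };

    double tmin = -INFINITY, tmax = INFINITY;
    int enter_axis = -1, enter_side = 0;

    for (int a = 0; a < 3; a++) {
        if (fabs(d[a]) < 1e-12) {               /* ray parallel to this slab */
            if (o[a] < lo[a] || o[a] > hi[a]) return result;
            continue;
        }
        double t1 = (lo[a] - o[a]) / d[a];      /* hit the min face */
        double t2 = (hi[a] - o[a]) / d[a];      /* hit the max face */
        int side = (t1 < t2) ? 0 : 1;           /* which of the two is entered */
        if (t1 > t2) { double t = t1; t1 = t2; t2 = t; }
        if (t1 > tmin) { tmin = t1; enter_axis = a; enter_side = side; }
        if (t2 < tmax) tmax = t2;
        if (tmin > tmax) return result;         /* missed the box */
    }
    if (enter_axis < 0 || tmin < 0.0) return result;   /* inside or behind us */

    /* point where the ray enters the box */
    double p[3] = { o[0] + d[0] * tmin, o[1] + d[1] * tmin, o[2] + d[2] * tmin };

    /* the two axes spanning that face give the offset on the face */
    int ax = (enter_axis + 1) % 3, ay = (enter_axis + 2) % 3;
    result.hit  = 1;
    result.side = enter_axis * 2 + enter_side;   /* 1. which side S       */
    result.x    = p[ax] - lo[ax];                /* 2. where on that side */
    result.y    = p[ay] - lo[ay];
    return result;
}

int main(void)
{
    AABB box = { { 0, 0, 0 }, { 2, 1, 1 } };     /* the object's bounding box */
    Vec3 eye = { 1.2, 0.4, 5.0 };                /* camera / touch origin     */
    Vec3 dir = { 0.0, 0.0, -1.0 };               /* ray toward the object     */

    FaceHit h = ray_box_face(eye, dir, box);
    if (h.hit)
        printf("touched side %d at (%.2f, %.2f)\n", h.side, h.x, h.y);
    return 0;
}

The third piece, which object O was touched, is simply whichever object's 
bounding box you ran the test against in the first place.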

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>


From: Warp
Subject: Re: Any thoughts on this...
Date: 22 May 2008 02:25:39
Message: <483511e3@news.povray.org>
http://en.wikipedia.org/wiki/Collision_detection

-- 
                                                          - Warp


From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 22 May 2008 21:48:17
Message: <MPG.229fd0135ebfee5698a15a@news.povray.org>
In article <483511e3@news.povray.org>, war### [at] tagpovrayorg says...
>   http://en.wikipedia.org/wiki/Collision_detection
> 
Hmm. There is no "collision" between the mouse pointer and the object, 
save in the most basic sense that something like the POV-Ray camera can 
be said to "collide" with an object via its vector. The issue here is 
detecting one such collision, based on another. I.e., you can't "test" 
the bounding box *unless* you already know that the pointer made 
apparent contact with the object, and then you have to determine *which* 
face of the bounding box that "had to be" for it to have struck it. Mind 
you, I can see where it's possible they are already using a bounding 
system to limit how many tests they have to make, and therefore they may 
be testing if something "entered" the boundary, then if it actually 
struck the object. That would make things a whole heck of a lot simpler, 
but since this is all done on the server end, for security reasons, I 
don't have access to the source to check...

I suppose the wiki is applicable only if they are doing something "like 
that" already. If not, then I suppose a second test against the bounding 
box created would work. Frankly, I am not really certain that they "are" 
using something to limit how many tests they need. Their code often runs 
slow, their asset service, which tracks who owns things, goes down way 
too often, etc. Even *basic* stuff in the script system seems to be 
missing, since it never occurred to them that it made sense to have it.

Example - Every person/group has a unique ID. The "test" function to 
tell who touched an object only tests "one" ID at a time, and you can 
only have one such script in each object, which means you have to use 
multiple objects, each with a separate test, to determine if one of a 
list of people/groups touched it. Some people get around this using HTTP 
requests and external servers to handle the tests, but imho, it's damn 
stupid that they waste time and resources to run multiple scripts, in 
multiple objects, which "still" have to be sent to the client, along 
with textures and other data, to do what a simple "Is the ID of the 
person that touched it in a *list* of IDs?" check would do. I mean, this 
is a complete "duh!", but it's missing.

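For what it's worth, the missing check really is nothing more than a list 
membership test. A throwaway sketch in C, with made-up IDs (the real thing 
would obviously live in the scripting system, not in C):

/* Sketch of the missing "is the toucher in a *list* of IDs?" test,
   using plain strings as stand-in IDs. Purely illustrative. */
#include <stdio.h>
#include <string.h>

static const char *allowed[] = {
    "a1b2c3d4-0000-0000-0000-000000000001",   /* made-up IDs */
    "a1b2c3d4-0000-0000-0000-000000000002",
    "a1b2c3d4-0000-0000-0000-000000000003",
};

static int toucher_is_allowed(const char *toucher_id)
{
    for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
        if (strcmp(toucher_id, allowed[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    const char *toucher = "a1b2c3d4-0000-0000-0000-000000000002";
    printf("%s\n", toucher_is_allowed(toucher) ? "allowed" : "ignored");
    return 0;
}
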
So, I am not betting that they are necessarily using bounding boxes, or 
other tests, effectively to determine what got touched, client side or 
server side. :(

Still, I think I can probably post the idea anyway, since it's possible 
they do have something in place that could be adapted to allow it.

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>


From: Darren New
Subject: Re: Any thoughts on this...
Date: 22 May 2008 22:10:40
Message: <483627a0$1@news.povray.org>
Patrick Elliott wrote:
> So.. The question becomes, knowing what I need, how the bloody heck 
> do you implement it?

I would say you can separate the security from the calculations pretty 
easily.

1) The client knows what it's drawing at each pixel, so when you click, 
compute which bit of mesh was shown in the pixel the mouse was in when 
the click occurred. Report that to the server.

2) All the *server* has to do is verify that the client is capable of 
seeing the bit of mesh that you clicked on. In other words, the server 
doesn't have to figure out where you clicked - it only has to confirm 
that you could have clicked on what the client figured out. Given the 
server knows the object's mesh coords and the camera's location, this 
should be pretty easy. A piece of mesh partially obscured might make 
this a little harder.
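
To make (1) concrete, the report could be as small as something like this 
(hypothetical field names, not any real protocol; the client fills it in from 
its own picking and the server merely sanity-checks it):

/* Hypothetical client->server "I clicked here" report; none of these
   field names come from a real protocol, they just make the division
   of labor concrete. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint64_t object_id;   /* which object/prim was picked            */
    uint32_t face;        /* which face of that object               */
    uint32_t triangle;    /* which triangle of its mesh              */
    float    u, v;        /* where on that triangle/texture, in 0..1 */
} TouchReport;

int main(void)
{
    TouchReport r = { 42u, 1u, 7u, 0.25f, 0.75f };   /* filled in by the client */
    printf("object %llu, face %u, triangle %u, at (%.2f, %.2f)\n",
           (unsigned long long)r.object_id, (unsigned)r.face,
           (unsigned)r.triangle, r.u, r.v);
    return 0;
}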

> Basically, I am going to try to propose an extension to the "touch" 
> functions in a virtual environment, which works similar to the image 
> maps used in HTML pages (so you can make images/buttons on the texture 
> as target points), 

Yeah. My object in SL took 57 objects to make a 5x5 board for 2 people. 
Messy to instantiate. :-)


-- 
   Darren New / San Diego, CA, USA (PST)
     "That's pretty. Where's that?"
          "It's the Age of Channelwood."
     "We should go there on vacation some time."


From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 22 May 2008 22:36:14
Message: <MPG.229fdb834ffd0e7898a15b@news.povray.org>
In article <483627a0$1@news.povray.org>, dne### [at] sanrrcom says...
> Patrick Elliott wrote:
> > So.. The question becomes, knowing what I need, how the bloody heck 
> > do you implement it?
> 
> I would say you can separate the security from the calculations pretty 
> easily.
> 
> 1) The client knows what it's drawing at each pixel, so when you click, 
> compute which bit of mesh was shown in the pixel the mouse was in when 
> the click occurred. Report that to the server.
> 
> 2) All the *server* has to do is verify that the client is capable of 
> seeing the bit of mesh that you clicked on. In other words, the server 
> doesn't have to figure out where you clicked - it only has to confirm 
> that you could have clicked on what the client figured out. Given the 
> server knows the object's mesh coords and the camera's location, this 
> should be pretty easy. A piece of mesh partially obscured might make 
> this a little harder.
> 
Well. Yeah, I figure this is the case, but I also figured that complex 
objects "might" be a bit more complicated than testing a bounding box 
around them. However, if they already do basic detection to determine if 
they need to "test" for such things anyway, then the bounding box already 
exists. Since I don't know nearly enough of the math, I have no clue how 
complicated something like a sculptie would make the whole mess, never 
mind linked objects.

> > Basically, I am going to try to propose an extension to the "touch" 
> > functions in a virtual environment, which works similar to the image 
> > maps used in HTML pages (so you can make images/buttons on the texture 
> > as target points), 
> 
> Yeah. My object in SL took 57 objects to make a 5x5 board for 2 people. 
> Messy to instantiate. :-)
> 
Ah. You guessed it. lol Seriously though, HUDs and even things like 
kiosks and door keypads waste anything from 6-7 prims and textures for 
something "simple", like the animation overriders, to 9 for a keypad, to 
10+ for some kiosks, etc. And every damn one has to have a script to 
"say" things to the key prim, which needs a listener, etc. Just making 
it detect where on a texture you touched, so you don't need all the damn 
prims, could cut the amount of crap running server side, as well as the 
number of prims and textures to be sent/rendered, in half.

As for your game.. Not sure that would have helped much. You still need 
a key "listener" with the board texture, to detect where you clicked, 
instead of talking to it, then lots of listeners in the pieces, to make 
them rez, move and derez when needed. It's possible it would have helped 
some, but maybe not as much as you think in that case.

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>


From: Darren New
Subject: Re: Any thoughts on this...
Date: 22 May 2008 23:48:03
Message: <48363e73@news.povray.org>
Patrick Elliott wrote:
> Well. Yeah, I figure this is the case, but I also figured that complex 
> objects "might" be a bit more complicated than testing a bounding box 
> around them.

I'm not sure you followed my description.

The client looks at the click, and decides which facet of which prim was 
drawn where you clicked. The server isn't involved yet.

The client then sends to the server the fact that you clicked at a 
particular point on a particular triangle in a particular mesh on a 
particular prim. The server merely needs to verify that the normal of 
that triangle points towards the avatar and isn't blocked by something 
else.

I.e., it makes the hard part ("what did I hit where") run on the client, 
and the server only has to verify that (say) all three corners of the 
triangle are in view of the camera. The actual mouse position relative 
to the screen, for example, never needs to go to the server.
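
A rough sketch of that server-side check, in C. The occlusion part is left 
out, the avatar and the camera are treated as the same point, and every name 
is invented; it is just "normal faces the viewer, all corners in front of the 
camera":

/* Sketch: could the viewer at 'eye', looking along 'view', plausibly
   have clicked triangle (a, b, c)? Occlusion is not checked. */
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3   sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

static int click_is_plausible(Vec3 a, Vec3 b, Vec3 c, Vec3 eye, Vec3 view)
{
    Vec3 n = cross(sub(b, a), sub(c, a));            /* triangle normal      */
    if (dot(n, sub(eye, a)) <= 0.0)                  /* facing away from us  */
        return 0;

    Vec3 corners[3] = { a, b, c };
    for (int i = 0; i < 3; i++)                      /* corner behind camera */
        if (dot(sub(corners[i], eye), view) <= 0.0)
            return 0;

    return 1;                            /* plausible (occlusion not checked) */
}

int main(void)
{
    Vec3 a = { 0, 0, 0 }, b = { 1, 0, 0 }, c = { 0, 1, 0 };  /* faces +Z */
    Vec3 eye = { 0.2, 0.2, 5.0 }, view = { 0, 0, -1 };
    printf("%s\n", click_is_plausible(a, b, c, eye, view) ? "plausible" : "rejected");
    return 0;
}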

> As for your game.. Not sure that would have helped much. 

My game doesn't have movable pieces. It has poke-able places. I.e., more 
like tic-tac-toe than chess.

-- 
   Darren New / San Diego, CA, USA (PST)
     "That's pretty. Where's that?"
          "It's the Age of Channelwood."
     "We should go there on vacation some time."


From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 23 May 2008 23:35:33
Message: <MPG.22a13af354b5b0a798a15c@news.povray.org>
In article <48363e73@news.povray.org>, dne### [at] sanrrcom says...
> Patrick Elliott wrote:
> > Well. Yeah, I figure this is the case, but I also figured that complex 
> > objects "might" be a bit more complicated than testing a bounding box 
> > around them.
> 
> I'm not sure you followed my description.
> 
> The client looks at the click, and decides which facet of which prim was 
> drawn where you clicked. The server isn't involved yet.
> 
> The client then sends to the server the fact that you clicked at a 
> particular point on a particular triangle in a particular mesh on a 
> particular prim. The server merely needs to verify that the normal of 
> that triangle points towards the avatar and isn't blocked by something 
> else.
> 
> I.e., it makes the hard part ("what did I hit where") run on the client, 
> and the server only has to verify that (say) all three corners of the 
> triangle are in view of the camera. The actual mouse position relative 
> to the screen, for example, never needs to go to the server.
> 
Actually, that *won't* work as I intended. In most cases you are going 
to be dealing with something that only has *one* side to it, like a 
HUD. In other cases, you may be dealing with something that has a 
complex surface, but you don't want/need to know which "triangle" was 
touched; you need to know where on the "texture" it was touched, which 
"may" overlap several such triangles, if it's a complex shape. Using 
tests against which face was touched just dumps you right back into the 
same "Oh, prim xyz was touched, now what?" case you had before, since 
you then would have to figure out if the texture is "on" that triangle, 
which "part" of it is on that triangle, and "if" that somehow connects 
to what you are looking for in the script. I.e., a total mess.

What I am looking at, instead, is much simpler. You have some complex 
object that "happens" to fit in a box that is 5x3x1, and a texture on 
*one* of the 5x3 faces, the one that is towards you. You touch it. The 
client/server determines that you "would have" also touched that face, 
of that virtual "box", and based on that, returns a pair of coordinates 
xp and yp, which range between 0 and 1 (you might have stretched the 
texture, so you can figure out where something "should be" as a 
percentage of the texture far more easily than by "dimension"). So, if 
you touch the virtual box at some place like 3.41x1.21, then (assuming 
you touched the object at all, since a complex one may not have any part 
of itself "at" those coordinates) you get the "apparent" location on the 
texture: xp = 0.682 and yp = 0.4033. You can then test whether 
that falls within a region on the texture that would have had your 
button on it. It won't work with *hugely* complex surfaces, but for 
anything that uses a relatively flat surface anyway, the result will, 
even if you map over something not 100% flat, be "close" to what you 
figured when designing the texture. And of course, if you had such a 
system, you could add a feature to any "editors" that may be designed 
later to produce an "accurate" location, even if it is on a complex 
surface.
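
The arithmetic really is just dividing by the face's dimensions, plus a 
rectangle test per "button", HTML-image-map style. A tiny sketch in C (the 
region numbers are made up for the example):

/* Sketch: touch at (3.41, 1.21) on a 5x3 face -> (xp, yp) = (0.682, 0.403),
   then test against rectangular "button" regions given in 0..1 fractions. */
#include <stdio.h>

typedef struct { double x0, y0, x1, y1; } Region;   /* fractions of the face */

static int in_region(double xp, double yp, Region r)
{
    return xp >= r.x0 && xp <= r.x1 && yp >= r.y0 && yp <= r.y1;
}

int main(void)
{
    double face_w = 5.0, face_h = 3.0;   /* the 5x3 face of the virtual box */
    double tx = 3.41, ty = 1.21;         /* where that face was touched     */

    double xp = tx / face_w;             /* 0.682   */
    double yp = ty / face_h;             /* ~0.4033 */

    Region ok_button = { 0.60, 0.30, 0.75, 0.50 };   /* a made-up button area */
    printf("xp=%.3f yp=%.3f -> %s\n", xp, yp,
           in_region(xp, yp, ok_button) ? "hit the button" : "missed it");
    return 0;
}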

Your method would require something a lot more complicated, involving 
determining if the right texture is on that part of the object, if it 
is, where that is in relationship "to" that texture, and a mess of other 
things that are, frankly, not likely to be in the library anyway, since 
textures are just mapped via GPU to the surfaces, and there probably are 
not any functions, either in the GPU, or in the library used to place 
them there, that can return that kind of information, especially not 
with enough "precision" to tell you where you touched it, not just, 
"Yes, the texture overlaps the face you touched."


-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>


From: Darren New
Subject: Re: Any thoughts on this...
Date: 24 May 2008 13:10:37
Message: <48384c0d$1@news.povray.org>
Patrick Elliott wrote:
> Actually, that *won't* work as I intended.

Well, adjust as needed.

> In most cases you are going 
> to be dealing with something that only has *one* side to it, like a 
> HUD. In other cases, you may be dealing with something that has a 
> complex surface, but you don't want/need to know which "triangle" was 
> touched; you need to know where on the "texture" it was touched, which 
> "may" overlap several such triangles, if it's a complex shape.

Sure. But the client knows why it drew the pixel you clicked on in red, 
or blue, or orange. So the client is already doing all the math to 
figure out exactly which texture is drawn where on each pixel. That's my 
point.

After that, the server only needs to know if the client's request is 
*possible* to make sure it's legal.

> client/server determines that you "would have" also touched that face, 

Well, is it the client, or the server, that figures that out? I thought 
that's what you were asking.

> It won't work with *hugely* complex surfaces, but for 
> anything that uses a relatively flat surface anyway, the result will, 
> even if you map over something not 100% flat, be "close" to what you 
> figured when designing the texture. And of course, if you had such a 
> system, you could add a feature to any "editors" that may be designed 
> later to produce an "accurate" location, even if it is on a complex 
> surface.

But you don't have to do that. The client is already doing all the 
"accurate location" math when it draws the object on the screen, even 
for a HUD.

In other words, start out not thinking of it as "where on the object did 
I touch", but instead think of it as "where on the screen did I touch" 
followed by "what did I draw on the screen at that location".
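
One common way a client answers "what did I draw at that pixel" is to keep 
(or render on demand) an ID buffer alongside the color buffer, so a click 
turns into a single array lookup. A toy sketch in C, with the actual rendering 
omitted and every name invented:

/* Toy sketch: the renderer writes an object/face/triangle ID for each
   pixel it draws; picking is then just reading the entry under the mouse. */
#include <stdio.h>
#include <stdint.h>

#define W 8
#define H 4

typedef struct { uint32_t object_id; uint16_t face, triangle; } PixelId;

static PixelId id_buffer[H][W];                 /* filled in while drawing */

static PixelId pick(int mouse_x, int mouse_y)
{
    return id_buffer[mouse_y][mouse_x];         /* the whole "pick" step   */
}

int main(void)
{
    /* pretend the renderer drew object 42, face 1, triangle 7 over a patch */
    for (int y = 1; y < 3; y++)
        for (int x = 2; x < 6; x++) {
            PixelId p = { 42, 1, 7 };
            id_buffer[y][x] = p;
        }

    PixelId hit = pick(4, 2);                   /* mouse click at (4, 2)   */
    printf("clicked object %u, face %u, triangle %u\n",
           (unsigned)hit.object_id, (unsigned)hit.face, (unsigned)hit.triangle);
    return 0;
}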

> Your method would require something a lot more complicated, involving 
> determining if the right texture is on that part of the object, if it 
> is, where that is in relationship "to" that texture, and a mess of other 
> things that are, frankly, not likely to be in the library anyway, since 
> textures are just mapped via GPU to the surfaces, and there probably are 
> not any functions, either in the GPU, or in the library used to place 
> them there, that can return that kind of information,

That seems like a pretty sucky library. :-)  Obviously the math is easy 
enough to do in a small fraction of a second, tho.  In any case, 
anything you can offload to the client is something you *should* offload 
to the client, in that architecture. Once the client has figured out 
what you touched where, chances are the server can confirm it more 
easily than it can iterate through all the possible textures and prims 
within visual distance of your camera. Especially if you're close to the 
edge of a server.

-- 
   Darren New / San Diego, CA, USA (PST)
     "That's pretty. Where's that?"
          "It's the Age of Channelwood."
     "We should go there on vacation some time."


From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 25 May 2008 22:40:31
Message: <MPG.22a3d0edd9b3cdf398a15d@news.povray.org>
In article <48384c0d$1@news.povray.org>, dne### [at] sanrrcom says...
> That seems like a pretty sucky library. :-)  Obviously the math is easy 
> enough to do in a small fraction of a second, tho.  In any case, 
> anything you can offload to the client is something you *should* offload 
> to the client, in that architecture. Once the client has figured out 
> what you touched where, chances are the server can confirm it more 
> easily than it can iterate through all the possible textures and prims 
> within visual distance of your camera. Especially if you're close to the 
> edge of a server.
> 
Well, I admit I don't know much about 3D libraries of the type they are 
actually using, so I wasn't sure how much was handled by the software and 
how much by the GPU, with respect to textures.

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>


From: Darren New
Subject: Re: Any thoughts on this...
Date: 26 May 2008 01:45:08
Message: <483a4e64$1@news.povray.org>
Patrick Elliott wrote:
> Well, I admit I don't know much about 3D libraries of the type they are 
> actually using, so I wasn't sure how much was handled by the software and 
> how much by the GPU, with respect to textures.

I don't know either. But certainly the server won't be able to use the 
GPU to calculate anything, and it's clearly something you want to 
offload out to the client as much as possible. I imagine the client is 
at least doing the clipping-to-viewport or something that would reduce 
the amount of calculation needed.

-- 
   Darren New / San Diego, CA, USA (PST)
     "That's pretty. Where's that?"
          "It's the Age of Channelwood."
     "We should go there on vacation some time."

