Any thoughts on this... (Message 7 to 16 of 16)
From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 23 May 2008 23:35:33
Message: <MPG.22a13af354b5b0a798a15c@news.povray.org>
In article <48363e73@news.povray.org>, dne### [at] sanrrcom says...
> Patrick Elliott wrote:
> > Well. Yeah, I figure this is the case, but I also figured that complex 
> > objects "might" be a bit more complicated than testing a bounding box 
> > around them.
> 
> I'm not sure you followed my description.
> 
> The client looks at the click, and decides which facet of which prim was 
> drawn where you clicked. The server isn't involved yet.
> 
> The client then sends to the server the fact that you clicked at a 
> particular point on a particular triangle in a particular mesh on a 
> particular prim. The server merely needs to verify that the normal of 
> that triangle points towards the avatar and isn't blocked by something 
> else.
> 
> I.e., it makes the hard part ("what did I hit where") run on the client, 
> and the server only has to verify that (say) all three corners of the 
> triangle are in view of the camera. The actual mouse position relative 
> to the screen, for example, never needs to go to the server.
> 
Actually, that *won't* work as I intended. In most cases you are going 
to be dealing with something that only has *one* side to it, like a 
HUD. In other cases, you may be dealing with something that has a 
complex surface, but you don't want/need to know which "triangle" was 
touched, you need to know where on the "texture" was touched, which 
"may" overlap several such triangles, if it's a complex shape. Using 
tests against which face was touched just dumps you right back in the 
same "Oh, prim xyz was touched, now what?" case you had before, since 
you then would have to figure out if the texture is "on" that triangle, 
which "part" of it is on that triangle, and "if" that somehow connected 
to what you are looking for in the script. I.e., a total mess.

What I am looking at, instead, is much simpler. You have some complex 
object that "happens" to fit in a box that is 5x3x1, and a texture on 
*one* of the 5x3 faces, that is towards you. You touch it. The 
client/server determines that you "would have" also touched that face, 
of that virtual "box", and based on that, returns a set of coordinates 
xp and yp, which range between 0-1 (you might have stretched the 
texture, so you can figure out where something "should be" based on 
percentage of the texture far more easily than by "dimension"). So, if 
you touch the virtual box at some place like 3.41x1.21, you get the 
"apparent" location on the texture (assuming you touched the object at 
all, since a complex one may not have any part of itself "at" those 
coordinates): xp = 0.682 and yp = 0.4033. You can then test if 
that falls within a region on the texture that would have had your 
button on it. It won't work with *hugely* complex surfaces, but for 
anything that uses a relatively flat surface anyway, the result will, 
even if you map over something not 100% flat, be "close" to what you 
figured when designing the texture. And of course, if you had such a 
system, you could add a feature to any "editors" that may be designed 
later to produce an "accurate" location, even if it is on a complex 
surface.
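
(For what it's worth, a minimal C++ sketch of that normalized-coordinate idea, 
assuming the hit point has already been projected onto the 5x3 face; the names 
and helper are purely illustrative, not any actual client or LSL function.)

#include <cstdio>

// Illustrative only: map a hit on a faceWidth x faceHeight bounding-box face
// to 0-1 "texture percentage" coordinates, as described above.
struct FaceUV { float xp, yp; };

FaceUV NormalizeOnFace(float hitX, float hitY, float faceWidth, float faceHeight)
{
    FaceUV uv;
    uv.xp = hitX / faceWidth;    // fraction across the face
    uv.yp = hitY / faceHeight;   // fraction up the face
    return uv;
}

int main()
{
    // The example above: a 5x3 face touched at 3.41 x 1.21.
    FaceUV uv = NormalizeOnFace(3.41f, 1.21f, 5.0f, 3.0f);
    std::printf("xp = %.3f, yp = %.4f\n", uv.xp, uv.yp);  // xp = 0.682, yp = 0.4033
    // A script could now test whether (xp, yp) falls inside the rectangle of
    // the texture that holds the "button".
    return 0;
}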

Your method would require something a lot more complicated, involving 
determining if the right texture is on that part of the object, if it 
is, where that is in relationship "to" that texture, and a mess of other 
things that are, frankly, not likely to be in the library anyway, since 
textures are just mapped via GPU to the surfaces, and there probably are 
not any functions, either in the GPU, or in the library used to place 
them there, that can return that kind of information, especially not 
with enough "precision" to tell you where you touched it, not just, 
"Yes, the texture overlaps the face you touched."


-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>



From: Darren New
Subject: Re: Any thoughts on this...
Date: 24 May 2008 13:10:37
Message: <48384c0d$1@news.povray.org>
Patrick Elliott wrote:
> Actually, that *won't* work as I intended.

Well, adjust as needed.

> In most cases you are going 
> to be dealing with something that only has *one* side to it, like a 
> HUD. In other cases, you may be dealing with something that has a 
> complex surface, but you don't want/need to know which "triangle" was 
> touched, you need to know where on the "texture" was touched, which 
> "may" overlap several such triangles, if its a complex shape.

Sure. But the client knows why it drew the pixel you clicked on in red, 
or blue, or orange. So the client is already doing all the math to 
figure out exactly which texture is drawn where on each pixel. That's my 
point.

After that, the server only needs to know if the client's request is 
*possible* to make sure it's legal.

> client/server determines that you "would have" also touched that face, 

Well, is it the client, or the server, that figures that out? I thought 
that's what you were asking.

> It won't work with *hugely* complex surfaces, but for 
> anything that uses a relatively flat surface anyway, the result will, 
> even if you map over something not 100% flat, be "close" to what you 
> figured when designing the texture. And of course, if you had such a 
> system, you could add a feature to any "editors" that may be designed 
> later to produce an "accurate" location, even if it is on a complex 
> surface.

But you don't have to do that. The client is already doing all the 
"accurate location" math when it draws the object on the screen, even 
for a HUD.

In other words, start out not thinking of it as "where on the object did 
I touch", but instead think of it as "where on the screen did I touch" 
followed by "what did I draw on the screen at that location".

> Your method would require something a lot more complicated, involving 
> determining if the right texture is on that part of the object, if it 
> is, where that is in relationship "to" that texture, and a mess of other 
> things that are, frankly, not likely to be in the library anyway, since 
> textures are just mapped via GPU to the surfaces, and there probably are 
> not any functions, either in the GPU, or in the library used to place 
> them there, that can return that kind of information,

That seems like a pretty sucky library. :-)  Obviously the math is easy 
enough to do in a small fraction of a second, tho.  In any case, 
anything you can offload to the client is something you *should* offload 
to the client, in that architecture. Once the client has figured out 
what you touched where, chances are the server can confirm it more 
easily than it can iterate through all the possible textures and prims 
within visual distance of your camera. Especially if you're close to the 
edge of a server.

-- 
   Darren New / San Diego, CA, USA (PST)
     "That's pretty. Where's that?"
          "It's the Age of Channelwood."
     "We should go there on vacation some time."



From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 25 May 2008 22:40:31
Message: <MPG.22a3d0edd9b3cdf398a15d@news.povray.org>
In article <48384c0d$1@news.povray.org>, dne### [at] sanrrcom says...
> That seems like a pretty sucky library. :-)  Obviously the math is easy 
> enough to do in a small fraction of a second, tho.  In any case, 
> anything you can offload to the client is something you *should* offload 
> to the client, in that architecture. Once the client has figured out 
> what you touched where, chances are the server can confirm it more 
> easily than it can iterate through all the possible textures and prims 
> within visual distance of your camera. Especially if you're close to the 
> edge of a server.
> 
Well, I admit I don't know much about 3D libraries of the type they are 
actually using, so wasn't sure how much was handled via the software and 
how much the GPU, with respect to textures.
 

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>



From: Darren New
Subject: Re: Any thoughts on this...
Date: 26 May 2008 01:45:08
Message: <483a4e64$1@news.povray.org>
Patrick Elliott wrote:
> Well, I admit I don't know much about 3D libraries of the type they are 
> actually using, so wasn't sure how much was handled via the software and 
> how much the GPU, with respect to textures.

I don't know either. But certainly the server won't be able to use the 
GPU to calculate anything, and it's clearly something you want to 
offload out to the client as much as possible. I imagine the client is 
at least doing the clipping-to-viewport or something that would reduce 
the amount of calculation needed.

-- 
   Darren New / San Diego, CA, USA (PST)
     "That's pretty. Where's that?"
          "It's the Age of Channelwood."
     "We should go there on vacation some time."



From: scott
Subject: Re: Any thoughts on this...
Date: 26 May 2008 02:53:31
Message: <483a5e6b@news.povray.org>
> Actually, that *won't* work as I intended. In most cases you are going
> to be dealing with something that only has *one* side to it, like a
> HUD.

How is a HUD with one side different to any other mesh in the algorithm 
Darren described?  It should just work.

> In other cases, you may be dealing with something that has a
> complex surface, but you don't want/need to know which "triangle" was
> touched, you need to know where on the "texture" was touched,

For example, DirectX provides a function called D3DXIntersect which takes a 
mesh and a ray, then returns which face was hit and the barycentric hit 
coordinates within that face.  You can then do *very* simple math with the 
texture coordinates of that face to work out exactly which texture 
coordinates the pointer is over.

> which
> "may" overlap several such triangles, if its a complex shape. Using
> tests against which face was touched just dumps you right back in the
> same "Oh, prim xyz was touched, now what?", case you had before, since
> you then would have to figure out if the texture is "on" that triangle,
> which "part" of it is on that triangle, and "if" that somehow connected
> to what you are looking for in the script. I.e., a total mess.

No, that seems like the logical way to do it to me.  First convert the 
pointer screen position to a ray (ie start point and direction) in 3D.  Then 
make a list of possible objects the ray intersects (using simple bounding 
spheres or boxes), then test each of those objects with a function like 
D3DXIntersect, then compute the texture coordinates of the hit.
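
(A small C++ sketch of that coarse culling step, before the exact per-triangle 
test; the Ray and Sphere types are assumptions, not from any particular engine.)

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray    { Vec3 origin, dir; };          // dir assumed to be normalized
struct Sphere { Vec3 center; float radius; };

// Cheap test used only to shortlist objects for the exact intersection test.
bool RayHitsSphere(const Ray& ray, const Sphere& s)
{
    Vec3 oc = Sub(s.center, ray.origin);
    if (Dot(oc, oc) <= s.radius * s.radius)
        return true;                          // ray starts inside the sphere
    float t = Dot(oc, ray.dir);               // closest approach along the ray
    if (t < 0.0f)
        return false;                         // sphere is behind the ray
    float d2 = Dot(oc, oc) - t * t;           // squared distance from center to the ray
    return d2 <= s.radius * s.radius;
}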

Note however that your system *might* allow the same part of texture to be 
used on more than 1 part of the mesh, the GPU and APIs certainly allow this.

> What I am looking at, instead, is much simpler. You have some complex
> object that "happens" to fit in a box that is 5x3x1, and a texture on
> *one* of the 5x3 faces, that is towards you. You touch it. The
> client/server determines that you "would have" also touched that face,
> of that virtual "box", and based on that, returns a set of coordinates
> xp and yp, which range between 0-1

That is going to fail and annoy users even with slight deviations from the 
bounding box face due to parallax errors.  If the user tries to click on a 
small area, your algorithm will give the wrong place and the user won't be 
able to click on buttons etc.

> Your method would require something a lot more complicated, involving
> determining if the right texture is on that part of the object,

You should have the information already if you are drawing the object on the 
screen.

> if it
> is, where that is in relationship "to" that texture,

Again, you should already have that information in the vertex list that you 
pass to the GPU.  Each vertex has x and y texture coordinates passed to it, 
and if you use something like D3DXIntersect that returns barycentric hit 
coordinates, the math is extremely simple.
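
(In other words, the per-vertex texture coordinates are already sitting in the 
data handed to the GPU; an assumed, typical vertex layout might look like this 
in C++.)

// Field names are an assumption, but some layout like this is uploaded per vertex.
struct Vertex {
    float x, y, z;     // position
    float nx, ny, nz;  // normal
    float u, v;        // texture coordinates -- exactly what the hit math needs
};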



From: Darren New
Subject: Re: Any thoughts on this...
Date: 26 May 2008 13:10:17
Message: <483aeef9$1@news.povray.org>
scott wrote:
> That is going to fail and annoy users even with slight deviations from 
> the bounding box face due to parallax errors.  If the user tries to 
> click on a small area, your algorithm will give the wrong place and the 
> user won't be able to click on buttons etc.

FWIW, if you're too far away from the object in 3D, the interface 
doesn't let you click on it at all. It's not like you're firing a weapon 
or something. :-)

Come to think of it, you might want to consider how to make it work if 
you're firing a weapon. I can see people wanting to use this sort of 
thing to determine where on the dart board your dart landed, etc. But 
that would be an entirely different algorithm, since you're no longer 
talking about the screen and camera, and for that matter you're no 
longer even necessarily running the code on the same server the client 
is connected to.

You'll need to figure out how to handle it if you're standing on one 
server and the object you're touching is on another server, too.

At least make the API capable of supporting those, so you don't get the 
PHP effect.

-- 
   Darren New / San Diego, CA, USA (PST)
     "That's pretty. Where's that?"
          "It's the Age of Channelwood."
     "We should go there on vacation some time."



From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 27 May 2008 00:46:29
Message: <MPG.22a53ff03be0974298a15e@news.povray.org>
In article <483aeef9$1@news.povray.org>, dne### [at] sanrrcom says...
> scott wrote:
> > That is going to fail and annoy users even with slight deviations from 
> > the bounding box face due to parallax errors.  If the user tries to 
> > click on a small area, your algorithm will give the wrong place and the 
> > user won't be able to click on buttons etc.
> 
> FWIW, if you're too far away from the object in 3D, the interface 
> doesn't let you click on it at all. It's not like you're firing a weapon 
> or something. :-)
> 
Yeah. It's likely fairly irrelevant. If you are not close enough, you can 
*already* touch the wrong thing, even when every "touch point" is an 
individual prim. Worrying about trying to touch 3-4 pixels on something 
accurately isn't going to matter at all, since you can't reliably do 
that "anyway".

> Come to think of it, you might want to consider how to make it work if 
> you're firing a weapon. I can see people wanting to use this sort of 
> thing to determine where on the dart board your dart landed, etc. But 
> that would be an entirely different algorithm, since you're no longer 
> talking about the screen and camera, and for that matter you're no 
> longer even necessarily running the code on the same server the client 
> is connected to.
> 
In the cases I have seen with arrows, they use multiple hollow cylinders, 
which is what we are trying to avoid here. Obviously a collision check 
between two objects is going to be "slightly" more accurate anyway, since 
you are actually intersecting two objects that can't "hit" in the wrong 
place, unless game gravity reverses itself, it's guided somehow, etc.

> You'll need to figure out how to handle it if you're standing on one 
> server and the object you're touching is on another server, too.
> 
Definitely.

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>



From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 27 May 2008 00:57:32
Message: <MPG.22a542897fb253c98a15f@news.povray.org>
In article <483a5e6b@news.povray.org>, sco### [at] scottcom says...
> > Actually, that *won't* work as I intended. In most cases you are going
> > to be dealing with something that only has *one* side to it, like a
> > HUD.
> 
> How is a HUD with one side different to any other mesh in the algorithm 
> Darren described?  It should just work.
> 
Umm. Comparison and contrast. First you mention the case where you think 
it "will" work, then you mention the one where you're not so certain.

> > In other cases, you may be dealing with something that has a
> > complex surface, but you don't want/need to know which "triangle" was
> > touched, you need to know where on the "texture" was touched,
> 
> For example, DirectX provides a function called D3DXIntersect which takes a 
> mesh and a ray, then returns which face was hit and the barycentric hit 
> coordinates within that face.  You can then do *very* simple math with the 
> texture coordinates of that face to work out exactly which texture 
> coordinates the pointer is over.
> 
This is a Windows, Mac *and* Linux client. OpenGL may have something 
similar, but knowing what it is doesn't necessarily help if you don't 
know how to get from that to the final result. lol I am sure it's simple 
math, but..

> > which
> > "may" overlap several such triangles, if its a complex shape. Using
> > tests against which face was touched just dumps you right back in the
> > same "Oh, prim xyz was touched, now what?", case you had before, since
> > you then would have to figure out if the texture is "on" that triangle,
> > which "part" of it is on that triangle, and "if" that somehow connected
> > to what you are looking for in the script. I.e., a total mess.
> 
> No, that seems like the logical way to do it to me.  First convert the 
> pointer screen position to a ray (ie start point and direction) in 3D.  Then 
> make a list of possible objects the ray intersects (using simple bounding 
> spheres or boxes), then test each of those objects with a function like 
> D3DXIntersect, then compute the texture coordinates of the hit.
> 
> Note however that your system *might* allow the same part of texture to be 
> used on more than 1 part of the mesh, the GPU and APIs certainly allow this.
> 
Yeah, and if you place them wrong, you could have it tiled. This isn't a 
huge issue, since, frankly, the point is to make a non-tiled texture, 
used so that it works from "one" view, with no copies into other areas 
of the object. You could also "test", to make sure it's the right face, 
but that adds more script overhead, which we are trying to minimize, not 
increase. The *major* things fracking SL right now are a) custom 
textures, since you can't cache them, being as most are copyrighted, b) 
idiots running bots made to "look" like real AVs, complete with clothes, 
so they can amp up their traffic scores, c) animation overrides, because 
there is currently no way to tell SL to simply, "Use Walk-Like-An-
Egyptian instead of default-walk, when you are walking", for example, d) 
particle effects, which some morons insist on glueing all over 
themselves, and e) scripts, both badly written ones, and ones that work 
well, but are contained in like 50 different objects in a 20 meter 
range. While some of that you can't do much about, getting rid of about 
50% of those scripts, which are all interconnected for door locks, 
stargate DHDs, linked this, linked that, etc., and reducing the textures 
"needed" to make some of those things, is bound to have some impact on 
the matter.

Someone else has already suggested adding procedural textures as well, 
as a means to bypass the issue of custom ones, in cases where it's just 
damn stupid to use them, like making rocks, or any other surface that 
doesn't "need" hand drawn Photoshop images.

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>



From: scott
Subject: Re: Any thoughts on this...
Date: 27 May 2008 03:19:40
Message: <483bb60c$1@news.povray.org>
> Umm. Comparison and contrast. First you mention the case where you think
> it "will" work, then you mention the one where your not so certain.

A HUD is no different to a mesh with 2 triangles.  Using the algorithm both 
Darren and I explained, it will just work.

> This is a Windows, Mac *and* Linux client. OpenGL may have something
> similar,

If not, I suspect the algorithm is pretty simple for checking ray/triangle 
intersections - in fact the source for the DirectX implementation is 
available in the SDK.  I guess POV has something similar in the source too.
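
(If you need to roll it yourself, here is a generic C++ sketch of the usual 
Moller-Trumbore ray/triangle test, returning t plus the barycentric u,v and a 
separate hit flag; this is textbook code, not taken from DirectX or the SL 
client.)

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(const Vec3& a, const Vec3& b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Returns true on a hit and fills in t (distance along the ray) and the
// barycentric coordinates u, v; the separate bool means a legitimate t of 0
// is never confused with "no hit".
bool RayTriangle(const Vec3& orig, const Vec3& dir,
                 const Vec3& v0, const Vec3& v1, const Vec3& v2,
                 float& t, float& u, float& v)
{
    const float EPS = 1e-7f;
    Vec3 edge1 = Sub(v1, v0);
    Vec3 edge2 = Sub(v2, v0);
    Vec3 pvec  = Cross(dir, edge2);
    float det  = Dot(edge1, pvec);
    if (std::fabs(det) < EPS) return false;      // ray parallel to the triangle
    float invDet = 1.0f / det;

    Vec3 tvec = Sub(orig, v0);
    u = Dot(tvec, pvec) * invDet;
    if (u < 0.0f || u > 1.0f) return false;

    Vec3 qvec = Cross(tvec, edge1);
    v = Dot(dir, qvec) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;

    t = Dot(edge2, qvec) * invDet;
    return t >= 0.0f;                            // hit in front of (or at) the origin
}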

> but knowing what it is doesn't necessarily help if you don't
> know how to get from that to the final result. lol I am sure it's simple
> math, but..

It's very simple.  If x,y are the barycentric coordinates returned from your 
triangle/ray intersection algorithm, the texture coordinates of the hit 
point are:

Tu = x * V1.u  +  y * V2.u  +  (1-x-y) * V3.u
Tv = x * V1.v  +  y * V2.v  +  (1-x-y) * V3.v

Where V1.u is the texture x coordinate of vertex 1, etc.
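
(That interpolation as a small C++ helper; the Vertex type here is just an 
assumption matching the formula above, and which vertex gets the (1-x-y) 
weight depends on the convention of your intersection routine.)

struct Vertex   { float u, v; };  // only the texture coordinates matter here
struct TexCoord { float u, v; };

// x, y are the barycentric coordinates returned by the ray/triangle test.
TexCoord HitTexCoord(float x, float y,
                     const Vertex& V1, const Vertex& V2, const Vertex& V3)
{
    TexCoord t;
    t.u = x * V1.u + y * V2.u + (1.0f - x - y) * V3.u;
    t.v = x * V1.v + y * V2.v + (1.0f - x - y) * V3.v;
    return t;
}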

> Someone else has already suggested adding procedural textures as well,
> as a means to bypass the issue of custom ones, in cases where it's just
> damn stupid to use them, like making rocks, or any other surface that
> doesn't "need" hand drawn Photoshop images.

That sounds like a good idea; sending a few hundred bytes of pixel shader code 
seems more efficient than the textures.  The engine on the client can then 
just render the texture and create the mipmaps locally once.  Or, if "live" 
textures are needed, the pixel shader can just be directly used in the game, 
which would allow animated textures :-)
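
(As a toy illustration of the idea, not anything SL actually supports: the 
client could generate a texture locally from a few parameters instead of 
downloading pixels. The checker below stands in for a fancier noise-based 
"rock" generator.)

#include <cstddef>
#include <cstdint>
#include <vector>

// Fill an RGBA texture with a two-color checker pattern. The point is that a
// handful of parameters replaces kilobytes of downloaded image data.
std::vector<std::uint8_t> MakeChecker(int width, int height, int cellSize)
{
    std::vector<std::uint8_t> pixels(static_cast<std::size_t>(width) * height * 4);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            bool dark = ((x / cellSize) + (y / cellSize)) % 2 == 0;
            std::uint8_t c = dark ? 60 : 200;
            std::size_t i = (static_cast<std::size_t>(y) * width + x) * 4;
            pixels[i + 0] = c;    // R
            pixels[i + 1] = c;    // G
            pixels[i + 2] = c;    // B
            pixels[i + 3] = 255;  // A
        }
    }
    return pixels;   // ready to upload and mipmap once on the client
}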



From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 29 May 2008 02:19:20
Message: <MPG.22a7f8d884fcfd7398a160@news.povray.org>
In article <483bb60c$1@news.povray.org>, sco### [at] scottcom says...
> > Umm. Comparison and contrast. First you mention the case where you think 
> > it "will" work, then you mention the one where you're not so certain.
> 
> A HUD is no different to a mesh with 2 triangles.  Using the algorithm both 
> Darren and I explained, it will just work.
> 
Lol. I think you are confusing *which one* I said would be the problem. The 
HUD is also *basically* a box, so is, for all intents and purposes, 
identical to its own bounding box. Obviously, any test against such a 
bounding box "will" produce an identical result (or close enough) to the 
one testing the *actual* object. Why would I think that HUDs would be 
the problem?

> > This is a Windows, Mac *and* Linux client. OpenGL may have something
> > similar,
> 
> If not, I suspect the algorithm is pretty simple for checking ray/triangle 
> intersections - in fact the source for the DirectX implementation is 
> available in the SDK.  I guess POV has something similar in the source too.
> 
Well, I am guessing it isn't "in" the API. There are a couple of pages I 
found with it, which led me to conclude that they are "probably" already 
using it in the client to test ray intersects for the mouse pointer. The 
problem is, they may be doing that using only the "yes it 
intersected" result, not by returning the actual coordinates of the 
intersect. Annoyingly (why do people have to make newbies' lives so bad?) 
the page that had the best explanation "also" said, "Of course, if you 
want the coordinates instead, it's trivial to return them." Uh... Right. 
Because "everything" is trivial to the guy that goes looking for how 
something works, but doesn't really know what they are doing. lol

> > but knowing what it is doesn't necessarily help if you don't
> > know how to get from that to the final result. lol I am sure it's simple
> > math, but..
> 
> It's very simple.  If x,y are the barycentric coordinates returned from your 
> triangle/ray intersection algorithm, the texture coordinates of the hit 
> point are:
> 
> Tu = x * V1.u  +  y * V2.u  +  (1-x-y) * V3.u
> Tv = x * V1.v  +  y * V2.v  +  (1-x-y) * V3.v
> 
> Where V1.u is the texture x coordinate of vertex 1, etc.
> 
Yep, absolutely simple. Now, can you translate it to English? ;) 
Seriously though, the full algorithm returns (t,u,v), not just (Tu,Tv), 
presuming you know how to get it to do so, otherwise it "apparently" 
only returns (t), which is 0 (or maybe NULL??) if no intersect happened. 
Probably NULL, since otherwise I don't get how you figure out if it 
happened, should the (t) happen "at" 0.

> > Someone else has already suggested adding procedural textures as well,
> > as a means to bypass the issue of custom ones, in cases where it's just
> > damn stupid to use them, like making rocks, or any other surface that
> > doesn't "need" hand drawn Photoshop images.
> 
> That sounds like a good idea; sending a few hundred bytes of pixel shader code 
> seems more efficient than the textures.  The engine on the client can then 
> just render the texture and create the mipmaps locally once.  Or, if "live" 
> textures are needed, the pixel shader can just be directly used in the game, 
> which would allow animated textures :-)
> 

The only issue being, SL is "likely" avoiding it, since it can't be an 
"asset" in the same way everything else is now. How do you "copyright" a 
procedural? lol But yeah, it's damn stupid feeding all the BS they do to 
the client, especially since it's all fed through using UUIDs, which the 
client then has to, I am guessing, do a "please send me the asset 
assigned to UUID blah", for every damn thing it displays. Not sure if 
every object gets one, or just "final" objects, or how the server, if 
it's dealing with combined objects, figures out which ones to "send", all 
at once, so only one ID is needed, or the main one, then all the linked 
ones, or... One could have nightmares trying to work out how they manage 
to make it work as well as it does, and it doesn't work *that* well. lol

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models, 3D Content, and 3D Software at DAZ3D!</A>

