POV-Ray : Newsgroups : povray.off-topic : Any thoughts on this...
  Any thoughts on this... (Message 11 to 16 of 16)
From: scott
Subject: Re: Any thoughts on this...
Date: 26 May 2008 02:53:31
Message: <483a5e6b@news.povray.org>
> Actually, that *won't* work as I intended. In most cases you are going
> to be dealing with something that only has *one* side to it, like a
> HUD.

How is a HUD with one side different to any other mesh in the algorithm 
Darren described?  It should just work.

> In other cases, you may be dealing with something that has a
> complex surface, but you don't want/need to know which "triangle" was
> touched, you need to know where on the "texture" was touched,

For example, DirectX provides a function called D3DXIntersect which takes a 
mesh and a ray, then returns which face was hit and the barycentric hit 
coordinates within that face.  You can then do *very* simple math with the 
texture coordinates of that face to work out exactly which texture 
coordinates the pointer is over.

> which
> "may" overlap several such triangles, if its a complex shape. Using
> tests against which face was touched just dumps you right back in the
> same "Oh, prim xyz was touched, now what?", case you had before, since
> you then would have to figure out if the texture is "on" that triangle,
> which "part" of it is on that triangle, and "if" that somehow connected
> to what you are looking for in the script. I.e., a total mess.

No, that seems like the logical way to do it to me.  First convert the 
pointer screen position to a ray (ie start point and direction) in 3D.  Then 
make a list of possible objects the ray intersects (using simple bounding 
spheres or boxes), then test each of those objects with a function like 
D3DXIntersect, then compute the texture coordinates of the hit.
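
The pipeline described above (screen position to ray, broad-phase bounding test, then the exact per-object test) can be sketched roughly as follows. This is a generic illustration, not the SL client's or DirectX's actual code; the function names and the camera-basis parameters are assumptions:

```python
import math

def screen_to_ray(mx, my, width, height, fov_y, cam_pos, cam_fwd, cam_right, cam_up):
    """Convert a mouse position in pixels to a world-space ray
    (origin, unit direction) for a simple pinhole camera."""
    ndc_x = (2.0 * mx / width) - 1.0    # -1 (left) .. +1 (right)
    ndc_y = 1.0 - (2.0 * my / height)   # +1 (top)  .. -1 (bottom)
    half_h = math.tan(fov_y / 2.0)
    half_w = half_h * (width / height)
    d = [cam_fwd[i] + ndc_x * half_w * cam_right[i] + ndc_y * half_h * cam_up[i]
         for i in range(3)]
    norm = math.sqrt(sum(c * c for c in d))
    return cam_pos, [c / norm for c in d]

def ray_hits_sphere(orig, dirn, center, radius):
    """Cheap broad-phase test: does the ray pass within `radius`
    of `center`?  Used to cull objects before the exact mesh test."""
    oc = [center[i] - orig[i] for i in range(3)]
    t = sum(oc[i] * dirn[i] for i in range(3))   # closest approach along ray
    closest = [orig[i] + t * dirn[i] for i in range(3)]
    d2 = sum((center[i] - closest[i]) ** 2 for i in range(3))
    return t >= 0 and d2 <= radius * radius
```

Only objects passing the cheap sphere test would then go to the exact D3DXIntersect-style mesh test.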

Note however that your system *might* allow the same part of texture to be 
used on more than 1 part of the mesh, the GPU and APIs certainly allow this.

> What I am looking at, instead, is much simpler. You have some complex
> object that "happens" to fit in a box that is 5x3x1, and a texture on
> *one* of the 5x3 faces, that is towards you. You touch it. The
> client/server determines that you "would have" also touched that face,
> of that virtual "box", and based on that, returns a set of coordinates
> xp and yp, which range between 0 and 1.

That is going to fail and annoy users even with slight deviations from the 
bounding box face due to parallax errors.  If the user tries to click on a 
small area, your algorithm will give the wrong place and the user won't be 
able to click on buttons etc.

> Your method would require something a lot more complicated, involving
> determining if the right texture is on that part of the object,

You should have the information already if you are drawing the object on the 
screen.

> if it
> is, where that is in relationship "to" that texture,

Again, you should already have that information in the vertex list that you 
pass to the GPU.  Each vertex has x and y texture coordinates passed to it, 
and if you use something like D3DXIntersect that returns barycentric hit 
coordinates, the math is extremely simple.



From: Darren New
Subject: Re: Any thoughts on this...
Date: 26 May 2008 13:10:17
Message: <483aeef9$1@news.povray.org>
scott wrote:
> That is going to fail and annoy users even with slight deviations from 
> the bounding box face due to parallax errors.  If the user tries to 
> click on a small area, your algorithm will give the wrong place and the 
> user won't be able to click on buttons etc.

FWIW, if you're too far away from the object in 3D, the interface 
doesn't let you click on it at all. It's not like you're firing a weapon 
or something. :-)

Come to think of it, you might want to consider how to make it work if 
you're firing a weapon. I can see people wanting to use this sort of 
thing to determine where on the dart board your dart landed, etc. But 
that would be an entirely different algorithm, since you're no longer 
talking about the screen and camera, and for that matter you're no 
longer even necessarily running the code on the same server the client 
is connected to.

You'll need to figure out how to handle it if you're standing on one 
server and the object you're touching is on another server, too.

At least make the API capable of supporting those, so you don't get the 
PHP effect.

-- 
   Darren New / San Diego, CA, USA (PST)
     "That's pretty. Where's that?"
          "It's the Age of Channelwood."
     "We should go there on vacation some time."



From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 27 May 2008 00:46:29
Message: <MPG.22a53ff03be0974298a15e@news.povray.org>
In article <483aeef9$1@news.povray.org>, dne### [at] sanrrcom says...
> scott wrote:
> > That is going to fail and annoy users even with slight deviations from 
> > the bounding box face due to parallax errors.  If the user tries to 
> > click on a small area, your algorithm will give the wrong place and the 
> > user won't be able to click on buttons etc.
> 
> FWIW, if you're too far away from the object in 3D, the interface 
> doesn't let you click on it at all. It's not like you're firing a weapon 
> or something. :-)
> 
Yeah. It's likely fairly irrelevant. If you are not close enough, you can 
*already* touch the wrong thing, even when every "touch point" is an 
individual prim. Worrying about accurately touching a 3-4 pixel area 
isn't going to matter at all, since you can't reliably do that *anyway*.

> Come to think of it, you might want to consider how to make it work if 
> you're firing a weapon. I can see people wanting to use this sort of 
> thing to determine where on the dart board your dart landed, etc. But 
> that would be an entirely different algorithm, since you're no longer 
> talking about the screen and camera, and for that matter you're no 
> longer even necessarily running the code on the same server the client 
> is connected to.
> 
The cases I have seen with arrows use multiple hollow cylinders, 
which is what we are trying to avoid here. Obviously a collision detect 
between two objects is going to be "slightly" more accurate anyway, since 
you are actually intersecting two objects that can't "hit" in the wrong 
place, unless game gravity reverses itself, it's guided somehow, etc.

> You'll need to figure out how to handle it if you're standing on one 
> server and the object you're touching is on another server, too.
> 
Definitely.

-- 
void main () {

    if version = "Vista" {
      call slow_by_half();
      call DRM_everything();
    }
    call functional_code();
  }
  else
    call crash_windows();
}

<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>



From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 27 May 2008 00:57:32
Message: <MPG.22a542897fb253c98a15f@news.povray.org>
In article <483a5e6b@news.povray.org>, sco### [at] scottcom says...
> > Actually, that *won't* work as I intended. In most cases you are going
> > to be dealing with something that only has *one* side to it, like a
> > HUD.
> 
> How is a HUD with one side different to any other mesh in the algorithm 
> Darren described?  It should just work.
> 
Umm. Comparison and contrast. First you mention the case where you think 
it "will" work, then you mention the one where you're not so certain.

> > In other cases, you may be dealing with something that has a
> > complex surface, but you don't want/need to know which "triangle" was
> > touched, you need to know where on the "texture" was touched,
> 
> For example, DirectX provides a function called D3DXIntersect which takes a 
> mesh and a ray, then returns which face was hit and the barycentric hit 
> coordinates within that face.  You can then do *very* simple math with the 
> texture coordinates of that face to work out exactly which texture 
> coordinates the pointer is over.
> 
This is a Windows, Mac *and* Linux client. OpenGL may have something 
similar, but knowing what it is doesn't necessarily help if you don't 
know how to get from that to the final result. lol I am sure it's simple 
math, but..

> > which
> > "may" overlap several such triangles, if its a complex shape. Using
> > tests against which face was touched just dumps you right back in the
> > same "Oh, prim xyz was touched, now what?", case you had before, since
> > you then would have to figure out if the texture is "on" that triangle,
> > which "part" of it is on that triangle, and "if" that somehow connected
> > to what you are looking for in the script. I.e., a total mess.
> 
> No, that seems like the logical way to do it to me.  First convert the 
> pointer screen position to a ray (ie start point and direction) in 3D.  Then 
> make a list of possible objects the ray intersects (using simple bounding 
> spheres or boxes), then test each of those objects with a function like 
> D3DXIntersect, then compute the texture coordinates of the hit.
> 
> Note however that your system *might* allow the same part of texture to be 
> used on more than 1 part of the mesh, the GPU and APIs certainly allow this.
> 
Yeah, and if you place them wrong, you could have it tiled. This isn't a 
huge issue, since, frankly, the point is to make a non-tiled texture, 
used so that it works from "one" view, with no copies into other areas 
of the object. You could also "test" to make sure it's the right face, 
but that adds more script overhead, which we are trying to minimize, not 
increase. The *major* things fracking SL right now are:

a) custom textures, since you can't cache them, being as most are 
copyrighted,
b) idiots running bots made to "look" like real AVs, complete with 
clothes, so they can amp up their traffic scores,
c) animation overrides, because there is currently no way to tell SL to 
simply, "Use Walk-Like-An-Egyptian instead of default-walk, when you are 
walking", for example,
d) particle effects, which some morons insist on gluing all over 
themselves, and
e) scripts, both badly written ones, and ones that work well, but are 
contained in like 50 different objects in a 20 meter range.

While some of that you can't do much about, getting rid of about 
50% of those scripts, which are all interconnected for door locks, 
stargate DHDs, linked this, linked that, etc., and reducing the textures 
"needed" to make some of those things, is bound to have some impact on 
the matter.

Someone else has already suggested adding procedural textures as well, 
as a means to bypass the issue of custom ones, in cases where it's just 
damn stupid to use them, like making rocks, or any other surface that 
doesn't "need" hand-drawn Photoshop images.




From: scott
Subject: Re: Any thoughts on this...
Date: 27 May 2008 03:19:40
Message: <483bb60c$1@news.povray.org>
> Umm. Comparison and contrast. First you mention the case where you think
> it "will" work, then you mention the one where you're not so certain.

A HUD is no different to a mesh with 2 triangles.  Using the algorithm both 
Darren and I explained, it will just work.

> This is a Windows, Mac *and* Linux client. OpenGL may have something
> similar,

If not, I suspect the algorithm is pretty simple for checking ray/triangle 
intersections - in fact the source for the DirectX implementation is 
available in the SDK.  I guess POV has something similar in the source too.
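
If the API doesn't expose one, the test itself is short. Here is a minimal sketch of the usual ray/triangle algorithm (Möller-Trumbore), returning (t, u, v) on a hit and None on a miss, which also avoids the "is t = 0 a hit or a miss?" ambiguity. Note that in this convention (u, v) weight vertices 2 and 3, with vertex 1 getting weight 1-u-v; D3DX and other routines may order the weights differently:

```python
def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.  Returns (t, u, v)
    on a hit -- t is the distance along the ray, (u, v) are the
    barycentric coordinates of the hit point (weighting v1 and v2;
    v0 gets weight 1-u-v) -- or None on a miss."""
    def sub(a, b):   return [a[i] - b[i] for i in range(3)]
    def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(dirn, e2)
    det = dot(e1, h)
    if abs(det) < eps:            # ray parallel to triangle plane
        return None
    f = 1.0 / det
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(dirn, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return (t, u, v) if t > eps else None   # None, not t = 0, signals a miss
```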

> but knowing what it is doesn't necessarily help if you don't
> know how to get from that to the final result. lol I am sure its simple
> math, but..

It's very simple.  If x,y are the barycentric coordinates returned from your 
triangle/ray intersection algorithm, the texture coordinates of the hit 
point are:

Tu = x * V1.u  +  y * V2.u  +  (1-x-y) * V3.u
Tv = x * V1.v  +  y * V2.v  +  (1-x-y) * V3.v

Where V1.u is the texture x coordinate of vertex 1, etc.
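
As a sketch, that interpolation is a one-liner; `hit_uv` is a hypothetical helper name, and it follows the weighting convention above (x on vertex 1, y on vertex 2). Some intersection routines instead weight the *first* vertex by 1-x-y, so check which convention yours uses:

```python
def hit_uv(x, y, t1, t2, t3):
    """Interpolate texture coordinates at the hit point from
    barycentric weights (x, y): weight x on vertex 1, y on
    vertex 2, and (1-x-y) on vertex 3.  t1..t3 are the (u, v)
    texture coordinates of the triangle's three vertices."""
    w = 1.0 - x - y
    return (x * t1[0] + y * t2[0] + w * t3[0],
            x * t1[1] + y * t2[1] + w * t3[1])
```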

> Someone else has already suggested adding procedural textures as well,
> as a means to bypass the issue of custom ones, in cases where its just
> damn stupid to use them, like making rocks, or any other surface that
> doesn't "need" hand drawn Photoshop images.

That sounds like a good idea; sending a few hundred bytes of pixel shader 
code seems more efficient than the textures.  The engine on the client can 
then just render the texture and create the mipmaps locally once.  Or, if 
"live" textures are needed, the pixel shader can just be directly used in 
the game; that would allow animated textures :-)
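
As an illustrative sketch (not SL's actual asset mechanism), the "asset" for a procedural texture can be just a tiny function plus its parameters, with the pixels generated locally on the client instead of downloaded:

```python
def checker(width, height, cells=8):
    """Generate an RGB checkerboard locally: the whole 'asset'
    is this function and its parameters, a few dozen bytes of
    description instead of a full bitmap download."""
    pixels = []
    for yy in range(height):
        row = []
        for xx in range(width):
            # Alternate light/dark squares on a cells x cells grid
            on = ((xx * cells // width) + (yy * cells // height)) % 2
            c = 255 if on else 40
            row.append((c, c, c))
        pixels.append(row)
    return pixels
```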



From: Patrick Elliott
Subject: Re: Any thoughts on this...
Date: 29 May 2008 02:19:20
Message: <MPG.22a7f8d884fcfd7398a160@news.povray.org>
In article <483bb60c$1@news.povray.org>, sco### [at] scottcom says...
> > Umm. Comparison and contrast. First you mention the case where you think
> > it "will" work, then you mention the one where you're not so certain.
> 
> A HUD is no different to a mesh with 2 triangles.  Using the algorithm both 
> Darren and I explained, it will just work.
> 
Lol. Think you are confusing *which one* I said would be the problem. The 
HUD is also *basically* a box, so is, for all intents and purposes, 
identical to its own bounding box. Obviously, any test against such a 
bounding box "will" produce an identical result (or close enough) to 
one testing the *actual* object. Why would I think that HUDs would be 
the problem?

> > This is a Windows, Mac *and* Linux client. OpenGL may have something
> > similar,
> 
> If not, I suspect the algorithm is pretty simple for checking ray/triangle 
> intersections - in fact the source for the DirectX implementation is 
> available in the SDK.  I guess POV has something similar in the source too.
> 
Well, I am guessing it isn't "in" the API. There are a couple of pages I 
found with it, which led me to conclude that they are "probably" already 
using it in the client to test ray intersects for the mouse pointer. The 
problem is, they may be doing that using only the "yes it 
intersected" result, not by returning the actual coordinates of the 
intersect. Annoyingly (why do people have to make newbies' lives so bad?) 
the page that had the best explanation "also" said, "Of course, if you 
want the coordinates instead, it's trivial to return them." Uh... Right. 
Because "everything" is trivial to the guy that goes looking for how 
something works, but doesn't really know what he is doing. lol

> > but knowing what it is doesn't necessarily help if you don't
> > know how to get from that to the final result. lol I am sure it's simple
> > math, but..
> 
> It's very simple.  If x,y are the barycentric coordinates returned from your 
> triangle/ray intersection algorithm, the texture coordinates of the hit 
> point are:
> 
> Tu = x * V1.u  +  y * V2.u  +  (1-x-y) * V3.u
> Tv = x * V1.v  +  y * V2.v  +  (1-x-y) * V3.v
> 
> Where V1.u is the texture x coordinate of vertex 1, etc.
> 
Yep, absolutely simple. Now, can you translate it to English? ;) 
Seriously though, the full algorithm returns (t,u,v), not just (Tu,Tv), 
presuming you know how to get it to do so; otherwise it "apparently" 
only returns (t), which is 0 (or maybe NULL??) if no intersect happened. 
Probably NULL, since otherwise I don't get how you figure out if it 
happened, should the (t) happen "at" 0.

> > Someone else has already suggested adding procedural textures as well,
> > as a means to bypass the issue of custom ones, in cases where its just
> > damn stupid to use them, like making rocks, or any other surface that
> > doesn't "need" hand drawn Photoshop images.
> 
> That sounds like a good idea; sending a few hundred bytes of pixel shader 
> code seems more efficient than the textures.  The engine on the client can 
> then just render the texture and create the mipmaps locally once.  Or, if 
> "live" textures are needed, the pixel shader can just be directly used in 
> the game; that would allow animated textures :-)

The only issue being, SL is "likely" avoiding it, since it can't be an 
"asset" in the same way everything else is now. How do you "copyright" a 
procedural? lol But yeah, it's damn stupid feeding all the BS they do to 
the client, especially since it's all fed through using UUIDs, which the 
client then has to, I am guessing, do a "please send me the asset 
assigned to UUID blah" for, for every damn thing it displays. Not sure if 
every object gets one, or just "final" objects, or how the server, if 
it's dealing with combined objects, figures out which ones to "send": all 
at once, so only one ID is needed, or the main one, then all the linked 
ones, or... One could have nightmares trying to work out how they manage 
to make it work as well as it does, and it doesn't work *that* well. lol





Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.