Okay...I'm trying to do something here and I'm wondering if anyone has come
up with a technique for doing it before me. If so, please let me know; if
not, maybe I'll come up with something and post it here. Or maybe someone
smarter/faster than me will beat me to it.
I am looking for a way to specify coordinates in a 3D texture for the corners
of triangles (in a mesh or not) to map to.
In other words, something like:
triangle {
  Vertex_1, Vertex_2, Vertex_3
  texture_coords Coord_1, Coord_2, Coord_3
}
What I'm looking for is not UV mapping (I think), which I know is now
available in POV-Ray...the texture coordinates would be 3D coordinates
within a 3D texture, and the texture would then be interpolated along the
surface of the triangle.
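(To make that concrete: POV-Ray doesn't do this at the moment, but here is
roughly what I mean, sketched in plain Python/NumPy rather than SDL; the
function name and example values are just for illustration. Each surface
point blends the three per-vertex 3D texture coordinates with its
barycentric weights, and that blended 3D point is where the solid texture
would be evaluated.)

import numpy as np

def interpolated_tex_coord(bary, tex_coords):
    # bary       : (w0, w1, w2) barycentric weights of a point on the triangle
    # tex_coords : the three 3D texture-space coordinates given for the vertices
    w = np.asarray(bary, dtype=float)
    t = np.asarray(tex_coords, dtype=float)
    return w @ t   # weighted sum of the three texture-space points

# e.g. the centre of the triangle samples the 3D texture at the average
# of the three per-vertex texture coordinates:
print(interpolated_tex_coord((1/3, 1/3, 1/3),
                             [[0, 0, 0], [1, 0, 0], [0, 1, 0]]))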
What I'm trying to do is to have a mesh whose vertices are attached to
MegaPOV mechsim masses, textured by a normal POV 3D texture, but allow the
texture to "follow" the triangles as the mesh deforms over time.
The only two solutions that I've thought of so far (but not yet to the point
of implementation) are:
1) Assign a copy of the texture to each triangle in the mesh separately;
then, when drawing the deformed version of the mesh, figure out how each
triangle has been transformed from its original position (how to do that I
have no idea yet, but I have a sneaking suspicion that it is possible), and
apply that transformation to that triangle's copy of the texture.
2) Generate a small image for each triangle in the original mesh, containing
what the texture looks like for that triangle in its original position, and
then set UV coordinates for the triangle using that image as its
texture...which should result in the same texture being applied to each
triangle after the mesh is deformed. (There should be some way to put all
of this in a single texture image)
Both of these seem fairly complicated, not to mention memory/processor/disk
space intensive...does anyone else have a better idea?
Or, of course, I could just stick with UV mapping and just create images
with approximations to the 3D textures I want to use...
Cheers,
Joe
Alright,
I've thought about both of the options I outlined in the previous post, and
it seems that speed-wise #2 is the better option, because we just figure
out UV mappings and build an image for the UV coordinates to map
into...once...and then we can use it to handle any deformation of the mesh.
BUT, there could be artifacts due to scaling/shearing of triangles...which
#1 is immune to.
I have implemented option #1, and will post it, along with some example
images to show what I'm talking about, tomorrow (too late to do it
tonight).
If anyone still thinks they have a better way to do it, please let me know.
This is probably something that could quite easily be bundled into POV-Ray
itself.
Cheers,
Joe
Alright, here is an explanation of what I've done along with a link to an
interesting paper and to another posting with some sample code and videos.
Basically, this is what I'm doing:
Take two triangles, P = <p0,p1,p2> and Q = <q0,q1,q2>, and find the affine
transformation that maps triangle P onto triangle Q.
Find a point p3 in the direction of P's normal vector, at a distance from P
that is scaled by the lengths of the sides of P (we need this because the
affine transformation that moves P to Q could change the size/shape of the
triangle). I scale it by the square root of the length of the normal vector
given by the cross product of two sides of the triangle.
So...
p3 = p0 + ( ( p2 - p1 ) x ( p0 - p1 ) ) / sqrt( | ( p2 - p1 ) x ( p0 - p1 ) | )
And find a similar fourth point for Q:
q3 = q0 + ( ( q2 - q1 ) x ( q0 - q1 ) ) / sqrt( | ( q2 - q1 ) x ( q0 - q1 ) | )
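(In case a concrete version helps, here is that fourth-point calculation as
a small Python/NumPy sketch; the function name and the example triangle are
just mine.)

import numpy as np

def fourth_point(v0, v1, v2):
    # Offset v0 along the (unnormalised) triangle normal; dividing by the
    # square root of its length keeps the offset in proportion to the size
    # of the triangle.
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    n = np.cross(v2 - v1, v0 - v1)
    return v0 + n / np.sqrt(np.linalg.norm(n))

# A unit right triangle in the x-y plane gets its fourth point one unit
# above p0:
print(fourth_point((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # -> [0. 0. 1.]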
Aside: why do we need these two points? Later on we will be determining a
3x3 transformation matrix given 3x3 matrices representing the triangles.
The 3x3 transformation matrix will not contain translational
information...we are going to do the translational component separately.
When we determine M, we are going to treat one of the vertices of P as the
point of rotation, which means we actually only have two points (the other
two vertices) to use for determining the affine transformation...we need
three, so we use this calculated orthogonal point as the third one.
Alright, now we create our matrix representations of the triangles. We do
this by translating <p1,p2,p3> by -p0, moving the triangle so that p0 is
treated as the center of space for rotation/scale/shear transformations,
and similarly for Q.

     [ p1.x - p0.x, p2.x - p0.x, p3.x - p0.x ]
Tp = [ p1.y - p0.y, p2.y - p0.y, p3.y - p0.y ]
     [ p1.z - p0.z, p2.z - p0.z, p3.z - p0.z ]

     [ q1.x - q0.x, q2.x - q0.x, q3.x - q0.x ]
Tq = [ q1.y - q0.y, q2.y - q0.y, q3.y - q0.y ]
     [ q1.z - q0.z, q2.z - q0.z, q3.z - q0.z ]
We find the inverse of Tp:
Tp_inv = inverse( Tp )
Then multiply:
M = [Tq][Tp_inv] (thanks for simplifying that part for me)
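(Here is the same thing as a Python/NumPy sketch, with made-up example
triangles; the fourth points, the two matrices and M all come out in just a
few lines.)

import numpy as np

# Original triangle P and deformed triangle Q (example values only).
p0, p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
q0, q1, q2 = np.array([5., 0., 2.]), np.array([5., 2., 2.]), np.array([5., 0., 4.])

n_p = np.cross(p2 - p1, p0 - p1)                 # fourth points, as above
p3 = p0 + n_p / np.sqrt(np.linalg.norm(n_p))
n_q = np.cross(q2 - q1, q0 - q1)
q3 = q0 + n_q / np.sqrt(np.linalg.norm(n_q))

Tp = np.column_stack((p1 - p0, p2 - p0, p3 - p0))  # vertices re-centred on p0
Tq = np.column_stack((q1 - q0, q2 - q0, q3 - q0))  # vertices re-centred on q0

M = Tq @ np.linalg.inv(Tp)     # rotation/scale/shear part of the P -> Q mapping
print(M @ (p1 - p0), q1 - q0)  # these two should match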
Of course, what we've really found is the mapping of <p1,p2,p3> onto
<q1,q2,q3>...which is the mapping from P to Q, minus the translational
component. The actual mapping we want is what you get when you first
translate the triangle P as we did above to re-center it, then apply M to
it, then translate it back out to q0.
Q = Mt[P-p0]
Where Mt is M, plus the translational component that translates the
rotated/scaled/sheared triangle to its correct position in space:
     [ M00, M01, M02, q0.x ]
Mt = [ M10, M11, M12, q0.y ]
     [ M20, M21, M22, q0.z ]
     [   0,   0,   0,    1 ]
Whooh! It works!
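(Continuing directly from the Python sketch above, with M, p0 and q0 as
computed there: building Mt and checking that translating a vertex of P by
-p0 and then applying Mt really does land it on the corresponding vertex
of Q.)

Mt = np.eye(4)
Mt[:3, :3] = M
Mt[:3, 3] = q0    # translation column: move the re-centred triangle out to q0
for v, w in zip((p0, p1, p2), (q0, q1, q2)):
    print((Mt @ np.append(v - p0, 1.0))[:3], "->", w)   # left side should equal w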
If you're unsure about the "fourth point" thingie I do, there is a
discussion of why it needs to be done in this paper:
http://people.csail.mit.edu/jovan/assets/papers/sumner-2004-dtt.pdf
Sample code and videos can be found here:
http://news.povray.org/povray.binaries.misc/thread/%3Cweb.442bab7262605497a467f4a20%40news.povray.org%3E/