"solanb" <nomail@nomail> wrote in message
news:web.4af04b4ba22773f2cf0bfa9c0@news.povray.org...
> Ok, but then how to define the uv_vector corresponding to each triangle?
>
> My image to be mapped consists of spots of a given dimension (e.g. 1 cm
> diameter), and the scene's dimensions are also in cm. I want to keep the
> scale of my image when I project it onto the scene's triangles.
> Using uv_mapping without uv_vector doesn't produce what I expect, but the
> definition of uv_vector is not clear to me.
>
> Thanks in advance!
> Benoit
Thoughts about UV mapping in general:
If you have a triangle defined in 3D space by these points:
A = <x1, y1, z1>
B = <x2, y2, z2>
C = <x3, y3, z3>
you'll define a corresponding triangle in your image map, in two dimensions:
a = <u1, v1>
b = <u2, v2>
c = <u3, v3>
Traditionally, X/Y/Z are used for coordinates in 3D space, and U/V are used
for the coordinates in the mapped 2D space (W is used in matrix
representations of 3D points, so U & V are next in line from that end of the
Latin alphabet).
When a program like POV goes to figure out what color your 3D triangle is at
a given point, it takes that point in the plane of the triangle and works out
where it sits relative to points A, B, & C in 3D space. It then uses that same
relationship to find the corresponding point in the image map: the one that
sits in the same relative position with respect to points a, b, & c.
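That "same relationship" bookkeeping is barycentric interpolation. Here's a
small Python sketch of the idea, working in 2D in the plane of the triangle
(this is just an illustration, not POV's actual code, and the function names
are my own):

```python
# Express point p as a weighted mix of triangle corners a, b, c,
# then reuse the same weights on the UV corners. All points are
# (x, y) pairs in the plane of the triangle.

def barycentric_weights(p, a, b, c):
    # Weights (wa, wb, wc) with wa + wb + wc == 1 and
    # p == wa*a + wb*b + wc*c (componentwise).
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    wb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    wc = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return 1.0 - wb - wc, wb, wc

def interpolate_uv(p, xy_corners, uv_corners):
    # Same relative position among a, b, c as p has among A, B, C.
    wa, wb, wc = barycentric_weights(p, *xy_corners)
    (ua, va), (ub, vb), (uc, vc) = uv_corners
    return wa * ua + wb * ub + wc * uc, wa * va + wb * vb + wc * vc
```

If the UV triangle is the same shape as the space triangle, a point lands on
the same spot in the image; stretch the UV triangle and the image stretches
with it.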
The values for U & V are normally expressed as values from zero to one,
reading from left to right and bottom to top. When you specify the triangle,
you say which 3D points define the shape in space, which 2D points in the
flat picture they map to, and in what order.
So, the program figures out the relative distances from A, B, & C, applies
the same relative distances to a, b, & c, and comes up with a point on the 2D
plane, say <0.300, 0.250>. It then multiplies those numbers by the number of
pixels in each direction, finds the pixel in the image map at that point, and
uses the color of that pixel for the color of the triangle in the rendered
image.
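As a rough Python sketch of that last step (the 400x300 image size is purely
hypothetical, and POV handles all of this internally), note that V counts
from the bottom while image files usually store rows from the top:

```python
# Turn a (u, v) pair in [0, 1] x [0, 1] into a pixel column and row.
# WIDTH and HEIGHT are made-up example dimensions, not from the post.
WIDTH, HEIGHT = 400, 300

def uv_to_pixel(u, v):
    # U reads left to right; clamp so u == 1.0 stays inside the image.
    col = min(int(u * WIDTH), WIDTH - 1)
    # V reads bottom to top, so flip it to get a top-down row index.
    row = min(int((1.0 - v) * HEIGHT), HEIGHT - 1)
    return col, row
```

With the <0.300, 0.250> point above and this example image size, that lands
on column 120 of row 225.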
The mapping is always done from the plane of the triangle to the plane of the
image, so you should get what you're looking for there. You can use the same
UV coordinates for every leaf in the mesh, if you want, and they'll all get
the same coloring (before lighting, fog, whatever, etc.)
Here's an example of a shape made from two triangles in a POV mesh that form
a diamond shape, each triangle using the same part of an image (which might
look a little weird if you actually rendered it):
// The image map color
#declare Image_Based_Texture =
  texture {
    pigment {
      uv_mapping
      image_map {
        jpeg "Leaf.jpg"
      }
    }
  }

// The "leaf"
mesh2 {
  // Physical points in 3D space that define the shape
  vertex_vectors {
    4,
    < 0.000, 0.000, 0.000>,
    <-1.000, 1.000, 0.500>,
    < 0.750, 1.100, 0.250>,
    < 0.100, 2.000, 0.000>
  }
  // Points inside the image file
  uv_vectors {
    3,
    <0.000, 0.000>,
    <1.000, 0.000>,
    <0.500, 1.000>
  }
  // List of textures that are used for the shape
  texture_list {
    1,
    texture { Image_Based_Texture }
  }
  // Build the actual triangles of the shape from the
  // vertex_vectors list plus textures
  face_indices {
    2,
    <0, 1, 2>, 0,
    <1, 2, 3>, 0
  }
  // For each face, show the points in the image plane
  // that create its color
  uv_indices {
    2,
    <0, 1, 2>,
    <0, 1, 2>
  }
}