High!
Kenneth wrote:
> That mesh 2 generation trick is quite interesting; I've never tried modeling
> that way. I need to study your code (and Clipka's correction) to see if I can
> understand the method. Looks to be very useful!
>
> Ken W.
>
O.k., here it is - Yadgar's Little Asteroid Modeling Tutorial:
I started with a color topographic map of Phobos, made in 1997 by A.
Tayfun Öner/Calvin J. Hamilton
(http://www.solarviews.org/cap/mars/phobos5.htm - there are many more
such maps on www.solarviews.org!). As the height levels of Phobos'
relief are coded as hue values in the HSV/HSB color model (with purple
areas being lowest and red areas highest), one can easily obtain a
greyscale bitmap interpretable by POV-Ray by using the channel
separation function of a pixel-oriented graphics editor, for example
the GIMP.
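The hue-to-gray conversion that this channel separation performs can be sketched in a few lines of Python (an illustration of the principle, not GIMP's actual implementation), using the standard colorsys module:

```python
import colorsys

def hue_as_gray(r, g, b):
    """Map an RGB color (components 0..255) to a gray value 0..255 via its hue."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(h * 255)

# Pure red (the highest elevations on the map) has hue 0, so it comes out
# black; purplish colors (the lowest elevations) come out bright - which is
# why the image gets inverted later on.
print(hue_as_gray(255, 0, 0))    # -> 0
print(hue_as_gray(128, 0, 255))  # -> 191
```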
So I cut out the map part of the image (which then is exactly 800 x 400
pixels) and applied GIMP's separation function (Colors -> Components ->
Separate -> Color Mode: HSV). I found that the original image was not
exactly calibrated: the highest, red parts were in fact already slightly
purplish, which made them stand out as white "islands" amidst black
regions in the hue-value greyscale image. So I decided to adjust the hue
values of the original image.
In GIMP, this is done via Colors -> Hue/Saturation; in the dialog box
that appears, press "All" (in the middle of the six color fields in the
upper half) if it is not pre-selected yet, then move the "Hue" slider 5
units to the right and confirm with "OK". After this, separate the image
channels once again.
Since it is probably more convenient to assign the lowest grayscale
values to the lowest radius values rather than vice versa, you should
then invert the image (Colors -> Invert).
For some strange reason, although the hue values of the original image
cover the full range of the HSV model (0 to 360), the gray values in the
separated image do not. So, to avoid complications when programming the
mesh2 reading routine, it's best to stretch the contrast to the maximum,
so that the full grayscale range actually shows up in the image. Click
on Colors -> Automatic -> Stretch Contrast; you can check the result
with Colors -> Information -> Histogram (after stretching, the black
curve should span the entire width from 0 to 255).
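What "Stretch Contrast" does can be sketched in Python (a simplified per-pixel model, not GIMP's code): the gray range actually present in the image is rescaled linearly so that the darkest value becomes 0 and the brightest 255.

```python
def stretch_contrast(pixels):
    """Linearly rescale gray values so they span the full range 0..255."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:  # flat image: nothing to stretch
        return list(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

print(stretch_contrast([40, 100, 200]))  # -> [0, 96, 255]
```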
Now you've got a POV-readable height bitmap! As the original image
already contained some compression artifacts, I recommend storing the
grayscale image in a lossless format, e.g. PNG.
Now to the programming in POV-Ray:
When used as a pigment, every bitmap, regardless of its actual
dimensions, is fitted into a square of 1 x 1 POV units and (unless
rotated or translated, of course) displayed on the x-y plane. The lower
left corner of the bitmap sits at the origin, the upper right corner at
<1, 1> (as all pigment patterns in POV-Ray are three-dimensional and
infinite, the bitmap pattern extends infinitely in both the positive and
negative z direction).
In POV-Ray, a function (in the functions.inc file, which has to be
included) exists to obtain the color value of any pigment
at any given point in 3-D space: eval_pigment(). As we want to model
elevations of an asteroidal body corresponding to grayscale values in a
bitmap, we now need to read out the gray value of every pixel.
The syntax for eval_pigment() is: eval_pigment(<Name of pigment>,
<Point at which color is to be read>), so we don't need to have the
image map explicitly displayed in the scene; just defining it will do:
#declare PhobosRelief =
pigment
{
image_map { png "solarsys/phobos_topo.png" }
}
According to the number of pixels in both dimensions (x and y), we read
in the color data from a nested loop; to get reliable results, we pick
the color from the center of each pixel, so that the loop will look like:
#declare a=0; // counter for rows
#while (a<400)
#declare b=0; // counter for pixels in each row
#while (b<800)
#declare GrayValue = eval_pigment(PhobosRelief, <(0.5+b)*(1/800),
(0.5+a)*(1/400), 0>).gray; // .gray because we just want a single
// gray value, not the whole rgb vector
#declare b=b+1;
#end
#declare a=a+1;
#end
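The offset of 0.5 before dividing by the image dimension is what puts the sample point at the center of each pixel; a quick Python check (an illustration, independent of POV-Ray) shows that all sample coordinates then fall strictly inside the unit square:

```python
cols, rows = 800, 400
u = [(0.5 + b) / cols for b in range(cols)]  # horizontal sample coordinates
v = [(0.5 + a) / rows for a in range(rows)]  # vertical sample coordinates

# Both ranges stay strictly inside (0, 1), so we never sample exactly on a
# pixel edge or outside the bitmap.
print(0 < u[0] and u[-1] < 1)  # -> True
print(0 < v[0] and v[-1] < 1)  # -> True
```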
An asteroid can be understood as an irregular spheroid, so that each
point of the surface can be described by geographical coordinates plus
the distance to the center of the body, i.e. the local radius.
The radius for each surface point (= pixel of the grayscale bitmap) is
calculated from the minimal radius, the difference between maximal and
minimal radius, and the respective normalized gray value between 0 and 1.
As seen in the original topographic map image, the lowest radius is 8.1
km, while the highest radius is 14 km, so the radius for each pixel is
calculated as follows:
#declare rd = 8.1+GrayValue*(14-8.1);
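A quick sanity check of this formula in Python (assuming GrayValue is the normalized value returned by eval_pigment, i.e. 0.0 to 1.0): the darkest pixel maps to the minimal radius, the brightest to the maximal one.

```python
def radius(gray):
    """Local radius in km from a normalized gray value (0.0 .. 1.0)."""
    return 8.1 + gray * (14 - 8.1)

print(radius(0.0))  # lowest point: 8.1 km
# radius(1.0) gives the highest point, 14 km (up to floating point rounding)
```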
Points on the surface of spheroids are defined by geographical
coordinates (latitude and longitude), which are derived here from the
two-dimensional coordinates of pixels in the greyscale map: the 800
points per row equal 360 degrees of longitude, the 400 rows equal -90 to
90 degrees of latitude. As the bitmap pigment spans <0, 0> to <1, 1>, we
begin at the "south pole", i. e. -90 degrees.
#declare SurfacePoint =
<sin(radians(b*(360/800)))*cos(radians(-90+a*(180/400))),
sin(radians(-90+a*(180/400))),
cos(radians(b*(360/800)))*cos(radians(-90+a*(180/400)))>;
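The same latitude/longitude-to-unit-vector conversion, transcribed into Python as an illustrative sketch (not POV-Ray code): every resulting point should lie on the unit sphere, which makes for an easy check.

```python
from math import sin, cos, radians, sqrt

def surface_point(a, b):
    """Unit vector for row a (0..399) and column b (0..799)."""
    lon = b * (360 / 800)        # 0 .. 360 degrees of longitude
    lat = -90 + a * (180 / 400)  # -90 .. +90 degrees of latitude
    return (sin(radians(lon)) * cos(radians(lat)),
            sin(radians(lat)),
            cos(radians(lon)) * cos(radians(lat)))

x, y, z = surface_point(123, 456)
print(abs(sqrt(x*x + y*y + z*z) - 1) < 1e-9)  # -> True: on the unit sphere
```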
The advantage of the mesh2 object compared with the older mesh object is
that with a mesh2, each surface point (vertex) needs to be declared and
read into the internal calculation matrix only once, while the old mesh
object consists of triangles given as triples of points: each triangle
needed nine floating point values, and since most points are repeated
between neighboring triangles, this led to much longer parsing times and
higher memory use.
Mesh2 objects, by contrast, use index numbers of points in their
triangle definitions rather than the actual points, so that each point
needs to be defined only once.
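Rough bookkeeping for the 800 x 400 grid makes the difference concrete (a back-of-the-envelope sketch in Python; the exact memory layout inside POV-Ray is an assumption here): the old mesh stores roughly six times as many floats, and parses each shared coordinate again and again.

```python
cols, rows = 800, 400
vertices = cols * rows                   # 320000 shared surface points
triangles = (rows - 1) * (cols - 1) * 2  # 637602 faces (no wrap-around seam)

old_mesh_floats = triangles * 9  # old mesh: 3 full points per triangle
mesh2_floats = vertices * 3      # mesh2: each point stored exactly once

print(old_mesh_floats)  # -> 5738418
print(mesh2_floats)     # -> 960000
```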
So, we get the following code:
mesh2
{
vertex_vectors
{
320000 // number of vertices: 800 x 400 pixels!
#declare a=0; // counter for rows
#while (a<400)
#declare b=0; // counter for pixels in each row
#while (b<800)
#declare GrayValue = eval_pigment(PhobosRelief, <(0.5+b)*(1/800),
(0.5+a)*(1/400), 0>).gray; // .gray because we just want a single
// gray value, not the whole rgb vector
#declare rd = 8.1+GrayValue*(14-8.1);
#declare SurfacePoint =
<sin(radians(b*(360/800)))*cos(radians(-90+a*(180/400))),
sin(radians(-90+a*(180/400))),
cos(radians(b*(360/800)))*cos(radians(-90+a*(180/400)))>;
rd*SurfacePoint // the vertex must be generated explicitly,
// not just defined!
#declare b=b+1;
#end
#declare a=a+1;
#end
}
Here the vertices are generated and continuously assigned index numbers
(here, from 0 to 319999); part 2 of a mesh2 is the definition of surface
triangles from these vertices.
Starting with vertex 0(b) in row 0(a), all triangles are built according
to the following scheme:
<a*800+b, a*800+b+1, (a+1)*800+b>,
#if (b=798 & a=398)
<319998, 319999, 319199>
#else
<(a+1)*800+b, (a+1)*800+b+1, a*800+b+1>,
#end
The #if clause is necessary because the last vector statement must not
be followed by a comma.
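The scheme above can be transcribed into Python (an illustrative check, not part of the scene file) to verify that it produces exactly 637602 triangles and that the special last triangle matches the general formula:

```python
cols, rows = 800, 400
faces = []
for a in range(rows - 1):      # a = 0 .. 398
    for b in range(cols - 1):  # b = 0 .. 798
        faces.append((a*cols + b, a*cols + b + 1, (a+1)*cols + b))
        faces.append(((a+1)*cols + b, (a+1)*cols + b + 1, a*cols + b + 1))

print(len(faces))  # -> 637602
print(faces[-1])   # -> (319998, 319999, 319199), the #if special case
```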
Finally, we get this code:
mesh2 // Phobos
{
vertex_vectors
{
320000
#declare a=0;
#while (a<400)
#declare b=0;
#while (b<800)
((8.1 + eval_pigment(PhobosRelief, <(0.5+b)*(1/800),
(0.5+a)*(1/400), 0>).red * (14-8.1)) *
<sin(radians(b*(360/800)))*cos(radians(-90+a*(180/400))),
sin(radians(-90+a*(180/400))),
cos(radians(b*(360/800)))*cos(radians(-90+a*(180/400)))>)/sc
#declare b=b+1;
#end
#declare a=a+1;
#end
}
face_indices
{
637602
#declare a=0;
#while (a<399)
#declare b=0;
#while (b<799)
<a*800+b, a*800+b+1, (a+1)*800+b>,
#if (b=798 & a=398)
<319998, 319999, 319199>
#else
<(a+1)*800+b, (a+1)*800+b+1, a*800+b+1>,
#end
#declare b=b+1;
#end
#declare a=a+1;
#end
}
texture
{
pigment { color rgb 0.3 }
finish { F_Standard_Planetary_Surface }
}
rotate -y*100
translate Pos_Phobos/sc
}
// end of code
This code renders in about one minute - an old-school mesh would have
taken many hours even on an Athlon 64!
See you in Khyberspace!
Yadgar