The Superpatch
was conceived and executed by Ron
Parker, parkerr@mail.fwi.com,
but it contains the
work of a great
many people who are listed in the
Superpatch Contributors
section below. Getting
here has been
a long process, starting in the days
of POV 3.02
and finally coming to a temporary stop
with the current
version, 3.1a.
The Superpatch
is an UNOFFICIAL build of POV-Ray
and is not supported
by the POV-Team. For
information
on getting the official POV-Ray, see
"Where to find
the official POV-Ray" at the end of
this document.
Spherical,
cylindrical, toroidal, and planar
warps, light_group,
isosurface pigment function:
Matthew Corey Brown (xenoarch)
Unicode support:
Jon A. Cruz
min_extent returns the minimum x,y,z values for a
#declared object.
This is one corner of the object's
bounding box.
Example:
#declare
MySphere = sphere { <0,0,0>, 1 }
#declare
BBoxMin = min_extent( MySphere );
object
{MySphere
texture {
pigment {color rgb 1}
}
}
sphere
{BBoxMin, .1
texture {
pigment {color red 1}
}
}
max_extent returns the maximum x,y,z values for a
#declared object: the opposite corner of the object's
bounding box. Example:
#declare
MySphere = sphere { <0,0,0>, 1 }
#declare
BBoxMax = max_extent( MySphere );
object
{MySphere
texture {
pigment {color rgb 1}
}
}
sphere
{BBoxMax, .1
texture {
pigment {color blue 1}
}
}
Note: Checking
the normal vector for <0,0,0> is
the only reliable
way to determine whether an
intersection
has actually occurred, as valid
intersections
can and do occur anywhere, including
at <0,0,0>.
Example:
#declare
MySphere = sphere { <0,0,0>, 1 }
#declare
Norm = <0,0,0>;
#declare
Start = <1,1,1>;
#declare
Inter=
trace( MySphere, Start, <0,0,0>-Start, Norm );
object
{MySphere
texture {
pigment {color rgb 1}
}
}
#if (Norm.x != 0 | Norm.y != 0 | Norm.z != 0)
cylinder {Inter, Inter+Norm, .1
texture {
pigment {color red 1}
}
}
#end
Example:
#declare
myarray=array[10]
...
#ifdef(myarray[0])
...
#end
Example:
#declare
pos=spline {
linear_spline
0,<0,0,0>
0.5,<1,0,0>
1,<0,0,0>
}
Examples:
/* a moving sphere
*/
sphere {
Spline(clock) 1
pigment { color Yellow }
}
/* a sphere
which changes it size */
sphere
{ <0,0,0>
Spline(clock)
pigment { color Yellow }
}
/* the same,
but with the x-component
of a spline
vector */
sphere
{ <0,0,0>
Spline(clock).x
pigment { color Yellow }
}
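Example (the Sphere Sweep example referenced below appears
to be missing here; this is a minimal sketch consistent
with the description, assuming the linear_sphere_sweep
keyword by analogy with the b_spline_sphere_sweep keyword
used further below):

sphere_sweep {
  linear_sphere_sweep, 4,
  <-5, -5, 0>, 1
  <-5,  5, 0>, 1
  < 5, -5, 0>, 1
  < 5,  5, 0>, 1
  pigment { color rgb 1 }
}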
This object is
described by four spheres. You can
use as many
spheres as you like to describe the
object, but
you will need at least two spheres for
a linear Sphere
Sweep, and four spheres for one
approximated
with Catmull-Rom-Splines or B-
Splines.
The example above
would result in an object shaped
like the letter
"N". Changing the kind of
interpolation
to a Catmull-Rom-Spline produces a
quite different,
slightly bent, object, starting
at the second
sphere and ending at the third one.
If you use a
B-Spline, the resulting object lies
somewhere between
the four spheres.
Note: If you
see dark spots on the surface of the
object, you
are probably experiencing an effect
called "Self-Shading".
This means that the object
casts shadows
onto itself at some points because
of calculation
errors. A ray tracing program
usually defines
the minimal distance a ray must
travel before
it actually hits another (or the
same) object
to avoid this effect. If this
distance is
chosen too small, Self-Shading may
occur. To avoid this, there is a way to choose a
larger minimum distance than the preset 1.0e-6
(0.000001). Use the sphere_sweep_depth_tolerance
keyword at the end of the Sphere Sweep description to
choose another value. The following would set
the depth tolerance
to 1.0e-3 (0.001), for
example:
sphere_sweep
{
b_spline_sphere_sweep, 4,
<-5, -5, 0>, 1
<-5, 5, 0>, 1
< 5, -5, 0>, 1
< 5, 5, 0>, 1
sphere_sweep_depth_tolerance 1.0e-3
}
Another problem
occurs when using the merge
statement with
Sphere Sweeps: There is a small
gap between
the merged objects. Right now the only
workaround is
to avoid the merge statement and use
the union statement
instead. However, if the objects have transparent
surfaces, this leads to different-looking pictures.
Example:
bicubic_patch
{
type 2
accuracy 0.01
<0,0,0>, <1,1,2>,
...
}
If you want to use a rational bicubic_patch, use
"type 3".
In addition, you must specify control
points as a
four-element vector of the form
<x,y,z,w>.
w is the weight of the control point.
Example:
bicubic_patch
{
type 3
accuracy 0.01
<0,0,0,1>, <1,1,2,0.5>,...
...
}
#declare
TRIM_IDENTIFIER = TRIM
U_Order, V_Order - the number of control points in
the u or v direction, i.e. the order of the patch in
the u or v direction plus 1. These values must be 2,
3, or 4.
Accuracy_Value
- specifies how accurate
computation
should be. 0.01 is a good value to
start with.
For higher precision you should
specify a smaller
number. This value must be
greater than
zero.
rational
- If specified, a rational bezier patch
(or trimming
curve) will be created. In the case
of a rational
bezier patch, the control points
must be four-dimensional
vectors as described for
bicubic_patch.
ControlPt_*_*
- control points of the surface -
each control
point is a vector. For rational
surfaces, these
are four-dimensional vectors.
Trimming shapes
are closed curves, piecewise
Bezier curve,
or sequences of (possibly rational)
bezier curves.
There are two types of trimming shapes. The keyword
"type" has no relation to the method used to compute
them, as it does with bicubic_patch.
Type 0 specifies
that the area inside the trimming
curve will be
visible, while type 1 specifies that
the area outside
the trimming curve will be
visible.
Order - order
of trimming curve plus 1. This
value must be
2, 3, or 4.
ControlPt_* -
control points of trimming curve in
<u,v> coordinate
system of trimmed bezier surface.
Hence <0,0> is mapped to ControlPt_0_0, <1,0> to
ControlPt_1_U_Order, <0,1> to ControlPt_V_Order_1,
and <1,1> to ControlPt_V_Order_U_Order.
scale, translate,
rotate - you can use these
keywords to
transform control points of already
inserted curves
of the current trimming shape. You
can use an arbitrary
number of curves and
transformations
in one trimmed_by section. You may
also freely
mix them as needed to obtain the
desired shape.
If the first
control point of a curve is different
from the last
control point of the previous one
they will be
automatically connected by a line. The
same applies
for the last control point of the
last curve and
the first point of the first curve.
Note that if
you use transformations, particularly
rotations, points
will not be transformed exactly
and an additional
line may be inserted. To avoid
this problem
you may use the keyword previous for
the current
value of the last control point of the
previous curve
or first for the current value of
the first control
point of the first curve. Since
only <u,v>
position is copied, you will still have
to specify the
weight of the control point.
When using a
#declared trimming shape you may
change its type,
size, orientation, or position.
However, changing
type or transformation will
require more
memory and you may not add new
curves.
The indices are
ZERO-BASED, so the first item in
each list has
an index of zero.
max_trace: This is either an integer
or the keyword
all_intersections
and it specifies the number of
intersections
to be found with the object. Unless
you're using
the object in a CSG, 1 is usually
sufficient.
threshold: float value. This
is the value through
which an isosurface
should be drawn. The default
is zero.
max_gradient: This value specifies
the maximum
value of the
rate of change of the specified
function along
a unit distance on any ray. The
default is 1.1.
method: 1 or 2. This specifies
which method will
be used to find
the surface of the object. The
default is method
1. Method 2 may be faster, but
requires that
you specify max_gradient.
Params: parameters
for internal function. The
number of parameters
varies depending on which
function is
being used.
LibName: The
name of the dynamic or shared library
that contains
the specified internal function.
InitString: This
string is passed to the library's
initialization
function when the library is
loaded.
InitVect: This
vector is passed to the library's
initialization
function when the library is
loaded.
As with Params, the number of parameters
varies depending
on the library being loaded.
eval: If specified, POV will attempt
to determine
the maximum
gradient by examining the adjacent
space.
This is most reliable when the actual
maximum is close
to 1 (about 0.5 to 5).
trimmed_by, clipped_by:
Like the standard POV
versions, but
optimized for boxes and spheres.
f(x,y,z) can
be almost any expression that is a
function of
x, y, z, or a combination of these.
In addition to
the standard mathematical functions,
you may use
the new pigment function:
This function
will return a value based on the red
and green values
of the color at the specified point
in the pigment.
Red is the most significant and green
is the least.
This is in case you want to use height_field
files that use
the red and green components. Otherwise
just use grey
scale colors in your pigment.
This won't work with slope-based patterns. You can
use them, but you'll always get a constant value when
it's looked up.
Accuracy: float
value. This value is used to
determine how
accurate the solution must be before
POV stops looking.
The smaller the better, but
smaller values
are slower too.
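Taken together, the isosurface parameters above might be
combined as in this minimal sketch (the function block
form and the expression syntax are assumptions; they
varied between patch versions):

isosurface {
  function { x*x + y*y + z*z - 1 }  // f(x,y,z): zero on a unit sphere
  threshold 0
  max_gradient 2
  accuracy 0.001
  max_trace 1
  clipped_by { box { <-2,-2,-2>, <2,2,2> } }
  pigment { color rgb 1 }
}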
precompute can speed up the rendering of parametric
surfaces. It simply divides a parametric surface into
small pieces (2^depth of them) and precomputes the
ranges of the variables (x, y, z) that you specify
after depth. Be careful! High values of depth can
produce arrays larger than the amount of memory you
have (RAM+swap). If you declare a parametric surface
with the precompute keyword and then use it twice,
the arrays are kept in memory only once.
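For illustration only (the surface-function and range
syntax here are assumptions; only the precompute clause
follows the description above):

#declare Surf =
parametric {
  function u*cos(v), u*sin(v), u  // hypothetical surface functions
  <0, 0>, <1, 6.283>              // assumed <u,v> range syntax
  precompute 15, x, y, z          // 2^15 pieces; ranges of x, y and z precomputed
}
// Using the #declared surface twice keeps the arrays in memory only once:
object { Surf pigment { color rgb 1 } }
object { Surf translate 2*x pigment { color rgb 1 } }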
Parallel lights
shoot rays from the closest point
on a plane to
the object intersection point. The
plane is determined
by a perpendicular defined by
the light location
and the points_at vector.
For normal point
lights, points_at must come after
parallel.
fade_dist and fade_power use the light
location to
determine distance for light attenuation.
This will work
with all other kinds of light sources,
spot, cylinder,
point lights, and even area lights.
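Example (a minimal parallel light sketch):

light_source {
  <10, 20, -10>
  color rgb 1
  parallel             // rays are parallel, as from a very distant source
  points_at <0, 0, 0>  // for point lights this must come after parallel
}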
The light rays
that pass through the projected_through
object will
be the only light rays that contribute to
the scene. Any
objects between the light and the projected
through object
will not cast shadows for this light. Also
any surface
within the projected through object will not
cast shadows.
Any textures or interiors on the object will
be stripped
and the object will not show up in the scene.
If you wish
the projected through object to show up in the
scene, do something like the example below (this will
simulate reflected light)
Example:
#declare
MirrorShape=box{<-0.5,0,0>,<0.5,1,0.05>}
#declare Reflection=<0.91,0.87,0.94>
#declare LightColor=<0.9,0.9,0.7>
light_source{ <10,10,-10> rgb LightColor}
light_source{
<10,10,10> rgb LightColor*Reflection
projected_through{MirrorShape}
}
object{
MirrorShape
pigment {rgb 0.3}
finish {
diffuse 0.5
ambient 0.3
reflection rgb Reflection
}
}
object
{
...
light_group "name1"
no_shadow "name2"
}
media
{
...
light_group "!name3"
}
defaults:
light_group "all"
no_shadow "none"
This controls the light interaction of media and
objects. There can be a maximum of 30 user-defined
light groups; "none" and "all" are predefined. Groups
can be called anything, as long as the name contains
no spaces, and they are delimited by commas with the
groups keyword. All lights are automatically in the
"all" group. If a light_group or no_shadow group
contains a !, it is interpreted to mean all lights
not in the group that follows the !.
If a light doesn't
interact with a media or object
then no shadows
are cast from that object for that
light.
The only real
reflectivity model I know of right
now is the Fresnel
function, which uses the IOR of
a substance
to calculate its reflectivity at any
given angle.
Naturally, it doesn't work for
opaque materials,
which don't have an IOR.
However, in
many cases it isn't the opaque object
doing the reflecting;
ceramic tiles, for instance,
have a thin
layer of transparent glaze on the
surface, and
it is the glaze (which -does- have an
IOR) that is
reflective.
However, for
those "other" materials, I've
extended the
standard reflectivity function to use
not one but
TWO reflectivity values. The
"maximum" value
is the reflectivity observed when
the material
is viewed at an angle perpendicular
to its normal.
The "minimum" is the reflectivity
observed when
the material is viewed "straight
down", parallel
to its normal. You CAN make the
minimum greater
than the maximum - it will work,
although you'll
get results that don't occur in
nature.
The "falloff" value specifies an exponent
for the falloff
from maximum to minimum as the
angle changes.
I don't know for sure what looks
most realistic
(this isn't a "real" reflectivity
model, after
all), but a lot of other physical
properties seem
to have squared functions so I
suggest trying
that first.
reflection_type
chooses reflectivity function.
The default reflection_type
is zero, which has new
features but
is backward-compatible. (It uses the
'reflection'
keyword.) A value of 1 selects the
Fresnel reflectivity
function, which calculates
reflectivity
using the finish's IOR. Not useful
for opaque textures,
but remember that for things
like ceramic
tiles, it's the transparent glaze on
top of the tile
that's doing the reflecting.
reflection_min
sets minimum
reflectivity in reflection_type 0.
This is how
reflective the surface will be when
viewed from
a direction parallel to its normal.
reflection_max
sets maximum
reflectivity in reflection_type 0.
This is how
reflective the surface will be when
viewed at a
90-degree angle to its normal.
You can make
reflection_min less than
reflection_max
if you want, although the result is
something that
doesn't occur in nature.
reflection_falloff
sets falloff
exponent in reflection_type 0.
This is the
exponent telling how fast the
reflectivity
will fall off, i.e. linear, squared,
cubed, etc.
reflection
convenience and
backward compatibility
This is the
old "reflection" keyword. It sets
reflection_type
to 0, sets both reflection_min and
reflection_max
to the value provided, and
reflection_falloff
to 1.
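Putting these finish keywords together (a minimal
sketch; the numeric values are arbitrary):

finish {
  reflection_type 0     // the default, extended reflectivity model
  reflection_min 0.05   // reflectivity viewed parallel to the normal
  reflection_max 0.8    // reflectivity viewed at 90 degrees to the normal
  reflection_falloff 2  // squared falloff, as suggested above
}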
Normally, when
a ray hits a reflective surface,
another ray
is fired at a matching angle, to find
the reflective
color. When you specify a blurring
amount, a vector
is generated whose direction is
random and whose
length is equal to the blurring
factor.
This "jittering" vector is added to the
reflected ray's
normal vector, and the result is
normalized because
weird things happen if it
isn't.
One pitfall you
should keep in mind stems from the
fact that surface
normals always have a length of
1. If
you specify a blurring factor greater than
one, the reflected
ray's direction will be based
more on randomness
than the direction it "should"
go, and it's
possible for a ray to be "reflected"
THROUGH the
surface it's supposed to be bouncing
off of.
Since having
reflected rays going all over the
place will introduce
a certain amount of
statistical
"noise" into your reflections, you
have the option
of tracing more than one jittered
ray for each
reflection. The colors found by the
rays are averaged,
which helps to smooth the
reflection.
In a texture that already has some
noise in it
from the pigment, or if you're not
using a lot
of blurring, 5 or 10 samples should be
fine.
If you want to make a smooth flat mirror
with a lot of
blurring you may need upwards of 100
samples per
pixel. For preview renders where the
reflection doesn't
need to be smooth, use 1
sample, since
it renders as fast as unblurred
reflection.
reflection_blur
Specifies how
much blurring to apply.
Put this in
finish. A vector with this length and
a random direction
is added to each reflected
ray's direction
vector, to jitter it.
reflection_samples
Specifies how
many samples per pixel to take.
It may also
be placed in global_settings to change
the default
without having to specify a whole
default texture.
Each time a ray hits a reflective
surface, this
many jittered reflected rays are
fired and their
colors are averaged. If the finish
has a reflection_blur
of zero, only one sample
will be used
regardless of this setting.
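Example (a minimal blurred-reflection finish; the values
are arbitrary):

finish {
  reflection 0.5
  reflection_blur 0.05   // length of the random jitter vector
  reflection_samples 40  // jittered rays fired and averaged per reflection
}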
But what happens
if an object's color is different on
parts of it?
What happens, for example, if you have a
Christmas ornament
that's red on one side and yellow on
the other, with
a smooth fade between the two colors?
The red side
should only reflect red light, but the
yellow side
should reflect both red and green.
(yellow = red
+ green) Ordinarily, there is no way to
accomplish this.
Hence, there
is a new feature: metallic reflection. It
kind of corresponds
to the "metallic" keyword, which
affects Phong
and specular highlights, but metallic
reflection multiplies
the reflection color by the pigment
color at each
point to determine the reflection color for
that point.
A value of "reflection 1" on a red object
will reflect
only red light, but the same value on a yellow
object reflects
yellow light.
Put this in finish.
Multiplies the
"reflection" color vector by the pigment
color at each
point where light is reflected to better
model the reflectivity
of metallic finishes. Like the
"metallic" keyword,
you can specify an optional float
value, which
is the amount of influence the
reflect_metallic
keyword has on the reflected color.
If this number
is omitted it defaults to 1.
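Example (a minimal sketch):

sphere {
  <0, 0, 0>, 1
  pigment { color rgb <1, 0.7, 0.2> }  // a gold-like color
  finish {
    reflection 0.8
    reflect_metallic  // reflected light is multiplied by the pigment color
  }
}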
A pattern statement may be used wherever an image
specifier like
tga or png may be used. Width and
Height specify
the resolution of the resulting
bitmap image.
The pigment body may contain
transformations.
If present, they apply only to
the pigment
and not to the object as a whole.
This image type
currently ignores any filter
values that
may be present in the pigment, but it
keeps transparency
information. If present, the
hf_gray_16 specifier
causes POV-Ray to create an
image that uses
the TGA 16-bit red/green mapping.
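A sketch of the idea (where exactly the pattern
statement and the hf_gray_16 specifier are placed is an
assumption; here the pattern stands in for a tga file
specifier in a height_field):

height_field {
  pattern 300, 300                // Width, Height of the generated bitmap
  hf_gray_16                      // assumed placement of the optional specifier
  { pigment { bozo scale 0.2 } }  // pigment body; transformations apply to the pigment only
}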
Form determines
the linear combination of
distances used
to create the texture. Form is a
vector.
The first component determines the
multiple of
the distance to the closest point to
be used in determining
the value of the pattern at
a particular
point. The second component
determines the
coefficient applied to the second-
closest distance,
and the third component
corresponds
to the third-closest distance.
The standard
form is <-1,1,0>, corresponding to the
difference in
the distances to the closest and
second-closest
points in the cell array. Another
commonly-used
form is <1,0,0>, corresponding to
the distance
to the closest point, which produces
a pattern that
looks roughly like a random
collection of
intersecting spheres or cells.
The default for
form is the standard <-1,1,0>.
Other forms
can create very interesting effects, but
it's best to
keep the sum of the coefficients low.
If the final
computed value is too low or too
high, the resultant
pigment will be saturated with
the color at
the low or high end of the color_map.
In this case,
try multiplying the form vector by a
constant.
The offset is
used to displace the texture from
the standard
xyz space along a fourth dimension.
It can be used
to round off the "pointy" parts of
a cellular normal
texture or procedural
heightfield
by keeping the distances from becoming
zero.
It can also be used to move the calculated
values into
a specific range if the result is
saturated at
one end of the color_map. The
default offset
is zero.
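Example (a minimal crackle pigment using form and
offset):

pigment {
  crackle
  form <1, 0, 0>  // use only the distance to the closest point
  offset 0.25     // displace along the fourth dimension
  color_map {
    [0 color rgb 0]
    [1 color rgb 1]
  }
}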
FACETS:
{facets [coords ScaleValue | size Factor] }
The facets texture
is designed to be used as a
normal.
Like bumps or wrinkles, it is not
suitable for
use as a pigment. There are two
forms of the
facets texture. One is most suited
for use with
rounded surfaces, and one is most
suited for use
with flat surfaces.
If "coords"
is specified, the facets texture
creates facets
with a size on the same order as the
specified scale
value. This version of facets is
most suited
for use with flat surfaces, but will
also work with
curved surfaces. The boundaries of
the facets coincide
with the boundaries of the
cells in
the standard crackle texture. The
coords version
of this texture may be quite
similar to a
crackle normal pattern with solid
specified.
If size
is specified, the facets texture uses a
different function
that creates facets only on
curved surfaces.
The factor determines how many
facets are created,
with smaller values creating
more facets,
but it is not directly related to
any real-world
measurement. The same factor will
create the same
pattern of facets on a sphere of
any size.
This texture creates facets by snapping
normal vectors
to the closest vectors in a
perturbed grid
of normal vectors. Because of
this, if a surface
has normal vectors that do not
vary along one
or more axes, there will be no
facet boundaries
along those axes.
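Example (a minimal sketch of both forms):

sphere {
  <0, 0, 0>, 1
  pigment { color rgb 1 }
  normal { facets size 0.2 }    // the size form: facets on curved surfaces
}
box {
  <-1, -1, -1>, <1, 1, 1>
  pigment { color rgb 1 }
  normal { facets coords 0.3 }  // the coords form: best for flat surfaces
}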
image_pattern
uses a very similar syntax to
bump_map,
but without bump_size. There is also a
new keyword
use_alpha which uses the alpha channel
of the image
(if present) to determine the value
to be used at
each location.
It is meant to
be used for creating texture
"masks", like
the following:
texture
{
image_pattern { tga "image.tga" use_alpha }
texture_map
{
[0 mytex
]
[1 pigment{transmit 1} ]
}
}
Note: the following macros might come in handy:
#macro
texture_transmit(tex, trans)
average texture_map {
[1-trans tex]
[trans pigment{transmit 1}]
}
#end
#macro
pigment_transmit(tex, trans)
average pigment_map {
[1-trans tex]
[trans transmit 1]
}
#end
#macro
normal_transmit(tex, trans)
average normal_map {
[1-trans tex]
[trans bump_size 0.0]
}
#end
In this variant
the usage of the keyword 'slope'
is very similar
to the keyword 'gradient'. Just
like the latter
it takes a direction vector as its
argument. All
forms that are allowed for gradient
are possible
for slope as well:
slope
x
slope -x
slope y
...
slope <1, 0, 1>
slope <-1, 1, -1>
slope 1.5*<1, 2, 3>
slope <2, 4, -10.4>
When POV-Ray
parses this expression the vector
will be normalized,
which means that the last
example is equivalent
to
slope <1, 2, -5.2>
and arbitrary
multiples/fractions of it. It's just
the direction
of the vector that is significant.
NOTE: this holds
only for the 1-parameter
variant. Variants
2 and 3 DO evaluate the length
of the vectors.
And what does
slope do? Very easy: Just like
gradient
it returns values between (and including)
0.0 and 1.0,
where 0.0 is most negative slope
(surface normal
points in negative vector
direction),
0.5 corresponds to a vertical slope
(surface normal
and slope vector form 90 degree
angle) and 1.0
is horizontal with respect to the
slope vector
(surface normal parallel to slope vector).
This is what
everyone associates with slopes in
landscapes.
For all surface elements that are
horizontal slope
returns 1.0. All vertical parts
return 0.5 and
all 'anti-horizontal' surfaces
(pointing downwards)
return 0.0.
Slope may be
used in all contexts where gradient
is allowed.
Similarly, you can therefore construct
very complex
textures with this feature. Covering
height_field
mountains with snow is one of the
easier examples.
But try a pyramid in a desert
where - due
to continuous sand storms - the south-
west front is
covered with sand.
Imagine the following
scene: You have a mountain
landscape (presumably
slope's main application).
The mountain's
foot is in a moderate region such
that some vegetation
should be there. The mountain
is very rough
with steep areas. At lower regions
soil could settle
even on the steep parts, thus
vegetation is
there, too. The vegetation will be
on steep and
on flatter parts. Now, climbing up
the mountain
the vegetation will change more and
more. Now it
needs some shelter from the rough
climate and
will therefore prefer flatter regions.
Near the mountain's
top we will find no vegetation
any more. Instead
the flat areas will be covered
with snow, the
steep ones looking rocky.
One solution
to render this would be to compose
different layers
of texture. The change from one
layer to the
next would be controlled by
'gradient'.
Try and do some experiments. You will
probably find
that it is very difficult to hide
the seams between
the layers. Either you have
abrupt transition
from one layer to the next, for
example:
[ 0.3 T_Layer1 ]
[ 0.3 T_Layer2 ]
...
which will show
discontinuities in the resulting
total texture.
Or you would try smooth transitions
[ 0.3 T_Layer1 ]
[ 0.4 T_Layer2 ]
...
and will find
the resulting texture striped with
blurred bands.
So what can we do?
We would try
something like
slope y, y
texture_map {
[ 0.3 T_Layer1 ]
[ 0.3 T_Layer2 ]
...
}
And now comes
the tricky point. The second
parameter instructs
POV-Ray to calculate the
pattern value
not only from the slope but as a
composition
of slope and altitude. We will call
it altitude
here because in most cases it will be
such, but nothing
prevents us from using another
direction instead
of y. Also, both vectors may
have different
directions.
It is VERY IMPORTANT
that, unlike variant 1, the
two vectors'
lengths weight the combination. The
pattern value
is now calculated as (0.5*slope +
0.5*altitude).
If the slope
expression was
slope y, 3*y
then the calculation
would be (0.25*slope +
0.75*altitude).
slope 2*y, 6*y
will give the
same result.
Similarly, something
like
slope 10*y, y
will result in
a pattern where the altitude
component has
almost no influence; it is very
close to plain
'slope y'.
The component
'altitude' is the projection of the
point in space
that we are calculating the pattern
for, onto the
second vector (we called it Altitude
in our syntax
definition). For Altitude=y this is
quite intuitive,
for other vectors it's more
abstract.
In case the sum
(a*slope + b*altitude) exceeds 1.0
it wraps around to fmod(sum, 1.0). So if your
resulting
texture is showing
unexpected discontinuities
check whether
your altitude exceeds 1.0. In that
case you can
either scale the whole object down,
or read section
3.3.
Be sure to be
quite familiar with the concept of
variant 2. The
next section will be harder stuff,
though very
useful for the power user. If the
above does all
you need in your scenes you may
stop reading
here and be happy with what you have
learnt so far.
But be warned: A situation will
come where you
will definitely need the last
variant.
Variant 3: slope <Slope>, <Altitude>, <SlopeLimits>,
<AltLimits>
The previous
variant does very well if your
altitude and
slope values both change between 0
and 1. The pattern
as the combination of these two
(weighted) components
will also be from [0..1] and
will fill this
interval quite well. But now
imagine the
following scene. You have a mountain
scene again.
Since it is a height_field you know
that the mountain
surface has no overhangs. The
surface's slope
value is therefore from [0.5..1].
Further, you
want to compose the final texture
using different
textures for different altitudes.
You could try
to deploy variant 2 here, but you
want only sand
at the mountain's foot, no snow, no
vegetation.
Then, in the middle, you would like a
mixture of vegetation
and rock. This mixture shall
have an altitude
dependency like the one described
in section 3.2,
i.e. the vegetation moving to
flatter areas
with increasing altitude. And in the
topmost region
we would like a mixture of rock and
snow.
You would intuitively
define the texturing for
this scene as
gradient y
texture_map {
[0.50 T_Sand]
[0.50 T_Vege2Rock]
[0.75 T_Vege2Rock]
[0.75 T_Rock2Snow]
}
Let's assume
you have already designed and tested
the following
textures:
#declare T_Vege2Rock = texture {
slope -y, y
texture_map {
[0.5 T_Vege]
[0.5 T_Rock]
}
}
#declare T_Rock2Snow = texture {
slope y, y
texture_map {
[0.5 T_Rock]
[0.5 T_Snow]
}
}
You tested them
on a sphere with diameter = 1.0.
But in the final
texture T_Vege2Rock will be
applied only
in the height interval [0.5..0.75].
And the slope
will always be greater than 0.5. So how
will the resulting
pattern (the weighted
combination
of the slope and the altitude
component) behave?
Is it possible to scale the
texture in a
way that the values exhaust the full
intervals? And
if we should succeed and we want to
modify the texture,
what modifications should we
apply? Believe
us, it's too complex to get
reasonable,
controllable results!
The answer to
this problem is: Scale and translate
the slope/altitude
components to [0..1]. Now their
behavior is
predictable and controllable again. In
our example
we would define the two mixed textures
like this:
#declare T_Vege2Rock = texture {
slope -y, y, <0,0.5>, <0.5,0.75>
texture_map {
[0.5 T_Vege]
[0.5 T_Rock]
}
}
#declare T_Rock2Snow = texture {
slope y, y, <0.5,1.0>, <0.75,1.0>
texture_map {
[0.5 T_Rock]
[0.5 T_Snow]
}
}
What does this
do? In the first texture we added
the two additional
2-dimensional limits vectors
with the following
values:
Low slope = 0
High slope = 0.5
slope -y lets
slope's pattern component travel
between 1 and
0 where 1 means 'anti-horizontal'
(surfaces facing
down) and 0 is horizontal, 0.5
being vertical.
Since our terrain has no
overhangs, the
effective interval is only [0..0.5]
(again 0 being
horizontal, 0.5 vertical). Low
slope and high
slope transform this interval to
[0..1], now
0 still being horizontal, 1.0
vertical. (NOTE:
we could have used 'slope y,...'.
Actually we
used 'slope -y,...' to have 0
correspond with
horizontal and 1 with vertical. We
need this because
we then can define T_Vege for
<0.5 (=flatter
& lower) and T_Rock for >0.5
(=steeper &
higher) values. Of course we could
have done it
the other way around, but then we
would have to
change the altitude direction as
well. This way
it is more intuitive.)
Then we have
Low alt = 0.5
High alt = 0.75
We know that
T_Vege2Rock will be used only in the
altitude range
between 0.5 and 0.75. Thus low alt
and high alt
stretch the value interval
[0.5..0.75]
to [0..1], i.e. the altitude component
behaves just
like in our test sphere, and so does
the superposition
of the slope and the alt
component, instead
of producing uncontrolled
values.
We leave the
analysis of T_Rock2Snow as an
exercise for
the reader.
SPECIAL CASE:
It is possible to define <0,0> for
each of <SlopeLimits>
and <AltLimits>. This lets
POV-Ray simply
ignore the respective vector, i.e.
no transformation
will take place for that
component. Using
this feature you can easily
define slopes
where a transformation is done only
for the altitude
component, <0,0> just being a
placeholder
that tells POV-Ray: "do nothing to the
slope component".
Further, there
is one object that allows for
pattern modifiers
but has no defined surface:
sky_sphere.
sky_sphere is a sphere with infinite
radius, therefore
POV-Ray will never find an
intersection
point between the ray and the object.
Thus there is
no point for which a surface slope
could be defined.
It is syntactically correct to
use slope as
a pattern modifier for sky_spheres,
but it will
return the constant value 0.0 for all
directions.
Using gradient for a sky_sphere returns the same
pattern values anyway, so use gradient instead.
You may use the
turbulence keyword inside slope
pattern definitions.
It is syntactically correct
but may show
unexpected results. The reason for
this is that
turbulence is a 3-dimensional
distortion of
a pattern. Since slope is only
defined on objects'
surfaces a 3-dimensional
turbulence is
not applicable to the slope
component. In
case you use the extended variants
the total pattern
is calculated from the slope and
the altitude
component. While the slope component
is not affected
by turbulence the altitude
component will
be. You can produce nice results
with this, but
it is difficult to control it.
However, it is sometimes more desirable to have the
texture defined for the surface of the object. This
is especially true for bicubic_patch objects and mesh
objects, which can be stretched and compressed. When
the object is stretched or compressed, it would be
nice for the texture to be "glued" to the object's
surface and follow the object's deformations.
A new keyword
has been added to object modifiers.
If the first
modifier in an object is
"uv_mapping",
then that object's texture will be
mapped on via
UV coordinates. This is done by
taking
a slice of the object's regular 3D texture
from the XY
plane and wrapping it around the
surface of the
object, following the object's UV
surface coordinates.
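Example (a sketch; MyPatch stands for any previously
#declared object that supports UV mapping):

object {
  MyPatch
  uv_mapping  // must be the first modifier in the object
  texture {
    pigment { checker color rgb 0, color rgb 1 scale 0.1 }
  }
}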
There is a new
keyword, uv_vectors. This keyword
can be used
in bicubic patches to set the UV
coordinates
for the starting and ending corners of
the patch.
The default is
uv_vectors <0,0>,<1,1>
The syntax is "uv_vectors <start>,<end>".
If you had another
patch sitting right next to
this (as happens
often with sPatch or Moray), you
could map the
exact same texture to it but use
something like
uv_vectors <0,1>,<1,2>
(depending on
which side of this patch the other
is on) so that
the two textures fit seamlessly
together.
This new keyword
also shows up in triangle meshes
(the original
kind), and soon it will be in single
triangles, too.
Inside each mesh triangle, you
can specify
the UV coordinates for each of the
three vertices
with:
uv_vectors <uv1>,<uv2>,<uv3>
This goes right
after the coordinates (or
coordinates
& normals with smooth triangles) and
right before
the texture.
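Example (a minimal sketch of a single mesh triangle with
UV coordinates):

mesh {
  triangle {
    <0, 0, 0>, <1, 0, 0>, <0, 1, 0>
    uv_vectors <0, 0>, <1, 0>, <0, 1>  // after the coordinates, before the texture
    texture { pigment { checker color rgb 0, color rgb 1 scale 0.1 } }
  }
}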
warp
{
spherical
[ orientation VECTOR | dist_exp FLOAT ]
}
warp
{
toroidal
[ orientation VECTOR | dist_exp FLOAT | major_radius FLOAT ]
}
These warps essentially
use the same mapping as the
image maps use.
This way we can wrap checkers, bricks, hexagon, and
other patterns around spheres, toruses, cylinders,
and other objects. The mapping wraps around the Y
axis. However, these warps do 3D mapping, and some
concessions had to be made regarding depth.
This is controllable by dist_exp (distance exponent).
With the default of 0, imagine a box from <0,0> to
<1,1> stretching to infinity along the orientation
vector; the warp takes its points from that box
(except for the cylindrical warp, where the y value
doesn't get warped if you use the default
orientation).
For a sphere, the distance is the distance from the
origin; for a cylinder, it is the distance from the
y-axis; for a torus, the distance from the major
radius (or the minor radius, if you prefer to look at
it that way).
However, the box really is <0,0> to
<dist^dist_exp, dist^dist_exp>. This is so that if
you have something other than a torus, cylinder, or
sphere, the texture doesn't look stretched unless you
want it to.
sphere
{
0,1
pigment {
hexagon
scale <0.5/pi,0.25/pi,1>*0.1
warp {
spherical
orientation y
dist_exp 1
}
}
}
cylinder
{
-y,y,1
pigment {
hexagon
scale <0.5/pi,1,1>*0.1
warp {
cylindrical
orientation y
dist_exp 1
}
}
}
This warp was made to help make the spherical,
cylindrical, and toroidal warps act more like the 2D
image mapping. It maps each point to a plane defined
by a normal (the VECTOR) and an offset (the FLOAT),
just like defining a plane object.
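Example (a sketch, assuming the keyword is planar and
that it takes the VECTOR, FLOAT arguments described
above):

box {
  <-1, 0, -1>, <1, 2, 1>
  pigment {
    checker color rgb 0, color rgb 1
    scale 0.25
    warp {
      planar <0, 0, 1>, 0  // map onto the plane with normal z through the origin
    }
  }
}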
map_type 8 was
an early attempt at UV mapping of
textures.
It is only valid for bitmap images, and
it only works
with bicubic_patches and parametric
functions.
It is included here for backward
compatibility;
the newer syntax for UV mapping
should be used
for new scenes where possible.
Specifies a trace_level
max for blurring.
If the trace_level
is higher than this value, no blurring occurs and
only one sample is used. The default
is 5, the same
as the default max_trace_level.
Specifies the
ADC bailout for blurring.
If the ADC weight
is less than this value, no blurring occurs and only
one sample is used. The default
is 1/255, the
same as the default adc_bailout.
STRING_ENCODING:
string_encoding "UTF8" | "ISO8859_1" |
"Cp1252" | "MacRoman"
specifies the
encoding used for extended character
sets.
This is currently only used for text objects,
but may be extended
to other string manipulation
functions in
the future. The double quotes are
required. The
default encoding is UTF8.
The Internet
home of POV-Ray is
http://www.povray.org
and ftp://ftp.povray.org.
Please stop
by often for the latest files, utilities,
news and images
from the official POV-Ray internet
site.