Introduction

 

 What is the Superpatch?

          The Superpatch is an unofficial build of POV-Ray.
          Like many unofficial builds, it adds new features
          to POV.  However, the Superpatch is intended to
          fulfill a loftier goal than merely to offer new
          features.  The goal of the Superpatch is to offer
          as many features as possible in a single
          unofficial build, allowing you to use the new
          features from a diverse collection of unofficial
          builds in a single package.

          The Superpatch was conceived and executed by Ron
          Parker, parkerr@mail.fwi.com, but it contains the
          work of a great many people who are listed in the
          Superpatch Contributors section below.  Getting
          here has been a long process, starting in the days
          of POV 3.02 and finally coming to a temporary stop
          with the current version, 3.1a.

          The Superpatch is an UNOFFICIAL build of POV-Ray
          and is not supported by the POV-Team.   For
          information on getting the official POV-Ray, see
          "Where to find the official POV-Ray" at the end of
          this document.
 

 Getting support for the Superpatch

          Since this is an UNOFFICIAL build of POV-Ray, the
          POV-Team will not provide support for it.
          Instead, you should come to me, Ron Parker, for
          support.  I provide support by email at
          parkerr@fwi.com, or you can get help at any of
          the following places:
 

  E-mail

          I know, I said this one already, but some people
          probably missed it.  The primary means of one-on-
          one support for the Superpatch is by email.  Send
          your question to parkerr@fwi.com and I’ll get back
          to you as soon as I can.
 

  IRC #povray

          I can often be found here in the evenings.  Simply
          go to your favorite EFnet, DALnet, or NewNet
          server and join #povray.  Look for parkrrrr.
 

  news.povray.org

          The best place to talk about the superpatch and
          get community support for it is the
          povray.unofficial.patches newsgroup on the official
          POV news server, news.povray.org.  See
          http://www.povray.org/groups.html for more
          information.
 

 Superpatch Contributors

          The Superpatch is currently a combination of all
          of the following patches, in addition to a number
          of small patches by Ron Parker.
 
          POV-Ray 3.0 Isosurface Patch - isosurface,
          function patterns, parametric, map_type 8:
            Daniel Skarda, Thomas Bily, Ryoichi Suzuki,
            Kochin Chang, Nathan Kopp
 
          POV-Ray with Sphere Sweeps:
            Jochen Lippert
 
          Rational Bezier Surface patch:
            Daniel Skarda
 
          POV-RaySP - POV-Ray with Splines:
            Wolfgang Ortmann
 
          UV-POV - UV mapping, mesh2, and image_pattern:
            Nathan Kopp
 
          Slope-dependent textures:
            Hans-Detlev Fink, Michael C. Andrews
 
          Variable, blurred, and metallic reflection:
            Mike Paul (wyzard)

          Spherical, cylindrical, toroidal, and planar
          warps, light_group, isosurface pigment function:
            Matthew Corey Brown (xenoarch)
 
          Unicode support:
            Jon A. Cruz
 
 
 



 
 

                      Vector Functions



 

 min_extent

          MIN_EXTENT:
            min_extent( OBJECT_IDENTIFIER )

          Returns the minimum x,y,z values for a #declared
          object.  This is one corner of the object's
          bounding box.
 
          Example:

          #declare MySphere = sphere { <0,0,0>, 1 }
          #declare BBoxMin = min_extent( MySphere );
          object {MySphere
                   texture {
                     pigment {color rgb 1}
                   }
          }
          sphere {BBoxMin, .1
                   texture {
                     pigment {color red 1}
                   }
          }
 



 

 max_extent

          MAX_EXTENT:
            max_extent( OBJECT_IDENTIFIER )
 
          Returns the maximum x,y,z values for a #declared
          object.  This is the opposite corner of the
          object's bounding box.
 
          Example:

          #declare MySphere = sphere { <0,0,0>, 1 }
          #declare BBoxMax = max_extent( MySphere );

          object {MySphere
                   texture {
                     pigment {color rgb 1}
                   }
          }
          sphere {BBoxMax, .1
                   texture {
                     pigment {color blue 1}
                   }
          }
 



 

 trace

          TRACE:
            trace( OBJECT_IDENTIFIER, <Start>, <Direction>
            [, VECTOR_IDENTIFIER] )
 
          Traces a ray beginning at <Start> in the direction
          specified by <Direction>.  If the ray hits the
          specified object, this function returns the
          coordinate where the ray intersected the object.
          If not, it returns <0,0,0>.  If a fourth parameter
          in the form of a vector identifier is provided,
          the normal of the object at the intersection point
          (not including any normal perturbations due to
          textures) is stored into that vector.  If no
          intersection was found, the normal vector is reset
          to <0,0,0>.

          Note: Checking the normal vector for <0,0,0> is
          the only reliable way to determine whether an
          intersection has actually occurred, as valid
          intersections can and do occur anywhere, including
          at <0,0,0>.
 
          Example:
 
          #declare MySphere = sphere { <0,0,0>, 1 }
          #declare Norm = <0,0,0>;
          #declare Start = <1,1,1>;
          #declare Inter=
             trace( MySphere, Start, <0,0,0>-Start, Norm );
 
          object {MySphere
                   texture {
                     pigment {color rgb 1}
                   }
          }

          #if (Norm.x != 0 | Norm.y != 0 | Norm.z != 0)
            cylinder {Inter, Inter+Norm, .1
                       texture {
                         pigment {color red 1}
                       }
            }
          #end
 



 
 

                                Directives

 


 

 #ifdef (array[index])

          In the official POV-Ray, it is possible to
          determine whether a variable has been declared but
          it is not possible to check whether a specific
          element of an array has been declared.  The
          superpatch includes a patch to allow such a check.

          Example:

          #declare myarray=array[10]
          ...
          #ifdef(myarray[0])
          ...
          #end
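
          For instance, a sparsely filled array can be probed
          before each element is used (a minimal sketch; the
          #debug messages are only illustration):

          #declare myarray=array[10]
          #declare myarray[3]=42;

          #ifdef (myarray[3])
            #debug "element 3 is defined\n"
          #end
          #ifdef (myarray[0])
            // never reached: element 0 was not assigned
            #debug "element 0 is defined\n"
          #end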
 



 

 splines

 

  Declaration

          #declare Identifier = spline {
            [ linear_spline | cubic_spline ]
            Arg1, <VectVal1> | FloatVal1,...
          }
 
          Note: argument/value pairs needn't be sorted by
          argument value.

          Example:

          #declare pos=spline {
                 linear_spline
                 0,<0,0,0>
                 0.5,<1,0,0>
                 1,<0,0,0>
          }
 

  Usage

          Splines work like functions and may be used
          (nearly) anywhere a float or vector value is
          expected. The following examples assume you have
          defined a spline named Spline:

          Examples:

          /* a moving sphere */
          sphere { Spline(clock), 1
                   pigment { color Yellow }
          }
 
          /* a sphere which changes its size */
          sphere { <0,0,0>,
             Spline(clock)
             pigment { color Yellow }
          }

          /* the same, but with the x-component
          of a spline vector */
          sphere { <0,0,0>,
             Spline(clock).x
             pigment { color Yellow }
          }
 
 

  Special properties of "pure" splines

          The way splines are calculated is not exactly the
          same as for the splines in SOR, prism, and lathe
          objects.  Outside the range between the lowest and
          highest defined arguments, the value of the spline
          is constant and equal to the value at the lowest
          or highest argument.  Cubic splines can only be
          calculated in the range between the second and the
          next-to-last argument; in the remaining parts,
          linear interpolation is used.  quadratic_spline is
          not implemented.
 

  Known Bugs

          Specifying two argument/value pairs with the same
          argument produces no error message, but gives
          strange results.  Declaring a new variable directly
          from a spline doesn't work.
          Instead of
               #declare Angle=Spline(clock)
          one has to use
               #declare Angle=<0,0,0>+Spline(clock)
 
 

 

                           Objects

 

 

 sphere sweeps

          SPHERE_SWEEP:
            sphere_sweep {
               linear_sphere_sweep |
               catmull_rom_spline_sphere_sweep |
               b_spline_sphere_sweep
               NumSpheres,
               Center1, Radius1,...
               [sphere_sweep_depth_tolerance DepthTolerance]
              OBJECT_MODIFIERS
            }
 
          Example:
          sphere_sweep {
              linear_sphere_sweep,
              4,
              <-5, -5,  0>, 1
              <-5,  5,  0>, 1
              < 5, -5,  0>, 1
              < 5,  5,  0>, 1
          }

          This object is described by four spheres.  You can
          use as many spheres as you like to describe the
          object, but you will need at least two spheres for
          a linear Sphere Sweep, and four spheres for one
          approximated with Catmull-Rom splines or B-
          splines.

          The example above would result in an object shaped
          like the letter "N".  Changing the kind of
          interpolation to a Catmull-Rom spline produces a
          quite different, slightly bent object, starting
          at the second sphere and ending at the third one.
          If you use a B-spline, the resulting object lies
          somewhere between the four spheres.

          Note: If you see dark spots on the surface of the
          object, you are probably experiencing an effect
          called "Self-Shading". This means that the object
          casts shadows onto itself at some points because
          of calculation errors. A ray tracing program
          usually defines the minimal distance a ray must
          travel before it actually hits another (or the
          same) object to avoid this effect. If this
          distance is chosen too small, Self-Shading may
          occur. To avoid this, there is a way to choose
          a bigger minimal distance than the preset 1.0e-6
          (0.000001). Use the sphere_sweep_depth_tolerance
          keyword at the end of the Sphere Sweep description
          to choose another value. The following would set
          the depth tolerance to 1.0e-3 (0.001), for
          example:

          sphere_sweep {
              b_spline_sphere_sweep, 4,
              <-5, -5,  0>, 1
              <-5,  5,  0>, 1
              < 5, -5,  0>, 1
              < 5,  5,  0>, 1
              sphere_sweep_depth_tolerance 1.0e-3
          }
 

  Known problems

          If you are experiencing problems with dark spots
          on the surface of a Sphere Sweep, try adjusting
          the depth tolerance first (described above). A
          good value to start with seems to be 1.0e-3
          (0.001), especially for Sphere Sweeps modeled with
          Splines. Another way to get rid of these spots is
          using Adaptive Supersampling (Method 2) for
          antialiasing. I'm not sure whether Adaptive
          Supersampling solves the problem because single
          errors vanish among the larger number of rays
          traced, or whether the rays simply hit the object
          at different places where the errors don't occur,
          but the images sure look better with antialiasing
          anyway :)

          Another problem occurs when using the merge
          statement with Sphere  Sweeps: There is a small
          gap between the merged objects. Right now the only
          workaround is to avoid the merge statement and use
          the union statement instead. If the objects have
          transparent surfaces this leads to different
          looking pictures, however.
 



 

 new bicubic_patch options

          The old syntax of bicubic_patch has been
          preserved, but a new calculation method has been
          added. If you wish to use the new method for
          computing ray/surface intersections you should
          write "type 2" at the beginning of the
          specification of a bicubic_patch. You may also use
          the new keyword accuracy instead of flatness,
          u_steps, and v_steps.

          Example:

          bicubic_patch {
            type 2
            accuracy 0.01
            <0,0,0>, <1,1,2>,
            ...
          }
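
          A complete patch always takes a 4x4 grid of sixteen
          control points.  A minimal flat sheet using the new
          method might look like this (a minimal sketch):

          bicubic_patch {
            type 2
            accuracy 0.01
            <0,0,0>, <1,0,0>, <2,0,0>, <3,0,0>,
            <0,0,1>, <1,0,1>, <2,0,1>, <3,0,1>,
            <0,0,2>, <1,0,2>, <2,0,2>, <3,0,2>,
            <0,0,3>, <1,0,3>, <2,0,3>, <3,0,3>
            pigment { color rgb 1 }
          }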
 
          If you want to use a rational bicubic_patch, use
          "type 3".  In addition, you must specify the
          control points as four-element vectors of the form
          <x,y,z,w>, where w is the weight of the control
          point.

          Example:

          bicubic_patch {
            type 3
            accuracy 0.01
            <0,0,0,1>, <1,1,2,0.5>,...
            ...
          }
 



 

 bezier_patch

          BEZIER_PATCH:
            bezier_patch {
              U_Order, V_Order
              [accuracy Accuracy_Value]
              [rational]
              <ControlPt_1_1>, ... ,<ControlPt_1_U_Order>
              ...
              <ControlPt_V_Order_1>,...,<ControlPt_V_Order_U_Order>
              [TRIM]
              OBJECT_MODIFIERS
            }
 
          TRIM:
            trimmed_by {
              TRIM_IDENTIFIER |
              TRIM_PARAMETERS
            }
 
          TRIM_PARAMETERS:
            type Type
            [ Order [rational] <ControlPt_1>,... ]
            [ scale <UVScale> ]
            [ translate <UVTrans> ]
            [ rotate Angle ]
 
          Trimming shapes may be declared and reused using
          #declare as with an object:

          #declare TRIM_IDENTIFIER = TRIM
 
          U_Order, V_Order - number of control points in u
          or v direction, or the order of patch in u or v
          direction plus 1.  These values must be 2, 3, or
          4.

          Accuracy_Value - specifies how accurate the
          computation should be.  0.01 is a good value to
          start with.  For higher precision you should
          specify a smaller number.  This value must be
          greater than zero.

          rational - If specified, a rational bezier patch
          (or trimming curve) will be created.  In the case
          of a rational bezier patch, the control points
          must be four-dimensional vectors as described for
          bicubic_patch.

          ControlPt_*_* - control points of the surface -
          each control point is a vector.  For rational
          surfaces, these are four-dimensional vectors.
 
          Trimming shapes are closed curves: piecewise
          Bezier curves, i.e. sequences of (possibly
          rational) Bezier curves.

          There are two types of trimming shapes.  The
          keyword "type" has no relation to the method used
          to compute them, as it does with bicubic_patch.
          Type 0 specifies that the area inside the trimming
          curve will be visible, while type 1 specifies that
          the area outside the trimming curve will be
          visible.
 
          Order - order of trimming curve plus 1.  This
          value must be 2, 3, or 4.

          ControlPt_* - control points of the trimming curve
          in the <u,v> coordinate system of the trimmed
          bezier surface.  Hence <0,0> is mapped to
          ControlPt_1_1, <1,0> to ControlPt_1_U_Order, <0,1>
          to ControlPt_V_Order_1, and <1,1> to
          ControlPt_V_Order_U_Order.

          scale, translate, rotate - you can use these
          keywords to transform the control points of
          already-inserted curves of the current trimming
          shape.  You can use an arbitrary number of curves
          and transformations in one trimmed_by section.  You
          may also freely mix them as needed to obtain the
          desired shape.

          If the first control point of a curve is different
          from the last control point of the previous one,
          they will be automatically connected by a line.
          The same applies to the last control point of the
          last curve and the first point of the first curve.
          Note that if you use transformations, particularly
          rotations, points will not be transformed exactly
          and an additional line may be inserted.  To avoid
          this problem you may use the keyword previous for
          the current value of the last control point of the
          previous curve, or first for the current value of
          the first control point of the first curve.  Since
          only the <u,v> position is copied, you will still
          have to specify the weight of the control point.

          When using a #declared trimming shape you may
          change its type, size, orientation, or position.
          However, changing type or transformation will
          require more memory and you may not add new
          curves.
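
          Putting the pieces together, a simple untrimmed
          4x4 patch might be written as follows (a minimal
          sketch; trimming is omitted):

          bezier_patch {
            4, 4
            accuracy 0.01
            <0,0,0>, <1,0,0>, <2,0,0>, <3,0,0>,
            <0,1,1>, <1,1,1>, <2,1,1>, <3,1,1>,
            <0,1,2>, <1,1,2>, <2,1,2>, <3,1,2>,
            <0,0,3>, <1,0,3>, <2,0,3>, <3,0,3>
            pigment { color rgb 1 }
          }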
 



 

 mesh2

          MESH2:
            mesh2
            {
              vertex_vectors
              {
                Number_of_vertices,
                <Vertex1>, <Vertex2>, ...
              }
              normal_vectors
              {
                Number_of_normals,
                <Normal1>, <Normal2>, ...
              }
              uv_vectors
              {
                Number_of_UV_vectors,
                <UV_Vect1>, <UV_Vect2>, ...
              }
              texture_list
              {
                Number_of_textures,
                texture { Texture1 },
                texture { Texture2 }, ...
              }
              face_indices
              {
                Number_of_faces,
                <Index_a, Index_b, Index_c>, [Texture_Index],
                <Index_d, Index_e, Index_f>, [Texture_Index],
                ...
              }
              normal_indices
              {
                Number_of_faces,
                <Index_a, Index_b, Index_c>,
                <Index_d, Index_e, Index_f>,
                ...
              }
              uv_indices
              {
                Number_of_faces,
                <Index_a, Index_b, Index_c>,
                <Index_d, Index_e, Index_f>,
                ...
              }
              OBJECT_MODIFIERS
            }
 
          The normal_vectors, uv_vectors, and texture_list
          sections are optional. If the number of normals
          equals the number of vertices then the
          normal_indices section is optional and the indices
          from the face_indices section are used instead.
          Likewise for the uv_indices section.

          The indices are ZERO-BASED, so the first item in
          each list has an index of zero.
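
          A minimal example: a single triangle.  Because the
          number of normals equals the number of vertices,
          the normal_indices section is omitted and the
          face_indices are reused.

          mesh2 {
            vertex_vectors {
              3,
              <0,0,0>, <1,0,0>, <0,1,0>
            }
            normal_vectors {
              3,
              <0,0,-1>, <0,0,-1>, <0,0,-1>
            }
            face_indices {
              1,
              <0,1,2>
            }
            pigment { color rgb 1 }
          }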
 



 

 Isosurface

          ISOSURFACE:
            isosurface {
               FUNCTION_DECL
               [accuracy Accuracy]
               [max_trace MaxTrace | all_intersections]
               [threshold Threshold]
               [bounded_by {box { BoxCorner1, BoxCorner2 } } |
                bounded_by {sphere { Center, Radius } } |
                clipped_by {box { BoxCorner1, BoxCorner2 } } |
                clipped_by {sphere { Center, Radius } } ]
               [sign Sign]
               [max_gradient MaxGradient]
               [eval]
               [method Method]
            }
 
          FUNCTION_DECL:
             function{ f1(x,y,z) [ | or & ]  f2(x,y,z) ...} |
             function { FunctionName [<Params> [library
                LibName [InitString [<InitVect>]]]] }
 
 accuracy: float value.  This value is used to
          determine how accurate the solution must be before
          POV stops looking.  The smaller the better, but
          smaller values are slower too.

 max_trace: This is either an integer or the keyword
          all_intersections and it specifies the number of
          intersections to be found with the object.  Unless
          you’re using the object in a CSG, 1 is usually
          sufficient.

 threshold: float value.  This is the value through
          which an isosurface should be drawn.  The default
          is zero.

 sign: -1 or 1

 max_gradient: This value specifies the maximum
          value of the rate of change of the specified
          function along a unit distance on any ray.  The
          default is 1.1.

 method: 1 or 2.  This specifies which method will
          be used to find the surface of the object.  The
          default is method 1.  Method 2 may be faster, but
          requires that you specify MaxGradient.

          Params: parameters for internal function.  The
          number of parameters varies depending on which
          function is being used.

          LibName: The name of the dynamic or shared library
          that contains the specified internal function.

          InitString: This string is passed to the library’s
          initialization function when the library is
          loaded.

          InitVect: This vector is passed to the library’s
          initialization function when the library is
          loaded.  As with Params, the number of parameters
          varies depending on the library being loaded.

 eval: If specified, POV will attempt to determine
          the maximum gradient by examining the adjacent
          space.  This is most reliable when the actual
          maximum is close to 1 (about 0.5 to 5).

 bounded_by, clipped_by: Like the standard POV
          versions, but optimized for boxes and spheres.

          f(x,y,z) can be almost any expression that is a
          function of x, y, z, or a combination of these.
          In addition to the standard mathematical functions,
          you may use the new pigment function:
 

          syntax:
            function { pigment { ... } }

          This function will return a value based on the red
          and green values of the color at the specified
          point in the pigment.  Red is the most significant
          and green is the least, matching the way
          height_field files use the red and green
          components.  Otherwise, just use grayscale colors
          in your pigment.

          This won't work with slope-based patterns.  You can
          use them, but you'll always get a constant value
          when the pigment is looked up.
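
          As a minimal sketch combining the pieces above, an
          ordinary sphere of radius 1 (the threshold defaults
          to zero):

          isosurface {
            function { x*x + y*y + z*z - 1 }
            accuracy 0.001
            max_gradient 2.2
            bounded_by { box { <-1.1,-1.1,-1.1>, <1.1,1.1,1.1> } }
            pigment { color rgb 1 }
          }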
 

  Built-In Functions (FunctionName)

 
    Sphere
          P0: radius
 
    rounded_box
          P0: radius of corner
 
    Torus
          P0: major radius
          P1: minor radius
 
    Superellipsoid
          P0: e
          P1: n
          (see POV-Ray documentation for the meaning of the
          superellipsoid parameters e and n)
 
    helix1 (major radius > minor radius)
          P0 : number of helix
          P1 : frequency
          P2 : minor radius
          P3 : major radius
          P4 : shape parameter
          P5 : cross section (1: circle, 2: diamond, <1:
               rounded rectangle)
          P6 : rotation angle for P5<1
 
    helix2 (minor radius > major radius)
          P0 : number of helix
          P1 : frequency
          P2 : minor radius
          P3 : major radius
          P4 : shape parameter
          P5 : cross section (1: circle, 2: diamond, <1:
               rounded rectangle)
          P6 : rotation angle for P5<1
 
    mesh1
          P0 : period x
          P1 : period z
          P2 : shape parameter 1
          P3 : amplitude
          P4 : shape parameter 2
 


 

parametric

          PARAMETRIC:
            parametric  {
              function x(u,v), y(u,v), z(u,v)
              <u1,v1>, <u2,v2>
              <x1,y1,z1>, <x2,y2,z2>
              [accuracy Accuracy]
              [precompute Depth, VarList]
            }
 
          <u1,v1>, <u2,v2>: boundaries to be computed in
          (u,v) space.
 
          <x1,y1,z1>, <x2,y2,z2>: bounding box of the
          function in real space.

          Accuracy: float value.  This value is used to
          determine how accurate the solution must be before
          POV stops looking.  The smaller the better, but
          smaller values are slower too.

          precompute can speed up the rendering of
          parametric surfaces.  It simply divides the
          parametric surface into small pieces (2^depth) and
          precomputes the ranges of the variables (x, y, z)
          that you specify after the depth.  Be careful!
          High values of depth can produce arrays larger
          than the amount of available memory (RAM+swap).
 

          Example:
          parametric {
             function u*v*sin(24*v), v, u*v*cos(24*v)
             <0,0>, <1,1>
             <-1.5,-1.5,-1.5>, <1.5,1.5,1.5>
             accuracy 0.001
             precompute 15, [x,z]
             /*
               precompute in y does not gain any speed
               in this case
             */
          }

          If you declare a parametric surface with the
          precompute keyword and then use it twice, all
          arrays are in memory only once.
 



 

 Parallel Lights

 
          syntax:
            light_source {
              ...
              parallel
              points_at VECTOR
            }

          Parallel lights shoot rays from the closest point
          on a plane to the object intersection point. The
          plane is determined by a perpendicular defined by
          the light location and the points_at vector.

          For normal point lights, points_at must come after
          parallel.  fade_dist and fade_power use the light
          location to determine the distance for light
          attenuation.

          This works with all other kinds of light sources:
          spot, cylinder, and point lights, and even area
          lights.
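
          For example, sunlight-style illumination shining
          straight down the y axis (a minimal sketch):

          light_source {
            <0, 1000, 0>
            color rgb 1
            parallel
            points_at <0, 0, 0>
          }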



 

 Projected Through:

 
          syntax:
            light_source{
              ...
              projected_through{object{...}}
            }

          The light rays that pass through the
          projected_through object will be the only light
          rays that contribute to the scene.  Any objects
          between the light and the projected-through object
          will not cast shadows for this light, and neither
          will any surface within the projected-through
          object.  Any textures or interiors on the object
          will be stripped and the object will not show up
          in the scene.  If you wish the projected-through
          object to show up in the scene, do something like
          the example below (this simulates reflected
          light).

          Example:

            #declare MirrorShape=box{<-0.5,0,0>,<0.5,1,0.05>}
            #declare Reflection=<0.91,0.87,0.94>
            #declare LightColor=<0.9,0.9,0.7>

            light_source{ <10,10,-10> rgb LightColor}

            light_source{
              <10,10,10> rgb LightColor*Reflection
              projected_through{MirrorShape}
            }

            object{
              MirrorShape
              pigment {rgb 0.3}
              finish {
                diffuse 0.5
                ambient 0.3
                reflection rgb Reflection
              }
            }



 

 Light Groups

          syntax:
            light_source{
              ...
              groups "name1,name2,name3,..."
            }

            object {
              ...
              light_group "name1"
              no_shadow "name2"
            }

            media {
              ...
              light_group "!name3"
            }

          defaults:
            light_group "all"
            no_shadow "none"

          This feature controls the light interaction of
          media and objects.  There can be a maximum of 30
          user-defined light groups; none and all are
          predefined.  Groups can be called anything, but
          names may not contain spaces, and they are
          delimited by commas after the groups keyword.  All
          lights are automatically in the "all" group.  If a
          light_group or no_shadow group name contains a !,
          it is interpreted to mean all lights not in the
          group that follows the !.

          If a light doesn't interact with a media or
          object, then no shadows are cast from that object
          for that light.
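
          A minimal sketch, assuming hypothetical object
          identifiers Statue and Walls have been #declared:

          light_source {
            <2, 3, -2> rgb <1, 0.9, 0.7>
            groups "lamp"
          }

          object {
            Statue
            light_group "lamp"   // lit only by lights in "lamp"
          }

          object {
            Walls
            light_group "!lamp"  // lit by all lights NOT in "lamp"
          }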
 
 
 

                        Textures

 


 

 variable reflection

 
  Overview
          Many materials, such as water, ceramic glaze, and
          linoleum, are more reflective when viewed at
          shallow angles.  POV-Ray cannot simulate this,
          which is sometimes a big impediment to making
          realistic images.

          The only real reflectivity model I know of right
          now is the Fresnel function, which uses the IOR of
          a substance to calculate its reflectivity at any
          given angle.  Naturally, it doesn't work for
          opaque materials, which don't have an IOR.
          However, in many cases it isn't the opaque object
          doing the reflecting; ceramic tiles, for instance,
          have a thin layer of transparent glaze on the
          surface, and it is the glaze (which -does- have an
          IOR) that is reflective.

          However, for those "other" materials, I've
          extended the standard reflectivity function to use
          not one but TWO reflectivity values.  The
          "maximum" value is the reflectivity observed when
          the material is viewed at an angle perpendicular
          to its normal.  The "minimum" is the reflectivity
          observed when the material is viewed "straight
          down", parallel to is normal. You CAN make the
          minimum greater than the maximum - it will work,
          although you'll get results that don't occur in
          nature.  The "falloff" value specifies an exponent
          for the falloff from maximum to minimum as the
          angle changes. I don't know for sure what looks
          most realistic (this isn't a "real" reflectivity
          model, after all), but a lot of other physical
          properties seem to have squared functions so I
          suggest trying that first.
 
 reflection_type

          chooses reflectivity function.

          The default reflection_type is zero, which has new
          features but is backward-compatible.  (It uses the
          'reflection' keyword.) A value of 1 selects the
          Fresnel reflectivity function, which calculates
          reflectivity using the finish's IOR.  Not useful
          for opaque textures, but remember that for things
          like ceramic tiles, it's the transparent glaze on
          top of the tile that's doing the reflecting.
 
 reflection_min

          sets minimum reflectivity in reflection_type 0.
          This is how reflective the surface will be when
          viewed from a direction parallel to its normal.
 
 reflection_max

          sets maximum reflectivity in reflection_type 0.
          This is how reflective the surface will be when
          viewed at a 90-degree angle to its normal.
          You can make reflection_min greater than
          reflection_max if you want, although the result is
          something that doesn't occur in nature.
 
 reflection_falloff

          sets falloff exponent in reflection_type 0.
          This is the exponent telling how fast the
          reflectivity will fall off, i.e. linear, squared,
          cubed, etc.
 
 reflection

          convenience and backward compatibility
          This is the old "reflection" keyword.  It sets
          reflection_type to 0, sets both reflection_min and
          reflection_max to the value provided, and
          reflection_falloff to 1.
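
          A glaze-like finish using the keywords above might
          look like this (a minimal sketch; the values are
          arbitrary):

          finish {
            reflection_type 0
            reflection_min 0.05    // viewed parallel to the normal
            reflection_max 0.9     // viewed at grazing angles
            reflection_falloff 2   // squared falloff, as suggested above
          }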
 



 

 blurred reflection

 
  Overview
          In "ordinary" specular reflection, the reflected
          ray hits a single point, so there is no dispute as
          to its color.  But many materials have microscopic
          "bumpiness" that scatters the reflected rays into
          a cone shape, so the color of a reflected "ray" is
          the average color of all the points the cone hits.
          POV-Ray cannot trace a cone of light, but it CAN
          take a statistical sampling of it by tracing
          multiple "jittered" reflected rays.

          Normally, when a ray hits a reflective surface,
          another ray is fired at a matching angle, to find
          the reflective color.  When you specify a blurring
          amount, a vector is generated whose direction is
          random and whose length is equal to the blurring
          factor.  This "jittering" vector is added to the
          reflected ray's normal vector, and the result is
          normalized because weird things happen if it
          isn't.

          One pitfall you should keep in mind stems from the
          fact that surface normals always have a length of
          1.  If you specify a blurring factor greater than
          one, the reflected ray's direction will be based
          more on randomness than on the direction it
          "should" go, and it's possible for a ray to be
          "reflected" THROUGH the surface it's supposed to
          be bouncing off of.

          Since having reflected rays going all over the
          place will introduce a certain amount of
          statistical "noise" into your reflections, you
          have the option of tracing more than one jittered
          ray for each reflection.  The colors found by the
          rays are averaged, which helps to smooth the
          reflection. In a texture that already has some
          noise in it from the pigment, or if you're not
          using a lot of blurring, 5 or 10 samples should be
          fine.  If you want to make a smooth flat mirror
          with a lot of blurring you may need upwards of 100
          samples per pixel.  For preview renders where the
          reflection doesn't need to be smooth, use 1
          sample, since it renders as fast as unblurred
          reflection.
 
 reflection_blur

          Specifies how much blurring to apply.
          Put this in finish. A vector with this length and
          a random direction is added to each reflected
          ray's direction vector, to jitter it.
 
 reflection_samples

          Specifies how many samples per pixel to take.
          It may also be placed in global_settings to change
          the default without having to specify a whole
          default texture. Each time a ray hits a reflective
          surface, this many jittered reflected rays are
          fired and their colors are averaged. If the finish
          has a reflection_blur of zero, only one sample
          will be used regardless of this setting.
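
          For example, a lightly brushed-metal look (a
          minimal sketch; tune the values to taste):

          finish {
            reflection 0.5
            reflection_blur 0.05   // length of the random jitter vector
            reflection_samples 20  // jittered rays averaged per hit
          }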



 

 metallic reflection

 
          Many materials tint their reflected light by their
          surface color.  A red Christmas-ornament ball, for
          example, reflects only red light; you wouldn't
          expect it to reflect blue.  Since the "reflection"
          keyword takes a color vector, the common "reflection 1"
          actually gets promoted to "reflection rgb <1, 1, 1>",
          which would make the ornament reflect green and blue
          light as well.  You'd have to say "reflection Red" to
          make the ornament reflect correctly.

          But what happens if an object's color is different on
          parts of it?  What happens, for example, if you have a
          Christmas ornament that's red on one side and yellow on
          the other, with a smooth fade between the two colors?
          The red side should only reflect red light, but the
          yellow side should reflect both red and green.
          (yellow = red + green)  Ordinarily, there is no way to
          accomplish this.

          Hence, there is a new feature:  metallic reflection.  It
          kind of corresponds to the "metallic" keyword, which
          affects Phong and specular highlights, but metallic
          reflection multiplies the reflection color by the pigment
          color at each point to determine the reflection color for
          that point.  A value of "reflection 1" on a red object
          will reflect only red light, but the same value on a yellow
          object reflects yellow light.

 reflect_metallic

          Put this in finish.

          Multiplies the "reflection" color vector by the pigment
          color at each point where light is reflected to better
          model the reflectivity of metallic finishes.  Like the
          "metallic" keyword, you can specify an optional float
          value, which is the amount of influence the
          reflect_metallic keyword has on the reflected color.
          If this number is omitted it defaults to 1.
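
          For instance, to make the two-tone ornament
          described above reflect correctly (a minimal
          sketch):

          sphere { <0,0,0>, 1
            pigment {
              gradient x
              color_map { [0 rgb <1,0,0>] [1 rgb <1,1,0>] }
            }
            finish {
              reflection 1
              reflect_metallic   // tint reflections by the local pigment
            }
          }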



 

 pattern

          PATTERN:
            pattern Width,Height { [hf_gray_16] PIGMENT }
 
          This keyword defines a new bitmap image type.  The
          pixels of the image can be derived from any
          standard pigment.  This image may be used wherever
          a tga image may be used.  Some uses include
          creating heightfields from procedural textures or
          wrapping a 2d texture such as hexagons around a
          cylinder (though a better way to do this is with the
          new cylindrical warp.)

          A pattern statement may be used wherever an image
          specifier like tga or png may be used.  Width and
          Height specify the resolution of the resulting
          bitmap image.  The pigment body may contain
          transformations.  If present, they apply only to
          the pigment and not to the object as a whole.

          This image type currently ignores any filter
          values that may be present in the pigment, but it
          keeps transparency information.  If present, the
          hf_gray_16 specifier causes POV-Ray to create an
          image that uses the TGA 16-bit red/green mapping.
 

          Example:
          #declare QuickLandscape=height_field {
            pattern 200,200 {
              hf_gray_16
              bozo
              color_map {
                [0 rgb 0]
                [1 rgb 1]
              }
            }
          }
 


 

 crackle

          These keywords are modifiers for the crackle
          pattern.  They may be specified anywhere within
          the pattern declaration.
 
 solid
          Causes the same value to be generated for every
          point within a specific cell.  This makes it easy
          to create stained-glass windows or flagstones.
          There is no provision for mortar, but mortar may
          be created by layering or texture-mapping a
          standard crackle texture with a solid one.  The
          default for this parameter is off.
 
          Example:
          plane {
            y,0
            texture {
              pigment {
                crackle
                solid
              }
            }
          }
 
 metric
          Changing the metric changes the function used to
          determine which cell center is closest, for the
          purpose of deciding which cell a particular
          point falls in.  The standard Euclidean distance
          function has a metric of 2.  Changing the metric
          value changes the boundaries of the cells.  A
          metric value of 3, for example, causes the
          boundaries to curve, while a very large metric
          constrains the boundaries to a very small set of
          possible orientations. The default for metric is
          2, as used by the standard crackle texture. Metrics
          other than 1 or 2 can lead to substantially longer
          render times, as the method used to calculate such
          metrics is not as efficient.
 
          Example:
          plane {
            y,0
            texture {
              pigment {
                crackle
                metric 1
              }
            }
          }
 
 form <FormVect>

          Form determines the linear combination of
          distances used to create the texture.  Form is a
          vector.  The first component determines the
          multiple of the distance to the closest point to
          be used in determining the value of the pattern at
          a particular point.  The second component
          determines the coefficient applied to the second-
          closest distance, and the third component
          corresponds to the third-closest distance.
 
          The standard form is <-1,1,0>, corresponding to the
          difference in the distances to the closest and
          second-closest points in the cell array.  Another
          commonly-used form is <1,0,0>, corresponding to
          the distance to the closest point, which produces
          a pattern that looks roughly like a random
          collection of intersecting spheres or cells.

          The default for form is the standard  <-1,1,0>.
          Other forms can create very interesting effects, but
          it's best to keep the sum of the coefficients low.
          If the final computed value is too low or too
          high, the resultant pigment will be saturated with
          the color at the low or high end of the color_map.
          In this case, try multiplying the form vector by a
          constant.
 

          Example:
          plane {
            y,0
            texture {
              pigment {
                crackle
                form <1,0,0>
              }
            }
          }
 
 offset

          The offset is used to displace the texture from
          the standard xyz space along a fourth dimension.
          It can be used to round off the "pointy" parts of
          a cellular normal texture or procedural
          heightfield by keeping the distances from becoming
          zero.  It can also be used to move the calculated
          values into a specific range if the result is
          saturated at one end of the color_map.  The
          default offset is zero.
 

          Example:
          plane {
            y,0
            texture {
              pigment {
                color rgb 1
              }
              normal {
                crackle
                offset .5
              }
            }
          }
 
 facets

          FACETS:
            normal { facets [coords ScaleValue | size Factor] }
 
          The facets texture is designed to be used as a
          normal.  Like bumps or wrinkles, it is not
          suitable for use as a pigment.  There are two
          forms of the facets texture.  One is most suited
          for use with rounded surfaces, and one is most
          suited for use with flat surfaces.

          If "coords" is specified, the facets texture
          creates facets with a size on the same order as the
          specified scale value.  This version of facets is
          most suited for use with flat surfaces, but will
          also work with curved surfaces. The boundaries of
          the facets coincide with the boundaries of the
          cells in  the standard crackle texture.  The
          coords version of this texture may be quite
          similar to a crackle normal pattern with solid
          specified.

          If size is specified, the facets texture uses a
          different function that creates facets only on
          curved surfaces.  The factor determines how many
          facets are created, with smaller values creating
          more facets, but it is  not directly related to
          any real-world measurement.  The same factor will
          create the same pattern of facets on a sphere of
          any size.  This texture creates facets by snapping
          normal vectors to the closest vectors in a
          perturbed grid of normal vectors.  Because of
          this, if a surface has normal vectors that do not
          vary along one or more axes, there will be no
          facet boundaries along those axes.
 

          Example:
          sphere {
            0,1
            texture {
              pigment {
                color rgb 1
              }
              normal {
                facets
                size .02
              }
            }
          }
          sphere {
            2*x,1
            texture {
              pigment {
                color rgb 1
              }
              normal {
                facets
                coords .2
              }
            }
          }
 

 image_pattern

          image_pattern uses a very similar syntax to
          bump_map, but without bump_size.  There is also a
          new keyword, use_alpha, which uses the alpha
          channel of the image (if present) to determine the
          value to be used at each location.

          It is meant to be used for creating texture
          "masks", like the following:

            texture
            {
              image_pattern { tga "image.tga" use_alpha }
              texture_map
              {
                [0 mytex               ]
                [1 pigment{transmit 1} ]
              }
            }

          Note: the following macros might come in handy:

          #macro texture_transmit(tex, trans)
            average texture_map {
              [1-trans tex]
              [trans pigment{transmit 1}]
            }
          #end
          #macro pigment_transmit(tex, trans)
            average pigment_map {
              [1-trans tex]
              [trans transmit 1]
            }
          #end
          #macro normal_transmit(tex, trans)
            average normal_map {
              [1-trans tex]
              [trans bump_size 0.0]
            }
          #end
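
          A usage sketch for the first macro, assuming a
          previously #declared texture T_Wood (hypothetical):

          sphere { <0,0,0>, 1
            texture { texture_transmit(T_Wood, 0.3) }
          }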
 


 slope-dependent textures

 

  Overview

          The basic syntax of the slope pattern is similar
          to the gradient pattern, but it may take more
          parameters:
 
          slope <Slope> [, <Altitude> [,<SlopeLimits>,
             <AltLimits>]]
 
          Since the usage of slope with more than one
          parameter needs some abstraction we will describe
          the three variants separately, starting with the
          simplest one. This is sort of a step-by-step
          introduction and should therefore be easier to
          understand.
 
  Variant 1: slope <Slope>

          In this variant the usage of the keyword 'slope'
          is very similar to the keyword 'gradient'. Just
          like the latter it takes a direction vector as its
          argument. All forms that are allowed for gradient
          are possible for slope as well:

            slope x
            slope -x
            slope y
            ...
            slope <1, 0, 1>
            slope <-1, 1, -1>
            slope 1.5*<1, 2, 3>
            slope <2, 4, -10.4>

          When POV-Ray parses this expression the vector
          will be normalized, which means that the last
          example is equivalent to

            slope <1, 2, -5.2>

          and arbitrary multiples/fractions of it. It's just
          the direction of the vector that is significant.

          NOTE: this holds only for the 1-parameter
          variant. Variants 2 and 3 DO evaluate the length
          of the vectors.
 
          And what does slope do? Very easy: Just like
          gradient it returns values between (and including)
          0.0 and 1.0, where 0.0 is the most negative slope
          (the surface normal points in the negative vector
          direction), 0.5 corresponds to a vertical slope
          (the surface normal and the slope vector form a 90
          degree angle), and 1.0 is horizontal with respect
          to the slope vector (the surface normal is
          parallel to the slope vector).
 

          Example:
          slope y

          This is what everyone associates with slopes in
          landscapes. For all surface elements that are
          horizontal slope returns 1.0. All vertical parts
          return 0.5 and all 'anti-horizontal' surfaces
          (pointing downwards) return 0.0.
 
          Slope may be used in all contexts where gradient
          is allowed. Similarly, you can therefore construct
          very complex textures with this feature. Covering
          height_field mountains with snow is one of the
          easier examples. But try a pyramid in a desert
          where - due to continuous sand storms - the south-
          west front is covered with sand.
 

          Example:
          sphere { <0, 0, 0>, 1
              pigment {
                  slope y
                  colour_map {
                      [ 0.00 color rgb <0, 0, 0> ]
                      [ 0.70 color rgb <0, 0, 0> ]
                      [ 0.75 color rgb <1, 1, 1> ]
                      [ 1.00 color rgb <1, 1, 1> ]
                  }
              }
          }
 
          Note that the texture's behavior (with regard to
          slope) does not depend on the object's size, as
          long as you scale the object uniformly in all
          directions (scale n, or scale <n,n,n>).
 
  Variant 2: slope <Slope>, <Altitude>

          Imagine the following scene: You have a mountain
          landscape (presumably slope's main application).

          The mountain's foot is in a moderate region such
          that some vegetation should be there. The mountain
          is very rough with steep areas. At lower regions
          soil could settle even on the steep parts, thus
          vegetation is there, too. The vegetation will be
          on steep and on flatter parts. Now, climbing up
          the mountain the vegetation will change more and
          more. Now it needs some shelter from the rough
          climate and will therefore prefer flatter regions.

          Near the mountain's top we will find no vegetation
          any more. Instead the flat areas will be covered
          with snow, the steep ones looking rocky.

          One solution to render this would be to compose
          different layers of texture. The change from one
          layer to the next would be controlled by
          'gradient'. Try and do some experiments. You will
          probably find that it is very difficult to hide
          the seams between the layers. Either you have
          abrupt transition from one layer to the next, for
          example:

               [ 0.3 T_Layer1 ]
               [ 0.3 T_Layer2 ]
               ...

          which will show discontinuities in the resulting
          total texture. Or you would try smooth transitions

               [ 0.3 T_Layer1 ]
               [ 0.4 T_Layer2 ]
               ...

          and will find the resulting texture striped with
          blurred bands. So what can we do?
          We would try something like

               slope y, y
               texture_map {
                   [ 0.3 T_Layer1 ]
                   [ 0.3 T_Layer2 ]
                   ...
               }

          And now comes the tricky point. The second
          parameter instructs POV-Ray to calculate the
          pattern value not only from the slope but as a
          composition of slope and altitude.  We will call
          it altitude here because in most cases it will be
          such, but nothing prevents us from using another
          direction instead of y. Also, both vectors may
          have different directions.

          It is VERY IMPORTANT that, unlike variant 1, the
          two vectors' lengths weight the combination. The
          pattern value is now calculated as (0.5*slope +
          0.5*altitude).
 
          If the slope expression was

               slope y, 3*y

          then the calculation would be (0.25*slope +
          0.75*altitude).

               slope 2*y, 6*y

          will give the same result.
 
          Similarly, something like

               slope 10*y, y

          will result in a pattern where the altitude
          component has almost no influence; it is very
          close to plain 'slope y'.

          The component 'altitude' is the projection of the
          point in space that we are calculating the pattern
          for, onto the second vector (we called it Altitude
          in our syntax definition). For Altitude=y this is
          quite intuitive, for other vectors it's more
          abstract.

          In case the sum (a*slope + b*altitude) exceeds 1.0
          it is wrapped around to fmod(sum, 1.0). So if your
          resulting texture is showing unexpected
          discontinuities, check whether your altitude
          exceeds 1.0. In that case you can either scale the
          whole object down, or read the description of
          Variant 3 below.

          Be sure to be quite familiar with the concept of
          variant 2. The next section will be harder stuff,
          though very useful for the power user. If the
          above does all you need in your scenes you may
          stop reading here and be happy with what you have
          learnt so far. But be warned: A situation will
          come where you will definitely need the last
          variant.
 
  Variant 3: slope <Slope>, <Altitude>, <SlopeLimits>,
                   <AltLimits>

          The previous variant does very well if your
          altitude and slope values both change between 0
          and 1. The pattern as the combination of these two
          (weighted) components will also be from [0..1] and
          will fill this interval quite well. But now
          imagine the following scene. You have a mountain
          scene again. Since it is a height_field you know
          that the mountain surface has no overhangs. The
          surface's slope value is therefore from [0.5..1].
          Further, you want to compose the final texture
          using different textures for different altitudes.

          You could try to deploy variant 2 here, but you
          want only sand at the mountain's foot, no snow, no
          vegetation. Then, in the middle, you would like a
          mixture of vegetation and rock. This mixture shall
          have an altitude dependency like the one described
          under Variant 2 above, i.e. the vegetation moving to
          flatter areas with increasing altitude. And in the
          topmost region we would like a mixture of rock and
          snow.

          You would intuitively define the texturing for
          this scene as

               gradient y
               texture_map {
                   [0.50 T_Sand]
                   [0.50 T_Vege2Rock]
                   [0.75 T_Vege2Rock]
                   [0.75 T_Rock2Snow]
               }

          Let's assume you have already designed and tested
          the following textures:

               #declare T_Vege2Rock = texture {
                   slope -y, y
                   texture_map {
                       [0.5 T_Vege]
                       [0.5 T_Rock]
                   }
               }
               #declare T_Rock2Snow = texture {
                   slope y, y
                   texture_map {
                       [0.5 T_Rock]
                       [0.5 T_Snow]
                   }
               }

          You tested them on a sphere with diameter = 1.0.
          But in the final texture T_Vege2Rock will be
          applied only in the height interval [0.5..0.75].
          And the slope will always be greater than 0.5. So how
          will the resulting pattern (the weighted
          combination of the slope and the altitude
          component) behave? Is it possible to scale the
          texture in a way that the values exhaust the full
          intervals? And if we should succeed and we want to
          modify the texture, what modifications should we
          apply? Believe us, it's too complex to get
          reasonable, controllable results!

          The answer to this problem is: Scale and translate
          the slope/altitude components to [0..1]. Now their
          behavior is predictable and controllable again. In
          our example we would define the two mixed textures
          like this:

               #declare T_Vege2Rock = texture {
                   slope -y, y, <0,0.5>, <0.5,0.75>
                   texture_map {
                       [0.5 T_Vege]
                       [0.5 T_Rock]
                   }
               }
               #declare T_Rock2Snow = texture {
                   slope y, y, <0.5,1.0>, <0.75,1.0>
                   texture_map {
                       [0.5 T_Rock]
                       [0.5 T_Snow]
                   }
               }

          What does this do? In the first texture we added
          the two additional 2-dimensional limits vectors
          with the following values:

               Low slope = 0
               High slope = 0.5

          slope -y lets the slope pattern component travel
          between 1 and 0, where 1 means 'anti-horizontal'
          (surfaces facing down), 0 is horizontal, and 0.5
          is vertical. Since our terrain has no overhangs,
          the effective interval is only [0..0.5] (again 0
          being horizontal, 0.5 vertical). Low slope and
          high slope transform this interval to [0..1],
          0 still being horizontal but 1.0 now vertical.
          (NOTE: we could have used 'slope y,...'; we chose
          'slope -y,...' so that 0 corresponds to
          horizontal and 1 to vertical. We need this
          because we can then define T_Vege for values
          <0.5 (=flatter & lower) and T_Rock for values
          >0.5 (=steeper & higher). Of course we could have
          done it the other way around, but then we would
          have had to change the altitude direction as
          well. This way it is more intuitive.)
          Then we have

               Low alt = 0.5
               High alt = 0.75

          We know that T_Vege2Rock will be used only in the
          altitude range between 0.5 and 0.75. Thus low alt
          and high alt stretch the value interval
          [0.5..0.75] to [0..1], i.e. the altitude component
          behaves just like in our test sphere, and so does
          the superposition of the slope and the alt
          component, instead of producing uncontrolled
          values.

          We leave the analysis of T_Rock2Snow as an
          exercise for the reader.

          SPECIAL CASE: It is possible to give <0,0> for
          either of <SlopeLimits> and <AltLimits>. This
          makes POV-Ray simply ignore the respective
          vector, i.e. no transformation will take place
          for that component. Using this feature you can
          easily define slopes where a transformation is
          applied only to the altitude component, <0,0>
          being just a placeholder that tells POV-Ray: "do
          nothing to the slope component".
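
          For example, this sketch (T_Rock and T_Snow as
          declared above; the name T_AltOnly is ours)
          rescales only the altitude component and leaves
          the slope component untransformed:

               #declare T_AltOnly = texture {
                   slope y, y, <0,0>, <0.5,1.0>
                   texture_map {
                       [0.5 T_Rock]
                       [0.5 T_Snow]
                   }
               }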
 

  Bugs and Restrictions
          In some cases slope may show strange results.
          This is due to unexpected behavior of the
          object's surface normal. CSG construction can
          result in 'inverted' normals, that is, normals
          that point INTO the object. In most of these
          cases inverting the slope vector (i.e. 'slope y
          ...' becomes 'slope -y ...') helps to get the
          desired result.
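
          An illustrative sketch only (T_Vege and T_Rock as
          declared above): if a CSG object like this one
          renders with the pattern upside down, flip the
          slope vector:

               difference {
                   sphere { 0, 1 }
                   box { <0,-2,-2>, <2,2,2> }
                   texture {
                       slope -y, y  // -y if 'slope y' comes out inverted
                       texture_map {
                           [0.5 T_Vege]
                           [0.5 T_Rock]
                       }
                   }
               }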

          Further, there is one object that allows pattern
          modifiers but has no defined surface: sky_sphere.
          A sky_sphere is a sphere with infinite radius, so
          POV-Ray will never find an intersection point
          between the ray and the object. Thus there is no
          point for which a surface slope could be defined.
          It is syntactically correct to use slope as a
          pattern modifier in a sky_sphere, but it will
          return the constant value 0.0 for all directions.
          Using gradient on a sky_sphere returns the same
          pattern values anyway, so use that instead.

          You may use the turbulence keyword inside slope
          pattern definitions. It is syntactically correct
          but may show unexpected results. The reason is
          that turbulence is a 3-dimensional distortion of
          a pattern; since slope is defined only on object
          surfaces, a 3-dimensional turbulence is not
          applicable to the slope component. If you use the
          extended variants, the total pattern is
          calculated from the slope and the altitude
          components: while the slope component is not
          affected by turbulence, the altitude component
          is. You can produce nice results with this, but
          they are difficult to control.
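
          For instance, in this sketch (again reusing
          T_Rock and T_Snow; the name T_TurbSnow is ours)
          the turbulence distorts the altitude component
          only, giving a ragged snow line:

               #declare T_TurbSnow = texture {
                   slope y, y
                   turbulence 0.3  // affects only the altitude component
                   texture_map {
                       [0.5 T_Rock]
                       [0.5 T_Snow]
                   }
               }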
 



 

 function-based textures

          The same functions that may be used to create
          isosurfaces may be used to create textures that
          vary as a function of the position in space.
 
          Example:
          #include "colors.inc"
          #declare GRID1 =
             function { min(min(abs(cos(z)),
                  abs(cos(y))), abs(cos(x))) }
 
          sphere { 0, 3
            pigment { White }  // from colors.inc; makes the pattern visible
            normal {
              function GRID1
              scale 0.25
            }
          }
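
          The same function can also drive a pigment. A
          hedged sketch, assuming the function-pattern
          syntax of the isosurface patch with a color_map
          indexed by the function value:

          sphere { 0, 3
            pigment {
              function GRID1
              color_map {
                [0.0 color Black]  // where the function returns 0
                [1.0 color White]  // where the function returns 1
              }
              scale 0.25
            }
          }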
 


 

 UV mapping

          All textures in POV are defined in 3 dimensions.
          Even planar image mapping is done this way.

          However, it is sometimes more desirable to have
          the texture defined for the surface of the
          object. This is especially true for
          bicubic_patch objects and mesh objects, which can
          be stretched and compressed. When the object is
          stretched or compressed, it would be nice for the
          texture to be "glued" to the object's surface and
          follow the object's deformations.

          A new keyword has been added to object modifiers.
          If the first modifier in an object is
          "uv_mapping", then that object's texture will be
          mapped on via UV coordinates.  This is done by
          taking a slice of the object's regular 3D texture
          from the XY plane and wrapping it around the
          surface of the object, following the object's UV
          surface coordinates.
 

          Example:
          bicubic_patch
          {
            ...
            uv_mapping
            texture { MyFavoriteWoodTexture }
            scale A
            rotate B
            translate C
          }

          There is a new keyword, uv_vectors.  This keyword
          can be used in bicubic patches to set the UV
          coordinates for the starting and ending corners
          of the patch.  The syntax is

            uv_vectors <start>,<end>

          and the default is

            uv_vectors <0,0>,<1,1>

          If you had another patch sitting right next to
          this (as happens often with sPatch or Moray), you
          could map the exact same texture to it but use
          something like

            uv_vectors <0,1>,<1,2>

          (depending on which side of this patch the other
          is on) so that the two textures fit seamlessly
          together.

          This new keyword also shows up in triangle meshes
          (the original kind), and soon it will be in
          single triangles, too.  Inside each mesh
          triangle, you can specify the UV coordinates for
          each of the three vertices with:

            uv_vectors <uv1>,<uv2>,<uv3>

          This goes right after the coordinates (or
          coordinates & normals with smooth triangles) and
          right before the texture.
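
          Putting it all together, a sketch of two mesh
          triangles sharing an edge (MyFavoriteWoodTexture
          as in the example above):

          mesh {
            triangle {
              <0,0,0>, <1,0,0>, <0,1,0>
              uv_vectors <0,0>, <1,0>, <0,1>
            }
            triangle {
              <1,0,0>, <1,1,0>, <0,1,0>
              uv_vectors <1,0>, <1,1>, <0,1>
            }
            uv_mapping
            texture { MyFavoriteWoodTexture }
          }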



 

 Cylindrical, Spherical and Toroidal Warps

 
          Syntax:
            warp {
              cylindrical
              [ orientation VECTOR | dist_exp FLOAT ]
            }

            warp {
              spherical
              [ orientation VECTOR | dist_exp FLOAT ]
            }

            warp {
              toroidal
              [ orientation VECTOR | dist_exp FLOAT | major_radius FLOAT ]
            }
 

          defaults:
            orientation   <0,0,1>
            dist_exp      0
            major_radius  1

          These warps essentially use the same mapping that
          the image maps use. This way we can wrap
          checkers, bricks, hexagon, and other patterns
          around spheres, toruses, cylinders, and other
          objects. The mapping wraps around the y axis.
          However, it is a 3D mapping, so some concession
          had to be made regarding depth.

          Depth is controlled by dist_exp (the distance
          exponent). With the default value of 0, imagine a
          box from <0,0> to <1,1> stretched to infinity
          along the orientation vector; the warp takes its
          points from that box (except for the cylindrical
          warp, where the y value is not warped if you use
          the default orientation).

          For a sphere, the distance is the distance from
          the origin; for a cylinder, it is the distance
          from the y axis; and for a torus, it is the
          distance from the circle described by the major
          radius (or the minor radius, if you prefer to
          look at it that way).

          Strictly speaking, the box extends from <0,0> to
          <dist^dist_exp, dist^dist_exp>. This way, if the
          object is not a torus, cylinder, or sphere, the
          texture does not look stretched unless you want
          it to.
 

          Examples:
          torus {
            1, 0.5
            pigment {
              hexagon
              scale 0.1
              warp {
                toroidal
                orientation y
                dist_exp 1
                major_radius 1
              }
            }
          }

          sphere {
            0,1
            pigment {
              hexagon
              scale <0.5/pi,0.25/pi,1>*0.1
              warp {
                spherical
                orientation y
                dist_exp 1
              }
            }
          }

          cylinder {
            -y,y,1
            pigment {
              hexagon
              scale <0.5/pi,1,1>*0.1
              warp {
                cylindrical
                orientation y
                dist_exp 1
              }
            }
          }



 

 Planar Warps

 
          Syntax:
            warp {planar [ VECTOR , FLOAT ]}
 
          default:
            warp {planar <0,0,1>,0}

          This warp was made to help the spherical,
          cylindrical, and toroidal warps act more like the
          2D image mapping. It maps each point onto a plane
          defined by a normal (the VECTOR) and an offset
          (the FLOAT), just as when defining a plane
          object.
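
          Example (a sketch only): projecting a checker
          pattern along z onto the z=0 plane, so the
          pattern does not vary with depth:

          sphere {
            0, 1
            pigment {
              checker color rgb 0, color rgb 1
              scale 0.25
              warp { planar <0,0,1>, 0 }
            }
          }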



 

 Bitmap Modifiers

 
 map_type 8

          map_type 8 was an early attempt at UV mapping of
          textures.  It is only valid for bitmap images, and
          it only works with bicubic_patches and parametric
          functions.  It is included here for backward
          compatibility; the newer syntax for UV mapping
          should be used for new scenes where possible.
 

     Example
          pigment {
            image_map {
              gif "..\povscn\level3\Ionic5\congo4.gif"
              map_type 8
            }
          }
 
 

 
 

                       Global Settings

 

 reflection_blur_max

          Specifies a maximum trace level for blurring.
          If the current trace_level is higher than this
          value, no blurring is done and only one sample is
          used.  The default is 5, the same as the default
          max_trace_level.
 


 reflection_blur_max_adc

          Specifies the ADC bailout for blurring.
          If the ADC weight is less than this value, no
          blurring is done and only one sample is used.
          The default is 1/255, the same as the default
          adc_bailout.
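
          Both settings belong in the global_settings
          block; a sketch with illustrative values:

          global_settings {
            reflection_blur_max 3        // no blurring past trace level 3
            reflection_blur_max_adc 0.01 // ...or below an ADC weight of 0.01
          }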


 string_encoding

          STRING_ENCODING:
            string_encoding "UTF8" | "ISO8859_1" |
                            "Cp1252" | "MacRoman"

          Specifies the encoding used for extended
          character sets.  This is currently only used for
          text objects, but may be extended to other string
          manipulation functions in the future.  The double
          quotes are required.  The default encoding is
          UTF8.
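
          A sketch of how this might be used with a text
          object (timrom.ttf ships with the official
          distribution; the string is illustrative):

          global_settings { string_encoding "ISO8859_1" }

          text {
            ttf "timrom.ttf" "Café" 0.25, 0  // 'é' read as Latin-1
            pigment { color rgb 1 }
          }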
 







 
 
 

 Where to find the official POV-Ray

 
          Note: POV-Ray is available on many standalone
          BBS systems as well as on the Internet.  If you
          do not have ready access to the Internet, please
          consult the document entitled "POVWHERE.GET",
          included with this distribution, for alternative
          download sites.

          The Internet home of POV-Ray is
          http://www.povray.org and ftp://ftp.povray.org.
          Please stop by often for the latest files, utilities,
          news and images from the official POV-Ray internet
          site.