> Have you considered writing the data out as a mesh2?
I believe height fields trace considerably faster than their equivalent
meshes.
- Slime
[ http://www.slimeland.com/ ]
---
In article <421cc4e9$1@news.povray.org>, "Slime" <fak### [at] email address>
wrote:
> > Have you considered writing the data out as a mesh2?
>
> I believe height fields trace considerably faster than their equivalent
> meshes.
Have you tried it? (I haven't...I'm really wondering if there's much of
a difference.)
A mesh has other advantages as well. You could create overhangs and
other structures, use a higher-resolution tessellation in areas that
are more important, use a tessellation that doesn't produce such
obvious artifacts, etc. You can also compute the normals more
precisely while you're generating the mesh, rather than estimating them
from the triangle data. Look at the height field macros to see the
difference this can make...they compute the normal by looking at the
slope of the height function.
On the other hand, an image-based height field is easier to
edit...though you really need something that can handle 16-bit images.
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: <chr### [at] tag povray org>
http://tag.povray.org/
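The normal computation Chris describes can be sketched concretely: for a surface y = h(x, z), the exact normal is proportional to <-dh/dx, 1, -dh/dz>, so if you know the height function you can evaluate its slope directly instead of averaging facet normals. A minimal C++ sketch, assuming an arbitrary example height function (the ripple below is an illustration, not anything from the thread):

```cpp
#include <cmath>

// Example height function: a simple ripple (arbitrary choice for illustration).
double height(double x, double z) {
    return 0.1 * std::sin(10.0 * x) * std::cos(10.0 * z);
}

struct Vec3 { double x, y, z; };

// Exact normal of the surface y = height(x, z): normalize(<-dh/dx, 1, -dh/dz>).
// The partials are estimated here by central differences; with a closed-form
// height function you would use its analytic derivatives instead.
Vec3 surface_normal(double x, double z) {
    const double e = 1e-5;
    double dhdx = (height(x + e, z) - height(x - e, z)) / (2.0 * e);
    double dhdz = (height(x, z + e) - height(x, z - e)) / (2.0 * e);
    double len = std::sqrt(dhdx * dhdx + 1.0 + dhdz * dhdz);
    return Vec3{-dhdx / len, 1.0 / len, -dhdz / len};
}
```

This is what the height field macros do in spirit: the normal comes from the slope of the function itself rather than from the triangle data.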
---
In article <421cb7b9$1@news.povray.org>, "Sebastian H." <van### [at] gmx de>
wrote:
> Aha, me too, but linux is my choice.
> As Christopher mentioned libpng is not that hard to use.
> Just yesterday I took a look at the manpage (man libpng) and there are
> several examples on how to write the image data.
> I didn't try it yet, but it didn't look too complicated.
> Maybe within the next week I'll have some C++ code.
Both reading and writing are really pretty easy, though you have to
write a bit more code than is convenient. It would really be nice if
they made an ultra-simple convenience API available. There are wrappers
out there that simplify things, though you will probably need to modify
them to allow 16-bit images.
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: <chr### [at] tag povray org>
http://tag.povray.org/
---
> Have you tried it? (I haven't...I'm really wondering if there's much of
> a difference.)
Yes, but it was a while ago.
- Slime
[ http://www.slimeland.com/ ]
---
Christopher James Huff <cja### [at] earthlink net> wrote:
> > I believe height fields trace considerably faster than their equivalent
> > meshes.
> Have you tried it? (I haven't...I'm really wondering if there's much of
> a difference.)
The tracing of a height field takes advantage of its geometry for
speed, so in theory it should be faster than a generic mesh.
Also, height fields use reference counting in the same way as meshes.
--
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}// - Warp -
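Warp's point about the geometry can be made concrete: because the samples sit on a regular xz grid, a height-field tracer can walk the ray's footprint cell by cell with a simple grid DDA and test at most two triangles per visited cell, with no separately built bounding hierarchy. A deliberately simplified sketch (assumes the ray starts inside the grid and both direction components are nonzero; real height-field code also prunes cells against stored height extremes):

```cpp
#include <cmath>
#include <vector>

struct Cell { int ix, iz; };

// 2D grid walk over unit cells in the xz plane. A height-field tracer visits
// only these cells along the ray, testing at most two triangles per cell;
// a general mesh needs an explicitly built spatial structure to get
// comparable pruning.
std::vector<Cell> grid_cells(double x, double z, double dx, double dz,
                             int nx, int nz) {
    int ix = (int)std::floor(x), iz = (int)std::floor(z);
    int sx = dx > 0 ? 1 : -1, sz = dz > 0 ? 1 : -1;
    // Ray parameter t at which the next x/z cell boundary is crossed.
    double tx = ((dx > 0 ? ix + 1 : ix) - x) / dx;
    double tz = ((dz > 0 ? iz + 1 : iz) - z) / dz;
    // Parameter distance between successive boundary crossings on each axis.
    double ddx = std::fabs(1.0 / dx), ddz = std::fabs(1.0 / dz);
    std::vector<Cell> out;
    while (ix >= 0 && ix < nx && iz >= 0 && iz < nz) {
        out.push_back({ix, iz});
        if (tx < tz) { ix += sx; tx += ddx; }
        else         { iz += sz; tz += ddz; }
    }
    return out;
}
```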
---
I have written a program in C++ with VIDE to make 8- or 16-bit height
fields from contour lines. It's for Win98.
It's at http://leroywhetstone.s5.com/
I used PPM to save the 16-bit height fields, TGA for the 8-bit height
fields, and BMP for the contour map files. I hand-coded the PPM and TGA
file functions and used Windows functions for BMP.
If the program doesn't do what you need, e-mail me and I can send you
the code. It might be a little messy, and even if you don't have VIDE
you may still be able to figure it out.
scott wrote:
> I'm writing some code in C++ (on windows) to generate some 16-bit height
> field data. What is going to be the easiest way for me to export the
> numbers to use in POV? The PNG file format looks pretty complex (compared
> to BMP which is what I'm used to).
>
> Is there any way to make POV use red+256*green or something like that from a
> BMP file? Or some software that will make red+256*green into 16-bit grey
> PNG?
>
> Thanks
>
> Scott
>
>
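For reference, the netpbm route Leroy describes takes very little code: a binary PGM (P5) with a maxval of 65535 stores one 16-bit sample per pixel, most significant byte first. A minimal C++ sketch (file name and dimensions are whatever you choose; check that your POV-Ray version accepts 16-bit PGM/PPM height fields before relying on it):

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Write height samples (0..65535) as a binary 16-bit grayscale PGM (type P5).
// Per the netpbm format, samples with maxval > 255 are stored as two bytes
// each, high byte first.
bool write_pgm16(const char* path, int width, int height,
                 const std::vector<uint16_t>& samples) {
    if (samples.size() != static_cast<size_t>(width) * height) return false;
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    out << "P5\n" << width << " " << height << "\n65535\n";
    for (uint16_t v : samples) {
        out.put(static_cast<char>(v >> 8));   // high byte first (big-endian)
        out.put(static_cast<char>(v & 0xFF)); // low byte
    }
    return static_cast<bool>(out);
}
```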
---
I mean to the mesh, in that the mesh consists of vertices, edges,
faces, uv_vectors, and normals. Transforming the vertices will
generally transform the edges and faces with which they are associated,
unless my understanding of meshes is fundamentally flawed. Normally you
would want UV coordinates preserved, not transformed; I didn't add
uv_vectors in the first place, so it wasn't relevant here, though I can
see how it might be in other cases. That leaves normals, which are
transformed similarly to the vertices without any ill effects, at least
in the transforms I performed. I'd be interested in knowing
specifically what problems you foresee with this sort of mesh
transformation so that I can try to come up with a solution.
Thanks,
Peter D.
Warp wrote:
> Peter Duthie <pd_### [at] warlordsofbeer com> wrote:
>
>>does a non linear transform (cylindrical wrapping)
>
>
> ... to the vertex points, you mean?
>
> (Technically a mesh is more than just its vertex points, and performing
> a transformation on the vertex points is not the same thing as performing
> a transformation on the mesh.)
>
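One caveat on moving normals "similarly to the vertices": that is only safe when the transform is locally a rotation, translation, or uniform scale. In general a normal must be multiplied by the inverse transpose of the (local) transformation matrix and renormalized, or it drifts off perpendicular. A small C++ illustration using a non-uniform scale (a made-up example, not code from the thread):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Transform a normal under the non-uniform scale diag(sx, sy, sz).
// Vertices are multiplied by the matrix itself; normals must be multiplied
// by its inverse transpose (for a diagonal scale, simply diag(1/sx, 1/sy,
// 1/sz)) and then renormalized.
Vec3 scale_normal(const Vec3& n, double sx, double sy, double sz) {
    Vec3 r{n.x / sx, n.y / sy, n.z / sz};
    double len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    return Vec3{r.x / len, r.y / len, r.z / len};
}
```

For a non-linear warp such as a cylindrical wrap, the same rule applies per vertex using the Jacobian of the warp at that vertex; a pure wrap is locally close to a rotation, which may be why treating normals like vertices showed no visible ill effects here.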
---
Peter Duthie <pd_### [at] warlordsofbeer com> wrote:
> I mean to the mesh, in that the mesh consists of vertices, edges,
> faces, uv_vectors, and normals. Transforming the vertices will
> generally transform the edges and faces with which they are associated,
> unless my understanding of meshes is fundamentally flawed.
We are talking about non-linear transformation here.
Moving the vertices around will only apply a linear transformation
to the edges and faces.
Applying a true non-linear transformation to the mesh would bend the
edges of the triangles (and thus their surfaces). However, no renderer
I know of can do this (some renderers subdivide the triangles to get
more "bending", but it's still just an approximation, not a true
non-linear transformation).
Usually a "non-linear" transformation of a mesh is performed by just
moving the vertices (and normals). The edges and faces keep straight
regardless (which means that only linear transformations are performed
to them in practice).
> I'd be interested in
> knowing specifically what problems you can foresee with this sort of
> mesh transformation so that I can try to come up with a solution.
The problem is that the "non-linear" transformation of a mesh is only
as good as the size of its triangles.
If you have for example a box consisting of two triangles per side,
twisting the box is basically impossible (without subdividing the
triangles).
--
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}// - Warp -
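The twist Warp uses as an example is easy to write down: each vertex rotates about the y axis by an angle proportional to its height. A minimal sketch (hypothetical C++, not POV-Ray code); note the operation is strictly per vertex, and the edges between the moved vertices stay straight, which is exactly why the result is only as good as the tessellation is fine:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Twist: rotate each vertex about the y axis by an angle proportional to its
// y coordinate. A two-triangles-per-side box stays a box under this, because
// only its eight corner vertices move; you must subdivide before any actual
// twisting becomes visible.
Vec3 twist(const Vec3& v, double radians_per_unit) {
    double a = radians_per_unit * v.y;
    double c = std::cos(a), s = std::sin(a);
    return Vec3{c * v.x + s * v.z, v.y, -s * v.x + c * v.z};
}
```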
---
Ok, I see what you're talking about now. Yes, as we're talking about
meshes, the non-linear transformations occur on the vertices and
normals. I agree that there is indeed no way to apply a non-linear
transformation to edges and faces in conventional meshes, as they are
represented as a relationship between vertices, not as independent
entities. When you think about it, even subdividing meshes isn't
applying a non-linear transformation to edges and faces; what is
actually happening is a linear transformation plus one or more
additional edges and faces per original edge and face. The only way to
do this would be to change the format of the mesh such that the edges
were represented as an algorithm rather than a relationship, and then it
would not be a mesh anymore, more like a collection of Bezier patches.
Peter D.
Warp wrote:
> Peter Duthie <pd_### [at] warlordsofbeer com> wrote:
>
>>I mean to the mesh, in that the mesh consists of vertices, edges,
>>faces, uv_vectors, and normals. Transforming the vertices will
>>generally transform the edges and faces with which they are associated,
>>unless my understanding of meshes is fundamentally flawed.
>
>
> We are talking about non-linear transformation here.
>
> Moving the vertices around will only apply a linear transformation
> to the edges and faces.
> Applying a true non-linear transformation to the mesh would bend the
> edges of the triangles (and thus their surfaces). However, no renderer
> I know of can do this (some renderers subdivide the triangles to get
> more "bending", but it's still just an approximation, not a true
> non-linear transformation).
>
> Usually a "non-linear" transformation of a mesh is performed by just
> moving the vertices (and normals). The edges and faces keep straight
> regardless (which means that only linear transformations are performed
> to them in practice).
>
>
>>I'd be interested in
>>knowing specifically what problems you can foresee with this sort of
>>mesh transformation so that I can try to come up with a solution.
>
>
> The problem is that the "non-linear" transformation of a mesh is only
> as good as the size of its triangles.
> If you have for example a box consisting of two triangles per side,
> twisting the box is basically impossible (without subdividing the
> triangles).
>
---
Peter Duthie <pd_### [at] warlordsofbeer com> wrote:
> The only way to
> do this would be to change the format of the mesh such that the edges
> were represented as an algorithm rather than a relationship, and then it
> would not be a mesh anymore, more like a collection of Bezier patches.
That's where NURBS kick in... :P
--
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}// - Warp -