Re: Can someone patch POV so that you can output an isosurface as a wire frame?
From: Christopher James Huff
Date: 7 Nov 2002 19:52:38
Message: <chrishuff-64A8BE.19522507112002@netplex.aussie.org>
In article <web.3dcb0362195c50e4a5ab7de50@news.povray.org>,
 "normdoering" <nor### [at] yahoocom> wrote:

> But I meant something more abstract than mesh or wireframe... that's the
> problem I have communicating this idea. When I say "The wireframe (or mesh)
> is IN THERE" I don't mean there's a mesh2 file or wireframe there, I mean
> the information, the 3D data, must be accessible in some form or else you
> couldn't even ray trace it.

What POV does is calculate the intersection of a ray with the surface. 
That's all it knows about: "this surface intersects this ray at this 
distance along the ray". You could get a mishmash of points on the 
surface that are visible from the camera, but this is not enough 
information to reliably create a wireframe or mesh. There are algorithms 
for analyzing these "point clouds", but there is no perfect way of doing 
it, because some of the information is just lost.
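
Incidentally, that "points visible from the camera" idea is something you 
can already sketch in a plain scene file with the trace() function, which 
gives you exactly that one piece of information: the first intersection of 
a ray with an object. The object, file name and 100x100 resolution below 
are just placeholders for illustration. What comes out is an unconnected 
point cloud, which is precisely the kind of data that still needs a 
reconstruction algorithm before you have a mesh:

// Rough sketch, not a patch: collect the first ray/surface intersections
// seen from a camera position and write them out as a raw point cloud.
// "Obj", "CamPos" and "points.txt" are made-up names.
#declare Obj = isosurface {
  function { x*x + y*y + z*z - 1 }      // unit sphere as a stand-in
  contained_by { box { -2, 2 } }
}
#declare CamPos = <0, 0, -4>;

#fopen PointFile "points.txt" write

#declare V = 0;
#while(V < 100)
  #declare U = 0;
  #while(U < 100)
    // aim a ray through a 2x2 "screen" in front of the camera
    #declare Dir = <U/100*2 - 1, V/100*2 - 1, 2>;
    #declare Norm = <0, 0, 0>;
    #declare Hit = trace(Obj, CamPos, Dir, Norm);
    #if(vlength(Norm) > 0)              // a zero normal means "no hit"
      #write(PointFile, Hit.x, ", ", Hit.y, ", ", Hit.z, "\n")
    #end
    #declare U = U + 1;
  #end
  #declare V = V + 1;
#end

#fclose PointFile

You get a bag of surface points and nothing else: no connectivity, and 
nothing at all for the parts of the surface the camera can't see.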


> I suppose isosurfaces calculate the information on the fly as the image
> needs it and doesn't save that information to a swap file or anything like
> that (excuse my lack of programming terminology) but it does calculate some
> form of 3D information in a way I don't understand and it can use the u,v /
> x,y information from an image. The image itself tells you how many vertices
> are needed, one for each pixel, the isosurface is calculating some form of
> z value based on the greyscale so you must have the information you need to
> create a mesh2.

What you describe is a depth buffer. MegaPOV's post_process patch 
allowed output of depth information, or you could fake it with fog or 
textures. I'm pretty sure this isn't what you want, though: it doesn't 
match anything you've said before. All it will do is let you render the 
image and use it as a height field (either a height_field primitive or 
a macro-generated mesh).
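
If a depth map really is all you want, the texture fake is simple enough 
to sketch (the object, ramp range and file name here are placeholders, 
and MegaPOV's post_process depth output is the cleaner route): shade the 
object with nothing but a grayscale gradient along the viewing axis, so 
the rendered image itself is the depth buffer, then feed that image to a 
height_field.

camera { location <0, 0, -4> look_at 0 }

isosurface {
  function { x*x + y*y + z*z - 1 }
  contained_by { box { -2, 2 } }
  texture {
    pigment {
      gradient z                 // ramp along the viewing axis
      color_map {
        [0 color rgb 0]          // near = black
        [1 color rgb 1]          // far  = white
      }
      scale 4                    // one ramp over the container's depth
      translate -2*z
    }
    finish { ambient 1 diffuse 0 }   // no lighting, just the depth shades
  }
}

// Second scene: the rendered image displaces a height field.
// height_field {
//   png "depth_render.png"
//   smooth
//   translate <-0.5, 0, -0.5>
// }

The ambient 1 / diffuse 0 finish is the important bit: any lighting, 
shadows or reflections would pollute the gray values the height_field 
reads back as heights.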

The result will be an object that might look barely acceptable from one 
direction, if you don't have any shadows, refractions, or reflections to 
give it away and don't change the camera. The first ray intersection 
data alone is pretty much useless, especially for mesh processing. The 
only thing it's really been used for is to control post-processing 
filters.


> Are there any mesh modelers out there that can do this?

Not really sure what you want here.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/

