Hello everybody,
I have programmed a 3D voxel renderer (like in medical tomography) and now I
am looking for good quality scenes to test.
A friend, Jaime Vives, has some really nice scenes in POV, and I was
wondering if I could use POV to render them into the voxel field instead of
into a flat 2D image.
I have seen a thread about voxel & pov, but it does not contain what I am
looking for.
I have thought of a method to accomplish voxel rendering with any raytracer,
but I don't know how feasible my idea is within the POV framework.
I need:
- To cast parallel rays over the scene.
- To know all the points cut by the ray within a known distance. Only for
"empty" objects, not volumetric objects simulating clouds, etc...
- To use a different ray of incidence to compute the lighting, or to set a
general light and discard light coming from reflection. I want to obtain the
intersection of the ray and the object, but the ray does not come from the
camera! The angle of the parallel rays must not add any lighting information.
With these three points, I think I can render a good-looking voxel scene.
Can any POV guru offer some advice? I have a lot of experience in C & C++ and
in real-time 3D engines, but little with raytracers.
BTW, I am using Mac OS X. I hope there are no serious problems with the
sources...
In article <web.3ecab077a576dc40da05be620@news.povray.org>,
"Peskanov" <nomail@nomail> wrote:
> I have programmed a 3D voxel renderer (like in medical tomography) and now I
> am looking for good quality scenes to test.
> A friend, Jaime Vives, has some really nice scenes in POV, and I was
> wondering if I could use POV to render them into the voxel field instead of
> into a flat 2D image.
> I have seen a thread about voxel & pov, but it does not contain what I am
> looking for.
You want to render a POV scene to a voxel field? This can not be done
directly, but there are several ways you could approach it. You could
put the objects to be rendered in the voxel field into a union and
intersect them with a pair of planes to get a "slice" of the scene, you
could then render several frames with the planes progressing along the
object, and combine the frames into a final voxel field for display.
This all requires a lot of manual modifications to the scene, though,
and it sounds like you want some kind of patch that gives voxel output.
This is an interesting idea, and I can think of a couple of possible
methods. I'm not clear on how much lighting calculation you want to do
immediately or how realistic your voxel rendering is to be.
One possibility is to march through the lattice of voxels covering the
scene, and for each voxel compute the average of the pigments for each
object that contains that point. This will lose any infinitely thin
objects; for those you would need to use raytracing. By collecting the
pigment information and saving light source information to a file for
your voxel renderer, you could get a shaded color view of the scene. You
could save more texture information to get a more realistic rendering,
but at the cost of a lot of storage.
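To make the lattice-march idea concrete, here is a rough, untested sketch.
The Object interface and voxelize() function are just stand-ins I made up for
this post, not the actual POV internals; in a real patch you would go through
POV's own per-object insideness test and pigment evaluation instead.

#include <cstddef>
#include <vector>

// Stand-ins for the relevant POV internals -- not the real API.
struct Color { double r, g, b; };

struct Object {
    // Would wrap the per-object insideness test and pigment evaluation.
    virtual bool inside(double x, double y, double z) const = 0;
    virtual Color pigmentAt(double x, double y, double z) const = 0;
    virtual ~Object() {}
};

// March an n*n*n lattice over the cube [lo, hi]^3 and average the pigments
// of every object that contains each sample point.
std::vector<Color> voxelize(const std::vector<const Object*>& objects,
                            double lo, double hi, int n)
{
    std::vector<Color> field(std::size_t(n) * n * n);
    const double step = (hi - lo) / n;
    for (int k = 0; k < n; k++)
    for (int j = 0; j < n; j++)
    for (int i = 0; i < n; i++) {
        const double x = lo + (i + 0.5) * step;
        const double y = lo + (j + 0.5) * step;
        const double z = lo + (k + 0.5) * step;
        Color sum = { 0, 0, 0 };
        int count = 0;
        for (std::size_t o = 0; o < objects.size(); o++) {
            if (objects[o]->inside(x, y, z)) {
                Color c = objects[o]->pigmentAt(x, y, z);
                sum.r += c.r; sum.g += c.g; sum.b += c.b;
                count++;
            }
        }
        if (count > 0) { sum.r /= count; sum.g /= count; sum.b /= count; }
        field[(std::size_t(k) * n + j) * n + i] = sum;
    }
    return field;
}

The bounding box and resolution would come from whatever region of the scene
you want to capture.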
Another possibility is to do some sort of paletted voxel field, where
you store a texture ID for the voxels and the texture information
separately. This might be much more memory efficient, but slower and
more complex to render.
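Roughly what I have in mind, with the Material struct standing in for
whatever texture data the renderer would actually need (made-up names, not an
existing format):

#include <cstddef>
#include <vector>

// Placeholder for whatever per-texture data the renderer actually needs.
struct Material { unsigned char r, g, b; float diffuse; };

struct PalettedVoxelField {
    int nx, ny, nz;
    std::vector<unsigned short> ids;     // one material index per voxel, 0 = empty
    std::vector<Material>       palette; // shared texture information

    unsigned short& at(int x, int y, int z) {
        return ids[(std::size_t(z) * ny + y) * nx + x];
    }
};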
BTW, what format are you using? I've been working on an extended density
file format for my own experimentation, maybe you would find it
interesting.
> BTW, I am using Mac OS X. I hope there are no serious problems with the
> sources...
If you are using the GUI version, you may want to look into MacMegaPOV,
which has a GUI that works better under OS X than the official version.
The command line version can be compiled for OS X, but it involves some
makefile hacking. It would probably be easiest to install it through
Fink. All this only applies if you are not making your own patch...if
you are patching, the path of least resistance would be to compile the
command line version using the free development tools.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
"Christopher James Huff" <cja### [at] earthlinknet> wrote in message
news:cja### [at] netplexaussieorg...
> One possibility is to march through the lattice of voxels covering the
> scene, and for each voxel compute the average of the pigments for each
> object that contains that point. This will lose any infinitely thin
> objects; for those you would need to use raytracing.
Since you need to do raytracing anyway, why not for each ray calculate the
intersections between _all_ objects (not just the nearest), and store them
in a queue? Then, this queue of intersections could be processed into a
voxel format.
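Something along these lines, maybe (a completely untested sketch; the Hit
struct and the naive inside/outside toggle are just illustrations, not POV's
actual intersection handling):

#include <algorithm>
#include <cstddef>
#include <vector>

// One entry in the per-ray queue of intersections.
struct Hit { double t; int object_id; };

static bool byDistance(const Hit& a, const Hit& b) { return a.t < b.t; }

// Walk a ray's full list of intersections (sorted by distance) and mark the
// voxels the ray passes through while it is inside an object. The simple
// enter/exit toggle assumes closed, non-overlapping objects.
void hitsToVoxelLine(std::vector<Hit> hits, double voxel_size,
                     std::vector<int>& line)   // one cell per voxel along the ray
{
    std::sort(hits.begin(), hits.end(), byDistance);
    bool inside = false;
    double enter_t = 0.0;
    for (std::size_t h = 0; h < hits.size(); h++) {
        if (!inside) {
            enter_t = hits[h].t;               // entering solid space
            inside = true;
        } else {
            inside = false;                    // leaving: fill the covered cells
            int first = int(enter_t / voxel_size);
            int last  = int(hits[h].t / voxel_size);
            for (int i = first; i <= last && i < int(line.size()); i++)
                if (i >= 0) line[i] = hits[h].object_id;
        }
    }
}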
...Chambers
Ben Chambers wrote:
>Since you need to do raytracing anyway, why not for each ray calculate the
>intersections between _all_ objects (not just the nearest), and store them
>in a queue? Then, this queue of intersections could be processed into a
>voxel format.
That was exactly my idea. I intend to do this for the three axes so I don't
lose any plane.
What I would like are some hints about which functions I should look at, as I
am new to the POV sources.
Christopher James Huff wrote:
>You want to render a POV scene to a voxel field? This can not be done
>directly, but there are several ways you could approach it. You could
>put the objects to be rendered in the voxel field into a union and
>intersect them with a pair of planes to get a "slice" of the scene, you
>could then render several frames with the planes progressing along the
>object, and combine the frames into a final voxel field for display.
I have already considered this, but I think I will be unable to get good
lighting that way.
>This all requires a lot of manual modifications to the scene, though,
>and it sounds like you want some kind of patch that gives voxel output.
>This is an interesting idea, and I can think of a couple of possible
>methods. I'm not clear on how much lighting calculation you want to do
>immediately or how realistic your voxel rendering is to be.
I would like to voxelize the scene with global lighting applied. I do not
intend to do any lighting in real time (maybe later I will think about some
reflection/refraction).
>One possibility is to march through the lattice of voxels covering the
>scene, and for each voxel compute the average of the pigments for each
>object that contains that point. This will lose any infinitely thin
>objects; for those you would need to use raytracing. By collecting the
>pigment information and saving light source information to a file for
>your voxel renderer, you could get a shaded color view of the scene.
This method would probably be too slow for the voxel resolution I want, but
thanks for the idea.
My idea is the one Ben described; I just want some code hints from those who
know the sources.
Could you (or anybody) point out, in general terms, the steps needed to find
all the crossings of a ray with the scene within a known range? I am
currently looking at the trace function in render.cpp.
>BTW, what format are you using? I've been working on an extended density
>file format for my own experimentation, maybe you would find it
>interesting.
About the format, I just want to store a final color for each voxel. I will
crunch the result into a variation of an octree I have thought up. I will try
to compress the voxel field by identifying similar sections, although this is
going to be quite slow... What have you come up with? I know several voxel formats
myself; some are much faster for rendering, but waste too much memory...
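To give an idea of the octree crunching I mean, here is a rough sketch. The
types and names are invented for this post, and the tolerance is an
adjustable "matching" factor:

#include <cmath>

struct Color { float r, g, b; };

// One octree node: either a leaf with a single color for its whole block,
// or an inner node with eight children.
struct Node {
    bool  leaf;
    Color color;
    Node* child[8];   // all null for a leaf
};

static float colorDiff(const Color& a, const Color& b) {
    return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
}

// Bottom-up crunch: if all eight children are leaves and their colors match
// within the tolerance, replace them with one leaf holding the average color.
void crunch(Node& n, float tolerance) {
    if (n.leaf) return;
    // First crunch every existing child.
    for (int i = 0; i < 8; i++)
        if (n.child[i]) crunch(*n.child[i], tolerance);
    // Then see whether this whole block can collapse into one leaf.
    Color avg = { 0, 0, 0 };
    for (int i = 0; i < 8; i++) {
        if (!n.child[i] || !n.child[i]->leaf) return;   // missing or inner child
        avg.r += n.child[i]->color.r / 8;
        avg.g += n.child[i]->color.g / 8;
        avg.b += n.child[i]->color.b / 8;
    }
    for (int i = 0; i < 8; i++)
        if (colorDiff(n.child[i]->color, avg) > tolerance) return;
    for (int i = 0; i < 8; i++) { delete n.child[i]; n.child[i] = 0; }
    n.leaf = true;
    n.color = avg;
}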
>If you are using the GUI version, you may want to look into MacMegaPOV,
>which has a GUI that works better under OS X than the official version.
>The command line version can be compiled for OS X, but it involves some
>makefile hacking. It would probably be easiest to install it through
>Fink. All this only applies if you are not making your own patch...if
>you are patching, the path of least resistance would be to compile the
>command line version using the free development tools.
I have it installed through Fink, but I am going to test "MacMegaPOV"; for
some reason I prefer GUIs to command lines...
In article <web.3ed1eef81187fffdda05be620@news.povray.org>,
"Peskanov" <nomail@nomail> wrote:
> >You want to render a POV scene to a voxel field? This can not be done
> >directly, but there are several ways you could approach it. You could
> >put the objects to be rendered in the voxel field into a union and
> >intersect them with a pair of planes to get a "slice" of the scene, you
> >could then render several frames with the planes progressing along the
> >object, and combine the frames into a final voxel field for display.
>
> I have already considered this, but I think I will be unable to get good
> lighting that way.
Good lighting? What kind of lighting do you expect?
> I would like to voxelize the scene with global lighting applied. I do not
> intend to do any lighting in real time (maybe later I will think about some
> reflection/refraction).
Highlights and some other effects are also dependent on the direction of
the incoming ray. More importantly, POV-Ray's global illumination is
viewpoint-dependent.
> This method would probably be too slow for the voxel resolution I want, but
> thanks for the idea.
It would also be helpful if you mentioned what voxel field resolutions
you are thinking of. Unless you are using really fine-grained voxel
fields, it could easily be faster...insideness calculations are usually
a lot easier than intersection calculations.
> Could you (or anybody) point out, in general terms, the steps needed to find
> all the crossings of a ray with the scene within a known range? I am
> currently looking at the trace function in render.cpp.
Trace() in render.cpp is probably the best place to start. Change it to
accumulate all the intersections and figure out some way to translate
that intersection list to a line of voxels and you have a good start.
But as I mentioned, the results will be dependent on the direction of
the incoming ray, so you may not get what you expect. You might have to
disable quite a few effects when generating the voxel field.
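For the "translate to a line of voxels" part, the quantization itself is
straightforward; something like this (illustrative only, not code from the
POV sources):

#include <cmath>

// Map a world-space intersection point to integer voxel indices. lo[] and
// hi[] bound the region being voxelized, n is the resolution per side.
// Purely illustrative names.
void pointToVoxel(const double p[3], const double lo[3], const double hi[3],
                  int n, int voxel[3])
{
    for (int axis = 0; axis < 3; axis++) {
        double t = (p[axis] - lo[axis]) / (hi[axis] - lo[axis]);  // 0..1 inside the box
        int i = int(std::floor(t * n));
        if (i < 0) i = 0;                   // clamp points on the boundary
        if (i >= n) i = n - 1;
        voxel[axis] = i;
    }
}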
> About the format, I just want to store a final color for each voxel. I will
> crunch the result into a variation of an octree I have thought up. I will try
> to compress the voxel field by identifying similar sections, although this is
> going to be quite slow... What have you come up with? I know several voxel formats
You mean generating an oct-tree with large blocks covering multiple
voxels with very similar values? Sort of like dividing an image into a
hierarchy of flat-colored rectangles? I've thought about this as well,
but I don't know how well it would work...maybe a simple run-length
encoding would be more efficient; it would certainly be easier to read
and write. I've considered using (or abusing?) animation file formats,
maybe MNG (http://www.libpng.org/pub/mng/), but for now I have just been
planning on using gzip to compress the data. Another idea I've been
working on is a procedural format...basically a simple bytecode
interpreter that can generate voxel fields of arbitrary resolution at
runtime, sort of like PostScript or other vector image formats.
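The run-length idea really is trivial, something like this per scanline
(just a sketch, not tied to any particular file layout):

#include <cstddef>
#include <vector>

struct Run { unsigned count; unsigned value; };

// Encode one scanline of voxel values as (count, value) pairs.
std::vector<Run> rleEncode(const std::vector<unsigned>& line)
{
    std::vector<Run> runs;
    for (std::size_t i = 0; i < line.size(); i++) {
        if (!runs.empty() && runs.back().value == line[i]) {
            runs.back().count++;
        } else {
            Run r = { 1, line[i] };
            runs.push_back(r);
        }
    }
    return runs;
}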
> myself; some are much faster for rendering, but waste too much memory...
What does the file format have to do with rendering speed? I'm assuming
this is real-time display, so reading the file during rendering is not a
good idea.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
Christopher James Huff wrote:
>Good lighting? What kind of lighting do you expect?
Global lighting (sun), with shadows, would be enough.
>Highlights and some other effects are also dependent on the direction of
>the incoming ray. More importantly, POV-Ray's global illumination is
>viewpoint-dependent.
Then I will try to calculate the lighting using a different vector: one ray
to get the crossings, another ray to gather the light. What do you think?
>It would also be helpful if you mentioned what voxel field resolutions
>you are thinking of. Unless you are using really fine-grained voxel
>fields, it could easily be faster...insideness calculations are usually
>a lot easier than intersection calculations.
Which functions are used to get the pigment at a coordinate?
I am using a high-resolution voxel field, 12 bits per side (4096*4096*4096).
Mind you, I am not interested in the interiors of closed objects. My
objective is not to show "slices", but to show very complex scenes (nature
scenes, for example).
>Trace() in render.cpp is probably the best place to start. Change it to
>accumulate all the intersections and figure out some way to translate
>that intersection list to a line of voxels and you have a good start.
>But as I mentioned, the results will be dependent on the direction of
>the incoming ray, so you may not get what you expect. You might have to
>disable quite a few effects when generating the voxel field.
Ok, thanks.
>You mean generating an oct-tree with large blocks covering multiple
>voxels with very similar values? Sort of like dividing an image into a
>hierarchy of flat-colored rectangles? I've thought about this as well,
>but I don't know how well it would work...
Yes, that's the idea; I think it will work OK for synthetic scenes like POV
ones. They tend to have lots of patterns... Also, I can adjust the
"matching" factor.
>maybe a simple run-length
>encoding would be more efficient; it would certainly be easier to read
>and write. I've considered using (or abusing?) animation file formats,
>maybe MNG (http://www.libpng.org/pub/mng/), but for now I have just been
>planning on using gzip to compress the data. Another idea I've been
>working on is a procedural format...basically a simple bytecode
>interpreter that can generate voxel fields of arbitrary resolution at
>runtime, sort of like PostScript or other vector image formats.
If your aim is to compress the data as much as you can, and the voxel data is
really volumetric (like tomography), 3D wavelets should be your best shot.
I think that encoding the voxels with the simplest wavelets and using a general
block-sorting compressor with big buffers (bzip2 -9, for example) should
give very good results. Repetition is very strong in voxel data, but sometimes
the copies are too far apart for common compressors.
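By "the simplest wavelets" I mean something Haar-like: just sums and
differences of neighbouring voxels, applied along one axis at a time. A
sketch of a single 1D step (my own illustration, nothing more):

#include <cstddef>
#include <vector>

// One level of a Haar-style transform along a row of voxel values: pairs are
// replaced by their sum and difference. It is exactly invertible
// (a = (s + d) / 2, b = (s - d) / 2), and the differences are mostly small
// numbers, which is what the block-sorting compressor then squeezes well.
void haarStep(std::vector<int>& row)
{
    const std::size_t half = row.size() / 2;   // assume an even length here
    std::vector<int> out(row.size(), 0);
    for (std::size_t i = 0; i < half; i++) {
        int a = row[2 * i], b = row[2 * i + 1];
        out[i]        = a + b;                 // low-pass half (sums)
        out[half + i] = a - b;                 // high-pass half (differences)
    }
    row = out;
}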
My interest is to voxelize only the "skins" of the objects, so I guess our
targets are different.
>What does the file format have to do with rendering speed? I'm assuming
>this is real-time display, so reading the file during rendering is not a
>good idea.
No, the format is for real time. As I said, the resolution of the voxel
field is big. This means I can't store it in memory uncompressed, and
rendering would also be much, much slower.
In article <web.3ed68f3b1187fffdda05be620@news.povray.org>,
"Peskanov" <nomail@nomail> wrote:
> >Good lighting? What kind of lighting do you expect?
>
> Global lighting (sun), with shadows, would be enough.
This may be a problem, then. POV radiosity is dependent on camera
location, and you won't have a camera.
> >Highlights and some other effects are also dependent on the direction of
> >the incoming ray. More importantly, POV-Ray's global illumination is
> >viewpoint-dependent.
>
> Then I will try to calculate the lighting using a different vector: one ray
> to get the crossings, another ray to gather the light. What do you think?
I don't know what you expect to get from this; you will still be
computing the lighting from a single direction. The basic problem is
this: you are storing color data, and the colors depend on the viewing
direction, which is just not available at the time you are generating
the voxels. You simply won't have any highlights, reflection,
refraction, iridescence, or whatever else is dependent on viewing
direction unless you save this information as well and build a renderer
to do it.
You could generate multiple voxel fields from different viewpoints and
interpolate, but this would greatly increase file size and memory usage.
> >It would also be helpful if you mentioned what voxel field resolutions
> >you are thinking of. Unless you are using really fine-grained voxel
> >fields, it could easily be faster...insideness calculations are usually
> >a lot easier than intersection calculations.
>
> Which functions are used to get the pigment at a coordinate?
You don't. A point is not enough information; you need the intersection
and ray as well. The function is Determine_Apparent_Colour(), which is
implemented in lighting.cpp.
> I am using a high-resolution voxel field, 12 bits per side (4096*4096*4096).
> Mind you, I am not interested in the interiors of closed objects. My
> objective is not to show "slices", but to show very complex scenes (nature
> scenes, for example).
A full field would be 192GB at 24 bits/voxel, so I take it you're talking
about some sort of "sparse" voxel field, storing only the voxels that are on
the surface of an object. Unless I goofed with the math, as long as less
than 40% of the available voxels are used, the file size will be smaller (a
stored voxel needs roughly 36 bits of coordinates plus 24 bits of color,
versus 24 bits per slot in the full field, and 24/60 = 40%). Given the large
amounts of empty space in most scenes, this should be quite a bit more
efficient. You are also likely to end up with long lines of voxels, so RLE
may still come in useful here...it might be interesting to try to figure out
a way to have runs along each axis.
> If your aim is to compress the data as much as you can, and the voxel data is
> really volumetric (like tomography), 3D wavelets should be your best shot.
> I think that encoding the voxels with the simplest wavelets and using a general
> block-sorting compressor with big buffers (bzip2 -9, for example) should
> give very good results. Repetition is very strong in voxel data, but sometimes
> the copies are too far apart for common compressors.
Yes, I've heard of people doing work on wavelet compression of voxels,
but that goes beyond anything I plan to implement.
> No, the format is for real time. As I said, the resolution of the voxel
> field is big. This means I can't store it in memory uncompressed, and
> rendering would also be much, much slower.
You're saying you plan on reading and decoding the voxel file for every
frame in real-time? That sounds...unlikely. Are you planning on somehow
keeping only the visible voxels in RAM, discarding hidden ones and
loading newly visible ones as the viewpoint changes? My first thought is
that all that "housekeeping" would slow things even further, though I
could be wrong.
BTW, something fairly similar that I've also been looking at is point
field geometry...basically an alternative to triangle/polygon meshes
where you store a bunch of points on the object surface. It can even be
raytraced directly:
http://graphics.lcs.mit.edu/~gs/papers/psp/
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/