  Re: Using POV to render voxels  
From: Christopher James Huff
Date: 29 May 2003 22:32:35
Message: <cjameshuff-A09F0D.21340429052003@netplex.aussie.org>
In article <web.3ed68f3b1187fffdda05be620@news.povray.org>,
 "Peskanov" <nomail@nomail> wrote:

> >Good lighting? What kind of lighting do you expect?
> 
> Global lighting (sun), with shadows, would be enough.

This may be a problem then. POV's radiosity is dependent on the camera 
location, and you won't have a camera.


> >Highlights and some other effects are also dependent on the direction of
> >the incoming ray. More importantly, POV-Ray's global illumination is
> >viewpoint-dependent.
> 
> Then I will try to calculate the lighting using a different vector. One ray
> to get the intersections, another ray to get the lighting; what do you think?

I don't know what you expect to get from this; you will still be 
computing the lighting from a single direction. The basic problem is 
this: you are storing color data, and the colors depend on the viewing 
direction, which is simply not available at the time you are generating 
the voxels. You won't have any highlights, reflection, refraction, 
iridescence, or anything else that is dependent on viewing direction 
unless you save that information as well and build a renderer to use it.
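
To make the view dependence concrete, here's a minimal Blinn-Phong 
style specular term (the Vec3 type and the names are just illustrative, 
not POV-Ray's internals). The diffuse part only needs the light 
direction, so it could be baked into a voxel; this part needs the 
direction toward the eye, which you don't have at bake time:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3 &a, const Vec3 &b)
    { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static Vec3 normalize(const Vec3 &v)
    {
        double len = std::sqrt(dot(v, v));
        return Vec3{ v.x/len, v.y/len, v.z/len };
    }

    // Blinn-Phong style specular term.  A diffuse term like
    // max(0, dot(normal, to_light)) could be baked per voxel, but this
    // one can't: it depends on to_eye, which doesn't exist at bake time.
    double specular(const Vec3 &normal, const Vec3 &to_light,
                    const Vec3 &to_eye, double roughness)
    {
        Vec3 half = normalize(Vec3{ to_light.x + to_eye.x,
                                    to_light.y + to_eye.y,
                                    to_light.z + to_eye.z });
        double nh = std::max(0.0, dot(normal, half));
        return std::pow(nh, 1.0 / roughness);
    }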

You could generate multiple voxel fields from different viewpoints and 
interpolate, but this would greatly increase file size and memory usage.


> >It would also be helpful if you mentioned what voxelfield resolutions
> >you are thinking of. Unless you are using really fine-grained voxel
> >fields, it could easily be faster...insideness calculations are usually
> >a lot easier than intersection calculations.
> 
> Which functions are used to get the pigment from a coordinate?

You don't. A point is not enough information; you need the intersection 
and ray as well. The function is Determine_Apparent_Colour(), which is 
implemented in lighting.cpp.
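
Just to show the data flow, here's a sketch of what coloring a voxel 
actually requires. trace_first_hit() and shade() are trivial 
placeholders standing in for POV-Ray's real internals (the actual 
shading entry point is Determine_Apparent_Colour() in lighting.cpp); 
none of these names or signatures are POV-Ray's API:

    #include <cstdio>

    struct Vec3   { double x, y, z; };
    struct Ray    { Vec3 origin, dir; };
    struct Hit    { Vec3 point, normal; };  // real code also carries the
                                            // object, texture, depth, ...
    struct Colour { double r, g, b; };

    static bool trace_first_hit(const Ray &ray, Hit &hit)
    {
        // Placeholder: pretend every ray hits something one unit along it.
        hit.point  = Vec3{ ray.origin.x + ray.dir.x,
                           ray.origin.y + ray.dir.y,
                           ray.origin.z + ray.dir.z };
        hit.normal = Vec3{ 0, 0, 1 };
        return true;
    }

    // Placeholder for Determine_Apparent_Colour(): note it needs the
    // ray, not just the hit point, because the result is view-dependent.
    static Colour shade(const Hit &hit, const Ray &ray)
    {
        (void)hit; (void)ray;
        return Colour{ 0.5, 0.5, 0.5 };
    }

    // Coloring a voxel still means picking *some* eye point and shooting
    // a ray at the voxel center; a bare coordinate isn't enough input.
    static bool colour_voxel(const Vec3 &center, const Vec3 &eye,
                             Colour &out)
    {
        Ray ray = { eye, Vec3{ center.x - eye.x,
                               center.y - eye.y,
                               center.z - eye.z } };  // normalization omitted
        Hit hit;
        if (!trace_first_hit(ray, hit))
            return false;
        out = shade(hit, ray);
        return true;
    }

    int main()
    {
        Colour c;
        if (colour_voxel(Vec3{1, 2, 3}, Vec3{0, 0, 10}, c))
            std::printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);
        return 0;
    }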


> I am using high resolution voxels, 12 bits per side (4096*4096*4096).
> Note that I am not interested in the interiors of closed objects. My
> objective is not to show "slices", but to show very complex scenes
> (nature scenes, for example).

A full field would be 192GB at 24 bit/voxel, so I take it you're talking 
about some sort of "sparse" voxel field, storing only the voxels that 
are on the surface of an object...unless I goofed with the math, as long 
as less than 40% of the available voxels are used, the file size will be 
smaller. Given the large amounts of empty space in most scenes, this 
should be quite a bit more efficient. You are also likely to end up with 
long lines of voxels, RLE may still come in useful here...it might be 
interesting to try to figure out a way to have runs along each axis.
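
For what it's worth, the arithmetic, assuming each sparse voxel stores 
three 12-bit coordinates plus a 24-bit color (that per-voxel layout is 
my assumption, not something from your format):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        const std::uint64_t side   = 4096;                // 12-bit coords
        const std::uint64_t voxels = side * side * side;  // 2^36 voxels

        // Dense field: 24 bits (3 bytes) of color per voxel.
        const std::uint64_t dense_bytes = voxels * 3;
        std::printf("dense: %llu bytes = %.0f GiB\n",
                    (unsigned long long)dense_bytes,
                    dense_bytes / (1024.0 * 1024.0 * 1024.0)); // 192 GiB

        // Sparse field: 3 x 12-bit coordinates + 24-bit color = 60 bits.
        const double sparse_bits = 3 * 12 + 24;
        const double dense_bits  = 24;

        // Sparse wins while occupancy is below 24/60 = 40% of all voxels.
        std::printf("break-even occupancy: %.0f%%\n",
                    100.0 * dense_bits / sparse_bits);
        return 0;
    }

With RLE along one axis you'd store a start coordinate and a run length 
instead of a coordinate per voxel, which would push that break-even 
occupancy even higher.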


> If your aim is to compress the data as much as you can, and the voxel data is
> really volumetric (like tomography), 3D wavelets should be your best shot.
> I think that encoding voxels as the most simple wavelets and using a general
> block sorting compressor with big buffers (bzip2 -9 for example) should
> give very good results. Repetition is very strong in voxels, but sometimes
> the copies are too far for common compressors.

Yes, I've heard of people doing work on wavelet compression of voxels, 
but that goes beyond anything I plan to implement.


> No, the format is for real time. As I am telling you, the resolution of the
> voxel field is big. This means I can't store it in memory uncompressed, and
> also rendering would be muuuuuch slower.

You're saying you plan on reading and decoding the voxel file for every 
frame in real-time? That sounds...unlikely. Are you planning on somehow 
keeping only the visible voxels in RAM, discarding hidden ones and 
loading newly visible ones as the viewpoint changes? My first thought is 
that all that "housekeeping" would slow things even further, though I 
could be wrong.

BTW, something fairly similar that I've also been looking at is point 
field geometry...basically an alternative to triangle/polygon meshes 
where you store a bunch of points on the object surface. It can even be 
raytraced directly:
http://graphics.lcs.mit.edu/~gs/papers/psp/
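
If you want a feel for how the direct raytracing can work, the usual 
trick is to give each point a small oriented disc (a "splat") and 
intersect rays with that. The routine below is just a generic 
illustration of that idea, not the algorithm from the paper above:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3 &a, const Vec3 &b)
    { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // A surface point sample ("splat"): position, normal, and a radius
    // chosen so neighbouring splats overlap and cover the surface.
    struct Splat { Vec3 pos, normal; double radius; };

    // Intersect a ray with the splat's disc.  Returns true and the ray
    // parameter t on a hit.
    bool intersect_splat(const Vec3 &orig, const Vec3 &dir,
                         const Splat &s, double &t)
    {
        double denom = dot(dir, s.normal);
        if (std::fabs(denom) < 1e-9)
            return false;                     // ray parallel to the disc
        Vec3 to_plane = { s.pos.x - orig.x,
                          s.pos.y - orig.y,
                          s.pos.z - orig.z };
        t = dot(to_plane, s.normal) / denom;  // hit the splat's plane
        if (t <= 0.0)
            return false;
        Vec3 hit = { orig.x + t*dir.x - s.pos.x,
                     orig.y + t*dir.y - s.pos.y,
                     orig.z + t*dir.z - s.pos.z };
        return dot(hit, hit) <= s.radius * s.radius;  // inside the disc?
    }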

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/

