Thanks to everyone for all the hints, I got it working now. In case someone else finds this page via Google, here is the solution:
1) Create a depth map of the scene. At first I wanted to use the built-in depth-map support of MegaPOV, but it turned out that this takes much longer than rendering the image a second time in POV-Ray, using a Z-gradient texture like this one for all objects:

```c
fprintf(ft, "#declare DephTex = texture { pigment { gradient z scale %d "
            "color_map { [0.0 color rgb<0,0,0>] [%.5f color rgb<0,0,0>] "
            "[%.5f color rgb<1,1,1>] [1.0 color rgb<1,1,1>] } } "
            "finish { ambient 1.0 } }\n",
        ply_clipfarz,
        (double)fogzmin / ply_clipfarz,
        (double)fogzmax / ply_clipfarz);
```
ply_clipfarz: the Z coordinate of the far clipping plane.
fogzmin: the Z coordinate where the fogging, and thus the depth map, starts.
fogzmax: the Z coordinate where the fogging reaches its maximum and the depth map ends.
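For illustration, here is what the generated declaration would look like with some made-up values, say ply_clipfarz = 100, fogzmin = 20 and fogzmax = 80 (fogging from 20% to 80% of the far clipping distance):

```pov
#declare DephTex = texture {
  pigment {
    gradient z scale 100
    color_map {
      [0.0     color rgb<0,0,0>]
      [0.20000 color rgb<0,0,0>]
      [0.80000 color rgb<1,1,1>]
      [1.0     color rgb<1,1,1>]
    }
  }
  finish { ambient 1.0 }
}
```

The `finish { ambient 1.0 }` makes the gradient self-illuminating, so the depth map does not depend on the scene's lighting.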
2) Render both the real image and the depth map with a transparent background (command-line options +UA +FN).
3) Load the image (srcsrf), the depth map (zbfsrf) and the background (dstsrf), add fog to the image using the information from the depth map, and blend the result on top of the background (the code is based on SDL; see www.libsdl.org):
```c
/* Add the raytraced image, creating fog from the Z-buffer. */
for (i = 0; i < dstsrf->h; i++) {
    dstpixel = (int32 *)((char *)dstsrf->pixels + i * dstsrf->pitch);
    srcpixel = (int32 *)((char *)srcsrf->pixels + i * srcsrf->pitch);
    for (j = 0; j < dstsrf->w; j++) {
        /* Scale by 1.008 to make sure that POV-Ray's almost-white
           (0xfdfdfd) becomes true white (0xffffff). */
        alpha = col_alpha(*srcpixel);
        color = col_scale(*srcpixel, 1.008);
        if (alpha) {
            /* Pixel not empty: get its depth. */
            depth = ((int32 *)((char *)zbfsrf->pixels + i * zbfsrf->pitch))[j];
            if (col_alpha(depth)) {
                depth = col_red(depth);
            } else {
                /* The Z-buffer is rendered without antialiasing, so
                   antialiased pixels at edges in the image may have no
                   corresponding pixel in the Z-buffer; in that case,
                   average the four surrounding pixels. */
                depths = 0;
                depthsum = 0;
                for (k = -1; k <= 1; k++) {
                    if (i + k < 0 || i + k >= dstsrf->h)
                        continue;
                    for (l = -1; l <= 1; l++) {
                        if (j + l < 0 || j + l >= dstsrf->w || k * l != 0)
                            continue;
                        depth = ((int32 *)((char *)zbfsrf->pixels
                                           + (i + k) * zbfsrf->pitch))[j + l];
                        if (col_alpha(depth)) {
                            depthsum += col_red(depth);
                            depths++;
                        }
                    }
                }
                if (depths)
                    depth = depthsum / depths;
            }
            /* Add fog. */
            color = col_blended(color, ply_fogcolorrgb, (255.0 - depth) / 255);
        }
        /* Blend with the background and set alpha to 255. */
        color = col_blended(color, *dstpixel, alpha / 255.0) | COL_ALPHAMASK32;
        *(dstpixel++) = color;
        srcpixel++;
    }
}
```
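The col_* helpers and COL_ALPHAMASK32 are not defined in the post. A minimal sketch of what they might look like, assuming 32-bit ARGB pixels and that col_blended(a, b, t) weights its first argument by t (these definitions are a guess, not Elmar's actual code):

```c
#include <stdint.h>

#define COL_ALPHAMASK32 0xff000000u

/* Extract the alpha and red channels of an ARGB32 pixel. */
static uint32_t col_alpha(uint32_t c) { return (c >> 24) & 0xff; }
static uint32_t col_red(uint32_t c)   { return (c >> 16) & 0xff; }

/* Scale the R, G and B channels by f, clamping at 255; alpha is kept. */
static uint32_t col_scale(uint32_t c, double f)
{
    uint32_t r = (uint32_t)(((c >> 16) & 0xff) * f);
    uint32_t g = (uint32_t)(((c >> 8) & 0xff) * f);
    uint32_t b = (uint32_t)((c & 0xff) * f);
    if (r > 255) r = 255;
    if (g > 255) g = 255;
    if (b > 255) b = 255;
    return (c & COL_ALPHAMASK32) | (r << 16) | (g << 8) | b;
}

/* Per-channel blend: t*a + (1-t)*b, applied to all four channels. */
static uint32_t col_blended(uint32_t a, uint32_t b, double t)
{
    uint32_t result = 0;
    int shift;
    for (shift = 0; shift < 32; shift += 8) {
        double ca = (a >> shift) & 0xff;
        double cb = (b >> shift) & 0xff;
        result |= (uint32_t)(t * ca + (1.0 - t) * cb) << shift;
    }
    return result;
}
```

With these definitions, col_scale(0x00fdfdfd, 1.008) gives 0x00ffffff, consistent with the almost-white-to-true-white comment above.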
Hope it helps,
Elmar
"Abe" <bul### [at] taconic net> wrote:
> The limitations abound, but:
>
> 1. A function-based texture applied to all objects but which doesn't affect
> the background, e.g. function(sqrt(x*x+y*y+z*z)) appropriately translated
> and normalized to taste. This method will fall flat with reflections and so
> forth.
>
> 2. Vahur Krouverk's Povman may give more access to the ray length
> information, including reflections, for shader writing. Not sure though.
>
> 3. A questionable alternative to not affecting the background color with
> fog is to render a dual-colored fog which is, say, white in the foreground
> and cyan in the extreme distance, effectively creating a differently colored
> background suitable as a mask. This requires two fog statements: a
> foreground-colored fog, and a background fog set to extreme distance with a
> subtractive/additive color which will result in the desired background
> color when applied to the foreground color.
>
> Regarding media fog, you should be able to duplicate colored fog using
> emission media (fog color) and absorption media (white). You may also have
> to set the absorption intensity to about 10x that of the emission media.
>
> Abe