  Re: Limping Back Home (B-29 bomber)  
From: Kenneth
Date: 25 Mar 2010 15:50:00
Message: <web.4babbcc2ae0d85565f302820@news.povray.org>
"Dave Blandston" <nomail@nomail> wrote:

> SO... It's very interesting to know more details of how the animation was
> created. Please feel free to share more!
>

Thanks indeed for your thoughtful comments.  I guess each of us 'toils in
solitude' while trying to create something worthwhile in POV-Ray, spending
untold hours on it--so it's definitely nice to see that the work is appreciated
(even with its flaws).

And now that you've asked... :-)  :-)  :-)

I added my own 'fake ambient occlusion' TEXTURE to the B-29--using a simple
'shadowing' technique that I picked up from the newsgroups. I made that as a
separate render--by placing LOTS of lights around a white B-29, with a white
background. In Photoshop, I inverted the image, made it into an alpha-channel
(with the 'real' image there being black), then applied that in POV-Ray as a
typical planar image_map, projected from above onto the final airplane. Far from
perfect--in fact, a rather amateurish attempt at AO!--but it adds a tiny bit
more realism. (I'll be glad to post the image_map, to show what I mean.)
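
Just to give the general idea, the 'shadow layer' amounts to something like
this--very simplified, and with made-up names, file names, scales and finish
values rather than my actual ones:

#declare AO_Shadow_Layer =
  texture {
    pigment {
      image_map {
        png "b29_fake_ao.png"    // the inverted render: black 'shadow', alpha elsewhere
        map_type 0               // ordinary planar projection
        once
        interpolate 2
      }
      rotate x*90                // re-aim the planar projection so it comes from above
      scale <35, 1, 30>          // stretch it to cover the airplane's footprint
      translate <-17.5, 0, -15>  // and center it over the model
    }
    finish { ambient 0.1 diffuse 0.7 }
  }

object {
  B29_Model                      // placeholder name for the airplane
  texture { B29_Paint_Scheme }   // the 'real' texture underneath
  texture { AO_Shadow_Layer }    // darkening layer on top (layered textures)
}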

The puffy clouds are probably the thing I'm most proud of (in how they solved a
parsing-and-render-time problem, while still looking decent). As mentioned,
they're not media--which would have taken forever to render--but turbulated
image_maps projected onto simple scaled spheres. Actually, I used just *one*
(rather exacting) alpha-channel image_map, which was then run through some
randomized turbulence code inside a cloud-generating #while loop, before being
applied to a particular sphere--so each cloud looks different, more or less.
(The turbulence is even animated, so the clouds 'change' over time. It's very
subtle, though.) BTW, I pre-#declared the raw image_map as a pigment (before the
#while loop), which saves *considerable* time during parsing--similar in concept
to instancing multiple copies of a triangle mesh. So even though each cloud does
get its own 'texture', 600 clouds parse almost as fast as one. Without that
pre-#declare step, the cloud-parsing was *slow* and memory-intensive.
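
Stripped way down, the cloud loop is structured roughly like this (the names
and numbers here are just placeholders, and the real code also handles cloud
placement and the animated turbulence):

// Load the cloud image ONCE, outside the loop -- this is what lets
// 600 clouds parse almost as fast as one.
#declare Cloud_Map =
  pigment {
    image_map {
      png "cloud_alpha.png"  // placeholder file name
      map_type 0             // planar projection
      interpolate 2
    }
  }

#declare R = seed(1234);
#declare C = 0;
#while (C < 600)
  sphere {
    0, 1
    pigment {
      Cloud_Map                              // re-use the pre-#declared pigment
      warp { turbulence 0.1 + 0.2*rand(R) }  // randomized, so each cloud differs
    }
    // (finish left out here -- see the next paragraph)
    scale <40 + 20*rand(R), 12, 40 + 20*rand(R)>
    translate <1000*rand(R) - 500, 200, 1000*rand(R) - 500>  // placeholder placement
  }
  #declare C = C + 1;
#end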

What appears to be self-shadowing under each cloud is actually built into my
image_map artwork--the clouds have a simple finish{ambient 1 diffuse 0} to
faithfully reproduce the image's tone values. (IMO, that image_map still needs
some tweaking--the clouds don't have as much gray-scale detail as I'd like.)
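
In other words, each cloud's texture boils down to something like this (again,
just the general shape of it):

texture {
  pigment { Cloud_Map }           // the gray shading is painted into the image itself
  finish { ambient 1 diffuse 0 }  // no scene shading -- the image's own tones come through unchanged
}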

Along the way, I re-discovered something interesting about image_maps and
turbulence: Since I used the typical map_type 0 for projecting the image onto
each sphere, the image naturally shows up on both front and rear surfaces. But
turbulence operates on the image in 3-D space--it's added *after* the
projection, so to speak--so I get two *different* looks on each cloud. Nice!
Since the camera can see some of the rear image through the front one, it adds
'complexity' to each cloud's appearance. Then the animated movement and
motion-blur help give it a quasi-volumetric look. Of course, the front-and-rear
surfaces of each cloud need to remain more or less in line with the camera view.
Otherwise, *distinct* front/back images would be apparent. (Perhaps they still
are!)
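
If anyone wants to see the effect in isolation, a bare-bones test scene along
these lines does it (this isn't from the animation--the image file is a
placeholder, ideally one with an alpha channel so the rear surface shows
through the front). Render it with and without the warp line, and the front
and rear surfaces of the sphere stop matching:

#version 3.7;
global_settings { assumed_gamma 1.0 }

camera { location <0, 1, -4> look_at <0, 0, 0> }
background { rgb <0.35, 0.55, 0.9> }

sphere {
  0, 1
  pigment {
    image_map {
      png "cloud_alpha.png"    // placeholder image
      map_type 0               // planar: projected straight through the sphere, along z
      interpolate 2
    }
    translate <-0.5, -0.5, 0>  // center the unit-square image on the origin
    scale 2.5                  // enlarge it to cover the sphere
    warp { turbulence 0.25 }   // the 3-D distortion, applied after the projection
  }
  finish { ambient 1 diffuse 0 }
}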

Ken

