POV-Ray : Newsgroups : irtc.general : Minimum Entry Requirements : Re: Minimum Entry Requirements
  Re: Minimum Entry Requirements  
From: John VanSickle
Date: 22 Jun 2009 00:01:55
Message: <4a3f0233$1@news.povray.org>
Christian Froeschlin wrote:
> John VanSickle wrote:
> 
>> What if the image maps are generated by the ray tracer?
> 
> well, the "no multipass" rule was basically intended to prevent
> 2d-ish after-effects. Any such effect could be implemented by loading
> the image from the first pass, evaluating its pixels one by one,
> performing any image processing on the pixel array, recreating
> a pigment, and applying it to a plane.
> 
>> What if the external data files are likewise generated?
> 
> No objections, and it can be done at the beginning of a single-
> pass render without problems (at least in POV-Ray). Also, as
> long as the scene is capable of rendering in a single pass, I
> don't think there would be objections to also including a
> flag to reuse the data file for efficiency when
> tweaking renders, as with photon map data.
> 
>> What if the post-processing is done by the renderer?
> 
> Not sure what kind of post-processing is supported by raytracers
> out there.

For POV-Ray, it is as simple as pigmenting a polygon with a combination 
of two or more image_maps taken from frames rendered earlier.  The 
effects that can be achieved this way are quite varied.
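As a minimal sketch of the idea (the frame file names here are 
hypothetical placeholders for frames from an earlier pass):

```pov
// Hypothetical sketch: pigment a polygon with the average of two
// previously rendered frames.  "frameA.png" and "frameB.png" are
// placeholder names for output from an earlier render pass.
polygon {
  4, <0,0>, <1,0>, <1,1>, <0,1>
  pigment {
    average
    pigment_map {
      [1 image_map { png "frameA.png" }]
      [1 image_map { png "frameB.png" }]
    }
  }
  finish { ambient 1 diffuse 0 }  // show the images unlit
}
```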

I have done masking in several of my IRTC animations, using POV-Ray to 
combine two images and a mask to make the final image.  Here is a 
description of what I did in my latest judged animation, "Getting It 
Online":

------------------------------------

"The shot in which Boxer accesses the video stream from the club is 
probably the most complicated job of compositing that I've done so far. 
It went like this:

1.  I modeled a scene of a caseless ladybot going around the brass pole 
in the strip club.

2.  I rendered four seconds worth of frames at 320x240.

3.  I rendered the same frames a second time at 40x30 (1:8 sampling in 
both directions).

4.  I modeled a masking scene for the parts that are supposed to be 
digitally censored; this mask consisted of the container shape for the 
part that needed to be censored, textured solid white, with nothing else 
in the scene, and a black background.  I animated the container exactly 
as the uncased portion of the caseless ladybot, and animated the camera 
exactly as in the scene modeled in step one.

5.  I rendered the masking scene at 40x30 for four seconds of frames.

6.  I did another scene, in which a small rectangle was textured with a 
pigment equal to A*B + C*(1-B), where A is the 40x30 render of the 
scene, B is the white portion of the mask, and C is the 320x240 render 
of the scene.  This scene also had the titling 
"www.TheSilverCircuit.xxx" and other stuff in it.

7.  I did another scene for the "Buffering..." screen.

8.  The frames from steps 6 and 7 were used as the pigmenting for 
Boxer's computer screen.
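The composite in step 6 could be sketched like this in POV-Ray, with 
hypothetical file names, using an image_pattern of the mask frame to 
select per pixel between the two renders:

```pov
// Hypothetical sketch of the step-6 composite.  The mask frame
// selects, per pixel, between the 40x30 (pixelated) render and
// the 320x240 (clear) render: white mask -> pixelated, black -> clear.
box {
  <0,0,0>, <1,1,0.01>
  pigment {
    image_pattern { png "mask0001.png" }
    pigment_map {
      [0 image_map { png "clear0001.png"     }]  // C: full-res frame
      [1 image_map { png "pixelated0001.png" }]  // A: 40x30 frame
    }
  }
  finish { ambient 1 diffuse 0 }
}
```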

---------------------------

Some of the steps that made use of earlier renders could possibly have 
been rendered directly, but that was certainly not true of all of them.

In all of my animations involving a Greb ship flying into or out of a 
wormhole ("Cliff Hanger," "A Narrow Escape," and "News from the Front") 
I used masking to achieve the desired effect.

I've done several kinds of fades by combining two sets of rendered 
frames using a pigment pattern, with the indexes in the pigment map 
shifting over time.
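One such fade might be sketched as follows, with hypothetical frame 
names: a gradient pattern whose pigment_map index shifts with the clock 
produces a left-to-right wipe between two sets of frames:

```pov
// Hypothetical sketch: a left-to-right wipe between two renders.
// As clock runs 0 -> 1, the boundary between the two image_maps
// in the gradient pattern sweeps across the screen rectangle.
#declare T = clock;  // 0 at the start of the fade, 1 at the end
polygon {
  4, <0,0>, <1,0>, <1,1>, <0,1>
  pigment {
    gradient x
    pigment_map {
      [T image_map { png "sceneA0001.png" }]  // outgoing footage
      [T image_map { png "sceneB0001.png" }]  // incoming footage
    }
  }
  finish { ambient 1 diffuse 0 }
}
```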

I have rendered frames for an animation at a rate of 240n frames per 
second and averaged each batch of n frames together to make motion blur.
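The averaging step could be sketched like this, assuming hypothetical 
sub-frame names and n = 8; a #while loop builds an average pigment over 
one batch of sub-frames:

```pov
// Hypothetical sketch: average n = 8 sub-frames into one
// motion-blurred frame.  Sub-frames are assumed to be named
// sub0000.png, sub0001.png, ... from an earlier high-rate pass.
#declare N = 8;
polygon {
  4, <0,0>, <1,0>, <1,1>, <0,1>
  pigment {
    average
    pigment_map {
      #declare I = 0;
      #while (I < N)
        [1 image_map { png concat("sub", str(frame_number*N + I, -4, 0), ".png") }]
        #declare I = I + 1;
      #end
    }
  }
  finish { ambient 1 diffuse 0 }
}
```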

I've also used rendered frames as the images for video displays within 
animations.

Regards,
John


