  Re: generating an image_map name: string syntax problem  
From: Zeger Knaepen
Date: 20 Feb 2009 17:11:14
Message: <499f2a82@news.povray.org>
"Kenneth" <kdw### [at] earthlinknet> wrote in message 
news:web.499f1e4d40230f63f50167bc0@news.povray.org...
> Fascinating. It's actually very close to a paradigm I had come up with
> (but only as a thought experiment) concerning a feature that could be
> added to POV-Ray itself--*internally* rendering multiple 'snapshots' of
> a scene over time, then averaging them internally, then spitting out
> the final blurred frame. So MegaPOV already has that; cool.
>
> You had mentioned earlier that this MegaPOV averaging method produces a
> more natural blur than using POV the way I had worked it out (i.e.,
> just averaging multiple pre-rendered 24-bit frames during a 2nd
> render.) Is that solely because you've given MegaPOV fifty 'camera
> views' to average, vs. my smaller number of ten? Or is there something
> about MegaPOV's internal method that inherently produces a more
> accurate blur?  I'm most curious about the difference.

The main difference shows up where you have very bright spots.  Normal 
24-bit images have no way of storing those colors: in POV-Ray terms, all 
color components >1 are clipped to 1, so averaging those images will not 
give accurate results.

Example: let's put it in 1D and black & white.  Let's say you have the 
following frames (every line is a frame):

.1 .1 8 .1 .1
.1 .1 .1 8 .1
.1 .1 .1 .1 8

(so, that's like a very bright spot moving to the right :))
Your 24-bit prerendered image will have them stored like:

.1 .1 1 .1 .1
.1 .1 .1 1 .1
.1 .1 .1 .1 1

and averaging those frames will result in the following:

.1 .1 .4 .4 .4

while the actual result should be:

.1 .1 2.73 2.73 2.73
(which will be stored in a 24bit image as .1 .1 1 1 1)

This makes all the difference between a realistic animation and a "there's 
just something synthetic about this!" animation.
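
For reference, here's a minimal sketch of that arithmetic in plain SDL 
(just #debug output, nothing is rendered), using the numbers from the 
example above:

// a minimal sketch of the clipping problem, for the bright column above
#declare N = 3;                    // number of frames being averaged
#declare TrueSum = 0.1 + 0.1 + 8;  // unclipped values across the 3 frames
#declare ClipSum = 0.1 + 0.1 + 1;  // same column after 24-bit clipping
#debug concat("true average:    ", str(TrueSum/N, 0, 2), "\n")  // 2.73
#debug concat("clipped average: ", str(ClipSum/N, 0, 2), "\n")  // 0.40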

> BTW, there *is*, at present, an inherent problem with my own blurring
> scheme: It currently applies the multiple averaged images onto a flat
> box, positioned in front of my orthographic camera for the 2nd render.
> And it's quite tricky to scale the box, to get an exact 1:1
> correlation between the pre-rendered images' pixels and the 'new'
> camera's camera rays. (I.e., the 2nd camera's rays should exactly
> intersect each pixel in the averaged composite image, to get a truly
> accurate 2nd render.) I looked through 'screen.inc' to see what I
> could use there instead of my box idea, but I couldn't discern if it
> produces this *exact* 1:1 correspondence. I'm thinking that it does,
> but I haven't tried yet.

I suppose screen.inc gives a 1:1 correlation, as long as your output image 
is the same size as the input image.  Be sure, though, not to use 
anti-aliasing and/or jitter.
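
Untested, but a second-render setup with screen.inc might look something 
like the sketch below.  The frame filenames (frame01.png ... frame10.png) 
and the Scale value are just placeholders here, so check the comments in 
screen.inc itself:

#include "screen.inc"

// screen.inc needs its camera macros instead of a plain camera block
Set_Camera(<0,0,-5>, <0,0,0>, 45)

// average the ten pre-rendered frames in a single pigment; the
// filenames are hypothetical, generated with concat() and str()
// (str(I,-2,0) zero-pads the frame number to two digits)
#declare N = 10;
#declare Blurred =
pigment {
   average
   pigment_map {
      #declare I = 1;
      #while (I <= N)
         [1 image_map { png concat("frame", str(I, -2, 0), ".png") once }]
         #declare I = I + 1;
      #end
   }
}

// Screen_Plane(Texture, Scale, BLCorner, TRCorner) pins the texture to
// the view from <0,0> (bottom left) to <1,1> (top right); render at the
// input images' resolution with anti-aliasing and jitter off (-A -J)
// to get the 1:1 pixel correspondence
Screen_Plane(
   texture {
      pigment { Blurred }
      finish { ambient 1 diffuse 0 }
   },
   1, <0,0>, <1,1>
)

The finish { ambient 1 diffuse 0 } makes the plane self-lit, so the second 
scene doesn't need a light source.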


>> And the second way is by using MegaPOV's noise_pigment.  It renders
>> faster, but I use this method more for testing-purposes only as it
>> doesn't give really accurate results:
>
> This is a weird one to understand.  I need to read the MegaPOV
> documentation to get a mental picture of what happens here. I'm
> wondering if it has anything to do with the idea of 'frameless
> rendering'?

I guess there's some similarity, but it certainly isn't frameless rendering. 
With frameless rendering, every ray (or every pixel when not using 
anti-aliasing) is shot at a random point in time within the given time 
interval.  With perfect anti-aliasing, this will produce perfect accuracy.  
My second method also uses a random time per ray, but only from a predefined 
(quite small) subset of all possible points in time.  Even with perfect 
anti-aliasing, the best you'll get is exactly the same as the first method. 
The only advantage of this second method is that without anti-aliasing it 
renders much faster than the first method, while still giving you a fairly 
good idea of what's going on in the animation, making it ideal for 
test-renders :)

> http://www.acm.org/crossroads/xrds3-4/ellen.html

That site seems to use a slightly different definition of frameless 
rendering than the one I used, but the idea remains the same.

> That introduced me to a new term: 'temporal antialiasing.' The method
> seems to be more applicable to real-time rendering, though (of game
> graphics, for example.)

as is the definition of frameless rendering that site uses :)
Interesting stuff though!

cu!
-- 
#macro G(b,e)b+(e-b)*C/50#end#macro _(b,e,k,l)#local C=0;#while(C<50)
sphere{G(b,e)+3*z.1pigment{rgb G(k,l)}finish{ambient 1}}#local C=C+1;
#end#end _(y-x,y,x,x+y)_(y,-x-y,x+y,y)_(-x-y,-y,y,y+z)_(-y,y,y+z,x+y)
_(0x+y.5+y/2x)_(0x-y.5+y/2x)            // ZK http://www.povplace.com


