POV-Ray : Newsgroups : povray.binaries.animations : Re: function driven postprocessing
  Re: function driven postprocessing (Message 1 to 7 of 7)  
From: Rick [Kitty5]
Subject: Re: function driven postprocessing
Date: 16 Jul 2003 05:47:32
Message: <3f151f34$2@news.povray.org>
ABX wrote:
> This is an example (an extended official splinefollow example) of using
> various cameras within function-driven postprocessing as a base for
> applying effects like find_edges or depth. 300 frames were rendered
> in a single run. Three views are mixed with functions. This was made
> using the rewritten postprocessing prepared for MegaPOV, discussed a few
> times in the povray.unofficial.patches group.

i m p r e s s i v e ! !

-- 
Rick

Kitty5 NewMedia http://Kitty5.com
POV-Ray News & Resources http://Povray.co.uk
TEL : +44 (01270) 501101 - ICQ : 15776037

PGP Public Key
http://pgpkeys.mit.edu:11371/pks/lookup?op=get&search=0x231E1CEA


From: Chris Johnson
Subject: Re: function driven postprocessing
Date: 16 Jul 2003 12:26:33
Message: <3f157cb9$1@news.povray.org>
Very nice - I especially like the edges render. Could you explain in more
detail how this was created? I can't find any reference to post-processing
in the unofficial.patches group.


From: ABX
Subject: Re: function driven postprocessing
Date: 17 Jul 2003 04:48:30
Message: <6cochv8b05hsi21he7n9j9g4s90egms0mk@4ax.com>
On Wed, 16 Jul 2003 17:26:25 +0100, "Chris Johnson" <chr### [at] chris-jcouk>
wrote:
> Very nice - I especially like the edges render. Could you explain in more
> detail how this was created? I can't find any reference to post-processing
> in the unofficial.patches group.

It was created with the same algorithm as in 'old' MegaPOV, except it is now
coded within SDL as functions instead of being hardcoded in C++. Thanks to
this change you can use your own extensions to the algorithm, and you can use
sources other than just the prerendered image - in this case a camera_view
pigment was the input.

ABX


From: Rune
Subject: Re: function driven postprocessing
Date: 22 Jul 2003 07:30:19
Message: <3f1d204b@news.povray.org>
This is very impressive!

I haven't followed the discussion in p.u.p so I don't know much about
how it works. The old post-process feature did not support antialiasing.
Does this new approach support it? What about heavy blurring - isn't it
very slow when done using functions?

Rune
--
3D images and anims, include files, tutorials and more:
rune|vision:  http://runevision.com (updated Oct 19)
POV-Ray Ring: http://webring.povray.co.uk


From: ABX
Subject: Re: function driven postprocessing
Date: 22 Jul 2003 08:43:33
Message: <i49qhv4pp9ei33t8t92o91faga6gdct7oq@4ax.com>
On Tue, 22 Jul 2003 13:21:10 +0200, "Rune" <run### [at] runevisioncom> wrote:
> This is very impressive!

Thanks. If I remember correctly, you are the one who made the original
example - thanks for that.

> I haven't followed the discussion in p.u.a so I don't know so much about
> how it works.

The content of the channels and of previous effects is available via
predefined functions, like the other channels in functions.inc. There are
about 20 predefined functions. Moreover, thanks to the camera_view pigment
you can use channels from other cameras to create interesting effects like
transitions, overlapping, and masking.
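To make the mixing idea concrete, here is a small sketch in plain Python
(not MegaPOV SDL): two hypothetical per-channel grids standing in for two
camera views are blended with a mask function over normalized (u, v)
coordinates, which is conceptually what the post_process functions do when
they combine camera_view pigments for transitions or masking.

```python
def mix_views(view_a, view_b, mask):
    """Blend two equally sized channel grids: mask(u, v) in [0, 1]
    selects view_b where it is 1 and view_a where it is 0."""
    h = len(view_a)
    w = len(view_a[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            u, v = x / w, y / h          # normalized coordinates, as in SDL
            m = mask(u, v)
            row.append((1 - m) * view_a[y][x] + m * view_b[y][x])
        out.append(row)
    return out

# A simple horizontal wipe: left half shows view_a, right half view_b.
a = [[0.0, 0.0], [0.0, 0.0]]
b = [[1.0, 1.0], [1.0, 1.0]]
wipe = lambda u, v: 1.0 if u >= 0.5 else 0.0
print(mix_views(a, b, wipe))  # [[0.0, 1.0], [0.0, 1.0]]
```

Animating the mask threshold over frames gives a transition; a mask drawn
from another channel (depth, edges) gives masking.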

> The old post-process feature did not support antialiasing.
> Does this new approach support it?

Yes and no. The engine requests the same number of pixels as needed for the
image. But within the script you can deliver antialiased content, which means
you can apply any antialiasing you wish, just like every other effect, via
functions. For example, the sample we are discussing used some primitive
averaging in the last applied effect to make the output smoother:

  post_process{
    #local du=1/(4*image_width);
    #local dv=1/(4*image_height);
    function{(f_pp_red  (u-du,v-dv,-1)+
              f_pp_red  (u-du,v+dv,-1)+
              f_pp_red  (u+du,v-dv,-1)+
              f_pp_red  (u+du,v+dv,-1))/4}
    function{(f_pp_green(u-du,v-dv,-1)+
              f_pp_green(u-du,v+dv,-1)+
              f_pp_green(u+du,v-dv,-1)+
              f_pp_green(u+du,v+dv,-1))/4}
    function{(f_pp_blue (u-du,v-dv,-1)+
              f_pp_blue (u-du,v+dv,-1)+
              f_pp_blue (u+du,v-dv,-1)+
              f_pp_blue (u+du,v+dv,-1))/4}
    function{0}
  }

f_pp_red, f_pp_green, and f_pp_blue are the output channels from another
effect, and the value "-1" as the third parameter says it is the output of
the previous effect. If this looks complicated to you, do not worry. I made a
set of macros to create functions that duplicate all the effects from 'old'
MegaPOV, plus some new ones like duplicating the engine's antialiasing,
making simple transitions, and creating subviews.

Of course the new postprocessing is slower than the old one. But having all
effects written as functions in SDL makes it more flexible - you can modify
them, optimize them, operate on prerendered image_maps, etc.
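For readers who find the SDL block above hard to follow, here is the same
four-sample averaging sketched in plain Python (an illustration, not MegaPOV
code): each output pixel is the mean of four samples offset by a quarter
pixel diagonally, exactly as the f_pp_* functions are sampled at
(u±du, v±dv) above.

```python
def average4(sample, width, height):
    """sample(u, v) returns a channel value; returns a width x height
    grid of 4-sample box averages, mirroring the SDL post_process above."""
    du = 1 / (4 * width)
    dv = 1 / (4 * height)
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            u = (x + 0.5) / width        # pixel center in UV space
            v = (y + 0.5) / height
            row.append((sample(u - du, v - dv) +
                        sample(u - du, v + dv) +
                        sample(u + du, v - dv) +
                        sample(u + du, v + dv)) / 4)
        grid.append(row)
    return grid

# A hard vertical edge at u = 0.5 gets softened at the straddling pixel.
edge = lambda u, v: 1.0 if u >= 0.5 else 0.0
print(average4(edge, 5, 1))  # [[0.0, 0.0, 0.5, 1.0, 1.0]]
```

The center pixel straddles the edge, so two of its four samples land on each
side and it averages to 0.5 - the "little more smoothness" mentioned above.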

> What about heavy blurring - is it not
> very slow when done using functions?

Of course more samples means longer rendering, though the rendering time
depends on what the source of the applied effect is. You can use the output
of the normal rendering, which has the disadvantage of pixelization but is
fast because it is just reading floats from an array. But you can also use
the mentioned camera_view feature, which means that every requested sample
causes a new ray to be sent. In this case the postprocessing time increases.
In the presented example camera_view was used to test a macro with the
"find_edges" effect from 'old' MegaPOV. Finding edges needs many samples, so
its complexity is similar to that of blurring. Here are summary statistics
from the presented process:

Time For Parse/Frame:    0 hours  0 minutes   0.4 seconds (0 seconds)
Time For Parse Total:    0 hours  2 minutes  25.0 seconds (145 seconds)
Time For Trace/Frame:    0 hours  0 minutes   5.1 seconds (5 seconds)
Time For Trace Total:    0 hours 25 minutes  34.0 seconds (1534 seconds)
Time For Post./Frame:    0 hours  2 minutes  32.9 seconds (152 seconds)
Time For Post. Total:   12 hours 44 minutes  44.0 seconds (45884 seconds)
          Total Time:   13 hours 12 minutes  43.0 seconds (47563 seconds)

ABX


From: Rune
Subject: Re: function driven postprocessing
Date: 22 Jul 2003 11:28:39
Message: <3f1d5827@news.povray.org>
ABX wrote:
>> The old post-process feature did not support
>> antialiasing. Does this new approach support it?
>
> Yes and no. Engine requests the same number of
> pixels as needed for image.

Hmm. Is there any specific reason that you can not set the width and
height individually for each filter? It seems from your function example
that the filters are not really dependent on having a specific
resolution... But maybe I don't fully understand the concept yet.

> You can use output of normal rendering which has
> disadventage of pixelization but is fast because
> it is just reading of floats from array. But you
> can also use mentioned camera_view feature which
> means that every sample requested cause new
> sended ray. In this case postprocessing time
> increases.

Hmm. Wouldn't it then be possible to use the camera_view *instead* of
the normal rendering? That is, to skip the normal rendering step
completely, but get the same result by getting the image through the
camera_view? And wouldn't then the rendering time be approximately the
same?

> Here is summary statistic from presented process:
> <snip>

Hmm, only 25 minutes for tracing but almost 13 hours for post
processing - this *is* pretty slow! :(

Rune
--
3D images and anims, include files, tutorials and more:
rune|vision:  http://runevision.com (updated Oct 19)
POV-Ray Ring: http://webring.povray.co.uk


From: ABX
Subject: Re: function driven postprocessing
Date: 22 Jul 2003 12:25:48
Message: <samqhvg6pkmkd8dn9jpkm0avprhjgu4cf7@4ax.com>
On Tue, 22 Jul 2003 17:27:07 +0200, "Rune" <run### [at] runevisioncom> wrote:
> > > The old post-process feature did not support
> > > antialiasing. Does this new approach support it?
> >
> > Yes and no. Engine requests the same number of
> > pixels as needed for image.
>
> Hmm. Is there any specific reason that you can not set the width and
> height individually for each filter? It seems from your function example
> that the filters are not really dependent on having a specific
> resolution... But maybe I don't fully understand the concept yet.

There is no direct connection between image_size and effects now. But since
the find_edges effect in 'old' MegaPOV measured its radius in pixels, I have
to take that into account if I want to deliver macros for backward
compatibility. But the macro which duplicates the old behaviour is just a
wrapper for another, more flexible macro which has more settings than the old
effect.

> > You can use output of normal rendering which has
> > disadventage of pixelization but is fast because
> > it is just reading of floats from array. But you
> > can also use mentioned camera_view feature which
> > means that every sample requested cause new
> > sended ray. In this case postprocessing time
> > increases.
>
> Hmm. Wouldn't it then be possible to use the camera_view *instead* of
> the normal rendering? That is, to skip the normal rendering step
> completely, but get the same result by getting the image through the
> camera_view?

That's the idea behind one of the optimizations I plan. If at least one
post_process is in a script and none of the predefined channels is used, it
means the whole post_process is based on non-normal-rendering data, so there
is no need for the normal rendering at all.

> And wouldn't then the rendering time be approximately the same?

It wouldn't. To calculate one pixel in the find-edges process you need data
about the colour, depth, and received normal of the neighbouring pixels.
Depending on the radius of the edges, more samples are required. In 'old'
post_processing, and when getting values from the rendering in 'new'
post_processing, there are only image_width*image_height possible sample
values, because the number of samples is limited. Using 'new' post_processing
with the camera_view pigment gives a much more accurate effect, because every
sampling of camera_view causes a new ray to be sent. Camera_view is not
precalculated into an image_map; it is calculated on the fly per request,
without pixelization.
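The cost difference between the two sample sources can be sketched in plain
Python (an illustration, not MegaPOV internals): a hypothetical trace()
stands in for one camera_view sample, and a ray counter makes visible that
the prerendered array is paid for once while every on-demand sample sends a
new ray.

```python
rays_sent = 0

def trace(u, v):
    """Stand-in for a camera_view sample: pretend-renders a smooth ramp.
    Each call counts as one ray."""
    global rays_sent
    rays_sent += 1
    return u                      # any continuous function of (u, v)

def prerender(width, height):
    """Render once into a pixel array, as the normal render pass does."""
    return [[trace((x + 0.5) / width, (y + 0.5) / height)
             for x in range(width)] for y in range(height)]

def lookup(img, u, v):
    """Nearest-pixel read: costs no rays, but is quantized (pixelized)
    to the stored resolution."""
    h, w = len(img), len(img[0])
    return img[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

img = prerender(4, 4)             # 16 rays, paid once up front
after_render = rays_sent
lookup(img, 0.6, 0.6)             # array read: no new rays
assert rays_sent == after_render
trace(0.6, 0.6)                   # camera_view-style sample: one more ray
assert rays_sent == after_render + 1
```

A find_edges or blur effect that takes many samples per output pixel
multiplies that per-sample ray cost, which is where the postprocessing hours
in the statistics above come from.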

> > Here is summary statistic from presented process:
> > <snip>
>
> Hmm, only 25 minutes for tracing but almost 13 hours for post
> processing - this *is* pretty slow! :(

I know. But if you compare this time to the times of the 'old'
postprocessing, you are doing it wrong, because it is like comparing the
rendering time of an isosurface in 3.5 with the rendering of a teapot in
POV-Ray 2.0. The main slowdown comes from the many rays sent for the
find_edges subview, and in 'old' MegaPOV there was neither the camera_view
nor the subview possibility. Moreover, applying that averaging multiplies the
postprocessing time by more than 4, but the averaging was added only to
introduce a little more smoothness.

I plan some optimizations to make it faster and more efficient, but I do not
know if they will be ready before the release of MegaPOV 1.1. Perhaps they
will appear in some future version, after feedback from users. It is also
possible that users will create more flexible versions of my macros and
improve on them - I just reused the algorithms from the old MegaPOV
postproc.c code.

If the Windows binary could contain a JIT compiler to native code for
functions, as is (afaik) done on Macintoshes, then it could also be faster.

Another interesting thing is that the same set of macros delivers pigment
equivalents of the effects, so you can, for example, apply the find_edges
effect to a picture on a wall within a scene.

ABX


Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.