  Re: Scanline rendering in POV-Ray  
From: Ray Gardener
Date: 2 Jun 2003 10:42:33
Message: <3edb6259@news.povray.org>
> Yes, the fact that someone uses a 16 year old paper that used a CCI
> 6/32...

But, hmm... raytracing goes back 35 years.
Bumpmapping goes back over 20 years.
Who is it, um, who's using the older paper...?

And I suppose the machine was rather slow, but that
would have been true for a raytracer running
on it as well. I don't see the fairness in
faulting an algorithm for the hardware available
at the time. Despite faster hardware, raytracing
still has difficulty providing realtime performance,
even for simple scenes, while scanline systems
do it routinely. If REYES were doomed, I would
imagine Pixar would have fired the RenderMan staff
once the hardware improved. But instead, it
seems to have flourished quite well in the
film industry.


> Lets see, you have an image that has about 275000 pixels, and you render
> 16.8 million triangles, that yields about 61 triangles per pixel.

Actually, the image is antialiased by drawing a 200%
version of the image and then averaging it down, so there
are over a million pixels originally. Yes, it's a lot of
triangles, because the fractal cubes insert themselves
after the heightfield shader draws, and many of the
heightfield's pixels get replaced in the z-buffer.
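The averaging-down step is just a box filter over 2x2 blocks of the supersampled image. A minimal sketch in C (my own illustration, not the renderer's actual code; `downsample_2x` and the grayscale layout are assumptions for brevity):

```c
#include <stdint.h>

/* Box-filter a 2x-supersampled grayscale buffer down to final size.
   src is (2*w) x (2*h) pixels; dst is w x h.
   Each output pixel is the average of one 2x2 block of source pixels. */
static void downsample_2x(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int sum = src[(2*y)     * (2*w) + 2*x]
                    + src[(2*y)     * (2*w) + 2*x + 1]
                    + src[(2*y + 1) * (2*w) + 2*x]
                    + src[(2*y + 1) * (2*w) + 2*x + 1];
            dst[y * w + x] = (uint8_t)(sum / 4);
        }
    }
}
```

For an RGB image the same loop would run once per channel, or over packed pixels with per-channel sums.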


> This suggests very ineffective or nonexistent clipping
> and culling algorithms.

Well, that's the nice thing about a z-buffer. I do perform
frustum and backface culling, but being able to insert
geometry and not worry about memory is useful in terms
of artistic approaches, especially in shader development.
Production runs would be optimized, but it's nice to know
that during development, I'm free to use brute-force
techniques that send any number of polygons. In a
raytracer, that luxury doesn't exist; there's always
a memory limit on geometry. Even if you have the time
to render it, if you don't have the memory, you're stuck.
And the memory I save on geometry I can put towards textures
and shaders.
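The property I'm relying on is that a z-buffer's memory cost is fixed by the frame size, not the polygon count, so geometry can be streamed through one fragment at a time. A minimal sketch (my own illustration, not POV-Ray internals; `plot` and the buffer names are invented for the example):

```c
#include <float.h>
#include <stdint.h>

#define W 4
#define H 4

static float    zbuf[W * H];   /* nearest depth seen so far, per pixel */
static uint32_t color[W * H];  /* shaded result for that nearest depth */

static void clear_buffers(void)
{
    for (int i = 0; i < W * H; ++i) {
        zbuf[i]  = FLT_MAX;  /* "infinitely far" */
        color[i] = 0;
    }
}

/* Keep a fragment only if it is nearer than what is already stored.
   Any number of polygons can be rasterized through this with no
   growth in memory use. */
static void plot(int x, int y, float z, uint32_t c)
{
    int i = y * W + x;
    if (z < zbuf[i]) {
        zbuf[i]  = z;
        color[i] = c;
    }
}
```

Late-inserted geometry (like the fractal cubes replacing heightfield pixels) is just more calls to `plot` after the first pass has run.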

Let me put it this way: I could have spent more time
optimizing the textures to get them to raytrace well.
But instead, I'm free to brute-force the approach and
also have a general solution that works from other
camera angles (customized textures tend not to).
It's a good system that can offer design-time efficiency
in addition to runtime efficiency. It's good to
have choices. In fact, the scanline system lets me
easily develop textures for the raytracer.


> In short, on any computer you could have bought in the last two years
> this should render in a few seconds using scanline rendering.

I do have some scenes which, in fact, render in only
a few seconds. If I omit the fractal cubes, the thing
really rips along. This is also my first attempt at
such a renderer, only a week old or so, so I'm not
claiming it's optimized.


> > I suppose ignorance is bliss...
>
> No, just a necessity because if I would spend as much time explaining
> details every time somebody comes up with completely unresearched,
> misinterpreted or just plain random "facts", I would hardly find time
> to do anything else.

Well, no one forced you to spend your time
participating in this discussion, so don't
blame me if you feel your time is wasted. A true
professional would simply have stated that
he felt the matter was of low importance
or misinformed and left it at that. Better still,
you could have written or co-written a definitive
paper on why scanline/REYES is a non sequitur and
included a reference to it in the FAQ.

As for facts, I don't see that they have
been concluded. Your first reaction was
not to discuss technical feasibility at all;
just a statement that it would take a long time
to implement. That sounds, frankly, more like
someone who is afraid of change than someone
with a clear set of reasons why an approach
should not be tried. One would have expected
your first reaction to be something along
the lines of "Nice idea, but it won't work
because of a, b, and c." or "That's fine,
but the problem domain of POV-Ray is
specifically x, y, and z."

The facts I do know about are that scanline
rendering has no geometry memory limit
while raytracing does, and scanline rendering
can deliver realtime (or at least much faster)
scene previewing capabilities. Even with all
secondary raycasting disabled, a raytracer
cannot match, let alone outperform, a scanline
algorithm. If it could, raytracing would be
used in video games.
And the fact is, again, raytracing has not
displaced scanline/REYES as the preferred
CG rendering method in motion pictures. To quote
Dr. Gritz, BMRT assisted in only 16 scenes
in the movie A Bug's Life. I don't know what
facts you are in possession of, but they
certainly can't include the film industry
falling all over themselves to use raytracing.

And on the topic of efficiency, why does POV-Ray
allocate a separate memory block for each scanline
of each color channel of a texture? A simple scene
using several 128-pixel-tall textures wound up
generating over 3000 calls to POV_MALLOC. Caching
the row-address multiplications in their own array
I can understand, but why not just have the row
pointers refer to offsets within a single block for
each texture component plane?
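What I have in mind is something like the following sketch (hypothetical, not POV-Ray's actual code): one allocation holds every row of a component plane contiguously, plus one small row-pointer table so lookups still avoid a multiply per access. That is two allocations per plane instead of one per scanline:

```c
#include <stdlib.h>

/* One texture component plane: a single contiguous data block
   plus a row-pointer table pointing into it. */
typedef struct {
    unsigned char  *data;   /* width * height bytes, one allocation */
    unsigned char **rows;   /* rows[y] points into data */
    int width, height;
} Plane;

/* Returns 1 on success, 0 on allocation failure. */
static int plane_init(Plane *p, int width, int height)
{
    p->data = malloc((size_t)width * height);
    p->rows = malloc((size_t)height * sizeof *p->rows);
    if (!p->data || !p->rows) {
        free(p->data);
        free(p->rows);
        return 0;
    }
    for (int y = 0; y < height; ++y)
        p->rows[y] = p->data + (size_t)y * width;  /* cached row offset */
    p->width  = width;
    p->height = height;
    return 1;
}

static void plane_free(Plane *p)
{
    free(p->rows);
    free(p->data);
}
```

Pixel access stays `p.rows[y][x]`, same as with per-row allocations, but the allocator is hit twice per plane rather than once per row, and the data ends up cache-friendly and freeable in one shot.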

Anyway, I've started my modifications to POV-Ray
to support the ideas mentioned so far, and will
let interested persons know the results. In the
interest of fairness to everyone, I think it
is best if I try an implementation and let
the facts speak for themselves. At the
very least, it's an idea worth trying, given
the potential benefits.

Ray

