Scanline rendering in POV-Ray (Messages 11 to 20 of 96)
From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 1 Jun 2003 21:48:54
Message: <3edaad06@news.povray.org>
> You are talking about "The Reyes image rendering architecture", I guess?
> And taking a four sentence paragraph that says little if anything to
> justify your argument? ...

> 2.1. Geometric Locality.
> ...
>
> And this is all.  As you are aware, this was 16 years ago, a single
> author's analysis, and the context of the paper is rendering feature-film
> length animations?  I am not even going to start to reason here how
> nonsensical it is to base your argument on this paper.
>
> And the lengthy "argument" that follows in your post, well, you don't want
> to keep the whole scene in memory and that is why you want to use an (even
> "more" memory limited) 3d accelerator to draw that model?  Carefully
> transmitting all the geometry and texture data over a rather slow bus for
> every frame?  And as far as textures are concerned, you do know that the
> number of texture samples to be taken by a ray tracer grows linearly with
> the image size, while for a scanline renderer it grows linearly with the
> polygon number and size?
>
> I am not going to laugh, I am surprised you didn't know better (also you
> should), and I am just going to ignore the rest of this thread ... sorry!


I suppose ignorance is bliss...

Well, I'm not sure there's any need to employ
derogatory remarks in this discussion or insinuate
that my intelligence is lower than it should be.
It sounds like you're threatened or uncomfortable
about something. Don't worry; I'm not asking
the POV-Team to do anything. Scanlining is just
one way to accomplish some goals; I'm open to
any idea that will do the same. I'm also surprised
because I'm not asking the raytracing system
to be replaced. Far from it, actually.

Raytracing is certainly elegant, and it certainly
produces stunning imagery. But if raytracing alone
met production rendering needs, all the studios
would likely be using it already. The fact is,
they aren't; the current trend is to hybridize
the two approaches.

If POV-Ray is characterized as a raytracer,
then yes, changing or augmenting its raytracing
system is to change the entire mission statement.
But I think POV-Ray is also seen by some people
as an image/movie creation platform, where the
particulars of image rendering are merely
an implementation detail. Did people get
uncomfortable when Larry Gritz wrote BMRT and
showed that RIB files could be raytraced? No;
they saw that the Renderman system could be
extended to more uses with raytracing. The
initial scanline architecture was treated as
a plumbing detail, not something to get
religious about (the RIB spec even includes
a trace() function, although of course PRMan
doesn't implement it). I think POV-Ray can be seen
in the same manner (but in reverse, of course).

Your points are important, but memory bus bandwidth
is not a factor. A CPU has to access memory over that bus
whether raytracing or scanlining, because the scene geometry
simply doesn't fit into a data cache either way. In fact,
with raytracing it's sometimes worse, because the data
for an object can be evicted from the data cache when
a secondary ray hits another object, whereas when scanlining,
the object can remain in cache; memory accesses are not
spread out all over the geometry database.

As for scaling up texture access, a scanline
system can store z-buffer entries to do
texturing in a separate pass, so only
those pixels that are actually visible
need to be textured. When I make a Z entry,
I can store the surface normal, uv coords,
and texture ID and do the texturing later.
The approaches can even be mixed, depending
on what makes more efficient use of resources
at the time.
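For readers who want to see the mechanism spelled out, here is a minimal sketch in C of the kind of deferred z-buffer entry being described (hypothetical names and layout, not code from any actual renderer): the visibility pass stores depth, normal, UV coordinates and a texture ID per pixel, and a later pass textures only the pixels that survived, so hidden fragments never pay the shading cost.

```c
#include <stddef.h>

/* One entry per pixel: everything the later texturing pass needs.
   Hypothetical layout; a real renderer may pack this differently. */
typedef struct {
    float depth;        /* z value used for the visibility test     */
    float normal[3];    /* surface normal at the visible point      */
    float u, v;         /* texture coordinates                      */
    int   texture_id;   /* which texture/shader to evaluate later   */
} ZEntry;

/* Visibility pass: keep the candidate only if it is nearer than what
   is already stored (the buffer is assumed initialized to a "far"
   depth and texture_id = -1 for background).                        */
void z_store(ZEntry *zbuf, size_t idx, const ZEntry *candidate)
{
    if (candidate->depth < zbuf[idx].depth)
        zbuf[idx] = *candidate;
}

/* Deferred texturing pass: runs once per pixel after all geometry has
   been rasterized, so only visible surfaces are ever shaded.         */
void texture_pass(const ZEntry *zbuf, float (*image)[3],
                  size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; i++) {
        if (zbuf[i].texture_id >= 0) {      /* -1 means "background" */
            /* A real renderer would evaluate the stored texture here;
               this stand-in just visualizes the stored normal.       */
            image[i][0] = 0.5f + 0.5f * zbuf[i].normal[0];
            image[i][1] = 0.5f + 0.5f * zbuf[i].normal[1];
            image[i][2] = 0.5f + 0.5f * zbuf[i].normal[2];
        }
    }
}
```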

This also isn't a case of throwing the baby out
with the bathwater. If I want a realtime animation
preview of a moderate POV scene, for example,
scanlining will do it. There are worthwhile
things to gain even if a technology has some faults.


Ray


From: Christoph Hormann
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 04:07:24
Message: <3EDB05BA.660C1D9@gmx.de>
Ray Gardener wrote:
> 
> > Concerning landscapes - i have seen and rendered far more complex
> > landscapes with POV-Ray than you usually see being made with scanline
> > renderers.  I really wonder if you could show me a landscape render you
> > could say 'this would not have been possible in POV-Ray because of the
> > detailed geometry'.
> 
> Fair enough. I submit the following picture:
> http://www.daylongraphics.com/products/leveller/tour/ss_scanline.jpg
> 
> It contains almost 17 million triangles. The rocky lumps
> in the foreground were drawn using a fractal cube geometry
> insertion. The rest of the ground is done using fractal
> displacement subdivision. There is no bumpmapping; every
> lighting effect is done with actual geometry.

In your sample the foreground shows not much more than pure noise - no way
to determine the exact geometry.  This noise could easily be done in
POV-Ray though.  The background seems easy to accomplish in POV-Ray as
well, at least something similar.  If you want exactly the same you would
have to implement the function used in POV-Ray.  The whole thing without
shadows, textures and other slow stuff - it is quite possible that POV-Ray
could beat the 12 minutes on it.

17 million triangles would of course need quite some memory in POV-Ray as
a mesh but luckily there are other possibilities to generate geometry in
POV-Ray as well...

As an example - this scene was made in 1999 - at a time when most
landscape renders were commonly using no more than 1-2 million triangles:

http://www-public.tu-bs.de:8080/~y0013390/pov/pict/iso_rock_01.jpg

Although this scene was never designed to render fast - in fact by nature
it was quite slow - it renders with plain texture, no aa and no shadows
within 7min44sec (640x480, Athlon 1GHz). Memory use 214k BTW.

http://www.schunter.etc.tu-bs.de/~chris/files/iso_rock_02b.png

The geometry is equivalent to far more than 17 million triangles, of
course you can't really see it at this size.

> [...]
> 
> I'm not worried about feature non-support
> as much as whether the renderer is available at all.
> The goal of supporting every primitive type,
> every option, etc. is laudable but I see it
> as something that can be grown towards.

You are free to implement such a thing, but be warned: there will hardly be
anyone interested in a POV-Ray feature for scanline rendering that only
supports certain specific geometry definitions that are not compatible with
raytracing and therefore are not usable in a 'real' POV-Ray render.

Christoph

-- 
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 28 Feb. 2003 _____./\/^>_*_<^\/\.______


From: Thorsten Froehlich
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 04:28:23
Message: <3edb0aa7$1@news.povray.org>
In article <3edaad06@news.povray.org> , "Ray Gardener" 
<ray### [at] daylongraphicscom> wrote:

> Well, I'm not sure there's any need to employ
> derogatory remarks in this discussion or insinuate
> that my intelligence is lower than it should be.
> It sounds like you're threatened or uncomfortable
> about something.

Yes, the fact that someone uses a 16 year old paper that used a CCI 6/32 as
its basis does indeed make me uncomfortable and seriously question this
person's ability to evaluate "facts" before reaching any conclusion. -- For
those not aware, this system delivers approximately the same computing power
as a 4 MHz (yes, four megahertz!) Pentium 4, or, in the standard of the
time, about six MIPS (peak) or around a rating of five on the SPECmark 89
scale, which is slightly better than a 386 with FPU or less than half that
of a SPARCstation 4/60 (SPARC 1).

(from another of your posts)
> Fair enough. I submit the following picture:
> http://www.daylongraphics.com/products/leveller/tour/ss_scanline.jpg

Let's see: you have an image that has about 275000 pixels, and you render
16.8 million triangles, which yields about 61 triangles per pixel.  More,
actually, somewhere around 70 triangles per pixel, because some part of the
image is the sky.  This suggests very ineffective or nonexistent clipping
and culling algorithms.  I have hardly ever seen a scene with more than one
million triangles that, after clipping and culling well before (!!!) getting
down to the triangle level, reached more than 10 triangles per pixel.  In
short, on any computer you could have bought in the last two years this
should render in a few seconds using scanline rendering.

16 million triangles is peanuts, nothing else, compared to properly clipped
and culled datasets that created (and still create, of course) very good
results when used with POV-Ray even several years ago:
<http://astronomy.swin.edu.au/~pbourke/terrain/mars32/>

> I suppose ignorance is bliss...

No, just a necessity, because if I spent as much time explaining details
every time somebody comes up with completely unresearched, misinterpreted or
just plain random "facts", I would hardly find time to do anything else.
Yet few of the users in these groups care (or need to know; your response to
Gilles' reply is a good example, but it would take too long to explain why
here) enough about the technical reasons that so clearly show why your
"facts" are plain wrong.  One problem with newsgroups is that false "facts"
tend to come up weeks later because somebody simply accepted everything that
went unchallenged as true "fact".  Especially if that person appears to
have knowledge about the topic being discussed.

In short:

***
I consider it a matter of courtesy, especially in any non-verbal discussion
where writing the reply takes longer than anything else, that people *first*
find out what they are going to talk about in a responsible manner and
*then* speak up, rather than depending on others to do the thinking for them.
***

(Rhetorical question)
Now that I wasted over 50 minutes of my time on this, are you happy? :-(

    Thorsten

____________________________________________________
Thorsten Froehlich
e-mail: mac### [at] povrayorg

I am a member of the POV-Ray Team.
Visit POV-Ray on the web: http://mac.povray.org


From: Gilles Tran
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 04:29:46
Message: <3edb0afa$1@news.povray.org>

3eda8ea4$1@news.povray.org...
> Those are interesting pics (and very neat, too).
Thanks!

> I'm curious as to what the memory usage
> is like on your computer when rendering
> these scenes, and how you would handle
> getting previews of such scenes
> (either stills or test animations)
> and how long they would take.

Meshes in POV-Ray have two major benefits:
1) they can be instantiated, i.e. the first copy of the mesh is the only one
that counts. The other copies can be rotated, translated, scaled etc. and
only add the price of a pointer (if I'm talking nonsense, people who really
know the code will correct me). A tree mesh of 100 MB (mesh2) can be used
once or 4000 times with a minimal difference in memory cost. I didn't record
the total RAM use for these scenes, but it was well below the 1 GB I have,
and, as I said, these scenes all use radiosity (and other things than
meshes). Without any additional elements, the 4000-tree scene will use the
RAM of the initial tree + 4000 pointers (see the sketch further down).

2) they render extremely fast

As this also applies to height-fields (static or generated on the fly), the
consequences in terms of landscape generation are huge. A single
height-field tiled properly and 5 different trees can generate an entire
forest, quickly, and within the memory cost of the initial elements (see the
page here for examples (with code) of on-the-fly large height-fields, tiled
or not: http://www.oyonale.com/ressources/english/sources13.htm).
Actually the main limitation of scenes with large meshes and height-fields
is not the render time but the parsing time, as a 100 MB mesh needs to be
read anyway. To speed things up during tests, I often use dummy props
(simple primitives or low-poly meshes) and low-res height-fields (which can
be done in the code if they're function-based).
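To make the instancing point above concrete, here is a minimal sketch in C (hypothetical structures, not POV-Ray's actual internals) of why 4000 copies cost little more than one: the vertex and triangle arrays exist exactly once, and each instance stores only a pointer to them plus its own transform.

```c
#include <stdlib.h>

/* The heavy data: stored exactly once, however many copies appear. */
typedef struct {
    float (*vertices)[3];
    int   (*triangles)[3];
    size_t  vertex_count, triangle_count;
} MeshData;

/* One instance: a pointer to the shared data plus its own placement.
   This is all that each additional copy of the tree costs.          */
typedef struct {
    const MeshData *mesh;   /* shared, never duplicated              */
    float transform[4][4];  /* rotate/translate/scale of this copy   */
} MeshInstance;

/* 4000 trees: memory grows by sizeof(MeshInstance) per copy, not by
   the size of the mesh itself.                                      */
MeshInstance *plant_forest(const MeshData *tree, size_t count)
{
    MeshInstance *forest = malloc(count * sizeof *forest);
    if (!forest)
        return NULL;
    for (size_t i = 0; i < count; i++) {
        forest[i].mesh = tree;            /* share the geometry      */
        /* Identity transform as a placeholder; a real scene would
           scatter, rotate and scale each copy individually.         */
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                forest[i].transform[r][c] = (r == c) ? 1.0f : 0.0f;
    }
    return forest;
}
```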

The Forester page (http://www.dartnall.f9.co.uk/) has impressive animations
of big POV-Ray generated landscapes, btw. Perhaps he has more insight about
how he did those (render times, memory cost etc.).

This doesn't mean of course that a hybrid engine wouldn't be interesting...

>and also to shorten render times for scenes that
>don't need global illumination effects.
Do such scenes really exist ;-)

Gilles

--

**********************
http://www.oyonale.com
**********************
- Graphic experiments
- POV-Ray and Poser computer images
- Posters


From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 10:42:33
Message: <3edb6259@news.povray.org>
> Yes, the fact that someone uses a 16 year old paper that used a CCI
> 6/32...

But, hmm... raytracing goes back 35 years.
Bumpmapping goes back over 20 years.
Who is it, um, who's using the older paper...?

And I suppose the machine was rather slow, but that
would have been true for a raytracer running
on it as well. I don't see the fairness in
judging an algorithm by the hardware available
at the time. Despite faster hardware, raytracing
still has difficulty providing realtime performance,
even for simple scenes, while scanline systems
do it routinely. If REYES were doomed, I would
imagine Pixar would have fired the Renderman staff
once the hardware improved. Instead, it seems
to have flourished in the film industry quite well.


> Let's see: you have an image that has about 275000 pixels, and you render
> 16.8 million triangles, which yields about 61 triangles per pixel.

Actually, the image is antialiased by drawing a 200% version
of the image and then averaging it down, so there are over a million pixels
originally. Yes, it's a lot of triangles, because the fractal cubes
insert themselves after the heightfield shader draws, and a lot of
the latter pixels get replaced in the z-buffer.
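What is described here is plain supersampling: render at twice the final resolution, then box-filter each 2x2 block down to one output pixel. A minimal sketch of that averaging step in C, assuming a simple packed RGB float buffer (hypothetical layout, not Ray's actual code):

```c
#include <stddef.h>

/* Average each 2x2 block of the oversized render into one output
   pixel.  'src' is 2*w by 2*h, 'dst' is w by h, both RGB floats.   */
void downsample_2x(const float (*src)[3], float (*dst)[3],
                   size_t w, size_t h)
{
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            for (int c = 0; c < 3; c++) {
                float sum =
                    src[(2*y)     * (2*w) + 2*x    ][c] +
                    src[(2*y)     * (2*w) + 2*x + 1][c] +
                    src[(2*y + 1) * (2*w) + 2*x    ][c] +
                    src[(2*y + 1) * (2*w) + 2*x + 1][c];
                dst[y * w + x][c] = sum * 0.25f;  /* box filter */
            }
        }
    }
}
```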


> This suggests very ineffective or nonexistent clipping
> and culling algorithms.

Well, that's the nice thing about a z-buffer. I do perform
frustum and backface culling, but being able to insert
geometry and not worry about memory is useful in terms
of artistic approaches, especially in shader development.
Production runs would be optimized, but it's nice to know
that during development, I'm free to use brute-force
techniques that send any number of polygons. In a
raytracer, that luxury doesn't exist; there's always
a memory limit on geometry. Even if you have the time
to render it, if you don't have the memory, you're stuck.
And the memory I save on geometry I can put towards textures
and shaders.
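For readers who want to see what those per-triangle tests look like, here is a minimal sketch in C (camera-space vertices and hypothetical names assumed, not code from Ray's renderer): a triangle can be rejected if it faces away from the eye or lies entirely outside one of the view-frustum planes.

```c
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return (Vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Backface test with the eye at the origin of camera space: the
   triangle faces away when its normal points along the eye-to-vertex
   direction (v0 - origin).  The sign flips with the opposite winding
   convention.                                                        */
bool is_backfacing(Vec3 v0, Vec3 v1, Vec3 v2)
{
    Vec3 n = cross(sub(v1, v0), sub(v2, v0));
    return dot(n, v0) >= 0.0f;
}

/* Frustum test against one plane (normal n points into the frustum,
   d is the plane offset): cull only if all three vertices are outside. */
bool outside_plane(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 n, float d)
{
    return dot(n, v0) + d < 0.0f &&
           dot(n, v1) + d < 0.0f &&
           dot(n, v2) + d < 0.0f;
}
```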

Let me put it this way: I could have spent more time
optimizing the textures to get them to raytrace well.
But instead, I'm free to brute-force the approach and
also have a general solution that works from other
camera angles (customized textures tend not to).
It's a good system that can offer design-time efficiency
in addition to runtime efficiency. It's good to
have choices. In fact, the scanline system lets me
easily develop textures for the raytracer.


> In short, on any computer you could have bought in the last two years this
> should render in a few seconds using scanline rendering.

I do have some scenes which, in fact, render in only
several seconds. If I omit the fractal cubes, the thing
really rips along. This is also my first attempt at
such a renderer, only a week old or so, so I'm not
claiming it's optimized.


> > I suppose ignorance is bliss...
>
> No, just a necessity, because if I spent as much time explaining details
> every time somebody comes up with completely unresearched, misinterpreted
> or just plain random "facts", I would hardly find time to do anything
> else.

Well, no one forced you to spend your time
participating in this discussion, so don't
blame me if you feel your time is wasted. A true
professional would simply have stated that
he felt the matter was of low importance
or misinformed and left it at that. Better still,
you could have written or co-written a definitive
paper on why scanline/REYES is a non sequitur and included
a reference to it in the FAQ.

As for facts, I don't see that any have
been established. Your first reaction was
not to discuss technical feasibility at all;
just a statement that it would take a long time
to implement. That sounds, frankly, more like
someone who is afraid of change than someone
with a clear set of reasons why an approach
should not be tried. One would have expected
your first reaction to be something along
the lines of "Nice idea, but it won't work
because of a, b, and c." or "That's fine,
but the problem domain of POV-Ray is
specifically x, y, and z."

The facts I do know are that scanline
rendering has no geometry memory limit
while raytracing does, and scanline rendering
can deliver realtime (or at least much faster)
scene previewing capabilities. Even with all
secondary raycasting disabled, a raytracer
cannot outperform or match a scanline algorithm.
If it could, raytracing would be used in video games.
And the fact is, again, raytracing has not
displaced scanline/REYES as the preferred
CG rendering method in motion pictures. To quote
Dr. Gritz, BMRT assisted in only 16 scenes
in the movie A Bug's Life. I don't know what
facts you are in possession of, but they
certainly can't include the film industry
falling all over itself to use raytracing.

And on the topic of efficiency, why does POV-Ray
allocate a separate memory block for each scanline
of each color channel of a texture? A simple scene
using several 128-pixel-tall textures wound up
generating over 3000 calls to POV_MALLOC. Caching
the row-address multiplies in their own array I can
understand, but why not just have them point to
offsets within a single block for each texture
component plane?
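To spell out the alternative being suggested: allocate one block per texture component plane and derive the row addresses as offsets into it, rather than making one allocation per row per channel. A minimal sketch in C with hypothetical names (this is not POV-Ray's actual code):

```c
#include <stdlib.h>

/* One colour-component plane of a texture, backed by a single block. */
typedef struct {
    unsigned char  *block;   /* width * height bytes, one allocation  */
    unsigned char **rows;    /* optional cached row pointers          */
    size_t width, height;
} TexturePlane;

/* Two allocations total (pixels plus the row-pointer cache) instead of
   one allocation per scanline.                                        */
int plane_create(TexturePlane *p, size_t width, size_t height)
{
    p->width  = width;
    p->height = height;
    p->block  = malloc(width * height);
    p->rows   = malloc(height * sizeof *p->rows);
    if (!p->block || !p->rows) {
        free(p->block);
        free(p->rows);
        return -1;
    }
    for (size_t y = 0; y < height; y++)
        p->rows[y] = p->block + y * width;  /* offsets, not new blocks */
    return 0;
}

void plane_destroy(TexturePlane *p)
{
    free(p->block);
    free(p->rows);
    p->block = NULL;
    p->rows  = NULL;
}
```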

Anyway, I've started my modifications to POV-Ray
to support the ideas mentioned so far, and will
let interested persons know the results. In the
interest of fairness to everyone, I think it
is best if I try an implementation and let
the facts speak for themselves. At the
very least, it's an idea worth trying, given
the potential benefits.

Ray


From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 11:00:41
Message: <3edb6699@news.povray.org>
> You are free to implement such a thing, but be warned: there will hardly be
> anyone interested in a POV-Ray feature for scanline rendering that only
> supports certain specific geometry definitions that are not compatible with
> raytracing and therefore are not usable in a 'real' POV-Ray render.

I didn't plan on supporting any scanline-only
primitives, if that's what you meant. My prototype
will support a subset of existing POV primitives,
so that whatever the scanline system can render,
the raytracer can render.

In that spirit, however, I agree that there
will be those who will not use a renderer that
cannot reproduce all of POV-Ray's existing
effects (i.e., true drop-in renderer substitution).
This is understandable for those who have
existing scene files and are looking to improve
render times on them in their final rendered form.
Improving the existing raytracer provides more
return for that audience.

The first benefit, I think, will be in previewing.
Unlike VOP, my previews will be integrated directly
into WinPOV's render window and be available
by simply choosing a menu command. The preview
will render during parsing, so best case, a scene
could render in as much time as it takes to parse.
A camera object will, however, need to be
defined ahead of all other objects. I could preflight
the .pov file looking for the last camera statement,
but that would lengthen parsing times.

Shaders for the scanline system pose a greater
raytracer-compatibility challenge (although BMRT
proves it can be met). I'm personally interested
in deploying shaders as DLLs, for performance
reasons, but that would understandably be a
philosophical departure for POV-Ray.

Ray


From: ABX
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 11:08:38
Message: <nopmdv4t2l85jvjgugpms4bv2qkj1o9b1i@4ax.com>
On Mon, 2 Jun 2003 08:00:51 -0700, "Ray Gardener" <ray### [at] daylongraphicscom>
wrote:
> I could preflight
> the .pov file looking for the last camera statement,
> but that would lengthen parsing times.


Are you aware that:

  #macro C()camera#end
  #declare c=C(){}
  C(){c}

is valid for the parser?  The same applies to:

  #include "strings.inc"
  Parse_String("camera{}")

Are you sure it wouldn't be simpler for you to write a completely new application?

ABX


From: Thorsten Froehlich
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 11:31:49
Message: <3edb6de5$1@news.povray.org>
In article <3edb6259@news.povray.org> , "Ray Gardener" 
<ray### [at] daylongraphicscom> wrote:

> But, hmm... raytracing goes back 35 years.
> Bumpmapping goes back over 20 years.
> Who is it, um, who's using the older paper..?
<snip>
> and left it at that. Better still,
> you could have written or co-written a definitive
> paper on why scanline/REYES is a non sequitur and included
> a reference to it in the FAQ.

Excuse me, since when is a whole research area (ray tracing) the same as a
single paper? -- If you just want to continue for the sake of argument,
fine.  I am not going to.

But who knows, maybe such a paper will one day surface; I don't make
such promises, though...

> Despite faster hardware, raytracing
> still has difficulty providing realtime performance,
> even for simple scenes, while scanline systems
> do it routinely.

You have it exactly the wrong way around.  Ray-tracing will be fast on
*complex* scenes while scanline methods will slow down: with spatial
subdivision the cost per ray grows only slowly (roughly logarithmically)
with scene complexity, while a scanline pass has to process every primitive.
I did actually explain this a few weeks ago in a thread here somewhere.
Even a first-year student knows that an algorithm that performs very well on
a small set of data does not necessarily work well on a big set of data, and
vice versa!

> If REYES was doomed, I would imagine Pixar would have fired the Renderman
> staff once the hardware improved. But instead, it seems to have flourished in
> the film industry quite well.

Now, at what point did I say that their approach was slower than ray-tracing
today?

> he felt the matter was of low importance or misinformed

As I said, anything posted here stays, and thus undisputed statements are
simply taken as true and correct by many users.  Don't ask me why; it has
simply happened so often in the past <sigh>

> As for facts, I don't see that they have been concluded. Your first reaction
> was not to discuss technical feasibility at all;

Of course, this is povray.general, not povray.programming.  So I have to
assume you are the average user with a completely wrong mental model of how
ray tracing works.  A short answer usually scares non-programmers away from
programming topics, which is good, because otherwise one has to start
explaining really everything...

> just a statement that it would take a long time to implement. That sounds,
> frankly, more like someone who is afraid of change than someone with a clear
> set of reasons why an approach should not be tried.

Maybe it sounds more like someone who knows the source code of POV-Ray
better than you do?  Usually those who work with the source code of a
program on a day-to-day basis know more about it than the casual observer.

> One would have expected your first reaction to be something along the lines of
> "Nice idea, but it won't work because of a, b, and c." or "That's fine, but
> the problem domain of POV-Ray is specifically x, y, and z."

If it wasn't the 100457th time somebody suggested the need or desire to have
or implement something like this, maybe...

> The facts I do know about are that scanline rendering has no geometry memory
> limit while raytracing does, and scanline rendering can deliver realtime (or
> at least much faster) scene previewing capabilities.

So, you say that if you have a function for a geometry, you need to have all
its triangles in memory to represent it in a ray tracer, while you only need
to spill them to the hardware 3d engine for scanline rendering.  Your
thinking is really very narrow if you think this.  You fall into the same
trap as many, many new users of POV-Ray do when they want to solve a
specific problem and fail:  They have a preconceived solution in mind and
manage to successfully ignore the one million other ways to solve the
problem.  In short: there is no need to keep the geometry around for ray
tracing either.

> Even with all secondary raycasting disabled, a raytracer cannot outperform or
> match a scanline algorithm. If it could, raytracing would be used in video
> games. And the fact is, again, raytracing has not displaced scanline/REYES as
> the preferred CG rendering method in motion pictures. To quote Dr. Gritz, BMRT
> assisted in only 16 scenes in the movie A Bug's Life. I don't know what facts
> you are in possession of, but they certainly can't include the film industry
> falling all over themselves to use raytracing.

Few people have scenes of the complexity needed to make ray tracing worthwhile.
Give me a billion (1000 million) triangles in a million objects, and ray
tracing will beat whatever scanline hardware you throw at the problem on an
average workstation.  Of course, ray tracing isn't limited to triangles...

> And on the topic of efficiency, why does POV-Ray allocate a separate memory
> block for each scanline of each color channel of a texture? A simple scene
> using several 128-pixel-tall textures wound up generating over 3000 calls
> to POV_MALLOC.

At most, POV-Ray will have to touch all image maps together p * n * m times,
where p is the number of pixels in the 2d image, n is the maximum recursion
level and m is the number of image maps that cover the same object surface
(aka layered textures).  If you optimise algorithms the way you suggest, all
I can say is that Knuth had something to say about premature optimisation...

> Caching the row-address multiplies in their own array I can understand, but
> why not just have them point to offsets within a single block for each
> texture component plane?

If you want to "optimise" POV-Ray on this level of detail, be my guest. I
neither can nor am I going to stop you!

> Anyway, I've started my modifications to POV-Ray to support the ideas
> mentioned so far, and will let interested persons know the results. In the
> interest of fairness to everyone, I think it is best if I try an
> implementation and let the facts speak for themselves. At the very least, it's
> an idea worth trying, given the potential benefits.

Goodbye!  Call me pessimistic, but I don't expect to ever see this being
even close to finished in a usable state.  Not the way you are approaching
it, anyway...

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org


From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 11:42:33
Message: <3edb7069$1@news.povray.org>
> Are you aware that:
>
>   #macro C()camera#end
>   #declare c=C(){}
>   C(){c}
>
> is valid for the parser?  The same applies to:
>
>   #include "strings.inc"
>   Parse_String("camera{}")
>
> Are you sure it wouldn't be simpler for you to write a completely new application?


I have considered that. I was about to write
or modify a RIB renderer, for example.
But I wasn't aware of the camera statement
flexibility per se; thanks. If the POV-Team
didn't mind me copying their parser code,
the argument for doing a new app would
certainly be stronger. POV also
has all the other infrastructure, such
as texture file loading, in one nice spot.

I believe POV's future lies in being
a more powerful platform for creating
3D graphics in general, not just as
a raytracer. For all I know, this experiment
may one day lead to POV-Ray becoming the dominant
film production tool for CG effects, and enable
a whole new population of movie producers.

VOP already went the "new application" route.
The problem is, its .pov parser is now out of date
and cannot handle many scene files.

For now, I'll probably just require that a
camera be defined up front. It's not a
huge requirement at this stage, and many
people tend to declare cameras early.
If a cam is missing, I could also have
the preview window make its own, which
the person could relocate with the mouse.
Hmm... if it copied the cam data to the
clipboard in .pov format, that would be
an interesting take on NavCam. It would
certainly make defining cameras as easy
as point and click. :)


Ray Gardener
Daylon Graphics Ltd.
"Heightfield modeling perfected"


From: Christoph Hormann
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 12:35:57
Message: <3EDB7CED.99BA1222@gmx.de>
Ray Gardener wrote:
> 
> I didn't plan on supporting any scanline-only
> primitives, if that's what you meant. My prototype
> will support a subset of existing POV primitives,
> so that whatever the scanline system can render,
> the raytracer can render.

Quoting from your original post:

> I'm currently investigating development of a
> scanline renderer, because the scenes I need
> to support (landscape scenes) typically contain
> too many objects for efficient raytracing.

Sorry, but I simply don't get it.  Your original motivation for considering
implementing scanline rendering in POV-Ray is to allow something that is
not possible in POV-Ray, as you state (i.e. efficient rendering of scenes
with many objects).  But now you write that you will only support a subset
of the shapes already available in POV-Ray - how does this fit together?

I think this whole thread suffers from one serious problem - you never
made a clear and open statement of your objectives, and you don't seem to
be willing to discuss whether your idea for a solution (i.e. scanline
rendering) will meet your objectives.  All the people who replied to you in
this thread have a good deal of experience with POV-Ray in various fields,
but I have the impression that you either ignore or don't understand most
of the arguments we have given.

Christoph

-- 
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 28 Feb. 2003 _____./\/^>_*_<^\/\.______



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.