POV-Ray : Newsgroups : povray.general : Scanline rendering in POV-Ray
  Scanline rendering in POV-Ray (Message 37 to 46 of 96)  
From: Warp
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 07:30:35
Message: <3edc86db@news.povray.org>
Ray Gardener <ray### [at] daylongraphicscom> wrote:
> Not quite. Hidden surface removal is accomplished in
> raytracing by using the nearest intersection. If an
> eye ray does not intersect other objects/bboxes along its vector,
> it cannot be sure that it has hit the nearest one.

  You didn't understand what I meant.

  Your scanline algorithm renders all meshes separately, one by one.
There's nothing in raytracing which would disallow this: You can first
raytrace one mesh and then another (using the same z-buffer method for
surface removal as in scanline rendering). Of course this means that the
same pixels need to be raytraced more than once, but that's exactly what
the scanline method is doing (except that with two meshes the raytracer
calculates each pixel twice, but the scanline method can calculate
many pixels tens of times, depending on the meshes).
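Warp's point, that a raytracer can also render objects one at a time and let a shared z-buffer resolve visibility, can be illustrated with a minimal sketch. The object structure and fragment lists here are hypothetical stand-ins, not POV-Ray's actual data structures:

```python
# Minimal sketch: depth-composite independently rendered objects with a
# shared z-buffer. Each object yields (x, y, depth, color) fragments,
# whether they came from raytracing or scanlining.
import math

W, H = 4, 4

def composite(objects, width, height):
    zbuf = [[math.inf] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for obj in objects:
        # The z-buffer keeps only the nearest surface per pixel, so the
        # order in which objects are rendered does not matter.
        for x, y, depth, color in obj["fragments"]:
            if depth < zbuf[y][x]:
                zbuf[y][x] = depth
                image[y][x] = color
    return image

near = {"fragments": [(1, 1, 2.0, "red")]}
far = {"fragments": [(1, 1, 5.0, "blue"), (2, 2, 3.0, "blue")]}
img = composite([far, near], W, H)
```

Pixels covered by both objects get shaded once per object, which is the duplicated work Warp describes.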

> Even if it were so, POV-Ray doesn't give
> you the choice. It retains all the geometry regardless.

  The original question was memory usage of the raytracing and scanline
algorithms.

>>   1. It's a lot simpler. Rendering and texturing a mesh is a lot simpler
>> with a raytracing algorithm than with a scanline-rendering algorithm.

> Is it? With an (arbitrary) mesh, you need to determine the intersection
> of a ray with some kind of octree. This means knowing the ray-plane
> intersection algorithm and how to create, populate, and search octrees.
> Then you need to forward the ray to intersect the particular triangle
> in the mesh, and get its ST coords. You may also need to check other
> parts of the octree to be sure your intersection is nearest the eye.

  If the ray hits a triangle in a nearer octree node, there's absolutely
no way that a triangle in a farther octree node would be nearer. The octree
optimization works in such a way that the first time the ray hits a triangle,
it's surely the closest one.

> In scanline, you just go through the triangles in the mesh
> and draw them.

  If your mesh has 100 million triangles and 10 of them are visible (so that
the rest get hidden behind them), you don't want to do that, unless of
course render time doesn't matter.
  If you want any speed, you will need to perform a spatial tree optimization
to your scanline rendering algorithm as well. AFAIK commercial high-end
renderers do this.

  Besides, you say "draw them" as if it were trivial (or at least more trivial
than calculating the intersection of a ray and a triangle). I tend to
disagree.

> ST coords are easy because you get them
> from vertex and edge interpolation.

  You need perspective-correct interpolation (linear interpolation will
give you awful results), which is less trivial.
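The difference Warp alludes to is that under perspective projection an attribute must be interpolated as attribute/z against 1/z in screen space, then divided out; plain linear interpolation of the attribute is wrong. A minimal sketch for one texture coordinate:

```python
# Minimal sketch: perspective-correct vs. naive linear interpolation of a
# texture coordinate s between two projected vertices at depths z0 and z1.

def lerp(a, b, t):
    return a + (b - a) * t

def linear_s(s0, s1, t):
    # Naive screen-space interpolation: wrong under perspective.
    return lerp(s0, s1, t)

def perspective_s(s0, z0, s1, z1, t):
    # Interpolate s/z and 1/z linearly in screen space, then divide.
    inv_z = lerp(1.0 / z0, 1.0 / z1, t)
    s_over_z = lerp(s0 / z0, s1 / z1, t)
    return s_over_z / inv_z
```

At the screen-space midpoint of an edge spanning depths 1 and 3, the naive method returns s = 0.5 while the correct answer is 0.25, which is exactly the texture "swimming" Warp warns about. With micropolygons, as Ray notes later, the polygons are so small that the error per polygon is negligible.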

> Not true in full raytracing, unless you know in advance
> the direction of every secondary ray.

  You didn't understand this either. Please read my text again.

  You don't need to load meshes to memory, you only need to know their
bounding box. When a ray hits the bounding box, you load the mesh. If a
ray never hits the bounding box of a mesh, that mesh is never loaded.
Understand?
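The lazy-loading scheme described here can be sketched as a proxy object that stores only the bounding box and defers parsing the mesh until a ray first hits that box. All names here (including the loader callback) are hypothetical:

```python
# Minimal sketch: a mesh proxy that loads its mesh only on the first
# bounding-box hit. A mesh whose bbox is never hit is never loaded.

class LazyMesh:
    def __init__(self, bbox, loader):
        self.bbox = bbox          # ((xmin, ymin, zmin), (xmax, ymax, zmax))
        self._loader = loader     # called once, on first bbox hit
        self._mesh = None

    def hit_bbox(self, origin, direction):
        # Standard slab test against the axis-aligned bounding box.
        t_near, t_far = -float("inf"), float("inf")
        for axis in range(3):
            o, d = origin[axis], direction[axis]
            lo, hi = self.bbox[0][axis], self.bbox[1][axis]
            if abs(d) < 1e-12:
                if not (lo <= o <= hi):
                    return False
                continue
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
        return t_near <= t_far and t_far >= 0

    def intersect(self, origin, direction):
        if not self.hit_bbox(origin, direction):
            return None
        if self._mesh is None:
            self._mesh = self._loader()
        return self._mesh  # real code would now intersect the triangles

loaded = []
proxy = LazyMesh(((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)),
                 lambda: loaded.append(True) or "mesh-data")
```

Repeated hits reuse the cached mesh, so the per-ray overhead after the first hit is only the bbox test the raytracer performs anyway.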

-- 
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}//  - Warp -



From: Tom York
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 09:10:01
Message: <web.3edc9d6bc13294582ff34a90@news.povray.org>
Patrick Elliott wrote:
>With the number of components we could fit into a new card
>the result would be no bigger than an existing NVidia card, however the
>development time needed to produce the chip, make sure it was stable and
>then pray that game companies that are used to using the cheats supplied
>through OpenGL and DirectX will actually use it makes the odds of anyone
>seeing such a card on the market any time soon very unlikely.

Cheats? Was there ever a 30fps algorithm free of them? :-)

>The irony being that in many cases you have to upload large chunks of
>additional geometry and pre-made textures every few seconds in a game,
>while a true raytrace engine would only have to upload terrain geometry
>and those textures you couldn't produce using procedural systems (which for
>most games would be maybe 10% of them...)

Why won't a raytracing card have to load new geometry regularly? It's not
necessarily going to know ahead of time which objects will be needed during
a game, and they are not all going to fit in memory at the same time for a
modern game.

Procedural textures will have to be calculated by the graphics card, which
surely has enough to do already (if you get the CPU to do it and send it to
the graphics card, you might as well send an image). I thought anyway that
procedural textures are not specific to raytracing?

Whilst I'm sure that, given sufficient time, you could reproduce 90% (or
100%) of game image textures with procedural textures, the question surely
is whether it is worth the extra time taken to generate the textures in the
first place, every frame. I can especially see problems producing things
like text at speed.



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 10:11:58
Message: <3edcacae$1@news.povray.org>
>  If the ray hits a triangle in a nearer octree node, there's absolutely
> no way that a triangle in a farther octree node would be nearer.

Well, there's that word, "if". If it doesn't hit
the nearer octree node, then another one has to be tested.


>   If you want any speed, you will need to perform a spatial tree optimization
> to your scanline rendering algorithm as well. AFAIK commercial high-end
> renderers do this.

Yes, I agree that's a given. Using POV-Ray as
infrastructure is good in this respect
since it has raycasting in which to help build
such trees. Games of course do this (BSP, portals, etc.).
I think the attraction of scanline in film was
that many scenes are in rooms, where setting
up spatial culls is less demanding.


>  You need perspective-correct interpolation (linear interpolation will
> give you awful results), which is less trivial.

True, except for micropolygons. My particular prototype
does draw macropolygons, so I will need to do that.
In REYES, it's not necessary.


>   You don't need to load meshes to memory, you only need to know their
> bounding box. When a ray hits the bounding box, you load the mesh. If a
> ray never hits the bounding box of a mesh, that mesh is never loaded.
> Understand?

Yes, I see what you're saying. So you would
load the bboxes of the geometry by themselves,
and then they would fill with the actual
respective geometry as initial hits occurred.
Yes, that's doable. You eventually wind
up with only the necessary geometry.
For some reason I thought you meant trying to
load/unload. Sorry about that.

I guess a developer might reason that, worst case,
you need to have enough memory for all objects,
so why bother adding the (albeit slight) overhead
of doing the first-time hit test. But I can see
this being a win on distributed or shared resource
systems when rendering movies or different view
angles of stills, because statistically more RAM
would be available to other tasks. POV would
definitely benefit, because when it animates
it reparses the .pov file for each frame. So as
you zoom in on a scene, less and less geometry
would get loaded (assuming no reflected rays that
wind up hitting the off-camera objects, at least).

I don't want to get into a scanline vs. raytracing
debate (except to collect the pros and cons of
each approach) because I want to augment POV-Ray, not replace
its raytracer. Hybrid renderers offer a compelling
wealth of strengths. Maybe a person doesn't need
to scanline anything most of the time, but
it's nice to know that he could if he wanted to.
For me, it's about having choice. I don't want
to invest lots of time learning POV and then
be forced to always raytrace when scanlining
might offer a big speed benefit on a particular job
or various parts of a job.

Thanks,
Ray



From: Ray Gardener
Subject: Raytracing individual objects
Date: 3 Jun 2003 13:16:05
Message: <3edcd7d5$1@news.povray.org>
> You can first
> raytrace one mesh and then another (using the same z-buffer method for
> surface removal as in scanline rendering).

Yes, that's true. That can be an efficient way
in some cases, like when a heightfield presents
a slim or small profile. What I particularly
like is that it provides a fine-grained way
to mix scanlining and raytracing.

One difficulty I have with doing that is that
sometimes it's easier to drive a shader using
some stepped coordinate system of the primitive rather
than the world locations a raytracer returns.
For example, when scanlining a heightfield,
I do it cell by cell, so rocks can be distributed
based on how likely a cell is to be occupied.
With raytracing, I have to derive the cell
coords, and then maintain some kind of map
to keep track of which cell was painted with what.
I realize it's not good form to write primitive-specific
shaders, but they can be much easier to do sometimes,
especially when prototyping.

Displacement shading is also easier when scanlining.
At least, I haven't figured an easy way to do it
when raytracing.

Hmm.. this thread has been a good exploration
of the pros and cons of the two rendering techniques.
I'll make a digest of them and add it to my website.

Ray



From: Christopher James Huff
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 13:34:00
Message: <cjameshuff-27D2C9.12264003062003@netplex.aussie.org>
In article <Xns### [at] povrayorg>,
 ingo <ing### [at] tagpovrayorg> wrote:

> Just for my understanding; every time I see a statement similar to this I 
> wonder, does it mean building a CSG from POV-Ray primitives and then 
> tessellate it, or tessellate the "POV-Ray primitives" to a certain level and 
> then do the CSG (as I saw it in a very old version of 3D Studio)?

It generally refers to doing CSG with meshes. Making sure you don't end 
up with gaps or odd seams where the formerly separate meshes touch is 
not an easy problem to solve. Unions are easy, but merge, difference, 
and intersection are not.

There are algorithms for tessellating arbitrary objects, but they are 
never optimal. You end up with huge amounts of triangles in places where 
you don't need them, and you can still lose small details, the point of 
a cone or edges of a box for example. You can tessellate a box with 12 
triangles, but with one of these algorithms you are likely to end up 
with hundreds of thousands just to get rid of the faceting artifacts. 
CSG of multiple meshes is really the best way to go, it's just really 
hard.

One possible compromise would be to just avoid the problem of joining 
the seams by using solid geometry CSG on the meshes, like what POV does 
now with other primitives and meshes with an inside_vector specified. 
This is not really the best possible solution, but could give some 
improvements.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Lutz Kretzschmar
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 14:24:42
Message: <spppdv8cp7j6r6thnm11boc1rnservo7he@4ax.com>
Hi ingo, you recently wrote in povray.general:

> wonder, does it mean building a CSG from POV-Ray primitives and then 
> tessellate it, 
Well, you can't really do that.... I mean, what would you build?

> or tessellate the "POV-Ray primitives" to a certain level and 
> then do the CSG (as I saw it in a very old version of 3D Studio)?
Yes, this is how you do it. You have to do CSG on triangle meshes and
that just gets very ugly. It is doable, Moray does this. But the
result is not pretty. If (in Moray) you've ever evaluated a CSG and
then converted it to mesh and went to edit that mesh, you'll know what
I mean. 

- Lutz
  email : lut### [at] stmuccom
  Web   : http://www.stmuc.com/moray



From: Lutz Kretzschmar
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 14:28:23
Message: <rvppdvgq9pdf056hhf84s38de36lnagnn2@4ax.com>
Hi Ray Gardener, you recently wrote in povray.general:

> Well, CSG is undoubtedly a challenge.
> As long as "extremely hard" doesn't
> wind up becoming "intractable" I'm
> willing to forge ahead. In the reverse
> direction, I would love to see Moray
> display POV CSG objects.
Well, it can display them (after you do evaluate), but it doesn't
always work and the result, while displaying nicely on the screen, is
actually quite ugly once you look at it :-) But if all you want is
screen display, then yes, it's not intractable.
 
> In REYES, which tessellates down to
> micropolygons, one can consider the
> 3D location of the polygon to be
> inside or outside another primitive, and
> thus cull it at render time. 
That would be a nice way, but you're running into boundary conditions
as well that you need to be aware of. Because you're often going to
end up testing polygons against the boundary of the object... but if
you kept track to which object the polygon belonged, you could avoid
testing against it... sounds promising.
I guess the problem can be solved differently at rendering time.

Regards,

- Lutz
  email : lut### [at] stmuccom
  Web   : http://www.stmuc.com/moray



From: Patrick Elliott
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 15:07:38
Message: <MPG.19469fa441aabf3989807@news.povray.org>
In article <web.3edc9d6bc13294582ff34a90@news.povray.org>, 
tom### [at] compsocmanacuk says...
> Patrick Elliott wrote:
> >With the number of components we could fit into a new card
> >the result would be no bigger than an existing NVidia card, however the
> >development time needed to produce the chip, make sure it was stable and
> >then pray that game companies that are used to using the cheats supplied
> >through OpenGL and DirectX will actually use it makes the odds of anyone
> >seeing such a card on the market any time soon very unlikely.
> 
> Cheats? Was there ever a 30fps algorithm free of them? :-)
> 
True enough, but in this case I mean 'cheat' as in faking real geometry 
like boxes and spheres using triangles. I consider it a cheat because it 
takes advantage of the existing hardware capability to produce something 
that only looks real by using so many triangles that you can't place 
anything else in the scene with it. That is the nature of most cheats: 
something that looks good enough for your purpose, but doesn't really 
reproduce the result accurately.

> >The irony being that in many cases you have to upload large chunks of
> >additional geometry and pre-made textures every few seconds in a game,
> >while a true raytrace engine would only have to upload terrain geometry
> >and those textures you couldn't produce using procedural systems (which for
> >most games would be maybe 10% of them...)
> 
> Why won't a raytracing card have to load new geometry regularly? It's not
> necessarily going to know ahead of time which objects will be needed during
> a game, and they are not all going to fit in memory at the same time for a
> modern game.
> 
True, but some things don't need to be fed back in continually, and 
those based on primitives would take less room to store, meaning you can 
leave more of them in memory than normal. This should cut in half the 
amount of data you have to cram into the card in each frame, possibly 
more.

> Procedural textures will have to be calculated by the graphics card, which
> surely has enough to do already (if you get the CPU to do it and send it to
> the graphics card, you might as well send an image). I thought anyway that
> procedural textures are not specific to raytracing?
> 
I think some new cards may use them, but the same issue existed for them 
as for a true card-based engine: the methods used to produce them were 
simply too complex to 'fit' in the existing architecture.

> Whilst I'm sure that, given sufficient time, you could reproduce 90% (or
> 100%) of game image textures with procedural textures, the question surely
> is whether it is worth the extra time taken to generate the textures in the
> first place, every frame. I can especially see problems producing things
> like text at speed.
> 
Kind of hard to say, since the option isn't exactly common, and where it 
does exist it isn't used, from what I have seen.

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Rick [Kitty5]
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 16:23:24
Message: <3edd03bc@news.povray.org>
Lutz Kretzschmar wrote:
> If (in Moray) you've ever evaluated a CSG and
> then converted it to mesh and went to edit that mesh, you'll know what
> I mean.

Perhaps a triangle beautification process would be an idea :)

-- 
Rick

Kitty5 NewMedia http://Kitty5.co.uk
POV-Ray News & Resources http://Povray.co.uk
TEL : +44 (01270) 501101 - FAX : +44 (01270) 251105 - ICQ : 15776037

PGP Public Key
http://pgpkeys.mit.edu:11371/pks/lookup?op=get&search=0x231E1CEA



From: Christopher James Huff
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 16:29:50
Message: <cjameshuff-DB7A38.15222803062003@netplex.aussie.org>
In article <3edb8f1d@news.povray.org>,
 "Ray Gardener" <ray### [at] daylongraphicscom> wrote:

> Sorry for the confusion. For my purposes,
> I don't require all primitives, at least in
> the short to mid-term. Further along, it
> would be desirable to have as many POV
> primitives supported as possible.

What *are* your purposes? You still haven't explained this.


> I need the "lots of shapes" capability to implement shaders,

What do shaders have to do with the number of shapes?


> For shapes like hair strands and grass blades,
> however, supporting a spline (rope?) primitive
> should be done.

And what does this have to do with scanlining?


> I also don't see how an isosurface would
> be able to plant lots of trees or independent
> rocks on the terrain and maintain the
> same memory/rendering performance. If I
> was limited to raytracing, I would definitely
> brush up on isosurfaces, but otherwise...

Simple: Make a few dozen variations of tree meshes and plant a few tens 
of thousands (or more if you like) on the landscape. Blobs and 
isosurfaces can make good rocks, or you could use meshes for those too. 
And when viewed from a sufficient distance, a noisy isosurface can make 
a good bush.
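The memory argument behind "a few dozen variations, tens of thousands of placements" is instancing: every placement references the same triangle data, so memory grows with the instance count, not the triangle count. A minimal sketch with hypothetical names:

```python
# Minimal sketch: mesh instancing. Each placement stores only a reference
# to the shared mesh plus its own transform, not a copy of the triangles.

class MeshInstance:
    def __init__(self, shared_mesh, translation):
        self.mesh = shared_mesh       # shared reference, never copied
        self.translation = translation

tree_mesh = ["tri0", "tri1", "tri2"]  # stand-in for real triangle data
forest = [MeshInstance(tree_mesh, (x * 10.0, 0.0, 0.0))
          for x in range(10000)]
```

Ten thousand trees cost ten thousand small transform records plus one copy of the mesh, which is how POV-Ray's own `object { Tree translate ... }` pattern keeps forests affordable.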


> Like I mentioned, I think I have to try
> a prototype scanline implementation in POV to
> see how it goes. The jury is kind of out
> on conclusions just yet, and I don't

I haven't seen anyone but you suggest this is a good idea. Personally, I 
agree with Thorsten: adding scanline rendering to POV-Ray is silly and 
not that useful.


> want to endorse anything until I have
> accurate benchmarks. By Mr. Froehlich's
> own admission, raytracing speed doesn't
> compete with scanlining until you have
> billions of objects, so there's definitely
> room for exploration under that limit.

That's not what he said. Scanlining can compete with rendering triangles 
when memory is limited for the data, but you're ignoring other 
primitives. Your only reason given for thinking scanline would be better 
is an old paper for a computer system vastly different from modern 
computers, and your messages show extremely poor understanding of 
optimization.

Basically, POV doesn't need it, can't really benefit from it, it would 
be extremely limited compared to the raytracer, and it would be a huge 
amount of work to implement. You've given memory use as one reason, but 
raytracing can be optimized similarly. The fact that POV doesn't 
currently use these doesn't mean it never will. Speed as another, but 
you haven't compared a scanline rendering with an analogous raytracing. 
The speed advantages of scanlining are much less once you include all 
the work to make it do what raytracing does automatically. You've used 
video games as an example...POV isn't a video game, it is designed under 
entirely different constraints. If you want a preview, a much simpler 
and more effective solution would be an OpenGL preview.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.