POV-Ray : Newsgroups : povray.programming : More about modest proposals...
  More about modest proposals... (Message 1 to 4 of 4)  
From: Mark VandeWettering
Subject: More about modest proposals...
Date: 27 Jul 1999 16:46:28
Message: <379E1AA4.AFED43B@pixar.com>
Ken wrote:

>   Wouldn't it also add greatly to the memory overhead for the program. I
> recall reading that the reason the developers of pov originally decided to
> use the mathematically derived primitives unlike the triangle model is because
> it is both faster and less memory intensive to create. 3ds max to create
> an equivalent smooth sphere without using surface normal smoothing requires
> 1000's of triangles to represent. It would surely limit the number of
> objects you could include in your scene if you had low memory on your system
> and even those with more memory would max out rather quickly with only 10's
> to a few hundred objects. I'm not saying you shouldn't or can't add this
> feature though I wonder at it's usefulness.

Boy, my initial comment generated a lot of heat... :-)

It is true that tessellating primitives may seem like a bad idea.
Surely generating thousands of triangles for a simple sphere looks
wasteful.  Why did I make this suggestion, then?

Because if your goal is to generate realistic scenes, your scenes won't
have very many spheres in them.  Neither will they consist of Julia
sets, and they probably won't include very many implicit surfaces
either.

Most sophisticated modelers output models as some kind of parametric
surface mesh, a subdivision mesh, or simply polygons.  All of these
surface types are rather simple to tessellate uniformly, and really not
much more complicated to tessellate adaptively.  Once tessellated, such
models are easily arranged into a hierarchy for accelerated raytracing.
Also, once they are reduced to a common form, it becomes easier to (say)
cache them to disk and reorganize them.
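To make the "simple to tessellate uniformly" claim concrete, here is a
minimal Python sketch (my own illustration, not code from any of the
systems mentioned): sample a parametric surface f(u, v) on a regular
grid and split each grid quad into two triangles.

```python
import math

def tessellate_sphere(radius, n):
    """Uniformly tessellate a sphere parameterized by (u, v).

    Returns a list of triangles, each a tuple of three (x, y, z)
    vertices.  Any parametric surface f(u, v) -> (x, y, z) could be
    substituted for the sphere used here."""
    def f(u, v):
        theta = u * math.pi           # polar angle, 0..pi
        phi = v * 2.0 * math.pi       # azimuthal angle, 0..2pi
        return (radius * math.sin(theta) * math.cos(phi),
                radius * math.sin(theta) * math.sin(phi),
                radius * math.cos(theta))

    tris = []
    for i in range(n):
        for j in range(n):
            u0, u1 = i / n, (i + 1) / n
            v0, v1 = j / n, (j + 1) / n
            a, b, c, d = f(u0, v0), f(u1, v0), f(u1, v1), f(u0, v1)
            tris.append((a, b, c))    # split each quad of the
            tris.append((a, c, d))    # (u, v) grid into two triangles
    return tris

print(len(tessellate_sphere(1.0, 16)))    # 2 * 16 * 16 = 512 triangles
```

Adaptive tessellation only changes the recursion policy (split a region
further when it is too curved or too large on screen); the evaluation
step stays the same.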

There is a history of raytracing software in academia that does
precisely this: Barr and Von Herzen, and more recently the Toro group at
Stanford (who have carried the idea the farthest), have all built
raytracing engines based upon this principle.  A review of their work is
useful for understanding the basic ideas.

I'll leave you with a somewhat concrete example.  A while ago, I decided
to try to use noise functions to raytrace some pictures of an asteroid.
It's basically a sphere with a noise-based displacement map on it.  The
program takes a level-of-detail parameter which allows you to generate
the asteroid at varying levels of detail.

The program which generates the triangles of the tessellation runs as a
separate process, and generates the level-6 tessellation, which consists
of 32k triangles, in about three seconds.  When loaded into my own
(rather pedestrian) raytracer, I can generate an image at 512x512 pixels
(507k rays total) in about 100 seconds, or about 160 usecs per ray.
Just one call to the noise function I used takes around 20 usecs.  That
means that to beat the time of just doing the triangles, you can afford
an absolute maximum of 8 noise calls per ray.  Assuming that your
raytracer would normally work by some kind of adaptive subdivision, it
would seem to require at least three noise evaluations per level of
recursion, and we have gone six levels down, which means a minimum of 18
noise calls.  That is a factor of two slower than the simple
pre-tessellated case, and more once you count the additional overhead
needed to find which part of the object to refine next.
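The arithmetic above can be sketched in a few lines, using the timings
quoted in this post (the 3-evaluations-per-level figure is the same
assumption made above):

```python
# Back-of-the-envelope comparison, using the timings quoted above.
NOISE_CALL_US = 20        # one noise-function evaluation: ~20 usec
PRETESS_RAY_US = 160      # per-ray cost with the level-6 pre-tessellation

# Break-even budget: noise calls per ray an on-the-fly scheme can afford.
breakeven = PRETESS_RAY_US / NOISE_CALL_US        # 8 calls

# Adaptive subdivision: ~3 noise evaluations per recursion level, 6 levels.
adaptive_calls = 3 * 6                            # 18 calls
adaptive_us = adaptive_calls * NOISE_CALL_US      # 360 usec per ray

print(breakeven)                                  # 8.0
print(adaptive_us / PRETESS_RAY_US)               # 2.25 -> roughly 2x slower
```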

At level 7, the generation program takes about 4x longer, roughly 12
seconds.  The raytrace, however, takes only 175 seconds, an increase of
merely 1.75x.  The time per ray is about 172 usecs, an increase of only
4%; the rest of the time is additional parsing and hierarchy-building
preprocessing.  In short, the deeper in recursion you go, the better the
case for tessellation looks...

Well, except for one caveat, and it's a big one: memory.  At level 7 my
raytracer chewed through 107M of memory to produce this asteroid
picture.  Now, in its defense, my raytracer is an incremental
modification of one I wrote nearly 14 years ago, so it isn't the most
efficient.  It uses far too much memory (each triangle actually stores a
transform matrix and its inverse, as well as a bounding box, and it uses
doubles rather than floats...), but the criticism remains: tessellation
DOES take memory, and you have to account for that.
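For a rough sense of where that memory goes, here is a back-of-the-
envelope sketch.  The byte counts for each layout are my own
assumptions for illustration, not measurements of the actual raytracer:

```python
# Rough per-triangle memory for two layouts (layout assumptions are mine).
DOUBLE, FLOAT, INT = 8, 4, 4

# Heavyweight layout like the one described above: a 4x4 transform and
# its inverse, a bounding box (6 doubles), and 3 vertices, all doubles.
heavy = (16 + 16 + 6 + 3 * 3) * DOUBLE       # bytes per triangle

# Lean indexed mesh: 3 vertex indices per triangle plus shared float
# vertices (a closed mesh has roughly one vertex per two triangles).
lean = 3 * INT + (3 * FLOAT) / 2             # bytes per triangle, amortized

print(heavy)                    # 376
print(lean)                     # 18.0
print(32_000 * heavy / 2**20)   # level-6 mesh: ~11.5 MB in the heavy layout
```

The point stands either way; the lean layout just moves the wall out by
an order of magnitude rather than removing it.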

Rather than running away and saying it's too hard, the proper approach
is to confront the issue and see what can be done.  One obvious option
is to tessellate lazily: only tessellate an object when its bounding box
is hit by a ray.  Do adaptive-rate tessellation: don't generate fine
tessellations for objects which are small on screen.  Consider a caching
scheme to page out geometry which hasn't been accessed in a while.
Reorder computation to make better use of cached triangles.
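A minimal sketch of the lazy scheme, assuming a standard slab test for
the bounding box and using the triangle list itself as a stand-in for
the real per-triangle intersection:

```python
class LazyMesh:
    """Tessellate only when a ray first enters the bounding box."""
    def __init__(self, bbox_min, bbox_max, tessellate):
        self.bbox_min, self.bbox_max = bbox_min, bbox_max
        self._tessellate = tessellate      # deferred triangle generator
        self.triangles = None              # nothing generated yet

    def _hit_bbox(self, orig, direction):
        # Standard slab test against the axis-aligned bounding box.
        t0, t1 = 0.0, float("inf")
        for o, d, lo, hi in zip(orig, direction,
                                self.bbox_min, self.bbox_max):
            if abs(d) < 1e-12:
                if not lo <= o <= hi:
                    return False           # parallel ray outside the slab
                continue
            ta, tb = (lo - o) / d, (hi - o) / d
            t0, t1 = max(t0, min(ta, tb)), min(t1, max(ta, tb))
        return t0 <= t1

    def intersect(self, orig, direction):
        if not self._hit_bbox(orig, direction):
            return None                    # bbox miss: tessellation never paid
        if self.triangles is None:         # first bbox hit: pay exactly once
            self.triangles = self._tessellate()
        return self.triangles              # stand-in for real triangle tests

# The expensive generator runs once, and only for objects a ray reaches.
calls = []
mesh = LazyMesh((-1, -1, -1), (1, 1, 1),
                lambda: calls.append("tessellated") or ["...triangles..."])
mesh.intersect((0, 0, -5), (1, 0, 0))      # misses the box: no tessellation
mesh.intersect((0, 0, -5), (0, 0, 1))      # hits: tessellates now
mesh.intersect((0, 0, -5), (0, 0, 1))      # hits again: cached
print(len(calls))                          # 1
```

Adaptive rates and paging drop in at the same choke point: pick the
tessellation level from projected screen size before calling the
generator, and evict `triangles` when memory runs low.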

The Toro work at Stanford is a significant example of how to make this
kind of system work.   And it supports displacement maps too!

Mark



-- 
Mark T. VandeWettering 			Telescope Information (and more) 
Email: <mar### [at] pixarcom> 		http://www.idle.com/~markv/
No Code International Member #1173



From: Mark Wagner
Subject: Re: More about modest proposals...
Date: 28 Jul 1999 02:20:00
Message: <379ea110@news.povray.org>
Mark VandeWettering wrote in message <379### [at] pixarcom>...
>The Toro work at Stanford is a significant example of how to make this
>kind of system work.   And it supports displacement maps too!


Have you seen the latest developments with the SuperPatch in p.b.i?



From: Alexander Enzmann
Subject: Re: More about modest proposals...
Date: 28 Jul 1999 08:05:47
Message: <379EF277.C936BC0D@mitre.org>
Mark Wagner wrote:
> 
> Mark VandeWettering wrote in message <379### [at] pixarcom>...
> >The Toro work at Stanford is a significant example of how to make this
> >kind of system work.   And it supports displacement maps too!
> 
> Have you seen the latest developments with the SuperPatch in p.b.i?

You are missing his point.  The raymarching technique used in the
"SuperPatch" would need many more evaluations of the noise function than
a basic tessellation.  Solid noise functions cost a lot to evaluate.
Thus, for that sort of surface, triangles can be a big win (especially
in these days of lots of RAM and simple mapping of files to VM).

There are always going to be time/space tradeoffs.  By splitting into
triangles, you have the cost of storing vertices and triangle indices
(plus the bounding hierarchy).  For a raymarcher you have the cost of
evaluating a potential function at every step along the ray (and the
memory occupied by the displacement map).

There is also a pretty long tradition of making specific primitive types
for raytracing.  For the asteroid example, Mark could have implemented a
spherical heightfield prim (like in Polyray) that has been built
expressly for tracing.  With that approach you have the memory savings
of only storing the displacement map (plus some additional acceleration
information), and much better performance than a raymarcher.  Of course,
it would have taken time to write the prim...

Xander



From: Mark VandeWettering
Subject: Re: More about modest proposals...
Date: 28 Jul 1999 13:18:45
Message: <379F3B75.A5DC0D1F@pixar.com>
Alexander Enzmann wrote:
> 
> Mark Wagner wrote:
> >
> > Mark VandeWettering wrote in message <379### [at] pixarcom>...
> > >The Toro work at Stanford is a significant example of how to make this
> > >kind of system work.   And it supports displacement maps too!
> >
> > Have you seen the latest developments with the SuperPatch in p.b.i?
> 
> You are missing his point.  The raymarching technique used in the
> "SuperPatch" would need many more evaluations of the noise function than
> a basic tesselation.  Solid noise functions cost a lot to evaluate.
> Thus, for doing that sort of surface triangles can be a big win
> (especially in these days of lots of RAM and simple mapping of files to
> VM).

That is, indeed, almost precisely the point.  Basically, most methods
for intersecting rays with parametric surfaces boil down to
tessellation.  Oftentimes you avoid computing regions of the
tessellation which are far from the ray, but in the end you arrive at a
particular u-v region, test it for intersection with a triangle or quad,
and return that as the intersection.  That might sound like a good idea,
but in fact, if your patch is going to be tested against rays many
times, you might be better off evaluating the tessellation entirely once
and saving it.  You then need not apply the relatively complex
refinement operations of (say) Bezier clipping or interval arithmetic,
but can merely walk the hierarchy of triangles in as fast a manner as
you can manage (Brian Smits has an interesting article in JGT about how
to do this).

Even if your patch is tested against rays only a couple of times, you
might be better off computing tessellations, because tessellations are
dead simple to compute.
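As an illustration of how simple that inner loop can be once everything
is triangles, here is a sketch of one standard ray/triangle test
(Möller-Trumbore), in Python for clarity; a production tracer would of
course flatten this into tight C and precompute what it can:

```python
def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.

    Returns the distance t along the ray, or None on a miss."""
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direc, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direc, qvec) * inv         # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    return t if t > eps else None

print(ray_triangle((0.2, 0.2, 0.0), (0, 0, 1),
                   (0, 0, 1), (1, 0, 1), (0, 1, 1)))   # 1.0
```

Compare that with implementing Bezier clipping or interval refinement
per ray: the pre-tessellated path trades a little memory for a far
simpler and more predictable inner loop.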

> There is also a pretty long tradition of making specific primitive types
> for raytracing.  For the asteroid example, Mark could have implemented a
> spherical heightfield prim (like in Polyray) that has been built
> expressly for tracing.  With that approach you have the memory savings
> of only storing the displacement map (plus some additional acceleration
> information), and much better performance than a raymarcher.  Of course,
> it would have taken time to write the prim...

Indeed.  It would be much simpler to just go ahead and tessellate the
model and feed it to a raytracer optimized for ray-triangle
intersection.  A bit harder would be to do such tessellation lazily and
adaptively; you'd like not to tessellate portions of the model which
aren't visible.  Better still, but again more complicated, would be to
cache regions of the tessellation to help keep memory consumption down.

In general, I guess I am arguing against the creation of special
primitive types within a raytracer, at least at the level of ray
intersectors.  This architecture differs from what is generally
presented as a conventional raytracer design, but conventional
raytracing engines normally don't scale very well.  The alternative
design is in many ways an attempt to adapt some features of the Reyes
architecture to a raytracing engine.  It is my belief that it presents a
more flexible and scalable architecture for image synthesis than the
more traditional approach.

Hope this has been food for thought...
	
	Mark

> Xander

-- 
Mark T. VandeWettering 			Telescope Information (and more) 
Email: <mar### [at] pixarcom> 		http://www.idle.com/~markv/
No Code International Member #1173



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.