  CGI  
From: Orchid Win7 v1
Date: 24 Jun 2012 17:20:10
Message: <4fe7848a$1@news.povray.org>
When I first got into computer graphics, "3D" meant a white wire-frame 
on a black background. Because that's just about all the video hardware 
could handle.

Later on, I got to work with programs like Expert 4D, Imagine 3D, Real 
3D, Cinema 4D and so on. All of these run on the Commodore Amiga, and 
(with the exception of Real 3D) they all draw polygons with texturing 
and light sourcing, and maybe shadows. Mostly they are distinguished by 
their particular mesh editing abilities and texturing options. Imagine 
3D has a special "ray tracing" mode, in which reflections work (but only 
if the objects are added to the scene in the correct order), and sadly 
it makes the render about 45x slower.

Remember that the Amiga is powered by the Motorola 68000 series, and may 
or may not have an FPU installed. My dad's Amiga 600 took about 2 hours 
to render a single torus mesh with one point light source and a simple 
wood-grain procedural texture. It was slow times, my friend, slow times. 
(Amiga 600 = M68000 running at 7.09 MHz and no FPU.)

Real 3D allows spherical spheres, conical cones, and so forth, plus it 
has CSG. It also has a nasty GUI that makes it really fiddly to actually 
select the thing you want. The named hierarchy of objects is useful though.

Then along came the rise of graphics cards, and with them a whole 
industry of drawing texture-mapped polygons really, really fast. I 
played all of Quake II without a 3D graphics card. Because, remember, 
these things cost serious money back then. (As did the PC itself when 
we bought it - on finance, obviously.) Nobody has that kind of money 
lying around. Well, not in 1995 anyway.

I still remember playing Quake II with hardware acceleration for the 
first time, and being amazed at the glory of trilinear filtering and 
coloured light sources. (And framerates exceeding 10 FPS.) If that 
sounds silly, try playing without those things, and you'll quickly see 
what I mean!

(I was also surprised when the game /didn't/ take 20 minutes to load 
each level. I had assumed it was normal for it to take so long; 
apparently it was just our PC's 16MB of RAM causing massive swapping!)

And then, of course, I discovered POV-Ray. While computer games amuse 
themselves drawing polygons really, really fast, POV-Ray directly 
simulates things like curved surfaces, 3D textures, reflection, 
refraction, and so forth.

I wrote elaborate Pascal programs to draw wireframe models, perform 
backface culling, do simple Gouraud shading, handle rotations, and so 
forth. In game graphics, more sophisticated trickery is used to fake 
curved surfaces, rough textures, and even reflections.

POV-Ray uses no such trickery. It just directly solves the rendering 
problem, in a simple and elegant way. Every now and then somebody would 
pop up and say "why bother with POV-Ray when a modern graphics card can 
easily out-do it?" But the example images offered never looked anywhere 
near as good as what a few lines of code with POV-Ray could produce.

The SDL user interface is radical in its own way too. With a visual 
modeller, it would be ridiculously difficult to position a sphere so 
that it exactly coincides with the end of a cylinder. With SDL, it's 
trivial. Complex shapes can be built up using nothing but perfect 
geometric primitives. And if you apply a wood texture, surfaces with 
different orientations show different grain in the correct way. You 
can't do /that/ with 2D pixel maps. And let's not forget, of course, 
that if bump maps just don't cut it for you, with an isosurface you can 
make genuinely curved geometry. (Although the render time explodes...)

Then of course, computers keep getting faster. POV-Ray added photon 
maps. It added radiosity. It added area lights. It added focal blur, 
atmospheric media, and dispersion. Ever more rendering power, requiring 
ever longer to draw, but computers keep getting faster.

(The old benchmark, skyvase.pov, used to take /hours/ to render. My 
current PC rips through it at previously unthinkable resolutions in 
just a second or two.)



Now, however, I sense that the tables have turned. Once, it was the 
scanline renderers that went through all sorts of elaborate tricks to 
try to simulate constructs that POV-Ray just renders directly, 
physically correctly, without any tricks or gimmicks. It just directly 
simulates the laws of optics, and all these individual effects are 
simply an automatic result of that.

But now we have the so-called unbiased renderers, the path tracers, 
whatever you want to call them. With POV-Ray I need to configure photon 
maps and set up area lights and waste hours tweaking and tuning 
radiosity parameters. And then I see a demo that runs IN A FRIGGING WEB 
BROWSER which uses my GPU to almost instantly render a Cornell box that 
would take multiple hours to draw with POV-Ray.

Now the boot is on the other foot: These renderers just shoot lots and 
lots of rays in all directions, and average the results. The longer you 
run it, the sharper the image becomes. No tweaking parameters, no 
setting up different simulations for different aspects of the result. 
Just tell it what to draw, and wait until it looks good enough. All the 
effects fall out of the simulation; they don't have to be configured 
individually.

I will freely admit, however, that almost all of the actual demo images 
are still suspiciously grainy. Which suggests that for "real" scenes, 
rendering still isn't instant. For example,

http://tinyurl.com/87oyfmh

You'd need some insane radiosity settings to get that out of POV-Ray. 
I'm not sure if it's even /possible/ to get that glow around the light 
sources. On the other hand, no one can deny the image is slightly grainy 
in places.

And then I see something like this:

http://tinyurl.com/7r72o7k

At first glance, this is an amazing, near-photographic image. And then I 
notice THE POLYGONS! IT BURNS!!! >_< KILL IT WITH FIRE!

It seems that while these renderers have vastly superior lighting 
capabilities, they're still stuck in the obsolete technology of drawing 
lots and lots and lots of tiny flat surfaces and desperately trying to 
pretend that they're actually curved. Yuk!

If only there were a way I could use [something like] SDL to control a 
GPU-powered renderer that has all the great features of POV-Ray, yet has 
a modern illumination system...



Now here's an interesting question: Does anybody know how this stuff 
actually /works/?

In the simplest form, it seems that for each pixel, you shoot a ray 
backwards, and let it bounce off things at random angles (subject to how 
rough or smooth the materials are) until it finds a light source. For 
one ray, the result is pretty random. But average together a few 
bazillion rays, and slowly the random speckle converges to a beautiful, 
smooth (but slightly grainy) image.
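
Something like this, I imagine - a tiny Python sketch of exactly that 
loop. (Toy scene of my own invention: one grey diffuse ball, one 
spherical lamp, uniform random bounces, a single "pixel". Not a real 
renderer, just the shape of the algorithm.)

import math, random

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def mul(a, s): return [a[i] * s for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def norm(a):
    length = math.sqrt(dot(a, a))
    return [x / length for x in a]

# (centre, radius, diffuse albedo, emission)
SPHERES = [
    ([0.0, -1.0, 3.0], 1.0, 0.75, 0.0),   # grey ball
    ([0.0,  2.5, 3.0], 1.0, 0.00, 5.0),   # spherical "lamp"
]

def hit(origin, direction):
    """Nearest intersection as (distance, sphere), or None.
    (Near root only - fine while rays always start outside the spheres.)"""
    best = None
    for s in SPHERES:
        centre, radius, _, _ = s
        oc = sub(origin, centre)
        b = dot(oc, direction)
        disc = b * b - (dot(oc, oc) - radius * radius)
        if disc < 0:
            continue
        t = -b - math.sqrt(disc)
        if t > 1e-4 and (best is None or t < best[0]):
            best = (t, s)
    return best

def random_hemisphere(normal):
    """Uniform random direction on the hemisphere around the normal."""
    while True:
        d = [random.uniform(-1, 1) for _ in range(3)]
        if 1e-9 < dot(d, d) <= 1.0:
            d = norm(d)
            return d if dot(d, normal) > 0 else mul(d, -1.0)

def trace(origin, direction, depth=0):
    """Follow one random path until it finds a light (or gives up)."""
    if depth > 5:
        return 0.0        # give up (technically this adds a tiny bias)
    found = hit(origin, direction)
    if found is None:
        return 0.0        # flew off into the void
    t, (centre, radius, albedo, emission) = found
    if emission > 0:
        return emission   # the path stumbled onto the lamp
    point = add(origin, mul(direction, t))
    normal = norm(sub(point, centre))
    bounce = random_hemisphere(normal)
    # Lambertian surface with uniform hemisphere sampling:
    # brdf * cos / pdf = (albedo/pi) * cos * 2*pi = 2 * albedo * cos.
    return 2.0 * albedo * dot(bounce, normal) * trace(point, bounce, depth + 1)

# One "pixel": average lots of random paths fired from the camera.
samples = 20000
pixel = sum(trace([0, 0, 0], norm([0, -0.2, 1])) for _ in range(samples)) / samples
print("estimated radiance:", pixel)

Run it with only a handful of samples and the answer jumps around 
wildly; crank the sample count up and it settles down - exactly the 
"the longer you run it, the sharper it gets" behaviour.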

Trouble is, depending on what your light sources are like, the vast 
majority of rays will never hit one. It's an old problem; you can start 
from the lights and worry about never hitting the camera, or you can 
start from the camera and worry about never hitting the lights. POV-Ray 
and its ilk solve this problem by "knowing" exactly where all the light 
sources are and shooting rays directly at them. But this misses any 
light reflected from other surfaces - even trivial specular reflections. 
(Hence, photon maps or radiosity.)
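
In path-tracer land the same idea goes by the name "next event 
estimation": at every bounce you /also/ fire one ray straight at a 
known light and add its contribution if nothing is in the way, rather 
than hoping a random bounce finds it. A sketch, reusing the toy scene 
and helpers from above (and glossing over the bookkeeping a real 
renderer needs so the lamp isn't counted twice):

def direct_light(point, normal, albedo):
    """Light arriving at `point` straight from the lamp - no bounces."""
    centre, radius, _, emission = SPHERES[1]   # the emissive sphere
    to_light = sub(centre, point)
    dist = math.sqrt(dot(to_light, to_light))
    direction = mul(to_light, 1.0 / dist)
    nearest = hit(point, direction)
    if nearest is None or nearest[1] is not SPHERES[1]:
        return 0.0                             # the lamp is occluded
    cos_term = max(dot(direction, normal), 0.0)
    # Rough solid angle of the lamp, treated as a disc of the same radius.
    solid_angle = math.pi * radius * radius / (dist * dist)
    return (albedo / math.pi) * emission * cos_term * solid_angle

Call something like that at every hit point inside trace() and the lamp 
no longer has to be found by accident - which is why classical ray 
tracers get clean direct lighting essentially for free, at the cost of 
missing the indirect light described above.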

You can force it so the last bounce /always/ hits a known light source. 
(Or, the other direction, the last ray always hits the camera.) 
Apparently "bidirectional path tracing" works by starting one end from 
the camera, the other end from a light source, and forcing them to 
connect in the middle.

Wikipedia asserts "Contrary to popular myth, Path Tracing is /not/ a 
generalisation of ray tracing." This statement makes no sense 
whatsoever. :-P

Then there's this whole idea of "unbiased rendering". The general idea 
is that, while any individual ray shot might produce any old colour, /on 
average/ each pixel's colour will converge to the physically correct
one. The term comes from statistics. A statistical estimator may or may 
not be "consistent", and may or may not be "unbiased". Does /anybody/ 
understand the difference? Because I sure as hell don't!
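
(For what it's worth, the textbook definitions seem to be roughly this, 
writing X_N for the estimate after N samples and X for the true value:

    unbiased:    E[X_N] = X                  for every N
    consistent:  X_N -> X  (in probability)  as N -> infinity

So, as far as I can tell, an unbiased renderer is correct /on average/ 
from the very first sample - only the variance, i.e. the grain, shrinks 
as you add more - whereas a merely consistent one starts out 
systematically wrong and only converges to the right answer in the 
limit. Photon mapping is apparently the usual example of 
consistent-but-biased.)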

Then today, I made the grave mistake of attempting to understand 
"Metropolis light transport".

http://graphics.stanford.edu/papers/metro/gamma-fixed/metro.pdf

Does this paper make /any/ sense to anybody?

In particular, the basic idea is that, rather than tracing /random/ ray 
paths, you start with a random path, and then gradually "adjust" it, 
such that each path is similar to the previous one. This immediately 
raises two important questions:

1. How does this not utterly bias the calculation, greatly favouring one 
set of paths rather than exploring the entire path space?

2. Why does this produce superior results?

The actual mathematics is too dense for me to follow. In particular, 
it's difficult to work out the physical significance of some of the 
terms and notation.
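
The best I can piece together is that it's an ordinary 
Metropolis-Hastings sampler, just run over the space of light paths: 
each mutation is accepted or rejected with a probability chosen so 
that, in the long run, paths get visited in proportion to how much 
light they carry. That acceptance rule is (supposedly) the answer to 
question 1 - the samples are correlated, but their long-run 
distribution is still the right one - and the answer to question 2 is 
that once the sampler finds an important path (say, one that squeezes 
through a barely-open door), its mutations explore all the nearby 
important paths instead of blindly re-rolling the dice. Here is a 
one-dimensional toy version of the sampler itself - nothing to do with 
actual light transport, just the accept/reject machinery:

import math, random

def f(x):
    """Some non-negative 'contribution' we want to sample in proportion to."""
    return math.exp(-x * x) * (2.0 + math.sin(5.0 * x))

def metropolis(steps=200000, mutation_size=0.5):
    x, fx = 0.0, f(0.0)
    histogram = {}
    for _ in range(steps):
        # Propose a small, symmetric mutation of the current sample.
        y = x + random.uniform(-mutation_size, mutation_size)
        fy = f(y)
        # Accept with probability min(1, f(y)/f(x)); otherwise stay put.
        if random.random() < min(1.0, fy / fx):
            x, fx = y, fy
        bucket = round(x, 1)
        histogram[bucket] = histogram.get(bucket, 0) + 1
    return histogram

hist = metropolis()
# In the long run, visits to each bucket are proportional to f there:
for bucket in sorted(hist):
    print("%+.1f %s" % (bucket, "#" * (hist[bucket] // 2000)))

The printout is a little ASCII histogram whose shape matches f() - the 
sampler never computes any normalising constant, it just wanders around 
and spends more time where f is large.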

I'd almost be tempted to write a renderer myself - except that running 
on a CPU, it will undoubtedly take several billion millennia to draw 
anything. (There's a /reason/ nobody has tried this until today...)

