New LuxRender web site (http://www.luxrender.net) (Messages 106 to 115 of 175)
From: scott
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 08:45:53
Message: <47bd8091$1@news.povray.org>
> ask for a sphere, I get a sphere. Not some polygon mesh approximating a 
> sphere, but AN ACTUAL SPHERE.

In the end you get a bunch of pixels approximating a sphere though, so as 
long as your polygons are roughly the same size as pixels, you won't get 
anything worse than a true sphere.

> You can construct shapes of arbitrary complexity. Surfaces and textures 
> can be magnified arbitrarily and never look pixellated.

That's just because they're procedurally generated rather than bitmap 
textures.  You can do the same on a GPU if you want (though usually a 
bitmap texture is faster, even a really big one; that's probably true in 
POV too for moderately complex textures).

> Reflections JUST WORK. Refraction JUST WORKS. Etc.

What you mean is, the very simplistic direct reflection and refraction in 
POV "just works".  Try matching anything seen in reality (caustics, blurred 
reflections, area lights, diffuse reflection, focal blur, subsurface 
scattering) and you enter the world of parameter tweaking.

> (OTOH, the fast preview you can get sounds like a useful feature. Ever 
> wait 6 hours for a render only to find out that actually it looks lame? 
> It's not funny...)

I usually do lots of quicker renders first, one without radiosity/focal 
blur/area lights to make sure the geometry is placed correctly.  Then do a 
low-res render with radiosity and area lights to check that the 
colours/brightness looks ok overall.  Then maybe another high res one with 
just focal blur to make sure I have enough blur_samples.  Then finally do 
the big one with everything turned on.  And pray ;-)



From: scott
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:00:13
Message: <47bd83ed$1@news.povray.org>
>> Not just the rendering method, but things like different reflection and 
>> lighting models, newer methods of increasing the efficiency of ray 
>> tracing (I posted a link in the pov4 group), etc.
>
> OK. I wasn't aware that any existed, but hey.

The lighting model implemented in POV is about the simplest available; it's 
what 3D cards first used 10 years ago.  Today far more accurate models are 
used.  You must have heard names like Cook-Torrance, Blinn etc; if you've 
never looked outside of POV you wouldn't know they existed.  They start to 
model the microfacets on a surface and produce lighting results based on 
the geometry and physics of the microfacets (eg occlusion, self-shadowing 
etc).  Then there are anisotropic materials like brushed metal, where the 
properties are different depending on which direction the light is coming 
from.
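
To give a flavour, here's a rough Python sketch of the Cook-Torrance 
specular term (my own paraphrase of the textbook formulas, with made-up 
function name and parameter values; it's not code from POV or any other 
renderer):

import math

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h,
                           roughness=0.3, f0=0.9):
    # Classic microfacet form: D * G * F / (4 (N.V)(N.L)), where
    #   D: Beckmann distribution (how many facets face the half-vector),
    #   G: geometric term (facet self-shadowing and masking),
    #   F: Schlick's approximation to the Fresnel reflectance.
    if n_dot_l <= 0.0 or n_dot_v <= 0.0:
        return 0.0
    m2 = roughness * roughness
    c2 = n_dot_h * n_dot_h
    d = math.exp((c2 - 1.0) / (m2 * c2)) / (math.pi * m2 * c2 * c2)
    g = min(1.0,
            2.0 * n_dot_h * n_dot_v / v_dot_h,
            2.0 * n_dot_h * n_dot_l / v_dot_h)
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    return d * g * f / (4.0 * n_dot_v * n_dot_l)

# Light and viewer both ~30 degrees off the normal (half-vector = normal):
print(cook_torrance_specular(0.87, 0.87, 1.0, 0.87))

Vary roughness and f0 and the highlight drifts between plastic-looking 
and metal-looking, which is the whole point.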

AFAIK POV already uses some clever techniques for speeding up tracing 
complex scenes (try adding 100000 spheres to your ray tracer and compare the 
speed with POV...).  But there are plenty more new techniques out there that 
are certainly worth investigating, some of them quite recent.

> My point is that usually, no matter how closely you look at a POV-Ray 
> isosurface, it will always be beautifully smooth. Every NURBS demo I've 
> ever seen for a GPU has been horribly tessellated with sharp edges 
> everywhere.

NURBS are not isosurfaces though.  What the nVidia demo does is to generate 
the triangle mesh on the fly from the isosurface formula.  So when you zoom 
in, it can create more detail over a small area, and when you zoom out it 
doesn't need so much detail, but of course it needs it over a bigger area. 
It gives the impression that there *are* billions of triangles, but in 
reality it only draws the ones that you can see, small enough that you can't 
tell they are triangles.  Clever eh?  Same way as you can drive for an hour 
around the island on "Test Drive Unlimited", seeing billions of triangles, 
but of course it doesn't try to draw them (or even have them in RAM) all at 
once.

> POV-Ray, of course, gets round this problem by using more sophisticated 
> mathematical techniques than simply projecting flat polygons onto a 2D 
> framebuffer. I've yet to see any GPU attempt this.

GPUs just do it in a different way.  The end result is the same, pixels on 
the screen that match what you would expect from the isosurface formula.

> Mmm, OK. Well my graphics card is only a GeForce 7800 GTX, so I had 
> assumed it would be too under-powered to play it at much above 0.02 FPS.

Nah, on low detail it should certainly be playable.



From: scott
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:05:02
Message: <47bd850e$1@news.povray.org>
>> Basically it is this simple:
>>
>> 1. You shoot a ray into the scene and let it bounce randomly based on
>> the material characteristics.
>> 2. Repeat many times.
>
> Right. So trace rays, let them bounce off diffuse surfaces at semi-random 
> angles, and gradually total up the results for all rays?

Yes.

> Presumably that won't work with point-lights though? (You'd never hit 
> any!)

Unless you start some rays from the point light to add into the mix (in a 
physically correct way of course).
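
To make that concrete, here's a toy diffuse-only path tracer in Python 
(entirely my own sketch; the scene, the names and the numbers are made 
up, and a small emissive sphere stands in for the light precisely 
because a random ray would never hit a point light):

import math, random

def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def mul(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a): return mul(a, 1.0 / math.sqrt(dot(a, a)))

# Scene: (centre, radius, emission, diffuse reflectance).  A small,
# very bright sphere stands in for the light source.
SPHERES = [
    ((0.0, -1000.0, 0.0), 999.0, 0.0, 0.7),   # floor
    ((0.0, 1.5, 0.0), 0.5, 12.0, 0.0),        # emissive "bulb"
    ((0.0, 0.0, 0.0), 1.0, 0.0, 0.5),         # grey ball
]

def nearest_hit(o, d):
    best = None
    for centre, radius, emit, refl in SPHERES:
        oc = sub(o, centre)
        b = dot(oc, d)
        disc = b * b - dot(oc, oc) + radius * radius
        if disc <= 0.0:
            continue
        t = -b - math.sqrt(disc)
        if t <= 1e-4:
            t = -b + math.sqrt(disc)
        if t > 1e-4 and (best is None or t < best[0]):
            best = (t, centre, emit, refl)
    return best

def radiance(o, d, depth=0):
    # Step 1: shoot a ray and let it bounce randomly off what it hits.
    h = nearest_hit(o, d)
    if h is None:
        return 0.0
    t, centre, emit, refl = h
    if depth >= 5 or refl == 0.0:
        return emit
    p = add(o, mul(d, t))
    n = norm(sub(p, centre))
    while True:  # uniform random direction in the hemisphere around n
        nd = (random.uniform(-1, 1), random.uniform(-1, 1),
              random.uniform(-1, 1))
        if 0.0 < dot(nd, nd) <= 1.0:
            break
    nd = norm(nd)
    if dot(nd, n) < 0.0:
        nd = mul(nd, -1.0)
    # Lambertian estimator for uniform hemisphere sampling: 2*cos(theta).
    return emit + refl * 2.0 * dot(nd, n) * radiance(p, nd, depth + 1)

# Step 2: repeat many times and average; the noise falls as samples grow.
cam, ray = (0.0, 0.5, -5.0), norm((0.0, 0.0, 1.0))
for samples in (10, 100, 1000):
    est = sum(radiance(cam, ray) for _ in range(samples)) / samples
    print(samples, "paths ->", round(est, 3))

Run it and you can watch the estimate settle down as the number of 
paths grows; that's exactly the grainy-but-converging preview being 
discussed.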



From: Invisible
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:05:35
Message: <47bd852f$1@news.povray.org>
scott wrote:
>> ask for a sphere, I get a sphere. Not some polygon mesh approximating 
>> a sphere, but AN ACTUAL SPHERE.
> 
> In the end you get a bunch of pixels approximating a sphere though, so 
> as long as your polygons are roughly the same size as pixels, you won't 
> get anything worse than a true sphere.

Yeah - and when does GPU rendering ever use polygons even approaching 
that kind of size? Oh yeah - never.

>> Surfaces and 
>> textures can be magnified arbitrarily and never look pixellated.
> 
> That's just because they're procedurally generated rather than bitmap 
> textures.  You can do the same on a GPU if you want (though usually a 
> bitmap texture is faster, even a really big one; that's probably true 
> in POV too for moderately complex textures).

Procedural textures have advantages and disadvantages. Personally, I 
prefer them. But maybe that's just me. Certainly I prefer procedural 
geometry to polygon meshes...

>> Reflections JUST WORK. Refraction JUST WORKS. Etc.
> 
> What you mean is, the very simplistic direct reflection and refraction 
> in POV "just works".  Try matching anything seen in reality (caustics, 
> blurred reflections, area lights, diffuse reflection, focal blur, 
> subsurface scattering) and you enter the world of parameter tweaking.

Well it works a damn sight better than in GPU rendering solutions - and 
that's what I was comparing it to.

(Besides, for caustics, you turn on photon mapping and adjust *one* 
parameter: photon spacing. Set it too high and the caustics are a bit 
blurry. Set it too low and it takes months. Experiment. Area lights are 
similarly straightforward. Radiosity takes a lot more tuning, but 
photons and area lights are both quite easy.)

>> (OTOH, the fast preview you can get sounds like a useful feature. Ever 
>> wait 6 hours for a render only to find out that actually it looks 
>> lame? It's not funny...)
> 
> I usually do lots of quicker renders first, one without radiosity/focal 
> blur/area lights to make sure the geometry is placed correctly.  Then do 
> a low-res render with radiosity and area lights to check that the 
> colours/brightness looks ok overall.  Then maybe another high res one 
> with just focal blur to make sure I have enough blur_samples.  Then 
> finally do the big one with everything turned on.  And pray ;-)

Being able to get a fast but grainy preview certainly sounds useful in 
this respect. I guess it depends on just *how* grainy. (I.e., how long 
it takes for the image to become clear enough to tell if it needs 
tweaking. Presumably that depends on what the image is...)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Invisible
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:15:41
Message: <47bd878d$1@news.povray.org>
scott wrote:

> The lighting model implemented in POV is about the simplest available; 
> it's what 3D cards first used 10 years ago.  Today far more accurate 
> models are used.  You must have heard names like Cook-Torrance, Blinn 
> etc; if you've never looked outside of POV you wouldn't know they 
> existed.

That would explain it... I didn't know they existed. :-D

So, what exactly do these correctly predict that POV-Ray currently doesn't?

(BTW, POV-Ray offers several kinds of scattering media, but I can never 
seem to tell the difference between them. Is that normal?)

> They start to model the microfacets on a surface and produce 
> lighting results based on the geometry and physics of the microfacets 
> (eg occlusion, self-shadowing etc).

So how does that affect the end visual result? Are we talking about a 
big difference or a subtle one?

> Then there are anisotropic materials 
> like brushed metal, where the properties are different depending on 
> which direction the light is coming from.

How about something that can do metallic paint? That would be nice...

> AFAIK POV already uses some clever techniques for speeding up tracing 
> complex scenes (try adding 100000 spheres to your ray tracer and compare 
> the speed with POV...).  But there are plenty more new techniques out 
> there that are certainly worth investigating, some of them quite recent.

LOL! I think POV-Ray probably beats the crap out of my ray tracer with 
just 1 sphere. ;-) But hell yeah, faster == better!

> NURBS are not isosurfaces though.

Oh. Wait, you mean they're parametric surfaces then?

> What the nVidia demo does is to 
> generate the triangle mesh on the fly from the isosurface formula.  So 
> when you zoom in, it can create more detail over a small area, and when 
> you zoom out it doesn't need so much detail, but of course it needs it 
> over a bigger area. It gives the impression that there *are* billions of 
> triangles, but in reality it only draws the ones that you can see, small 
> enough that you can't tell they are triangles.  Clever eh?

Does it add more triangles to the areas of greatest curvature and fewer 
to the flat areas?

Even so, I would think that something like heavily textured rock would 
take an absurd number of triangles to capture every tiny crevice. And 
how do you avoid visible discontinuities as the LoD changes? And...

> Same way as 
> you can drive for an hour around the island on "Test Drive Unlimited", 
> seeing billions of triangles, but of course it doesn't try to draw them 
> (or even have them in RAM) all at once.

I often look at a game like HL and wonder how it's even possible. I 
mean, you walk through the map for, like, 20 minutes before you get to 
the other end. The total polygon count must be spine-tinglingly huge. 
And yet, even on a machine with only a few MB of RAM, it works. How can 
it store so much data at once? (Sure, on a more modern game, much of the 
detail is probably generated on the fly. But even so, maps are *big*...)

>> Mmm, OK. Well my graphics card is only a GeForce 7800 GTX, so I had 
>> assumed it would be too under-powered to play it at much above 0.02 FPS.
> 
> Nah, on low detail it should certainly be playable.

Mmm, OK.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Invisible
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:16:45
Message: <47bd87cd$1@news.povray.org>
scott wrote:

>> Right. So trace rays, let them bounce off diffuse surfaces at 
>> semi-random angles, and gradually total up the results for all rays?
> 
> Yes.

OK. Sounds simple enough...

>> Presumably that won't work with point-lights though? (You'd never hit 
>> any!)
> 
> Unless you start some rays from the point light to add into the mix (in 
> a physically correct way of course).

Well... presumably it still traces rays "backwards" from camera to 
source, right?

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Vincent Le Chevalier
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:33:53
Message: <47bd8bd1$1@news.povray.org>
Invisible wrote:
> scott wrote:
>>
>> Unless you start some rays from the point light to add into the mix 
>> (in a physically correct way of course).
> 
> Well... presumably it still traces rays "backwards" from camera to 
> source, right?
> 

Not in every method. For example bi-directional path tracing, if I 
recall correctly, creates random (but physically correct, of course) 
paths linking the light to the camera. So it works pretty well with 
point lights.

-- 
Vincent



From: scott
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:40:24
Message: <47bd8d58@news.povray.org>
> So how does that affect the end visual result? Are we talking about a big 
> difference or a subtle one?

A subtle one that makes materials look more realistic.  There's a section in 
most 3D rendering books about it.  The one I remember has a bronze vase, and 
repeatedly compares the different algorithms to a photo.  It's surprising 
how tiny changes in the highlight can make you believe it's really bronze or 
plastic or some unrealistic material.

> How about something that can do metallic paint? That would be nice...

Have you seen the "car paint" demo from the ATI developer website?  That's 
pretty cool.

> LOL! I think POV-Ray probably beats the crap out of my ray tracer with 
> just 1 sphere. ;-) But hell yeah, faster == better!

I meant compare the speeds relatively, like do 10^N spheres on your tracer, 
and then on POV, and compare the curves.  POV doesn't simply test each ray 
with every object during tracing...
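
To sketch why that matters, here's a toy bounding hierarchy in Python 
that just counts how many ray/sphere tests a single ray would need 
(POV's actual bounding-slab code is rather more sophisticated; this is 
only my own illustration of the scaling):

import random

def sphere_box(s):
    (cx, cy, cz), r = s
    return (cx - r, cy - r, cz - r), (cx + r, cy + r, cz + r)

def merge(a, b):
    return (tuple(min(x, y) for x, y in zip(a[0], b[0])),
            tuple(max(x, y) for x, y in zip(a[1], b[1])))

def build(spheres, leaf_size=4):
    # Bounding box of everything below this node.
    box = sphere_box(spheres[0])
    for s in spheres[1:]:
        box = merge(box, sphere_box(s))
    if len(spheres) <= leaf_size:
        return ("leaf", box, spheres)
    # Split at the median along the box's widest axis.
    axis = max(range(3), key=lambda i: box[1][i] - box[0][i])
    spheres = sorted(spheres, key=lambda s: s[0][axis])
    mid = len(spheres) // 2
    return ("node", box, build(spheres[:mid], leaf_size),
            build(spheres[mid:], leaf_size))

def ray_hits_box(o, inv_d, box):
    tmin, tmax = 1e-6, float("inf")     # slab test
    for i in range(3):
        t1 = (box[0][i] - o[i]) * inv_d[i]
        t2 = (box[1][i] - o[i]) * inv_d[i]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def count_tests(node, o, inv_d):
    # How many full ray/sphere tests this ray would actually perform.
    if not ray_hits_box(o, inv_d, node[1]):
        return 0
    if node[0] == "leaf":
        return len(node[2])
    return (count_tests(node[2], o, inv_d) +
            count_tests(node[3], o, inv_d))

random.seed(1)
spheres = [((random.uniform(-100, 100), random.uniform(-100, 100),
             random.uniform(-100, 100)), 0.5) for _ in range(100000)]
tree = build(spheres)          # takes a few seconds in pure Python
d = (1e-4, 1e-4, 1.0)          # direction kept off-axis so 1/d exists
o, inv_d = (0.0, 0.0, -200.0), tuple(1.0 / c for c in d)
print("brute force:", len(spheres), "ray/sphere tests")
print("with hierarchy:", count_tests(tree, o, inv_d), "ray/sphere tests")

The brute-force count is always 100000; the tree only pays for the 
handful of leaves the ray actually flies through.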

>> NURBS are not isosurfaces though.
>
> Oh. Wait, you mean they're parametric surfaces then?

NURBS are just the 2D equivalent of splines: a parametric way to 
mathematically define a surface.  An isosurface instead defines a scalar 
field in 3D, and then a surface is constructed where the field equals zero.
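
In Python terms the difference looks like this (a toy sphere done both 
ways; nothing to do with any real NURBS library):

import math

# Parametric (NURBS-style): a map from (u, v) parameters to 3D points.
def sphere_parametric(u, v, r=1.0):
    return (r * math.sin(v) * math.cos(u),
            r * math.sin(v) * math.sin(u),
            r * math.cos(v))

# Implicit (isosurface-style): a scalar field; the surface is wherever
# the field evaluates to zero.
def sphere_implicit(x, y, z, r=1.0):
    return x * x + y * y + z * z - r * r

# The parametric form *generates* surface points directly...
print(sphere_parametric(0.0, math.pi / 2))   # (1.0, 0.0, ~0.0)
# ...the implicit form only *tests* points, so a mesher or ray tracer
# has to search for the zero crossing.
print(sphere_implicit(1.0, 0.0, 0.0))        # 0.0, i.e. on the surface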

> Does it add more triangles to the areas of greatest curvature and fewer to 
> the flat areas?

No, it just uses 32x32x32 marching cubes for each "block".  The block size 
depends on the distance from the camera.
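
Presumably the mapping is something like this (my guess at the kind of 
distance-to-size scheme, not the demo's actual code):

import math

def block_size(block_centre, camera, base=1.0):
    # Guess: double the world-space size of a 32^3 block every time its
    # distance from the camera doubles, so the triangles it produces
    # stay roughly the same size on screen.
    d = math.dist(camera, block_centre)
    level = max(0, int(math.log2(max(d, base))))
    return base * 2 ** level

for d in (1, 10, 100, 1000):
    print(d, "units away ->", block_size((d, 0.0, 0.0), (0.0, 0.0, 0.0)),
          "units per block")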

> Even so, I would think that something like heavily textured rock would 
> take an absurd number of triangles to capture every tiny crevice.

But it only needs to cover the ones you can see close up, and modern 
graphics cards can render 10 billion vertices per second, so it should be 
doable.

> And how do you avoid visible discontinuities as the LoD changes?

Alpha blend the old and new blocks (very old technique for LOD), apply some 
bias to the isosurface function based on block size (this stops "fighting" 
between the two surfaces during the transition).  The transitions are so 
small in screen space that you don't notice them.

> I often look at a game like HL and wonder how it's even possible. I mean, 
> you walk through the map for, like, 20 minutes before you get to the other 
> end.

Try driving at 150mph for an hour before you get to the other end :-)

> The total polygon count must be spine-tinglingly huge. And yet, even on a 
> machine with only a few MB of RAM, it works. How can it store so much data 
> at once? (Sure, on a more modern game, much of the detail is probably 
> generated on the fly. But even so, maps are *big*...)

Mesh instancing.  Like in POV, you can draw the same mesh many times with 
very little extra memory; ditto for the GPU.  In fact you can even make 
subtle changes to each mesh as you draw it (eg colour, vertex displacement, 
animation cycles etc).  You can very quickly draw a field of trees and grass 
moving in the wind, with an army of 1000 men running over it, from just a 
handful of meshes.  Nowhere in RAM or GPU RAM is the total triangle array 
held at any time.
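
A Python sketch of the bookkeeping (purely illustrative, with made-up 
names and numbers; real engines keep this on the GPU):

import random

# One mesh, shared by every instance: just vertices and triangles.
TREE_MESH = {
    "vertices": [(0.0, 0.0, 0.0), (0.0, 5.0, 0.0), (1.0, 3.0, 0.0)],
    "triangles": [(0, 1, 2)],
}

# Per-instance data is tiny: a position, a scale, a tint.
instances = [
    {"pos": (random.uniform(-500, 500), 0.0, random.uniform(-500, 500)),
     "scale": random.uniform(0.8, 1.2),
     "tint": (0.2, random.uniform(0.5, 0.9), 0.2)}
    for _ in range(10000)
]

def instance_vertices(mesh, inst):
    # World-space vertices of one instance, generated on demand; the
    # full 10000-tree vertex array never exists anywhere in memory.
    px, py, pz = inst["pos"]
    s = inst["scale"]
    for x, y, z in mesh["vertices"]:
        yield (x * s + px, y * s + py, z * s + pz)

print(len(instances), "instances drawn from",
      len(TREE_MESH["vertices"]), "shared vertices")

Each instance record is a few dozen bytes, so 10000 of them cost almost 
nothing next to the shared mesh.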

Since DX10 you can now use the GPU to actually create geometry on the fly, 
so you are no longer limited to only modifying existing meshes.  For 
instance the CPU could provide a simplified mesh of a person, and a list of 
10000 points where the person should be drawn.  The GPU can then generate 
more detailed geometry when needed (if the mesh is near the camera), animate 
the mesh based on some walk cycle, change the colour of the clothes or 
whatever, and then render it.  It gives the impression of billions of 
polygons while taking up a tiny amount of RAM.

Also, big games tend to load data in from the disc in the background when 
you get near a different part of the level.  They also need to shuffle 
things about in GPU memory as they go along.



From: Mike Raiford
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:50:45
Message: <47bd8fc5$1@news.povray.org>
Mike Raiford wrote:

> Wouldn't you still need to tweak light sources to get the scene to look 
> "right"?

BTW, I'm going to let my true fanboyism show... Give me an unbiased 
renderer connected to POV SDL, and I'll surely be happy. I've been browsing 
the galleries for that one, and I'm impressed, but I like my CSG!



From: Invisible
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:59:29
Message: <47bd91d1$1@news.povray.org>
scott wrote:
>> So how does that affect the end visual result? Are we talking about a 
>> big difference or a subtle one?
> 
> A subtle one that makes materials look more realistic.  There's a 
> section in most 3D rendering books about it.

You must be reading different books to me... The last one I read spent 
several chapters discussing the various methods of performing hidden 
line removal, skipping over Z-buffering because it's "prohibitively 
expensive" except in "high-end scenarios". (AFAIK, it's the standard 
technique that all GPU rendering systems use today...)

> The one I remember has a 
> bronze vase, and repeatedly compares the different algorithms to a 
> photo.  It's surprising how tiny changes in the highlight can make you 
> believe it's really bronze or plastic or some unrealistic material.

Right. So we're talking about something so subtle that I'm unlikely to 
notice any difference...

>> LOL! I think POV-Ray probably beats the crap out of my ray tracer with 
>> just 1 sphere. ;-) But hell yeah, faster == better!
> 
> I meant compare the speeds relatively, like do 10^N spheres on your 
> tracer, and then on POV, and compare the curves.  POV doesn't simply 
> test each ray with every object during tracing...

Yeah, it uses bounding volumes and so forth. If you have enough 
geometry, I would think that starts to make a pretty big difference...

>> Even so, I would think that something like heavily textured rock would 
>> take an absurd number of triangles to capture every tiny crevice.
> 
> But it only needs to cover the ones you can see close up, and modern 
> graphics cards can render 10 billion vertices per second, so it should 
> be doable.

Damn, how do you even *store* 10 billion vertices?! o_O

>> And how do you avoid visible discontinuities as the LoD changes?
> 
> Alpha blend the old and new blocks (very old technique for LOD), apply 
> some bias to the isosurface function based on block size (this stops 
> "fighting" between the two surfaces during the transition).  The 
> transitions are so small in screen space that you don't notice them.

Mmm, OK.

This is one of the major annoyances with playing various Source-based 
games. When you move past a certain place, you see objects abruptly 
change LoD. It's really quite distracting. The human eye is very 
sensitive to movements like that...

>> I often look at a game like HL and wonder how it's even possible. I 
>> mean, you walk through the map for, like, 20 minutes before you get to 
>> the other end.
> 
> Try driving at 150mph for an hour before you get to the other end :-)

Presumably the map is in much lower detail in that case. ;-)

(Question: Is there a limit to how small a texture can be? Because in 
every game I've ever played, all the props seem to be textured at the 
same resolution as the game world. The result is signs on walls that are 
barely readable due to the low resolution...)

>> The total polygon count must be spine-tinglingly huge. And yet, even 
>> on a machine with only a few MB of RAM, it works. How can it store so 
>> much data at once? (Sure, on a more modern game, much of the detail is 
>> probably generated on the fly. But even so, maps are *big*...)
> 
> Mesh instancing.

Oh, OK.

How does that work with every room in the map being a completely 
different shape though? (Although I guess most walls are flat, so...)

Also, how does it figure out which polygons to draw? It can't possibly 
draw all 10 million polygons every frame - and yet, figuring out which 
ones are visible would seem to take more effort than actually drawing 
them all...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


