New LuxRender web site (http://www.luxrender.net) (Messages 101 to 110 of 175)
From: Gilles Tran
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 07:57:48
Message: <47bd754c@news.povray.org>

47bd6ea9$1@news.povray.org...

> I can certainly see the advantage of a "I just throw objects in and it 
> works" approach to lighting. But then, that's more or less how POV-Ray's 
> radiosity feature works. You usually don't have to twiddle the settings 
> all *that* much - it's more a question of how many years you're willing to 
> wait for the result.

Just wondering... Could you show us some of your own experiments with 
radiosity in POV-Ray or is your position just theoretical? Because after 
using (and being in love with) POV-Ray's radiosity since 1996 and hundreds 
of tests and pictures later, that's not really what I've experienced. Even 
when using insane settings, there are situations where you just can't get 
rid of artifacts and other problems and where workarounds (or Photoshop...) 
are necessary to hide them. Unbiased renderers avoid such artifacts 
naturally, and traditional high-end renderers have lots of built-in 
optimisation tricks that POV-Ray just doesn't have.

> And that's the kind of worrying part - how many years will you have to 
> wait for the result from an unbiased renderer?

As I said it's now used *** for actual production *** of stills (mostly 
architectural, design and even TV commercials) so apparently that's not such 
a problem, at least for commercial production with access to networks of 
fast machines and render farms. But even on a "normal" hobbyist machine, my 
tests with Maxwell were rather positive, i.e. it was slow, but so was 
radiosity in 1996. My "glasses" picture that's in the "Digital art" article 
in Wikipedia took 500 hours to render. For POVers, speed isn't always an 
issue.

G.

-- 
*****************************
http://www.oyonale.com
*****************************
- Graphic experiments
- POV-Ray, Cinema 4D and Poser computer images
- Posters


From: Invisible
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 08:07:46
Message: <47bd77a2$1@news.povray.org>
>> I can certainly see the advantage of a "I just throw objects in and it 
>> works" approach to lighting. But then, that's more or less how POV-Ray's 
>> radiosity feature works. You usually don't have to twiddle the settings 
>> all *that* much - it's more a question of how many years you're willing to 
>> wait for the result.
> 
> Just wondering... Could you show us some of your own experiments with 
> radiosity in POV-Ray or is your position just theoretical? Because after 
> using (and being in love with) POV-Ray's radiosity since 1996 and hundreds 
> of tests and pictures later, that's not really what I've experienced.

Well, maybe my scenes aren't complicated enough then. Usually if I just 
insert an empty radiosity{} block, I get a reasonable image. Sometimes I 
have to tweak a few parameters and then it looks good. Occasionally it 
just becomes so absurdly slow that I give up.
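
For the record, the kind of block I end up with after tweaking looks 
something like this (values illustrative, not tuned for any particular 
scene):

global_settings {
  radiosity {
    // an empty radiosity{} block just takes all the defaults;
    // these are the usual knobs to turn when that isn't enough
    pretrace_start 0.08   // first pretrace pass at 8% of image size
    pretrace_end   0.01   // final pretrace pass at 1%
    count 150             // sample rays per point; more = smoother, slower
    error_bound 0.5       // lower = finer detail, slower
    recursion_limit 2     // number of diffuse bounces
  }
}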

But then, as you know, most of my renders are pretty trivial. For 
example, the image attached to the first post in this thread. I have no 
idea how the hell it's possible to model something that complicated. 
Surely something like that must take many months of modelling?

>> And that's the kind of worrying part - how many years will you have to 
>> wait for the result from an unbiased renderer?
> 
> As I said it's now used *** for actual production *** of stills (mostly 
> architectural, design and even TV commercials) so apparently that's not such 
> a problem, at least for commercial production with access to networks of 
> fast machines and render farms.

Heh. Any algorithm can be made fast enough if you throw enough CPU power 
at it. ;-) [Er, well, any linear-time algorithm anyway...]

> But even on a "normal" hobbyist machine, my 
> tests with Maxwell were rather positive, i.e. it was slow, but so was 
> radiosity in 1996.

OK, well that's encouraging then...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Invisible
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 08:09:17
Message: <47bd77fd@news.povray.org>
Severi Salminen wrote:

> Yeah. Just search google:
> 
> path tracing
> monte carlo path tracing
> bidirectional path tracing
> etc.

None of those are search terms I thought to try...

> Basically it is this simple:
> 
> 1. You shoot a ray into the scene and let it bounce randomly based on
> material characteristics.
> 2. Repeat many times.

Right. So trace rays, let them bounce off diffuse surfaces at 
semi-random angles, and gradually total up the results for all rays?
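
(If I've understood right, that's just Monte Carlo integration of the 
rendering equation - the standard formulation, nothing renderer-specific:

L_o(x,\omega_o) = L_e(x,\omega_o)
    + \int_\Omega f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,\cos\theta_i \,d\omega_i
  \approx L_e(x,\omega_o)
    + \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x,\omega_k,\omega_o)\, L_i(x,\omega_k)\,\cos\theta_k}{p(\omega_k)}

where p(\omega_k) is the probability density the random bounce directions 
are drawn from.)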

Presumably that won't work with point-lights though? (You'd never hit any!)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: nemesis
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 08:16:01
Message: <47bd7991$1@news.povray.org>
Gilles Tran wrote:
> My "glasses" picture that's in the "Digital art" article 
> in Wikipedia took 500 hours to render.

mamma mia!  and it still features a strange little dark spot right in the 
middle of the ground...

but it's a wonderful photorealistic povray render...

perhaps fidos could try that one next to see how it comes along. Andrew, 
go to the "Montecarlo path tracing with MegaPov 1.2.1" thread in p.b.i 
to find out more...


From: nemesis
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 08:29:51
Message: <47bd7ccf$1@news.povray.org>
Invisible wrote:
> Well, maybe my scenes aren't complicated enough then. Usually if I just 
> insert an empty radiosity{} block, I get a reasonable image. Sometimes I 
> have to tweak a few parameters and then it looks good. Occasionally it 
> just becomes so absurdly slow that I give up.

default radiosity settings look hardly any better than just simple flat 
ambient lighting.  If radiosity isn't pushing your render time past an 
hour, then it's really not doing much.

> But then, as you know, most of my renders are pretty trivial. For 
> example, the image attached to the first post in this thread. I have no 
> idea how the hell it's possible to model something that complicated. 
> Surely something like that must take many months of modelling?

no, most probably a couple of hours in a visual modelling package like 
Blender or Wings 3D.  Many people actually just reuse premade models and 
work on composition, texturing and lighting.  The real hard work is 
in the rendering, but this is now mostly automated.  You just say "use a 
bright clear sky at 16:00" or "the lamps should be 160W fluorescent" and 
let it go.

>>> And that's the kind of worrying part - how many years will you have 
>>> to wait for the result from an unbiased renderer?

on quad-cores, fidos has been posting slightly noisy results in the 
11-hour range.  In other, more mature unbiased renderers, 10-20 hours on 
powerful hardware already give pretty good results for scenes much more 
complex than just RSOCP...


From: scott
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 08:45:53
Message: <47bd8091$1@news.povray.org>
> ask for a sphere, I get a sphere. Not some polygon mesh approximating a 
> sphere, but AN ACTUAL SPHERE.

In the end you get a bunch of pixels approximating a sphere though, so as 
long as your polygons are roughly the same size as pixels, the result 
won't look any worse than a true sphere would.

> You can construct shapes of arbitrary complexity. Surfaces and textures 
> can be magnified arbitrarily and never look pixellated.

That's just because they're procedurally generated rather than bitmap 
textures.  You can do the same on a GPU if you want (though usually a 
texture is faster, even a really big one; that's probably true in POV too 
for moderately complex textures).

> Reflections JUST WORK. Refraction JUST WORKS. Etc.

What you mean is, the very simplistic direct reflection and refraction in 
POV "just works".  Try matching anything seen in reality (caustics, blurred 
reflections, area lights, diffuse reflection, focal blur, subsurface 
scattering) and you enter the world of parameter tweaking.

> (OTOH, the fast preview you can get sounds like a useful feature. Ever 
> wait 6 hours for a render only to find out that actually it looks lame? 
> It's not funny...)

I usually do lots of quicker renders first, one without radiosity/focal 
blur/area lights to make sure the geometry is placed correctly.  Then do a 
low-res render with radiosity and area lights to check that the 
colours/brightness looks ok overall.  Then maybe another high res one with 
just focal blur to make sure I have enough blur_samples.  Then finally do 
the big one with everything turned on.  And pray ;-)
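
A #declare switch keeps that whole staged workflow in one scene file.  A 
sketch (flag name made up, settings illustrative):

// hypothetical quality switch for staged test renders
#declare FINAL = 0;   // 0 = fast geometry check, 1 = everything on

global_settings {
  #if (FINAL)
    radiosity { pretrace_end 0.01 count 200 }
  #end
}

camera {
  location <0, 2, -5>
  look_at  <0, 1, 0>
  #if (FINAL)
    aperture 0.3            // focal blur only in the final render
    focal_point <0, 1, 0>
    blur_samples 100
  #end
}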


From: scott
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:00:13
Message: <47bd83ed$1@news.povray.org>
>> Not just the rendering method, but things like different reflection and 
>> lighting models, newer methods of increasing the efficiency of ray 
>> tracing (I posted a link in the pov4 group), etc.
>
> OK. I wasn't aware that any existed, but hey.

The lighting model implemented in POV is about the simplest available, the 
kind first used on 3D cards 10 years ago.  Today there are far more 
accurate models in use; you must have heard names like Cook-Torrance, 
Blinn etc., though if you've never looked outside of POV you wouldn't know 
they existed.  They model the microfacets on a surface and produce 
lighting results based on the geometry and physics of those microfacets 
(eg occlusion, self-shadowing etc).  Then there are anisotropic materials 
like brushed metal, where the properties differ depending on which 
direction the light is coming from.
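
For reference, the Cook-Torrance specular term has this general shape (the 
standard textbook formulation, nothing POV-specific):

f_r(\omega_i,\omega_o) = \frac{D(h)\,F(\omega_i,h)\,G(\omega_i,\omega_o)}{4\,(n\cdot\omega_i)\,(n\cdot\omega_o)}

where D is the microfacet normal distribution, F the Fresnel term, G the 
geometric shadowing/masking term, and h the half-vector between the 
incoming and outgoing directions.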

AFAIK POV already uses some clever techniques for speeding up tracing 
complex scenes (try adding 100000 spheres to your ray tracer and compare the 
speed with POV...).  But there are plenty more new techniques out there that 
are certainly worth investigating, some of them quite recent.

> My point is that usually, no matter how closely you look at a POV-Ray 
> isosurface, it will always be beautifully smooth. Every NURBS demo I've 
> ever seen for a GPU has been horribly tessellated with sharp edges 
> everywhere.

NURBS are not isosurfaces though.  What the nVidia demo does is to generate 
the triangle mesh on the fly from the isosurface formula.  So when you zoom 
in, it can create more detail over a small area, and when you zoom out it 
doesn't need so much detail, but of course it needs it over a bigger area. 
It gives the impression that there *are* billions of triangles, but in 
reality it only draws the ones that you can see, small enough that you can't 
tell they are triangles.  Clever eh?  Same way as you can drive for an hour 
around the island on "Test Drive Unlimited", seeing billions of triangles, 
but of course it doesn't try to draw them (or even have them in RAM) all at 
once.

> POV-Ray, of course, gets round this problem by using more sophisticated 
> mathematical techniques than simply projecting flat polygons onto a 2D 
> framebuffer. I've yet to see any GPU attempt this.

GPUs just do it in a different way.  The end result is the same, pixels on 
the screen that match what you would expect from the isosurface formula.

> Mmm, OK. Well my graphics card is only a GeForce 7800 GTX, so I had 
> assumed it would be too under-powered to play it at much above 0.02 FPS.

Nah, on low detail it should certainly be playable.


From: scott
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:05:02
Message: <47bd850e$1@news.povray.org>
>> Basically it is this simple:
>>
>> 1. You shoot a ray into the scene and let it bounce randomly based on
>> material characteristics.
>> 2. Repeat many times.
>
> Right. So trace rays, let them bounce off diffuse surfaces at semi-random 
> angles, and gradually total up the results for all rays?

Yes.

> Presumably that won't work with point-lights though? (You'd never hit 
> any!)

Unless you start some rays from the point light to add into the mix (in a 
physically correct way of course).
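
One standard way to fold point lights in ("next event estimation") is to 
sample the light directly at each bounce as well.  A point light at x_L 
with power \Phi contributes

L_direct = f_r(x,\omega_i,\omega_o) \,\frac{\Phi}{4\pi\,|x_L - x|^2}\,\cos\theta_i \, V(x,x_L)

where V is the binary shadow-ray visibility term and \omega_i points from 
x toward x_L.  (That's only a sketch; starting whole paths from the lights 
is the bidirectional version.)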


From: Invisible
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:05:35
Message: <47bd852f$1@news.povray.org>
scott wrote:
>> ask for a sphere, I get a sphere. Not some polygon mesh approximating 
>> a sphere, but AN ACTUAL SPHERE.
> 
> In the end you get a bunch of pixels approximating a sphere though, so 
> as long as your polygons are roughly the same size as pixels, you won't 
> get anything worse than a true sphere.

Yeah - and when does GPU rendering ever use polygons even approaching 
that kind of size? Oh yeah - never.
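
(That's the whole point of intersecting the analytic shape: for a ray 
o + t\,d and a sphere of centre c, radius r, the intersection is just a 
quadratic in t,

(d\cdot d)\,t^2 + 2\,d\cdot(o-c)\,t + |o-c|^2 - r^2 = 0

and the smallest positive root gives the hit point, exact to 
floating-point precision.  No tessellation anywhere.)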

>> Surfaces and 
>> textures can be magnified arbitrarily and never look pixellated.
> 
> That's just because they're procedurally generated rather than bitmap 
> textures.  You can do the same on a GPU if you want (though usually a 
> texture is faster, even a really big one; that's probably true in POV 
> too for moderately complex textures).

Procedural textures have advantages and disadvantages. Personally, I 
prefer them. But maybe that's just me. Certainly I prefer procedural 
geometry to polygon meshes...

>> Reflections JUST WORK. Refraction JUST WORKS. Etc.
> 
> What you mean is, the very simplistic direct reflection and refraction 
> in POV "just works".  Try matching anything seen in reality (caustics, 
> blurred reflections, area lights, diffuse reflection, focal blur, 
> subsurface scattering) and you enter the world of parameter tweaking.

Well it works a damn sight better than in GPU rendering solutions - and 
that's what I was comparing it to.

(Besides, for caustics, you turn on photon mapping and adjust *one* 
parameter: photon spacing. Set it too high and the caustics are a bit 
blurry. Set it too low and it takes months. Experiment. Area lights are 
similarly straightforward. Radiosity takes a lot more tuning, but 
photons and area lights are both quite easy.)
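
Concretely, something like this (spacing value illustrative, and the 
object name is made up):

global_settings {
  photons {
    spacing 0.02        // distance between photons on surfaces;
  }                     // smaller = sharper caustics, longer renders
}

object {
  GlassThing            // hypothetical pre-declared glass object
  photons {
    target              // this object receives photons
    refraction on
    reflection on
  }
}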

>> (OTOH, the fast preview you can get sounds like a useful feature. Ever 
>> wait 6 hours for a render only to find out that actually it looks 
>> lame? It's not funny...)
> 
> I usually do lots of quicker renders first, one without radiosity/focal 
> blur/area lights to make sure the geometry is placed correctly.  Then do 
> a low-res render with radiosity and area lights to check that the 
> colours/brightness looks ok overall.  Then maybe another high res one 
> with just focal blur to make sure I have enough blur_samples.  Then 
> finally do the big one with everything turned on.  And pray ;-)

Being able to get a fast but grainy preview certainly sounds useful in 
this respect. I guess it depends on just *how* grainy. (I.e., how long 
it takes for the image to become clear enough to tell if it needs 
tweaking. Presumably that depends on what the image is...)
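
(There is at least a rule of thumb: Monte Carlo noise falls off as 
\sigma \propto 1/\sqrt{N} in the number of samples N, so halving the 
graininess costs roughly four times the samples, and hence render time.  
How long the first usable preview takes still depends on the scene, of 
course.)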

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


From: Invisible
Subject: Re: New LuxRender web site (http://www.luxrender.net)
Date: 21 Feb 2008 09:15:41
Message: <47bd878d$1@news.povray.org>
scott wrote:

> The lighting model implemented in POV is about the simplest available, 
> the kind first used on 3D cards 10 years ago.  Today there are far more 
> accurate models in use; you must have heard names like Cook-Torrance, 
> Blinn etc., though if you've never looked outside of POV you wouldn't 
> know they existed.

That would explain it... I didn't know they existed. :-D

So, what exactly do these correctly predict that POV-Ray currently doesn't?

(BTW, POV-Ray offers several kinds of scattering media, but I can never 
seem to tell the difference between them. Is that normal?)

> They model the microfacets on a surface and produce 
> lighting results based on the geometry and physics of those microfacets 
> (eg occlusion, self-shadowing etc).

So how does that affect the end visual result? Are we talking about a 
big difference or a subtle one?

> Then there are anisotropic materials 
> like brushed metal, where the properties differ depending on 
> which direction the light is coming from.

How about something that can do metallic paint? That would be nice...

> AFAIK POV already uses some clever techniques for speeding up tracing 
> complex scenes (try adding 100000 spheres to your ray tracer and compare 
> the speed with POV...).  But there are plenty more new techniques out 
> there that are certainly worth investigating, some of them quite recent.

LOL! I think POV-Ray probably beats the crap out of my ray tracer with 
just 1 sphere. ;-) But hell yeah, faster == better!

> NURBS are not isosurfaces though.

Oh. Wait, you mean they're parametric surfaces then?

> What the nVidia demo does is to 
> generate the triangle mesh on the fly from the isosurface formula.  So 
> when you zoom in, it can create more detail over a small area, and when 
> you zoom out it doesn't need so much detail, but of course it needs it 
> over a bigger area. It gives the impression that there *are* billions of 
> triangles, but in reality it only draws the ones that you can see, small 
> enough that you can't tell they are triangles.  Clever eh?

Does it add more triangles to the areas of greatest curvature and fewer 
to the flat areas?

Even so, I would think that something like heavily textured rock would 
take an absurd number of triangles to capture every tiny crevice. And 
how do you avoid visible discontinuities as the LoD changes? And...

> Same way as 
> you can drive for an hour around the island on "Test Drive Unlimited", 
> seeing billions of triangles, but of course it doesn't try to draw them 
> (or even have them in RAM) all at once.

I often look at a game like HL and wonder how it's even possible. I 
mean, you walk through the map for, like, 20 minutes before you get to 
the other end. The total polygon count must be spine-tinglingly huge. 
And yet, even on a machine with only a few MB of RAM, it works. How can 
it store so much data at once? (Sure, on a more modern game, much of the 
detail is probably generated on the fly. But even so, maps are *big*...)

>> Mmm, OK. Well my graphics card is only a GeForce 7800 GTX, so I had 
>> assumed it would be too under-powered to play it at much above 0.02 FPS.
> 
> Nah, on low detail it should certainly be playable.

Mmm, OK.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


