"clipka" <nomail@nomail> wrote:
> I've already pondered the idea. And found no reason why it shouldn't work.
>
> Well, except for one thing: That "render until I like it" feature with interim
> result output would currently be an issue. Nothing that couldn't be solved
> though. For starters, high-quality anti-aliasing settings might do instead of
> rendering the shot over and over again.
As I understand it, rendering the shot over and over again is the only way it
*could* work. In each pass, every pixel follows a diffuse bounce in a 'random'
direction (which could have a small bias, e.g. towards portals), and the more of
those random passes you accumulate, the more 'coverage' of the total solution
you get. Kind of like having an infinite radiosity count, spread out over time.
That's why it starts out so noisy...
> Adding blur, like in MCPov, would be an extra gimmick that in my opinion
> might be of interest for standard POV-Ray scenes anyway.
That is just a natural side effect of the multiple passes - each pass takes a
randomly different reflection or refraction direction, and when you average
them all together you get a blurred result. This is also why you get perfect
anti-aliasing.
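Something this simple is all it takes to get the blur, I think. Only a sketch,
the cone jitter is a crude approximation and the names are invented:

// Hypothetical sketch: averaging passes gives blur "for free". Each pass
// perturbs the reflection direction at random; the mean of many perturbed
// samples is a blurred reflection (and jittering the sub-pixel position the
// same way gives the perfect anti-aliasing).
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Jitter a mirror direction inside a small cone; a larger 'roughness' means
// a wider cone and therefore more blur after averaging.
Vec3 jitterDirection(Vec3 mirror, double roughness, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    return normalize({ mirror.x + roughness * u(rng),
                       mirror.y + roughness * u(rng),
                       mirror.z + roughness * u(rng) });
}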
> The "diffuse job" could be fully covered by radiosity code, by just bypassing
> the sample cache, using a low sample ray count and high recursion depth, and
> some fairly small changes to a few internal parameters. That, plus another
Yes, I think so. You probably don't even need that high a recursion depth, maybe
5 or so. And it's not so much recursion as it is iteration, right? You don't
shoot [count] rays again for each bounce of a single path?
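In code I'd picture the per-path loop roughly like this. A sketch with invented
types, not real POV-Ray internals:

// Hypothetical sketch: a path is followed iteratively, one ray per bounce,
// rather than branching into many rays at every hit. A depth of ~5 is often
// enough because the throughput shrinks with every bounce.
#include <random>

struct Ray { /* origin, direction */ };
struct Color { double r = 0, g = 0, b = 0; };
struct Hit { Color emitted, reflectance; Ray nextRay; bool missed; };

Hit scatter(const Ray& ray, std::mt19937& rng); // one random bounce (placeholder)

Color tracePath(Ray ray, int maxDepth, std::mt19937& rng)
{
    Color result{}, throughput{1, 1, 1};
    for (int depth = 0; depth < maxDepth; ++depth) {   // iteration, not recursion
        Hit h = scatter(ray, rng);
        result.r += throughput.r * h.emitted.r;
        result.g += throughput.g * h.emitted.g;
        result.b += throughput.b * h.emitted.b;
        if (h.missed) break;
        throughput.r *= h.reflectance.r;
        throughput.g *= h.reflectance.g;
        throughput.b *= h.reflectance.b;
        ray = h.nextRay;                               // follow a single path
    }
    return result;
}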
> thing that has already made its way onto the radiosity agenda: The radiosity
> sample ray pattern would have to be able to use certain hints in the scene
> about bright areas (MCPov's "portals"); and for monte-carlo tracing it would
> have to be truly random.
Could it just be any object with finish ambient > 0? There are also
bi-directional path tracing and Metropolis Light Transport systems, but I think
they are significantly more complicated to implement...
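The portal hint could maybe be done as a simple mixture: sometimes aim the
sample ray at a known bright object, sometimes pick a truly random direction,
and divide by the combined density so the estimate stays unbiased. Rough
sketch, every name here is invented (not POV-Ray or MCPov API):

// Hypothetical sketch of the "portal" idea via mixture sampling.
#include <random>

struct Vec3 { double x, y, z; };

Vec3 uniformHemisphereDir(std::mt19937& rng);          // truly random direction
Vec3 directionTowardBrightObject(std::mt19937& rng);   // biased "portal" direction
double pdfUniform(const Vec3& d);                      // density of the uniform strategy
double pdfPortal(const Vec3& d);                       // density of the portal strategy

// Pick a strategy at random, but weight by the combined density.
Vec3 sampleDirection(double portalProb, std::mt19937& rng, double& pdf)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    Vec3 d = (u(rng) < portalProb) ? directionTowardBrightObject(rng)
                                   : uniformHemisphereDir(rng);
    pdf = portalProb * pdfPortal(d) + (1.0 - portalProb) * pdfUniform(d);
    return d;
}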
> So there would be some work to do, but basically nothing that wouldn't fit into
> the 3.7 framework.
I wish I understood enough of it to be able to help in implementing! Perhaps one
day I will find the time to look at the POV source. Or maybe fidos can help? :)
> Scattering media would remain a problem, unless one would go for full volumetric
> monte-carlo scattering simulation. Which, for highly scattering media, might be
> veeeeeery slow.
Yes, but it would be *correct*, and the results would be unbeatable. It could be
turned on/off, though, for people who don't want to wait weeks for a decent
result ;)
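The core of it would just be exponential free-flight sampling, something like
this (a sketch with made-up names, assuming a homogeneous medium):

// Hypothetical sketch of full volumetric Monte Carlo scattering: sample a
// free-flight distance from the exponential distribution of the medium's
// extinction coefficient; if it falls short of the next surface, scatter
// inside the medium and continue. In dense media most paths scatter many
// times, which is why this gets veeeeeery slow.
#include <cmath>
#include <random>

// Distance to the next scattering event, via the inverse CDF of the
// exponential distribution: t = -ln(1 - u) / sigma_t
double sampleFreeFlight(double extinction /* sigma_t */, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return -std::log(1.0 - u(rng)) / extinction;
}

bool scattersBeforeSurface(double extinction, double surfaceDist, std::mt19937& rng)
{
    return sampleFreeFlight(extinction, rng) < surfaceDist;
}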
Well, just a pipe dream. For now, there's Indigo Renderer and Blender...