"clipka" <nomail@nomail> wrote:
> "sooperFoX" <bon### [at] gmailcom> wrote:
> > > Well, except for one thing: That "render until I like it" feature with interim
> > > result output would currently be an issue. Nothing that couldn't be solved
> > > though. For starters, high-quality anti-aliasing settings might do instead of
> > > rendering the shot over and over again.
> >
> > As I understand it, the idea of rendering the shot over and over again is the
> > only way that it *could* work. Each pixel of a 'pass' follows a diffuse bounce
> > in a 'random' direction (which could have a small bias, eg portals) and the
> > more times you repeat those random passes the more 'coverage' of the total
> > solution you get. Kind of like having an infinite radiosity count, spread out
> > over time. That's why it starts out so noisy...
>
> No, *basically* the only important ingredient for montecarlo raytracing to work
> is that each *pixel* is sampled over and over again. That this is done in
> multiple passes is "only" a matter of convenience. A single-pass "render each
> pixel 1000 times" rule, which madly high non-adaptive anti-aliasing settings
> would roughly be equivalent to, would qualify just as well.
>
> I do agree of course that doing multiple passes, giving each pixel just one shot
> per pass, and saving an interim result image after each pass, is much more
> convenient than doing it all in one pass, giving each pixel 1000 shots before
> proceeding to the next. No argument here.
This is more or less what my stochastic render rig does currently in POV. If I
render 400 passes (which means full frames via animation), then I'm sampling
each pixel 400 times. I combine the 400 HDR images afterwards, or combine after
just 40 passes to get a feel for how it's going. And if 400 passes still looks a
little noisy, then I fire up another render to get some more frames.
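For anyone curious, the combining step is just a per-pixel average; here's a minimal sketch of the idea in Python (plain lists standing in for the HDR frames, file handling elided; since each pass is an independent sample of the same scene, the average converges on the true image and the noise falls off as 1/sqrt(N)):

```python
import random

def combine_passes(frames):
    """Average N render passes, each a flat list of float pixel values.

    The mean of N independent passes converges to the true pixel
    value; noise falls off as 1/sqrt(N)."""
    n = len(frames)
    npixels = len(frames[0])
    return [sum(f[i] for f in frames) / n for i in range(npixels)]

# 400 noisy "passes" of a constant 0.5 image converge back toward 0.5.
random.seed(1)
passes = [[0.5 + random.gauss(0, 0.1) for _ in range(16)]
          for _ in range(400)]
result = combine_passes(passes)
```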
Stuff I currently can do this way:
- Blurred reflection, blurred refraction (a tiny bit of blurring on everything
adds a lot to realism)
- Anti-aliasing
- Depth of Field (with custom bokeh maps)
- High quality Media effects (by doing randomised low quality media each pass)
- Distributed light sources (i.e. studio bank lighting)
- High density lightdomes (up to 1000 lights)
- Soft shadowing (by using a differently jittered 2x2 area light each pass)
- Fake SSS (via a low quality jittered FastSS() on each pass)
Things I haven't done (or can't do) yet:
- Radiosity - haven't tried it yet (but I could do a radiosity pre-pass and use
save/load)
- Photons (same as radiosity above)
- Dispersion (filter values greater than 1 simply don't work in POV, even when
I patch the obvious underflow condition in the code)
- SSLT! (There's no ability to jitter it for each pass)
Most of the randomisation is driven by a single

#declare stochastic_seed = seed(frame_number);

at the start of the code.
e.g. Blurry reflections are simply micronormals randomised on the
stochastic_seed:

normal {
  bumps
  scale 0.0001
  translate <rand(stochastic_seed), rand(stochastic_seed),
             rand(stochastic_seed)>
}
or averaging in the "real" normal:

normal {
  average
  normal_map {
    [ 1 crackle ]
    [ 1 bumps
        scale 0.0001
        translate <rand(stochastic_seed), rand(stochastic_seed),
                   rand(stochastic_seed)>
    ]
  }
}
I've also started using a lot of Halton sequences based on the frame_number, as
these seem even better than pure randomisation for an evenly distributed series
of samples.
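For reference, a Halton value is just the radical inverse of the sample index in some prime base (the digits of the index mirrored around the radix point). A quick Python sketch, with one (x, y) pair per frame_number using bases 2 and 3:

```python
def halton(index, base):
    """Radical inverse of `index` in `base`: mirror the base-`base`
    digits of the index around the radix point, giving a
    low-discrepancy value in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# One 2D sample per frame; coprime bases 2 and 3 stay uncorrelated.
samples = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```

Unlike pure rand(), successive indices fill the gaps left by earlier ones, which is why the per-pass samples cover the domain so evenly.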
My 35mm camera macros, that got posted here a month or so ago, have the
Anti-aliasing and DoF implementation of this in them.
The thing I'd really like to do is to identify pixels that haven't stabilized
after N passes, and only render those some more. It's a waste to render all the
pixels in a frame when most of them have arrived at their final value +/- some
minute percentage. It's the problem pixels that need more work.
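One way that check could work (a sketch only, not something the rig does today): keep a running per-pixel mean and variance with Welford's online algorithm, and flag pixels whose standard error of the mean is still above a tolerance:

```python
import random

def welford_update(state, x):
    """One step of Welford's online mean/variance update.
    `state` is (count, mean, M2)."""
    n, mean, m2 = state
    n += 1
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)
    return n, mean, m2

def unconverged(state, tol):
    """True if the standard error of the pixel's mean exceeds tol."""
    n, _, m2 = state
    if n < 2:
        return True
    variance = m2 / (n - 1)
    return (variance / n) ** 0.5 > tol

# A quiet pixel vs. a noisy one, 400 samples each: only the noisy
# pixel should be flagged for more passes.
random.seed(2)
quiet = (0, 0.0, 0.0)
noisy = (0, 0.0, 0.0)
for _ in range(400):
    quiet = welford_update(quiet, 0.5 + random.gauss(0, 0.01))
    noisy = welford_update(noisy, 0.5 + random.gauss(0, 0.5))
```

The running state is only three floats per pixel, so it could be kept alongside the accumulated HDR image between passes.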
> > > The "diffuse job" could be fully covered by radiosity code, by just bypassing
> > > the sample cache, using a low sample ray count and high recursion depth, and
> > > some fairly small changes to a few internal parameters. That, plus another
> >
> > Yes, I think so. You probably don't even need that high a recursion depth, maybe
> > 5 or so. And it's not so much recursion as it is iteration, right? You don't
> > shoot [count] rays again for each bounce of a single path?
>
> Well, if using the existing radiosity mechanism (just bypassing the sample
> cache), it would actually be recursion indeed: The radiosity algorithm is
> *designed* to shoot N rays per location.
Are they randomised, or would they be the same each pass?
> Of Metropolis Light Transport I know nothing.
It's just a statistical method applied to bi-directional path tracing to allow
speed-ups while preserving unbiased results.
Cheers,
Edouard.