Testing my lightdome code on a more complex model. I took the lightprobes
myself, and while they look pretty bad close-up, they work nicely for
slightly dull reflections and for generating image-based lighting rigs.
The lightdome code is extremely simple: I sample the environment
completely at random, then choose the N brightest samples. This works
well for lightprobes with large areas of light (at low sampling rates),
or for lightprobes with one or more very bright, roughly equal lights
(like those with the sun in them; I sample those images heavily to hit
the near-point light sources). It does less well on other mixes of
lights, and you do have to tweak the number of samples to get the effect
you want.
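The brightest-N scheme described above can be sketched in a few lines. This is my reading of it, not Edouard's actual code; the `probe` callable standing in for a lightprobe image lookup is an assumption:

```python
import random

def sample_lightdome(probe, num_samples, num_lights):
    """Pick the num_lights brightest of num_samples random probe lookups.

    `probe` is assumed to be a callable taking (u, v) in [0, 1)^2 and
    returning an (r, g, b) tuple -- a stand-in for an actual lightprobe
    image lookup.
    """
    samples = []
    for _ in range(num_samples):
        u, v = random.random(), random.random()
        r, g, b = probe(u, v)
        # Rec. 709 luma as the brightness measure (an assumption)
        brightness = 0.2126 * r + 0.7152 * g + 0.0722 * b
        samples.append((brightness, u, v, (r, g, b)))
    # Keep only the N brightest samples to become lights
    samples.sort(key=lambda s: s[0], reverse=True)
    return samples[:num_lights]
```

This also shows why the sample count matters: a probe dominated by a small sun needs many samples before any of them land on the sun at all.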
On the plus side, I write the lights out to a file, which means I can
avoid the sampling time in subsequent renders (e.g. in my stochastic
rendering rig).
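Writing the chosen samples out as an include file might look something like the sketch below. The dome mapping, file format, and the `parallel` keyword are my assumptions, not Edouard's actual output format:

```python
import math

def write_light_rig(lights, radius, path):
    """Write sampled lights as a POV-Ray include file, so the (slow)
    sampling step can be skipped on later renders.

    Each light is (brightness, u, v, (r, g, b)); u/v are assumed to be
    spherical coordinates on the probe, mapped onto a dome of the given
    radius.
    """
    with open(path, "w") as f:
        for _, u, v, (r, g, b) in lights:
            theta = 2 * math.pi * u   # azimuth around the dome
            phi = math.pi * v         # angle down from the pole
            x = radius * math.sin(phi) * math.cos(theta)
            y = radius * math.cos(phi)
            z = radius * math.sin(phi) * math.sin(theta)
            f.write("light_source {{ <{:.4f}, {:.4f}, {:.4f}> "
                    "rgb <{:.4f}, {:.4f}, {:.4f}> parallel }}\n"
                    .format(x, y, z, r, g, b))
```

The scene file then just `#include`s the rig instead of re-sampling the probe.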
I'd perhaps like to try a median-cut algorithm at some point, since it's
a good way of distilling all the light in the probe into a minimum
number of representative samples, but I'm still not convinced it's the
best way of deriving the most accurate area lights and fill lights from
the probe.
The Buddha image doesn't have any anti-aliasing, so it could be made to
look a little better. And I added a slight glow in Photoshop because I
just can't stop myself...
Cheers,
Edouard.
Attachments:
Download 'buddha-lp-300.jpg' (100 KB)
Download 'skybridge1-full.jpg' (33 KB)
Download 'skybridge1b.jpg' (23 KB)
I'd been trying to place the 3D model into a picture I took of one of
the locations where I made a lightprobe, but had failed (for a couple of
hours) to get the photo mapped onto the plane on which the model was
sitting. Then I tried Rune's illusion.inc file, and bada-bing! Hooray
for the masters!
The only post-processing I did was colour correction and tone mapping in
Photoshop (I saved the render as an HDR), and adding a little bit of
noise to the dark areas of the final image.
All in all I'm pretty pleased with it, although there are still a bunch
of technical hurdles to overcome, not least of which is that it's a bit
of a hack to let the light captured from the table illuminate the model
from beneath (which helped the realism a lot) - no_shadow,
double_illuminate, etc.
Single pass render, 200 area lights sampled from the lightprobe image (a
scaled-down, non-HDR version of which is also attached).
Cheers,
Edouard.
Attachments:
Download 'atrium-buddha.jpg' (296 KB)
Download 'atrium-full-ll.jpg' (33 KB)
This looks good.
I wonder if you could e-mail me the Buddha source file..?
Sven
Looks like a good start.
I spent quite a while developing a lightdome system
(http://news.povray.org/povray.binaries.scene-files/message/%3Cweb.46488bca4e05d400c150d4c10%40news.povray.org%3E/#%3Cweb.46488bca4e05d400c150d4c10%40news.povray.org%3E).
I started with uniform sampling, keeping only lights over a certain
threshold. I found that this used a lot of lights yet still dropped out
the lower ambient lighting, and you needed a very high number of lights
before it looked effective. I ended up using the median-cut method with
very good success. With median cut, rather than a uniform distribution
of light sources of varying intensity, you get light sources of uniform
intensity (though varying colour) at non-uniform spacing. It is highly
effective at capturing both bright and low ambient light sources, and
gives much smoother results with fewer light sources. I even took it
one step further and introduced area lights, which let me use even
fewer effective light sources with smoother results.
It works very well because it accounts for all the lighting in the
scene: areas with brighter lights just get more light sources, while
ambient areas get fewer.
The real limitation in using median cut is the sampling size used for
the original HDR. The finer the image is broken down, the more accurate
it gets (i.e., it doesn't miss very small point light sources), but it
also gets much more computationally intense, so you need to find a
compromise size.
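The median-cut scheme described above can be sketched roughly as follows: recursively split each region along its longer axis at the point where the summed luminance is equal on both sides, then place one light at each final region's energy centroid. This is a simplified 2-D version on a plain luminance array, without the solid-angle weighting a real lightprobe needs, and all names are mine rather than tgq's code:

```python
import numpy as np

def _split_point(sums):
    """Index dividing `sums` into two halves of roughly equal energy."""
    cum = np.cumsum(sums)
    k = int(np.searchsorted(cum, cum[-1] / 2.0)) + 1
    return min(max(k, 1), len(sums) - 1)

def median_cut_lights(lum, n_splits):
    """Return 2**n_splits lights as (row, col, energy) tuples."""
    regions = [(0, lum.shape[0], 0, lum.shape[1])]
    for _ in range(n_splits):
        split = []
        for r0, r1, c0, c1 in regions:
            block = lum[r0:r1, c0:c1]
            if (r1 - r0) >= (c1 - c0):       # longer axis: rows
                k = r0 + _split_point(block.sum(axis=1))
                split += [(r0, k, c0, c1), (k, r1, c0, c1)]
            else:                            # longer axis: columns
                k = c0 + _split_point(block.sum(axis=0))
                split += [(r0, r1, c0, k), (r0, r1, k, c1)]
        regions = split
    lights = []
    for r0, r1, c0, c1 in regions:
        block = lum[r0:r1, c0:c1]
        total = float(block.sum())
        ys, xs = np.mgrid[r0:r1, c0:c1]
        if total > 0:                        # energy centroid of the region
            cy = float((ys * block).sum() / total)
            cx = float((xs * block).sum() / total)
        else:                                # empty region: geometric centre
            cy, cx = (r0 + r1 - 1) / 2.0, (c0 + c1 - 1) / 2.0
        lights.append((cy, cx, total))
    return lights
```

Because every split conserves energy, the final lights between them account for all the light in the image, which is exactly the property described above: bright areas end up with many small regions, dim areas with a few large ones.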
I've been away from the boards for a while with RL, so I missed a lot
of your earlier development. I may check in occasionally as I get time
if you need any assistance with your work, or want to know how mine
went together. (At this point I barely remember how some of my own
features work behind the scenes; I worked on the code for so long that
I forgot...)
-tgq