I finished the improved version of my stochastic camera rig for MegaPOV (using
its user_defined_camera feature).
I was trying to get rid of a certain kind of artifact that could show up in my
stochastic renders (effectively displaying a correlation between the focal blur
offsets and which light was being used from the lightdome code in that frame).
The user_defined_camera allowed each pixel to be a different offset, so that
correlation went away.
The code can also be run in a single frame by setting a sub-pixel flag and then
rendering with a high anti-aliasing value.
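The gist of the fix can be sketched like this (a Python sketch, not the actual macro code; the names and the indexing scheme are illustrative):

```python
def lens_offset(px, py, frame, width, height, samples):
    """With a conventional camera, every pixel in a frame shared one
    focal-blur offset, so the offset correlated with whichever light
    the lightdome code used that frame.  Indexing the sample sequence
    by pixel as well as frame breaks that correlation."""
    index = (frame * height + py) * width + px
    return samples[index % len(samples)]
```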
I've got three ways of setting the bokeh - from a bitmap (you can use the bokeh
from a real photo), from a pigment (onion works well for example), or by
specifying the number of blades, rotation and edge fuzziness.
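The blade-based option works roughly like this (again a Python sketch rather than the POV functions, with made-up names): weight each point of the lens disc by whether it falls inside a regular n-gon aperture, with a linear ramp across each edge for the fuzziness, then rejection-sample offsets against that weight.

```python
import math
import random

def blade_weight(x, y, blades=5, rotation=0.0, fuzz=0.0):
    """Aperture weight at (x, y) for a regular polygon with unit
    apothem: 1 inside, 0 outside, ramping linearly across a band of
    width `fuzz` at each edge."""
    sector = math.pi / blades
    ang = math.atan2(y, x) - rotation
    # Fold the angle into one sector; d is the distance toward the
    # nearest blade edge along that edge's normal.
    a = (ang % (2.0 * sector)) - sector
    d = math.hypot(x, y) * math.cos(a)
    if fuzz <= 0.0:
        return 1.0 if d <= 1.0 else 0.0
    return max(0.0, min(1.0, (1.0 + fuzz / 2.0 - d) / fuzz))

def sample_bokeh(blades=5, rotation=0.0, fuzz=0.0):
    """Rejection-sample a lens offset with the polygonal weight."""
    while True:
        x = random.uniform(-1.3, 1.3)
        y = random.uniform(-1.3, 1.3)
        if random.random() < blade_weight(x, y, blades, rotation, fuzz):
            return x, y
```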
There are some downsides, mainly to do with the limited nature and speed of POV
functions. The code adds several seconds to a 500x500 render. I also couldn't
calculate Halton positions on the fly, so I had to precompute an array at parse
time for the functions to read, and that takes about a second per thousand
entries, which is too slow for generating a whole image's worth of them.
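For reference, the Halton value for index i in a given base is just the radical inverse of i, which is what gets tabulated at parse time (sketched in Python here, not the SDL):

```python
def halton(i, base):
    """Radical inverse of i in the given base: the i-th Halton value."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# Precompute a 2D table (bases 2 and 3), as the rig does at parse time.
table = [(halton(i, 2), halton(i, 3)) for i in range(1, 1001)]
```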
I've added code to allow the user to set the size of their POV units (e.g. 1
unit = 1 mm, 1 unit = 10 meters, 1 unit = 1 inch, etc).
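That matters because real-lens quantities have to be converted into scene units; for instance (a hypothetical helper, not the macro's actual interface), the aperture radius of a lens in POV units is:

```python
def lens_radius_in_scene_units(focal_length_mm, f_stop, mm_per_unit):
    """Aperture diameter is focal length / f-number; halve it for the
    radius and divide by how many millimetres one POV unit represents."""
    return focal_length_mm / (2.0 * f_stop) / mm_per_unit

# e.g. a 50mm f/2 lens in a scene where 1 unit = 10 mm
# gives an aperture radius of 1.25 units.
```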
I'll post the updated Camera35mm macros with all this newness soon.
Attached is a picture I did with the code in MegaPOV. Same old Buddha scene I
posted a week or two ago, but with a more attractive pentagonal bokeh, and with
blurred reflection on the center Buddha turned back on. I've processed it like I
would process a photo taken with my SLR, as it's starting to feel real enough to
me for that treatment.
Cheers,
Edouard.
Attachments:
Download 'buddha-dof-udc.jpg' (106 KB)