> "Robert McGregor" <rob### [at] mcgregorfineartcom> wrote:
>
>>
>> I have a 5 inch chrome sphere that I use for that very purpose (plus it looks
>> cool on my desk when not in use). Mount your camera on a tripod and take a set
>> of bracketed shots, then rotate the tripod ~90 degrees around the sphere and
>> take another set of bracketed shots. After you've combined the two bracketed
>> image sets into 2 HDRs, you can merge these offset images to eliminate the
>> camera/tripod completely from the final shot. Of course, Ive's IC is very useful
>> here:
>>
>> http://www.lilysoft.org/IC/ic_index.htm
>>
>
> So the 90-degree image is simply to get a clean area to 'replace' the
> camera-plus-photographer in the original straight-on image? That makes perfect
> sense. Here's a question, though: Is the *final* light probe ultimately made
> from BOTH of those images? (Meaning: Is the 'corrected' straight-on image
> somehow combined WITH the (similarly-corrected) 90-degree image to get a light
> probe that has MORE environment imagery in it? Or is only the straight-on image
> used?)
>
> There's also something *mysterious* about light probes that I still have trouble
> grasping: In my research into the subject, several sources stated that the
> mirrored ball actually gathers environment imagery from BEHIND itself-- in the
> spatial hemisphere out of view of the camera(!)-- implying that the very edges
> of the ball pick up the 'hidden' back-side environment. Is that true (or even
> possible)?
>
>
Yes, that part is true: the reflection at the very edge of the ball does pick
up the environment behind it, so a single view of a mirrored sphere covers
almost the full sphere of directions. When using a mirrored sphere, there is a
blind cone right behind it: the area of the view hidden from the camera by the
sphere itself. There is also a loss of definition near the edge, where the
back-side environment is compressed into very few pixels. That is why you
combine both views: the 90-degree image fills the blind cone and lets you
remove the camera, to get a full spherical covering. See the sketches below.
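
To see why the edge picks up the back side, look at the reflection geometry.
With the camera looking down -Z (orthographic view, as a simplification), a
pixel at offset (x, y) on the unit ball hits the surface where the normal is
n = (x, y, sqrt(1 - x^2 - y^2)), and the view ray v = (0, 0, -1) reflects into
the direction d = v - 2(v.n)n. A quick Python sketch (the function name and
the orthographic approximation are mine):

import numpy as np

def ball_to_direction(x, y):
    # Map coordinates on the mirror ball's image disc (unit radius, ball
    # centered at the origin, camera far away on +Z) to the world
    # direction whose light is reflected into that pixel.
    z = np.sqrt(max(0.0, 1.0 - x * x - y * y))
    n = np.array([x, y, z])            # surface normal at the hit point
    v = np.array([0.0, 0.0, -1.0])     # incoming view ray (orthographic)
    return v - 2.0 * np.dot(v, n) * n  # mirror reflection

print(ball_to_direction(0.0, 0.0))     # (0, 0, 1): the camera's own reflection
print(ball_to_direction(0.7071, 0.0))  # ~(1, 0, 0): 90 degrees to the side
print(ball_to_direction(0.9999, 0.0))  # ~(0, 0, -1): almost directly behind

At the rim the reflected direction swings all the way around to -Z, which is
why nearly the whole environment is captured -- and why the back directions
get squeezed into the outermost pixels (the loss of definition). With a real
camera at a finite distance the incoming rays are not parallel, so a small
cone straight behind the ball is never reached: that is the blind cone. In
practice you unwrap both probes into a common panoramic mapping, rotate one by
the 90 degrees you turned the tripod, and composite them, masking out the
camera/tripod reflection in each.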
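As for combining each bracketed set into an HDR first: the standard recipe is
a weighted average of the linearized exposures, each divided by its shutter
time. A minimal sketch, assuming the frames are already linear (e.g. developed
from RAW with no tone curve) and aligned -- the file names, shutter times, and
the choice of imageio as loader are just for illustration:

import numpy as np
import imageio.v3 as iio

# Bracketed exposures of the ball (hypothetical files) and shutter times (s).
files = ["ball_1-60.tif", "ball_1-15.tif", "ball_1-4.tif"]
times = [1 / 60, 1 / 15, 1 / 4]

acc = wsum = None
for fname, t in zip(files, times):
    img = iio.imread(fname).astype(np.float64) / 65535.0  # 16-bit -> [0, 1]
    # Hat weighting: trust midtones, distrust clipped shadows and highlights.
    w = 1.0 - np.abs(2.0 * img - 1.0)
    if acc is None:
        acc, wsum = np.zeros_like(img), np.zeros_like(img)
    acc += w * (img / t)               # per-exposure radiance estimate
    wsum += w
hdr = acc / np.maximum(wsum, 1e-6)     # weighted mean radiance per pixel

Real tools also align the frames and recover the camera's response curve first
(Debevec and Malik's method); the sketch assumes that has already been done.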