POV-Ray : Newsgroups : povray.binaries.images : Experiments with light probes
  Experiments with light probes (Message 8 to 17 of 27)  
From: Bill Pragnell
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 07:20:01
Message: <web.4a250a8b75b7d3c96dd25f0b0@news.povray.org>
"Edouard" <pov### [at] edouardinfo> wrote:
> One of my shots - to give you an idea of the source material I'm getting with a
> 1 inch ball.

That's great - much clearer than I've managed so far even with my bigger ball. I
really need to polish it and investigate that focus override... :)



From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 08:40:00
Message: <web.4a251cb475b7d3c981c811d20@news.povray.org>
"Bill Pragnell" <bil### [at] hotmailcom> wrote:
> "Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> > -Use HDR shop to make sure the images are aligned perfectly.  Open all of one
> > series, maximize the windows, then centre all images (5 on the keypad).  You
> > can then flick back and forth between each pic and use the
> > "Image/Transform/Shift w/wrap" function in HDR shop to align each one, using
> > one as a starting reference.  Even a one-pixel shift can help if necessary.
>
> Do you mean the bracketed series before combination? There's no problem here,
> the HDRs I'm getting are very sharp.
>

Yes, I mean the bracketed shots.  But it sounds like you aren't having a problem
here.

> > -Try to match resolutions with your two combined HDRs and the masking image.
> > Scale smoothly.  I always scale up to the largest.
>
> Yup, am doing. I scale down to the smaller one, but it's usually only different
> by a dozen pixels or so. I make the mask by painting on a layer over the
> rotated image to wipe out the unwanted areas, then clearing the lower layer to
> black, merging and blurring.
>
> > - I use the mask provided and scale it accordingly
>
> I never thought of that. However, my shadow can be anywhere, and my tripod head
> gets in the way quite a bit.
>

If you can do it, a custom mask is probably the best, as you can tailor it to the
specific areas you want to remove.  However, it's typically the centre and back
of the image that always need to be removed.  Unwanted shadows can be an
additional problem, though.  I like to use a delay on my CHDK bracketing so I
can get myself and my shadow out of the shot as much as possible (hide, or move
further back so you are smaller).


> > - After you've successfully created your full HDR probe, it is still in
> > mirrorball format; transform it to Lat/Long.  I like to keep the height the
> > same as the mirrorball resolution and the width double that (eg 500x500
> > mirrorball -> 1000x500 Lat/Long)
>
> Hmm, I've been unwrapping to angular and rotating at the same time, as per the
> tutorial. Do you think keeping it in the mirrorball projection would be better
> for matching and blending? Or do you really mean angular ;-) ?
>
> I'm using megapov, so I figured using the angular map would be fine for the time
> being. I'll probably supply both projections when I get a collection going.

No, wrapping and rotating at the same time is fine.  I still would always
recommend Lat/Long format over angular.


>
> > Maybe post some details of your difficulties, or if you'd like you can email
> > me directly.
>
> I'll try post an example of the problem later today... it's slight, but enough
> of a mismatch to be annoying. If it's an interior environment, straight lines
> of walls/windows etc end up broken. Maybe I'm too much of a perfectionist!
>
> Bill

It will help to see what you see as the issue.
It may be that you are being too much of a perfectionist.  Note that the larger
the ball and the closer the scene or objects in it are to the ball, the greater
the parallax errors are going to be (i.e., lines not lining up).  This is
particularly the case in indoor shots, where the whole environment tends to be
closer to the ball.  Make sure your mask has an appropriate blur to help hide
these imperfections.
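The mask-and-blend step being discussed can be sketched in a few lines. This is an illustrative sketch, not from the thread: it assumes the two probes are already unwrapped and aligned as float arrays, uses a crude box blur where HDRShop's blur would normally be applied, and all names are made up.

```python
import numpy as np

def blend_probes(img_a, img_b, mask, blur_radius=5):
    """Combine two aligned HDR probe images with a soft mask.

    mask is a 2-D float array in [0, 1]: 0 keeps img_a, 1 keeps img_b.
    Blurring the mask feathers the seam so small parallax mismatches
    fade out instead of showing as broken lines.
    """
    # separable box blur as a crude stand-in for a Gaussian blur;
    # mode="same" zero-pads at the borders, which is fine for a sketch
    k = 2 * blur_radius + 1
    kernel = np.ones(k) / k
    soft = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, mask)
    soft = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, soft)
    # linear blend; HDR pixel values are linear, so no gamma handling is needed
    return img_a * (1.0 - soft[..., None]) + img_b * soft[..., None]
```

The wider the blur, the better small parallax errors are hidden, at the cost of ghosting where the two probes genuinely disagree.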

-tgq



From: Bill Pragnell
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 09:25:00
Message: <web.4a2527a075b7d3c96dd25f0b0@news.povray.org>
"Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> Unwanted shadows can be an
> additional problem, though.  I like to use a delay on my CHDK bracketing so I
> can get myself and my shadow out of the shot as much as possible (hide, or move
> further back so you are smaller).

Another good idea.

> No, wrapping and rotating at the same time is fine.  I still would always
> recommend Lat/Long format over angular.

Any particular reason? Does this projection preserve more information? I expect
I can still do the blending in HDRShop... ?

> It may be that you are being too much of a perfectionist.

Wouldn't be the first time ;-)

> Note that the larger
> the ball and the closer the scene or objects in it are to the ball, the
> greater the parallax errors are going to be (i.e., lines not lining up).  This
> is particularly the case in indoor shots, where the whole environment tends to
> be closer to the ball.

This is as I suspected. It feels reminiscent of the parallax ghosting one sees
with handheld-shot panoramas. The worst example was when I had the ball sitting
on a window-sill, but it's still noticeable in the middle of a room.

Bill



From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 10:30:01
Message: <web.4a25363d75b7d3c981c811d20@news.povray.org>
"Bill Pragnell" <bil### [at] hotmailcom> wrote:
> "Trevor G Quayle" <Tin### [at] hotmailcom> wrote:

> > No, wrapping and rotating at the same time is fine.  I still would always
> > recommend Lat/Long format over angular.
>
> Any particular reason? Does this projection preserve more information? I expect
> I can still do the blending in HDRShop... ?
>

No particular reason other than I find them easier to work with and view.
Reasons I like lat/long:

-Easy to open, view and understand the scene.  Any rotation from horizontal
is also usually easy to see and correct.  Viewing an angular map, I find it
very difficult to understand what the overall scene looks like.

-megaPOV may have the angular map type, but not all programs do, and it is very
simple to wrap a lat/long map onto a sphere.

-I have an extensive lightdome macro that I have developed for POV.  It is much
easier to parse and analyze the lat/long format.
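As a minimal illustration of that wrapping (not from the post; the function name is my own, and different programs disagree on where longitude zero sits), a lat/long pixel maps to a direction on the unit sphere like this:

```python
import math

def latlong_to_direction(u, v, width, height):
    """Map lat/long pixel (u, v) to a unit direction vector.

    Assumed convention: u runs left to right over longitude
    -180..+180 deg, v runs top to bottom over latitude +90..-90 deg.
    The +0.5 samples each pixel at its centre.
    """
    lon = (u + 0.5) / width * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v + 0.5) / height * math.pi
    return (math.cos(lat) * math.sin(lon),   # x
            math.sin(lat),                   # y (up)
            math.cos(lat) * math.cos(lon))   # z (forward)
```

Inverting this pair of formulas is equally direct, which is much of why the lat/long layout is easy to parse.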


I don't know for certain whether they preserve the information better, but I do
suspect that they store the environment data more efficiently and uniformly.
Lat/long maps actually contain more information than necessary, as the pixels
get spread apart towards the poles.  A truly equal representation would be
cosine-shaped, progressing from a width of one circumference at the equator to
zero at the poles.  Angular maps only occupy a circular portion of the square
image, and not all of that is used efficiently: as one progresses outward from
the centre (i.e., as the angle from the viewing line increases), the number of
pixels (circumference) increases linearly, whereas, to be equal, it should
progress sine-shaped.  For angles from 0 to 90 degrees there are up to 50% fewer
pixels than ideal; from 90 to 180 there are more pixels than ideal, meaning less
efficient storage of the data.

-tgq



From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 11:20:00
Message: <web.4a2542bb75b7d3c981c811d20@news.povray.org>
"Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> "Bill Pragnell" <bil### [at] hotmailcom> wrote:
> > "Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> I don't know for certain whether they preserve the information better, but I do
> suspect that they store the environment data more efficiently and uniformly.
> Lat/long maps actually contain more information than necessary, as the pixels
> get spread apart towards the poles.  A truly equal representation would be
> cosine-shaped, progressing from a width of one circumference at the equator to
> zero at the poles.  Angular maps only occupy a circular portion of the square
> image, and not all of that is used efficiently: as one progresses outward from
> the centre (i.e., as the angle from the viewing line increases), the number of
> pixels (circumference) increases linearly, whereas, to be equal, it should
> progress sine-shaped.  For angles from 0 to 90 degrees there are up to 50%
> fewer pixels than ideal; from 90 to 180 there are more pixels than ideal,
> meaning less efficient storage of the data.
>
> -tgq

I did some strange math:

The lat/long format has an efficiency of ~64% (64% of the pixels it contains
carry unique, required information).

The angular format has a similar efficiency over the circular area itself, but
coupled with the inefficiency of the circular area within the square image,
this gives a total efficiency of ~50% (only 50% of the pixels are unique and
usable).  Add to this that over the middle of the circle there is a total
deficiency of about 27% (about 27% fewer pixels in this region than needed),
based on the 90deg circle pixel count.  Conversely, if we assume that the pixel
count at/near the centre (0deg) of the angular map is sufficient, such that
there is no inherent deficiency, then the map has a total efficiency of ~32%.

I don't know if this makes much sense or is a valid assessment, nor how these
numbers relate to the data available from the original mirrorball images, but
it does seem to indicate that, mathematically speaking, the lat/long format
preserves the data better and more efficiently.
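The ballpark figures can be sanity-checked numerically. The sketch below is mine, not from the post: it averages the "needed pixels" fraction over the latitudes of a lat/long map, then multiplies by the circle-in-square fill factor for the angular case.

```python
import math

# Lat/long: a row at latitude "lat" stores W pixels but only needs
# about W*cos(lat) of them, so the useful fraction is the average of
# cos(lat) over all latitudes, which tends to 2/pi ~ 63.7%.
N = 100_000
latlong_eff = sum(
    math.cos(-math.pi / 2 + (i + 0.5) * math.pi / N) for i in range(N)
) / N

# Angular: the probe only occupies the inscribed circle of the square
# image, so at most pi/4 ~ 78.5% of the pixels are used at all.
circle_fill = math.pi / 4.0

# If the in-circle storage were about as efficient as lat/long, the
# overall figure would be the product -- close to the ~50% quoted above.
angular_est = latlong_eff * circle_fill
```

Notably, (2/pi) * (pi/4) is exactly 1/2, which lines up with the ~50% estimate.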

-tgq



From: clipka
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 11:40:00
Message: <web.4a25478375b7d3c9f708085d0@news.povray.org>
"Bill Pragnell" <bil### [at] hotmailcom> wrote:
> It works a treat, but often I find that the rotated image doesn't quite match
> the other image after unwrapping. The points I select in the editor match
> perfectly, naturally, but the images seem to diverge near the edges. I'm not
> entirely sure why this is happening, but making sure the ball is centred in the
> frame when taking the pictures seems to help considerably. Can any resident
> HDR-makers shed any light on this?

I'm not an expert on this, but AFAIK virtually all real-life cameras exhibit
some distortion of objects not precisely in the center of the frame.

It's basically the same effect as can be seen with POV-Ray's standard
"perspective" camera: Place two spheres side by side - one in the center of the
image, and one at the side. Crop both shots to the spheres' dimensions. You'll
notice that the off-center sphere does not seem to be circular.

AFAIK it is mathematically not possible to design a camera that produces planar
2D images without any such effects.

The more zoom you use, the less prominent these distortions should be.


If your lightprobe-generating software takes already-cropped images, it will
most likely be unable to determine whether your chrome sphere was off-center in
the original shot, and silently assume that it was taken head-on.



From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 12:45:01
Message: <web.4a2555f275b7d3c981c811d20@news.povray.org>
"clipka" <nomail@nomail> wrote:
> "Bill Pragnell" <bil### [at] hotmailcom> wrote:
> > It works a treat, but often I find that the rotated image doesn't quite match
> > the other image after unwrapping. The points I select in the editor match
> > perfectly, naturally, but the images seem to diverge near the edges. I'm not
> > entirely sure why this is happening, but making sure the ball is centred in the
> > frame when taking the pictures seems to help considerably. Can any resident
> > HDR-makers shed any light on this?
>
> I'm not an expert on this, but AFAIK virtually all real-life cameras exhibit
> some distortion of objects not precisely in the center of the frame.
>
> It's basically the same effect as can be seen with POV-Ray's standard
> "perspective" camera: Place two spheres side by side - one in the center of the
> image, and one at the side. Crop both shots to the spheres' dimensions. You'll
> notice that the off-center sphere does not seem to be circular.
>
> AFAIK it is mathematically not possible to design a camera that produces planar
> 2D images without any such effects.
>
> The more zoom you use, the less prominent these distortions should be.
>
>
> If your lightprobe-generating software takes already-cropped images, it will
> most likely be unable to determine whether your chrome sphere was off-center in
> the original shot, and silently assume that it was taken head-on.

This should usually not be the problem.  I say usually, as I would assume that
most people try to center the mirrorball in their shot, which minimizes any
distortion effects directly related to being off-center.  However, it is
worth noting as general advice to do so.

There is far more distortion due to perspective and parallax in most cases.  The
parallax distortion occurs because the mirrorball is not infinitely small and
the environment is not infinitely large.  When the shots are taken from the two
different positions (90deg to each other), they are not reflecting *exactly*
the same geometry.  The amount of parallax distortion is related to how big the
mirrorball is relative to the environment or to particular visible objects.

There is also perspective-related distortion that comes from the fact that the
camera isn't taking a true orthographic shot of the mirrorball.  A portion of
the scene is actually missing from the back (i.e., you aren't really getting
the full 360deg), yet the image is used as if it were complete.  The amount
missing is directly related to the ratio of the mirrorball's size to the
camera's distance from it.
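That last relationship is easy to write down; a small sketch (the function name is mine, not from the thread):

```python
import math

def missing_angle_deg(ball_radius, camera_distance):
    """Total included angle, in degrees, of the cone missing from the
    back of the probe because the camera's view of the ball is
    perspective rather than orthographic: A = 2*asin(R/X)."""
    return math.degrees(2.0 * math.asin(ball_radius / camera_distance))
```

For a ball of radius 30 shot from 300 units away this gives about 11.5 degrees, and backing the camera off shrinks it, which is one reason shooting from further away (with more zoom) gives cleaner probes.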

-tgq



From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 13:35:01
Message: <web.4a2561fb75b7d3c981c811d20@news.povray.org>
A couple more little pieces of advice based on my thoughts on clipka's comments:

1) Make sure the mirrorball is as close to the centre of the camera view as
possible when taking the shots, minimizing the lens distortion effects that
occur off-centre (eg: barrel distortion).

2) When picking your matching points for rotation, as I said, try to be as
precise as possible.  Also try to use points on objects that are more distant;
this will minimize the parallax errors for the scene as a whole.  There will
still be parallax errors in closer objects.  Picking closer points, though it
may seem easier to be precise with them, will likely cause much larger parallax
errors.

3) This is a little secret I didn't want to describe yet, as it can be tricky to
master and involves a little math:

Because you are not getting a true orthographic shot of the mirrorball, some of
the back portion of the scene is actually missing due to perspective.  The
amount missing is related to the ratio of ball size to the camera's distance
from the ball.  However, when transforming and using the map, it is assumed to
be a true 360deg image.  This can cause some distortion as well, especially
with larger ball/distance ratios.  This distortion, if noticeable enough, can
cause errors when trying to match and combine your two images.  I have
developed a method to try to minimize this.  It is not perfect and can be a
little tedious:

(I will use a ball diameter of D=60 (R=30), distance of X=300 and image
resolution of I=1000x1000 for the examples)

R/X=0.1  ratio of ball size to distance

A=2*ASIN(R/X)=11.5deg : total included angle of perspective.  This also means
that the area missing from the rear of the mirrorball is ~11.5deg

A/180=0.0638 (~6.38%) Percent missing

Now take the mirrorball and transform it to angular.
Now we want to crop the angular map LARGER than the edge by an amount equal to
the percent missing

1000*0.0638=64

Therefore, add 64 pixels to each of the top, bottom and sides of the image.
There will be no image information in this area, but that's OK, as it falls in
the areas getting masked out anyway.  Make sure that under "Select/Select
Options", "Restrict Selection to Image" is UNCHECKED.
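The worked numbers above condense into one small helper (a sketch; the name is mine). The 0.0638 fraction corresponds to taking the 11.5deg missing angle over 180deg, which is what reproduces the 64-pixel figure:

```python
import math

def crop_padding_px(ball_radius, camera_distance, image_size):
    """Pixels to add to each side of a square angular map to make
    room for the rear cone missing from the mirrorball shot."""
    a_deg = math.degrees(2.0 * math.asin(ball_radius / camera_distance))
    # fraction missing taken over 180 deg, as in the worked example:
    # 1000 * (11.5 / 180) ~ 64 px per side
    return round(image_size * a_deg / 180.0)
```

With the example values (R=30, X=300, 1000x1000 image) this returns 64, matching the figure above.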

Now there's one more tricky thing to do before proceeding.  If you scan over the
black crop areas, you'll note that their colour is "-1.#IO".  This will cause a
problem when combining, as it is basically a null colour (not black) and HDRShop
doesn't handle it very well.  Make sure you have .BMP files set to open in an
appropriate editor by default in Windows (I use good old Windows Paint).  Go to
"File/Edit in Image Editor" and it should open up for editing.  Now flood the
perimeter black area to white or some other colour.  DO NOT EDIT ANYTHING ELSE.
Save and close.  Back in HDRShop, click the "Hit OK when edit complete" button.
It should update with the white borders now.  What happens here is that
everything is preserved as it was EXCEPT whatever was edited.  The borders
should now show whatever colour you chose rather than the -1.#IO error.

You can now proceed with point matching and masking/combining of the images from
the angular maps as before, having compensated for the missing area.  This isn't
exactly perfect, since the mirrorball still doesn't capture a full 360deg, but
the mismatch should be much less noticeable.

As I said, this can be tricky and tedious, but I find it does help somewhat if
you can get it right.

-tgq



From: "Jérôme M. Berger"
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 15:49:16
Message: <4a25823c$1@news.povray.org>
Bill Pragnell wrote:
> I should see if Cinepaint works under wine...
> 
	What for? According to the web site, it runs natively on Linux, OSX 
and BSD (and not on Windows btw)

		Jerome
-- 
mailto:jeb### [at] freefr
http://jeberger.free.fr
Jabber: jeb### [at] jabberfr



From: Bill Pragnell
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 16:20:01
Message: <web.4a25894d75b7d3c969f956610@news.povray.org>
"Jérôme M. Berger" <jeb### [at] freefr> wrote:
> Bill Pragnell wrote:
> > I should see if Cinepaint works under wine...
> >
>  What for? According to the web site, it runs natively on Linux, OSX
> and BSD (and not on Windows btw)

Oops. I must have been thinking of something else. I've done quite a lot of
hdr-related software searches recently; it's obviously all coalescing in my brain :)




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.