POV-Ray : Newsgroups : povray.binaries.images : Experiments with light probes
  Experiments with light probes (Message 11 to 20 of 27)
From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 10:30:01
Message: <web.4a25363d75b7d3c981c811d20@news.povray.org>
"Bill Pragnell" <bil### [at] hotmailcom> wrote:
> "Trevor G Quayle" <Tin### [at] hotmailcom> wrote:

> > No wrapping and rotating at the same time is fine.  I still would always
> > recommend Lat/Long format over angular.
>
> Any particular reason? Does this projection preserve more information? I expect
> I can still do the blending in HDRSHop... ?
>

No particular reason other than I find them easier to work with and view.
Reasons I like lat/long:

- Easy to open, view and understand the scene.  Any rotation from horizontal
is also usually easy to see and correct.  Viewing an angular map, I find it
very difficult to understand what the overall scene looks like.

- MegaPOV may have the angular map type, but not all programs do, and it is
very simple to wrap the lat/long to a sphere.

- I have an extensive lightdome macro that I have developed for POV.  It is
much easier to parse and analyze the lat/long format.


I don't know for certain whether they preserve the information better or not,
but I do suspect that they store the environment data more efficiently and
uniformly.  They actually contain more information than necessary, as pixels
get spread apart towards the poles.  A truly equal representation would be
cos-shaped, progressing from a width of 1 circumference at the equator to 0 at
the poles.  Angular maps only occupy a circular portion of the square image,
and not all of this is efficiently used: as one progresses outward from the
centre (i.e. as the angle from the viewing line increases), the number of
pixels (circumference) increases linearly, whereas, to be equal, it should
progress sin-shaped.  For angles from 0 to 90, there are fewer pixels than
ideal by up to 50%; from 90 to 180, there are more pixels than ideal, meaning
less efficient storage of the data.
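Here's a quick Python sketch to put rough numbers on that (my own
illustration; the normalization, which takes the 90-degree ring as exactly
sufficient, is my assumption, not an established convention):

```python
import math

# In an angular map, the pixel count of a ring grows linearly with the angle
# from the viewing line; an equal-representation ideal grows as sin(angle).
# Normalization (assumed): the ring at 90 degrees is exactly sufficient.
for deg in (10, 45, 90, 135, 170):
    theta = math.radians(deg)
    available = (2 / math.pi) * theta  # linear growth, equals 1 at 90 deg
    ideal = math.sin(theta)            # sin-shaped ideal, equals 1 at 90 deg
    print(deg, round(available / ideal, 2))
```

Ratios below 1 (inside 90 degrees) mean too few pixels; ratios above 1
(outside 90 degrees) mean redundant ones.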

-tgq



From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 11:20:00
Message: <web.4a2542bb75b7d3c981c811d20@news.povray.org>
"Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> "Bill Pragnell" <bil### [at] hotmailcom> wrote:
> > "Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> I don't know for certain whether they preserve the information better or not,
> but I do suspect that they store the environment data more efficiently and
> uniformly.  They actually contain more information than necessary, as pixels
> get spread apart towards the poles.  A truly equal representation would be
> cos-shaped, progressing from a width of 1 circumference at the equator to 0 at
> the poles.  Angular maps only occupy a circular portion of the square image,
> and not all of this is efficiently used: as one progresses outward from the
> centre (i.e. as the angle from the viewing line increases), the number of
> pixels (circumference) increases linearly, whereas, to be equal, it should
> progress sin-shaped.  For angles from 0 to 90, there are fewer pixels than
> ideal by up to 50%; from 90 to 180, there are more pixels than ideal, meaning
> less efficient storage of the data.
>
> -tgq

I did some strange math:

The lat/long format has an efficiency of ~64% (64% of the pixels it contains
carry unique, required information).

The angular format has a similar efficiency over the circular area itself, but
coupled with the inefficiency of the circular area within the square image,
this gives a total efficiency of ~50% (only 50% of the pixels are unique and
usable).  Add to this that over the middle of the circle there is a total
deficiency of about 27% (there are about 27% fewer pixels in this region than
needed), based on the 90deg circle pixel count.  Conversely, if we assume that
the pixel count at/near the centre (0deg) of the angular map is sufficient,
such that there is no inherent deficiency, then the map has a total efficiency
of ~32%.

I don't know if this makes much sense or is a valid assessment, nor how these
numbers relate to the data available from the original mirrorball images, but
it does seem to indicate that, mathematically speaking, the lat/long format
preserves the data better and more efficiently.
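For what it's worth, the ~64% and ~50% figures fall out of a couple of lines
of Python (my own sketch; it assumes, as stated above, that the angular map's
radial efficiency matches the lat/long figure):

```python
import math

# Lat/long: each row of width W carries useful data over only W*cos(latitude),
# so the efficiency is the average of cos over -90..90 degrees, i.e. 2/pi.
latlong_eff = 2 / math.pi
# Angular: assume the same efficiency over the circular area (as in the post),
# multiplied by the circle-in-square fill factor pi/4.
angular_eff = latlong_eff * (math.pi / 4)
print(round(latlong_eff * 100))  # ~64 (%)
print(round(angular_eff * 100))  # 50 (%)
```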

-tgq



From: clipka
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 11:40:00
Message: <web.4a25478375b7d3c9f708085d0@news.povray.org>
"Bill Pragnell" <bil### [at] hotmailcom> wrote:
> It works a treat, but often I find that the rotated image doesn't quite match
> the other image after unwrapping. The points I select in the editor match
> perfectly, naturally, but the images seem to diverge near the edges. I'm not
> entirely sure why this is happening, but making sure the ball is centred in the
> frame when taking the pictures seems to help considerably. Can any resident
> HDR-makers shed any light on this?

I'm not an expert on this, but AFAIK virtually all real-life cameras exhibit
some distortion of objects not precisely in the center of the frame.

It's basically the same effect as can be seen with POV-Ray's standard
"perspective" camera: place two spheres side by side - one in the center of
the image, and one at the side. Crop both shots to the spheres' dimensions.
You'll notice that the off-center sphere does not appear circular.

AFAIK it is mathematically not possible to design a camera that produces planar
2D images without any such effects.

The more zoom you use, the less prominent these distortions should be.
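The size of the effect can be estimated without rendering anything. Here's a
rough small-angle sketch (my own model, not POV-Ray's exact projection),
comparing the radial and tangential extents of an off-axis sphere's silhouette:

```python
import math

# Rough model of an off-axis sphere's silhouette on a flat image plane.
# alpha: angular radius of the sphere; phi: angle off the view axis.
alpha = math.radians(2.0)  # sphere subtends about 4 degrees in total
for phi_deg in (0, 20, 40):
    phi = math.radians(phi_deg)
    # extent towards/away from the image centre
    radial = math.tan(phi + alpha) - math.tan(phi - alpha)
    # extent perpendicular to that
    tangential = 2 * math.tan(alpha) / math.cos(phi)
    print(phi_deg, round(radial / tangential, 3))
```

The ratio grows roughly as 1/cos(phi): the further off-centre the sphere, the
more elongated its image.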


If your lightprobe-generating software takes already-cropped images, it will
most likely be unable to determine whether your chrome sphere was off-center in
the original shot, and silently assume that it was taken head-on.



From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 12:45:01
Message: <web.4a2555f275b7d3c981c811d20@news.povray.org>
"clipka" <nomail@nomail> wrote:
> "Bill Pragnell" <bil### [at] hotmailcom> wrote:
> > It works a treat, but often I find that the rotated image doesn't quite match
> > the other image after unwrapping. The points I select in the editor match
> > perfectly, naturally, but the images seem to diverge near the edges. I'm not
> > entirely sure why this is happening, but making sure the ball is centred in the
> > frame when taking the pictures seems to help considerably. Can any resident
> > HDR-makers shed any light on this?
>
> I'm not an expert on this, but AFAIK virtually all real-life cameras exhibit
> some distortion of objects not precisely in the center of the frame.
>
> It's basically the same effect as can be seen with POV-Ray's standard
> "perspective" camera: place two spheres side by side - one in the center of
> the image, and one at the side. Crop both shots to the spheres' dimensions.
> You'll notice that the off-center sphere does not appear circular.
>
> AFAIK it is mathematically not possible to design a camera that produces planar
> 2D images without any such effects.
>
> The more zoom you use, the less prominent these distortions should be.
>
>
> If your lightprobe-generating software takes already-cropped images, it will
> most likely be unable to determine whether your chrome sphere was off-center in
> the original shot, and silently assume that it was taken head-on.

This should usually not be a problem.  I say usually because I would assume
that most people try to center the mirrorball in their shot, which minimizes
any distortion effects directly related to being off-center.  However, it is
worth noting as general advice to do so.

There is far more distortion due to perspective and parallax in most cases.
The parallax distortion occurs because the mirrorball is not infinitely small
and the environment is not infinitely large.  When shots are taken from the
two different positions (90deg to each other), they are not reflecting
*exactly* the same geometry.  The amount of parallax distortion is related to
how big the mirrorball is relative to the environment or the particular
visible objects.
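To get a feel for the size of this error, here's a back-of-envelope sketch (my
own rough model: I take the baseline between the two effective reflection
points on the ball to be on the order of the ball radius, which is an
assumption, not a measured figure):

```python
import math

R = 30.0  # ball radius in mm (example value)
# An object at distance d, seen from two viewpoints separated by roughly R,
# appears shifted by about atan(R / d) between the two unwrapped maps.
for d_mm in (300, 1000, 5000):
    shift_deg = math.degrees(math.atan(R / d_mm))
    print(d_mm, round(shift_deg, 2))
```

The shift shrinks quickly with distance, which is why distant features line up
far better than nearby ones.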

There is also perspective-related distortion that comes from the fact that the
camera isn't taking a true orthographic shot of the mirrorball.  A portion of
the scene is actually missing from the back (i.e., you aren't really getting
the full 360deg), yet the map is used as if it were complete.  The amount
missing is directly related to the ratio of the mirrorball size to the
distance of the camera from the mirrorball.

-tgq



From: Trevor G Quayle
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 13:35:01
Message: <web.4a2561fb75b7d3c981c811d20@news.povray.org>
A couple more little pieces of advice based on my thoughts on clipka's comments:

1) Make sure the mirrorball is as close to the centre of the camera view as
possible when taking the shots, minimizing the lens distortion effects that
occur off-centre (eg: barrel distortion).

2) When picking your matching points for rotation, as I said, try to be as
precise as possible.  Also try to use points on objects that are more distant.
This will minimize the parallax errors.  There will still be parallax errors
in closer objects, but at least the error will be minimized for the scene as a
whole.  Picking closer points, though it may seem easier to be more precise,
will likely introduce much larger parallax errors.

3) This is a little secret I didn't want to describe yet, as it can be tricky to
master and involves a little math:

Because you are not getting a true orthographic shot of the mirrorball, some
of the back portion of the scene is actually missing due to perspective.  The
amount missing is related to the ratio of ball size to the distance of the
camera from the ball.  However, when transforming and using the map, it is
assumed to be a true 360-degree image.  This can cause some distortion as
well, especially with larger ball/distance ratios.  This distortion, if
noticeable enough, can cause errors when trying to match and combine your two
images.  I have developed a method to try to minimize this.  It is not perfect
and can be a little tedious:

(I will use a ball diameter of D=60 (R=30), a distance of X=300 and an image
resolution of I=1000x1000 for the examples)

R/X = 0.1 : ratio of ball size to distance

A = 2*ASIN(R/X) = 11.5deg : total included angle of perspective.  This also
means that the area missing from the rear of the mirrorball is ~11.5deg

A/180 = 0.0638 (~6.38%) : percent missing

Now take the mirrorball image and transform it to angular.
We want to crop the angular map LARGER than the edge by an amount equal to the
percent missing:

1000*0.0638 = 64

Therefore, add 64 pixels to each of the top, bottom and sides of the image.
There will be no image information in this area, but that's OK, as this is in
the areas getting masked out anyway.  Make sure under "Select/Select Options"
that "Restrict Selection to Image" is UNCHECKED.
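The numbers above can be sketched in a few lines of Python (using the example
values from this post, and reading the 0.0638 fraction as A/180, which matches
the arithmetic):

```python
import math

# Example values from the post: ball radius R=30, camera distance X=300,
# angular map resolution I=1000 pixels.
R, X, I = 30.0, 300.0, 1000
A = 2 * math.degrees(math.asin(R / X))  # total included angle of perspective
pad = round(I * A / 180)                # pixels to add to each edge
print(round(A, 1), pad)  # 11.5 64
```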

Now there's one more tricky thing to do before proceeding.  If you scan over
the black crop areas, you'll note that their colour is "-1.#IO".  This will
cause a problem when combining, as it is basically a null colour (not black),
and HDRShop doesn't handle it very well.  Make sure you have .BMP files set to
open in an appropriate editor by default in Windows (I use good old Windows
Paint).  Go to "File/Edit in Image Editor" and the image should open for
editing.  Now flood-fill the black perimeter area with white or some other
colour.  DO NOT EDIT ANYTHING ELSE.  Save and close.  Back in HDRShop, click
the "Hit OK when edit complete" button.  It should update with the white
borders now.  What happens here is that everything is preserved as it was
EXCEPT whatever was edited.  The borders should be whatever colour you chose
rather than showing the -1.#IO error.

You can now proceed with point matching and masking/combining of the images
from the angular maps as before, but now we have compensated for the missing
area.  This isn't exactly perfect, due to the mirrorball not capturing the
full 360deg, but the error should be much less noticeable.

As I said, this can be tricky and tedious, but I find it does help somewhat if
you can get it right.

-tgq



From: "Jérôme M. Berger"
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 15:49:16
Message: <4a25823c$1@news.povray.org>
Bill Pragnell wrote:
> I should see if Cinepaint works under wine...
> 
	What for? According to the web site, it runs natively on Linux, OSX
and BSD (and not on Windows, btw).

		Jerome
-- 
mailto:jeb### [at] freefr
http://jeberger.free.fr
Jabber: jeb### [at] jabberfr





From: Bill Pragnell
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 16:20:01
Message: <web.4a25894d75b7d3c969f956610@news.povray.org>
"Jérôme M. Berger" <jeb### [at] freefr> wrote:
> Bill Pragnell wrote:
> > I should see if Cinepaint works under wine...
> >
>  What for? According to the web site, it runs natively on Linux, OSX
> and BSD (and not on Windows btw)

Oops. I must have been thinking of something else. I've done quite a lot of
HDR-related software searches recently; it's obviously all coalescing in my
brain :)



From: Bill Pragnell
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 16:30:00
Message: <web.4a258ad075b7d3c969f956610@news.povray.org>
"Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> 3) This is a little secret I didn't want to describe yet, as it can be tricky to
> master and involves a little math:
[snip]
> As I said, this can be tricky and tedious, but I find it does help somewhat if
> you can get it right.

Cunning indeed. I may give this a try if all else fails.

It sounds like distant environments centred properly in shot will be the way to
go for the time being, however.

Thanks for all the detail!



From: Bill Pragnell
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 17:05:00
Message: <web.4a2593c775b7d3c969f956610@news.povray.org>
"Trevor G Quayle" <Tin### [at] hotmailcom> wrote:
> > > Maybe post some details of your difficulties, or if you'd like you can email
> > > me directly.

Here's a pair of angular maps that demonstrate my issue (scaled down to 512x512,
but it's still evident). I've marked corresponding pixel locations with red
crosses.

http://www.infradead.org/~wmp/angular_eg.png

Interestingly, now that I actually look at this in detail, it seems to be most
pronounced on the horizontal. This suggests to me that keeping the ball more
central will definitely help...

Bill



From: Edouard
Subject: Re: Experiments with light probes
Date: 2 Jun 2009 18:15:00
Message: <web.4a25a3b275b7d3c9b1aa47b00@news.povray.org>
"Bill Pragnell" <bil### [at] hotmailcom> wrote:

> That's about right, my camera's 7.2Mp and I can juuuust get it focused at full
> zoom from about half a meter away, although I can't get it to focus full-frame.
> I'm getting 1700 pixels square after cropping. Perhaps I'll try the focus
> override - forgot about that.

Oh - one more thing I forgot: if your camera has an actual aperture (and many
of the Canon point-and-shoots don't), try to set it to the smallest value to
increase the depth of field.

> I think it's stainless steel, so I shouldn't have any problems. I've never had
> stainless steel rust before except through wet contact over months. Where on
> earth can I get it chrome-plated, and how much would that cost?! :)

I thought mine were stainless steel, but maybe they were just steel...

Chrome plating cost is usually a function of how big the object is - and a
ball bearing is pretty small. There's often a minimum charge involved, though.
Mine was about 20 pounds because of that minimum. If I were to do more balls,
I'd shop around to see if someone could do it cheaper. Just look in the yellow
pages for chrome platers in your area.

The only tricky part is how to mount the ball in the chrome plating tank. I
got them to weld a wire onto the ball and hang it in the tank. You end up with
a spot where the wire was attached, but that can go on the bottom. I also got
some small bubbles on the opposite side, but there's enough clear area to take
a good picture.

> > I use HDR shop to convert the images from spherical mirror projection (i.e. the
> > HDR photograph) into Latitude/Longitude format, then do the stitching in
> > Photoshop. I think there is an HDR version of GIMP - Cinepaint? Everything is
> > much simpler to do in square lat/long format, and POV can use the resulting
> > images just fine.
>
> Actually the blending is very straightforward in HDRShop, I can knock up a mask
> in the GIMP in about 5 minutes. I don't have Photoshop, so I am limited in my
> retouching facilities. I should see if Cinepaint works under wine...

Using Photoshop (or something like Cinepaint) is just a preference I have, but
I do find it easier to manually touch things up in lat/long format.

Cheers,
Edouard.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.