POV-Ray : Newsgroups : povray.binaries.images : IBL : Re: IBL
  Re: IBL  
From: Trevor G Quayle
Date: 21 Apr 2006 15:25:00
Message: <web.4449311ff8d220636c4803960@news.povray.org>
Jaime Vives Piqueres <jai### [at] ignoranciaorg> wrote:
> Trevor G Quayle wrote:
> > Here is my attempt at doing this (I assume I know what you are doing, i.e.
> > you are trying to assemble a series of LDR images into an HDR image within
> > POV).  The difficult part is determining a camera response curve (similar
> > to monitor gamma) and the relative/absolute F-Stop values for the given
> > images. I'm not even sure I've applied the response curve correctly here,
> > but it seems ok even though the results are slightly different than that
> > obtained using HDRShop.
>
>    Thanks for sharing! I didn't try to understand what you done there,
> but it gives much better results than the simple linear approach I was
> using. And it's also much more practical to have it on a single layer,
> of course. Now, if I can convince the camera to focus on the reflection...
>
> --
> Jaime

What I did was not too difficult.
1) First I mapped the images using the onion pattern, so that loops could simplify the code for varying numbers of input images.
2) Then, for each onion layer (representing one LDRI), convert the image colour (for each r, g, b channel) by COL^CamRsp, which converts the pixel colour to the corresponding real-world colour according to the camera response.
3) Divide the resulting colour by 2^FStop; this converts the colour of the corresponding image to its equivalent at 0 stops.
4) Use the maximum colour found.  Any colours that were clipped in the corresponding LDRIs (i.e. values of 0 or 1) get dropped out.  (Any colours that were still clipped at the lowest and highest F-stop values will remain clipped, however.)
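The per-pixel math in steps 2-4 can be sketched in Python (a hedged sketch: the power-law `cam_rsp` exponent and the 2^FStop normalization come straight from the description above, but the function names and sample values are mine; the onion-pattern mapping of step 1 is POV-specific and skipped here):

```python
def ldr_to_world(ldr, cam_rsp, fstop):
    """Steps 2-3: invert the camera response (COL^CamRsp), then
    divide by 2^FStop to normalize the sample to 0 stops."""
    return (ldr ** cam_rsp) / 2.0 ** fstop

def merge_pixel(ldr_values, fstops, cam_rsp):
    """Step 4: keep the maximum normalized value per channel.
    A sample clipped to 1 underestimates the true brightness once
    divided by 2^FStop, and a sample truncated to 0 stays 0, so
    both lose to any unclipped sample in the max."""
    return max(ldr_to_world(ldr, cam_rsp, fs)
               for ldr, fs in zip(ldr_values, fstops))
```

For example, a pixel of true brightness 2.0 stores as a clipped 1.0 at 0 stops, but the unclipped sample from a -2 stop exposure normalizes back to 2.0, and the max picks it up.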

This is a very rough method and assumes that you know the camera response
and the FStop values of each LDRI.  Programs like HDRShop can do a much
better job of estimating these sorts of things.

There is no need to add the successive images together (it wouldn't give the proper result anyway). What HDRI creation tries to achieve is to find the real-world brightness of every pixel in the image.  If you had only one image with no clipped colours, it could act directly as an HDRI.  However, due to photographic and image-storage restrictions, colours do get clipped.  RGB values are limited to 1, but the brightnesses of lights tend to be much higher, especially as the F-stop value gets higher, so they get clipped to 1 in the image file.  On the other hand, dark shadow areas can get clipped to 0 due to decimal-place truncation at lower F-stop values.
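A minimal capture model illustrates both failure modes (a simple power-law response and 8-bit quantization are my assumptions here, not something taken from POV or HDRShop):

```python
def capture(world, fstop, cam_rsp=2.2, depth=255):
    """Simulate storing a real-world value in an 8-bit LDR image:
    scale by the exposure, clamp to [0, 1], apply the response
    curve, and quantize to the file's bit depth."""
    linear = min(max(world * 2.0 ** fstop, 0.0), 1.0)
    return round(linear ** (1.0 / cam_rsp) * depth) / depth

bright = 8.0          # a light source well above 1.0
capture(bright, 0)    # clips to 1.0 at this exposure
capture(bright, -4)   # unclipped: stored as an in-range value

dark = 1e-6           # a deep shadow value
capture(dark, -4)     # truncates to 0.0 at a low F-stop
capture(dark, 6)      # nonzero: recoverable at a high F-stop
```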

So basically, if you take a low F-stop image and a high F-stop image and convert them to an equal F-stop level, you just need to check for the greatest pixel value between the two.  Ideally, for pixels not clipped in either LDRI, the values should be the same.  For any pixel that was clipped in one LDRI but not the other, the clipped one gets dropped out.  And for any pixel clipped in both, well, it'll stay clipped; there is no easy way to determine the actual value, as there isn't enough information.
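The "values should be the same" claim can be checked numerically (again assuming a simple power-law response; the 2.2 exponent and the 0.25 sample value are illustrative):

```python
CAM_RSP = 2.2  # assumed response-curve exponent (illustrative)

def recover(ldr, fstop):
    """Invert the response curve, then normalize to 0 stops."""
    return ldr ** CAM_RSP / 2.0 ** fstop

world = 0.25  # true brightness; neither exposure below clips it
low  = (world * 2.0 ** -1) ** (1.0 / CAM_RSP)  # stored at -1 stop
high = (world * 2.0 ** +1) ** (1.0 / CAM_RSP)  # stored at +1 stop

# recover(low, -1) and recover(high, +1) both return ~0.25
```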

Hope this helps you understand HDRI (and what I did) a bit more, and that I didn't bore you to death.  I don't profess to be an expert, but I have been working with HDRI a lot lately, so I've had to come to understand it myself.

(FYI: in case you don't know, each F-stop doubles (or halves) the light let in, so increasing an image by n stops results in an image 2^n times brighter.)

-tgq


