On 1/27/2011 2:02 PM, Alain wrote:
> If your actual range is smaller, with some subtle darker details, then
> your 255, or less, can be enough.
How many people actually design scenes with light_sources or ambient
textures over 255? The main reason I am concerned is so I can update
Rune's illusion.inc which relies on images generated with POV...
> If your high range is from a night scene where everything is dark, it
> could be an idea to actually multiply your values...
I'm really hoping to develop some "one size fits all" default values,
and let people worry about adjusting things only when it's absolutely
necessary.
Sam
On 1/27/2011 11:52 AM, Jaime Vives Piqueres wrote:
> As you can read on the prior thread, I've slapped my face again and
> thanks to it I've discovered the OpenEXR output. It's working perfectly
> to generate the light maps with alpha, but when using it back into the
> texture via functions, I was getting a sort of "solarize" effect. Then I
> read your post and realized this was the problem... thanks! Now I can
> finish my baking tutorial! :)
Glad I could help! I'm looking forward to baking some textures myself
some day. Your tutorial will make things much clearer, I'm sure :)
Oh, since you are gaining knowledge about mesh-based cameras, do you
know if they can be used to render certain effects like custom focal
blur? I seem to remember something about the mesh camera's ability to
take more than one sample, for AA or something.
Sam
From: Jim Holsenback
Subject: Re: HDR images as functions: is this right?
Date: 27 Jan 2011 18:39:40
Message: <4d42023c@news.povray.org>
On 01/27/2011 06:55 PM, stbenge wrote:
> On 1/27/2011 11:52 AM, Jaime Vives Piqueres wrote:
>> As you can read on the prior thread, I've slapped my face again and
>> thanks to it I've discovered the OpenEXR output. It's working perfectly
>> to generate the light maps with alpha, but when using it back into the
>> texture via functions, I was getting a sort of "solarize" effect. Then I
>> read your post and realized this was the problem... thanks! Now I can
>> finish my baking tutorial! :)
>
> Glad I could help! I'm looking forward to baking some textures myself
> some day. Your tutorial will make things much clearer, I'm sure :)
>
> Oh, since you are gaining knowledge about mesh-based cameras, do you
> know if they can be used to render certain effects like custom focal
> blur? I seem to remember something about the mesh camera's ability to
> take more than one sample, for AA or something.
>
> Sam
>
>
I'm trying to coax a sample scene and write-up for the distribution from
this poster:
http://www.adamcrume.com/blog/archive/2011/01/19/forcing-extreme-supersampling-with-pov-ray
From: Jaime Vives Piqueres
Subject: Re: HDR images as functions: is this right?
Date: 28 Jan 2011 04:44:51
Message: <4d429013@news.povray.org>
> I'm really hoping to develop some "one size fits all" default
> values, and let people worry about adjusting things only when it's
> absolutely necessary.
>
> Sam
From OpenEXR technical documentation:
----------------------------------------------------------------------
half numbers have 1 sign bit, 5 exponent bits, and 10 mantissa bits. The
interpretation of the sign, exponent and mantissa is analogous to
IEEE-754 floating-point numbers. half supports normalized and
denormalized numbers, infinities and NANs (Not A Number). The range of
----------------------------------------------------------------------
I checked a POV-Ray generated .exr with IC, and it seems to use the "half"
format... so, I suppose this means we can use 65000 as a safe value for
POV-Ray generated .exr files?
--
Jaime Vives Piqueres
La Persistencia de la Ignorancia
http://www.ignorancia.org
From: Jaime Vives Piqueres
Subject: Re: HDR images as functions: is this right?
Date: 28 Jan 2011 05:05:59
Message: <4d429507$1@news.povray.org>
> Glad I could help! I'm looking forward to baking some textures myself
> some day. Your tutorial will make things much clearer, I'm sure :)
I'm not so sure... :)
> Oh, since you are gaining knowledge about mesh-based cameras, do you
> know if they can be used to render certain effects like custom focal
> blur? I seem to remember something about the mesh camera's ability
> to take more than one sample, for AA or something.
Yes, you can use several "stacked" meshes to accomplish a sort of AA,
as in the example Jim pointed out.
Theoretically, it could be used to do focal blur, but I've not tried
that experiment yet (it's on the list, of course). I suppose it's just a
matter of creating the meshes so that the face normals are the same on
the focused region, and start to diverge progressively with the face
distance to the focus point. But I'm really bad at theorizing... I usually
prove myself wrong when I try things for real.
--
Jaime Vives Piqueres
La Persistencia de la Ignorancia
http://www.ignorancia.org
From: Jaime Vives Piqueres
Subject: Re: HDR images as functions: is this right?
Date: 28 Jan 2011 06:19:54
Message: <4d42a65a@news.povray.org>
> Ahh!!! Now I get it. It is not the function itself, it is the range
> of the color_map that limits to the 0..1 range. Of course. So Jaime's
> min() makes perfect sense and your solution overcomes this
> limitation.
So, this seems to confirm my guess that the dot-artifacts are caused by
the image interpolation returning values greater than 1, isn't it? Somehow
this doesn't show on direct usage, but only when the interpolated
images are used in functions... time to file the bug report, I guess.
--
Jaime Vives Piqueres
La Persistencia de la Ignorancia
http://www.ignorancia.org
From: Christian Froeschlin
Subject: Re: HDR images as functions: is this right?
Date: 29 Jan 2011 13:08:16
Message: <4d445790$1@news.povray.org>
stbenge wrote:
> I'm really hoping to develop some "one size fits all" default values,
> and let people worry about adjusting things only when it's absolutely
> necessary.
Basically, what you're trying to do is to encode a large but finite
value range in the interval [0,1], which is perfectly fine
mathematically but breaks down in real life due to finite precision.
If you are only concerned about preventing wraparound and preserving
detail in the dark areas you can use a non-linear encoding (e.g. based
on a logarithm function). Then the reconstruction requires you to
approximate an exponential mapping using many color map entries.
The drawback is still loss of detail in bright areas.
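As a sketch of that idea (in Python rather than SDL; VMAX and the helper names are just for illustration, not anything from POV-Ray):

```python
import math

# Illustrative sketch: logarithmic encoding of a large range [0, VMAX]
# into [0, 1], so dark detail gets most of the encoding resolution.
VMAX = 65536.0
SCALE = math.log(1.0 + VMAX)

def encode(v):
    """Compress v in [0, VMAX] into [0, 1] logarithmically."""
    return math.log(1.0 + v) / SCALE

def decode(e):
    """Exponential reconstruction; in POV-Ray this mapping would have
    to be approximated with many color_map entries."""
    return math.exp(e * SCALE) - 1.0

# Round trip in double precision: dark values are preserved well; the
# loss in bright areas appears once the encoded value is stored at
# limited precision (e.g. quantized color map entries).
for v in (0.001, 0.5, 100.0, 60000.0):
    assert abs(decode(encode(v)) - v) <= 1e-6 * (1.0 + v)
```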
Alternatively, you can use the method of dividing by 256
but allowing multiple slices. For example f_r_0_256 could
be a function that has the values of f_r/256 except that
it is 0 where f_r > 256. f_r_256_512 is a function that
has value "f_r/256 - 1" where f_r > 256 and f_r <= 512
but is 0 everywhere else, and so on.
These can be turned into pigments that are black outside
their definition range and reconstruct a slice of brightness
within (actually, to separate unused black from the first
payload color value in the map it might be better not to
use the full interval (0,1] for encoding values, but
rather only an interval [1/256,1]). Averaging these
with appropriate scaling should yield the pigment.
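A quick numeric check of the slicing scheme (Python standing in for the hypothetical f_r_0_256-style functions; the names and slice count are mine):

```python
def slice_value(v, i, width=256.0):
    """Value of the i-th hypothetical slice function (f_r_0_256,
    f_r_256_512, ...): (v - lo)/width inside (lo, hi], 0 elsewhere."""
    lo, hi = i * width, (i + 1) * width
    return (v - lo) / width if lo < v <= hi else 0.0

def reconstruct(v, n_slices=4, width=256.0):
    """Sum the slices back, re-offsetting each by its lower bound --
    the role played by the scaled, averaged pigments."""
    total = 0.0
    for i in range(n_slices):
        s = slice_value(v, i, width)
        if s > 0.0:
            total += s * width + i * width
    return total

# With power-of-2 widths the round trip is exact in floating point.
for v in (0.5, 100.0, 300.0, 1000.0):
    assert reconstruct(v) == v
```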
The best solution would probably be to provide built-in
support for building pigments from functions without going
via kludgy pattern, color_map and averaging in POV-Ray.
On 1/28/2011 2:05 AM, Jaime Vives Piqueres wrote:
>> Oh, since you are gaining knowledge about mesh-based cameras, do you
>> know if they can be used to render certain effects like custom focal
>> blur? I seem to remember something about the mesh camera's ability
>> to take more than one sample, for AA or something.
>
> Yes, you can use several "stacked" meshes to accomplish a sort of AA,
> as in the example Jim pointed out.
>
> Theoretically, it could be used to do focal blur, but I've not tried
> that experiment yet (it's on the list, of course). I suppose it's just a
> matter of creating the meshes so that the face normals are the same on
> the focused region, and start to diverge progressively with the face
> distance to the focus point. But I'm really bad at theorizing... I usually
> prove myself wrong when I try things for real.
Cool, I'll have to eventually try it. I'm not really pleased with the
time it takes to generate the meshes, but some extra time parsing might
be worth it.
There's also an extension of that idea for rendering motion blur in one
step (or two, if the textures are baked for a speed increase). Back when
I was playing with real time raytracing in POV, I resorted to creating
multiple copies of the entire scene in 3D space. Since you are only
allowed to move the camera with RTR, I just moved the camera from one
scene instance to the next. Something similar can be used for camera
mesh motion blur, where the changing scene is copied and rendered using
the mesh camera. I'm worried that I'd use up too much RAM, though...
Sam
On 1/29/2011 10:08 AM, Christian Froeschlin wrote:
> The best solution would probably be to provide built-in
> support for building pigments from functions without going
> via kludgy pattern, color_map and averaging in POV-Ray.
One (currently impossible) solution for what I'm trying to do would be
to warp a pigment with a function directly. Then I could have three
warps, one for each axis, using Rune's base functions for applying a
camera-based projection. There would be no need to build a pigment from
converted functions, and the depiction of HDR and OpenEXR images would
be accurate.
But maybe you're right, directly building pigment from functions would
probably be best, since then people could apply luminance transforms
among other things.
I'm really not interested in developing complex workarounds for a result
that may end up rendering slower than three averaged pigments, or for
something only a tiny portion of POVvers would actually need. I've seen
no apparent loss of detail when dividing and multiplying by 255, but
then again I use sane light_source and ambient settings (<10.0 in most
cases).
Sam
Am 28.01.2011 10:44, schrieb Jaime Vives Piqueres:
> From OpenEXR technical documentation:
>
> ----------------------------------------------------------------------
> half numbers have 1 sign bit, 5 exponent bits, and 10 mantissa bits. The
> interpretation of the sign, exponent and mantissa is analogous to
> IEEE-754 floating-point numbers. half supports normalized and
> denormalized numbers, infinities and NANs (Not A Number). The range of
> ----------------------------------------------------------------------
>
> I checked a POV-Ray generated .exr with IC, and it seems to use the "half"
> format... so, I suppose this means we can use 65000 as a safe value for
> POV-Ray generated .exr files?
POV-Ray OpenEXR output does indeed use the 16-bit so-called "half
precision" IEEE 754 floating-point format (which BTW has made it into
the IEEE 754 standard by now, as of IEEE 754-2008).
A value of 65000 is not sufficient, though. As the half precision float
format uses 5 exponent bits and an exponent bias of 15, with exponent
codes 0 and 31 being reserved for special values, and the 10-bit
mantissa representing only the fractional part of a value >= 1.0, the
largest representable number is actually just a bit below 2.0*(2^15) =
65536.0. (Also note that the lowest "normalized" value is 1.0*(2^-14),
and the precision of "subnormal" values is 1*(2^-24).)
To minimize loss of precision due to rounding errors, you should use a
power-of-2 factor, so 65536 should be the value to go for.
There's actually no need to worry about precision errors for small
values. For all computations, POV-Ray internally uses at least the
"float" C++ data type, which on most machines is implemented as the
32-bit "single precision" IEEE 754 floating-point type with 8 exponent
bits and an exponent bias of 127 - allowing it to represent exponents in
the range from -126 to +127 (again, exponent codes 0 and 255 are
reserved) - and a mantissa of 23 (+1) bits; this has the following
implications:
- For "normalized" half precision values, a division by 65536 boils down
to nothing more than subtracting 16 from the exponent value, giving a
resulting exponent of no less than -30, well within the float range (the
mantissa value will be left unchanged).
- For "subnormal" half precision values, a division by 65536 boils down
to shifting the mantissa N bits to the left so that the most significant
non-zero bit ends up in the implied 24th mantissa bit, and setting the
exponent value to -14-N, giving a resulting exponent value of no less
than -37, still well within the limits of the IEEE 754 single-precision
float format.
In either case, the internal data format is more than sufficient to
represent the resulting value range at full precision, and with a
power-of-2 divisor you'll not even get any rounding problems.
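The limits above can be double-checked with a few lines of Python (the constants are the standard binary16 parameters, nothing POV-Ray-specific):

```python
# Half precision (binary16): 5 exponent bits, bias 15, 10 mantissa bits.
MANT_BITS, BIAS = 10, 15

# Largest finite half: exponent code 30 (31 is reserved for Inf/NaN),
# mantissa all ones.
max_half = (2.0 - 2.0 ** -MANT_BITS) * 2.0 ** (30 - BIAS)
assert max_half == 65504.0          # "just a bit below" 65536.0

# Smallest normalized and smallest subnormal magnitudes:
min_normal = 2.0 ** (1 - BIAS)                   # 2^-14
min_subnormal = 2.0 ** (1 - BIAS - MANT_BITS)    # 2^-24

# Dividing by a power of two only changes the exponent, so the round
# trip through /65536 is exact (demonstrated here in Python's 64-bit
# floats; the same holds in a C++ "float" for the half value range):
for v in (max_half, min_normal, min_subnormal, 1.5, 1000.25):
    assert (v / 65536.0) * 65536.0 == v
```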