Hi all!

Considering that many 3D applications (e.g. games) use single precision for their
renders, I am wondering where we really need double precision for 'normally'
sized scenes, I mean scenes that do not mix microscopic and gigantic distances.

For sure, if the dynamic (the range of distances involved) is huge, we absolutely
need double precision, at least for object coordinates. But if the dynamic is
reasonable, is it mandatory to use double precision for the following features:

ray - bounding box intersection tests?
ray - object intersection computation?
others (TBD)?

Independently of the dynamic in scenes and of CPU hardware considerations (see
http://research.colfaxinternational.com/file.axd?file=2012%2F4%2FColfax_FLOPS.pdf),
why would single precision not be sufficient for:

RGBFT value storage, which might imply:
radiosity/photon map elaboration?
texturing and lighting?
media (even though it uses integration methods)?
others (TBD)?

What do you think? Has the subject already been debated here?

Bruno.
"Bruno Cabasson" <bru### [at] cabassoncom> wrote:
> Hi all!
>
> Considering that many 3D applications (eg games) use single precision for their
> renders, I am wondering where we really need double precision for 'normally'
> sized scenes. I mean scenes that do not mix microscopic and gigantic distances.
>
> For sure, if the dynamic is huge, we absolutely need double precision, at least
> for object coordinates. But, if the dynamic in distances is reasonable, is it
> mandatory to use double precision for the following features :
>
> ray - bounding box intersection tests ?
> ray - object intersection computation ?
> others (TBD) ?
>
[snip]
>
> What do you think ? Has the subject already been debated here ?
The dynamic doesn't even have to be huge. See:
http://news.povray.org/povray.binaries.images/thread/%3Cweb.47e40ce72d3a644685de7b680@news.povray.org%3E/?ttop=304745&toff=500
Since the error was not lighting-related, I don't think the 37,417 POV-unit
distance to the light source should be considered in the dynamic. What sort of
dynamic do you have in mind?
POV-Ray does not seem suited to single precision. Perhaps it is the nature of
mathematically defined shapes that they require higher precision than mesh-based
systems.
"Cousin Ricky" <rickysttATyahooDOTcom> wrote:
> "Bruno Cabasson" <bru### [at] cabassoncom> wrote:
> > Hi all!
> >
> > Considering that many 3D applications (eg games) use single precision for their
> > renders, I am wondering where we really need double precision for 'normally'
> > sized scenes. I mean scenes that do not mix microscopic and gigantic distances.
> >
> > For sure, if the dynamic is huge, we absolutely need double precision, at least
> > for object coordinates. But, if the dynamic in distances is reasonable, is it
> > mandatory to use double precision for the following features :
> >
> > ray - bounding box intersection tests ?
> > ray - object intersection computation ?
> > others (TBD) ?
> >
> [snip]
> >
> > What do you think ? Has the subject already been debated here ?
>
> The dynamic doesn't even have to be huge. See:
>
> http://news.povray.org/povray.binaries.images/thread/%3Cweb.47e40ce72d3a644685de7b680@news.povray.org%3E/?ttop=304745&toff=500
>
> Since the error was not lighting-related, I don't think the 37,417 POV-unit
> distance to the light source should be considered in the dynamic. What sort of
> dynamic do you have in mind?
>
> POV-Ray does not seem suited to single precision. Perhaps it is the nature of
> mathematically defined shapes that they require higher precision than mesh-based
> systems.
Meshes also involve maths ...

The kind of dynamic I have in mind could be a 1-millimeter detail close to the
camera and a huge object a million miles away. Take the recent post with the
toroidal planet in pbi. Imagine the camera sees a flower on its ground, with
small petals.

My question was about the fundamental need for double precision in ray tracing
for 'normal' scenes: which features need DP, and which do not.
From: Patrick Elliott
Subject: Re: single precision vs double precision
Date: 15 Nov 2012 12:15:57
Message: <50a5234d@news.povray.org>
On 11/15/2012 8:43 AM, Bruno Cabasson wrote:
> "Cousin Ricky" <rickysttATyahooDOTcom> wrote:
>> "Bruno Cabasson" <bru### [at] cabassoncom> wrote:
>>> Hi all!
>>>
>>> Considering that many 3D applications (eg games) use single precision for their
>>> renders, I am wondering where we really need double precision for 'normally'
>>> sized scenes. I mean scenes that do not mix microscopic and gigantic distances.
>>>
>>> For sure, if the dynamic is huge, we absolutely need double precision, at least
>>> for object coordinates. But, if the dynamic in distances is reasonable, is it
>>> mandatory to use double precision for the following features :
>>>
>>> ray - bounding box intersection tests ?
>>> ray - object intersection computation ?
>>> others (TBD) ?
>>>
>> [snip]
>>>
>>> What do you think ? Has the subject already been debated here ?
>>
>> The dynamic doesn't even have to be huge. See:
>>
>> http://news.povray.org/povray.binaries.images/thread/%3Cweb.47e40ce72d3a644685de7b680@news.povray.org%3E/?ttop=304745&toff=500
>>
>> Since the error was not lighting-related, I don't think the 37,417 POV-unit
>> distance to the light source should be considered in the dynamic. What sort of
>> dynamic do you have in mind?
>>
>> POV-Ray does not seem suited to single precision. Perhaps it is the nature of
>> mathematically defined shapes that they require higher precision than mesh-based
>> systems.
>
> Meshes also involve maths ...
>
> The kind of dynamic I have in mind could be a 1 millimeter detail close to the
> camera, and a million miles away huge object. Take the recent post with the
> toroidal planet in pbi. Imagin the camera sees a flower on its ground with small
> petals.
>
> My question was about the fundamental need of double precision in Ray-Tracing
> for 'normal' scenes. What features need DP, and what need not.
>
Uh.. All of them? lol

Seriously though, the problem is close distances. Things like "coincident
surfaces" can appear much more easily when the precision is lower; the higher
the precision, the less likely you are to get miscalculations. I presume there
are other cases as well. But in this case, in principle, you can have objects
layered "much closer" to each other without accidental overlaps when the math
tries to work out which one got hit first.

Still, I think there has got to be some better solution to those things than
just throwing bigger and bigger numbers at them... lol
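As a rough illustration of the effect described above, here is a minimal C++
sketch (the distances are made up, and this is not actual POV-Ray code) showing
how an offset meant to keep two surfaces apart can vanish entirely in single
precision while surviving comfortably in double precision:

// Two surfaces 0.0001 units apart, about 5000 units from the ray origin.
#include <cstdio>

int main()
{
    // A wall at z = 5000 and a poster placed 0.0001 units in front of it.
    // For a ray shot from the origin straight along +z, the hit distances
    // are simply these z values.
    double wall   = 5000.0;
    double poster = 5000.0 - 0.0001;

    float wall_f   = (float)wall;
    float poster_f = (float)poster;

    printf("double gap: %.10f\n", wall - poster);                // 0.0001, as intended
    printf("float  gap: %.10f\n", (double)(wall_f - poster_f));  // 0.0000000000

    // A float near 5000 can only step in increments of roughly 0.0005, so both
    // surfaces land on the same representable value; which one a ray "hits
    // first" is then decided by noise, which is exactly the coincident-surface
    // artifact described above.
    return 0;
}

In double precision the two hit distances stay distinct by a wide margin, so the
nearer surface always wins the depth comparison.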
On 15.11.2012 15:40, Bruno Cabasson wrote:
> Considering that many 3D applications (eg games) use single precision for their
> renders, I am wondering where we really need double precision for 'normally'
> sized scenes. I mean scenes that do not mix microscopic and gigantic distances.
>
> For sure, if the dynamic is huge, we absolutely need double precision, at least
> for object coordinates. But, if the dynamic in distances is reasonable, is it
> mandatory to use double precision for the following features :
>
> ray - bounding box intersection tests ?
> ray - object intersection computation ?
> others (TBD) ?
Yes, for some object types it is absolutely necessary, at least for the
internal math.

Likewise, if we were using single precision, coincident-surface issues
would be even worse. (A similar problem exists in 3D games, where it is called
"Z-fighting".)
> why would single precision not be sufficient for :
>
> RGBFT values storage, which might imply :
> radiosity/photon maps elaboration ?
> texturing and lighting ?
Absolutely. That's why color storage and math already /is/
single-precision ;-). (An exception is color storage at SDL level, but
that's only because the SDL doesn't differentiate between colors and
vectors.)
> media (even though it uses integration methods)
Not sure what you mean by this. As far as color math is concerned, see
above.
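To see why 24 mantissa bits are plenty for color work, here is a tiny C++ sketch
(the channel value is made up, and this is not POV-Ray code): a single-precision
channel quantizes to the same 16-bit output value as its double-precision
counterpart.

// Single vs double precision for one color channel, quantized to 16 bits
// the way an image file writer would.
#include <cstdio>

int main()
{
    double cd = 0.73412345678901;      // some channel value in [0, 1]
    float  cf = (float)cd;             // the same value stored as a float

    unsigned qd = (unsigned)(cd * 65535.0 + 0.5);
    unsigned qf = (unsigned)((double)cf * 65535.0 + 0.5);

    // The float differs from the double by at most ~6e-8, far below one step
    // of a 16-bit channel (~1.5e-5), so here both quantize to the same value.
    printf("double -> %u, float -> %u\n", qd, qf);
    return 0;
}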
On 15.11.2012 17:43, Bruno Cabasson wrote:
>> POV-Ray does not seem suited to single precision. Perhaps it is the nature of
>> mathematically defined shapes that they require higher precision than mesh-based
>> systems.
>
> Meshes also involve maths ...
Mesh math is pretty trivial, requiring only linear equations to be solved. But
even a shape as seemingly simple as a sphere already involves quadratic
equations, and cubic equations aren't uncommon in POV-Ray either. Some
primitives require even higher-order polynomials.
And the higher the order of a polynomial, the higher the dynamic range
needed to solve it.
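To make that concrete, here is a minimal C++ sketch of the naive textbook
ray-sphere quadratic solved once in double and once in float (the scene values
are made up, and this is not POV-Ray's actual solver): a sphere of radius 0.01
sitting 10000 units from the ray origin, i.e. a fairly modest dynamic.

// Solve t^2 + 2*b*t + c = 0 and return the nearer positive root, or -1 for a miss.
#include <cmath>
#include <cstdio>

template <typename T>
T nearestHit(T b, T c)
{
    T disc = b * b - c;              // the catastrophic cancellation happens here
    if (disc < T(0)) return T(-1);   // ray misses the sphere
    T t = -b - std::sqrt(disc);
    return (t > T(0)) ? t : T(-1);
}

int main()
{
    // Ray from the origin along +x, sphere centered at (10000, 0, 0), radius 0.01.
    // With a unit direction D and L = origin - center: b = D.L, c = L.L - r*r.
    double cx = 10000.0, r = 0.01;
    double b = -cx;                  // D.L for this axis-aligned setup
    double c = cx * cx - r * r;      // L.L - r^2

    printf("double: t = %.6f\n", nearestHit(b, c));                       // ~9999.99
    printf("float : t = %.6f\n", (double)nearestHit((float)b, (float)c)); // 10000.00

    // In single precision, b*b and c both round to exactly 1.0e8, the
    // discriminant collapses to zero, and the reported hit sits a full radius
    // away from the true surface; for rays that merely graze the sphere,
    // hit-or-miss becomes essentially random.
    return 0;
}

The quadratic squares a coordinate of order 1e4 into a coefficient of order 1e8
and then expects to recover a difference of order 1e-4 from it; that is the
growth in dynamic range described above, and single precision simply has no
bits left for it.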
clipka <ano### [at] anonymousorg> wrote:
> On 15.11.2012 17:43, Bruno Cabasson wrote:
>
> >> POV-Ray does not seem suited to single precision. Perhaps it is the nature of
> >> mathematically defined shapes that they require higher precision than mesh-based
> >> systems.
> >
> > Meshes also involve maths ...
>
> Mesh math is pretty trivial, requiring only linear equations to be
> solved. But even a seemingly simple thing as a sphere already involves
> quadratic equations, and cubic equations aren't uncommon in POV-Ray
> either. Some primitives require even higher-order polynomials.
>
> And the higher the order of a polynomial, the higher the dynamic range
> needed to solve it.
Let me put it differently. Since today's GPUs' native format is single
precision, which is much faster than their emulated double precision (AFAIK),
and since POV-Ray might one day use GPU or hardware-accelerated computing (see a
recent post on the subject), I was curious to know where double precision is
really necessary.
On 16.11.2012 09:47, Bruno Cabasson wrote:
> Let me put it differently. Since today's GPUs' native format is single
> precision, which is much faster than their emulated double precision (AFAIK),
> and if POV-Ray were to use GPU or hardware-accelrated computing (one day ...,
> see a recent post on the subject), I was curious to know where double precision
> is really necessary.
>
Contemporary GPUs have native support for 64-bit floating-point precision
(nothing 'emulated'), and believe me, 64-bit versus 32-bit is the very least of
your problems when it comes to porting POV-Ray to the GPU.

I've already done that with a brutally stripped-down version of POV-Ray
(actually McPOV, though by now it is closer to DKB-Trace version 1.0 than it is
to POV-Ray) and OpenCL.

-Ive
From: Le Forgeron
Subject: Re: single precision vs double precision
Date: 16 Nov 2012 04:57:25
Message: <50a60e05@news.povray.org>
On 16/11/2012 09:47, Bruno Cabasson wrote:
> Let me put it differently. Since today's GPUs' native format is single
> precision, which is much faster than their emulated double precision (AFAIK),
> and if POV-Ray were to use GPU or hardware-accelrated computing (one day ...,
> see a recent post on the subject), I was curious to know where double precision
> is really necessary.
>
>
GPUs are great at running the *same* code over multiple data.

That suits mesh-only renderers very well: take all the mesh point/normal/whatever
data and apply one operation to all of it. (But even using a GPU puts constraints
on the data organisation: if you want to find which triangles are in a clipping
zone, you do not get back a short list of triangles. You get the full list of
triangles, with a bit (or more) per triangle recording the result of your
processing.)

In terms of generic code (latency and such):
1. Preparing the code to run on the GPU: costly.
2. Running the code on the GPU: cheap.
3. Using the result: irrelevant.

Changing the code of step 1 is expensive. But with today's screen sizes and
expectations, using a mesh with 10x or 30x more points is not a problem, thanks
to step 2.
A GPU is like a manufacturing tool that makes screws and/or nails. Making 20
finished objects per second is fine, as long as all the objects are the same.
Changing the production from a nail 2 cm long and 1 mm in diameter with a 3 mm
circular flat head to a different nail, or to a screw, is something that could
take a few minutes, hours or a day.
If you have a single floating-point operation to perform, handling it directly
on the CPU will be faster than doing the same on a GPU, because the CPU simply
has to:
1. take the floating-point data from memory into registers
2. perform the maths
3. store the result back in memory/cache
versus:
1. get the GPU resources
2. make the GPU load the relevant code for the operation
3. make the GPU access/load the data
4. get signaled that the data have been processed
The implicit next step is:
*. use the result
On the CPU path the result is already in a register/cache; on the CPU+GPU path,
loading the result back takes a few cycles too.

(On mesh-only renderers, the target is to avoid using the CPU on the data at
all: feed in the mesh data, the lights and so on, compute the different partial
images on the GPU (ambient, first reflection, shadows, ...), then combine these
partial contributions into a single image, either on the GPU or on the CPU.)
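For what it's worth, here is a minimal C++/OpenCL sketch of what those four
GPU-path steps look like in code for one single multiplication (the kernel and
names are made up for illustration, error checking is omitted, and an OpenCL SDK
is assumed). Compare the amount of setup with the one-line 'value *= factor;'
the CPU path would need:

#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>

// The "relevant code for the operation": a kernel that multiplies one float.
static const char* kSource =
    "__kernel void mul(__global float* x, float f) {"
    "    x[get_global_id(0)] *= f;"
    "}";

int main()
{
    float value = 2.0f, factor = 3.0f;

    // 1. Get the GPU resources (platform, device, context, command queue).
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context context = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, nullptr);

    // 2. Make the GPU load the relevant code for the operation.
    cl_program program = clCreateProgramWithSource(context, 1, &kSource, nullptr, nullptr);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "mul", nullptr);

    // 3. Make the GPU access/load the data.
    cl_mem buf = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(float), &value, nullptr);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);
    size_t global = 1;
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);

    // 4. Get signaled that the data have been processed, then use the result.
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(float), &value, 0, nullptr, nullptr);
    printf("GPU result: %f\n", value);   // on the CPU, all of the above is 'value *= factor;'

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}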
clipka <ano### [at] anonymousorg> wrote:
> Mesh math is pretty trivial, requiring only linear equations to be
> solved. But even a seemingly simple thing as a sphere already involves
> quadratic equations, and cubic equations aren't uncommon in POV-Ray
> either. Some primitives require even higher-order polynomials.
> And the higher the order of a polynomial, the higher the dynamic range
> needed to solve it.
To get a clearer picture of why this is so, consider that when you multiply two
values, the result requires double the bits of the original values if exact
accuracy is required. (If you then multiply the result by a third value, the
number of bits is triple that of the original values. Basically, each product
requires as many bits as the sum of the bits of its factors.)

If you don't have that many bits in the result, it will be inaccurate (because
the least-significant bits of the result will be dropped).

When talking about floating-point values, the crucial bits are the mantissa
bits: their number determines how accurate the result will be (i.e. how many
least-significant bits of the result will be dropped after a multiplication).

In this context it's easy to imagine why single-precision floats (which have 24
mantissa bits) will quickly become very inaccurate when you perform
multiplications on them. (Double-precision floats have 53 mantissa bits.)
--
- Warp
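A quick C++ sketch of that point, using two arbitrarily chosen 24-bit integers:
each fits exactly in a float's mantissa, but their roughly 48-bit product does
not, whereas a double's 53-bit mantissa holds it exactly.

// The exact product of two 24-bit integers needs up to 48 bits; a float keeps
// only the top 24 mantissa bits of it, a double keeps all of them.
#include <cstdio>

int main()
{
    float  af = 16777213.0f, bf = 10000019.0f;  // both below 2^24, exact as floats
    double ad = 16777213.0,  bd = 10000019.0;

    long long exact = 16777213LL * 10000019LL;  // the true integer product

    double prod_f = (double)(af * bf);          // multiplied and rounded in float
    double prod_d = ad * bd;                    // fits in 53 mantissa bits, exact

    printf("exact : %lld\n", exact);
    printf("float : %.0f  (error %.0f)\n", prod_f, prod_f - (double)exact);
    printf("double: %.0f  (error %.0f)\n", prod_d, prod_d - (double)exact);

    // The float result is off by a few million, because everything below the
    // 24th significant bit of the ~48-bit product has been dropped. Chain a
    // handful of such multiplications, as any non-trivial intersection formula
    // does, and the error compounds.
    return 0;
}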