"Thorsten Froehlich" <tho### [at] trfde> wrote in message
news:3ee74bb0$1@news.povray.org...
> In article <3ee74af9$2@news.povray.org> , "Michael Goldshteyn"
> <mgo### [at] n-o-s-p-a-m-earthlinknet> wrote:
>
> > Most modern PCs have a very powerful 3D video card with a GPU that is
> > hundreds if not thousands of times faster than the CPU for certain
> > calculations. I was thinking that if some of the POV computations
> > could be offloaded to the GPU, we could speed up renders by a very
> > large factor. The big question is which routines those would be and
> > which chunks of the GPU could be utilized. Certainly matrix operations
> > could be offloaded at the very least, although I don't know to what
> > extent that would improve performance. It's just a shame that there is
> > this uber-powerful floating-point processor sitting in most PCs that
> > can't be used to help things move along. What do others think?
>
> You need double precision floating-point processing. Sorry!
>
> Thorsten
Sorry, I accidentally e-mailed my reply instead of posting it.
In any case, a quote from NVIDIA's web site:

"And, powered by the second-generation NVIDIA CineFX(TM) 2.0 engine with
the industry's only true 128-bit precision processing--the GeForce FX 5900
GPUs take cinematic-quality special effects to new levels while providing
the industry's most compatible and reliable gaming platform."

Now, double precision uses 64-bit floating point, so I think this is more
than enough.
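To make the precision point concrete, here is a minimal C sketch (my own
illustration, not POV-Ray code) of what the difference costs at
ray-tracing scales: a large scene coordinate plus a tiny intersection
offset survives in a 64-bit double but is rounded away in a 32-bit float.

#include <stdio.h>

int main(void)
{
    /* A large scene coordinate plus a tiny intersection offset.
       A float carries roughly 7 decimal digits, so the 0.001 is
       rounded away; a double carries about 15-16 and keeps it. */
    float  f = 100000.0f + 0.001f;
    double d = 100000.0  + 0.001;

    printf("float : %.6f\n", (double)f);  /* 100000.000000 - offset lost */
    printf("double: %.6f\n", d);          /* 100000.001000 - offset kept */
    return 0;
}

This is the kind of arithmetic any GPU offload would have to get right.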