Simply put, every one of the requests we see for a GPU-accelerated
POV-Ray is a case of a solution looking for a problem. Unfortunately,
it's the wrong solution. In that regard, it's more of a hammer looking
for nails.
GPU acceleration will be useful when the following conditions are met:
1) Support for sophisticated branching
2) Full double-precision accuracy
3) Large memory sets (other than textures)
4) Independent shaders running on distinct units
The fact is that no GPU currently in production satisfies a single one
of these criteria. Some are getting closer (for instance, both the
nVidia 2x0 series and the AMD 48x0 series have much better support for
double precision), but none of them is there yet.
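To make the first criterion concrete, here is a minimal CUDA sketch - a toy
kernel with invented structures, not anything taken from POV-Ray - of why
per-ray branching is the sticking point. Threads in a warp execute in
lockstep, so whenever neighbouring rays disagree (hit vs. miss, different
materials, reflection vs. refraction) the hardware runs both sides of the
branch and masks off the inactive threads, and a real scene branches far
more heavily than this:

// Toy CUDA kernel: one thread per ray, traced against a single unit sphere.
// The structures, names and "shading" are invented for illustration only.
struct Ray { float3 o, d; };   // origin and (assumed normalized) direction

__global__ void trace(const Ray *rays, float3 *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Ray r = rays[i];

    // Single-precision ray/sphere test against a unit sphere at the origin
    // (criterion 2 is about wanting this kind of math in double precision).
    float b = r.o.x*r.d.x + r.o.y*r.d.y + r.o.z*r.d.z;
    float c = r.o.x*r.o.x + r.o.y*r.o.y + r.o.z*r.o.z - 1.0f;
    float disc = b*b - c;

    // This is the branch that hurts: with real shaders on each side of it,
    // divergence within a warp wastes most of the SIMD width.
    if (disc >= 0.0f && (-b - sqrtf(disc)) > 0.0f)
        out[i] = make_float3(1.0f, 1.0f, 1.0f);   // hit: flat white stand-in
    else
        out[i] = make_float3(0.0f, 0.0f, 0.2f);   // miss: background colour
}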
Additionally, there are the following two criteria:
5) A standard interface for boards from multiple vendors
6) Enough speed to actually be useful
OpenCL *might* help with the first of these - there's no way to know for
sure until we see some implementations and can experiment with them.
The speed, on the other hand, is the real killer. TMPGEnc was recently
modified to use nVidia's CUDA, with disastrous results. A high-end nVidia
card (the 260, I believe) was compared against a 2.4 GHz Intel Core 2 Quad,
and the conversion actually ran *faster* with CUDA turned off. So much for
helping out.
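For what it's worth, one plausible contributor to results like that (an
assumption on my part, not anything TMPGEnc has published) is simply bus
traffic: every frame has to cross the PCIe bus to the card and back before
any "acceleration" happens, and that round trip plus launch overhead can
easily exceed whatever time the GPU saves. A minimal CUDA sketch that only
times the copies, using an invented 1080p frame size:

// Times the host->device->host round trip for one uncompressed frame.
// No codec work is done; the frame size is an assumption for illustration.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t frameBytes = 1920 * 1080 * 4;      // one 1080p RGBA frame

    unsigned char *hostFrame = new unsigned char[frameBytes];
    unsigned char *devFrame  = 0;
    cudaMalloc((void **)&devFrame, frameBytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(devFrame, hostFrame, frameBytes, cudaMemcpyHostToDevice);
    // ... the conversion kernel would run here ...
    cudaMemcpy(hostFrame, devFrame, frameBytes, cudaMemcpyDeviceToHost);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("PCIe round trip for one frame: %.3f ms\n", ms);

    // Multiply by the frame rate: if the GPU doesn't save at least this much
    // time per frame, turning the "acceleration" on makes the job slower.

    cudaFree(devFrame);
    delete[] hostFrame;
    return 0;
}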
Once these conditions are met, *then* the idea will merit another
consideration. Until then, it's just a waste of time.
--
...Chambers
www.pacificwebguy.com
Chambers <ben### [at] pacificwebguycom> wrote:
> Simply put, every one of the requests we see for a GPU accelerated
> POV-Ray is a case of a solution looking for a problem. Unfortunately,
> it's the wrong solution. In that regard, it's more of a hammer looking
> for nails.
Are you sure you don't mean a screwdriver looking for nails?
I think it's often the case that raw performance becomes a convenient
substitute for content. It seems to me that the payoff from improving
algorithms and features would be much greater than the payoff from chasing
sheer speed.
> Once these conditions are met, *then* the idea will merit another
> consideration. Until then, it's just a waste of time.
It's an interesting dilemma. Just think of the huge amount of resources being
poured into developing codes for computers like Roadrunner, and into the
processors, standards, and languages that will replace it in a couple of
years. Of course this is the only way to move forward, but I think I'll wait
a few years before diving into GPGPU.
- Ricky