scott wrote:
>> GPU acceleration will be useful when the following conditions are met:
>> 1) Support for sophisticated branching
>> 2) Full double-precision accuracy
>> 3) Large memory sets (other than textures)
>> 4) Independent shaders running on distinct units.
>
> I think all those conditions are already met. For me the only barrier
> to implementing POV on the GPU is the developer effort needed, and
> the risk that it might be wasted if the overall technology of
> programming GPUs changes in the next 5-10 years.
>
> Maybe there are some small problems that have to be worked around, but
> IMO GPUs are powerful enough today to run something like POV. I already
> made demo applications that do raytraced spheres and isosurfaces on the
> GPU (and my GPU is not even a modern one, it still has certain
limitations), so I'm pretty sure something *could* be written in OpenCL or
CUDA on a new GPU and would handle POV fine, including all primitives
and texturing. It's just the effort needed, and whether it will all be
wasted in 5 years when something new appears.
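[The core of the sphere demo mentioned above is a per-ray quadratic test,
which is trivially parallel across pixels — exactly the workload GPUs are
built for. A minimal CPU-side sketch of that test (illustrative names, not
the actual demo code):]

```cpp
#include <cmath>
#include <optional>

// Minimal ray-sphere intersection: the per-pixel core of such a demo.
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns the nearest positive hit distance t along the ray, if any.
// Ray: origin o, unit direction d. Sphere: center c, radius r.
std::optional<double> hit_sphere(const Vec3& o, const Vec3& d,
                                 const Vec3& c, double r) {
    Vec3 oc{o.x - c.x, o.y - c.y, o.z - c.z};
    double b = dot(oc, d);                    // half of the usual 2b term
    double disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0) return std::nullopt;      // ray misses the sphere
    double t = -b - std::sqrt(disc);          // nearer root first
    if (t < 0.0) t = -b + std::sqrt(disc);    // origin inside the sphere
    if (t < 0.0) return std::nullopt;         // sphere entirely behind ray
    return t;
}
```

[On a GPU the same function would run once per pixel in a shader or
CUDA/OpenCL kernel, with no dependence between pixels.]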
Thank you. I appreciate your open-mindedness. I can perfectly accept
lack of human resources as a fine excuse, the others are just BS.
Hey, I'd rather be a dick than have my head up my ass.
--
a game sig: http://tinyurl.com/d3rxz9
> Thank you. I appreciate your open-mindedness. I can perfectly accept
> lack of human resources as a fine excuse, the others are just BS.
I haven't really seen any valid *technical* reason why something like POV
couldn't be ported to CUDA or OpenCL and run fine on the latest 3D cards.
Most of the technical arguments against it are only valid for older cards
(eg lack of double support, limits of number of instructions and branching
etc).
scott <sco### [at] scottcom> wrote:
> > Thank you. I appreciate your open-mindedness. I can perfectly accept
> > lack of human resources as a fine excuse, the others are just BS.
> I haven't really seen any valid *technical* reason why something like POV
> couldn't be ported to CUDA or OpenCL and run fine on the latest 3D cards.
> Most of the technical arguments against it are only valid for older cards
> (eg lack of double support, limits of number of instructions and branching
> etc).
I repeat: If it's seemingly so easy, please go ahead and just do it.
All the material is there, ready to be put together. What are you waiting
for?
It's like the path-tracing patch for povray (mc-pov): Instead of just
whining, someone went and just implemented it.
--
- Warp
Warp wrote:
> scott <sco### [at] scottcom> wrote:
>> > Thank you. I appreciate your open-mindedness. I can perfectly accept
>> > lack of human resources as a fine excuse, the others are just BS.
>
>> I haven't really seen any valid *technical* reason why something like POV
>> couldn't be ported to CUDA or OpenCL and run fine on the latest 3D cards.
>> Most of the technical arguments against it are only valid for older cards
>> (eg lack of double support, limits of number of instructions and
>> branching etc).
>
> I repeat: If it's seemingly so easy, please go ahead and just do it.
> All the material is there, ready to be put together. What are you waiting
> for?
He didn't say it was easy, he said there is no reason why it would be
currently *impossible*.
Nicolas Alvarez <nic### [at] gmailcom> wrote:
> He didn't say it was easy, he said there is no reason why it would be
> currently *impossible*.
My estimate is that if you wanted any kind of actual advantage of using
the GPU for ray-triangle intersections, a serious re-design of the entire
core renderer of POV-Ray would be needed, and even then the speedup would
probably show up only in a rather limited set of situations.
You can't simply write a routine like "ok, I need to check if this ray
hits this triangle mesh and where, so I'll just run the intersection
routine on the GPU and get the info" because getting the intersection
point (and normal vector) from the GPU is probably going to take longer
than doing the intersection calculation directly with the CPU would have
taken.
You need to buffer a whole bunch of intersection tests against the mesh,
then have the GPU process them all (with as much parallelism as possible)
and return all the intersection points at once (which ought to minimize
the I/O overhead).
However, the core renderer doesn't currently work that way. It doesn't
"buffer" intersection tests to be calculated in a bunch and then processed
all at the same time afterwards. It fully calculates one ray and all the
new rays that it spawns before going to the next ray, and so on.
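[The buffered scheme described above could be sketched roughly like this,
with plain single-threaded C++ standing in for the GPU dispatch and a
trivial placeholder for the actual mesh test — all names are illustrative,
not POV-Ray code:]

```cpp
#include <cstddef>
#include <vector>

struct Ray { double ox, oy, oz, dx, dy, dz; };
struct Hit { bool found; double t; };  // intersection distance, if found

// Queries are accumulated and processed in one pass, the way a single
// CUDA/OpenCL kernel launch would consume a whole buffer of rays,
// amortizing the host<->device transfer cost over many intersections.
class IntersectionBatch {
public:
    // Queue a query; the returned index is used to fetch the result later.
    std::size_t submit(const Ray& r) {
        rays.push_back(r);
        return rays.size() - 1;
    }

    // One bulk dispatch: on a GPU this loop would run in parallel
    // across all queued rays, with one result transfer back at the end.
    void flush() {
        results.resize(rays.size());
        for (std::size_t i = 0; i < rays.size(); ++i)
            results[i] = intersect_mesh(rays[i]);
    }

    Hit result(std::size_t i) const { return results[i]; }

private:
    // Placeholder for the real ray/mesh test; every ray "hits" at t = 1.
    static Hit intersect_mesh(const Ray&) { return {true, 1.0}; }

    std::vector<Ray> rays;
    std::vector<Hit> results;
};
```

[A recursive tracer would have to be restructured around submit/flush
instead of calling the intersection routine directly, which is exactly
the redesign being talked about here.]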
Even if you were able to totally change the design of the core renderer
to do that, it would still probably help only in a limited set of
situations. Obviously it would only work with meshes (and perhaps some
simple primitives such as spheres and planes), but ray-surface intersections
are often *not* the heaviest part of the raytracing process. Texturing,
lighting, media, photon mapping and radiosity often take a big chunk
of the total time.
--
- Warp
On 20-1-2010 21:03, Nicolas Alvarez wrote:
> Warp wrote:
>> scott <sco### [at] scottcom> wrote:
>>>> Thank you. I appreciate your open-mindedness. I can perfectly accept
>>>> lack of human resources as a fine excuse, the others are just BS.
Nemesis: again, you are not qualified to judge that.
>>> I haven't really seen any valid *technical* reason why something like POV
>>> couldn't be ported to CUDA or OpenCL and run fine on the latest 3D cards.
Nemesis: no, you have not seen a reason that convinces you. That is not
the same, given that you are not qualified to judge the technical merit.
>>> Most of the technical arguments against it are only valid for older cards
>>> (eg lack of double support, limits of number of instructions and
>>> branching etc).
>> I repeat: If it's seemingly so easy, please go ahead and just do it.
>> All the material is there, ready to be put together. What are you waiting
>> for?
>
> He didn't say it was easy, he said there is no reason why it would be
> currently *impossible*.
>
Nor did he say that it would be faster using a GPU, only that all
partial implementations are faster than POV.
It may be true that all requirements have been met individually (though
I seriously doubt that; it might take weeks to figure out, given the
large amount of research activity) and that therefore there is no
reason why the combination should be impossible in principle. But to
reach those goals shortcuts have been made, and I am not in a position
to judge whether they are compatible. Experience tells me that the chance
that all optimizations are orthogonal is extremely small.
Just to find out if it is possible in principle would take another few
weeks of dedicated study. After finding out that they can in principle
be combined, figuring out whether there is a way to do it that is still
faster than a CPU is most probably a matter of months. And then it has
to be implemented. By the time you reach the conclusion that it cannot
be done, someone will have come up with a new idea and you have to start
all over again.
Given that most people here have at most about a day per week to devote
to such a project, we are talking about several years of work with a
high chance of failure, and a high chance of having to start all over
before the project is finished because of new developments.
If you do this outside an academic environment or the lab of a big
industry, you don't stand a chance.
Warp wrote:
> However, the core renderer doesn't currently work that way. It doesn't
> "buffer" intersection tests to be calculated in a bunch and then processed
> all at the same time afterwards. It fully calculates one ray and all the
> new rays that it spawns before going to the next ray, and so on.
Curiously enough, it's the same basic process as a path tracer (like the
one I linked to), except a path tracer spawns several times more rays
for each ray than povray.
> Even if you were able to totally change the design of the core renderer
> to do that, it would still probably help only in a limited set of
> situations. Obviously it would only work with meshes (and perhaps some
> simple primitives such as spheres and planes), but ray-surface intersections
> are often *not* the heaviest part of the raytracing process. Texturing,
> lighting, media, photon mapping and radiosity often take a big chunk
> of the total time.
They take far less time than in a path tracer.
--
a game sig: http://tinyurl.com/d3rxz9
andrel wrote:
> On 20-1-2010 21:03, Nicolas Alvarez wrote:
>> Warp wrote:
>>> scott <sco### [at] scottcom> wrote:
>>>> I haven't really seen any valid *technical* reason why something
>>>> like POV
>>>> couldn't be ported to CUDA or OpenCL and run fine on the latest 3D
>>>> cards.
>
> Nemesis: no, you have not seen a reason that convinces you. That is not
> the same, given that you are not qualified to judge the technical merit.
That's not me.
> I repeat: If it's seemingly so easy,
Where did I say it was easy?
> please go ahead and just do it.
> All the material is there, ready to be put together. What are you waiting
> for?
More time and programming skill. Sorry, but I don't have enough of either
to give away for free. I like tinkering on small apps but am not the sort of
person who ever finishes things, so volunteering for a huge project like
this just seems ridiculous.
On 21-1-2010 1:32, nemesis wrote:
> andrel wrote:
>> On 20-1-2010 21:03, Nicolas Alvarez wrote:
>>> Warp wrote:
>>>> scott <sco### [at] scottcom> wrote:
>>>>> I haven't really seen any valid *technical* reason why something
>>>>> like POV
>>>>> couldn't be ported to CUDA or OpenCL and run fine on the latest 3D
>>>>> cards.
>>
>> Nemesis: no, you have not seen a reason that convinces you. That is
>> not the same, given that you are not qualified to judge the technical
>> merit.
>
> That's not me.
Sorry, I misread the number of '>'. I think you said something similar
before, so I wrongly assumed it was you again.