nemesis wrote:
> around here would enjoy the idea that GPU's are finally getting ready to
> boost raytracing, despite many years of such trumpeting here and
Here's a hint for you. When this is the message you want to communicate,
you post a link and say "Cool, GPUs are getting to where they can do
ray-tracing." You don't say "You're all losers for not doing this, and the
software you love to work on is going to DIE DIE DIE if you don't
immediately work to integrate this feature."
Dick.
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Sabrina Kilian wrote:
> actual contribution has been negligible.
actual contribution has been negative.
FTFY.
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Chambers wrote:
> GPU acceleration will be useful when the following conditions are met:
Actually, just curious here... would it help in any way with speeding up
shadow calculations? If you have a large hard-to-bound-well object with
dozens of lights, is there anything you could do in parallel (such as on a
GPU) that might be able to tell you which lights are known not to be visible
at a particular 3D point?
That would seem an easier problem to apply a GPU to than the full
ray intersection and texturing and coloring and such, because you could fall
back to a CPU-based test whenever it came back with "I don't know".
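Something along these lines, maybe. A device-side CUDA sketch of the idea,
with invented names and data layout (none of it actual POV-Ray code): one
thread per (shade point, light) pair, testing the shadow segment against a
precomputed list of conservative occluder bounds. With bounding volumes the
cheap definite answer is really "provably visible"; hitting any bound is the
"I don't know" that falls back to the exact CPU test.

#include <cuda_runtime.h>

enum Verdict { UNKNOWN = 0, PROVEN_VISIBLE = 1 };

struct Bound { float3 c; float r; };  // conservative bounding sphere of an occluder

__device__ bool segmentHitsSphere(float3 o, float3 d, float tmax, Bound s)
{
    // Does the segment o + t*d, t in (0, tmax), overlap the sphere?
    float3 oc = make_float3(o.x - s.c.x, o.y - s.c.y, o.z - s.c.z);
    float b = oc.x*d.x + oc.y*d.y + oc.z*d.z;
    float c = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - s.r*s.r;
    float disc = b*b - c;
    if (disc < 0.0f) return false;              // line misses the sphere
    float t0 = -b - sqrtf(disc);
    float t1 = -b + sqrtf(disc);
    return t0 < tmax && t1 > 0.0f;              // interval overlap test
}

// One thread per (shade point, light) pair.
__global__ void cullLights(const float3* pts, int npts,
                           const float3* lights, int nlights,
                           const Bound* bounds, int nbounds,
                           unsigned char* verdict)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= npts * nlights) return;
    float3 p = pts[i / nlights];
    float3 l = lights[i % nlights];
    float3 d = make_float3(l.x - p.x, l.y - p.y, l.z - p.z);
    float dist = sqrtf(d.x*d.x + d.y*d.y + d.z*d.z);
    d.x /= dist; d.y /= dist; d.z /= dist;
    unsigned char v = PROVEN_VISIBLE;           // until some bound says otherwise
    for (int k = 0; k < nbounds; ++k)
        if (segmentHitsSphere(p, d, dist, bounds[k])) { v = UNKNOWN; break; }
    verdict[i] = v;
}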
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
On 1/19/2010 2:23 PM, Jim Henderson wrote:
> On Tue, 19 Jan 2010 13:35:47 -0500, nemesis wrote:
>
>> The only explanation so far being thrown has been: "we don't want to
>> speed up povray's ray-triangle intersections because it would make it
>> much more useful to people outside our small geek niche and those people
>> wouldn't be interested in using other povray features thus making us
>> feel unloved".
>
> Huh, you and I are reading different messages, then.
>
>> Really, they can't stop talking how isosurfaces,
>> textures and whatsoever would not be well-supported on GPU even though I
>> agreed with that from the start and only hinted at triangles speed up.
>
> And given the relatively small niche in the userbase who has hardware
> that could do so, it doesn't seem reasonable to you to say "this is not a
> good use of our developer's time"?
>
> In any development project, there are good ideas that get put off or not
> implemented due to resource constraints. This is one of those times.
> Maybe the right kind of GPU hardware will become more pervasive and
> someone will take this on, but until then, you'll just have to be
> satisfied with the answer that's been given, which I'm quite *sure* isn't
> "we want to keep the software slow", as you seem to think it is.
>
> Jim
You know... there is one bit of irony in this whole mess. If you look at
something that "attempts" to do building in a virtual environment, you
get the god-awful mess of Second Life/OpenSim and their idea of "prims".
While the purpose of POVRay has never been to create a game engine at
all, it's tiresome to see such total junk produced to do what POVRay
does well, which is let you build stuff, without doing it in a $500
application that handles nothing but meshes.

I would love nothing more than to see POVRay-like design features, and
real primitives, integrated into a GPU-supported system that worked
better than the stuff on the market. Both sides have handicaps: POVRay
because, until now, there has been no feasible way to use a GPU to help
it, and everything else because the *best* real-time generation of a
scene, from anything close to a data set that defines what you are
looking at, is a bloody disaster; it tries to use techniques that work
mathematically on predefined meshes, plus various cheats, to *fake* CSG
effects, which it can't actually manage at all.

If SL/OpenSim used real primitives, even *with* the idiocy of having to
tessellate them into a mesh first, you could use half as many "prims",
end up with better results, and spend the half left over on added
detail. It's absurd and annoying, and I dream of the day someone manages
to fix the problem. But, based on my understanding, that **isn't** going
to happen any time soon, especially not unless POVRay picked up some
heavy hitters who knew the other code well and actually thought it would
be a good idea to produce something that used the best of both. The team
doesn't have such a person, and even if it did, it would still need to
get 3.7 working without such added functionality, and that, for now, is
why it's not going to happen now, or necessarily even "soon".
--
void main () {
If Schrödingers_cat is alive or version > 98 {
if version = "Vista" {
call slow_by_half();
call DRM_everything();
}
call functional_code();
}
else
call crash_windows();
}
Get 3D Models, 3D Content, and 3D Software at DAZ3D!
http://www.daz3d.com/index.php?refid=16130551
> GPU acceleration will be useful when the following conditions are met:
> 1) Support for sophisticated branching
> 2) Full double-precision accuracy
> 3) Large memory sets (other than textures)
> 4) Independent shaders running on distinct units.
I think all those conditions are already met. For me the only barrier to
implementing POV on the GPU is the developer effort needed, and the risk
that it might be wasted if the overall technology of programming GPUs
changes in the next 5-10 years.
Maybe there are some small problems that have to be worked around, but IMO
GPUs are powerful enough today to run something like POV. I've already made
demo applications that do raytraced spheres and isosurfaces on the GPU (and
my GPU is not even a modern one; it still has certain limitations), so I'm
pretty sure something *could* be written in OpenCL or CUDA on a new GPU that
would handle POV fine, including all primitives and texturing. It's just
the effort needed, and whether it will all be wasted in 5 years when
something new appears.
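For flavor, a minimal sketch of that species of demo, invented here for
illustration rather than lifted from mine: a CUDA kernel raytracing a single
hard-coded sphere, one thread per pixel, with Lambert shading, written out
as a PGM file.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void render(unsigned char* img, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Camera at the origin looking down -z; unit sphere at (0,0,-3).
    float3 d = make_float3(2.0f*x/w - 1.0f, 1.0f - 2.0f*y/h, -1.5f);
    float len = sqrtf(d.x*d.x + d.y*d.y + d.z*d.z);
    d.x /= len; d.y /= len; d.z /= len;

    // Ray-sphere quadratic: oc = origin - center = (0,0,3), radius 1.
    float b = 3.0f * d.z;                   // dot(oc, d)
    float disc = b*b - 8.0f;                // b^2 - (dot(oc,oc) - r^2)
    unsigned char shade = 0;                // background stays black
    if (disc >= 0.0f) {
        float t = -b - sqrtf(disc);         // nearer intersection
        // Unit normal at the hit point, then Lambert term for a fixed light.
        float3 n = make_float3(t*d.x, t*d.y, t*d.z + 3.0f);
        float3 l = make_float3(0.577f, 0.577f, 0.577f);
        float lam = n.x*l.x + n.y*l.y + n.z*l.z;
        shade = (unsigned char)(255.0f * fmaxf(lam, 0.05f));
    }
    img[y*w + x] = shade;
}

int main()
{
    const int W = 512, H = 512;
    unsigned char* dimg;
    cudaMalloc(&dimg, W * H);
    dim3 block(16, 16), grid((W+15)/16, (H+15)/16);
    render<<<grid, block>>>(dimg, W, H);
    std::vector<unsigned char> img(W * H);
    cudaMemcpy(img.data(), dimg, W * H, cudaMemcpyDeviceToHost);
    cudaFree(dimg);
    FILE* f = fopen("sphere.pgm", "wb");
    fprintf(f, "P5\n%d %d\n255\n", W, H);
    fwrite(img.data(), 1, img.size(), f);
    fclose(f);
    return 0;
}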
scott wrote:
>> GPU acceleration will be useful when the following conditions are met:
>> 1) Support for sophisticated branching
>> 2) Full double-precision accuracy
>> 3) Large memory sets (other than textures)
>> 4) Independent shaders running on distinct units.
>
> I think all those conditions are already met. For me the only barrier
> to implementing POV on the GPU is the developer effort needed, and the
> risk that it might be wasted if the overall technology of programming
> GPUs changes in the next 5-10 years.
>
> Maybe there are some small problems that have to be worked around, but
> IMO GPUs are powerful enough today to run something like POV. I've
> already made demo applications that do raytraced spheres and isosurfaces
> on the GPU (and my GPU is not even a modern one; it still has certain
> limitations), so I'm pretty sure something *could* be written in OpenCL
> or CUDA on a new GPU that would handle POV fine, including all
> primitives and texturing. It's just the effort needed, and whether it
> will all be wasted in 5 years when something new appears.
Thank you. I appreciate your open-mindedness. I can perfectly accept
lack of human resources as a fine excuse; the others are just BS.
Hey, I'd rather be a dick than have my head up my ass.
--
a game sig: http://tinyurl.com/d3rxz9
> Thank you. I appreciate your open-mindedness. I can perfectly accept
> lack of human resources as a fine excuse, the others are just BS.
I haven't really seen any valid *technical* reason why something like POV
couldn't be ported to CUDA or OpenCL and run fine on the latest 3D cards.
Most of the technical arguments against it are only valid for older cards
(e.g. lack of double-precision support, limits on instruction counts and
branching, etc.).
scott <sco### [at] scott com> wrote:
> > Thank you. I appreciate your open-mindedness. I can perfectly accept
> > lack of human resources as a fine excuse; the others are just BS.
> I haven't really seen any valid *technical* reason why something like POV
> couldn't be ported to CUDA or OpenCL and run fine on the latest 3D cards.
> Most of the technical arguments against it are only valid for older cards
> (e.g. lack of double-precision support, limits on instruction counts and
> branching, etc.).
I repeat: If it's seemingly so easy, please go ahead and just do it.
All the material is there, ready to be put together. What are you waiting
for?
It's like the path-tracing patch for povray (mc-pov): instead of just
whining, someone went and implemented it.
--
- Warp
Warp wrote:
> scott <sco### [at] scott com> wrote:
>> > Thank you. I appreciate your open-mindedness. I can perfectly accept
>> > lack of human resources as a fine excuse; the others are just BS.
>
>> I haven't really seen any valid *technical* reason why something like POV
>> couldn't be ported to CUDA or OpenCL and run fine on the latest 3D cards.
>> Most of the technical arguments against it are only valid for older cards
>> (e.g. lack of double-precision support, limits on instruction counts and
>> branching, etc.).
>
> I repeat: If it's seemingly so easy, please go ahead and just do it.
> All the material is there, ready to be put together. What are you waiting
> for?
He didn't say it was easy; he said there is no reason why it would be
currently *impossible*.
Nicolas Alvarez <nic### [at] gmail com> wrote:
> He didn't say it was easy, he said there is no reason why it would be
> currently *impossible*.
My estimate is that if you wanted any actual advantage from using the
GPU for ray-triangle intersections, a serious redesign of the entire
core renderer of POV-Ray would be needed, and even then the speedup would
probably show up only in a rather limited set of situations.
You can't simply write a routine like "OK, I need to check if this ray
hits this triangle mesh and where, so I'll just run the intersection
routine on the GPU and get the info", because getting the intersection
point (and normal vector) back from the GPU is probably going to take
longer than doing the intersection calculation directly on the CPU would
have taken.
You need to buffer a whole bunch of intersection tests against the mesh,
then have the GPU process them all (with as much parallelism as possible)
and return all the intersection points at once (which ought to minimize
the I/O overhead).
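A rough CUDA sketch of that batched shape (invented names, with a
brute-force loop over every triangle standing in for a real acceleration
structure; none of this is POV-Ray code). The point is the I/O pattern: one
upload, one kernel launch and one download per batch of rays, instead of a
PCIe round trip per ray.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct Ray { float3 o, d; };                 // origin, unit direction
struct Tri { float3 a, b, c; };
struct Hit { float t; int tri; };            // tri == -1 means "missed"

__device__ float3 sub3(float3 u, float3 v) { return make_float3(u.x-v.x, u.y-v.y, u.z-v.z); }
__device__ float3 crs3(float3 u, float3 v) { return make_float3(u.y*v.z-u.z*v.y, u.z*v.x-u.x*v.z, u.x*v.y-u.y*v.x); }
__device__ float  dot3(float3 u, float3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }

// One thread per buffered ray; Moller-Trumbore against every triangle.
__global__ void intersectBatch(const Ray* rays, int nrays,
                               const Tri* tris, int ntris, Hit* hits)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nrays) return;
    Ray r = rays[i];
    Hit best = { 1e30f, -1 };
    for (int k = 0; k < ntris; ++k) {
        float3 e1 = sub3(tris[k].b, tris[k].a);
        float3 e2 = sub3(tris[k].c, tris[k].a);
        float3 pv = crs3(r.d, e2);
        float det = dot3(e1, pv);
        if (fabsf(det) < 1e-8f) continue;     // ray parallel to triangle
        float inv = 1.0f / det;
        float3 sv = sub3(r.o, tris[k].a);
        float u = dot3(sv, pv) * inv;
        if (u < 0.0f || u > 1.0f) continue;
        float3 qv = crs3(sv, e1);
        float v = dot3(r.d, qv) * inv;
        if (v < 0.0f || u + v > 1.0f) continue;
        float t = dot3(e2, qv) * inv;
        if (t > 1e-4f && t < best.t) { best.t = t; best.tri = k; }
    }
    hits[i] = best;
}

// Host side: the whole batch goes up and comes back in one round trip.
std::vector<Hit> intersectOnGpu(const std::vector<Ray>& rays,
                                const std::vector<Tri>& tris)
{
    Ray* dr; Tri* dt; Hit* dh;
    cudaMalloc(&dr, rays.size() * sizeof(Ray));
    cudaMalloc(&dt, tris.size() * sizeof(Tri));
    cudaMalloc(&dh, rays.size() * sizeof(Hit));
    cudaMemcpy(dr, rays.data(), rays.size() * sizeof(Ray), cudaMemcpyHostToDevice);
    cudaMemcpy(dt, tris.data(), tris.size() * sizeof(Tri), cudaMemcpyHostToDevice);
    int threads = 128, blocks = ((int)rays.size() + threads - 1) / threads;
    intersectBatch<<<blocks, threads>>>(dr, (int)rays.size(), dt, (int)tris.size(), dh);
    std::vector<Hit> hits(rays.size());
    cudaMemcpy(hits.data(), dh, hits.size() * sizeof(Hit), cudaMemcpyDeviceToHost);
    cudaFree(dr); cudaFree(dt); cudaFree(dh);
    return hits;
}

int main()
{
    // One triangle straight ahead, one ray straight at it: expect t = 5.
    std::vector<Tri> tris = { { make_float3(-1,-1,-5), make_float3(1,-1,-5), make_float3(0,1,-5) } };
    std::vector<Ray> rays = { { make_float3(0,0,0), make_float3(0,0,-1) } };
    std::vector<Hit> hits = intersectOnGpu(rays, tris);
    printf("t = %f, tri = %d\n", hits[0].t, hits[0].tri);
    return 0;
}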
However, the core renderer doesn't currently work that way. It doesn't
"buffer" intersection tests to be calculated in a bunch and then processed
all at the same time afterwards. It fully calculates one ray and all the
new rays that it spawns before going to the next ray, and so on.
Even if you were able to totally change the design of the core renderer
to do that, it would still probably help only in a limited number of
situations. Obviously it would only work with meshes (and perhaps some
simple primitives such as spheres and planes), but ray-surface intersections
are often *not* the heaviest part of the raytracing process. Texturing,
lighting, media, photon mapping and radiosity often take a big chunk
of the total time.
--
- Warp