nemesis wrote:
> how slow paradigms change and new languages with radically new
> ideas flourish (motivated by hardware changes anyway)...
You know, like how Erlang and Haskell and J have completely replaced C, C++,
FORTRAN, and COBOL. And how we got rid of structured statements and OOP and
replaced them with better stuff just a decade or so after they were invented.
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Invisible wrote:
> Chambers wrote:
>
>> 1) Support for sophisticated branching
>
> When this happens, the GPU will be exactly the same speed as the CPU.
> The GPU is fast *because* it doesn't support sophisticated branching.
That's too bad, because POV requires sophisticated branching.
>> 2) Full double-precision accuracy
>
> This already exists apparently. (E.g., my GPU supports double-precision
> math.)
Yes, but there are still relatively few cards in consumer machines that
fully support double precision.
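For what it's worth, you can check this at runtime: with the CUDA runtime
API, devices of compute capability 1.3 or higher are the ones with hardware
double precision. A minimal sketch, assuming a standard CUDA toolkit install:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // Hardware doubles arrived with compute capability 1.3 (GT200).
            bool fp64 = (prop.major > 1) ||
                        (prop.major == 1 && prop.minor >= 3);
            printf("Device %d (%s): double precision %s\n",
                   i, prop.name, fp64 ? "yes" : "no");
        }
        return 0;
    }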
>> 3) Large memory sets (other than textures)
>
> My GPU has access to just under 1GB of RAM. How much do you want?
Last I checked, each shader was limited to a very small amount of shared
memory (I think it was less than 1MB, though this may no longer be accurate
for current top-of-the-line cards), and then had access to textures.
If you could figure out a way to store your data as a texture (i.e., an
array of data), then you didn't have a problem. Of course, textures aren't
designed to hold distinct values (like a plain array does), so you have to
tell the card to disable all those optimizations like blending and
filtering... you know, all the things that were designed on the assumption
that textures are actually images.
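Concretely, in CUDA that means binding your buffer to a texture with
element-type reads and fetching it with tex1Dfetch(), which does no filtering
at all. A minimal sketch, using the texture-reference API CUDA had at the
time (texture objects replaced it in later toolkits):

    #include <cstdio>
    #include <cuda_runtime.h>

    // cudaReadModeElementType means "return the stored value, unnormalized",
    // and tex1Dfetch() does no filtering or blending, so the texture
    // behaves like a plain read-only array.
    texture<float, 1, cudaReadModeElementType> dataTex;

    __global__ void copyFromTexture(float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = tex1Dfetch(dataTex, i);  // exact element i
    }

    int main() {
        const int n = 1024;
        float host[n], result[n];
        for (int i = 0; i < n; ++i) host[i] = (float)i;

        float *devIn, *devOut;
        cudaMalloc(&devIn, n * sizeof(float));
        cudaMalloc(&devOut, n * sizeof(float));
        cudaMemcpy(devIn, host, n * sizeof(float), cudaMemcpyHostToDevice);

        // Bind the linear buffer to the texture; reads go through the
        // texture unit but come back as distinct values, not blended ones.
        cudaBindTexture(NULL, dataTex, devIn, n * sizeof(float));

        copyFromTexture<<<(n + 255) / 256, 256>>>(devOut, n);
        cudaMemcpy(result, devOut, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        printf("element 42 = %f\n", result[42]);  // prints 42.000000

        cudaUnbindTexture(dataTex);
        cudaFree(devIn);
        cudaFree(devOut);
        return 0;
    }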
>> 4) Independent shaders running on distinct units.
>
> What exactly do you mean by that?
POV often needs to find the intersection of a single ray with a single
object.
GPUs still work by running blocks of shaders with the same program and very
similar data (this is how they get their speed: even though each individual
shader unit is relatively slow, the whole block together is fast).
Now, POV could hold onto pending intersection tests until there are
enough to fill a buffer... but the data wouldn't be distributed the way
that GPUs want it.
That is, POV would still have a group of independent intersection tests,
each one with different parameters.
GPUs work by saying, "Run this shader, with the first parameter
interpolated between these two values, and the second parameter
interpolated between these two other values, and the third parameter
interpolated between yet another set of values..."
The random data access of POV would render that unworkable.
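To make the divergence point concrete, here is a hypothetical batched
intersection kernel (the structs and names are made up for illustration, not
POV's actual code). Each thread gets its own independent ray and object, and
the moment neighbouring threads in a lockstep group (a 32-thread warp on
NVIDIA hardware) disagree on hit-versus-miss, the hardware executes both
branch paths serially with the non-participating threads masked off:

    // Hypothetical batch of independent ray/sphere tests, one per thread.
    struct Ray    { float ox, oy, oz, dx, dy, dz; };  // origin, direction
    struct Sphere { float cx, cy, cz, r; };           // center, radius

    __global__ void intersectBatch(const Ray *rays, const Sphere *objs,
                                   float *tHit, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Per-thread data: each thread fetches its own ray and object,
        // rather than values interpolated across the block.
        Ray r = rays[i];
        Sphere s = objs[i];

        // Standard ray/sphere quadratic, assuming the ray direction is
        // normalized: t = -b +/- sqrt(b^2 - c).
        float px = r.ox - s.cx, py = r.oy - s.cy, pz = r.oz - s.cz;
        float b = px * r.dx + py * r.dy + pz * r.dz;
        float c = px * px + py * py + pz * pz - s.r * s.r;
        float disc = b * b - c;

        if (disc < 0.0f) {       // Divergence: within a warp, the miss
            tHit[i] = -1.0f;     // path and the hit path execute one
        } else {                 // after the other, threads masked off.
            tHit[i] = -b - sqrtf(disc);
        }
    }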
Of course, I fully admit to not having read the specs for OpenCL, only
CUDA, so I can't say how much more useful it is. However, given that I
believe these to be hardware limitations rather than software ones, I'd be
surprised if OpenCL were really that much more powerful (though I've heard
it's easier to work with).
...Chambers
Darren New wrote:
> People nowadays don't remember what it was like when *every* program
> didn't have enough address space, when a simple
> sub-notepad-sophistication keyboard driven text editor didn't have
> enough address space to hold a document...
I still remember the first time I allocated a single 2MB array, and it
*just worked*.
I was in sheer awe at the phenomenal power available to me at that point...
;)
...Chambers
Darren New <dne### [at] san rr com> wrote:
> nemesis wrote:
> > how slow paradigms change and new languages with radically new
> > ideas flourish (motivated by hardware changes anyway)...
>
> You know, like how Erlang and Haskell and J have completely replaced C, C++,
> FORTRAN, and COBOL. And how we got rid of structured statements and OOP and
> replaced them with better stuff just a decade or so after they were invented.
In other words, you agree with me that change in the software world moves
friggin' slowly, right?
On 16-1-2010 16:40, nemesis wrote:
> andrel wrote:
>> On 16-1-2010 15:58, nemesis wrote:
>>>> GPUs are, but do they support CUDA or similar*?
>>>
>>> they will once code is complete.
>>
>> Ah, still young and naive. I have seen a lot of promising technology
>> and software discontinued after a few years. I think it is highly
>> likely (>70%) that this particular type of GPU programming will be
>> obsolete in 5 years' time.
>
> I look at heavyweights in the industry at large and they seem to think
> differently. Either you are right and they will all go broke by
> investing in a fad or you are
They can afford to invest in something that will only last a few years.
In fact they have to in order to survive long enough to participate in
the next hype. So I might be right and they are still doing the right
thing.
Is it fair to assume that apart from having no low-level programming
skills, your marketing skills are also not very well developed?
Or to ask a personal question that you don't have to answer: what *is*
your background?
>>> We shouldn't have to wait for the iPhone to have a proper GPU to
>>> begin any such coding...
>>
>> Ok, be my guest. I'll wait and see what you come up with.
>
> yes, keep waiting.
I should possibly have stated that a bit more clearly: please stop telling
people what they should do if you don't have the skills to understand the
impact of what you are proposing.
Chambers wrote:
> I still remember the first time I allocated a single, 2MB array, and it
> *just worked*.
It wasn't that long ago that I had a 9 gig textual database dump I needed to
do something interactive with, and I spent about 10 minutes trying to figure
out the best program to write the munging in before I realized, "hey, wait, I
have 16GB of RAM on this machine. I can just open it with vi."
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
On 16-1-2010 16:42, nemesis wrote:
> andrel wrote:
>> On 16-1-2010 16:00, nemesis wrote:
>>> Sabrina Kilian wrote:
>>>> You learn more about a person
>>>> by the words they choose to address others by
>>>
>>> I hope you have learned that I'm a clown at heart. I enjoy making
>>> people laugh.
>>
>> Keep working on it, it does not come through on the internet, at least
>> not for me.
>
> Sadly, my humorous side is often a victim of my troll side and thus
> doesn't get as much recognition, especially when people are fed up already.
I am not fed up, just a bit tired of repeating the same discussion over
and over. We try to spread the load by alternating who answers each
time.* Just wait until it is Warp's turn, or even better Thorsten's;
then it gets really funny.
* nothing formal, just the way things develop.
nemesis wrote:
> in other words, you agree with me that change in the software world goes on
> friggin' slowly, right?
No, but changes in underlying software infrastructure are slow. I'll note
we're still building hardware out of semiconductors.
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Chambers <Ben### [at] gmail com> wrote:
> Invisible wrote:
> > Chambers wrote:
> >> 1) Support for sophisticated branching
> >
> > When this happens, the GPU will be exactly the same speed as the CPU.
> > The GPU is fast *because* it doesn't support sophisticated branching.
>
> That's too bad, because POV requires sophisticated branching.
Yeah, I wonder how a path tracer, which requires more branching for each new
ray spawned than a conventional raytracer, ever managed it...
> >> 2) Full double-precision accuracy
> >
> > This already exists apparently. (E.g., my GPU supports double-precision
> > math.)
>
> Yes, but there are still relatively few cards in consumer machines that
> fully support double precision.
Don't worry, even non-gamers will all be running double-precision GPUs by
the time POV-Ray 3.7 finally gets out of beta.
andrel <a_l### [at] hotmail com> wrote:
> Or to ask a personal question that you don't have to answer: what *is*
> your background?
right now it's this one:
http://img37.imageshack.us/img37/6122/luxfruitsback.jpg