On 16-1-2010 15:58, nemesis wrote:
> andrel wrote:
>> On 15-1-2010 21:38, nemesis wrote:
>>> andrel <a_l### [at] hotmail com> wrote:
>>>> to try to port to one specific architecture. Particularly if that is
>>>> not
>>>> supported by the standard setup of a typical POV user.
>>>
>>> GPUs are part of every computer nowadays whether you use them or not.
>>>
>>
>> GPUs are, but do they support CUDA or similar*?
>
> they will once code is complete.
Ah, still young and naive. I have seen a lot of promising technology and
software discontinued after a few years. I think it is highly likely
(>70%) that this particular type of GPU programming will be obsolete in
5 years' time. Most probably replaced by an API where you don't know if
software is running on a GPU or not, if I can have a guess. Because of
backwards compatibility, you know.
> By the time povray 3.7 was started,
> the thought of running raytracing on GPU was a distant dream. Looks
> like hardware evolves far faster than software. You can certainly tell
> that by how slow paradigms change and new languages with radically new
> ideas flourish (motivated by hardware changes anyway)...
>
>> Of the three machines here that are on, one does (GeForce 8800GTS) one
>> doesn't (GeForce FX 5200) and one I don't know (a fairly recent HP
>> laptop).
>
> The first is the only one that counts, since it's the most recent (and
> fairly old already BTW). Laptops always use yesterday's technology.
The laptop is the newest by far. The reason I have that 8800 is that I
don't do many things that require a fast graphics card (mainly signal
processing and some POV if I get a chance; Blender is the only thing
that might benefit from a better card). I decided not to spend money on
that when I bought the machine and to wait until I really needed one,
assuming that by then I could get more performance for less money.
> We shouldn't have to wait for the iPhone to have a proper GPU to begin
> any such coding...
Ok, be my guest. I'll wait and see what you come up with.
On 16-1-2010 16:00, nemesis wrote:
> Sabrina Kilian wrote:
>> You learn more about a person
>> by the words they choose to address others by
>
> I hope you have learned that I'm a clown at heart. I enjoy making
> people laugh.
Keep working on it; it does not come through on the internet, at least
not for me.
andrel wrote:
> On 16-1-2010 15:58, nemesis wrote:
>>> GPUs are, but do they support CUDA or similar*?
>>
>> they will once code is complete.
>
> Ah, still young and naive. I have seen a lot of promising technology and
> software discontinued after a few years. I think it is highly likely
> (>70%) that this particular type of GPU programming will be obsolete in
> 5 years' time.
I look at heavyweights in the industry at large and they seem to think
differently. Either you are right and they will all go broke by
investing in a fad, or you are...
> Most probably replaced by an API where you don't know if
> software is running on a GPU or not, if I can have a guess.
Like heterogeneous multiprocessing with OpenCL? Agreed.
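Something along these lines, just as a rough sketch using the stock OpenCL
C API (error handling stripped, and which device you actually get depends
entirely on the driver):

    // Sketch: ask OpenCL for its default device and just use it --
    // the caller never has to know whether it is a GPU or a CPU.
    #include <CL/cl.h>
    #include <cstdio>

    int main()
    {
        cl_platform_id platform;
        cl_device_id   device;
        cl_uint        count = 0;

        clGetPlatformIDs(1, &platform, &count);
        if (count == 0) return 1;            // no OpenCL at all

        // CL_DEVICE_TYPE_DEFAULT: let the implementation decide; it may
        // be a GPU today and a CPU (or something else) next year.
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1,
                           &device, NULL) != CL_SUCCESS)
            return 1;

        char name[256];
        cl_device_type type;
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(device, CL_DEVICE_TYPE, sizeof(type), &type, NULL);
        std::printf("running on: %s (%s)\n", name,
                    (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "not a GPU");
        return 0;
    }

The kernels themselves get compiled for whatever that device turns out to
be, which is exactly the "you don't know and don't care" situation above.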
>> The first is the only one that counts, since it's the most recent (and
>> fairly old already BTW). Laptops always use yesterday's technology.
>
> The laptop is the newest by far.
It has a newer card? I don't like laptops precisely because you pay too
much to use old tech (which is finally miniaturized enough to fit and
consume less power).
>> We shouldn't have to wait for the iPhone to have a proper GPU to begin
>> any such coding...
>
> Ok, be my guest. I'll wait and see what you come up with.
yes, keep waiting.
andrel wrote:
> On 16-1-2010 16:00, nemesis wrote:
>> Sabrina Kilian wrote:
>>> You learn more about a person
>>> by the words they choose to address others by
>>
>> I hope you have learned that I'm a clown at heart. I enjoy making
>> people laugh.
>
> Keep working on it; it does not come through on the internet, at least
> not for me.
Sadly, my humorous side is often a victim of my troll side and thus
doesn't get as much recognition, especially when people are fed up already.
nemesis wrote:
> andrel wrote:
>> GPUs are, but do they support CUDA or similar*?
>
> they will once code is complete. By the time povray 3.7 was started,
> the thought of running raytracing on GPU was a distant dream. Looks
> like hardware evolves far faster than software. You can certainly tell
> that by how slow paradigms change and new languages with radically new
> ideas flourish (motivated by hardware changes anyway)...
>
Right, so, which should be supported? CUDA, Stream, OpenCL, or do we
just do things with shaders and hack tricks in DirectX and OpenGL? Do we
go lower than that, skip the OS, and make POV-Ray something that runs at
a very low level, accessing the hardware in assembly just to get every
bit of speed and capability from the architectures? Support your answer
with examples of benefits and potential problems that will be faced
along the development path. Discuss the impact that future GPU
capabilities, such as shorter buses, integration into the CPU, and
branching, will have.
nemesis wrote:
> how slow paradigms change and new languages with radically new
> ideas flourish (motivated by hardware changes anyway)...
You know, like how Erlang and Haskell and J have completely replaced C, C++,
FORTRAN, and COBOL. And how we got rid of structured statements and OOP and
replaced them with better stuff just a decade or so after they were invented.
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Invisible wrote:
> Chambers wrote:
>
>> 1) Support for sophisticated branching
>
> When this happens, the GPU will be exactly the same speed as the CPU.
> The GPU is fast *because* it doesn't support sophisticated branching.
That's too bad, because POV requires sophisticated branching.
>> 2) Full double-precision accuracy
>
> This already exists apparently. (E.g., my GPU supports double-precision
> math.)
Yes, but there are still relatively few cards in consumer machines that
fully support double precision.
>> 3) Large memory sets (other than textures)
>
> My GPU has access to just under 1GB of RAM. How much do you want?
Last I checked, each shader was limited to a very small amount of shared
memory (I think it was less than 1MB, though this may no longer be
accurate for current top-of-the-line cards), plus access to textures.
If you could figure out a way to store your data as a texture (i.e., an
array of data), then you didn't have a problem. Of course, textures
aren't designed to hold distinct values (the way a plain array does), so
you have to tell the card to disable all those optimizations like
blending & filtering... you know, all the things that were designed on
the assumption that textures were actually images.
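Roughly the kind of setup I mean, sketched with plain OpenGL calls
(GL_RGBA32F needs a float-texture capable card; on older headers it may
live in glext.h or be spelled GL_RGBA32F_ARB):

    // Sketch: use a texture as a flat array of floats.  Filtering and
    // wrapping are switched off so each texel comes back exactly as stored.
    #include <GL/gl.h>

    GLuint make_data_texture(const float* data, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        // No interpolation (or mipmapping) between neighbouring texels...
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // ...and no wrapping tricks at the edges.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // Four floats per "pixel", uploaded untouched.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                     GL_RGBA, GL_FLOAT, data);
        return tex;
    }

The shader then has to turn an array index into texture coordinates just to
read its own data, which already tells you how much of the hardware you are
working against.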
>> 4) Independent shaders running on distinct units.
>
> What exactly do you mean by that?
POV often needs to find the intersection of a single ray with a single
object.
GPUs still function by calling blocks of shaders with the same program
(this is how they get their speed; even though each individual shader is
relatively slow, the whole block together is considered fast), and very
similar data.
Now, POV could hold onto pending intersection tests until there are
enough to fill a buffer... but the data wouldn't be distributed the way
that GPUs want it.
That is, POV would still have a group of independent intersection tests,
each one with different parameters.
GPUs work by saying, "Run this shader, with the first parameter
interpolated between these two values, and the second parameter
interpolated between these two other values, and the third parameter
interpolated between yet another set of values..."
The random data access of POV would render that unworkable.
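To make that concrete, here's a toy sketch (hypothetical names, nothing
taken from the POV source) of what a buffer of pending tests would look
like; note that every field differs per entry, so there is nothing for the
hardware to interpolate:

    // Sketch: a batch of independent ray/sphere tests.  On the CPU this is
    // a trivial loop; on a GPU each iteration would become one shader
    // invocation, but each invocation then reads its own unrelated memory.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Ray        { double origin[3], dir[3]; };  // dir assumed normalised
    struct SphereData { double center[3]; double radius; };

    struct PendingTest
    {
        Ray        ray;      // different per entry
        SphereData sphere;   // different, unrelated object per entry
    };

    void run_batch(const std::vector<PendingTest>& batch,
                   std::vector<double>& t_out)
    {
        t_out.resize(batch.size());
        for (std::size_t i = 0; i < batch.size(); ++i)
        {
            const Ray&        r = batch[i].ray;
            const SphereData& s = batch[i].sphere;

            double oc[3] = { r.origin[0] - s.center[0],
                             r.origin[1] - s.center[1],
                             r.origin[2] - s.center[2] };
            double b = oc[0]*r.dir[0] + oc[1]*r.dir[1] + oc[2]*r.dir[2];
            double c = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2]
                     - s.radius*s.radius;
            double disc = b*b - c;

            // -1 means "no hit"; otherwise the nearer intersection distance.
            t_out[i] = (disc < 0.0) ? -1.0 : (-b - std::sqrt(disc));
        }
    }

In graphics-card terms, every element of that batch is a full set of shader
parameters of its own, which is not what the interpolators were built for.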
Of course, I fully admit to not having read the specs for OpenCL, only
CUDA, so I can't say how much more useful it is. However, given that I
believe these to be hardware limitations rather than software, I'd be
surprised if OpenCL is really that much more powerful (though I've heard
it's easier to work with).
...Chambers
Darren New wrote:
> People nowadays don't remember what it was like when *every* program
> didn't have enough address space, when a simple
> sub-notepad-sophistication keyboard driven text editor didn't have
> enough address space to hold a document...
I still remember the first time I allocated a single, 2MB array, and it
*just worked*.
I was in sheer awe at the phenomenal power available to me at that point...
;)
...Chambers
Darren New <dne### [at] san rr com> wrote:
> nemesis wrote:
> > how slow paradigms change and new languages with radically new
> > ideas flourish (motivated by hardware changes anyway)...
>
> You know, like how Erlang and Haskell and J have completely replaced C, C++,
> FORTRAN, and COBOL. And how we got rid of structured statements and OOP and
> replaced them with better stuff just a decade or so after they were invented.
In other words, you agree with me that change in the software world goes on
friggin' slowly, right?
On 16-1-2010 16:40, nemesis wrote:
> andrel wrote:
>> On 16-1-2010 15:58, nemesis wrote:
>>>> GPUs are, but do they support CUDA or similar*?
>>>
>>> they will once code is complete.
>>
>> Ah, still young and naive. I have seen a lot of promising technology
>> and software discontinued after a few years. I think it is highly likely
>> (>70%) that this particular type of GPU programming will be
>> obsolete in 5 years' time.
>
> I look at heavyweights in the industry at large and they seem to think
> differently. Either you are right and they will all go broke by
> investing in a fad, or you are...
They can afford to invest in something that will only last a few years.
In fact they have to in order to survive long enough to participate in
the next hype. So I might be right and they are still doing the right
thing.
Is it fair to assume that, apart from having no low-level programming
skills, your marketing skills are also not very well developed?
Or to ask a personal question that you don't have to answer: what *is*
your background?
>>> We shouldn't have to wait for the iPhone to have a proper GPU to
>>> begin any such coding...
>>
>> Ok, be my guest. I'll wait and see what you come up with.
>
> yes, keep waiting.
I should possibly have stated that a bit more clearly: please stop telling
people what they should do if you don't have the skills to understand
the impact of what you are proposing.