On Tue, 15 Jul 2008 20:15:23 +0100, Orchid XP v8 wrote:
>>> ...OK, WHERE the HELL do you buy something that has 128 CPUs in it??
>>> O_O This information surely has a most direct relevance to all who
>>> worship POV-Ray...!
>>
>> http://www.sgi.com/products/servers/altix/4000/
>
> "Price: Ask a salesman."
>
> So... the price depends on who you ask? x_x
No - it's just that pricing isn't a simple matter for a complex setup like this.
Jim
On 15-Jul-08 21:05, Eero Ahonen wrote:
> Orchid XP v8 wrote:
>>
>> ...OK, WHERE the HELL do you buy something that has 128 CPUs in it??
>> O_O This information surely has a most direct relevance to all who
>> worship POV-Ray...!
>>
>
> http://www.sgi.com/products/servers/altix/4000/
>
I might have posted this before, but the Mark Potse mentioned in this press
release:
http://www.sgi.com/company_info/newsroom/press_releases/2008/january/udm.html
was a PhD student of mine, and when he is in the Netherlands we share a
room at work.
I have tried to find a way to get SGI interested in getting POV compiled on
that Altix, but I haven't succeeded yet. Perhaps I'll try again, this time
with the pretext that at a meeting in Munich next year I will be organizing
a track on Art and Scientific Visualization. BTW, anyone working in science
here is invited to join that; I will make a formal announcement when I have
more details.
Orchid XP v8 wrote:
> Yeah, I know. But you'd be surprised what some crazy nutters try to mod
> their C64s to do... ;-)
>
And you'd be surprised how much money some crazy nutters spend on computers
just to run distributed computing projects. I know somebody who bought a
PS3 just to run www.ps3grid.net on it.
Orchid XP v8 wrote:
> I think in general most "supercomputers" only achieve peak performance
> for very specific kinds of workload. Just grabbing the POV-Ray source
> code and throwing it at a C compiler is *highly* unlikely to just happen
> to produce the right kind of workload.
That's one of the reasons the really big ones are custom designed for
the job they'll be running.
Sure, you could just network a bunch of Linux PCs in a Beowulf cluster,
or you could spec out the actual CPU speeds, RAM and network bandwidth
necessary to achieve optimum load on all components.
The first option is cheap, but you might have a very fast CPU sitting
around waiting for network traffic. Or, conversely, you could have a
network that's never fully utilized. Or you might simply run out of
RAM.
The second option is expensive, but ensures that everything you buy is
something you need, and that it will be fully utilized.
...Chambers
> ...OK, WHERE the HELL do you buy something that has 128 CPUs in it?? O_O
> This information surely has a most direct relevance to all who worship
> POV-Ray...!
Saw a demo a while back where some dude had hooked up three PS3s to
raytrace in realtime at full HD resolution. That's 27 processor cores. I
wonder how fast it would be if someone ported POV to the PS3?
scott wrote:
> Saw a demo a while back where some dude had hooked up 3 PS3's to
> raytrace in realtime at full HD resolution. That's 27 processor cores.
> I wonder how fast it would be if someone ported POV to the PS3?
Mmm, I'm seeing *a lot* of references to the PS3 in relation to high
performance computing. I know nothing about it, but from the sheer
weight of references, I'm guessing it's moderately well-equipped?
Actually, just porting POV-Ray to run on a GPU would probably yield some
interesting speedups. You'd have to radically restructure the program to
take advantage of the way GPUs work, but it should be quite fast...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
> Mmm, I'm seeing *a lot* of references to the PS3 in relation to high
> performance computing. I know nothing about it, but from the sheer weight
> of references, I'm guessing it's moderately well-equipted?
There's a nice article on Wikipedia about the processor
http://en.wikipedia.org/wiki/Cell_processor
> Actually, just porting POV-Ray to run on a GPU would probably yield some
> interesting speedups. You'd have to radically restructure the program to
> take advantage of the way GPUs work, but it should be quite fast...
The problem is still floating point precision on the GPU - wait until they
get double precision throughout the pipeline and we'll begin to see some
interesting stuff. Also, as Warp always mentions, realistic multi-level
reflections and refractions are not simple at all (compared to the basic
"environment mapping" that almost every game uses).
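(A quick illustration of why the precision gap matters, sketched in Python
with numpy's float32 standing in for the GPU's single precision - the scene
coordinates are made up:)

```python
import numpy as np

# At large scene coordinates, single precision can no longer represent
# sub-unit offsets: a point 1e8 units from the origin swallows a 1-unit
# nudge entirely. This is one reason ray tracers want doubles throughout.

origin = 1e8
nudged64 = np.float64(origin) + np.float64(1.0)   # double precision
nudged32 = np.float32(origin) + np.float32(1.0)   # GPU-style single

print(nudged64 - origin)          # 1.0  - the offset survives
print(float(nudged32) - origin)   # 0.0  - the offset is lost
```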
scott wrote:
>> Mmm, I'm seeing *a lot* of references to the PS3 in relation to high
>> performance computing. I know nothing about it, but from the sheer
>> weight of references, I'm guessing it's moderately well-equipted?
>
> There's a nice article on Wikipedia about the processor
>
> http://en.wikipedia.org/wiki/Cell_processor
Yeah, I'm reading about it now. (Hmm, I guess I'm not the only person
who still thinks 512 MB RAM is a lot. ;-)
>> Actually, just porting POV-Ray to run on a GPU would probably yield
>> some interesting speedups. You'd have to radically restructure the
>> program to take advantage of the way GPUs work, but it should be quite
>> fast...
>
> The problem is still floating point precision on the GPU - wait until
> they get double precision throughout the pipeline and we'll begin to see
> some interesting stuff.
The nVidia GeForce 200 series is meant to offer double precision when
driven using CUDA...
> Also, as Warp always mentions, realistic
> multi-level reflections and refractions are not simple at all (compared
> to the basic "environment mapping" that almost every game uses).
It's a completely different algorithm, to be sure. The mathematics is
simple enough - it's figuring out how to make it efficient on real-world
hardware that's the hard part. ;-)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
> Yeah, I'm reading about it now. (Hmm, I guess I'm not the only person who
> still thinks 512 MB RAM is a lot. ;-)
For a games console it's probably enough. What do you hold in RAM during a
game? All the 3D meshes and textures are in GPU video memory only*.
Probably only the code, collision meshes and map data? And the PS3 OS is
vastly simpler than Windows, so I doubt that uses up much RAM.
* When you write a game under Windows, you must keep all the GPU data
mirrored in normal RAM, because if the user Alt-Tabs to a different 3D game,
you need to refill the GPU memory quickly (and not go through some long
load-from-disk process) when the user comes back to your app. On a games
console you don't need to do this, because (at least on the PS3) you have to
quit one game before you can go into another.
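(A minimal sketch of that mirroring pattern - the class and method names
here are invented, and the GPU object is a stand-in for a real graphics
API:)

```python
# Hypothetical sketch of the mirroring trick described above: every GPU
# resource keeps a CPU-side copy, so a lost device can be refilled from
# RAM instead of going through a slow load-from-disk step.

class FakeGPU:
    """Stand-in for a real graphics device."""
    def __init__(self):
        self.uploads = 0
    def upload(self, data: bytes) -> int:
        self.uploads += 1
        return self.uploads          # pretend this is a GPU handle

class MirroredTexture:
    def __init__(self, pixels: bytes, gpu: FakeGPU):
        self.cpu_copy = pixels       # the mirror, kept in normal RAM
        self.gpu = gpu
        self.handle = gpu.upload(pixels)
    def on_device_lost(self):
        # Fast path: re-upload straight from the RAM mirror, no disk I/O.
        self.handle = self.gpu.upload(self.cpu_copy)

gpu = FakeGPU()
tex = MirroredTexture(b"\x00" * 64, gpu)
tex.on_device_lost()                 # e.g. after the user Alt-Tabs back
print(gpu.uploads)                   # 2 uploads so far, zero disk reads
```

The cost, of course, is that every texture occupies system RAM as well as
video RAM - which is exactly the overhead a console gets to skip.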
> It's a completely different algorithm to be sure. The mathematics is
> simple enough - it's figuring out how to make it efficient on real-world
> hardware that's the hard part. ;-)
There was a nice paper I read in a book a while back (one of the GPU Gems
series, I think) where an algorithm was explained for doing correct
multi-level reflections on the GPU. It was quite complex, involving
generating lots of cube-maps with depth and texture information and some
ray-tracing steps in a pixel shader. In the end they compared the output
with a real raytracer, and the results looked identical, but the GPU version
ran at about 10fps IIRC with something like 4 levels of reflection. The
real raytracer (Maya I think) was measured in minutes.
>> Yeah, I'm reading about it now. (Hmm, I guess I'm not the only person
>> who still thinks 512 MB RAM is a lot. ;-)
>
> For a games console it's probably enough.
Well, judging by the number of PS3 games... ;-)
> What do you hold in RAM
> during a game? All the 3D meshes and textures are on the GPU video
> memory only*. Probably only the code, collision meshes, map data? And
> the PS3 OS is vastly simpler than Windows, so I doubt that uses up much
> RAM.
That's rather perverse though. Are you telling me you need "at least 2
GB RAM" to run M$ Office smoothly, but 1/8 of that is just fine for
running extremely intensive game software?
> * When you write a game under Windows, you must keep all the GPU data
> mirrored in normal RAM, because if the user Alt-Tabs to a different 3D
> game, you need to refill the GPU memory quickly (and not go through some
> long load-from-disk process) when the user comes back to your app. On a
> games console you don't need to do this, because (at least on the PS3)
> you have to quit one game before you can go into another.
Now that you mention it, if I Alt-Tab out of TF2, my PC locks up for
about 30 seconds. (As in, I get a black screen for 30 seconds.) Then
Windows comes up - possibly in the wrong resolution. Switching back to
TF2 is similarly slow. Go figure...
> There was a nice paper I read in a book a while back (one of the GPU
> gems series I think), where some algorithm was explained for doing
> correct multi-level reflections on the GPU. It was quite complex,
> involving generating lots of cube-maps with depth and texture
> information and some ray-tracing steps in a pixel shader. In the end
> they compared the output with a real raytracer, and the results looked
> identical, but the GPU version ran at about 10fps IIRC with something
> like 4 levels of reflection. The real raytracer (Maya I think) was
> measured in minutes.
As I understand it, technologies like CUDA allow you to run arbitrary
code on a GPU. So there's no need for convoluted trickery to convince the
GPU that your problem is just like texture mapping - just feed it the
actual calculations you want it to do. (Of course, just because it runs
arbitrary code doesn't necessarily mean it runs it *fast*.)
Of course, CUDA is *only* for nVidia GPUs. (It wouldn't surprise me if ATi
had developed something similar.) I wonder if this will be like the old
3Dfx API, where eventually everything gravitated towards a single API
that works for any GPU?
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*