>> I find it quite scary how they can constantly gain 2x speedups. Every
>> time they change something, it goes another 2x or 4x faster. You would
>> think that there would be a physical limit to how fast it can possibly
>> go, but no, it keeps getting faster without end. Hmm...
>
> Yeah, it makes you realise that just because you think you have some
> "fast" C code, there are probably still some big speedups to be made.
I find it particularly disturbing that merely changing the order in
which the bytes reside in memory produces a 50x speedup. This
practically guarantees that no high-level programming language can ever
be fast. It means that abstraction and maintainability will forever be
the enemies of performance and efficiency.
Also... It's an 8-core server, but parallel processing gives only a 3.5x
speedup? That's pretty interesting. ;-)
Question: How much faster does this go on the GPU? [I'm going to stick
my neck out and say it'd take longer to transfer the data to the GPU
than to actually do the calculation.] Do any current GPUs support
double-precision yet?
> Next assignment, do the same activity on the inverse matrix function :-)
Hehehe, yeah... I'm guessing that's going to be a tad slower. ;-)
>> Just for giggles, I could implement this in Haskell and see how many
>> million times slower it is.
>
> I wonder if it would finish in one day? :-)
Well, the initial version took "only" 5 hours on a dual quad-Xeon 3.15
GHz server. How slow can it possibly be on my ancient 32-bit single-core
AMD Athlon 1700+ 1.5 GHz? :-P