>> Indeed, this is my plan. I'm just trying to get the naive list-based
>> version to work. (Currently it gives the wrong answers, which
>> obviously isn't a lot of use...)
>
> I could have written a working C version by now :-D
I have a working Haskell version by now. (Apparently I mixed up the rows
and columns, hence the bogus answers.) It's really not hard. I've
actually spent more time building a test harness than coding the matrix
multiplication so far.
In case you care, the code is:
import Data.List (transpose)

type Matrix x = [[x]]

multiply :: (Num x) => Matrix x -> Matrix x -> Matrix x
multiply m1 m2 = [ [ sum $ zipWith (*) row col | col <- transpose m2 ]
                 | row <- m1 ]
That's the whole program. (Minus the part where it creates some
matrices, multiplies them, and prints out the result.) Complex, eh?
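For anyone who wants to try it, the omitted driver part might look
something like this. (The matrix values here are just made up for
illustration; the original used much bigger ones.)

```haskell
import Data.List (transpose)

type Matrix x = [[x]]

multiply :: (Num x) => Matrix x -> Matrix x -> Matrix x
multiply m1 m2 = [ [ sum $ zipWith (*) row col | col <- transpose m2 ]
                 | row <- m1 ]

-- Hypothetical driver: build two small matrices, multiply, print rows.
main :: IO ()
main = do
    let a = [[1, 2], [3, 4]] :: Matrix Int
        b = [[5, 6], [7, 8]]
    mapM_ print (multiply a b)
    -- prints [19,22] then [43,50]
```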
Now, in terms of speed... Well, it takes about 62 seconds to multiply
two 512x512 matrices. But hey, this is only the simplest, dumbest
version I could come up with. Let's see what happens (if anything) when
I start applying a few optimisations...
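One obvious first step (a guess at what might help, not a measured
result) is to make the sharing of the transposed matrix explicit, so it
is definitely computed only once, and to sum with a strict left fold so
no thunks pile up in the dot products:

```haskell
import Data.List (transpose, foldl')

type Matrix x = [[x]]

-- Same algorithm, with explicit sharing and strict accumulation.
multiply' :: (Num x) => Matrix x -> Matrix x -> Matrix x
multiply' m1 m2 = [ [ dot row col | col <- cols ] | row <- m1 ]
  where
    cols      = transpose m2                      -- shared across all rows
    dot xs ys = foldl' (+) 0 (zipWith (*) xs ys)  -- strict sum, no thunk chain
```

(With -O, GHC's full-laziness pass may already float `transpose m2` out
on its own, so the bigger wins probably come later from switching lists
to unboxed arrays.)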
> Also I wonder how his results compare to something like MatLab that
> supposedly has very fast matrix handling functions built in...
Well, matrix multiplication is a built-in MatLab primitive. I would
imagine it's comparable to BLAS (or possibly parallel BLAS)...