http://www.haskell.org/communities/11-2010/html/report.html
Well, there goes the rest of my week...
So far I've already read about all manner of crazy stuff:
+ GHC 7.0 is in development, featuring more new bells & whistles than is
sane for any one system to have.
- The "Jenga" of the type checker has been replaced.
- Improvements to the inliner yield 80% speedups in some cases.
- There's a new LLVM backend.
- There's a new, faster I/O subsystem.
- A long-standing glitch with Parallel Strategies has been fixed.
(And in the process tonnes of new functionality is available.)
- There's work on improving numerical performance (finally!)
- I read a paper where somebody benchmarked the standard containers
library and managed to make it go about 25% faster.
- We might finally get concurrent GC too.
- Microsoft Research is funding further development of Haskell's
parallel capabilities. (!) There's some talk of an MPI binding.
- There are more DPH (Data Parallel Haskell) improvements.
- Tonnes of other stuff too small to mention.
+ UHC (the Utrecht Haskell Compiler) contains a bunch of interesting
stuff. Most notably, they're using an attribute grammar system for
performing tree transformations. It turns out this is simpler and easier
than writing the transformations directly in Haskell - which is rather
surprising, given how good Haskell is at data transformations... (There's
a sketch of what a hand-written transformation looks like just after this
list.)
+ There's a big long paper about The Reduceron. It's basically an FPGA
processor for running Haskell (or something like it).
- First there's a great long section where they explain their design
decisions and various theoretical benchmark results.
- Then they describe the actual design of the hardware, the various
choices they made, and what the final thing looks like.
- Finally they measure the performance of the thing. And it's not too
shabby. A program running on the Reduceron is about 4x slower than the
same program compiled with GHC and running on an Intel Core 2 Duo at
3GHz. Which is very, very impressive when you consider that the
Reduceron runs at a piffling 96MHz. (!) On the other hand, no
floating-point support yet...
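(Purely for illustration, and not anything from the report: here's roughly
what a hand-written tree transformation looks like in plain Haskell. The
Expr type and the simplify pass below are made up; UHC's attribute-grammar
system generates this kind of traversal code rather than having you write
it by hand.)

-- A hypothetical expression type and a tiny simplification pass,
-- written directly in Haskell (no attribute grammar involved).
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

-- Fold away multiplications by 0 and 1, recursing over the whole tree.
simplify :: Expr -> Expr
simplify (Add l r) = Add (simplify l) (simplify r)
simplify (Mul l r) =
  case (simplify l, simplify r) of
    (Lit 0, _)  -> Lit 0
    (_, Lit 0)  -> Lit 0
    (Lit 1, r') -> r'
    (l', Lit 1) -> l'
    (l', r')    -> Mul l' r'
simplify e = e

-- ghci> simplify (Mul (Lit 1) (Add (Lit 2) (Lit 3)))
-- Add (Lit 2) (Lit 3)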
God only knows what I'll find when I scroll down further...
Invisible wrote:
> + GHC 7.0 is in development, featuring more new bells & whistles than is
> sane for any one system to have.
> - The "Jenga" of the type checker has been replaced.
> - Improvements to the inliner yield 80% speedups in some cases.
> - There's a new LLVM backend.
> - There's a new, faster I/O subsystem.
> - A long-standing glitch with Parallel Strategies has been fixed. (And
> in the process tonnes of new functionality is available.)
> - There's work on improving numerical performance (finally!)
> - I read a paper where somebody benchmarked the standard containers
> library and managed to make it go about 25% faster.
> - We might finally get concurrent GC too.
amazing news, huh?
concurrent GC is almost as hard as solving a halting problem... ;p
> - Microsoft Research is funding further development of Haskell's
> parallel capabilities. (!) There's some talk of an MPI binding.
that SPJ guy sure has some ballz.
> + There's a big long paper about The Reduceron. It's basically an FPGA
> processor for running Haskell (or something like it).
> - First there's a great long section where they explain their design
> decisions and various theoretical benchmark results.
> - Then they describe the actual design of the hardware, and the
> various decisions they chose there, and what the final thing looks like.
> - Finally they measure the performance of the thing. And it's not too
> shabby. A program running on the Reduceron is about 4x slower than the
> same program compiled with GHC and running on an Intel Core 2 Duo at
> 3GHz. Which is very, very impressive when you consider that the
> Reduceron runs at a piffling 96MHz. (!) On the other hand, no
> floating-point support yet...
whoa!
how about an OpenCL port. ;)
--
a game sig: http://tinyurl.com/d3rxz9
>> - We might finally get concurrent GC too.
>
> amazing news, huh?
>
> concurrent GC is almost as hard as solving a halting problem... ;p
Have *you* tried it? :-P
Seriously, analysing the structure of a huge chunk of data while it's
still being modified is no picnic.
>> - Microsoft Research is funding further development of Haskell's
>> parallel capabilities. (!) There's some talk of an MPI binding.
>
> that SPJ guy sure has some ballz.
I said MSR is funding it. I didn't say why. It might be nothing to do
with SPJ. There /are/ other people getting paid to use Haskell, you
know. ;-)
>> + There's a big long paper about The Reduceron. It's basically an FPGA
>> processor for running Haskell (or something like it).
>
> whoa!
>
> how about an OpenCL port. ;)
Oh, I gather somebody's doing that for their PhD thesis...
But still, modern CPUs are designed specifically for the way conventional
imperative programs work. Functional programs work in a totally different
way; maybe hardware designed around that could run them faster? It's
worth at least trying it out to see how it goes.
> God only knows what I'll find when I scroll down further...
...and now I'm reading about Agda, a programming language which has
MELTED MY MIND! >_<
Invisible <voi### [at] devnull> wrote:
> >> - We might finally get concurrent GC too.
> >
> > amazing news, huh?
> >
> > concurrent GC is almost as hard as solving a halting problem... ;p
> Have *you* tried it? :-P
> Seriously, analysing the structure of a huge chunk of data while it's
> still being modified is no picnic.
If Haskell requires a garbage collector, it means that objects are not
strictly scope-bound (in other words, objects can outlive the scope where
they were created, rather than being automatically destroyed when the scope
ends). This, consequently, means that objects are not handled by value, but
by reference (or pointer, or whatever you want to call it), which is a
requirement if you want to share the same object with more than one other
object. I know next to nothing about Haskell, but the little I have seen
doesn't look like it would be reference-based code. It *looks* to me like
everything is handled by value.
Could you give a simple example which demonstrates the need for a GC?
--
- Warp
Warp wrote:
> Invisible <voi### [at] devnull> wrote:
>>>> - We might finally get concurrent GC too.
>>> amazing news, huh?
>>>
>>> concurrent GC is almost as hard as solving a halting problem... ;p
>
>> Have *you* tried it? :-P
>
>> Seriously, analysing the structure of a huge chunk of data while it's
>> still being modified is no picnic.
>
> If Haskell requires a garbage collector, it means that objects are not
> strictly scope-bound (in other words, objects can outlive the scope where
> they were created, rather than being automatically destroyed when the scope
> ends). This, consequently, means that objects are not handled by value, but
> by reference (or pointer, or whatever you want to call it), which is a
> requirement if you want to share the same object with more than one other
> object. I know next to nothing about Haskell, but the little I have seen
> doesn't look like it would be reference-based code. It *looks* to me like
> everything is handled by value.
>
> Could you give a simple example which demonstrates the need for a GC?
kinda weird your reasoning there. Java passes things by reference and
still needs a GC. I don't quite see the link between the need for a
GC and passing things by reference or by value.
--
a game sig: http://tinyurl.com/d3rxz9
On 11/11/2010 04:02 PM, Warp wrote:
> I know next to nothing about Haskell, but the little I have seen
> doesn't look like it would be reference-based code. It *looks* to me like
> everything is handled by value.
In normal Haskell code, all data is immutable. It's therefore impossible
to tell whether it's being passed by reference or by value; either
choice makes no difference to the end result. It only affects
performance. This is probably why Haskell "looks like" it passes by value.
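To illustrate (a throwaway example of my own, not something from the
report): "updating" a record simply builds a new value, and the old one is
untouched either way, so you can't observe whether the fields were copied
or shared.

data Person = Person { name :: String, age :: Int } deriving Show

-- Record update builds a fresh value; 'p' itself is never changed.
rename :: String -> Person -> Person
rename newName p = p { name = newName }

main :: IO ()
main = do
  let alice = Person "Alice" 30
      bob   = rename "Bob" alice
  print alice   -- still Person {name = "Alice", age = 30}
  print bob     -- Person {name = "Bob", age = 30}
  -- Whether bob's 'age' was copied or shared with alice is unobservable
  -- from inside the program; it only affects memory use and speed.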
> Could you give a simple example which demonstrates the need for a GC?
I always used to think that creating a data structure whose size is not
statically known required handling it by reference. However, C++ somehow
manages to do this by value, so it seems my assumptions are incorrect.
In fact, thinking about it, you probably /could/ implement Haskell
without GC. It would just be far less efficient, that's all.
Anyway, as an example: Consider the function
linear a b x = a*x + b
You probably think that this function takes three numbers and returns a
number computed from them. Strictly speaking, this isn't true; in fact
this function takes three numbers and returns /an expression/ containing
them. What the caller actually gets back is a reference to the root of
an expression tree. For example, if I do "linear 2 3 5", I get back a
reference to the root of a tree that looks like
        +
       / \
      *   3
     / \
    2   5
If I try to /use/ this value, this will execute the (+) function. The
first thing this does is inspect its arguments, which causes the (*)
function to execute. After it executes, the tree looks like
        +
       / \
      10  3

    2   5
There are now two items of garbage: the 2 and the 5. Further, after (+)
finishes executing, we have
       13

      10  3

    2   5
So now there are four items of garbage (2, 5, 10, 3).
The real situation is of course a bit more complicated than this, but
you get the idea.
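And here's a sketch that doesn't involve laziness at all (made-up names,
mine rather than anything from the report): plain sharing of immutable
data already means a value can outlive the scope that built it, which is
precisely the job a GC exists to do.

shareExample :: ([Int], [Int])
shareExample =
  let shared = [2 .. 10]     -- built once, inside this let
      xs     = 1 : shared    -- shares 'shared' as its tail
      ys     = 0 : shared    -- shares the very same tail
  in (xs, ys)                -- both lists escape the let, and so does 'shared'

main :: IO ()
main = print shareExample    -- 'shared' is still reachable out here

Neither list "owns" the shared tail, so there is no single scope whose
exit could safely free it; something has to notice when the last reference
goes away, and that something is the GC.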
nemesis <nam### [at] gmailcom> wrote:
> Warp escreveu:
> > Invisible <voi### [at] devnull> wrote:
> >>>> - We might finally get concurrent GC too.
> >>> amazing news, huh?
> >>>
> >>> concurrent GC is almost as hard as solving a halting problem... ;p
> >
> >> Have *you* tried it? :-P
> >
> >> Seriously, analysing the structure of a huge chunk of data while it's
> >> still being modified is no picnic.
> >
> > If Haskell requires a garbage collector, it means that objects are not
> > strictly scope-bound (in other words, objects can outlive the scope where
> > they were created, rather than being automatically destroyed when the scope
> > ends). This, consequently, means that objects are not handled by value, but
> > by reference (or pointer, or whatever you want to call it), which is a
> > requirement if you want to share the same object with more than one other
> > object. I know next to nothing about Haskell, but the little I have seen
> > doesn't look like it would be reference-based code. It *looks* to me like
> > everything is handled by value.
> >
> > Could you give a simple example which demonstrates the need for a GC?
> kinda weird your reasoning there. Java passes things by reference and
> still needs a GC.
Isn't that what I said above? You write as if I had said the exact
opposite (i.e. that languages using references don't need GC).
> I don't quite see the link between the need for a
> GC and passing things by reference or by value.
If you handle objects by reference, and you can pass references around,
it means that objects can be shared and that the lifetime of these objects
is not bound to the scope where they were created. If objects outlive the
scope where they were created, you need some kind of GC to destroy them
(unless the language is such that leaking objects is ok).
--
- Warp
Invisible <voi### [at] devnull> wrote:
> I always used to think that creating a data structure whose size is not
> statically known required handling it by reference. However, C++ somehow
> manages to do this by value, so it seems my assumptions are incorrect.
No, dynamically allocated objects can only be handled through a pointer.
You can of course wrap that pointer so that it looks and behaves as if
the allocated object itself were being handled by value (including being
scope-bound).
For example you can handle an instance of std::vector or std::string
by value, but internally they contain a pointer to dynamically allocated
memory (which they manage).
> In fact, thinking about it, you probably /could/ implement Haskell
> without GC. It would just be far less efficient, that's all.
It would be less efficient because it would have to deep-copy the data
whenever it's passed around?
> Anyway, as an example: Consider the function
> linear a b x = a*x + b
> You probably think that this function takes three numbers and returns a
> number computed from them. Strictly speaking, this isn't true; in fact
> this function takes three numbers and returns /an expression/ containing
> them. What the caller actually gets back is a reference to the root of
> an expression tree. For example, if I do "linear 2 3 5", I get back a
> reference to the root of a tree that looks like
>         +
>        / \
>       *   3
>      / \
>     2   5
> If I try to /use/ this value, this will execute the (+) function. The
> first thing this does is inspect its arguments, which causes the (*)
> function to execute. After it executes, the tree looks like
>         +
>        / \
>       10  3
>
>     2   5
> There are now two items of garbage: the 2 and the 5. Further, after (+)
> finishes executing, we have
>        13
>
>       10  3
>
>     2   5
> So now there are four items of garbage (2, 5, 10, 3).
> The real situation is of course a bit more complicated than this, but
> you get the idea.
It looks like a really inefficient way of performing a simple calculation.
--
- Warp
Warp wrote:
> Isn't that what I said above? You write as if I had said the exact
> opposite (ie. that languages using references don't need GC).
doh! I read it wrong...
--
a game sig: http://tinyurl.com/d3rxz9