On 08/09/2011 05:09 AM, Darren New wrote:
> On 9/7/2011 2:22, Invisible wrote:
> > then O(log N) new heap nodes get constructed.
>
> Which, incidentally, is the same number of nodes you'd have to update in
> a mutable heap, too.
Indeed. The only difference is that with Haskell, you *copy* these nodes
rather than simply modify them in-place.
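To make the path-copying point concrete, here is a minimal Haskell sketch (the `Tree`, `insert`, and `toList` names are my own illustration, not any particular library): inserting into an immutable binary search tree allocates new nodes only along the O(log N) search path, while every untouched subtree is shared with the old version.

```haskell
-- Hypothetical persistent binary search tree, for illustration only.
data Tree = Leaf | Node Tree Int Tree

-- Insert rebuilds only the nodes on the search path; the subtree on
-- the other side of each comparison is shared, unmodified, with the
-- previous version of the tree.
insert :: Int -> Tree -> Tree
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r  -- new node; r is shared
  | x > v     = Node l v (insert x r)  -- new node; l is shared
  | otherwise = t                      -- already present; share everything

-- In-order traversal, so we can observe the contents.
toList :: Tree -> [Int]
toList Leaf         = []
toList (Node l v r) = toList l ++ [v] ++ toList r
```

Because the old root is never modified, the previous version of the tree remains valid after an insert; the same sharing argument is why an immutable heap only copies the O(log N) spine it actually touches.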
>> But you don't ever have to worry about making one thread
>> wait to update something until after the other thread has definitely
>> finished using it.
>
> That's not strictly true. It's entirely possible in modern processors
> that the pointer gets written and is visible to other processors before
> the data it points to gets flushed from the cache into main memory.
No, that's exactly what the "cache coherence" protocol (together with the
memory barriers the runtime issues) prevents.
>> Immutable arrays are great as super-fast lookup tables, but unless the
>> data
>> changes very infrequently, you end up needing mutable arrays. Immutable
>> arrays just aren't all that useful.
>
> Some of the google stuff is cool in that way. You can imagine you don't
> want to update the entire file that holds the entire web every time you
> scan a new document.
It defies my comprehension that what Google does is actually physically
possible, but anyway...