> 1. divide the list into manageable sub-lists (pages)
> 2. add up the sub-lists and get sub-totals
> 3. add up the sub-totals and get the sum of the list
> vs.
> 1. 0 for an empty list
> 2. number for a list with 1 number
> 3. number + number for a list with 2 numbers
> 4. sum of the first half of list + sum of the second half of list,
> for a list with at least 3 numbers.
> I think those require random access, which is not available for linked
> lists.
I'm not sure that "list" always means a linked list in a
functional language. They can be optimized into arrays
behind the curtain, and you're not free to change
the link pointers because they're treated as immutable
(at least in Haskell).
The first one is very loosely defined: it doesn't
say what counts as manageable, and it doesn't say how to
add those chunks together, or in what order. It's
easy to imagine processing it in parallel, or reading
numbers from a file in chunks and adding them up
sequentially as you go along. Or, if manageable means
two at a time, it could be identical to Andrew's.
The second describes an algorithm, not the data
structures needed to implement it. It's probably
too specific in parts, yet very unspecific about
how to determine where to split the list in half,
or about differences in performance between using it
for boxed or unboxed number types.
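For what it's worth, the quoted split-in-half definition can be written directly in Haskell without random access; splitAt just walks the list, paying a linear cost per split. A rough sketch (the function name is mine):

```haskell
-- A sketch of the quoted split-in-half definition. splitAt on a
-- linked list is O(n), so no random access is needed -- each split
-- simply costs a linear walk.
sumHalves :: Num a => [a] -> a
sumHalves []     = 0
sumHalves [x]    = x
sumHalves [x, y] = x + y
sumHalves xs     = sumHalves front + sumHalves back
  where (front, back) = splitAt (length xs `div` 2) xs

main :: IO ()
main = print (sumHalves [1 .. 100 :: Int])
```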
There's nothing overwhelmingly "functionally biased"
about a recursive algorithm, but providing a
description of the desired results and then letting the
rest sort itself out probably is. That's not always a
good thing: machine code is imperative, so every
bit of code ultimately describes actions, and the separation
of functions from actions is artificial.
Post a reply to this message
I added a couple of lines:
function sum(list)
> var total = 0;
> var position = list.start();
> while (position != null)
> {
> total = total + position.get_value();
> position = position.get_next();
> }
return total
end
> Pretty much a verbatim match there.
OK, so for your "Other Method" we have:
function sum(list)
    if( list.start() == NULL )
        return 0
    else
        return list.get_value() + sum(list.get_next())
    endif
end
> And of course, accessing parts of a list requires splitting them, and
> recursion isn't looping. ;-)
I suspect the recursive method is slower, because calling a function has
some overhead (storing and recalling registers and the return address on the
stack, etc.).
scott wrote:
> I added a couple of lines:
OK, fair enough.
> OK, so for your "Other Method" we have:
>
> function sum(list)
> if( list.start() == NULL )
> return 0
> else
> return list.get_value() + sum(list.get_next())
> endif
> end
Or, alternatively,
sum [] = 0
sum (x:xs) = x + sum xs
as we say in Haskell. The English translation was attempting to preserve
the structure without actually using Haskell language syntax. But either
way, you get the idea.
The important thing is you don't say "if the list is empty then *return*
0", you say "if the list is empty then the total is *defined as* 0". That's
the main difference. (And yes, it's a conceptual difference rather than
an implementation difference. And that's kind of what I wanted to
highlight - without going into a long technical debate.)
>> And of course, accessing parts of a list requires splitting them, and
>> recursion isn't looping. ;-)
>
> I suspect the recursion method is slower, because calling a function has
> some overheads (storing and recalling registers and return address on
> stack etc).
Not when the compiler can detect tail recursion and transform it into a
normal loop structure automatically.
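(Strictly, the sum above is not itself tail recursive - the + happens after the recursive call returns. The version a compiler can turn into a plain loop is the accumulator-passing one. A rough sketch, names mine, with seq added to keep the accumulator strict:)

```haskell
-- An accumulator-passing sum that IS tail recursive: the recursive
-- call is the last thing `go` does, so it can compile to a jump.
-- `seq` forces the accumulator so unevaluated thunks don't pile up.
sumAcc :: Num a => [a] -> a
sumAcc = go 0
  where
    go t []     = t
    go t (x:xs) = t `seq` go (t + x) xs

main :: IO ()
main = print (sumAcc [1 .. 1000 :: Int])
```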
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
> Not when the compiler can detect tail recursion and transform it into a
> normal loop structure automatically.
So actually, you had the same thought to start with (I want to sum all the
elements) and the end result is the same assembly code. So the only real
point is whether you find writing out and thinking about the first solution
easier than the second or not. In your own words:
"3. The second version looks like some kind of a riddle. Not a hard
riddle, but a somewhat baffling definition all the same."
So why on Earth bother with the riddle in the first place?
Here are a couple of changes you can try to make to the Haskell version:
1) Modify the code so that it stores the sum up to each element in that
element.
2) Modify the code so that it sums up nodes in a tree structure, starting
from any child.
In the first version, it's very easy to implement. For 1) you just add:
position.sumToHere = total
in the loop.
For 2), you change the position = line to:
position = position.get_parent()
All very intuitive, requiring little modification to the sum function.
scott wrote:
> So actually, you had the same thought to start with (I want to sum all
> the elements) and the end result is the same assembly code. So the only
> real point is whether you find writing out and thinking about the first
> solution easier than the second or not. In your own words:
>
> "3. The second version looks like some kind of a riddle. Not a hard
> riddle, but a somewhat baffling definition all the same."
>
> So why on Earth bother with the riddle in the first place?
If all you're trying to do is sum all the elements in a list, neither
approach has a really compelling advantage or disadvantage. I used it as
an example of a simple task that anybody can easily understand, and used
it to illustrate the difference in mentality between the two approaches.
> Here's a couple of changes you can try to make to the Haskell version:
>
> 1) Modify the code so that it stores the sum up to each element in that
> element.
Uh, OK.
sums xs = sums' 0 xs
  where
    sums' t []     = [t]
    sums' t (x:xs) = t : sums' (t+x) xs
Takes a list of numbers, returns a list of intermediate sums. The first
element is always 0, the last element is always the total sum of the
entire list.
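(Incidentally, the standard Prelude already packages this accumulate-as-you-go pattern as scanl, so the same function can be a one-liner - my observation, not from the post:)

```haskell
-- The running-sums function above, spelled with the Prelude's scanl:
-- a left scan with (+) starting from 0.
sums :: Num a => [a] -> [a]
sums = scanl (+) 0

main :: IO ()
main = print (sums [1, 2, 3 :: Int])  -- [0,1,3,6]
```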
> 2) Modify the code so that it sums up nodes in a tree structure,
> starting from any child.
Erm... I don't really understand what you mean.
> All very intuitive, requiring little modification to the sum function.
Similarly, in Haskell I can define a single function that iterates over
a list, and then write one-liners that will find the sum / product /
minimum / maximum of the list, or the logical AND / OR of a list of
booleans, or concatenate a list of lists, or... All very intuitive and
easy to write once you have the iteration function.
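A sketch of what that might look like, assuming a hand-rolled right fold (in GHC this is just foldr; all the names below are mine):

```haskell
-- One generic iteration function (a right fold)...
myFold :: (a -> b -> b) -> b -> [a] -> b
myFold _ z []     = z
myFold f z (x:xs) = f x (myFold f z xs)

-- ...and the one-liners written on top of it.
mySum, myProduct :: Num a => [a] -> a
mySum     = myFold (+) 0
myProduct = myFold (*) 1

myAnd, myOr :: [Bool] -> Bool
myAnd = myFold (&&) True
myOr  = myFold (||) False

myConcat :: [[a]] -> [a]
myConcat = myFold (++) []

main :: IO ()
main = print (mySum [1, 2, 3, 4 :: Int])
```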
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
> If all you're trying to do is sum all the elements in a list, neither
> approach has a really compelling advantage or disadvantage. I used it as
> an example of a simple task that anybody can easily understand, and used
> it to illustrate the difference in mentality between the two approaches.
I guess the difference would be in more complex examples then, when both the
human C++ coder and the Haskell compiler have to work harder to get
efficient code.
Invisible <voi### [at] devnull> wrote:
> > I still like the definition of "connected" in the New Zealand go rules:
> >
> > "Two stones of the same colour are connected if they are on adjacent
> > intersections or if they are both connected to a third stone."
> >
> > Even most functional programmers have a hard time understanding that
> > correctly the first time they see it.
> Makes perfect sense to me - but then, I'm a maths nerd... ;-)
Does it? Can you give me several varied examples of connected stones,
as well as examples where two stones are *not* connected according
to this definition?-)
--
- Warp
scott wrote:
>> If all you're trying to do is sum all the elements in a list, neither
>> approach has a really compelling advantage or disadvantage. I used it
>> as an example of a simple task that anybody can easily understand, and
>> used it to illustrate the difference in mentality between the two
>> approaches.
>
> I guess the difference would be in more complex examples then, when both
> the human C++ coder and the Haskell compiler have to work harder to get
> efficient code.
If all you're after is the ultimate machine efficiency, C can probably
still do that better than almost any other language, given enough skill.
But what about factors such as code size, readability, maintainability,
code reuse, polymorphism, etc etc etc?
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Invisible <voi### [at] devnull> wrote:
> sum [] = 0
> sum (x:xs) = x + sum xs
Don't you love how complicated something like this looks in C++:
template<typename Iterator>
typename std::iterator_traits<Iterator>::value_type
sum(Iterator begin, Iterator end)
{
    typedef typename std::iterator_traits<Iterator>::value_type t;
    t result = t();
    while(begin != end) result += *begin++;
    return result;
}
Its advantage, though, is that it works with *any* container type
as long as it has sequential iterators, and *any* value type, as long
as it supports the + operator. So you can do for example like this:
int table[] = { 1, 2, 3, 4, 5, 6, 7, 8 };
int result = sum(table, table+8);
or like this:
std::string table[] = { "a", "b", "cde", "f" };
std::string result = sum(table, table+4);
or like this:
std::vector<double> table(100);
// init the vector
double result = sum(table.begin(), table.end());
or like this:
std::list<std::complex<double> > l;
// init the list
std::complex<double> result = sum(l.begin(), l.end());
or like this:
std::set<std::string> s;
// init the set
std::string result = sum(s.begin(), s.end());
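For comparison, Haskell gets a similar kind of genericity from type classes rather than templates. A rough analogue (my sketch, not from the thread) that works for any Foldable container and any Monoid element type - including the string case above, since strings form a monoid under concatenation:

```haskell
import Data.Monoid (Sum(..))

-- A monoid-based "sum": works for any container that is Foldable and
-- any element type that is a Monoid, much as the C++ template works
-- for any iterator range whose value type supports +.
msum' :: (Foldable t, Monoid a) => t a -> a
msum' = foldr (<>) mempty

main :: IO ()
main = do
  putStrLn (msum' ["a", "b", "cde", "f"])     -- string concatenation
  print (getSum (msum' (map Sum [1 .. 8 :: Int])))  -- numeric sum via Sum
```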
--
- Warp
Invisible wrote:
> If all you're after is the ultimate machine efficiency, C can probably
> still do that better than almost any other language, given enough skill.
... on a machine whose architecture is well-matched to C. And given
enough skill.
A COBOL interpreter in C isn't going to be nearly as fast as a COBOL
interpreter in microcode. (Yes, btdt. :-) It's going to be hard to beat
one clock cycle per byte for an "edit byte string" (similar to "print
using" in BASIC) using a C function.
A FORTH chip is going to run FORTH way faster than it'll run the same
algorithm written in a way that looks like reasonable C, I expect.
Put 128 cores on a chip connected in a hypernet, and you're going to
have a heck of a time writing an efficient sort algorithm in C for that.
Put a big pile of SIMD processors together, and you're not going to be
able to program it in C at all. APL maybe, but not C.
C makes a whole pile of assumptions about processor architecture that
most people don't notice because most processors they use have evolved
to account for what C does. But the assumptions are there.
--
Darren New / San Diego, CA, USA (PST)
"That's pretty. Where's that?"
"It's the Age of Channelwood."
"We should go there on vacation some time."