Warp wrote:
> I don't know why, but I'm still understanding that as "O(1) doesn't make
> sense".
You realize that in O(1), the 1 is the *output* of the analysis, right?
> The only way for an algorithm to be O(1) is that it does *not* depend on
> any amount of input.
Correct. The algorithm runs the same number of steps regardless of how much
input you give it. That's O(1). "Return the first element in your input" is
an O(1) operation.
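To put that in code form, here's a minimal sketch (Python, purely illustrative):

    # The number of steps here is the same whether xs holds ten elements
    # or ten million, so the analysis comes out to O(1).
    def first_element(xs):
        return xs[0]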
> You seem to be arguing that the O notation does not
> describe such algorithms. Thus O(1) is nonsensical.
No. I'm saying O() is nonsensical if you have an algorithm that only takes a
fixed-size input. O() isn't defined for algorithms that can't handle
unbounded-size inputs, since it describes how many resources an algorithm
consumes as it runs on increasingly larger inputs. If it doesn't run at
all on increasingly larger inputs, O() isn't defined.
> But O(1) is not nonsensical. It's commonly used.
It's commonly used, but not on algorithms with fixed-size inputs. If you
have a sort that only accepts 10 integers to sort, O() isn't defined,
because O() asks how fast the algorithm runs as you give it more than 10
inputs. If the answer is "it crashes with more than 10 inputs", it makes no
sense to ask how fast the answer comes back.
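Concretely, here's a minimal sketch of what I mean (Python, hypothetical
function, just to illustrate):

    # This "sort" only ever accepts exactly 10 integers.  There is no n to
    # grow, so "how does its running time scale with n?" has no answer.
    def sort_ten(values):
        if len(values) != 10:
            raise ValueError("exactly 10 integers required")
        # plain insertion sort over the fixed ten slots
        for i in range(1, 10):
            key = values[i]
            j = i - 1
            while j >= 0 and values[j] > key:
                values[j + 1] = values[j]
                j -= 1
            values[j + 1] = key
        return values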
> "How many steps are needed to sort an array of 10 elements?"
That's not a question that is answered with O() notation, exactly because
every single operation with a fixed-size input runs in the same length of
time, as far as O() is concerned. It's like asking what answer is returned
by a function that gets stuck in an infinite loop. "Answer" isn't defined in
that case.
You may want to say "Any algorithm that only runs on a fixed-size input I
define as running in O(1)", but that's an addition to the definition that
everyone else uses.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."