Darren New <dne### [at] sanrrcom> wrote:
> Warp wrote:
> > Darren New <dne### [at] sanrrcom> wrote:
> >> No, it makes perfect sense as long as some input value is getting
> >> progressively larger. (Or heading towards a value that takes progressively
> >> more time or space.) O(1) is the *output* when the *input* grows without
> >> bound. If your input grows without bound, the function isn't meaningful.
> >
> > I didn't understand that.
> I'm saying O() is defined as "a function that's always bigger than *your*
> function, even as the input to your function grows without bound." If your
> input doesn't grow without bound, O() isn't well defined.
> O() is defined as an asymptotic value. You can't have an asymptote of one value.
> It's like asking whether you can solve the halting problem for one program I
> give you. That isn't "the halting problem", so the answer is in a completely
> different class.
I don't know why, but I still read that as "O(1) doesn't make sense".
The only way for an algorithm to be O(1) is for its running time *not* to
depend on the amount of input. You seem to be arguing that the O notation
does not describe such algorithms, which would make O(1) nonsensical.
But O(1) is not nonsensical. It's commonly used.
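
A minimal sketch of the distinction in C (my own illustration, not code
from anyone in this thread):

    #include <stddef.h>

    /* O(1): a single array access; the work done is a constant,
       no matter how large n is. */
    int first_element(const int *a, size_t n)
    {
        (void)n;        /* the input size never enters the cost */
        return a[0];    /* assumes n >= 1 */
    }

    /* O(n): the loop's work grows linearly with the input size. */
    long sum(const int *a, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }

Both functions accept an arbitrarily large input; the first simply does a
bounded amount of work on it, and that is exactly what O(1) describes.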
--
- Warp