Warp wrote:
> Darren New <dne### [at] san rr com> wrote:
>> No, it makes perfect sense as long as some input value is getting
>> progressively larger. (Or heading towards a value that takes progressively
>> more time or space.) O(1) is the *output* when the *input* grows without
>> bound. If your input doesn't grow without bound, the function isn't meaningful.
>
> I didn't understand that.
I'm saying O() is defined as "a function that's always bigger than *your*
function, even as the input to your function grows without bound." If your
input doesn't grow without bound, O() isn't well defined.
O() is defined as an asymptotic value. You can't have an asymptote of one value.
It's like asking whether you can solve the halting problem for one program I
give you. That isn't "the halting problem", so the answer is in a completely
different class.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
Darren New <dne### [at] san rr com> wrote:
> Warp wrote:
> > Darren New <dne### [at] san rr com> wrote:
> >> No, it makes perfect sense as long as some input value is getting
> >> progressively larger. (Or heading towards a value that takes progressively
> >> more time or space.) O(1) is the *output* when the *input* grows without
> >> bound. If your input doesn't grow without bound, the function isn't meaningful.
> >
> > I didn't understand that.
> I'm saying O() is defined as "a function that's always bigger than *your*
> function, even as the input to your function grows without bound." If your
> input doesn't grow without bound, O() isn't well defined.
> O() is defined as an asymptotic value. You can't have an asymptote of one value.
> It's like asking whether you can solve the halting problem for one program I
> give you. That isn't "the halting problem", so the answer is in a completely
> different class.
I don't know why, but I'm still understanding that as "O(1) doesn't make
sense".
The only way for an algorithm to be O(1) is that it does *not* depend on
any amount of input. You seem to be arguing that the O notation does not
describe such algorithms. Thus O(1) is nonsensical.
But O(1) is not nonsensical. It's commonly used.
--
- Warp
>> OK, so let me get this straight:
>>
>> If there exists a property that some programs have and some other
>> programs do not have, it is impossible to tell which programs have
>> this property.
>
> Almost. The "property" is a property of the language the program
> accepts, not the program. Note that a "language" in this case isn't the
> programming language, but the mapping from inputs to outputs. In other
> words, a "property" is a relationship between input and output, not a
> property of the text of the program.
>
> A "property" of a program is the subset of all possible functions that
> has that property. The property "halts when given an empty string" is
> defined as "the set of all programs that halt when given an empty
> string." The property "outputs the word 'green' somewhere" means "the
> set of all programs that output the word 'green' somewhere".
>
> The theorem states that you can't tell what properties a function has.
> In other words, if you look at a property as a collection of functions
> that does *that*, then you can decide whether a function has it only if
> that set is universal or empty.
>
> Spend a couple minutes looking at the "formal statement".
>
>> This seems downright false. For example, some programs contain a
>> variable named "X", while others do not.
>
> That's not a property of the program. That's a property of the source
> code used to describe the program. The exact same program may or may not
> have an X. If you replace X with Y everywhere, the exact same program
> (defined in terms of its inputs and outputs) doesn't have an X.
OK, so Rice's theorem applies only to the input/output behaviour of the
program then?
Does that mean it *doesn't* apply to things like time or space usage?
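A minimal sketch of the property distinction described above, with invented function names (illustrative, not from the thread):

```python
# Two syntactically different programs that compute the same
# input -> output mapping (the "language" in Rice's sense).
def f(xs):
    X = xs[0]        # the source text contains a variable named X
    return X * 2

def g(xs):
    Y = xs[0]        # no X anywhere, yet the same program in Rice's sense
    return Y * 2

# "Contains a variable named X" distinguishes f from g, so it is a
# property of the source text, not of the program. Any Rice-style
# semantic property must hold for both f and g or for neither,
# since their input/output mappings agree.
print(all(f([n]) == g([n]) for n in range(100)))   # True
```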
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 wrote:
> OK, so Rice's theorem applies only to the input/output behaviour of the
> program then?
>
> Does that mean it *doesn't* apply to things like time or space usage?
Correct. If you have any (computable) bound on the time or space usage,
then it's easy to determine if a given program will exceed those bounds
in computing a function by simply running it and checking (note in the
space usage case you might have to run it exponentially longer than the
space bound). If the program runs too long or uses too much space, you
can just terminate it and halt, so this property is decidable.
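The run-and-check idea can be sketched by modelling a program as a single-step transition function (a toy, with hypothetical names `exceeds_bound` and `countdown`):

```python
def exceeds_bound(step_fn, state, bound):
    """Run a program (a step function that returns None when it halts)
    for at most `bound` steps. Because we always stop at the bound,
    the property "uses more than `bound` steps" is decidable."""
    for _ in range(bound):
        state = step_fn(state)
        if state is None:        # halted within the bound
            return False
    return True                  # bound exceeded (it may even run forever)

# Toy program: count down from n and halt at 0.
def countdown(n):
    return None if n == 0 else n - 1

print(exceeds_bound(countdown, 5, 10))   # False: halts within 10 steps
print(exceeds_bound(countdown, 5, 3))    # True: needs more than 3 steps
```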
>> OK, so Rice's theorem applies only to the input/output behaviour of
>> the program then?
>>
>> Does that mean it *doesn't* apply to things like time or space usage?
>
> Correct. If you have any (computable) bound on the time or space usage,
> then it's easy to determine if a given program will exceed those bounds
> in computing a function by simply running it and checking (note in the
> space usage case you might have to run it exponentially longer than the
> space bound). If the program runs too long or uses too much space, you
> can just terminate it and halt, so this property is decidable.
Right. So if I write a pure function (i.e., some code that takes some
input and returns some output but does nothing else - aside from maybe
allocating memory), it is impossible to tell anything about this
function other than how long it's going to take and how much memory it's
going to eat?
Hmm, well that's a fairly disappointing result. Apparently static
analysis is an entirely hopeless endeavour...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 wrote:
> Right. So if I write a pure function (i.e., some code that takes some
> input and returns some output but does nothing else - aside from maybe
> allocating memory), it is impossible to tell anything about this
> function other than how long it's going to take and how much memory it's
> going to eat?
It's slightly worse than that. You can only *verify* a guess for the
time/space consumption for each input you test. It's always possible
that the program could be entirely correct but exceed these bounds. And
really this is only possible through sort of a "cheat" -- there's not
any interesting static analysis going on anywhere in this.
Doesn't mean static analysis can't be useful in practice though.
Warp wrote:
> I don't know why, but I'm still understanding that as "O(1) doesn't make
> sense".
You realize that in O(1), the 1 is the *output* of the analysis, right?
> The only way for an algorithm to be O(1) is that it does *not* depend on
> any amount of input.
Correct. The algorithm runs the same number of steps regardless of how much
input you give it. That's O(1). "Return the first element in your input" is
an O(1) operation.
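A hedged sketch of that contrast (function names are illustrative, not from the thread):

```python
def first_element(xs):
    # O(1): a single step no matter how long xs is.
    return xs[0]

def sum_elements(xs):
    # O(n), for contrast: the number of steps grows with the input length.
    # Both functions accept inputs of any size, so O() is defined for both.
    total = 0
    for x in xs:
        total += x
    return total

print(first_element([9, 2, 5]))   # 9
print(sum_elements([1, 2, 3]))    # 6
```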
> You seem to be arguing that the O notation does not
> describe such algorithms. Thus O(1) is nonsensical.
No. I'm saying O() is nonsensical if you have an algorithm that only takes a
fixed-size input. O() isn't defined on algorithms that can't handle
unbounded size inputs, since it's defined as how many resources an algorithm
consumes while running on increasingly larger inputs. If it doesn't run at
all on increasingly larger inputs, O() isn't defined.
> But O(1) is not nonsensical. It's commonly used.
It's commonly used, but not on algorithms with fixed-size inputs. If you
have a sort that only accepts 10 integers to sort, O() isn't defined,
because O() asks how fast the algorithm runs as you give it more than 10
inputs. If the answer is "it crashes with more than 10 inputs", it makes no
sense to ask how fast the answer comes back.
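The fixed-size case might be sketched like this (a hypothetical `sort_ten`, assuming Python):

```python
def sort_ten(xs):
    # Accepts exactly 10 elements and rejects everything else, so there
    # is no "as the input grows" behaviour for O() to describe.
    if len(xs) != 10:
        raise ValueError("exactly 10 elements required")
    return sorted(xs)

print(sort_ten(list(range(10, 0, -1))))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```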
> "How many steps are needed to sort an array of 10 elements?"
That's not a question that is answered with O() notation, exactly because
every single operation with a fixed-size input runs in the same length of
time, as far as O() is concerned. It's like asking what answer is returned
by a function that gets stuck in an infinite loop. "Answer" isn't defined in
that case.
You may want to say "Any algorithm that only runs on a fixed-size input I
define as running in O(1)", but that's an addition to the definition that
everyone else uses.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
Orchid XP v8 wrote:
> OK, so Rice's theorem applies only to the input/output behaviour of the
> program then?
Basically. It applies to the execution of an abstract program running on a
given input. (I.e., a particular Turing machine with a particular input tape.)
> Does that mean it *doesn't* apply to things like time or space usage?
I don't know.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
Darren New <dne### [at] san rr com> wrote:
> Warp wrote:
> > I don't know why, but I'm still understanding that as "O(1) doesn't make
> > sense".
> You realize that in O(1), the 1 is the *output* of the analysis, right?
I don't understand that.
> > The only way for an algorithm to be O(1) is that it does *not* depend on
> > any amount of input.
> Correct. The algorithm runs the same number of steps regardless of how much
> input you give it. That's O(1). "Return the first element in your input" is
> an O(1) operation.
How is that different from "sort the first 10 elements of the input"
(which is basically equivalent to "sort an array of 10 elements")?
> > But O(1) is not nonsensical. It's commonly used.
> It's commonly used, but not on algorithms with fixed-size inputs. If you
> have a sort that only accepts 10 integers to sort, O() isn't defined,
> because O() asks how fast does the algorithm run as you give it more than 10
> inputs.
Hence with your "return the first element in your input" O() isn't defined
either.
--
- Warp
Orchid XP v8 wrote:
> function other than how long it's going to take and how much memory it's
> going to eat?
No, you can't tell how long it's going to take or how much memory it'll use.
You can only tell if it's going to take more than X time or more than Y
memory. You do that by running it until it either halts or exceeds that
limit. (And you can tell if you got in a loop using only a fixed amount of
memory, so that's not hit by the halting problem.)
That's different from figuring out the minimum time X that the program is
guaranteed not to exceed, for example.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."