> Of course, games trying to hit the christmas season, or cars going into
> production, all count as actual deadlines, as do trade show demos in the
> computer business.
You just have to ask yourself, in each case, what the consequences of
missing the deadline are. That should tell you how much of a "real" deadline
it is...
> If it's part of operating the phone, sure. If it's an add-on game or
> something, probably less so. When you're trying to cut battery usage so
> you turn off the receiver half-way thru getting a packet because you can
> tell from the first half that you're not interested in the second half,
> that's squeezing out performance.
I guess they didn't bother with such things in the iPhone :-)
>>>> I especially like the claims of "every time we added debug code, the
>>>> bug went away". Surely it is impossible to work under such
>>>> conditions...
>>>
>>> Well, I tell you that's pure reality.
>>
>>> Yeah, it's kind of fun developing software for devices where
>>> occasionally the only way to get any helpful debug information out of
>>> the thing is by switching an LED on or off, and errors might just as
>>> well be hardware-related.
>>
>> Wouldn't this make it scientifically impossible to deliver a working
>> product?
>
> I'm not sure what you mean by this.
>
> If you're saying, "wouldn't this make it impossible to prove that the
> product is working properly", then yes: Such a proof is impossible for
> virtually /every/ real-life product. Welcome to real-world software (and
> hardware) development.
>
> If you're saying, "if you can't get any debug information out of the
> device except for an LED, how can the product be of use anyway?", then
> you're missing the fact that there might be other interfaces to the
> outside world, which just happen to be unavailable for debugging
> information (e.g. they're not initialized yet when the error occurs, or
> the interface protocol provides no means to introduce any "side channel").
No, what I'm saying is "if adding debug code alters all the bugs, how
does that not make it scientifically impossible to remove all the bugs?"
> From what I've seen, only games programmers go to the extremes of using
> obscure and elaborate low-level hackery to squeeze every last femtosecond
> of performance out of the machine. Other programmers usually don't have
> such ardent performance limits to worry about. (Usually.)
>
> Which is not to say that all codebases that aren't games are beautifully
> written... just that games typically have ludicrous performance
> requirements and insane release schedules.
Imagine writing software for an MP3 player or phone: not only do you have an
insane release schedule, you also need to fit in with the development teams
and testing schedules for the electronics and mechanical parts. It's not as
easy as you think; no mass-produced product has the luxury of weeks of spare
time to optimise.
> I take it you've never seen a program written for a cell phone, a
> set-top box, or an MP3 player?
No.
I have also never seen a program written to control a nuclear reactor.
I'm guessing that for such a program, machine efficiency is irrelevant
(if we need a faster PC, we'll just buy one), and program correctness is
the *only* criterion of any significance. I bet people writing such
software don't just accept random hangups and crashes as an unavoidable
fact of life. They fix them. All of them.
Anyway, embedded programming and safety-critical programming are both
very specialist. I doubt many people will ever see such code in their
lives. And I guess you could argue that games programming is pretty
specialist too... "Most" programs, after all, are either desktop
applications or web applications, and most of them just push data around.
Sabrina Kilian wrote:
> It looked like the problem was that the voidp was being free()ed in the
> event handler, and the data they wanted to pass into the event handler
> was something that couldn't/shouldn't be sent to free(). Maybe copying
> that data out to a malloc block would have taken too much time, since it
> was for controller inputs.
The way I read it, they had a structure with two integers and a void
pointer. "But we couldn't just add an extra integer field, because we'd
have to change a few thousand other functions." I'm not seeing why
changing a structure to have a new field which is only used in certain
places is a problem.
Then again, we're talking about a large, complex codebase. Maybe there's
something relevant they forgot to mention or something...
> No, what I'm saying is "if adding debug code alters all the bugs, how
> does that not make it scientifically impossible to remove all the bugs?"
It's virtually impossible to remove them all anyway (heck, you won't
even /notice/ some of the bugs).
And of course one develops an instinct where such Heisenbugs /really/
originate.
It's also the type of bug where a long weekend (or sometimes a long
week) of "extreme debugging" will usually get you somewhere.
The worst problems are those where the error mechanism is plain obvious,
but is inherent in the software design, and fixing it invariably creates
a bug somewhere else, which can only be fixed by introducing a third
one, which in turn can only be fixed by re-introducing the first one...
I've seen such things happen, with bugs "oscillating" like that for a
month or so before people decided to go back to the drawing board and
redesign that part of the application. That was an embedded system - in
a large software project for an insurance company I was briefly involved
in, some bugs had allegedly been oscillating between "fixed" and
"broken" for years already.
Fortunately, that kind of bug doesn't suit my working style: repetitive
tasks - i.e. anything I encounter more than once - typically prompt me
to invest a lot of time and energy into making sure I never have to
spend any significant time on them a third or even fourth time.
>> No, what I'm saying is "if adding debug code alters all the bugs, how
>> does that not make it scientifically impossible to remove all the bugs?"
>
> It's virtually impossible to remove them all anyway (heck, you won't
> even /notice/ some of the bugs).
Sure. But if you know there's a bug there, but it vanishes every time
you try to fix it, then even though you know it's there, it must surely
be impossible to actually fix.
> And of course one develops an instinct where such Heisenbugs /really/
> originate.
Well, yeah, there is that. If you don't *do* crazy stuff with pointer
arithmetic in the first place, you're less likely to have problems.
> The worse problems are those where the error mechanism is plain obvious,
> but is inherent in the software design, and fixing it invariably creates
> a bug somewhere else, which can only be fixed by introducing a third
> one, which in turn can only be fixed by re-introducing the first one...
In other words, the design of your application doesn't let you solve the
problem properly, and the only way to get it to work is with a hack that
doesn't work properly, which then introduces more problems that can only
be fixed with hacks... until eventually you figure out that it's less
work to redesign your application to work right in the first place.
[Can you tell how many large programs I've worked on?]
Invisible schrieb:
> I have also never seen a program written to control a nuclear reactor.
> I'm guessing that for such a program, machine efficiency is irrelevant
> (if we need a faster PC, we'll just buy one), and program correctness is
> the *only* criteria of any significance. I bet people writing such
> software don't just accept random hangups and crashes as an unavoidable
> fact of life. They fix them. All of them.
No. *Those* people make sure random hangups and crashes *never ever*
happen in the first place.
Well, at least I hope so...
Then again, nuclear reactors are designed in such a way that they don't
necessarily need a computer at all.
Invisible schrieb:
> The way I read it, they had a structure with two integers and a void
> pointer. "But we couldn't just add an extra integer field, because we'd
> have to change a few thousand other functions." I'm not seeing why
> changing a structure to have a new field which is only used in certain
> places is a problem.
It wasn't a structure: It was a callback function signature.
That is, there was some module to dispatch events; other modules would
ask this dispatcher module to call particular functions, as in "if a
controller button press comes in, please call me back; my address is XYZ".
Of course the dispatcher function would have to know what parameters to
pass to the function stored at address XYZ. For this purpose, all such
callback functions would conform to the same set of parameters - in this
case two integers and a pointer.
Something along these lines.