"Orchid XP v7" <voi### [at] devnull> wrote in message
news:471e2d2f$1@news.povray.org...
> >> not just wasted because lazy programmers couldn't be bothered to fix
> >> their code.
> >
> > Why on Earth should they optimise a program for minimum-RAM footprint
> > that uses only 10MB in the first place??!?! It would be a total WASTE
> > of time and money for precisely ZERO benefit.
>
> It saddens me that these days people think producing a superior product
> is "a waste of time".
It is a waste when there's no return on the time spent. Software is a
business. If you write software for a software house and tell the planners
and designers that you need a week to optimise the code so that it takes a
quarter of the memory it would otherwise need, they're likely to tell you to
get lost. That week is a week not spent adding features to the program, and
it's features that sell, not efficiency or beauty of code.
That's the reality of the software business. When you're writing code for
fun, you can spend several days getting things as optimal as you like,
because you're working on your own time.
Ref: http://www.codinghorror.com/blog/archives/000980.html
>> Why on Earth should they optimise a program for minimum-RAM footprint
>> that uses only 10MB in the first place??!?! It would be a total WASTE of
>> time and money for precisely ZERO benefit.
>
> It saddens me that these days people think producing a superior product is
> "a waste of time".
Look, nobody in the world apart from you cares if Word uses up 10MB or 6MB
of RAM, Microsoft are not going to employ 50 people for a few months to go
through and optimise for RAM usage just to make you feel better. In fact,
if they did that they'd probably sacrifice speed by using slower but more
compact data structures somewhere, compressed graphics, or the like. I'm
pretty sure that they don't just randomly insert code to use up RAM for no
reason...
When they are writing something like Word, which they know is going to use a
tiny % of RAM compared to what everyone has, I am pretty sure they
prioritise it loading up and doing stuff quickly rather than using a minimum
amount of RAM. Why do it the other way round?
> You say "only" as if 10 MB is a small amount of RAM...
I can buy a 1024 MB USB stick for £5.87 (from ebuyer.com). That makes 10 MB
cost about 6p. Fast RAM for a computer is about 5 or 6 times that price -
not much is it? The cheapest machine on the Dell website (£359) comes with
1024 MB RAM. I open up task manager and I have 21 processes using more than
10MB of RAM. To me, 10MB seems pretty small.
> Given how badly many of the PCs at work struggle to run Word, I doubt
> that...
It's not our fault your company has updated the software without updating
the hardware in 10 years.
> I guess it just comes down to how frustrating it is that my PC takes 30
> seconds to load or close any given application.
Get a new PC dude - I'm pretty sure that even if you spend £50 on ebay you'd
be way better off. Why are you still using an ancient pile of junk to try
and run modern software?
> I mean, 20 *years* ago computers could do that instantaneously with a
> fraction of the RAM and CPU power. Why are we not coding like that any
> more??
Because we (well, most of us) have better computers than we did 20 years
ago?
On Wed, 24 Oct 2007 09:08:59 +0200, scott wrote:
> Look, nobody in the world apart from you cares if Word uses up 10MB or
> 6MB of RAM,
Um, not nobody. It's bothered me for a long time that as computers get
faster and faster, coding gets sloppier and sloppier.
I realise the realities of software development (given that I work for a
software company), but if coders were taught to make their code efficient
to start with, then it would be less of a problem. The problem doesn't
start in the software development houses, it starts at school when
students are learning how to code and are taught bad habits from the
start.
If coders learned how to write efficient code to start with, there would
be less wastage of memory in a system (regardless of whether it's a 64K
system or a 4 GB system), and wasting less memory means more can be done
with the system at any given time.
Jim
On Wed, 24 Oct 2007 12:48:01 -0400, Jim Henderson wrote:
> The problem doesn't
> start in the software development houses, it starts at school when
> students are learning how to code and are taught bad habits from the
> start.
As an example, I remember a core component of a popular server OS that
would, under certain circumstances, display negative disk space free.
This looks quite stupid when it happens - since clearly that's not a
possible situation. The bug was deemed to be a "cosmetic" error and was
never fixed. Customers looked at it and said "if that sort of error
exists, what am I *not* seeing?"
To someone with a little background in C (the language used for the tool
in question), it's clear that the problem was either (a) sloppy
declaration of the variable type, letting the default signed value be
used rather than specifying an unsigned data type or (b) poor use of the
formatting specification when the data was output, using a signed output
type when an unsigned data type was used.
Either way is not good. It's an easy mistake to make, but if developers had
internalized the need to be specific about their types, this sort of thing
wouldn't happen.
Jim
Jim Henderson <nos### [at] nospamcom> wrote:
> (a) sloppy
> declaration of the variable type, letting the default signed value be
> used rather than specifying an unsigned data type
How exactly would that solve the problem? The only difference would be
that instead of seeing something like "-1234" you would see something like
"4294966062" (or "18446744073709550382" if they are using 64-bit values),
which isn't any more helpful. In fact, it's actually worse because it's
even more confusing.
On a different note, I have been changing my habit with regard to this.
In the past I followed the practice "always use unsigned values for things
where negative values don't make sense, only use signed values where negative
values make sense and are possible". However, I have noticed more and more
how this causes more problems than it's worth.
The current trend in programming guides is becoming more "always use
signed values unless there's a good reason not to", and I am starting
to agree with that.
One good example: Assume you have a bitmap in memory. The dimensions of
bitmaps are always positive. Negative values don't make any sense with such
a thing as bitmap dimensions. These dimensions will never be negative.
Thus it would seem to make sense to use unsigned values to represent the
dimensions of the bitmap, right?
However, suppose that you want to draw the bitmap on screen, at a given
position. The position of the bitmap on the screen is given as pixel
coordinates so that the center of the bitmap is located at those coordinates.
It's perfectly valid if the bitmap is partially outside the screen. This
means that the upper-left corner screen coordinates of the bitmap can be
negative. The center coordinates themselves could be negative too.
Usually when you draw a bitmap on screen you specify its position on
screen by its upper left corner coordinate. Thus you would calculate these
coordinates with something like (x - width/2, y - height/2).
Since x and y are signed integers and width and height are unsigned
integers, you are now mixing signed and unsigned integers, causing some
implicit conversions, and possibly producing a compiler warning.
Moreover, comparing (signed) coordinates with the (unsigned) bitmap
dimensions is even more likely to give you problems, or at least compiler
warnings, for example in something like "if(x < width)".
The handiest way of doing this is to keep the bitmap dimensions as
signed integers. Even though they never get negative values, they can
be part of expressions which result in negative values, and thus there
will not be any surprising problems with implicit conversions nor compiler
warnings.
--
- Warp
On Wed, 24 Oct 2007 13:21:47 -0400, Warp wrote:
>> (a) sloppy
>> declaration of the variable type, letting the default signed value be
>> used rather than specifying an unsigned data type
>
> How exactly would that solve the problem? The only difference would be
> that instead of seeing something like "-1234" you would see something
> like "4294966062" (or "18446744073709550382" if they are using 64-bit
> values), which isn't any more helpful. In fact, it's actually worse
> because it's even more confusing.
Because the problem was caused by an overflow condition in the
calculator. IIRC, I did the math, and the value was in fact correct,
just with the wrong sign. But the fact that it evaluated as a negative
number threw up all sorts of other issues with regard to filesystem
compression, and problems with server-based programs that were checking
for free space and seeing none available when in fact there was more than
enough.
Jim
scott wrote:
> Look, nobody in the world apart from you cares
Yeah, I noticed. And I guess that's why this situation has been allowed
to come into existence.
> Microsoft are not going to employ 50 people for a few months
> to go through and optimise for RAM usage just to make you feel better.
Indeed no - their job is to research new techniques for slowing software
down as much as possible to boost sales of expensive new hardware.
(Presumably this is why the hardware vendors love them so much...)
> I'm pretty sure that they don't just randomly insert code to use
> up RAM for no reason...
Oh really? Do you have any actual evidence for that?
(I doubt most people make inefficient code on purpose, but M$ I'm not so
sure about.)
>> You say "only" as if 10 MB is a small amount of RAM...
>
> I can buy a 1024 MB USB stick for £5.87 (from ebuyer.com). That makes
> 10 MB cost about 6p. Fast RAM for a computer is about 5 or 6 times that
> price - not much is it?
Now I'm puzzled - when I bought 1 GB of RAM for my PC, I had to pay
several hundred pounds for it... Am I living in an alternate reality or
something?
>> I mean, 20 *years* ago computers could do that instantaneously with a
>> fraction of the RAM and CPU power. Why are we not coding like that any
>> more??
>
> Because we (well, most of us) have better computers than we did 20 years
> ago?
And that's just it, isn't it?
Why bother fixing the problem when you can just throw more hardware at it?
It's like those stories you read about on The Daily WTF where some idiot
puts together a horribly inefficient SQL system, and rather than make
the obvious change to improve performance, the client ends up buying a
small cluster of high-end servers. What the hell is WRONG with the world?!
scott wrote:
> Look, nobody in the world apart from you cares if Word uses up 10MB or
> 6MB of RAM, Microsoft are not going to employ 50 people for a few months
> to go through and optimise for RAM usage just to make you feel better.
On the other hand, they *do* optimize disk usage. Ever look at the
full-blown "advanced" tab on the installer? There's probably 50
different independent packages you can decide to install or not.
>> I mean, 20 *years* ago computers could do that instantaneously with a
>> fraction of the RAM and CPU power. Why are we not coding like that any
>> more??
>
> Because we (well, most of us) have better computers than we did 20 years
> ago?
And 50 years ago, people were hand-compiling machine code, punching it
on tape, and watching the results by actually looking at the bits in
memory as they glowed on the screen. So? :-)
I'm still surprised I can get a shaded, textured 3D scene with calculated
shadows refreshed faster than my monitor's sync rate, but PowerPoint can't
smoothly scroll text onto the screen without tearing it. :-?
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?
Jim Henderson wrote:
> To someone with a little background in C
There's the problem right there. As one professor I knew put it, "There
are two kinds of languages. The ones that support unbounded arithmetic
types, and the ones that don't know they need to support unbounded
arithmetic types."
I can't tell you how many tools I went through looking for one that
would handle a file >4G for restartable downloads. I finally had to
install bittorrent at both ends just to move the file across the network -
in spite of the fact that FTP uses ASCII representations for all numbers,
and hence has no inherent size limitation in the protocol.
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?
Jim Henderson wrote:
> Because the problem was caused by an overflow condition in the
> calculator.
Actually, Thief has a similar problem in its installer. If you're within
60 meg (or so) of having 32G of disk space free, it won't install,
complaining you don't have enough space. If you then create a 80 meg
file, it'll install fine.
Oh, and Warp, they aren't "unsigned integers" and "signed integers" in
C. They're "ints", not "integers". ;-) That's the root of the problem.
Mixing signed integers and unsigned integers is fine. It's just C's
weird conversion rules for ints that are problematic. ;-) </nit>
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?