Orchid XP v8 wrote:
> Like I said, it seems there's a relationship between capacity and speed,
> such that by the time you get to this size, there wouldn't be much speed
> advantage to putting it on-chip.
In all the cases I know about, the problem was not speed, but the concern
that on-chip memory puts a hard cap on how much you can have.
Either you need X meg of memory to do what your chip is designed to do, or
you want an arbitrary "lots" of memory. (E.g., you're either building a
phone that doesn't run apps not installed when you built it, or you want to
run as many apps at once as you want.) If you ever want to add RAM, you
need to have the interface stuff for it anyway. If you don't ever want to
add RAM, chances are you don't need more than a few hundred meg to support
your single-chip application.
--
Darren New, San Diego CA, USA (PST)
Eiffel - The language that lets you specify exactly
that the code does what you think it does, even if
it doesn't do what you wanted.
On 6/14/2010 2:29 PM, Orchid XP v8 wrote:
> clipka wrote:
>
>> BTW, it's not harddrive life that worries me in such situations, but
>> the bare struggle to get the system's attention back. Try killing a
>> swapping-mad program when it keeps thrashing the task manager out of
>> main memory... >_<
>
> Heh. Fun... Actually, the one I usually have trouble with is Alt+Tab
> from a game back to the Windows desktop. Takes several lifetimes in both
> directions. No idea why.
>
> One thing I discovered back at uni: If your Java program accidentally
> goes into an infinite loop, it dies almost instantly with a stack
> overflow. If your Smalltalk program accidentally loops forever, it's
> usually a few minutes before you realise that it's actually filling the
> entire VM address space with garbage. Now try stopping it. :-)
>
You can do it in C#, too... I forget exactly what I was writing, but I
threw something into an infinite loop that allocated objects and added
them to a list. A split second after I hit F5 to run the program, I
realized my mistake. That was enough time for it to allocate some 4 GB of
memory. By the time I reacted and stopped the program, it had ballooned up
to 10 GB.
64-bit is great, but try dealing with that sort of memory stress (my
system has 4 GB of physical memory...)
--
~Mike
Mike Raiford wrote:
> 64 bit is great, but try dealing with that sort of memory stress (My
> system has 4GB of memory ...)
Generally that kind of thing isn't so bad though. All the garbage that
gets paged out never gets paged back in again. Assuming you can page in
the code to stop the damned thing, once it's been killed the system
takes a little while to page all the live stuff back in, and then it
goes back to normal.
Now, when your program is actively *using* several GB of RAM... that's
not so fun. (!!)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
On 6/15/2010 2:09 PM, Orchid XP v8 wrote:
>
> Now, when your program is actively *using* several GB of RAM... that's
> not so fun. (!!)
>
Photoshop did this... I was building an image out of a stack of shots,
several of them at full resolution with 16-bit-per-channel color. The
algorithm that determines which parts are in sharpest focus apparently
spread itself across as much memory as possible.
I walked away for a few minutes after starting the process, came back
and tried to switch to my browser, and found the system unresponsive. It
stayed that way for about 30 minutes before Task Manager finally appeared.
Again, 10 GB of memory, but this time actively working with all of it. I
had other unsaved work that I wanted a chance to save, so I waited...
before killing PS.
--
~Mike
>> Now, when your program is actively *using* several GB of RAM... that's
>> not so fun. (!!)
>
> Photoshop did this... I was building an image out of a stack of shots,
> several of them at full resolution with 16-bit-per-channel color. The
> algorithm that determines which parts are in sharpest focus apparently
> spread itself across as much memory as possible.
>
> I walked away for a few minutes after starting the process, came back
> and tried to switch to my browser, and found the system unresponsive. It
> stayed that way for about 30 minutes before Task Manager finally appeared.
> Again, 10 GB of memory, but this time actively working with all of it. I
> had other unsaved work that I wanted a chance to save, so I waited...
> before killing PS.
Pff. Accidentally write some Haskell code with the wrong strictness
properties and watch it swallow several *hundred* GB by mistake... It's
not even funny.
Usually very easy to kill though, if that's any consolation. (The key,
as I said, is to set the maximum heap size to something smaller than
physical RAM - especially if you know your program isn't supposed to
need lots of RAM anyway...)
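(A minimal sketch of the classic mistake, assuming GHC. Prelude's lazy
foldl builds one unevaluated thunk per list element before forcing any of
them, while Data.List.foldl' runs in constant space. Compile without -O,
or the strictness analyser may quietly rescue the leaky version.)

import Data.List (foldl')

-- Lazy foldl piles up ~10^8 pending "(+)" thunks on the heap before
-- forcing a single one: a textbook space leak.
leaky :: Integer
leaky = foldl (+) 0 [1 .. 10 ^ 8]

-- foldl' forces the accumulator at every step; constant space.
fine :: Integer
fine = foldl' (+) 0 [1 .. 10 ^ 8]

main :: IO ()
main = print leaky  -- swap in `fine` to see the difference

Built with "ghc -rtsopts Leak.hs" (file name just for illustration;
recent GHCs need -rtsopts to accept runtime flags) and run as
"./Leak +RTS -M1g", the leaky version aborts promptly with a
heap-overflow error instead of dragging the whole machine into swap.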
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 wrote:
> Generally that kind of thing isn't so bad though. All the garbage that
> gets paged out never gets paged back in again.
Sometimes when you kill a big process, it actually does have to read
everything back in, to run finalizers or whatever, and to generally
deallocate memory. It's not unusual to hit ctrl-C on a CLI-driven program
and have it take a minute or more to exit even if you're not catching signals.
--
Darren New, San Diego CA, USA (PST)
Eiffel - The language that lets you specify exactly
that the code does what you think it does, even if
it doesn't do what you wanted.
Darren New wrote:
> Orchid XP v8 wrote:
>> Generally that kind of thing isn't so bad though. All the garbage that
>> gets paged out never gets paged back in again.
>
> Sometimes when you kill a big process, it actually does have to read
> everything back in, to run finalizers or whatever, and to generally
> deallocate memory. It's not unusual to hit ctrl-C on a CLI-driven
> program and have it take a minute or more to exit even if you're not
> catching signals.
I haven't seen that problem with programs I've written myself (which
obviously don't use finalisers), but I've seen other software do this...
>> Sure, but just as an example, what's the fastest way to sort a 10 KB
>> array when you have 16 KB of RAM?
>
> You just go and do it. I'm sure the sorting algorithm fits in the
> remaining 6 KB.
>
> Or did you get those two numbers backwards?
Actually, I was thinking about algorithms that are not in-place (e.g.
radix sort). Whilst in a mathematical sense they might seem faster, if
you start having to use slower storage (where some other algorithm
doesn't), then obviously that advantage can disappear.
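(A hedged illustration of the point, with my own numbers rather than
anything from the thread: counting sort over bytes is O(n) and
comparison-free, but it is not in-place. It builds a fresh output the
size of the input plus a 256-entry count table, so a 10 KB input needs
roughly 20 KB for data alone, which is exactly what pushes a 16 KB
machine onto slower storage where an in-place heapsort would stay within
budget.)

import Data.Array (accumArray, assocs)
import Data.Word (Word8)

-- Counting sort over bytes: fast in the abstract, but the output list
-- duplicates the input's storage. Harmless with RAM to spare, fatal
-- when the whole machine has 16 KB.
countingSort :: [Word8] -> [Word8]
countingSort xs = [ b | (b, n) <- assocs counts, _ <- [1 .. n] ]
  where counts = accumArray (+) (0 :: Int) (0, 255) [ (x, 1) | x <- xs ]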
>> One thing I discovered back at uni: If your Java program accidentally
>> goes into an infinite loop, it dies almost instantly with a stack
>> overflow.
>
> That's just because the VM only claims a small amount of RAM to start with...
> quite irritating actually, it took me ages to work out I had to increase its
> allocation to get my fractal plotter to be able to hold a decent image :-)
>
> (but then it was too slow anyway so I rewrote it in C)
Actually, I think that explanation is probably bogus.
Object data goes on the heap, not the stack; a *stack* overflow
indicates a chain of method calls nested too deeply, not exhausted
object memory.
Heck, Haskell programs (more exactly, GHC-compiled Haskell programs)
often die from a *stack* overflow very quickly. But they don't die from
*heap* exhaustion (unless you set the magic flag).
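(A small sketch of the distinction, assuming GHC. Deep non-tail
recursion blows the bounded stack almost immediately, while a
heap-hungry computation just keeps allocating until the -M limit, the
"magic flag" above, or the OS steps in. GHCs of this era defaulted to
an 8 MB stack; newer ones let the stack grow much further, so pass
+RTS -K8m to reproduce the quick death.)

-- Deep non-tail recursion: every call leaves a pending "1 +" frame,
-- so this dies quickly with a stack overflow.
deepRecursion :: Integer -> Integer
deepRecursion 0 = 0
deepRecursion n = 1 + deepRecursion (n - 1)

-- Heap exhaustion: `length` demands the list after `sum` does, so no
-- cons cell can be collected and the heap grows without bound. With a
-- limit like +RTS -M512m it aborts cleanly instead.
heapHog :: Integer
heapHog = sum xs + fromIntegral (length xs)
  where xs = [1 ..] :: [Integer]

main :: IO ()
main = print (deepRecursion (10 ^ 9))  -- swap in heapHog to compare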