clipka wrote:
> Chambers <ben### [at] pacificwebguycom> wrote:
>> > Basically we need
>> > to drastically reduce the number of times we get memory from the global
>> > heap, while at the same time trying to avoid getting too much memory at any
>> > given time (which may not be needed).
>>
>> If memory is allocated in chunks, then maybe a command-line parameter to
>> set the chunk size would be useful for testing until an optimum solution
>> is found. Also, this would allow people with more RAM to use larger
>> cache sizes as a default, avoiding the penalty completely.
>
> Another option: Allocate chunks of increasing size. Allocated N bytes in total
> and they didn't suffice? Then allocate another N bytes next time for a total of
> 2*N. Still not enough? Go for another 2*N bytes.
>
> This way you can start out small to save memory in case it's not used much
> (waste is guaranteed to never be more than 50%), but get out of the "cost zone"
> quickly if you really need a lot of it.
>
> Add some "hard deck" if desired to make sure you don't waste 0.9 GB if you
> happen to need 1.1 GB.
>
> Don't use dedicated pools for particular data structures, but allocate blocks
> for generic use, rolling your own (thread local) heap management.
Most of this has already been done; it's not as simple as it appears.
That said, try the current test exe.
-- Chris
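
For illustration, the doubling strategy clipka describes could be sketched roughly like this in C++. All names here are hypothetical; this is not POV-Ray's actual allocator, just one way to read the proposal (thread-local bump arena, next chunk sized to the total allocated so far, capped by a "hard deck"):

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <vector>

// Sketch of the doubling-chunk idea: each new chunk matches the total
// allocated so far (so capacity doubles), capped by a hard deck to
// bound waste. One arena per thread, so no locking is shown.
class ChunkArena {
public:
    ChunkArena(std::size_t firstChunk, std::size_t hardDeck)
        : firstChunk(firstChunk), hardDeck(hardDeck) {}

    // Bump-allocate from the current chunk; grab a new chunk when full.
    void* allocate(std::size_t bytes) {
        bytes = align(bytes);
        if (offset + bytes > chunkSize)
            grow(bytes);
        void* p = chunks.back().get() + offset;
        offset += bytes;
        return p;
    }

private:
    void grow(std::size_t need) {
        // Next chunk = total allocated so far (N, then 2*N, ...),
        // but never more than the hard deck, and never less than
        // one oversized request.
        std::size_t size = chunks.empty() ? firstChunk : totalSize;
        size = std::min(size, hardDeck);
        size = std::max(size, need);
        chunks.push_back(std::make_unique<char[]>(size));
        chunkSize = size;
        offset = 0;
        totalSize += size;
    }

    static std::size_t align(std::size_t n) {
        const std::size_t a = alignof(std::max_align_t);
        return (n + a - 1) / a * a;
    }

    std::vector<std::unique_ptr<char[]>> chunks;
    std::size_t chunkSize = 0;
    std::size_t offset = 0;
    std::size_t totalSize = 0;
    std::size_t firstChunk;
    std::size_t hardDeck;
};
```

Until the hard deck kicks in, waste is bounded by the last (largest) chunk, i.e. at most half of the total, which matches the "never more than 50%" claim above.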