On 14/03/2012 02:36 PM, clipka wrote:
> Am 14.03.2012 10:55, schrieb Invisible:
>
>>> Note that the dispose is gone for good. Shared pointers use reference
>>> counters to accomplish this feat.
>>
>> Every time I hear somebody say this, a part of me wants to scream "you
>> /do/ understand that reference counting doesn't actually work, right?"
>
> Which is because...?
Allow me to rephrase: Reference counting works if and only if the node
reference graph is acyclic.
Sometimes it's feasible to ensure that this is the case. Sometimes it is
not.
Post a reply to this message
------------------------------------------------------------
On 14/03/2012 02:15 PM, Warp wrote:
> Orchid Win7 v1<voi### [at] devnull> wrote:
>> So 16 bytes of space overhead per node is worth it if you can avoid one
>> indirect jump instruction?
>
> No, it's worth it if you can avoid one extraneous memory allocation
> (or having to manage the allocated memory manually).
>
> A memory allocation will easily consume those 16 bytes of extra space
> anyways (the memory allocator bookkeeping, memory alignment, and the pointer
> used to refer to the allocated object), so it's not like you are gaining
> anything.
Heh, really? Most languages I work with can allocate new objects in O(1)
time and using O(1) memory for bookkeeping. I'm not used to dynamic
allocation being anything to worry about.
>> Mmm, I hadn't even thought of using an /array/. I suppose the upper
>> bound on the tree size is statically known, so why not?
>
> The array doesn't necessarily have to be fixed in size.
It does if it's static. Either that or the source I'm reading is
incorrect...
> Calling a virtual function is in practice not any slower than calling
> a regular function.
>
> Calling 'new', however, can be really slow (comparatively speaking).
Mmm, OK.
> If you can reduce x millions of 'new' calls into eg. log(x millions) of
> 'new' calls, your program will probably become much faster.
Well, sure. If I was going to allocate, I don't know, one object for
every pixel on the screen, then sure, I'd try to find a way to not do
that dynamically. But for two-dozen objects of a few bytes each, I
really wouldn't expect dynamic allocation to be a big deal. (Then again,
I don't write C++ for a living...)
>> Out of curiosity, if /you/ were writing a piece of code that's supposed
>> to take a text string, parse it as an expression, and then execute the
>> resulting expression, how would you implement that?
>
> You mean like this? http://warp.povusers.org/FunctionParser/
...I'd sit down and study that, but I'm guessing it's designed more for
usability than readability.
>> The most obvious way
>> is to make every valid kind of expression be a different class, and have
>> an abstract base class with an "execute this" method, which gets
>> implemented differently for each type of expression.
>
> If the parser consists of some dozens of expression parsers which are added
> to the core module once at program start, then the slowness of memory
> allocation becomes completely irrelevant. You are not performing millions
> of 'new' operations, only a few dozen of them. The virtual function calls
> themselves are pretty efficient. (What you *should* avoid is calling 'new'
> and 'delete' during the parsing, if you want it to be as fast as possible.)
That was my feeling. The typical data sizes I'm working with are so
minute that efficiency becomes almost moot. Even an exponential-time
algorithm is pretty quick given small enough input. ;-)
> The major design issue becomes the management of those dynamically
> allocated objects. (Smart pointers may be a good solution to this.)
Well, yes... I'm certainly not looking forward to tracking down memory
management faults. :-/
>> OK, well, if there is a single TreeNode class, I could write something like
>
>> TreeNode(TreeNode t0, TreeNode t1)
>> {
>>     child0 = new TreeNode;
>>     child1 = new TreeNode;
>>     *child0 = t0;
>>     *child1 = t1;
>> }
>
> Since TreeNode there is managing dynamically allocated memory, you'll
> have to write a proper copy constructor and assignment operator or else
> you have a leak and/or accesses to deleted memory. (And as I said in my
> previous post, that's not trivial because of all the options you have.)
How about if I never actually need to duplicate the tree? Does that make
things any simpler?
>> If, however, the TreeNode class is abstract, obviously this won't work
>> at all. I now need to know what the /actual/ type of t0 is before I can
>> allocate sufficient space for it.
>
> Well, that's easy. You should do this:
>
> class TreeNode
> {
>  protected:
>     virtual TreeNode* clone() const = 0;
>
>  public:
>     TreeNode(const TreeNode& t0, const TreeNode& t1):
>         child0(t0.clone()),
>         child1(t1.clone())
>     {}
>
>     // Implement destructor, copy constructor and assignment
>     // operator appropriately here.
> };
>
> Then every inherited class implements that 'clone()' method like:
>
> class FancyNode: public TreeNode
> {
>  protected:
>     virtual TreeNode* clone() const
>     {
>         return new FancyNode(*this);
>     }
> };
I definitely need to find a more thorough reading source; some of that
syntax isn't even /explained/ on the site I read...
>> (I'm also fuzzy on what happens when
>> you try to delete a pointer - does it know what the "real" size of the
>> object is, or does it use the declared type of the pointer?)
>
> If the pointer is handling a dynamically bound object, then the
> destructor of the base class has to be virtual. Then 'delete' is able
> to destroy the object appropriately.
Right. So as long as you remember to make the destructor virtual, it
will look up the correct code from the vtable.
------------------------------------------------------------
Invisible <voi### [at] devnull> wrote:
> On 14/03/2012 02:15 PM, Warp wrote:
> > Orchid Win7 v1<voi### [at] devnull> wrote:
> >> So 16 bytes of space overhead per node is worth it if you can avoid one
> >> indirect jump instruction?
> >
> > No, it's worth it if you can avoid one extraneous memory allocation
> > (or having to manage the allocated memory manually).
> >
> > A memory allocation will easily consume those 16 bytes of extra space
> > anyways (the memory allocator bookkeeping, memory alignment, and the pointer
> > used to refer to the allocated object), so it's not like you are gaining
> > anything.
> Heh, really? Most languages I work with can allocate new objects in O(1)
> time and using O(1) memory for bookkeeping. I'm not used to dynamic
> allocation being anything to worry about.
Last time I checked, 16 bytes is O(1) memory.
> >> Mmm, I hadn't even thought of using an /array/. I suppose the upper
> >> bound on the tree size is statically known, so why not?
> >
> > The array doesn't necessarily have to be fixed in size.
> It does if it's static. Either that or the source I'm reading is
> incorrect...
You don't have to use a static array.
> Well, sure. If I was going to allocate, I don't know, one object for
> every pixel on the screen, then sure
But that's exactly what object-oriented design would have you do.
Anyways, the major issue with dynamic allocation is memory management.
If by avoiding allocating individual objects manually you save yourself
the trouble of having to manage the memory, it's a win-win situation.
Why would you not want to do that?
> >> OK, well, if there is a single TreeNode class, I could write something like
> >
> >> TreeNode(TreeNode t0, TreeNode t1)
> >> {
> >>     child0 = new TreeNode;
> >>     child1 = new TreeNode;
> >>     *child0 = t0;
> >>     *child1 = t1;
> >> }
> >
> > Since TreeNode there is managing dynamically allocated memory, you'll
> > have to write a proper copy constructor and assignment operator or else
> > you have a leak and/or accesses to deleted memory. (And as I said in my
> > previous post, that's not trivial because of all the options you have.)
> How about if I never actually need to duplicate the tree? Does that make
> things any simpler?
You are copying TreeNode objects in the above code. If you want to do that,
there's no way around it: You have to write a proper copy constructor and
assignment operator for the TreeNode class, else you'll have leaks and
bad accesses.
What do you think "*child0 = t0" means?
--
- Warp
------------------------------------------------------------
Am 14.03.2012 10:55, schrieb Invisible:
>> There's nothing "incorrect" about that way of placing braces, even in
>> C++. It's syntactically correct, so say what you will.
>
> Writing the entire program in one source file on a single 435,235,342
> character line is "syntactically correct". But no sane programmer would
> ever do such a horrifying thing. :-P
And not every compiler needs to support such a horrifying thing.
(According to the C++ spec, the maximum number of characters per source
line is up to the implementation, but the recommended minimum for that
maximum is 65536.)
>> Anyway, I personally stick to IDEs developed by software development
>> companies (and I mean companies that develop other stuff besides
>> programming languages and IDEs). So far the best IDE experience has been
>> Visual Studio for C#. One might snarl at the company's marketing
>> strategies, but they seem to know what helps software developers to be
>> productive.
>
> I don't know about "what helps software developers to be productive"...
> How about "what makes software developers buy our product rather than
> somebody else's"? Or even "what makes software developers develop for
> our platform rather than somebody else's"? ;-)
Seriously, I'm pretty sure MS uses Visual Studio themselves in-house, in
a serious number of seriously large projects. They know that every second
a programmer spends ranting about the development tools is a second that
costs money.
> The entire MS business model fundamentally depends on getting everyone
> to use the MS platform, and then be locked in. Well, if your platform
> has naff-all software for it, nobody will buy it. But if your platform
> has all the best stuff, people will pay. And then be trapped. So it pays
> (literally) to make it easy for people to write code for your platform
> and only your platform. >:-D
While it is an open secret that MS' business model is centered on
locking in their customers wherever they can, this approach of "make it
easy for people to write software for your platform, and your platform
will have the best software in existence" wouldn't fly: Making it easy
for people to write code for your platform doesn't necessarily mean
they'll produce /good/ software for it. You're also bound to get a lot
of crap besides.
And then you'd also expect MS to give away VS for free (the real thing,
with the tools to write the real powerful software, not just crappy
hello world stuff), yet for some reason they don't.
I think there's another reason why VS is so good. I'm convinced they're
not trying to make it easy for /anyone/ to write code - they're trying
to make it easy for /themselves/. And make extra money by selling the
toolset.
------------------------------------------------------------
>> Heh, really? Most languages I work with can allocate new objects in O(1)
>> time and using O(1) memory for bookkeeping. I'm not used to dynamic
>> allocation being anything to worry about.
>
> Last time I checked, 16 bytes is O(1) memory.
You seem to suggest that there's 16 bytes of overhead /per object/. I'm
saying I'm used to there being a fixed number of bytes of overhead,
regardless of the number of objects. That's O(N) vs O(1).
>>>> Mmm, I hadn't even thought of using an /array/. I suppose the upper
>>>> bound on the tree size is statically known, so why not?
>>>
>>> The array doesn't necessarily have to be fixed in size.
>
>> It does if it's static. Either that or the source I'm reading is
>> incorrect...
>
> You don't have to use a static array.
Sure. But then you're dynamically allocating again.
>> Well, sure. If I was going to allocate, I don't know, one object for
>> every pixel on the screen, then sure
>
> But that's exactly what object-oriented design would have you do.
Sure. And I'm saying, if I wanted to do something like that, then yes, I
might go out of my way to try to make it efficient. (E.g., by not using
elaborate inheritance hierarchies and such.)
> Anyways, the major issue with dynamic allocation is memory management.
> If by avoiding allocating individual objects manually you save yourself
> the trouble of having to manage the memory, it's a win-win situation.
> Why would you not want to do that?
Sure, avoiding manual memory management is always a good idea. It seems
the cost is that you have less flexibility in designing your classes.
(Then again, how many different kinds of pixel can there possibly be?)
>>> Since TreeNode there is managing dynamically allocated memory, you'll
>>> have to write a proper copy constructor and assignment operator or else
>>> you have a leak and/or accesses to deleted memory. (And as I said in my
>>> previous post, that's not trivial because of all the options you have.)
>
>> How about if I never actually need to duplicate the tree? Does that make
>> things any simpler?
>
> You are copying TreeNode objects in the above code.
*facepalm*
Right. I think I know how to redesign to avoid that. Should make things
a bit more coherent...
------------------------------------------------------------
Am 14.03.2012 15:38, schrieb Invisible:
> On 14/03/2012 02:36 PM, clipka wrote:
>> Am 14.03.2012 10:55, schrieb Invisible:
>>
>>>> Note that the dispose is gone for good. Shared pointers use reference
>>>> counters to accomplish this feat.
>>>
>>> Every time I hear somebody say this, a part of me wants to scream "you
>>> /do/ understand that reference counting doesn't actually work, right?"
>>
>> Which is because...?
>
> Allow me to rephrase: Reference counting works if and only if the node
> reference graph is acyclic.
>
> Sometimes it's feasible to ensure that this is the case. Sometimes it is
> not.
While this is true, it is not a problem specific to reference counting,
but one that is inherent in /any/ attempt at automatic object lifetime
management.
That's why C++11 also provides weak pointers. They're not a cure-all for
such problems, but they do help in some situations (such as when the
cyclic references come from pointers to parent objects in a tree).
------------------------------------------------------------
> the recommended minimum for that maximum is 65536.
Irony...
> Seriously, I'm pretty sure MS uses Visual Studio themselves in-house, in
> a serious number of seriously large projects. They know that every second
> a programmer spends ranting about the development tools is a second that
> costs money.
Well, yeah, there is that. If it's actually true...
> And then you'd also expect MS to give away VS for free (the real thing,
> with the tools to write the real powerful software, not just crappy
> hello world stuff), yet for some reason they don't.
Um... So how is the pro version actually different from VS Express?
> I think there's another reason why VS is so good. I'm convinced they're
> not trying to make it easy for /anyone/ to write code - they're trying
> to make it easy for /themselves/. And make extra money by selling the
> toolset.
Quite possibly. ;-)
Then again, Valve released their editing tools, and (from what I hear)
they are really, really hard to use...
------------------------------------------------------------
>> Allow me to rephrase: Reference counting works if and only if the node
>> reference graph is acyclic.
>>
>> Sometimes it's feasible to ensure that this is the case. Sometimes it is
>> not.
>
> While this is true, it is not a problem specific to reference counting,
> but one that is inherent in /any/ attempt at automatic object lifetime
> management.
And automatic garbage collection is the only general-purpose solution to
this problem. Unfortunately, it's also very expensive (in time and often
in space).
> That's why C++11 also provides weak pointers. They're not a cure-all for
> such problems, but they do help in some situations (such as when the
> cyclic references come from pointers to parent objects in a tree).
I'm not even sure what the semantics of a "weak pointer" would be in the
absence of automatic memory management...
------------------------------------------------------------
Am 14.03.2012 10:42, schrieb Invisible:
> It seems either they implemented everything twice (once with, and once
> without generics), or it's actually legal to not specify type parameters
> and then they default to Object. Which, either way, is highly
> counter-intuitive...
It's actually that they /first/ implemented the stuff as non-generics
(because Java didn't have them back then), and then added a generics
implementation later when they discovered that generics were a good
thing to have after all (retaining the old non-generics implementation
for backward compatibility).
------------------------------------------------------------
On 14/03/2012 03:42 PM, clipka wrote:
> Am 14.03.2012 10:42, schrieb Invisible:
>
>> It seems either they implemented everything twice (once with, and once
>> without generics), or it's actually legal to not specify type parameters
>> and then they default to Object. Which, either way, is highly
>> counter-intuitive...
>
> It's actually that they /first/ implemented the stuff as non-generics
> (because Java didn't have them back then), and then added a generics
> implementation later when they discovered that generics were a good
> thing to have after all (retaining the old non-generics implementation
> for backward compatibility).
Backwards compatibility is pretty evil, eh? ;-)