Warp wrote:
> Darren New <dne### [at] san rr com> wrote:
>> Warp wrote:
>>> I still can't comprehend what's so bad in the compiler doing the check
>>> that you don't access the private members.
>
>> If it's actually enforced (as in an error rather than a warning), it makes
>> lots of things harder than they need to be, because the running code winds
>> up with less information than the compiler had. You're throwing away half
>> the work the compiler did once you generate the code, so you wind up with
>> lots of boilerplate to enter into code things the compiler already knows.
>
> That's the whole idea in modular thinking: You don't know what's inside
> the module, and you *don't care*. You don't need to care.
It also prevents code from *inside* the module from doing certain things in
an easy way.
> As soon as you start worrying what's inside the module, you are already
> breaking module boundaries and abstraction.
Not if you're doing it from inside the module.
>> If it's half-enforced, as in the compiler complains and won't compile the
>> code, but there's ways to get around it anyway (on purpose or by mistake),
>> then it's IMO the worst of all possible worlds. You'll spend hours or days
>> trying to debug code that's already right because the client is convinced
>> the other code they've written is bugfree and it's easier to blame you than
>> to find the wild pointer in their own code. The whole idea of class
>> invariants goes out the window.
>
> Now you'll explain to me how a naming convention for public variables helps
> this problem.
The naming convention doesn't help that. The safety of the language helps
that. In other words, what helps it is not the naming convention, but the
fact that code that accesses private variables *must* use the naming
convention to do so.
Then you can take the Python code you're running, and look for the string
"._framenumber". If you don't find that, nobody outside your class is
changing the framenumber, even by accident. (Simplified of course, but
that's the idea.)
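Something like this, to sketch it (the class and attribute names are made up
for illustration):

    class Frame:
        def __init__(self):
            self._framenumber = 0     # leading underscore: private by convention

        def advance(self):
            self._framenumber += 1    # inside the class, so this is fine

    f = Frame()
    f._framenumber = 99   # outside code *can* do this, but only by spelling
                          # out "._framenumber", so a text search finds it

Any violation has to contain that exact string, so a plain search over the
code base turns up every one of them.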
In the CLOS case, you can just look for the name of the private variable
that you suspect is getting changed from outside. Or you can consistently
rename that private variable in the source you have, run it again, and get a
useful indication of exactly where the outside code is violating your
encapsulation.
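Python can do a rough version of that rename trick too (hypothetical names;
the __slots__ line is only there so that stray *writes* to the old name also
fail, instead of silently creating a new attribute):

    class Frame:
        __slots__ = ("_framenumber_v2",)   # renamed from _framenumber

        def __init__(self):
            self._framenumber_v2 = 0

    f = Frame()
    f._framenumber = 5   # AttributeError, with a traceback pointing at
                         # exactly the outside line that was cheating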
In an unsafe language, I can have code that was compiled before I even wrote
my class accidentally violate my class's encapsulation, and it'll be almost
impossible to figure out what is going on if the program's big enough or if
I don't have the source to that code.
I'm just less worried about intentionally violating modularity than
accidentally doing so. When I do it on purpose, I know the cost/benefit
trade-offs. Doing so may be a kludge, but there's reasons kludges get
implemented in the commercial world.
(I've had server code written in unsafe languages where it would dump core
on a segfault, and the providers wanted to blame it on the fact that the Xen
kernel was on the machine. We weren't booting Xen, but they wouldn't even
look at their code until we uninstalled that kernel. Just as an example of
how people can avoid admitting that their own bugs might be part of the problem.
Another time, the "calculation" part of the program did its work, clobbered
memory, wrote on the screen "now generating report", tried to create the
data file for the report generator, and crashed out in a way that locked up
the machine. As the author of the report generator, I got to spend several
*weeks* trying to figure out what it was before I discovered it was just the
calc routines not knowing their own calling structure and hence returning to
the wrong place on the stack at some point. This included printing out *all*
of the code I'd written, about a 5" stack, and going through it line by line
looking for bugs. (Found one, which wasn't the problem. :-) Another problem
that wouldn't have happened in a safe language.)
If you're in a particularly hostile environment, the best way is to have
well-defined class contracts in a safe environment that prevents the abusers
from bypassing those contracts. If you're not in a hostile environment, you
just get the author to agree to support revealing the private information
you need.
> IMO if anyone feels the need to break your interface and access private
> members directly, then your class design sucks and should be redesigned.
I agree. That's why I say that CLOS allowing you to get to private members
from outside the class *if* you read the source code to the class and know
what they're called isn't, in practice, a problem. The only time someone
would do that is if for some reason they can't *change* the source (because,
perhaps, they're expecting an update), but they can't get the author to make
an update that provides the interface that someone needs. At which point the
only choice is (1) toss the entire module (rewrite, buy different version,
etc), or (2) take on the burden of staying compatible with updates (which
may be infrequent enough that it's in practice not a problem). It's a
cost/benefit sort of thing.
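Python's double-underscore mangling is the same deal, for comparison
(made-up names again): the member is reachable from outside, but only if
you've read the class's source and spell out the mangled name:

    class Frame:
        def __init__(self):
            self.__counter = 0    # stored as _Frame__counter

    f = Frame()
    print(f._Frame__counter)  # works, but only if you know the mangled
                              # name from reading the class's source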
> If your class design is good, nobody will need to access anything
> directly.
I also agree. Or, at least, nobody will access anything directly without the
author's "permission". If I say "it's OK to serialize this object, send it
over the wire, and reconstitute it in another address space", I'd still say
that's a good class design. If I'm worried about it getting stored for
longer than the lifetime of the declaration of the class (i.e., the "stored
in database" example), I'll write code to deal with it, or to convert the
old versions to new versions.
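In Python terms, that permission might look like this (a sketch; pickle just
stands in for whatever wire format you'd actually use):

    import pickle

    class Point:
        """Author's note: serializing this object is supported."""
        def __init__(self, x, y):
            self._x, self._y = x, y    # private by convention

    wire = pickle.dumps(Point(3, 4))   # serialize, send over the wire...
    p = pickle.loads(wire)             # ...and reconstitute it elsewhere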
--
Darren New, San Diego CA, USA (PST)
My fortune cookie said, "You will soon be
unable to read this, even at arm's length."