Invisible wrote:
> One assumes that if you don't need foo(), you wouldn't bother including
> the header file.
You may use printf() without using sprintf(), fprintf(), and vprintf(). But
they're all in the same header file.
In any case, I'm telling you what's *correct*, regardless of whether you
think it's good style. :-)
>> Sort of. That's called "linking." Then there's "loading", which is
>> when you put it into memory and *again* adjust a bunch of addresses.
>
> Right. So what you're saying is that there's actually a second linking
> stage each time final executable is run?
No, there's loading. :-) If the software does both steps, then it's a
"linking loader".
But yes, there can be multiple linking phases, especially with DLLs and such.
If you have code compiled to run at 0x100 and you copy it from disk to
memory at 0x300, you need to add 0x200 to every address that points into the
program. That's loading. It's often not needed nowadays, since CPUs can handle
code that uses only relative offsets.
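In code, the fixup pass is roughly this sort of thing (all the names here are
made up for illustration):

#include <stddef.h>
#include <string.h>

/* Sketch only: 'image' is the program copied to its actual load address,
   'fixups' holds the offset of every absolute address stored inside it, and
   'delta' is (actual load address - address it was linked for), e.g. 0x200. */
static void relocate(unsigned char *image, const size_t *fixups,
                     size_t count, long delta)
{
    for (size_t i = 0; i < count; i++) {
        long addr;
        memcpy(&addr, image + fixups[i], sizeof addr);   /* read the stored address */
        addr += delta;                                   /* slide it to the new base */
        memcpy(image + fixups[i], &addr, sizeof addr);   /* write it back */
    }
}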
> In summary... you can't call the OS from C. You can only write C wrapper
> functions around the assembly code that calls the OS. (And the wrapper
> then of course looks like a normal C function...)
Right. Generally speaking.
I've always wondered why people think C is good for writing kernel-level
code, as the only facility in C that actually deals with the sorts of things
you do in a kernel is "volatile".
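That is, about the only kernel-ish thing the language itself lets you express
directly is something like this (device and address entirely hypothetical):

#include <stdint.h>

/* A hypothetical memory-mapped status register.  (The cast itself is only
   implementation-defined, of course.)  'volatile' is what tells the compiler
   that every read really has to happen; without it, this loop could legally
   be optimized into reading the register once. */
#define UART_STATUS ((volatile uint32_t *)0xFFFF0004u)
#define TX_READY    0x01u

static void wait_for_tx_ready(void)
{
    while ((*UART_STATUS & TX_READY) == 0)
        ;   /* spin until the hardware sets the ready bit */
}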
>> It's an instruction that invokes an indirect branch.
>
> I see. So that's the mechanism on the IA32 platform, is it? (I thought
> only the BIOS uses this method...)
I believe so. I stopped paying attention around the 286 era.
> Interestingly enough, the Motorola 68000 does in fact have two modes of
> operation: "user mode" and "supervisor mode". I have no idea what the
> distinction is.
Some opcodes in user mode will cause a software interrupt instead of doing
what they do in supervisor mode. For example, trying to turn off the
interrupts will "trap" instead of turning off the interrupts. (Really?
68000? Not 68020?)
> the MMU-enabled variant of the CPU could support memory protection if
> you wanted.
Yes. There are actually two things you need for demand-paged VM. You need
virtual addressing (which the 68010 supported, I think), and you need
restartable instructions (which the 68020 supported). If you try to store
four bytes, and the first two bytes get stored and the second two hit a page
that has to be swapped in, you're kind of hosed if your processor doesn't
deal with that properly. (Say, by checking before you store anything that
all the pages are available, or "unwriting" the failed write, or something.)
> And finally, it's perfectly possible to make a multiuser OS without
> memory protection. It just won't have any memory protection.
You can do the memory protection in software, tho. Not *too* uncommon.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
Chambers wrote:
> In fact, one of my biggest complaints with C# is the lack of class
> prototypes. I miss having a public interface which fits entirely on one
> (maybe two) screen(s).
That's an IDE issue, not a language issue. See, for example, Eiffel's IDEs.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
Warp <war### [at] tagpovrayorg> wrote:
> Object-oriented programming closely matches the thought process of people.
> OOP can be deconstructed into its two most basic elements: Concepts and algorithms.
>
> People think about things conceptually. For example, you can have one pen,
> one car, one dog, and so on.
>
> Moreover, people use hierarchies of concepts. Some concepts are more
> abstract while other concepts are more concrete. For example, the concept
> of "animal" is more abstract than the concept of "dog" or "cat", which are
> more concrete. Moreover, there's a hierarchical relationship between these
> concepts: A dog is an animal, and a cat is an animal (but a dog is not a cat).
Ah, yes - here we enter the realm of blah that scared *me* away from OOP when I
first came into contact with it:
"WTF - what does *this* crap have to do with *programming*?!"
Honestly, even as a professional SW developer for over a decade now who
definitely prefers OO concepts, I think this is perfect BS.
Despite all claims, I'm convinced this has virtually *nothing* to do with how
normal people *really* think (at least when faced with the task of explaining to
some dumb box how it should *do* something), and has even less to do with OO
*programming*.
David, you hear me? *This* is *not* OOP. This is indeed BS originally from
people leaning on the shallow theoretical side, trying to sell "OO"-labelled
products (compilers, tools, training, consulting, whatever) to people who
haven't experienced the benefits of OOP in practice yet.
*True* OOP is primarily about encapsulation: You highly integrate data
structures and the algorithms operating on them into so-called "objects"; you
precisely define what operations there should be to manipulate the data (for
instance, on a data structure to be used as a stack, you'd primarily want a
"push" and a "pop" operation, plus maybe a few more), and you hide the data
structure from any other code (to the best extent possible with the chosen
language), to prevent the data from being manipulated in any different way.
This is a very powerful tool for keeping track of how and where the data
structures are actually manipulated, so you can more easily change the inner
workings of an object if needs be.
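As a sketch of the interface side (names purely for illustration, and even
plain C can express this much):

/* stack.h -- nothing but the permitted operations is visible to callers */
typedef struct Stack Stack;               /* opaque: the layout stays hidden */

Stack *stack_create(void);
void   stack_destroy(Stack *s);
int    stack_push(Stack *s, int value);   /* returns 0 on success */
int    stack_pop(Stack *s, int *value);   /* returns 0 on success */

The implementation behind it can switch from an array to a linked list
tomorrow, and no calling code ever notices.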
Second, OOP is about "polymorphism": You define operations that should be
possible on various different types of objects (for instance, you might define
a "compute bounding box" operation for all geometric primitives as well as CSG
aggregates) without necessarily doing it the same way internally, so they can be
easily processed alongside each other by the same calling code for some purpose
(e.g. get the bounding boxes of all objects) despite any differences in how the
particular operation is actually performed for each type of object.
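An OO language does the plumbing for you, but the idea can be sketched even
without one, say with a function pointer per object (everything below is made
up for illustration):

typedef struct { double min[3], max[3]; } BBox;

typedef struct Object {
    void (*compute_bbox)(const struct Object *self, BBox *out);
    /* ... type-specific data would follow ... */
} Object;

/* The calling code neither knows nor cares whether it is looking at a
   sphere, a mesh or a CSG aggregate: each object brings its own routine. */
static void bound_all(Object *const *objects, int count, BBox *boxes)
{
    for (int i = 0; i < count; i++)
        objects[i]->compute_bbox(objects[i], &boxes[i]);
}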
These, in my eyes, are the most important aspects of OOP. There are others, like
"inheritance" (you might implement part of a new object type by simply referring
to - aka "inheriting from" - a similar object's existing implementation where
applicable, allowing you to easily re-use the existing code) that are of
practical value, too, but I wouldn't rank them as important as encapsulation
and polymorphism.
Strangely enough, inheritance is what the typical blah seems to focus on; on the
other hand, maybe it just feels that way because it's the only thing *really*
unfamiliar to a programmer: Encapsulation, for instance, could be regarded as
modularization driven to the extreme; and as for polymorphism, most programmers
will at least know the underlying problem of how to manage a collection of
elements with different data types. But inheritance? There's really nothing
like this in the world of classical imperative programming, nor does there seem
to be any need for it. Indeed, it is a solution for a problem you won't even
encounter unless you have already entered the realm of OOP. So it's somewhat
ridiculous trying to *introduce* OOP with this.
(I must confess that without this discussion, I would probably try the same
approach to introduce OOP to someone - even though I should know better,
originally having been deterred by this myself; never gave much thought to it
though until now.)
Darren New <dne### [at] sanrrcom> wrote:
> I've always wondered why people think C is good for writing kernel-level
> code, as the only facility in C that actually deals with the sorts of things
> you do in a kernel is "volatile".
Because C allows many types of low-level optimization that are very
difficult, if not even outright impossible, in higher-level languages.
For example, you know how a C struct will map onto memory locations,
on every possible platform which your kernel can be compiled for. You know
exactly what assigning one struct instance to another entails as machine
code. If you need a pointer which should point to a raw memory address,
you can create one. You can be sure that something like garbage collection
will *not* suddenly and unexpectedly kick in during a highly-critical
operation. If needed, you can even write inline-asm for things which cannot
be done in C directly (eg. write or read from ports, issue some exotic CPU
commands such as disabling interrupts, etc).
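Just as a sketch of the kind of thing I mean (GCC-style inline asm, x86, and a
completely made-up device address):

#include <stdint.h>

/* A struct laid directly over a (made-up) device's register block, a pointer
   forged from a raw address, and an inline-asm escape hatch. */
struct timer_regs {
    uint32_t control;
    uint32_t counter;
    uint32_t reload;
};

#define TIMER ((volatile struct timer_regs *)0xFEC00000u)   /* hypothetical address */

static inline void disable_interrupts(void)
{
    __asm__ __volatile__("cli");   /* x86: clear the interrupt flag */
}

static void start_timer(uint32_t ticks)
{
    TIMER->reload  = ticks;
    TIMER->control = 1u;           /* made-up "enable" bit */
}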
And of course C allows for programmatic optimizations which can often
be difficult in higher-level languages. Memory usage optimization would be
the main example. In something like a kernel you really don't want to waste
too much memory if you don't have to.
--
- Warp
Darren New <dne### [at] sanrrcom> wrote:
> That's an IDE issue, not a language issue. See, for example, Eiffel's IDEs.
IMO a language shouldn't rely on IDE or text editor features in order to
be readable.
--
- Warp
Warp wrote:
> Because C allows many types of low-level optimization that are very
> difficult, if not even outright impossible, in higher-level languages.
> For example, you know how a C struct will map onto memory locations,
> on every possible platform which your kernel can be compiled for.
Not in a portable way. You don't know what kinds of padding happen, or what
order the integers are in.
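For instance, nothing in the standard pins down what this little program prints:

#include <stdio.h>
#include <stddef.h>

struct s { char c; int i; };   /* how much padding after 'c'?  The standard doesn't say */

int main(void)
{
    printf("offsetof(i) = %zu, sizeof(struct s) = %zu\n",
           offsetof(struct s, i), sizeof(struct s));
    return 0;   /* "4 and 8" on many 32-bit ABIs, but nothing guarantees it */
}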
> You know
> exactly what assigning one struct instance to another entails as machine
> code.
That's true of lots of low-level languages too. FORTH and Ada both spring to
mind, for example. Lots of languages of the same ilk have instructions for
laying out structures at a per-byte level. (Erlang? Modula-whatever?)
> If you need a pointer which should point to a raw memory address,
> you can create one.
Not in C. Only in some language that looks like C but invokes undefined
behavior.
> You can be sure that something like garbage collection
> will *not* suddenly and unexpectedly kick in during a highly-critical
> operation.
True, but there are lots of languages I consider better than C (as in, more
powerful) that don't have GC. Anything with managed memory is going to be a
mess for writing kernel code in, I'll grant you. :-) At least for a general
kernel where you can't control what else runs.
> If needed, you can even write inline-asm for things which cannot
> be done in C directly (eg. write or read from ports, issue some exotic CPU
> commands such as disabling interrupts, etc).
But not in C. That's kind of my point. There's a whole bunch of stuff that
happens in kernels that C just doesn't define, just like this.
> And of course C allows for programmatic optimizations which can often
> be difficult in higher-level languages. Memory usage optimization would be
> the main example.
I'm not sure what that means. If I have an array of 12-bit values in C,
that's a PITA to implement compared to a lot of other languages (including C++).
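You end up doing the bit surgery yourself, something like this (packing scheme
picked arbitrarily, no bounds checking):

#include <stdint.h>
#include <stddef.h>

/* Two 12-bit values packed into every three bytes, low bits first. */
static unsigned get12(const uint8_t *buf, size_t i)
{
    size_t bit  = i * 12;
    size_t byte = bit / 8;
    unsigned v  = buf[byte] | ((unsigned)buf[byte + 1] << 8);
    return (bit % 8) ? (v >> 4) & 0xFFFu : v & 0xFFFu;
}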
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> That's an IDE issue, not a language issue. See, for example, Eiffel's IDEs.
>
> IMO a language shouldn't rely on IDE or text editor features in order to
> be readable.
So you'd rather rely on the programmer doing a job the IDE ought to be
automating, just to insist it's there?
We're basically discussing manually built .h files vs. automatically built .h
files, and you're suggesting that manually built ones are better because
the compiler can force someone to provide them, whereas with an IDE they
might not have a nice summary? I don't think you really mean that.
--
Darren New, San Diego CA, USA (PST)
"We'd like you to back-port all the changes in 2.0
back to version 1.0."
"We've done that already. We call it 2.0."
clipka <nomail@nomail> wrote:
> Ah, yes - here we enter the realm of blah that scared *me* away from OOP when I
> first came into contact with it:
> "WTF - what does *this* crap have to do with *programming*?!"
It has to do with program design. The larger the program is, the more
important it is for it to have been designed properly. A large program
without proper design easily becomes unmaintainable and incomprehensible.
One of the most basic methods for keeping a large program manageable is
to subdivide it into smaller parts, hierarchically. When you write something
inside the large program, you shouldn't have to be keeping in mind the
*entire* program in order to be able to write that small part. You should
be able to keep in mind only the *relevant* parts of the rest of the program
necessary to make that small part work.
It's the same in almost everything that is big and complex, not just
programming. The CEO of a large company doesn't have to worry about what
each and every one of the ten thousand employees is doing. That would
be impossible. He just can't be managing ten thousand things at once. It
becomes even more complicated when they have sub-contractors and other such
companies to deal with. Instead, there's a *hierarchy* of management in the
company: The CEO controls a dozen or so bosses, who each control a dozen
managers, who control individual employees, or however a big company is
subdivided.
The functionality of a large program can be largely divided into a
hierarchy by distributing the code logically into functions. However,
functions are not enough when you need to handle enormous amounts of
different types of *data*. Data management has to also be divided into
logical parts, often hierarchically.
The solution presented (although not originally invented) by object
orientedness to this problem is the concept of modules: Modules can have
both functionality and data, and they enclose both in a way that makes
managing the data easier in a large project. Modules can also form a
hierarchy (so modules can define sub-modules inside them, or own instances
of other modules, etc).
Most concepts can be naturally expressed as modules. If you have, for
example, the concept of "string", you can write a module which represents
such a string, with all the necessary data and functionality related to
strings. When such a string module has been properly designed, it becomes
easier and more manageable to handle strings in the program. The rest of
the code doesn't have to worry about how strings are handled; they just
use the functionality provided by the module. This also makes it easier
to *change* the implementation of this string module without breaking
existing code.
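The interface might be no more complicated than this (the names are only for
illustration):

/* mystring.h -- the rest of the program sees these operations and nothing else */
#include <stddef.h>

typedef struct MyString MyString;

MyString   *mystring_create(const char *initial);
void        mystring_destroy(MyString *str);
size_t      mystring_length(const MyString *str);
void        mystring_append(MyString *str, const char *text);
const char *mystring_cstr(const MyString *str);

Whether there's a plain array, a rope or reference counting underneath can
change later without touching a single caller.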
> Despite all claims, I'm convinced this has virtually *nothing* to do with how
> normal people *really* think (at least when faced with the task of explaing to
> some dumb box how it should *do* something), and has even less to do with OO
> *programming*.
Oh, it has a lot to do with how normal people think, and how other things
(such as big companies) are organized.
When you go to a grocery store to buy food, you don't have to care how
the grocery store works internally. You just use its "interface" to buy the
food and that's it. The store takes care of its own inner functionality.
That's modularity and object-orientedness in action, in real-life.
> David, you hear me? *This* is *not* OOP. This is indeed BS originally from
> people leaning on the shallow theoretical side, trying to sell "OO"-labelled
> products (compilers, tools, training, consulting, whatever) to people who
> haven't experienced the benefits of OOP in practice yet.
I'm sorry, but I think that the one writing bullshit is you.
--
- Warp
Darren New <dne### [at] sanrrcom> wrote:
> Warp wrote:
> > Because C allows many types of low-level optimization that are very
> > difficult, if not even outright impossible, in higher-level languages.
> > For example, you know how a C struct will map onto memory locations,
> > on every possible platform which your kernel can be compiled for.
> Not in a portable way. You don't know what kinds of padding happen, or what
> order the integers are in.
Yes, you do. Maybe you didn't understand the "on every possible platform
which your kernel can be compiled for" part?
Ok, maybe I expressed myself poorly and should have written "on every
possible platform your kernel has been ported to".
> > You know
> > exactly what assigning one struct instance to another entails as machine
> > code.
> That's true of lots of low-level languages too. FORTH and Ada both spring to
> mind, for example. Lots of languages of the same ilk have instructions for
> laying out structures at a per-byte level. (Erlang? Modula-whatever?)
The difference is probably that neither FORTH nor Ada have the same amount
of libraries, platform support or optimizing compilers, nor are they nearly
as popular.
> > If you need a pointer which should point to a raw memory address,
> > you can create one.
> Not in C. Only in some language that looks like C but invoked undefined
> behavior.
Of course in C. And "undefined behavior" can also mean "works as desired
on this platform". When you know what the compiler is doing, and you are
writing platform-specific code, C allows you to do a whole lot of things
you can't do with other languages.
Most DOS demos written in C used raw pointers (eg. to the VGA memory
buffer). They worked just fine on that platform.
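The classic example, for a real-mode DOS compiler (the 'far' keyword is
compiler-specific, and of course none of this is standard C):

/* Mode 13h: 320x200, one byte per pixel, frame buffer at segment A000. */
static unsigned char far *vga = (unsigned char far *)0xA0000000UL;

static void putpixel(int x, int y, unsigned char color)
{
    vga[y * 320 + x] = color;
}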
> > And of course C allows for programmatic optimizations which can often
> > be difficult in higher-level languages. Memory usage optimization would be
> > the main example.
> I'm not sure what that means. If I have an array of 12-bit values in C,
> that's a PITA to implement compared to a lot of other languages (including C++).
It means that many if not most of the "high-level" languages pay zero
attention to memory usage. They freely and carelessly allocate memory like
it was candy because, you know, all computers have gigazillions of bytes of
RAM and any program written in that language will be run alone, so it
doesn't have to worry about other programs which might also want some of
the memory.
--
- Warp
Darren New <dne### [at] sanrrcom> wrote:
> Warp wrote:
> > Darren New <dne### [at] sanrrcom> wrote:
> >> That's an IDE issue, not a language issue. See, for example, Eiffel's IDEs.
> >
> > IMO a language shouldn't rely on IDE or text editor features in order to
> > be readable.
> So you'd rather rely on the programmer doing a job the IDE ought to be
> automating, just to insist it's there?
Yes, because IDEs are necessarily quite platform-specific.
> We're discussing basically manually built .h files vs automatically built .h
> files, and you're suggesting that manually built .h files are better because
> the compiler can force someone to provide them, whereas with an IDE they
> might not have a nice summary? I don't think you really mean that.
You said "That's an IDE issue, not a language issue." I read that to mean
that a language (like C# in this case) doesn't need to be designed to be
readable because an IDE can be used to make it readable.
--
- Warp