"Orchid XP v8" <voi### [at] dev null> wrote in message
news:4ae0be3d$1@news.povray.org...
> I mean, sure, it has support for twiddling bits and stuff. But you'd think
> if you were doing low-level work, you would have a way to explicitly say
> how many bits you want to use. Yet C provides no such facility.
Wouldn't bit fields do what you're describing? As in:
struct PackBits {
    unsigned int field1:2;
    unsigned int field2:1;
    unsigned int field3:4;
    unsigned int field4:1;
};
I can't recall if that's in K&R, but I'm sure it's at least ANSI C.
Captain Jack wrote:
> Wouldn't bit fields do what you're describing?
Only if they're all aligned to byte boundaries, and you know what order the
fields are in. I.e., yes, but in no way portably. There's no way, for
example, to portably overlay a C struct for an IP datagram header onto the
raw header bytes and just use it.
Contrast with (say) Ada, where you can say
type FatTable = packed array[0..1000] of integer 0..4095;
and automatically get instructions that pack and unpack 12-bit entries in
your array of memory. (Modulo syntax, mind. :-)
Ada also supports context switching, interrupt handling, defined ways of
pointing to particular areas of memory (i.e., so you can tell the linker to
put the machine registers at a particular place), arbitrary ranges for
integers, for floats, and for decimal numbers. (i.e., you tell Ada what
range/precision you need, and it picks the best representation, instead of
trying to find the best representation from amongst the things the compiler
offers.) It supports prioritized interrupt handling, including blocking
lower-level interrupts while a higher-level one is running, handling
priority inversion, and scheduling threads based on the same priorities as
interrupts. It also supports volatile variables (which might be changed by
hardware) and atomic operations (where you can guarantee that if you're
writing a 2-byte integer, you won't get an instruction that stores the value
using two 1-byte store instructions, which is also important for hardware),
and "protected" operations that take advantage of hardware instructions for
blocking multiple threads (i.e., which take advantage of hardware locks).
I don't think C handles *any* of that. About the closest it comes is
volatile (sort of) and undefined behavior that *often* does what you'd
expect when using addresses, unless your memory model is too far different
from C's.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
"Darren New" <dne### [at] san rr com> wrote in message
news:4ae0c668$1@news.povray.org...
> Captain Jack wrote:
>> Wouldn't bit fields do what you're describing?
>
> Only if they're all aligned to byte boundaries, and you know what order
> the fields are in. I.e., yes, but in no way portably. There's no way to
> portably (for example) lay a C structure for an IP datagram header onto
> the header and just use it.
That's true... I know the specification calls for packing the bytes as
tightly as possible, but there's no standard spec for alignment.
I used to use Borland's Turbo C (v2, IIRC) way back when on DOS machines. I
remember that it had custom pre-processor directives for controlling byte
and word alignment, but I'm sure those weren't in any way standard.
I also used to make use of its "asm" keyword which would let me insert x86
assembler code into the middle of my C code, and I'd often use that to
squeeze some extra bits out of my memory usage. Contrast that with my
current job, where we use .NET, and I don't even keep track of what I've
allocated and deallocated, or how much I've used. I seem to have grown fat
and lazy on the backs of the developers at Redmond. 8D
> I don't think C handles *any* of that. About the closest it comes is
> volatile (sort of) and undefined behavior that *often* does what you'd
> expect when using addresses, unless your memory model is too far different
> from C's.
But that was what's so great about pure C... nothing lets you shoot yourself
in the foot with confidence the way C does. <g>
Captain Jack wrote:
> That's true... I know the specification calls for packing the bytes as
> tightly as possible, but there's no standard spec for alignment.
I also believe (but I am too lazy to look it up right now) that it's
impossible to portably know whether
struct PackBits alpha = {0, 1, 0, 0};
struct PackBits beta = {0, 0, 0, 1};
alpha or beta will yield the larger number when its bits are reinterpreted as
an int. I.e., I don't think the standard even says whether the fields are
allocated MSB or LSB first.
> I also used to make use of its "asm" keyword which would let me insert x86
> assembler code into the middle of my C code, and I'd often use that to
> squeeze some extra bits out of my memory usage.
Yep. When you really need to talk to the machine directly, C falls down.
That was the point of asking "why is C better?" It was only better for
portability compared to the other languages of the time.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Darren New wrote:
> I also believe (but I am too lazy to look it up right now) that it's
> impossible to portably know whether
>
> struct PackBits alpha = {0, 1, 0, 0};
> struct PackBits beta = {0, 0, 0, 1};
>
> alpha or beta will yield a larger number when cast to an int. I.e., I
> don't think the standard even says whether the fields are allocated MSB
> or LSB first.
That's absolutely right: While standard C by now /does/ address the
problem of detecting the range of values a certain integer type can
hold, how many bits a "char" has, and even what base, mantissa size and
exponent size a floating point format uses - still nobody has introduced
anything that would make it possible to detect endianness at compile time.
Multi-character constants may be a way to detect this with some
reliability, by testing e.g.
#if '\xAB\xCD\xEF\x12' == 0xABCDEF12
#define BIG_ENDIAN
#elif '\xAB\xCD\xEF\x12' == 0x12EFCDAB
#define LITTLE_ENDIAN
#else
#define UNKNOWN_ENDIAN
#endif
but the C99 standard does not explicitly specify any byte ordering rules
for multi-character constants either.
The best you can do is include a runtime self-test routine in the code
to actively check whether compile-time endianness assumptions were right.
Orchid XP v8 wrote:
>
> The concept of performing a multi-table join where the tables are all
> stored on magnetic tape scares me. o_O
>
> My God, it could take months...
>
Not at all. Simply sort the two (or more) files in sequence by the join
key (one or more fields). Then it is a matched scan or 'merge join'
through both. Variants exist for Inner, Left, Right and Full Outer joins.
As to sorting large tape files - You already have my reminiscence of
'Real Sort I' on your blog. Actually that whole thing was equivalent to
a join of the account table to the transaction table when both were
stored as sequential files on multiple volumes of tape. Multiple months
of transaction data had to be merged and sorted into account sequence.
This was before disk databases and SQL were practical considerations for
this application.
Back then (circa 1984) the bank that I worked for might have had about
40 disk drives, each the size of a washing machine. They were IBM 3350
DASD providing about 300MB capacity each - so long as you chose an
efficient block size for the record length.
So that was about 12GB of disk in total - for a largish bank. In my
pocket I now carry a 16GB memory stick to back up files and sync across
machines.
Tapes back then held something like 100MB per reel (IBM 3420, 1200 foot?).
But there could be thousands of tapes in the library, so that was
hundreds of GB.
>> The concept of performing a multi-table join where the tables are all
>> stored on magnetic tape scares me. o_O
>>
>> My God, it could take months...
>
> Not at all. Simply sort the two (or more) files in sequence by the join
> key (one or more fields). Then it is a matched scan or 'merge join'
> through both. Variants for Inner, Left, Right and Full Outer joins.
>
> As to sorting large tape files - You already have my reminiscence of
> 'Real Sort I' on your blog. Actually that whole thing was equivalent to
> a join of the account table to the transaction table when both were
> stored as sequential files on multiple volumes of tape. Multiple months
> of Tran had to be merged and sorted into account sequence.
>
> This was before disk databases and SQL were practical considerations for
> this application.
Indeed - it sounds like back then, doing this kind of stuff was pushing
hard against the limits of technical feasibility. Like every individual
operation had to be planned and hand-tuned and assisted by an army of
technicians. Seems like having an SQL language would be a bit redundant.
...and yet, SQL is apparently that old. Go figure!
> So that was about 12Gb of disk in total - for a largish bank. In my
> pocket I now carry a 16Gb memory stick to backup files and sync across
> machines.
Damn, they make memory sticks that large now??
> Tapes back then held something like 100Mb per reel (IBM 3420 1200 foot
> ?) reel. But there could be thousands of tapes in the library so that
> was hundreds of Gb.
Amusingly, today I work with LTO3 tapes, which hold 400GB each. So...
only about a thousand times more. (Not, say, several million.) Then
again, I think those tapes might be slightly bigger physically too!
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
On 10/21/2009 8:30 AM, Invisible wrote:
> - SQL existed 15 years before high-capacity storage devices appeared.
> (This is worse than it appears. You wouldn't even realise that a
> language like SQL was *necessary* unless databases themselves had
> already existed for some considerable length of time. And after that
> there would obviously be a rash of incompatible proprietary languages
> until people decided to design a standardised one.)
Just started learning TSQL, and I have to say that I _HATE_ it. I would
much rather use a typical object oriented language.
I understand that TSQL is just a wrapper that hides the highly optimized
operations the server _actually_ performs. But the "spoken sentence
grammar" type of syntax TSQL uses is highly variable and inconsistent,
and to me very confusing and frustrating. Bleh.
Mike
SharkD wrote:
> Just started learning TSQL, and I have to say that I _HATE_ it. I would
> much rather use a typical object oriented language.
>
> I understand that TSQL is just a wrapper that hides the highly optimized
> operations the server _actually_ performs. But the "spoken sentence
> grammar" type of syntax TSQL uses is highly variable and inconsistent,
> and to me very confusing and frustrating. Bleh.
I've never seen TSQL, but if it's anything like SQL...
...well, look up some COBOL example code sometime. You'll see what I
mean. ;-)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 <voi### [at] dev null> wrote:
> I've never seen TSQL, but if it's anything like SQL...
How about doing some googling? TSQL is SQL plus some extensions.
--
- Warp