On 13/08/2012 01:08 PM, Invisible wrote:
> I think perhaps my biggest criticism of C# is that there doesn't seem to
> be an introduction anywhere. I can read about isolated features of the
> language on MSDN, but nowhere can I seem to find a coherent introduction...
I downloaded the C# "language specification". Rather than being an
actual specification (which would usually be a dense, incomprehensible
document supplying a rigorous mathematical definition of the language
for use by compiler implementers), it seems that /this/ is the
comprehensible introduction that actually explains stuff. Weird...
----------------------------------------
On 15/08/2012 10:11, Invisible wrote:
>> Right. But it has something to do with whether the language "solves
>> everything" [for practical purposes].
>
> I was referring only to the design of the language itself, not the
> libraries that go with it, nor any of the various other tools required
> to make a language generally useful. It's no secret that Haskell fails
> spectacularly on that count.
I'm reiterating: I don't doubt that the Haskell /core language/ is
simple and elegant. I doubt that it "solves everything".
As soon as you've dealt with the "solves everything" part, we'll talk
again, and see if the "simple and elegant" also applies to that one. My
claim is that the combo isn't possible.
THAT is the point I'm trying to get through to you.
----------------------------------------
>> I was referring only to the design of the language itself, not the
>> libraries that go with it, nor any of the various other tools required
>> to make a language generally useful. It's no secret that Haskell fails
>> spectacularly on that count.
>
> I'm reiterating: I don't doubt that the Haskell /core language/ is
> simple and elegant. I doubt that it "solves everything".
It solves every problem of core application logic. It doesn't solve the
problem of talking to the outside world. (That's the job of the
libraries, and one they currently don't do so well.)
> As soon as you've dealt with the "solves everything" part, we'll talk
> again, and see if the "simple and elegant" also applies to that one. My
> claim is that the combo isn't possible.
>
> THAT is the point I'm trying to get through to you.
You somehow believe that writing a bunch of libraries would make the
language no longer elegant?
Well, I guess since it's unlikely to ever happen, we'll never know...
----------------------------------------
On 14/08/2012 03:42 PM, Invisible wrote:
> As with any language, my WTF list:
On the other hand, it seems they certainly did a fair few things right.
It's messy in there, but it does have moments of hope.
----------------------------------------
On 14/08/2012 03:42 PM, Invisible wrote:
> As with any language, my WTF list:
Enums. Urgh. An enumeration is supposed to be a type that can only take
on the specified set of values. Except that in C#, an enumeration can
take on /any/ integer value. It's just that some of these values also
have friendly names. *sigh*
What, no bitfield type? :-P
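For contrast, here is a quick sketch in Java (not C#) of what a genuinely closed enumeration looks like: there is no conversion from int to the enum type, and EnumSet serves as a type-safe bitfield. The Flag names are purely illustrative:

```java
import java.util.EnumSet;

public class EnumDemo {
    // Only these three values exist; there is no cast from int to Flag.
    enum Flag { READ, WRITE, EXECUTE }

    public static void main(String[] args) {
        // EnumSet is backed by a bit vector internally, so it doubles
        // as the "bitfield type" while staying type-safe.
        EnumSet<Flag> perms = EnumSet.of(Flag.READ, Flag.WRITE);
        System.out.println(perms.contains(Flag.READ));    // true
        System.out.println(perms.contains(Flag.EXECUTE)); // false
        // Flag f = (Flag) 7;  // would not compile: no int-to-enum conversion
    }
}
```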
----------------------------------------
On 15/08/2012 09:56, Invisible wrote:
>> Hint: There's more to Unicode than just ~1 million code points with
>> glyphs mapped to some of them.
>
> Sure. But if you can't even /write down/ those code-points, that's kinda
> limiting.
>
> (It also strikes me that if the String type /can/ hold them all and Char
> /can't/, that has interesting implications for trying to iterate over a
> String one Char at a time...)
As soon as you think of combining diacritics, you'll see that this type
of limitation is actually inevitable, even with a way to hold any
code-point value in a char: You can, for instance, hold the canonical
representation of Lowercase A + Acute Accent in a char (because it has a
dedicated code point, U+00E1), but you can't do the same with the
canonical representation of Lowercase A + Double Acute Accent (because
that's the sequence U+0061 U+030B) - or with the non-canonical
representation of A + Acute Accent, for that matter (U+0061 U+0301).
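The same point can be sketched in Java (whose strings are also UTF-16), using the standard Normalizer class; the literals below are exactly the sequences mentioned above:

```java
import java.text.Normalizer;

public class NormalizeDemo {
    public static void main(String[] args) {
        String precomposed = "\u00E1";  // á as a single code point (U+00E1)
        String combining   = "a\u0301"; // a + combining acute (U+0061 U+0301)

        // Different code-point sequences, same canonical form:
        System.out.println(precomposed.equals(combining)); // false
        System.out.println(Normalizer.normalize(combining, Normalizer.Form.NFC)
                                     .equals(precomposed)); // true

        // a + combining double acute (U+0061 U+030B) has no precomposed
        // form, so even NFC leaves it as two code points:
        String doubleAcute = "a\u030B";
        System.out.println(Normalizer.normalize(doubleAcute, Normalizer.Form.NFC)
                                     .length()); // 2
    }
}
```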
>>> Pop quiz: if x is a string and y is a delegate, does it do delegate
>>> concatenation or string concatenation?
>>
>> Provided it doesn't raise an error, obviously you'll get string
>> concatenation,
>
> Yes.
>
>> because you can't convert strings to delegates.
>
> No.
>
> From what I can tell, it never tries to convert a string to anything
> else, but it /does/ try to convert anything else to a string.
Yes, exactly.
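Java's + operator follows the same rule, so the asymmetry can be sketched there (with a Runnable standing in for a C# delegate - an assumption of this sketch, since Java has no delegates):

```java
public class ConcatDemo {
    public static void main(String[] args) {
        Runnable r = () -> {};      // stand-in for a delegate value

        // The non-string operand is converted to a string (via
        // String.valueOf); the string is never converted to anything else.
        String s = "handler: " + r;
        System.out.println(s.startsWith("handler: ")); // true

        System.out.println("n = " + 42); // "n = 42"
    }
}
```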
----------------------------------------
On 15/08/2012 11:46, Invisible wrote:
> On 14/08/2012 03:42 PM, Invisible wrote:
>
>> - The "char" type works with Unicode. Well done. Oh, but wait... It only
> stores 16 bits, and yet Unicode actually requires 21 bits to represent a
>> single code-point. So this "Unicode character" only actually covers the
>> Basic Multilingual Plane. FAIL!
>
> Oh great. Apparently "char" doesn't store a code-point at all, it stores
> a code-unit.
>
> For anything in the BMP, these are effectively the same thing. For
> anything outside that range, *you* must manually write the code to
> decode UTF-16 into actual code-points (which then do not fit into a
> "char").
Uh... why does this come as a surprise to you?
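Java's char has exactly the same code-unit semantics, so the situation can be sketched there; U+1F600 is just a convenient non-BMP example:

```java
public class CodeUnitDemo {
    public static void main(String[] args) {
        // U+1F600 lies outside the BMP, so in UTF-16 it is a surrogate pair:
        String s = "\uD83D\uDE00";

        System.out.println(s.length());                      // 2 code units
        System.out.println(s.codePointCount(0, s.length())); // 1 code point
        System.out.println(Integer.toHexString(s.codePointAt(0))); // 1f600

        // Iterating char by char yields the two surrogates, not the character:
        System.out.println(Character.isSurrogate(s.charAt(0))); // true
    }
}
```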
----------------------------------------
>>> Hint: There's more to Unicode than just ~1 million code points with
>>> glyphs mapped to some of them.
>>
>> Sure. But if you can't even /write down/ those code-points, that's kinda
>> limiting.
>>
>> (It also strikes me that if the String type /can/ hold them all and Char
>> /can't/, that has interesting implications for trying to iterate over a
>> String one Char at a time...)
>
> As soon as you think of combining diacritics, you'll see that this type
> of limitation is actually inevitable
I guess combining characters are The Real WTF...
(No, wait - that's the BOM.)
----------------------------------------
On 15/08/2012 12:36, Invisible wrote:
>>> I was referring only to the design of the language itself, not the
>>> libraries that go with it, nor any of the various other tools required
>>> to make a language generally useful. It's no secret that Haskell fails
>>> spectacularly on that count.
>>
>> I'm reiterating: I don't doubt that the Haskell /core language/ is
>> simple and elegant. I doubt that it "solves everything".
>
> It solves every problem of core application logic. It doesn't solve the
> problem of talking to the outside world. (That's the job of the
> libraries, and one they currently don't do so well.)
Neither does it solve every problem of core application logic in a
/practical/ manner. At least that's the impression I get from your
occasional rants. So for those things you'll need libraries, too.
>> As soon as you've dealt with the "solves everything" part, we'll talk
>> again, and see if the "simple and elegant" also applies to that one. My
>> claim is that the combo isn't possible.
>>
>> THAT is the point I'm trying to get through to you.
>
> You somehow believe that writing a bunch of libraries would make the
> language no longer elegant?
I believe that writing a sufficient bunch of libraries to care for all
needs would make the /combo/ non-elegant in various places.
That said, if the core language is /that/ simple, elegant and
cover-all, how come nobody has managed to put together an easy-to-use,
cover-all library for X yet? (X := any feature already implemented in
multiple libraries, each with its own severe limitations. I don't
remember which example your rant was about - mutable lists or some such?)
----------------------------------------
>>> - The "char" type works with Unicode. Well done. Oh, but wait... It only
>>> stores 16 bits, and yet Unicode actually requires 21 bits to represent a
>>> single code-point. So this "Unicode character" only actually covers the
>>> Basic Multilingual Plane. FAIL!
>>
>> Oh great. Apparently "char" doesn't store a code-point at all, it stores
>> a code-unit.
>>
>> For anything in the BMP, these are effectively the same thing. For
>> anything outside that range, *you* must manually write the code to
>> decode UTF-16 into actual code-points (which then do not fit into a
>> "char").
>
> Uh... why does this come as a surprise to you?
I guess I'm used to using a programming language where a Char is...
well... any valid Unicode code-point, and once you set the encoding of a
file handle, the library does all necessary encoding and decoding,
whether it's UTF-8, UTF-16, Latin-1 or whatever.
Still, I suppose it's better than char = unsigned byte. :-P
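The language isn't named here, but the same division of labour can be sketched with Java's standard library: choose the charset when opening the file, and the reader/writer does all the encoding and decoding. The temp-file name is a throwaway detail of this sketch:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CharsetDemo {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("demo", ".txt");
        String text = "caf\u00E9 \uD83D\uDE00"; // includes a non-BMP character

        // The library encodes to UTF-8 on write and decodes back to
        // UTF-16 strings on read - no manual transcoding required:
        Files.writeString(p, text, StandardCharsets.UTF_8);
        String back = Files.readString(p, StandardCharsets.UTF_8);

        System.out.println(back.equals(text)); // true
        Files.delete(p);
    }
}
```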