On 8/14/2012 4:20, Warp wrote:
> Does Object contain member variables? If not, then I don't see the problem.
Good point. I don't know.
> Even if it does, then the language could implicitly do with it what C++'s
> virtual inheritance does. You can still forbid all other types of diamond
> inheritance if you so wish.
You could, but then it's getting weird.
--
Darren New, San Diego CA, USA (PST)
"Oh no! We're out of code juice!"
"Don't panic. There's beans and filters
in the cabinet."
On 8/14/2012 7:42, Invisible wrote:
> - "As of C# 2.0, it is also possible to have an array in a structure." (Erm,
> why the HELL would it not be possible to do that before??)
Because structs are fixed size determined by the compiler (and stored in the
stack generally speaking), and arrays are variable sized and stored in the
heap. You can share references to arrays but not to structs, so what happens
when you pass a reference to the array inside a struct and then discard the
variable holding the struct?
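The copy-semantics problem described above is easy to demonstrate. Below is a minimal sketch (the `Sample` struct name is made up for illustration) showing that a struct field of array type holds only a *reference*; copying the struct copies the reference, not the array:

```csharp
using System;

struct Sample
{
    public int[] Data;   // the struct stores only a reference; the array itself lives on the heap
}

class Program
{
    static void Main()
    {
        var a = new Sample { Data = new[] { 1, 2, 3 } };
        var b = a;           // copies the struct -- but both copies point at the SAME array
        b.Data[0] = 99;
        Console.WriteLine(a.Data[0]);  // prints 99: the array outlives any one struct copy
    }
}
```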
> - "x & y" performs a Boolean-AND if the arguments are Bools, and a bitwise-AND
> if they are integers. The same goes for all the other logical operators.
> Except NOT, which has "!x" for Bools and "~x" for integers. WTF?
Makes sense if you think about where those symbols came from, namely C.
> - "x + y" performs addition. Unless either argument is a string, in which
> case the other is converted to a string as well (if not a string already)
> and the strings are concatenated. Unless both arguments are delegates, in
> which case they are concatenated. (I guess there's a /reason/ the Java guys
> claim that operator overloading is evil!)
If you don't know what types you're adding together, you're already in trouble.
> - Goto? Seriously? Well, I suppose /technically/ that's not actually a WTF...
Useful for generated code like state machines.
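A sketch of what that generated-code use looks like (the state machine here is invented for illustration -- it recognizes strings of the form "a…ab"):

```csharp
using System;

class Program
{
    // A tiny generated-code-style state machine using goto between labeled states.
    static bool Matches(string s)
    {
        int i = 0;
    StateA:
        // consume leading 'a' characters
        if (i < s.Length && s[i] == 'a') { i++; goto StateA; }
        goto StateB;
    StateB:
        // accept exactly one trailing 'b'
        return i < s.Length && s[i] == 'b' && i == s.Length - 1;
    }

    static void Main()
    {
        Console.WriteLine(Matches("aaab"));  // True
        Console.WriteLine(Matches("aba"));   // False
    }
}
```

Tools that emit state machines (parser generators, async rewriters) produce exactly this shape, because each transition is just a jump.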
> - Anonymous delegates /and/ lambda functions?
Yes?
> - A delegate is a thing which is called when an event is fired, but an event
> /is/ a delegate?? (In fairness, this is probably a Wikibooks WTF rather than
> anything to do with the design of C#.)
A delegate is basically a function pointer you can invoke. An event is
basically a delegate with only certain operators (+= and -=) exposed outside
the class that declares the delegate. So think of "event" like "restricted
interface to a delegate".
"firing an event" is exactly the same thing as "invoking a delegate."
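The "restricted interface" idea above can be sketched like this (the `Button`/`Clicked` names are made up for illustration):

```csharp
using System;

class Button
{
    // The delegate type (Action<string>) defines the handler signature;
    // the event exposes only += and -= to code outside this class.
    public event Action<string> Clicked;

    public void SimulateClick()
    {
        // Only the declaring class may invoke (fire) the underlying delegate.
        if (Clicked != null)
            Clicked("clicked!");
    }
}

class Program
{
    static void Main()
    {
        var button = new Button();
        button.Clicked += msg => Console.WriteLine("Handler 1: " + msg);
        button.Clicked += msg => Console.WriteLine("Handler 2: " + msg);
        button.SimulateClick();   // firing the event = invoking the delegate
        // button.Clicked("x");   // would not compile: invocation is restricted
    }
}
```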
> - Extension methods. Just... what??
Useful for LINQ. Useful in general for readability.
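For the readability point: an extension method is just a static method that call sites may write as if it were an instance method. A minimal sketch (the `Shout` method is invented for illustration):

```csharp
using System;

static class StringExtensions
{
    // "this string s" marks an extension method on string.
    public static string Shout(this string s)
    {
        return s.ToUpper() + "!";
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine("hello".Shout());              // reads left-to-right at the call site
        Console.WriteLine(StringExtensions.Shout("hi")); // the same call, spelled out
    }
}
```

LINQ relies on this: `Where`, `Select`, etc. are extension methods on `IEnumerable<T>`, which is why queries chain left-to-right.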
> - A non-generic queue is called Queue, and a generic queue is called
> Queue<T>. And yet, for the other container types, the generic and
> non-generic versions have completely different, unrelated names. (I guess
> that's backwards compatibility for you... oh, wait.)
Yep. Generics didn't show up until C# 2.0, I think.
On 8/14/2012 14:06, Orchid Win7 v1 wrote:
> However, char, the type of a single character, apparently covers the BMP
> only. If you're going to go to all the trouble of actually supporting
> Unicode, why not support the whole thing?
You do know that Unicode has evolved too, right?
> Yeah, sounds like more the sort of thing you'd expect in a scripting language.
No. It's perfectly reasonable rules based on static types.
> OTOH, it appears that it's the only autoconversion that can happen.
Not at all. Indeed, you can declare your own implicit conversions for your
own types.
> Pop quiz: if x is a string and y is a delegate, does it do delegate
> concatenation or string concatenation?
It fails to compile, because there's no implicit conversion between the two.
On 8/15/2012 4:08, Invisible wrote:
> I guess I'm used to using a programming language where a Char is... well...
> any valid Unicode code-point,
You mean, you only use languages invented after Unicode had more than 65536
code points defined.
On 8/15/2012 4:52, clipka wrote:
> You're right there. I guess it's to provide support for bitfield-style use
> of enums.
That too. They're just conveniently-named integers.
I far prefer Java's enums, which are basically a full-blown class (with
member variables and static and instance functions and everything), but with
all valid instances of the class being declared in the enum at program
start-up time. So, basically, a full class plus a bunch of "static BLAH =
new ..." sorts of operations.
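The C# side -- "conveniently-named integers" with bitfield-style use -- can be sketched as follows (the `Permissions` enum is invented for illustration):

```csharp
using System;

[Flags]
enum Permissions
{
    None  = 0,
    Read  = 1,
    Write = 2,
    Exec  = 4,
}

class Program
{
    static void Main()
    {
        // Combining cases with | works precisely because a C# enum is,
        // underneath, just a conveniently-named integer.
        var p = Permissions.Read | Permissions.Write;
        Console.WriteLine((int)p);                                      // 3
        Console.WriteLine((p & Permissions.Write) != 0);                // True
    }
}
```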
On 16.08.2012 01:40, Darren New wrote:
> On 8/14/2012 4:52, clipka wrote:
>> I guess you've heard of code re-use through delegation, no?
>
> Then every type of screen item has to delegate that call to something
> else, and every time the superclass adds a function, then all subclasses
> have to write code to delegate it properly.
You're presuming here that GUI item superclasses keep growing new
functions. That's not a necessary prerequisite for a GUI library though.
>> I do agree that at least inheritance among interfaces would be nice to
>> have,
>
> I don't know of any language that implements inheritance and interfaces
> that doesn't allow interfaces to inherit. Quite honestly, that would
> make almost no sense.
I totally agree that it's a good choice to not design a language that
way. But still, even with such a language you /could/ write a GUI library.
On 16.08.2012 01:58, Darren New wrote:
> I far prefer Java's enums, which are basically a full-blown class (with
> member variables and static and instance functions and everything), but
> with all valid instances of the class being declared in the enum at
> program start-up time. So, basically, a full class plus a bunch of
> "static BLAH = new ..." sorts of operations.
Strangely enough, I could have sworn it was C# that did enums that way. No
idea where or when I got that mixed up.
Yeah, I really do like that approach as well.
On 16/08/2012 12:02 AM, Darren New wrote:
> On 8/14/2012 3:17, clipka wrote:
>> If you want to give such guarantees, you can't handle variables of
>> types for
>> which you have no declaration, because obviously you can't tell
>> whether they
>> can do A, B, C and D or not.
>
> To be fair, I suspect Haskell just treats that declaration as if you're
> returning Object in C#. In other words, haskell I suspect doesn't let
> you return a value whose type you can't see, but instead of making that
> illegal, it essentially inserts a cast for you to an opaque supertype.
> (So to speak, of course, since Haskell isn't OO.)
You're probably right. (Depending on how you want to argue about the
semantics.) My point was just that "oh, it's /obviously/ impossible"
isn't actually so obvious.
>> Heh. And I thought Haskell was the only language stupid enough to
>> claim that
>> the reading the formal language specification is a good way to learn
>> how to
>> program with it...
>
> Most languages lack a formal spec. C# gets close. Hermes actually
> generates the compiler based on the formal spec, so yeah.
C# is supposedly an ISO standard, so there must be a published
specification for it. However, while the spec is the thing you want to
read if you're building a new implementation (e.g., Mono), it's usually
an utterly lousy way to learn how to actually /use/ a programming language.
(Although, as I discovered, the C# "spec" actually reads more like a
tutorial, so...)
> So show me a function that returns a value of a type I can't see.
>> module Foobar (Foo (..), foobar) where
>>
>> data Foo = Foo Int deriving Show
>> data Bar = Bar Int deriving Show
>>
>> foobar :: Foo -> Bar
>> foobar (Foo x) = Bar x
>>
>> Notice how Bar is not exported, yet foobar is exported, and its return
>> type
>> is Bar.
>
> So you declare foobar returning an interface that support Show, which is
> what you've done.
It still works if you take Show out. (Although in that case it becomes
even more pointless.)
My point isn't that this is a useful thing to do. It's just that it
doesn't necessarily /have/ to be impossible. C# /chose/ to make it
impossible - a not unreasonable choice, but not the /only/ choice.
>>> Well, actually, everything's an object.
>>
>> Except that no, no it isn't. Apparently only values of reference types
>> are considered "objects". :-P
>
> Uhhhh, no. Everything is an object.
>
>> You could /almost/ say "everything's a class", except that no, we have
>> enums and stuff which aren't considered "classes".
>
> Of course they're classes. If you want, you can even look up what their
> superclass is. For example, 23 is of class System.Integer32 or some such.
According to the spec document, any value of a value type is not
formally referred to as an "object"; that term is reserved for values of
reference type. Similarly, the term "class" is reserved to refer to
types declared with the "class" keyword.
Of course, with the C# unified type system, all types /behave/ as
classes in the usual OO sense (which is the point you're making). It's
just that the C# language spec does not /call/ them classes.
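The "behaves as a class" point is easy to observe -- even a bare integer literal has methods and a runtime type. A minimal sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        // 23 is a value of the value type System.Int32, yet it still
        // behaves like an object: it has methods and a runtime type.
        Console.WriteLine(23.GetType().Name);   // Int32
        Console.WriteLine(23.ToString());       // 23

        object boxed = 23;    // boxing: the value gains reference-type identity
        Console.WriteLine(boxed is int);        // True
    }
}
```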
>> Right. So enumerations aren't integers, and that's why you define the
>> corresponding integer value of each case right there in the
>> definition? :-P
>
> They correspond to integers. They aren't integers. I *do* think Java did
> that one much better.
What, by not having enumerations at all?
>> If you have real enumerations, why do you need a special-case for bool?
>
> Because you have syntax in the language that uses specific types, like
> "if" statements.
Haskell has special if syntax, and yet Bool is defined in library code.
>> (The same argument applies to Haskell: Why the HELL did they special-case
>> the list type, when you can trivially define it manually? What were they
>> thinking??)
>
> They were thinking you needed special syntax.
You need special syntax for integers, and yet anybody that wants to can
define a new integer type. (E.g., I wrote a library that lets you deal
with "half integers" - numbers which are integers when multiplied by 2.)
What they /should/ have done is let you use what is now "list literal
syntax" for /any/ suitable container type. Unfortunately, they didn't.
And what's with [Int], anyway? Would List Int just be too readable?
> Hermes, for example, does not have a string type. But if you have an
> enum where each name is one character long, you can make an array of
> that value by sticking it inside quotes. Which I thought was kind of cute.
Nice. :-S
>> OK, whatever. I haven't really explored it all that much yet, but it
>> doesn't appear to compare well to Eiffel.
>
> What doesn't, C#? There are various things Eiffel does theoretically
> better, if they're actually implemented.
I doubt Eiffel has half as many libraries as C# though. Like most
well-designed languages, it never really took off.
(I'm guessing C# took off because it's explicitly designed to look like
one of the other badly-designed languages which is already wildly
popular...)
>> So if a class tries to implement both interfaces, it can't provide
>> different
>> implementations for these two distinct methods merely because their names
>> clash?
>
> That depends on your language. Java? I don't think so. C#? Definitely.
> But then you can't invoke the interface call without specifying which
> one you mean, so there's no ambiguity.
 From what I've seen, you can't just use the fully-qualified method
name; you have to actually cast the entire object to the type of the
interface first. Which seems rather unnecessarily long-winded...
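The C# mechanism being described is explicit interface implementation. A minimal sketch with invented interface names:

```csharp
using System;

interface IDrawableShape { void Draw(); }
interface IDrawableCard  { void Draw(); }

class Game : IDrawableShape, IDrawableCard
{
    // Explicit implementations: two distinct methods for the clashing name.
    void IDrawableShape.Draw() { Console.WriteLine("rendering shape"); }
    void IDrawableCard.Draw()  { Console.WriteLine("drawing a card"); }
}

class Program
{
    static void Main()
    {
        var g = new Game();
        // g.Draw();               // would not compile: which Draw?
        ((IDrawableShape)g).Draw();  // rendering shape
        ((IDrawableCard)g).Draw();   // drawing a card
    }
}
```

Hence the cast: an explicitly implemented method is only reachable through the interface type, so the call site must name which interface it means.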
>> According to the Great Language Shootout, C# Mono is 3x slower than C.
>
> So, an open source clone is slow, thus the version Microsoft uses that
> compiles down to optimized native code must be also?
According to Wikipedia, Mono /also/ does JIT compilation to native code.
(Although I agree it would be better if there were published benchmark
results for .NET itself.)
>>> Sure there is, because you recompile the code while it's running.
>> That sounds remarkably hard to get right...
>
> No harder than compiling it in the first place while it's running. Why
> is it harder to re-JIT than to JIT?
Because you have to make sure the recompilation actually happens, and that it
uses up-to-date information. It seems remarkably easy for that step to get
missed under sufficiently unusual conditions.