POV-Ray : Newsgroups : povray.off-topic : Lots of statistics
Lots of statistics (Message 108 to 117 of 177)
From: Invisible
Subject: Re: C#
Date: 16 Aug 2012 04:27:36
Message: <502caef8$1@news.povray.org>
On 16/08/2012 12:02 AM, Darren New wrote:
> On 8/14/2012 3:17, clipka wrote:
>> If you want to give such guarantees, you can't handle variables of
>> types for
>> which you have no declaration, because obviously you can't tell
>> whether they
>> can do A, B, C and D or not.
>
> To be fair, I suspect Haskell just treats that declaration as if you're
> returning Object in C#. In other words, haskell I suspect doesn't let
> you return a value whose type you can't see, but instead of making that
> illegal, it essentially inserts a cast for you to an opaque supertype.
> (So to speak, of course, since Haskell isn't OO.)

You're probably right. (Depending on how you want to argue about the 
semantics.) My point was just that "oh, it's /obviously/ impossible" 
isn't actually so obvious.



From: Invisible
Subject: Re: C#
Date: 16 Aug 2012 04:38:58
Message: <502cb1a2$1@news.povray.org>
>> Heh. And I thought Haskell was the only language stupid enough to
>> claim that
>> reading the formal language specification is a good way to learn
>> how to
>> program with it...
>
> Most languages lack a formal spec. C# gets close. Hermes actually
> generates the compiler based on the formal spec, so yeah.

C# is an ECMA/ISO standard (ECMA-334; ISO/IEC 23270), so there is a 
published specification for it. However, while the spec is the thing you 
want to read if you're building a new implementation (e.g., Mono), it's 
usually an utterly lousy way to learn how to actually /use/ a programming 
language.

(Although, as I discovered, the C# "spec" actually reads more like a 
tutorial, so...)

> So show me a function that returns a value of a type I can't see.

>> module Foobar (Foo (..), foobar) where
>>
>> data Foo = Foo Int deriving Show
>> data Bar = Bar Int deriving Show
>>
>> foobar :: Foo -> Bar
>> foobar (Foo x) = Bar x
>>
>> Notice how Bar is not exported, yet foobar is exported, and its return
>> type
>> is Bar.
>
> So you declare foobar returning an interface that support Show, which is
> what you've done.

It still works if you take Show out. (Although in that case it becomes 
even more pointless.)
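A minimal sketch of that version (the module name Foobar2 is just for illustration): GHC accepts this even though callers can never name the result type.

```haskell
module Foobar2 (Foo (..), foobar) where

-- Bar is deliberately *not* exported: callers can obtain a Bar
-- from foobar and pass it around, but they cannot name its type,
-- pattern-match on it, or construct one themselves.
data Foo = Foo Int
data Bar = Bar Int

foobar :: Foo -> Bar
foobar (Foo x) = Bar x
```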

My point isn't that this is a useful thing to do. It's just that it 
doesn't necessarily /have/ to be impossible. C# /chose/ to make it 
impossible - a not unreasonable choice, but not the /only/ choice.

>>> Well, actually, everything's an object.
>>
>> Except that no, no it isn't. Apparently only values of reference types
>> are considered "objects". :-P
>
> Uhhhh, no. Everything is an object.
>
>> You could /almost/ say "everything's a class", except that no, we have
>> enums and stuff which aren't considered "classes".
>
> Of course they're classes. If you want, you can even look up what their
> superclass is. For example, 23 is of class System.Integer32 or some such.

According to the spec document, any value of a value type is not 
formally referred to as an "object"; that term is reserved for values of 
reference type. Similarly, the term "class" is reserved to refer to 
types declared with the "class" keyword.

Of course, with the C# unified type system, all types /behave/ as 
classes in the usual OO sense (which is the point you're making). It's 
just that the C# language spec does not /call/ them classes.

>> Right. So enumerations aren't integers, and that's why you define the
>> corresponding integer value of each case right there in the
>> definition? :-P
>
> They correspond to integers. They aren't integers. I *do* think Java did
> that one much better.

What, by not having enumerations at all?

>> If you have real enumerations, why do you need a special-case for bool?
>
> Because you have syntax in the language that uses specific types, like
> "if" statements.

Haskell has special if syntax, and yet Bool is defined in library code.
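In GHC's Prelude, Bool really is an ordinary two-constructor data type. A sketch of the same idea, with hypothetical names to avoid clashing with the real one:

```haskell
-- A Bool lookalike defined entirely in library code; only the
-- if/then/else *syntax* is baked into the language, not the type.
data MyBool = MyFalse | MyTrue deriving (Eq, Show)

-- A plain function that plays the role of if-then-else:
myIf :: MyBool -> a -> a -> a
myIf MyTrue  t _ = t
myIf MyFalse _ e = e
```

(GHC's RebindableSyntax extension even lets if/then/else desugar to a user-supplied ifThenElse function.)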

>> (The same argument applies to Haskell: Why the HELL did they special-case
>> the list type, when you can trivially define it manually? What were they
>> thinking??)
>
> They were thinking you needed special syntax.

You need special syntax for integers, and yet anybody that wants to can 
define a new integer type. (E.g., I wrote a library that lets you deal 
with "half integers" - numbers which are integers when multiplied by 2.)

What they /should/ have done is let you use what is now "list literal 
syntax" for /any/ suitable container type. Unfortunately, they didn't.

And what's with [Int], anyway? Would List Int just be too readable?
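To be concrete about "trivially define it manually": a hand-rolled list, structurally identical to the built-in one (names are illustrative):

```haskell
-- Structurally the same as Haskell's built-in list type;
-- all that's missing is the [] / (:) / [a] sugar.
data List a = Nil | Cons a (List a) deriving (Eq, Show)

-- Conversion to the built-in type, showing they're isomorphic:
toBuiltin :: List a -> [a]
toBuiltin Nil         = []
toBuiltin (Cons x xs) = x : toBuiltin xs
```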

> Hermes, for example, does not have a string type. But if you have an
> enum where each name is one character long, you can make an array of
> that value by sticking it inside quotes. Which I thought was kind of cute.

Nice. :-S

>> OK, whatever. I haven't really explored it all that much yet, but it
>> doesn't appear to compare well to Eiffel.
>
> What doesn't, C#? There are various things Eiffel does theoretically
> better, if they're actually implemented.

I doubt Eiffel has half as many libraries as C# though. Like most 
well-designed languages, it never really took off.

(I'm guessing C# took off because it's explicitly designed to look like 
one of the other badly-designed languages which is already wildly 
popular...)



From: Invisible
Subject: Re: C#
Date: 16 Aug 2012 06:14:58
Message: <502cc822$1@news.povray.org>
>> So if a class tries to implement both interfaces, it can't provide
>> different
>> implementations for these two distinct methods merely because their names
>> clash?
>
> That depends on your language. Java? I don't think so. C#? Definitely.
> But then you can't invoke the interface call without specifying which
> one you mean, so there's no ambiguity.

From what I've seen, you can't just use the fully-qualified method 
name; you have to actually cast the entire object to the type of the 
interface first. Which seems rather unnecessarily long-winded...

>> According to the Great Language Shootout, C# Mono is 3x slower than C.
>
> So, an open source clone is slow, thus the version Microsoft uses that
> compiles down to optimized native code must be also?

According to Wikipedia, Mono /also/ does JIT compilation to native code. 
(Although I agree it would be better if there were published benchmark 
results for .NET itself.)

>>> Sure there is, because you recompile the code while it's running.
>> That sounds remarkably hard to get right...
>
> No harder than compiling it in the first place while it's running. Why
> is it harder to re-JIT than to JIT?

Because you have to make sure it actually happens, and uses up-to-date 
information. It seems remarkably easy for this step to get missed under 
sufficiently unusual conditions.



From: Invisible
Subject: Re: C#
Date: 16 Aug 2012 07:13:11
Message: <502cd5c7@news.povray.org>
>> So what do you think Haskell solves inelegantly?
>
> Does it do hierarchies of types (like windows) nicely, where you don't
> have to go back and fix every use of an object when you add another one
> to the hierarchy?

Yeah, you can do this.
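The usual approach is a type class for the shared interface plus an existential wrapper; adding a new screen element means writing one new instance, with no edits to existing call sites. (All names here are hypothetical; this is a sketch, not a real GUI design.)

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Shared interface for anything that can appear on screen.
class Widget w where
  draw :: w -> String

data Button = Button String
data Label  = Label String

instance Widget Button where
  draw (Button t) = "[" ++ t ++ "]"

instance Widget Label where
  draw (Label t) = t

-- Existential wrapper so heterogeneous widgets fit in one list;
-- a third widget type later needs only a new instance.
data AnyWidget = forall w. Widget w => AnyWidget w

drawAll :: [AnyWidget] -> [String]
drawAll ws = [ draw w | AnyWidget w <- ws ]
```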

> Does it interface well to low-level stuff without having to write wrappers?

Does /any/ high-level language do this?

Haskell lets you talk to low-level stuff fairly easily. It's not 
trivial, but it's reasonably painless. (Depending on the complexity of 
the thing you're trying to access.)
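As a concrete sketch, binding directly to a C function from libm via the FFI takes a single declaration, with no wrapper code on the C side:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

-- Import C's sin directly; GHC links against libm.
foreign import ccall unsafe "math.h sin"
  c_sin :: CDouble -> CDouble

main :: IO ()
main = print (c_sin 0)
```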

> Does it handle threads running on various address spaces (i.e.,
> distributed processing) well?

The language itself does not include this feature. You would have to use 
some kind of library.

> How is it on the code generation part? Can you write programs that emit
> code that it then loads back in?

None of the extant implementations let you do this.

[Or rather, GHC is /supposed/ to support this, but I couldn't actually 
get it to work.]

> Can you write programs that notice your
> SQL schema has changed and modifies the running program to account for
> that?

Since it's not possible to receive notifications for schema changes in 
the first place, I don't see how this is feasible.

> Can you compile and distribute Haskell code, let other people use it,
> and then release new object code that is compatible with the existing
> code people already wrote and compiled without breaking things?

No.

Haskell is usually distributed in source form. Partly that's because the 
community is small, and nobody has the resources to compile their code 
for every target platform. Partly it's because most Haskell code is 
open-source anyway. But partly it's because the extant Haskell 
implementations have very poor object code compatibility.

> How elegantly does it interface to dynamic languages like javascript?

What do you mean by "interface"?

> How elegantly does it handle the types in SQL tables? Stuff like
> nullable big decimals? Strings in various locales?

Depends on which SQL library you use.

> How easy is it to parse and compile, such that you could do so on a
> keystroke by keystroke basis? When you make a mistake, is it easy for
> the compiler to pinpoint where you made the mistake? Is it easy for the
> IDE to know when it's safe to recompile the code or whether you're still
> typing?

The Leksah IDE compiles as you type, and highlights compilation errors 
in near-realtime. (Like, there's a delay of a couple of seconds.)

Haskell's type inference can make pinpointing the actual error location 
kind of tricky; you can counter this by adding more explicit type 
signatures to your code. (This is arguably good practice anyway.)
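For example, with an explicit signature like the (made-up) one below, a type error inside the definition gets reported at the definition site, rather than at whatever distant call site inference finally fails:

```haskell
-- The signature pins down the intended type, so a mistake in the
-- body (say, dividing by `length xs` without fromIntegral) is
-- flagged right here instead of propagating to callers.
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)
```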



More to the point, most of the questions above aren't about the language 
itself, they're about all the supporting infrastructure that goes with a 
language to make it useful. While these are obviously important things, 
they aren't part of the core language.

For example, GHC has laughable binary compatibility. Every minor 
point-release of the compiler generates incompatible object code. This 
has nothing to do with language design, however. It's entirely possible 
that some day somebody will write a rival compiler which has excellent 
binary compatibility. The language itself does not make this impossible 
or even difficult. It's just that with everybody distributing code in 
source form, there's no real incentive for people to work towards this goal.

Similar remarks go for most of your points. There are languages like 
Erlang where support for distributed processing is built-in. Haskell is 
not such a language. On the other hand, we already have a library for 
shared-memory synchronisation and communication between threads; it's 
not inconceivable that some day somebody will extend it to work across 
node boundaries. But whether this happens or not is not a property of 
the language. The fact that the language is powerful enough to support 
this, however, /is/ a property of the language.

In summary: If you're saying that C# has better tools and better 
libraries for doing real-world stuff, then I have no argument. If you're 
saying that the C# language design is superior, then I must disagree.



From: Warp
Subject: Re: C#
Date: 16 Aug 2012 10:33:12
Message: <502d04a8@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> I might have asked this before, but do you have any good (hopefully generic) 
> examples of where MI really helps out?

It's not commonplace for sure, but I have had actual situations where
multiple inheritance has been useful (and also at least one situation
where multiple inheritance would have been useful but was not available
because the language, in this case Objective-C, didn't support it).
Coincidentally (or perhaps not) all the cases I remember have been related
to GUI programming.

These have been cases where I have had an object which should act as a
screen element (whatever the basic element may be called in a particular
GUI library) and also as another class, neither of which could have been
just an interface (because both base classes contained functionality of
their own).

(In the Objective-C case I had to get around the limitation by using
preprocessor macros which could be used to add the required member
variables and functions in all the classes that needed them. It was quite
ugly.)

-- 
                                                          - Warp



From: Warp
Subject: Re: C#
Date: 16 Aug 2012 10:34:51
Message: <502d050b@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> > Even if it does, then the language could implicitly do with it what C++'s
> > virtual inheritance does. You can still forbid all other types of diamond
> > inheritance if you so wish.

> You could, but then it's getting weird.

What would be weird about it?

-- 
                                                          - Warp



From: clipka
Subject: Re: C#
Date: 16 Aug 2012 12:08:55
Message: <502d1b17$1@news.povray.org>
Am 16.08.2012 13:13, schrieb Invisible:

> More to the point, most of the questions above aren't about the language
> itself, they're about all the supporting infrastructure that goes with a
> language to make it useful. While these are obviously important things,
> they aren't part of the core language.

YES, EXACTLY! So much for the "solution to everything".

> In summary: If you're saying that C# has better tools and better
> libraries for doing real-world stuff, then I have no argument. If you're
> saying that the C# language design is superior, then I must disagree.

The point is, if it's the right tool for the right job, its design /is/ 
superior (for this job).

Design superiority isn't in elegant simplicity, but in how actually 
useful it is - for a given purpose.



From: Orchid Win7 v1
Subject: Re: C# WTF list
Date: 16 Aug 2012 12:16:12
Message: <502d1ccc$1@news.povray.org>
On 16/08/2012 12:56 AM, Darren New wrote:
> On 8/15/2012 4:08, Invisible wrote:
>> I guess I'm used to using a programming language where a Char is...
>> well...
>> any valid Unicode code-point,
>
> You mean, you only use languages invented after Unicode had more than
> 65536 code points defined.

And when was that? About 10 years ago? :-P



From: clipka
Subject: Re: C#
Date: 16 Aug 2012 12:16:36
Message: <502d1ce4$1@news.povray.org>
Am 16.08.2012 10:38, schrieb Invisible:

> I doubt Eiffel has half as many libraries as C# though. Like most
> well-designed languages, it never really took off.

I'd dare say, "like most (if not all) languages with well-intended 
simple and elegant design" - and that it's /because/ of this design that 
they never really took off. Because its simplicity limits its usefulness 
for particular use cases, and the elegance (of the whole design) goes 
down the drain once you add extensions (including, but not limited to, 
libraries) to make it useful.



From: Orchid Win7 v1
Subject: Re: C#
Date: 16 Aug 2012 12:23:06
Message: <502d1e6a$1@news.povray.org>
On 16/08/2012 05:16 PM, clipka wrote:
> Am 16.08.2012 10:38, schrieb Invisible:
>
>> I doubt Eiffel has half as many libraries as C# though. Like most
>> well-designed languages, it never really took off.
>
> I'd dare say, "like most (if not all) languages with well-intended
> simple and elegant design" - and that it's /because/ of this design that
> they never really took off. Because its simplicity limits its usefulness
> for particular use cases, and the elegance (of the whole design) goes
> down the drain once you add extensions (including, but not limited to,
> libraries) to make it useful.

You keep saying this, and you keep being wrong. However, it is clear to 
me that nothing I say is going to convince you otherwise, so there we are.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.