povray.off-topic : Learning C# (Messages 11 to 20 of 32)
From: Orchid Win7 v1
Subject: Re: Learning C#
Date: 29 Sep 2012 06:18:15
Message: <5066cae7$1@news.povray.org>
On 28/09/2012 09:32 PM, Warp wrote:
> Orchid Win7 v1<voi### [at] devnull>  wrote:
>> Like anything else, there are some good things, and not so good things.
>> The one that really tickles me is the author insisting that because
>> assemblies are loaded dynamically, and because CIL takes up less space
>> than machine code and is JIT-compiled, you can "drastically reduce the
>> working set of your application".
>
> I don't even understand what that means.

C# is compiled to CIL, which is then JIT-compiled to machine code. The 
author claims (and I am highly sceptical) that CIL is much smaller than 
machine code. The author also says that since only the parts of your 
code which actually /run/ get compiled to machine code, the running 
machine code is smaller, giving your application a smaller "working set".

This, it is claimed, can reduce page faults, improve cache performance, 
and so forth.

Yes, that's right. Your application is using 250MB of heap space, but 
the executable binary is 4KB smaller, and that's the important thing, right?

> I have seen several arguments in the past that C++ templates
> are bad because they increase code size, thus increasing memory usage.

> In other words, their solution to the "huge" "problem" of increased code
> size is to increase memory usage ten-fold in order to make the executable
> binary slightly smaller. Great job.

Yeah, that's about the size of it.



From: Orchid Win7 v1
Subject: Re: Learning C#
Date: 29 Sep 2012 06:19:35
Message: <5066cb37$1@news.povray.org>
>> Heh. It amuses me that run-time polymorphism via inheritance is /the/
>> central contribution of the OO movement, and here I am reading a book
>> warning me not to use inheritance under any circumstances unless
>> absolutely unavoidable...
>
> So, what's their solution? You have to reinvent the wheel all the time?

Use interfaces instead.

> So if you need a linked list of widgets and a linked list of gizmos, you
> have to have two complete sets of add(), remove() and iterate() methods?

That is better handled with generics than any kind of inheritance. A 
better example might be a singly-linked list and a doubly-linked list.
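
Something like this is what I mean: one generic list type, used through 
an interface, covers widgets and gizmos alike. (A sketch with made-up 
names; the book doesn't give this example.)

using System.Collections.Generic;

public interface ISimpleList<T>
{
    void Add(T item);
    bool Remove(T item);
    IEnumerable<T> Items { get; }
}

// One generic implementation serves every element type; no
// inheritance between widgets and gizmos is required.
public class SinglyLinkedList<T> : ISimpleList<T>
{
    private sealed class Node
    {
        public T Value;
        public Node Next;
    }

    private Node head;

    public void Add(T item)
    {
        head = new Node { Value = item, Next = head };
    }

    public bool Remove(T item)
    {
        Node prev = null;
        for (Node n = head; n != null; prev = n, n = n.Next)
        {
            if (EqualityComparer<T>.Default.Equals(n.Value, item))
            {
                if (prev == null) head = n.Next;
                else prev.Next = n.Next;
                return true;
            }
        }
        return false;
    }

    public IEnumerable<T> Items
    {
        get
        {
            for (Node n = head; n != null; n = n.Next)
                yield return n.Value;
        }
    }
}

A SinglyLinkedList<Widget> and a SinglyLinkedList<Gizmo> then share all 
their code, and a DoublyLinkedList<T> could implement the same interface 
without either list type inheriting from the other.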



From: Orchid Win7 v1
Subject: Re: Learning C#
Date: 29 Sep 2012 06:27:58
Message: <5066cd2e$1@news.povray.org>
On 28/09/2012 09:27 PM, Warp wrote:
> Orchid Win7 v1<voi### [at] devnull>  wrote:
>> Heh. It amuses me that run-time polymorphism via inheritance is /the/
>> central contribution of the OO movement, and here I am reading a book
>> warning me not to use inheritance under any circumstances unless
>> absolutely unavoidable...
>
> Does it explain why?

In most OO languages (e.g., Java, Eiffel, Smalltalk...) you can override 
any method. Java adds a "final" keyword which you can use to disable 
overriding on a given method (presumably resulting in a small 
performance benefit). Eiffel of course achieves the same thing 
automatically using whole-program analysis - then again, Eiffel doesn't 
support dynamic code loading like Java does.

Regardless, C# turns this backwards. In C#, by default you /cannot/ 
override anything, unless you explicitly turn it on by writing 
"virtual". Meaning that if the author of some class didn't foresee the 
potential need to subclass it, you may not be able to override the 
necessary methods.

But that's nothing. According to the author of this book, you should 
mark all classes as "sealed", preventing them from ever being 
subclassed. Every class should be sealed, and no methods should be 
marked virtual, unless absolutely necessary.

Apparently it is "very hard" to design a class in such a way that it can 
be subclassed correctly. So you should only allow this possibility where 
a class has been explicitly designed with the intention of being 
subclassed later. In all other cases, you should disable subclassing, 
because if you didn't design for it, it won't work anyway.
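
To make the opt-in model concrete (my own toy example, not the book's):

using System;

public class Document
{
    // Overridable only because we explicitly said "virtual".
    public virtual void Print()
    {
        Console.WriteLine("plain print");
    }

    // Non-virtual by default: a subclass cannot override this.
    public void Save()
    {
        Console.WriteLine("saved");
    }
}

public class FancyDocument : Document
{
    // Overriding must also be explicit, with "override".
    public override void Print()
    {
        Console.WriteLine("fancy print");
    }
}

// The book's advice taken literally: anything not designed for
// subclassing gets sealed, so nobody can derive from it at all.
public sealed class Invoice
{
    public decimal Total()
    {
        return 0m;
    }
}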

All of which strikes me as a little baffling. I mean, I know that 
inheritance is overused and collaboration is underused. But this just 
seems excessive!

Then again, this is from the same book that explains how to write 
destructors (i.e., "don't") before explaining how to write methods with 
named arguments. The former is an obscure and very dangerous feature 
which you should never use, under any circumstances, ever. The latter is 
a pretty simple and common feature. Really, the order of presentation 
here seems rather strange. Rather than a coherent introduction, it seems 
like a random grab-bag of "features", in no particular order...



From: Darren New
Subject: Re: Learning C#
Date: 29 Sep 2012 12:51:55
Message: <5067272b$1@news.povray.org>
On 9/29/2012 3:19, Orchid Win7 v1 wrote:
>>> Heh. It amuses me that run-time polymorphism via inheritance is /the/
>>> central contribution of the OO movement, and here I am reading a book
>>> warning me not to use inheritance under any circumstances unless
>>> absolutely unavoidable...
>>
>> So, what's their solution? You have to reinvent the wheel all the time?
>
> Use interfaces instead.

Sounds like a stupid book. I wouldn't trust its advice.

Don't use inheritance unless you're obviously making a subclass of the thing 
you're inheriting from. Don't use polymorphic dispatch unless you intended 
to use polymorphic dispatch.

I think of all the code in my entire project, I have exactly one set of 
routines that inherit from some other routine I wrote, and they're basically 
type-specific widgets for displaying the results of searching over 
collections of various kinds of documents.

I have a bunch of .. well, database connectors, one for each table, that all 
inherit from the general "database connector."  I have a bunch of protobufs 
that all inherit from "generic protobuf".

Otherwise, there's virtually no inheritance in my work code.

I have a game, where I do indeed have some inheritance, but where multiple 
inheritance might very well have worked out better. For each stackable 
screen (say, you have a help screen on top of a file selection screen on top 
of a start game screen on top of a top-level-menu screen), it inherits from 
the appropriate superclass (input-blocking screen, input-blocking screen 
with timeout, etc).  Other than that, again, very general.

IME, libraries often have classes you'd almost never use as is, which you 
can inherit from, and applications generally have very few classes you 
actually inherit from. And for the latter, you almost always know when 
you're writing the base class that you're going to be inheriting from it.

-- 
Darren New, San Diego CA, USA (PST)
   "They're the 1-800-#-GORILA of the telecom business."



From: Darren New
Subject: Re: Learning C#
Date: 29 Sep 2012 13:32:20
Message: <506730a4$1@news.povray.org>
On 9/29/2012 3:27, Orchid Win7 v1 wrote:
> Regardless, C# turns this backwards. In C#, by default you /cannot/ override
> anything, unless you explicitly turn it on by writing "virtual". Meaning
> that if the author of some class didn't foresee the potential need to
> subclass it, you may not be able to override the necessary methods.

In practice, this is generally not a problem. If you're not thinking of code 
at the level of overriding virtual methods, it's usually impossible to 
override a method correctly, unless you're just doing something like adding 
logging that the method got called. (At which point, how do you get your 
custom class into the places where it's supposed to be, if others are 
instantiating the wrong class?)

> Apparently it is "very hard" to design a class in such a way that it can be
> subclassed correctly. So you should only allow this possibility where a
> class has been explicitly designed with the intention of being subclassed
> later. In all other cases, you should disable subclassing, because if you
> didn't design for it, it won't work anyway.

While this is excessive, it's probably good advice for people who haven't 
done a fair amount of work on such projects. Sealing the class is probably a 
bit much, but not declaring things virtual that you don't expect a subclass 
will want to override is not a bad thing.

> All of which strikes me as a little baffling. I mean, I know that
> inheritance is overused and collaboration is underused. But this just seems
> excessive!

It is. But then again, absolute advice given in a beginner's book is usually 
excessive. You develop the good habit, *then* you learn from experience 
where the right place to break the habit is.

> Then again, this is from the same book that explains how to write
> destructors (i.e., "don't") before explaining how to write methods with
> named arguments. The former is an obscure and very dangerous feature which
> you should never use, under any circumstances, ever. The latter is a pretty
> simple and common feature. Really, the order of presentation here seems
> rather strange.

Sounds like he wanted to mention destructors along with constructors, and 
since he had nothing to say about them besides "don't", he had no other good 
place to put them.

-- 
Darren New, San Diego CA, USA (PST)
   "They're the 1-800-#-GORILA of the telecom business."



From: Darren New
Subject: Re: Learning C#
Date: 29 Sep 2012 13:40:06
Message: <50673276$1@news.povray.org>
On 9/29/2012 3:18, Orchid Win7 v1 wrote:
> This, it is claimed, can reduce page faults, improve cache performance, and
> so forth.

Reducing the working set is independent of how much code you have. It 
depends on how much code you're working with right at this very moment. The 
author doesn't understand the concept of "working set".

Working set is, roughly, the set of pages you're touching so often that 
the pager can't keep up by swapping them in on demand. If you have a 
300-meg executable spinning in a tight loop 23K in size, you have a 23K 
working set.

> Yes, that's right. Your application is using 250MB of heap space, but the
> executable binary is 4KB smaller, and that's the important thing, right?

The part of the heap you're using is also part of the working set.

>> I have seen several arguments in the past that C++ templates
>> are bad because they increase code size, thus increasing memory usage.

Well, C++'s target platforms include things that C# isn't targeted at. If 
you're trying to fit your code into a credit card terminal, you're going to 
be worried about C++ template code bloat and the features of C#'s execution 
model won't even enter the conversation.

>> In other words, their solution to the "huge" "problem" of increased code
>> size is to increase memory usage ten-fold in order to make the executable
>> binary slightly smaller. Great job.
>
> Yeah, that's about the size of it.

Nah. I'm pretty sure C# generics use the same code for each reference type. 
I.e., you might have a version for int, a version for float, and one version 
for anything descended from Object.

The reason C++ makes more code is that it actually does stuff like inline 
the right calls, doing type-specific generic expansions. C# doesn't do that, 
so I think you get far fewer versions of the code. Especially for generics 
that are anchored, i.e., that are generic over some specific superclass.
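
Something like this toy example, if I understand the CLR right (my 
names, obviously):

public class Box<T>
{
    public T Value;
}

public class Demo
{
    public static void Main()
    {
        // Value types: the JIT emits a separate specialised native
        // body for each instantiation, C++-template style.
        var a = new Box<int>    { Value = 42 };
        var b = new Box<double> { Value = 4.2 };

        // Reference types: every reference is the same size, so all
        // of these share a single compiled body.
        var c = new Box<string> { Value = "hi" };
        var d = new Box<object> { Value = c };

        System.Console.WriteLine("{0} {1} {2} {3}",
                                 a.Value, b.Value, c.Value, d.Value);
    }
}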

-- 
Darren New, San Diego CA, USA (PST)
   "They're the 1-800-#-GORILA of the telecom business."



From: Warp
Subject: Re: Learning C#
Date: 29 Sep 2012 15:35:48
Message: <50674d94@news.povray.org>
Orchid Win7 v1 <voi### [at] devnull> wrote:
> Then again, this is from the same book that explains how to write 
> destructors (i.e., "don't")

If C# is anything like Java in this regard, then destructors are pretty
much useless (because when you need destructors for something other than
freeing memory, which isn't necessary in C#, you usually need the
destructor to be called in a deterministic manner at a specific point in
the program, not at some undetermined time in the future, if ever.)

In C++ destructors are useful for more than just freeing memory. For
example they are commonly used in things like locks: if a function needs
to use a lock, it can make a local lock object which releases the lock
in its destructor, and the lock is guaranteed to be released immediately
when the function is exited, regardless of how it's exited (i.e. no
matter where the 'return' statement is, or even if it's exited by
throwing an exception). It's handy because you don't need any special
code to make sure that the destructor is called when the function is exited.

-- 
                                                          - Warp



From: Warp
Subject: Re: Learning C#
Date: 29 Sep 2012 15:44:38
Message: <50674fa6@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Well, C++'s target platforms include things that C# isn't targeted at. If 
> you're trying to fit your code into a credit card terminal, you're going to 
> be worried about C++ template code bloat and the features of C#'s execution 
> model won't even enter the conversation.

"Template code bloat" is a myth with little to no actual evidence.

If you need the same code to handle more than one type, you have to do that
somehow. Either you duplicate the code for each type, or you use runtime
polymorphism, or alternatively you use some really contrived code full of
conditionals that checks the actual type of the data when it tries to use it.

In most cases option #1 is most probably going to take *less* memory and
be a lot more efficient than either of the other options. That's because
with option #1 the compiler can optimize the code for those types while
with the other options it can't. In fact, your code might become *smaller*
when it's optimized for each type rather than larger.

Even if it doesn't become smaller, the increase in memory usage will most
probably be less than with the other solutions. If even that's too much,
then you shouldn't be using several types in the first place, but then it's
not the templates' fault, it's your own fault for wanting to use several
types on a machine with 1 kilobyte of RAM.

> The reason C++ makes more code is that it actually does stuff like inline 
> the right calls, doing type-specific generic expansions.

Inlining can actually *reduce* the size of the binary in some cases (because
of the subsequent compile-time optimizations).

-- 
                                                          - Warp



From: Orchid Win7 v1
Subject: Re: Learning C#
Date: 29 Sep 2012 17:17:14
Message: <5067655a$1@news.povray.org>
On 29/09/2012 08:35 PM, Warp wrote:
> Orchid Win7 v1<voi### [at] devnull>  wrote:
>> Then again, this is from the same book that explains how to write
>> destructors (i.e., "don't")
>
> If C# is anything like Java in this regard, then destructors are pretty
> much useless

That was pretty much it, yes.

In particular, any objects that "this" points to might already have been 
destroyed, so you can't touch those. You also have to make sure you do 
/not/ cause any dead objects to become alive again by adding pointers to 
them. Destructors are also run in a separate thread, so you have to be 
careful about thread-safety... In short, writing a correct destructor is 
a nightmare.
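
In code terms (a made-up class, just to show where the constraints 
bite):

using System;

public class Connection
{
    private IntPtr handle; // some unmanaged OS handle

    // C#'s destructor/finalizer syntax. All the caveats above apply:
    ~Connection()
    {
        // Runs on the GC's finalizer thread, not ours, so anything
        // we touch here must be thread-safe.
        //
        // Any managed objects we reference may already have been
        // finalized, so we must not call into them from here.
        //
        // Storing "this" anywhere reachable from here would resurrect
        // the object: legal, but a recipe for disaster.
        ReleaseHandle(handle); // only touch our own unmanaged state
    }

    private static void ReleaseHandle(IntPtr h)
    {
        // close the OS handle; a real class would P/Invoke here
    }
}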

> In C++ destructors are useful for more than just freeing memory. For
> example they are commonly used in things like locks: if a function needs
> to use a lock, it can make a local lock object which releases the lock
> in its destructor, and the lock is guaranteed to be released immediately
> when the function is exited, regardless of how it's exited (i.e. no
> matter where the 'return' statement is, or even if it's exited by
> throwing an exception). It's handy because you don't need any special
> code to make sure that the destructor is called when the function is exited.

Yeah, if destructors behave in a deterministic way, I'd imagine there's 
a few useful things you can do with them.
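
C# gets the determinism a different way, without destructors: the 
"lock" and "using" statements guarantee cleanup on every exit path, 
early returns and exceptions included. A quick sketch (my own example):

using System.IO;

public class Logger
{
    private readonly object gate = new object();

    public void Append(string line)
    {
        // The closest C# analogue to the C++ lock-guard idiom: the
        // monitor is released however this block is exited.
        lock (gate)
        {
            // "using" does the same for any IDisposable: Dispose()
            // runs deterministically, even if WriteLine throws.
            using (var writer = new StreamWriter("log.txt", true))
            {
                writer.WriteLine(line);
            }
        }
    }
}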



From: Orchid Win7 v1
Subject: Re: Learning C#
Date: 29 Sep 2012 17:28:03
Message: <506767e3$1@news.povray.org>
On 29/09/2012 06:40 PM, Darren New wrote:
> On 9/29/2012 3:18, Orchid Win7 v1 wrote:
>> This, it is claimed, can reduce page faults, improve cache
>> performance, and so forth.
>
> The author doesn't understand the concept of "working set".

The author claims that if you never try to print anything, the code for 
doing printing will never be JIT-compiled, thereby resulting in a 
smaller working set.

From what you're saying, if that code is already compiled to machine 
code but just never /runs/, then it won't increase the working set 
anyway. (Unless it causes the running code to span more VM pages.)

>> Yeah, that's about the size of it.
>
> Nah. I'm pretty sure C# generics use the same code for each reference
> type. I.e., you might have a version for int, a version for float, and
> one version for anything descended from Object.

Sure. I meant in the C++ case.

> The reason C++ makes more code is that it actually does stuff like
> inline the right calls, doing type-specific generic expansions. C#
> doesn't do that, so I think you get far fewer versions of the code.
> Especially for generics that are anchored, i.e., that are generic over
> some specific superclass.

And Haskell generates one version of the code, and lets you request 
specific versions if you want them.

Oh, wait...


