POV-Ray : Newsgroups : povray.off-topic : Adventures with WCF
  Re: Adventures with WCF  
From: Orchid Win7 v1
Date: 14 Nov 2013 14:38:27
Message: <528526b3$1@news.povray.org>
>> TCP already handles splitting a message into packets and reassembling it
>> on the other side.
>
> TCP only ensures that data that is split into multiple packets will be
> reassembled in the right order at the other end. It does not tell you
> ahead of time how much data to expect, so you can't know in advance if
> you should still be waiting for the end of that XML document, or if it
> was malformed and is missing closing tags, and you should simply reject it.

If the XML is machine-generated, it shouldn't ever be malformed. If it 
is, that's a bug. (Which I suppose is something real applications have 
to deal with, but it doesn't concern me overly much.)
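
Besides, framing isn't hard to bolt on top of TCP yourself: prefix each 
message with its length, and the receiver knows exactly how many bytes 
to wait for before handing the document to the XML parser. A minimal 
sketch in Python (the function names are mine, not from any library):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prepend a 4-byte big-endian length header to the payload."""
    return struct.pack(">I", len(payload)) + payload

def unframe(buffer: bytes):
    """If the buffer holds a complete frame, return (payload, remainder);
    otherwise return (None, buffer) and keep reading from the socket."""
    if len(buffer) < 4:
        return None, buffer
    (length,) = struct.unpack(">I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer
    return buffer[4:4 + length], buffer[4 + length:]
```

The receiver just accumulates bytes from `recv()` into a buffer and 
calls `unframe()` until it yields a payload; a truncated document then 
shows up as a frame that never completes, rather than XML that never 
closes.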

>> Of the things you've mentioned, error reporting is
>> the *only* thing TCP doesn't already do - and that's quite easy to deal
>> with using XML. (HTTP lets you report a 3-digit error code. XML lets you
>> report an entire stack trace if you wish.)
>
> Well, duh. XML is a document markup language. You can return a 249 page
> court document if you'd like, however parsing that response might prove
> to be more cumbersome than necessary. There's nothing that prevents you
> from sending that stack dump via HTTP either.

Sure. It's just that HTTP isn't adding any real benefit here.

>> I don't see how parsing XML over HTTP is any easier than parsing XML
>> over raw TCP.
>
> Parsing XML is parsing XML. However, getting the appliance to recognize
> that the data in that TCP flow on a weird port is actually XML is
> harder. It can be done in most boxes made by the big players in the
> field, but it's more work.

How "hard" can it be to look for traffic on a particular port number? 
Sheesh...

>>> I'm not talking about caching, I'm talking about relaying. Most proxies
>>> are indeed caching proxies, but their primary function is to relay
>>> traffic from one network to another network while keeping both networks
>>> isolated.
>>
>> I have no idea why you'd want to do that.
>>
>
> To prevent unauthorized traffic from leaving or entering your network
> (or parts of it). To mask the details of how the back-end of your server
> farm is laid out. There are many valid uses for proxies, apart from
> caching the Google Doodle of the day to speed up the users' connection
> and cut down on bandwidth usage.

Isn't that what a firewall does?

>> I still don't really see why you need any additional services beyond
>> what TCP provides. What do these proprietary protocols get you?
>>
>
> Just like SMTP is a well defined and understood way of exchanging user
> messages between machines, and FTP a well defined way of exchanging
> files between machines. Some people decided to create new protocols that
> allowed them to send messages between applications, route them to
> specific queues, define priority levels, etc... in a well defined manner
> that would allow them to exchange information using a common "language".

OK, well if you need to relay messages between multiple machines or you 
need to actually *queue* them (and presumably receive responses 
asynchronously) and you start needing prioritisation and so forth... 
yes, that becomes a little bit more complex.

Literally *all* I am trying to do here is have one piece of code invoke 
methods in another piece of code, which happens to be on a different 
machine. That's all I actually want. And everybody insists that WCF is 
the only way to achieve that, so...

> Of course, these protocols would be overkill for a project of the size
> of your testbed

Perhaps that's what it comes down to.

The long and short of it is, I managed to configure WCF to *not* use 
HTTP, and it works exactly the same as if it *did* use HTTP. Which, to 
me, rather suggests that using HTTP is a total waste of time.
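
For reference, the switch boils down to pointing the endpoint at 
`netTcpBinding` instead of an HTTP binding. Something like this app.config 
fragment (the service and contract names here are placeholders, not my 
actual code):

```xml
<!-- Hypothetical config: "MyService"/"IMyService" stand in for the real
     service class and contract interface. -->
<system.serviceModel>
  <services>
    <service name="MyService">
      <endpoint address="net.tcp://localhost:8000/MyService"
                binding="netTcpBinding"
                contract="IMyService" />
    </service>
  </services>
</system.serviceModel>
```

Swap the binding back to `basicHttpBinding` and, as far as the calling 
code is concerned, absolutely nothing changes.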

>> You speak as if having a TCP connection open is some terrible overhead.
>> If you're allocating a session ID, I would suggest that storing the
>> details of this session somewhere is already using *far* more resources
>> than a simple TCP connection...
>
> Nope. Denial-of-service attacks target the number of simultaneous
> connections for a reason. Most servers will fork a thread for each open
> TCP connection, whereas a session ID is a few bytes in a table (more,
> obviously, if you have other data associated with that session to track,
> such as items in your shopping cart, for example). Managing all these
> threads on a busy server can become a problem real fast.

Isn't that why *real* servers use thread pools or green threads or some 
similar mechanism which avoids allocating an entire OS thread for every 
single individual TCP connection?

It seems to me that what you're *actually* saying is that having lots of 
OS threads is a lot of overhead. Well, yeah, nothing new there...
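
(The pool idea, sketched in Python with the stdlib ThreadPoolExecutor; 
`handle()` is a stand-in for reading a request off a connection and 
writing a response:)

```python
from concurrent.futures import ThreadPoolExecutor

def handle(conn_id: int) -> str:
    # Stand-in for servicing one connection: parse request, send reply.
    return f"handled {conn_id}"

# A fixed pool of 4 worker threads services 100 "connections" from a
# queue, instead of spawning 100 OS threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, range(100)))
```

The thread count stays bounded no matter how many connections arrive; 
the connections simply queue up waiting for a free worker.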

> *We were on the phone with some pretty interesting "core" programmers by
> the time we reached that conclusion.

Ah yes, the joys of garbage collection...

