POV-Ray : Newsgroups : povray.off-topic : Adventures with WCF : Re: Adventures with WCF
  Re: Adventures with WCF  
From: Orchid Win7 v1
Date: 11 Nov 2013 14:29:03
Message: <52812fff@news.povray.org>
>>> - Everybody and their grandmother knows how to handle HTTP. You don't
>>> need to rewrite a custom interface to handle AndrewTP over port 31337
>>> when you want to interface with the application. (Just dealing with the
>>> XML parser is enough headaches already!)
>>
>> But this is *still* using XML - it's just that it's sending it on top of
>> HTTP rather than over a raw TCP socket.
>
> XML is a document formatting language. Not a transfer protocol. XML does
> not do error handling, retransmits, keepalives, etc... If you use raw
> TCP sockets, you need to reinvent all those wheels.

You send an XML document explaining what method you want to call. The 
server sends back an XML document either saying it went OK, or 
describing why it didn't. Provided you standardise the XML, you don't 
particularly need anything from the transport protocol.
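To make that concrete, here is a minimal sketch of such an envelope in Python. The element names, attributes, and the "ping" command are purely illustrative inventions, not what WCF actually puts on the wire:

```python
import xml.etree.ElementTree as ET

def make_request(method, **params):
    """Build a hypothetical XML command envelope (names are illustrative)."""
    req = ET.Element("request", method=method)
    for name, value in params.items():
        ET.SubElement(req, "param", name=name).text = str(value)
    return ET.tostring(req, encoding="unicode")

def parse_response(xml_text):
    """Return (ok, detail): success plus the result, or failure plus the error."""
    root = ET.fromstring(xml_text)
    if root.get("status") == "ok":
        return True, root.findtext("result")
    return False, root.findtext("error")

print(make_request("ping", count=3))
# → <request method="ping"><param name="count">3</param></request>
print(parse_response('<response status="error"><error>no such method</error></response>'))
# → (False, 'no such method')
```

The point being: success/failure reporting lives entirely inside the standardised XML, so the transport underneath only has to deliver bytes.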

> In an enterprise setting, you need to do application-layer parsing of
> your traffic to ensure that, amongst other things, PII does not escape;
> using HTTP means that a third-party firewall will be able to do deep
> packet inspection out of the box.

How does HTTP have any bearing on this?

> Also, HTTP is very easy to proxy, so your application will work in
> most enterprise settings without having to resort to SOCKS or other
> forms of tunnels.

If you're trying to send commands and replies, the *last* thing you want 
is some proxy caching it for you...

This is the fundamental problem with HTTP. It's designed to transfer 
static documents, not ephemeral commands and replies. Sure, if you bash 
it hard enough, you can just about make it work. But that's like saying 
that you *can* use a vegetable knife as a screwdriver...
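In fairness, HTTP does at least provide a standard way to mark an exchange as uncacheable, so command/reply APIs built on it normally send a `Cache-Control: no-store` header with every message. A sketch of what that looks like on the client side; the URL and payload here are made up for illustration:

```python
from urllib.request import Request

# Sketch: a command POSTed over HTTP, explicitly marked uncacheable so no
# intermediate proxy stores or replays it. URL and body are illustrative.
req = Request(
    "http://example.com/rpc",
    data=b"<request method='ping'/>",
    headers={
        "Cache-Control": "no-store",  # forbid any cache from storing this exchange
        "Content-Type": "text/xml",
    },
    method="POST",
)
print(req.get_method(), req.get_header("Cache-control"))
# → POST no-store
```

(POST responses are not cacheable by default anyway, but stating it explicitly is the defensive habit.)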

>>> How does your client know the difference between the server processing
>>> the request and the server having exploded in a fiery ball of ones and
>>> zeroes?
>>
>> No, this is the *server* not waiting for the *client*.
>>
>
> Well, it works both ways. If your client got abducted by aliens (or more
> likely, bluescreened), how long do you sit there wasting resources?

Until the test runner kills the client and the server.

> What if I just decided to start 65536 sessions on your server and let
> them idle? Can you spell "Denial of service"?

There will only ever be one client.

> You may not need this robustness in your isolated test bed environment,
> but the makers of the WCF framework can't (or shouldn't) assume that no
> one will ever release one of these apps into the wild.

OK, well that's fair enough. But let *me* configure how I actually want 
the thing to work.

[In fairness, WCF *has* the capability to configure it the way I want 
it. It's just that the Mono implementation is broken - it blatantly 
ignores my configuration settings!]

> As I mentioned in my previous message, there should be no relationship
> between the TCP connection and the application session. Your
> application, over the course of a single user session, may need to
> create and tear down multiple TCP sessions with the same server. Or at
> least, this would be the smart way of doing it. You shouldn't have to
> rely on TCP's limited error-handling to survive a network glitch.

I don't see why you would ever need more than one TCP connection - 
unless you're actually trying to issue multiple commands simultaneously 
or something...
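The case for it is robustness rather than concurrency. A sketch of what that decoupling might look like on the client side, in plain Python sockets (nothing WCF-specific; the framing convention here, half-close then read to EOF, is invented for illustration). Each attempt opens a fresh TCP connection, so a dropped connection costs one retry instead of the whole application session:

```python
import socket
import time

def send_command(host, port, payload, retries=3, delay=0.5):
    """Send one command, opening a fresh TCP connection per attempt.

    A network glitch kills only the current connection; the application
    session survives and simply retries. Framing is trivial: the client
    half-closes to mark end-of-request, then reads the reply to EOF.
    """
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.sendall(payload)
                sock.shutdown(socket.SHUT_WR)  # signal end of request
                chunks = []
                while data := sock.recv(4096):
                    chunks.append(data)
                return b"".join(chunks)
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)  # brief back-off before reconnecting
```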

>> It appears to be impossible to tell under WCF when the connection has
>> been broken.
>
> In that case, the framework is rather flimsy.

Indeed.

>> The solution I eventually came up with was to connect, send one command,
>> and then immediately disconnect. So now the program makes a new
>> connection for every individual command. It's inefficient, but what else
>> can I do?
>
> Unless you plan on sending multiple commands in rapid fire, that's how
> it should be. Yes, you sent 6 more TCP packets per "command", but your
> app is now much more tolerant of network glitches and doesn't waste
> resources on either end of the communication channel waiting for a
> client or server that is no longer there.

6 more packets, each with a round-trip delay. A bunch more kernel-mode 
calls on both sides. And the network layer just got a bunch harder to 
debug. It works, but it's hardly an ideal solution.

(Hell, UDP would be better if that's your plan... But WCF doesn't appear 
to support that.)
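(WCF support aside, a bare UDP request/reply really is just this, sketched in Python with an invented address and payload. Note the trade: no handshake or teardown packets, but if a datagram is lost you get a timeout and must retransmit yourself, which is exactly the wheel TCP normally provides.)

```python
import socket

def udp_command(host, port, payload, timeout=2.0):
    """One-datagram request/reply: no handshake, no teardown, no connection
    state on either side. If the request or the reply datagram is lost,
    recvfrom raises socket.timeout and the caller must retransmit."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(payload, (host, port))
        reply, _addr = sock.recvfrom(4096)
        return reply
```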

