Subject: Re: Adventures with WCF
From: Francois Labreque
Date: 12 Nov 2013 22:47:57
Message: <5282f66d$1@news.povray.org>

> On 12/11/2013 02:01 PM, Francois Labreque wrote:

>> How do you tell your server (or client) that there should be more data
>> coming?
>>
>> How do you handle errors?
>>
>> The answer is to reinvent XML wheels that already exist in HTTP.
>
> TCP already handles splitting a message into packets and reassembling it
> on the other side.

TCP only ensures that data split across multiple packets will be 
reassembled in the right order at the other end.  It does not tell you 
ahead of time how much data to expect, so you can't know whether you 
should still be waiting for the rest of that XML document, or whether it 
was malformed, is missing its closing tags, and should simply be rejected.
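
To make that concrete, here is a minimal sketch (in Python, with made-up 
function names) of the framing you end up reinventing over raw TCP: a 
length prefix doing the job of HTTP's Content-Length header.

import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    # Prefix the payload with its length so the receiver knows how many
    # bytes to expect -- the job Content-Length does in HTTP.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("peer closed before the message was complete")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)

def recv_message(sock: socket.socket) -> bytes:
    # Read the 4-byte length prefix first, then keep reading until the
    # whole document has arrived.
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)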

> Of the things you've mentioned, error reporting is
> the *only* thing TCP doesn't already do - and that's quite easy to deal
> with using XML. (HTTP lets you report a 3-digit error code. XML lets you
> report an entire stack trace if you wish.)
>

Well, duh.  XML is a document markup language.  You can return a 249-page 
court document if you'd like, but parsing that response might prove 
more cumbersome than necessary.  Nothing prevents you from sending that 
stack trace via HTTP either.  In fact, many applications do.  Having a 
200 or a 503 at the start simply makes it easier for the application to 
know whether the call succeeded or failed.

The short error codes used by SMTP, FTP, HTTP, et al. are for efficiency.
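
For illustration, here is roughly what that buys the client (Python, using 
the requests library against a purely hypothetical endpoint): one integer 
comparison tells you whether the call worked, before you ever try to parse 
the body.

import requests

xml_payload = "<reservation><hotel>123</hotel></reservation>"  # toy body

# The URL below is made up, purely for illustration.
resp = requests.post("https://example.com/api/reserve",
                     data=xml_payload,
                     headers={"Content-Type": "application/xml"})

if resp.status_code == 200:
    print("success, body is", len(resp.content), "bytes of XML")
elif resp.status_code == 503:
    print("server overloaded, try again later")
else:
    print("failed with status", resp.status_code)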

>>>> In an enterprise setting, you need to do application-layer parsing of
>>>> your traffic to ensure that, amongst other things, PII does not escape,
>>>> so using HTTP means that a third-party firewall will be able to do deep
>>>> packet inspection out of the box.
>>>
>>> How does HTTP have any bearing on this?
>>
>> Everyone knows how HTTP is defined, so it's easier to find the relevant
>> data in an HTTP stream, than in a proprietary protocol.
>
> I don't see how parsing XML over HTTP is any easier than parsing XML
> over raw TCP.
>

Parsing XML is parsing XML.  However, getting the appliance to recognize 
that the data in that TCP flow on a weird port is actually XML is 
harder.  It can be done in most boxes made by the big players in the 
field, but it's more work.

>>>> Also, HTTP is very easy to proxy, so your application will work in
>>>> most enterprise settings without having to resort to SOCKS or other
>>>> forms of tunnels.
>>>
>>> If you're trying to send commands and replies, the *last* thing you want
>>> is some proxy caching it for you...
>>
>> I'm not talking about caching, I'm talking about relaying. Most proxies
>> are indeed caching proxies, but their primary function is to relay
>> traffic from one network to another while keeping the two networks
>> isolated.
>
> I have no idea why you'd want to do that.
>

To prevent unauthorized traffic from leaving or entering your network 
(or parts of it).  To mask the details of how the back-end of your 
server farm is laid out.  There are many valid uses for proxies, apart 
from caching the Google Doodle of the day to speed up the users' 
connection and cut down on bandwidth usage.
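
From the client's point of view, relaying through a proxy looks something 
like this (Python requests; the proxy address is of course invented).  The 
client only ever talks to the proxy, and the proxy opens its own connection 
to the destination, which is what keeps the two networks separated:

import requests

proxies = {
    "http":  "http://proxy.corp.example:8080",
    "https": "http://proxy.corp.example:8080",
}

resp = requests.get("https://api.example.com/inventory", proxies=proxies)
print(resp.status_code)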

>>> This is the fundamental problem with HTTP. It's designed to transfer
>>> static documents, not ephemeral commands and replies.
>>
>> You are correct that there are more specialized messaging protocols out
>> there (most of them proprietary), such as IBM MQSeries or TIBCO
>> Rendezvous, just to name two, that are better suited to handling
>> ephemeral commands than HTTP, but most of the civilized world has no
>> problem whatsoever using HTTP (or, more often, HTTPS) to exchange data
>> between systems and even between companies.
>>
>> Since the makers of WCF do not know ahead of time what you will be doing
>> with their framework, they decided to pick the most widely used transfer
>> protocol to make most of their users' lives less miserable.
>
> I still don't really see why you need any additional services beyond
> what TCP provides. What do these proprietary protocols get you?
>

Just like SMTP is a well-defined and understood way of exchanging user 
messages between machines, and FTP a well-defined way of exchanging 
files between machines, some people decided to create new protocols 
that allow them to send messages between applications, route them to 
specific queues, define priority levels, etc., in a well-defined manner 
that lets them exchange information using a common "language".
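
As a rough sketch of what those protocols give you -- using AMQP through 
the open-source pika library as a stand-in for the proprietary products 
named above, with invented queue names -- publishing a routed, prioritized 
message looks something like this:

import pika

# Connect to a local broker (RabbitMQ speaks AMQP, the way the
# proprietary middleware above speaks its own protocols).
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a named, durable queue that supports message priorities.
channel.queue_declare(queue="inventory.orders",
                      durable=True,
                      arguments={"x-max-priority": 10})

# Publish a message routed to that queue, with a priority level; the
# broker takes care of delivery, persistence and ordering.
channel.basic_publish(
    exchange="",
    routing_key="inventory.orders",
    body=b"<order><sku>12345</sku><qty>10</qty></order>",
    properties=pika.BasicProperties(delivery_mode=2, priority=5),
)

connection.close()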

Of course, these protocols would be overkill for a project of the size 
of your testbed, but in an enterprise where the shop-floor workload 
management application needs to talk to the inventory system and the 
employee scheduler; the inventory system needs to talk to the purchasing 
application and the shipping/receiving management system; the employee 
scheduler needs to talk to the payroll system, which in turn needs to 
talk to the accounting system, which needs to interface with the external 
bank system; and the inventory management can automatically place orders 
with suppliers, you need something that can efficiently route these 
messages between all these systems.

>>> I don't see why you would ever need more than one TCP connection -
>>> unless you're actually trying to issue multiple commands simultaneously
>>> or something...
>>>
>>
>> In your particular case, maybe, but you are using a general-purpose
>> framework that is built for general use.
>>
>> Let's say a travel agent is discussing various possible plane/hotel
>> combinations with a client. She shouldn't "hog" a TCP socket open for
>> minutes while the client makes up his mind, nor should she have to
>> re-enter her credentials every time she checks the availability at
>> another hotel, or makes the final reservation.
>>
>> In the background, the application would open one session and issue a
>> session ID, after having verified the user's credentials. Then after
>> that, whenever the travel agent makes a query through her reservation
>> system, the client-side application would open a new TCP session,
>> include the session ID inside the payload and send one (or more)
>> commands to the server. The server would send the reply and the TCP
>> connection would then close to leave resources available for the next
>> agent.
>
> You speak as if having a TCP connection open is some terrible overhead.
> If you're allocating a session ID, I would suggest that storing the
> details of this session somewhere is already using *far* more resources
> than a simple TCP connection...

Nope.  Denial-of-service attacks target the number of simultaneous 
connections for a reason.  Most servers will fork a thread for each open 
TCP connection, whereas a session ID is a few bytes in a table (more, 
obviously, if you have other data associated with that session to track, 
such as items in a shopping cart, for example).  Managing all those 
threads on a busy server can become a problem real fast.
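
For what it's worth, the session-ID scheme described above boils down to 
something like this on the server side (a toy in-memory table in Python, 
every name invented): the state lives in a small table, not in a socket 
held open for minutes.

import secrets
import time

sessions = {}  # token -> per-agent state; a few dozen bytes each

def login(username: str, password: str) -> str:
    # (credential check omitted in this sketch)
    token = secrets.token_hex(16)
    sessions[token] = {"user": username, "created": time.time(), "cart": []}
    return token

def handle_request(token: str, command: str) -> str:
    session = sessions.get(token)
    if session is None:
        return "ERROR: unknown or expired session"
    # Process the command; the short-lived TCP connection that carried it
    # can then be torn down without losing any state.
    session["cart"].append(command)
    return "OK"

# Typical flow: each call would arrive on its own short-lived connection.
tok = login("travel_agent_42", "not-a-real-password")
print(handle_request(tok, "check availability: YUL -> CDG, 2013-12-01"))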

Of course, your test bed does not need to worry about that, since you 
don't expect to have more than one client, and you don't expect the 
server to go back to listening for new clients while it is processing 
the request of the first one.

[CSB]
About 10 years ago, we were in a "critsit" because of problems with a 
customer's online banking web site.  Each server in the server farm 
would grind to a halt every 6 hours or so for 5-10 minutes and then pick 
up where it left off.  Of course, users weren't happy, and neither were 
the mainframe guys in the back because of the number of unfinished 
transactions that had to be rolled back.

It turned out that the method used by Sun's HTTP server (formerly 
Netscape Enterprise Server) to handle the high number of TCP connections 
was not to have one thread per connection, but to put each connection's 
state in a table and have a pool of worker threads service the 
connections in turn.  Every once in a while, a "janitor" thread would 
spawn and freeze the connection table while it went down the list to 
send an ACK to each client, wait a few seconds and then close the 
connections that didn't respond.*  While it was doing this, no new 
connections could be established, nor taken down gracefully.  The 
support team kept upping the number of allowable connections and the 
number of worker threads, hoping to get rid of the problem, but that 
only made things worse when the janitor did start.  In the end, they 
figured that having a very small connection table, with the janitor 
thread constantly stopping it for a few seconds, was a much better 
solution than having the server come to a complete stop for minutes at 
a time every few hours.

*We were on the phone with some pretty interesting "core" programmers by 
the time we reached that conclusion.
[/CSB]

-- 
/*Francois Labreque*/#local a=x+y;#local b=x+a;#local c=a+b;#macro P(F//
/*    flabreque    */L)polygon{5,F,F+z,L+z,L,F pigment{rgb 9}}#end union
/*        @        */{P(0,a)P(a,b)P(b,c)P(2*a,2*b)P(2*b,b+c)P(b+c,<2,3>)
/*   gmail.com     */}camera{orthographic location<6,1.25,-6>look_at a }

