  Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...  
From: Darren New
Date: 1 Nov 2009 17:47:46
Message: <4aee1012$1@news.povray.org>
Patrick Elliott wrote:
> In other words, it won't even *ask for* a new 
> fragment, if the fragment would exceed the cap. 

Yes.

> The ones that do 
> throttle for FTP often take an "it's sending me stuff at X kb/s, so I 
> need to ignore Y% of it" approach. You haven't throttled anything, 
> because the server on the other end has no idea *why* you keep 
> requesting the same packets, 

Uh, no. The client just stops reading from the kernel buffers, which fill 
up, so the TCP window doesn't get opened up again, and the other side 
doesn't get more buffer space to send into. When the client then reads 64K, 
your TCP stack opens the window up by 64K, so the remote side sends 64K.

There's no need to request the same packets. Indeed, that would be a rather 
asinine way of trying to limit how much data you're getting.
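
You can even watch this happen. A minimal sketch in Python (hypothetical 
host; run a packet capture alongside and you'll see the advertised window 
shrink toward zero as the buffer fills):

import socket

# Cap the receive buffer - this is what bounds the advertised
# TCP window. (Set it before connect() so it takes effect.)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
s.connect(("ftp.example.com", 21))

# As long as we never call s.recv(), the 64K buffer fills up,
# the window closes, and the sender stalls. No resend requests,
# no ignored packets - just backpressure.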

> and your FTP software has to constantly check that what it's getting 
> doesn't exceed the amount you set, 

Errr, no.  If you set your download to (say) 64K/second, your FTP client 
reads 64K from the socket, then sleeps till the end of 1 second, then reads 
another 64K, then sleeps again, etc. If it gets only 32K, it can either read 
another 32K immediately, or wait for half a second, or whatever. The whole 
TCP rate-limiting backpressure mechanism keeps the sender from sending 
faster than it's being read at the client.
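
In code, that loop is about this complicated (Python sketch; "sock" is 
assumed to be the already-open data connection):

import time

CAP = 64 * 1024  # bytes per second

def throttled_download(sock, out):
    while True:
        start = time.monotonic()
        got = 0
        while got < CAP:
            chunk = sock.recv(CAP - got)
            if not chunk:               # sender closed the connection
                return
            out.write(chunk)
            got += len(chunk)
        # We've read our 64K for this second; sleep out the rest.
        # While we sleep, the kernel buffer fills and TCP's window
        # does the actual throttling of the sender.
        left = 1.0 - (time.monotonic() - start)
        if left > 0:
            time.sleep(left)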

> and simply not send back an "I got it" message 

That's exactly what TCP does, along with a "this is how much buffer space I 
have for you to fill up now." It's just built into the TCP layer rather than 
the application layer.

> This makes for a damned wobbly limiter.

It's the fact that the data is going through a number of layers that 
makes the rate speed up and slow down. It's wobbly with bittorrent too, 
because the requests have to go out significantly before the data comes 
back. Just look at the bandwidth chart on a client that gives you one. 
You're never going to get a constant rate. The best you can do is a 
decent average over a fairly short time once the stream hits a steady 
state.

> lol

You know, I'm constantly amazed at how amusing you find computers to be. :)

> I haven't, myself, ever seen someone that actually limits what *other* 
> applications do use. 

Sure. It's called QoS. Quality of Service. It has been built into IP since 
the first versions. If you're using Windows, open up your network connection 
properties and check "QoS Packet Scheduler" and bingo, you have something 
that limits what applications use.  The FTP client says "I want high 
bandwidth and I don't care about latency" and the IM client says "I want low 
latency and I don't care about bandwidth." And the routers in the middle 
also deal with it.

It's also why you plug your VOIP box in front of your router instead of 
behind it, for example.
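
At the application level, asking for a class of service is one 
setsockopt call - the TOS/DSCP bits in the IP header. A sketch (Python 
on Unix; on Windows the QoS Packet Scheduler does the marking for you):

import socket

# RFC 1349 TOS values: the bulk socket asks for throughput,
# the chat socket asks for low delay.
bulk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
bulk.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x08)  # maximize throughput

chat = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
chat.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)  # minimize delay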

> and the default behavior is, "try to give all clients equal capacity". 

Not quite.

> Existing *standard* protocols do not have mechanisms to specify what 
> amount of bandwidth you have to work with, 

It's not up to the protocol, but the client. Welcome to the Internet. :-)

> nor do they have a way to do "fragment by fragment requests". 

I don't know what this means.

> not limiting bandwidth in an efficient way, but instead relying on the 
> other end to a) respond to requests to resend, b) resend at all.

No. Go read how TCP works. If you don't actually lose the packets in the 
network, you don't have to resend any. There *are* fragment-by-fragment 
requests. It's called the "TCP Window".

> cases, FTP in such situations will just time out, or drop connection, if 
> the server on the other end has issues, or poor settings. 

If the FTP server isn't feeding clients fast enough to keep them from timing 
out, then you're just overloading your FTP server, yah. But if your 
bittorrent server is overloaded, you'll also have clients going elsewhere to 
get the file.

> The simple fact that you don't have to worry about dropped connections, 
> resuming issues, linear downloads, 

Bittorrent is a good download protocol, but HTTP has all that stuff too. The 
advantage of bittorrent is being able to talk to multiple servers.
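
Resuming over HTTP, for instance, is just the Range header (Python 
sketch, hypothetical URL; any server that honors byte ranges will answer 
with 206):

import urllib.request

req = urllib.request.Request("http://example.com/big.iso")
req.add_header("Range", "bytes=1048576-")  # resume from the 1MB mark
with urllib.request.urlopen(req) as resp:
    print(resp.status)                     # 206 = Partial Content
    data = resp.read()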

Anyway, it's not "streaming" that's the problem, but real-time delivery. 
Neither bittorrent nor FTP handles "streaming" even if you have enough 
bandwidth for real-time.

> *or* trying to throttle bandwidth by 
> actually increasing the risk of time outs, failed connections and packet 
> loss, 

Throttling FTP traffic doesn't increase the risk of timeouts, failed 
connections, or packet loss. The only risk is that connections take 
longer to finish, and thus stay open longer, so they tie up resources 
for more time. But that's like saying slow-eating restaurant patrons 
increase the risk of slow food delivery - only in the sense that the 
people outside the restaurant will be waiting longer.

> FTP, no matter how you do it, risks all of the problems, 

Not really.

The only difference is that with bittorrent, the protocol is designed so 
that the only way to do a download is by requesting chunks. In FTP, being 
able to restart a download, or take only the middle of a file, is an 
option.
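
That option is the REST command, which Python's ftplib exposes as the 
"rest" argument (sketch; hypothetical host and file name):

import os
from ftplib import FTP

LOCAL = "big.iso"
# Restart the transfer wherever the last attempt left off.
offset = os.path.getsize(LOCAL) if os.path.exists(LOCAL) else 0

ftp = FTP("ftp.example.com")
ftp.login()                    # anonymous
with open(LOCAL, "ab") as f:
    ftp.retrbinary("RETR " + LOCAL, f.write, rest=offset)
ftp.quit()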

-- 
   Darren New, San Diego CA, USA (PST)
   I ordered stamps from Zazzle that read "Place Stamp Here".

