Darren New wrote:
> Patrick Elliott wrote:
>> This is not actually the case. Well, at least with Windows. For some
>> wacky reason, Windows handles "multiple" connections faster than a
>> single one. Don't ask me why..
>
> TCP Windowing perhaps. Lots of overhead in the kernel to turn around an
> ACK perhaps.
>
>> Oh, and that is one thing you can **not** do with FTP. FTP is blind to
>> how much bandwidth you use. It will use as much as it can, sans what
>> ever other processes are using.
>
> Well, technically, that's the client, not the protocol itself.
>
> The box I am programming for work, when you pick up the VOIP phone to
> make a call, it throttles back all the HTTP and FTP connections it's
> proxying, and then lets them go full speed again when you hang up.
>
Well, yeah. There are some download managers that you can throttle as
well, but BitTorrent, I think, works by knowing how much is coming in
and going out and limiting it directly. In other words, it won't even
*ask for* a new fragment if that fragment would push it over the cap.
The ones that do throttle FTP often take an "it's sending me stuff at
X kb/s, so I need to ignore Y% of it" approach. You haven't really
throttled anything, because the server on the other end has no idea
*why* you keep requesting the same packets it already sent, just that
they didn't get there, and your FTP software has to constantly check
that what it's getting doesn't exceed the amount you set, and simply
not send back an "I got it" message when it wants to slow things down.
This makes for a damned wobbly limiter. lol
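Roughly, the BitTorrent-style approach comes down to something like the
sketch below (made-up names, not any real client's code): keep a byte
budget that refills at the cap rate, and only ask for the next piece
when the budget covers it.

    import time

    # Sketch of "don't even ask" throttling: a byte budget that refills
    # at the configured cap, checked *before* requesting the next piece.
    # PieceRequestGate and friends are invented for illustration.
    class PieceRequestGate:
        def __init__(self, cap_bytes_per_sec, piece_size):
            self.cap = cap_bytes_per_sec
            self.piece_size = piece_size
            self.budget = cap_bytes_per_sec      # start with one second's worth
            self.last = time.monotonic()

        def may_request_piece(self):
            now = time.monotonic()
            # refill for the time that has passed, never above one second's cap
            self.budget = min(self.cap, self.budget + (now - self.last) * self.cap)
            self.last = now
            if self.budget >= self.piece_size:
                self.budget -= self.piece_size   # spend the bytes before asking
                return True
            return False                         # over the cap: just don't ask yet

The point being that the limit gets enforced *before* anything goes on
the wire, instead of by throwing away data that already made the trip.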
I haven't, myself, ever seen anything that actually limits what *other*
applications use. Depending on whether it installs as a service, I
suppose there may be some way to manage that. Usually, though, you
can't set bandwidth priority, and the default behavior is "try to give
all clients equal capacity". Mind, the problem with an application that
*can* throttle everything else back is that you run into the same
issue. Existing *standard* protocols have no mechanism for telling the
other end how much bandwidth you have to work with, nor do they have a
way to do fragment-by-fragment requests. This creates the same jittery
result as a client-based throttle, though maybe not *quite* as bad,
since you are not limiting bandwidth in an efficient way, but instead
relying on the other end to a) respond to requests to resend, and b)
resend at all. In some cases FTP will just time out, or drop the
connection, if the server on the other end has issues or poor settings.
Having one client "grab" all the bandwidth just increases the odds of
that kind of failure.
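For illustration, about the best a client-side throttle can do, whatever
the protocol, is pace how fast it reads and let the connection back up
behind it. A rough sketch (hypothetical helper, plain sockets rather
than any real FTP client):

    import time

    # Rough sketch of a client-side throttle: pull data off the socket no
    # faster than max_bps and let everything behind it back up. Hypothetical
    # helper, not tied to any real FTP library.
    def throttled_download(sock, out_file, max_bps, chunk=8192):
        start = time.monotonic()
        received = 0
        while True:
            data = sock.recv(chunk)
            if not data:
                break                            # sender closed the connection
            out_file.write(data)
            received += len(data)
            # if we're ahead of the cap, sleep until the average rate drops back
            ahead = received / max_bps - (time.monotonic() - start)
            if ahead > 0:
                time.sleep(ahead)

The client never tells the server what rate it wants; it just stalls,
which is part of why the result comes out bursty instead of smooth.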
This is, imho, why I think that for "data transfer" which doesn't
require streaming, we need to move to something like BitTorrent for
*all* of it. The simple fact that you don't have to worry about dropped
connections, resuming issues, linear downloads, *or* throttling
bandwidth in a way that actually increases the risk of timeouts, failed
connections and packet loss makes it vastly superior, even if it didn't
have the p2p aspect. FTP, no matter how you do it, risks all of these
problems, and makes them worse if something, intentionally or not, is
already using your bandwidth.
--
void main () {
    If Schrödingers_cat is alive or version > 98 {
        if version = "Vista" {
            call slow_by_half();
            call DRM_everything();
        }
        call functional_code();
    }
    else
        call crash_windows();
}
<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>