From: Darren New
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 1 Nov 2009 13:13:23
Message: <4aedcfc3$1@news.povray.org>
Patrick Elliott wrote:
> This is not actually the case. Well, at least with Windows. For some
> wacky reason, Windows handles "multiple" connections faster than a
> single one. Don't ask me why..
TCP windowing, perhaps. Or lots of overhead in the kernel to turn around
an ACK.
> Oh, and that is one thing you can **not** do with FTP. FTP is blind to
> how much bandwidth you use. It will use as much as it can, sans whatever
> other processes are using.
Well, technically, that's the client, not the protocol itself.
The box I'm programming at work throttles back all the HTTP and FTP
connections it's proxying when you pick up the VOIP phone to make a call,
then lets them run full speed again when you hang up.
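Roughly how that kind of per-connection throttle can work, as a minimal
token-bucket sketch in Python (not our actual code; all names invented):

    import time

    class TokenBucket:
        """Allow rate_bytes per second, with bursts up to capacity bytes."""
        def __init__(self, rate_bytes, capacity):
            self.rate = rate_bytes
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def consume(self, nbytes):
            """Block until nbytes of budget is available, then spend it.

            Assumes nbytes <= capacity, or this would wait forever."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # While a call is up, the proxy relays each chunk through the bucket:
    #     bucket = TokenBucket(rate_bytes=32 * 1024, capacity=64 * 1024)
    #     bucket.consume(len(chunk))
    #     downstream.sendall(chunk)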
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
From: Patrick Elliott
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 1 Nov 2009 15:24:47
Message: <4aedee8f$1@news.povray.org>
Darren New wrote:
> Patrick Elliott wrote:
>> This is not actually the case. Well, at least with Windows. For some
>> wacky reason, Windows handles "multiple" connections faster than a
>> single one. Don't ask me why..
>
> TCP Windowing perhaps. Lots of overhead in the kernel to turn around an
> ACK perhaps.
>
>> Oh, and that is one thing you can **not** do with FTP. FTP is blind to
>> how much bandwidth you use. It will use as much as it can, sans whatever
>> other processes are using.
>
> Well, technically, that's the client, not the protocol itself.
>
> The box I am programming for work, when you pick up the VOIP phone to
> make a call, it throttles back all the HTTP and FTP connections it's
> proxying, and then lets them go full speed again when you hang up.
>
Well, yeah. There are some download managers that can throttle as well, but
bittorrent, I think, works by tracking how much is coming in and going out,
and limiting it directly. In other words, it won't even *ask for* a new
fragment if the fragment would exceed the cap. The ones that do throttle
for FTP often take an "it's sending me stuff at X KB/s, so I need to ignore
Y% of it" approach. You haven't throttled anything, because the server on
the other end has no idea *why* you keep requesting the same packets it
already sent, just that they didn't get there, and your FTP software has to
constantly check that what it's getting doesn't exceed the amount you set,
and simply not send back an "I got it" message when it wants to slow things
down. This makes for a damned wobbly limiter. lol
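Something like this, as a crude Python sketch of the request-side cap (the
peer object and its request() call are invented for illustration):

    import time

    PIECE_SIZE = 16 * 1024           # bytes per requested fragment
    CAP = 100 * 1024                 # allowed download bytes per second

    def request_pieces(peer, pieces_needed):
        """Only ask for a fragment when this second's budget can cover it."""
        window_start = time.monotonic()
        spent = 0
        for piece in pieces_needed:
            if spent + PIECE_SIZE > CAP:
                # Budget used up: wait for the next one-second window
                # instead of requesting data we'd have to throw away.
                time.sleep(max(0.0, window_start + 1.0 - time.monotonic()))
                window_start = time.monotonic()
                spent = 0
            peer.request(piece)      # hypothetical peer API
            spent += PIECE_SIZE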
I haven't, myself, ever seen something that actually limits what *other*
applications use. Depending on whether it installs as a service, I suppose
there may be some way to manage that. Usually, though, you can't set
bandwidth priority, and the default behavior is "try to give all clients
equal capacity". Mind, the problem with an application that *can* throttle
everything else back is that you run into the same issue. Existing
*standard* protocols do not have mechanisms to specify what amount of
bandwidth you have to work with, nor do they have a way to do "fragment by
fragment requests". This creates the same jittery result as a client-based
throttle, though maybe not *quite* as bad, since you are not limiting
bandwidth in an efficient way, but instead relying on the other end to a)
respond to requests to resend, b) resend at all. In some cases, FTP in such
situations will just time out, or drop the connection, if the server on the
other end has issues, or poor settings. Having one client "grab" all the
bandwidth means you risk increasing the odds of this kind of failure.
This is why, imho, for "data transfer" that doesn't require streaming, we
need to move to something like bittorrent for *all* of it. The simple fact
that you don't have to worry about dropped connections, resuming issues,
linear downloads, *or* trying to throttle bandwidth by actually increasing
the risk of time outs, failed connections and packet loss, all makes it
vastly superior, even if it didn't have the p2p aspect. FTP, no matter how
you do it, risks all of the problems, and makes them worse if something,
intentionally or not, is using your bandwidth.
--
void main () {
    If Schrödingers_cat is alive or version > 98 {
        if version = "Vista" {
            call slow_by_half();
            call DRM_everything();
        }
        call functional_code();
    }
    else
        call crash_windows();
}
Get 3D Models, 3D Content, and 3D Software at DAZ3D!
<http://www.daz3d.com/index.php?refid=16130551>
From: Darren New
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 1 Nov 2009 17:47:46
Message: <4aee1012$1@news.povray.org>
Patrick Elliott wrote:
> In other words, it won't even *ask for* a new fragment if the fragment
> would exceed the cap.
Yes.
> The ones that do throttle for FTP often take an "it's sending me stuff
> at X KB/s, so I need to ignore Y% of it" approach. You haven't throttled
> anything, because the server on the other end has no idea *why* you keep
> requesting the same packets,
Uh, no. The client just stops reading from the kernel buffers, which fill
up, so the TCP window doesn't get opened up again, and the other side
doesn't get more buffer space to send into. When the client then reads 64K,
your TCP stack opens the window up by 64K, so the remote side sends 64K.
There's no need to request the same packets. Indeed, that would be a rather
asinine way of trying to limit how much data you're getting.
> and your FTP software has to constantly check that what it's getting
> doesn't exceed the amount you set,
Errr, no. If you set your download to (say) 64K/second, your FTP client
reads 64K from the socket, then sleeps till the end of 1 second, then reads
another 64K, then sleeps again, etc. If it gets only 32K, it can either read
another 32K immediately, or wait for half a second, or whatever. The whole
TCP rate-limiting backpressure mechanism keeps the sender from sending
faster than it's being read at the client.
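A sketch of that read-then-sleep loop in Python (illustrative only, not any
particular client's code):

    import socket
    import time

    def rate_limited_read(sock, rate=64 * 1024):
        """Read at most `rate` bytes per second from a connected socket.

        While this loop sleeps, the kernel's receive buffer fills up and
        the advertised TCP window closes, so the sender slows down on its
        own. Nothing is re-requested."""
        while True:
            deadline = time.monotonic() + 1.0
            got = 0
            while got < rate:
                chunk = sock.recv(min(65536, rate - got))
                if not chunk:
                    return           # connection closed
                got += len(chunk)
                yield chunk
            time.sleep(max(0.0, deadline - time.monotonic()))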
> and simply not send back an "I got it" message
That's exactly what TCP does, along with a "this is how much buffer space I
have for you to fill up now." It's just built into the TCP layer rather than
the application layer.
> This makes for a damned wobbly limiter.
It's the fact that it's going through a number of layers that makes it
faster or slower. It's wobbly with bittorrent too, because the requests
have to go out significantly before the data comes back. Just look at the
bandwidth chart on a client that gives you one. You're never going to get
a constant rate. The best you can do is a decent average over a fairly
short time once the stream hits a steady state.
> lol
You know, I'm constantly amazed at how amusing you find computers to be. :)
> I haven't, myself, ever seen something that actually limits what *other*
> applications use.
Sure. It's called QoS. Quality of Service. It has been built into IP since
the first versions. If you're using Windows, open up your network connection
properties and check "QoS Packet Scheduler" and bingo, you have something
that limits what applications use. The FTP client says "I want high
bandwidth and I don't care about latency" and the IM client says "I want low
latency and I don't care about bandwidth." And the routers in the middle
also deal with it.
It's also why you plug your VOIP box in front of your router instead of
behind it, for example.
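An application asks for that kind of treatment with a single socket option;
a Python sketch (DSCP 46 is just an example value, and whether the OS and
the routers along the path actually honor it varies):

    import socket

    # Mark this socket's traffic as "expedited forwarding" (DSCP 46), the
    # class typically used for VOIP. The DSCP value sits in the top six
    # bits of the old IP TOS byte, hence the shift.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)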
> and the default behavior is "try to give all clients equal capacity".
Not quite.
> Existing *standard* protocols do not have mechanisms to specify what
> amount of bandwidth you have to work with,
It's not up to the protocol, but the client. Welcome to the Internet. :-)
> nor do they have a way to do "fragment by fragment requests".
I don't know what this means.
> not limiting bandwidth in an efficient way, but instead relying on the
> other end to a) respond to requests to resend, b) resend at all.
No. Go read how TCP works. If you don't actually lose the packets in the
network, you don't have to resend any. There *are* fragment-by-fragment
requests. It's called the "TCP Window".
> cases, FTP in such situations will just time out, or drop the connection,
> if the server on the other end has issues, or poor settings.
If the FTP server isn't feeding clients fast enough to keep them from timing
out, then you're just overloading your FTP server, yah. But if your
bittorrent server is overloaded, you'll also have clients going elsewhere to
get the file.
> The simple fact that you don't have to worry about dropped connections,
> resuming issues, linear downloads,
Bittorrent is a good download protocol, but HTTP has all that stuff too. The
advantage of bittorrent is being able to talk to multiple servers.
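HTTP resume, for instance, is just a Range header; a quick sketch with
Python's urllib (URL and file name are made up):

    import os
    import urllib.request

    def resume_download(url, path):
        """Continue a partial download with an HTTP Range request."""
        offset = os.path.getsize(path) if os.path.exists(path) else 0
        req = urllib.request.Request(url,
                                     headers={"Range": "bytes=%d-" % offset})
        # A server that supports ranges answers "206 Partial Content" and
        # sends only the missing tail of the file.
        with urllib.request.urlopen(req) as resp, open(path, "ab") as out:
            while True:
                chunk = resp.read(65536)
                if not chunk:
                    break
                out.write(chunk)

    # resume_download("http://example.com/big.iso", "big.iso")  # hypothetical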
Anyway, it's not "streaming" that's the problem, but real-time delivery.
Neither bittorrent nor FTP handles "streaming", even if you have enough
bandwidth for real-time.
> *or* trying to throttle bandwidth by
> actually increasing the risk of time outs, failed connections and packet
> loss,
Throttling FTP traffic doesn't increase the risk of timeouts, failed
connections, or packet loss. The only risk is that connections take longer
to finish, and thus stay open longer, so they might tie up more resources.
But that's like saying slow-eating restaurant patrons increase the risk of
slow food delivery - only in the sense that the people outside the
restaurant will be waiting longer.
> FTP, no matter how you do it, risks all of the problems,
Not really.
The only difference is that with bittorrent, the protocol is designed so
that the only way to do a download is by requesting chunks. In FTP, being
able to restart a download, or take only the middle of a file, is an
option.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
From: Warp
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 2 Nov 2009 11:49:54
Message: <4aef0db1@news.povray.org>
clipka <ano### [at] anonymous org> wrote:
> Warp wrote:
> > Why are you so paranoid about the bittorrent protocol?
> Am I?
> If I'd /need/ it and still refused to use it, /that/ would be paranoid.
> I don't think there's much paranoia in keeping the number of installed
> programs low by avoiding installing software I don't normally need.
> > Even if downloading via bittorrent wouldn't give you any speed advantage,
> > it's not like you would lose anything by using it. If a large file is
> > available primarily through bittorrent, then why not? I see no reason to
> > avoid it on principle. Just download it.
> So I guess you're installing each and every piece of software in the
> world that doesn't make you lose anything, just because other people say
> you should because they're all enthusiastic about it?
Right. Keep up inventing excuses why you won't switch to newer, proven
technology.
--
- Warp
From: Patrick Elliott
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 2 Nov 2009 14:09:57
Message: <4aef2e85$1@news.povray.org>
Darren New wrote:
>> FTP, no matter how you do it, risks all of the problems,
>
> Not really.
>
> The only difference is that with bittorrent, the protocol is designed so
> that the only way to do a download is by requesting chunks. In FTP, being
> able to restart a download, or take only the middle of a file, is an
> option.
>
Going to shorten this and just say that what you describe is interesting,
but it's not what you "see" as a layman actually having to deal with the
result. I couldn't care less, for example, if HTTP has a mess of things in
it; it **still** can't deal with a server that runs into a serious enough
problem that it needs to restart. As for FTP resumes, those only work if
the damn server supports them. Ironically, half the "big" companies on the
net disable it, so you end up starting at the "first" block of the file
every time you resume. Restart on those is "literally" restart. And I won't
even get into the numerous idiot things they do on web sites, intended to
do everything from making download managers harder to use to verifying who
is downloading, which can do anything from making "all" downloads from the
site impossible, if you are not using "generic" in-browser support with no
plugins at all, to refusing you because the "client" making the DL request
doesn't match the client that initiated the link to download it. Mind, all
that latter stuff is pure nonsense given the web design, and there is no
"consistent" way to prevent a) multiple connections via FTP, when that is
the intent of all the hoops you jump through, or b) whatever else they are
trying to do.
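When the server does allow it, FTP resume is just the REST command under
the hood; a minimal sketch with Python's ftplib (host and file names are
made up):

    import os
    from ftplib import FTP

    def resume_ftp(host, remote_name, local_path):
        """Resume a download via FTP's REST command, if the server permits."""
        offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
        ftp = FTP(host)
        ftp.login()                  # anonymous login
        with open(local_path, "ab") as out:
            # rest=offset issues "REST <offset>" before the RETR; servers
            # that disable resume reject it, and you start from block one.
            ftp.retrbinary("RETR " + remote_name, out.write, rest=offset)
        ftp.quit()

    # resume_ftp("ftp.example.org", "big.iso", "big.iso")  # hypothetical names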
All I know is, FTP is the least reliable way I know of without a download
manager that is *far* more robust than the one in IE, or Firefox, or *any*
other browser I have ever used, and HTTP isn't much better in "some" cases.
:( We need something where an overloaded server doesn't mean a) having to
come back and retry 3 hours later, when you might be at work (or every day,
in hopes the load drops), or b) your 5 hour (or worse, 5 day, if you have a
slow connection) download failing 50 minutes in, every single time, FTP/HTTP
or whatever, with the server refusing to resume where it left off. Oh, and
the one **big** issue imho: if you have glitchy wiring, or other issues,
FTP/HTTP can land you "bad" files, and short of running a hash on it, which
only some sites even provide (as a separate download), you have no damn way
of knowing whether the 4GB ISO you downloaded, or the 6GB of game files, or
whatever, "got" to your machine intact. Nothing like trying to download the
same installer 10 times and having it fail with "This executable is either
corrupt or the wrong format" every single time, each attempt an hour-long
download. What little error correction exists in FTP and HTTP is at the
packet level, and it simply *doesn't work* well enough to be certain the
file will arrive intact, when it seems to work at all.
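Checking that yourself is easy enough when the site does publish a hash; a
short Python sketch (file names invented):

    import hashlib

    def sha256_of(path, bufsize=1 << 20):
        """Hash a file in chunks, so a 4GB ISO doesn't need 4GB of RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while True:
                block = f.read(bufsize)
                if not block:
                    break
                h.update(block)
        return h.hexdigest()

    # Compare against the checksum the mirror publishes, when it does:
    #     if sha256_of("debian.iso") != expected_hex:
    #         print("download is corrupt; grab it again")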
Like I said, unless you need a streaming protocol, there are "huge" issues
with FTP *and* HTTP when it comes to having them be both reliable and
certain to complete. That the torrent might go looking some place else if
the main source fails is beside the point. It *can*. The closest FTP/HTTP
comes to that *at all* is something like GetRight's "search known download
sites for a link to the same named file", which is worthless if it's not
*actually* the same file, or version.
--
void main () {
    If Schrödingers_cat is alive or version > 98 {
        if version = "Vista" {
            call slow_by_half();
            call DRM_everything();
        }
        call functional_code();
    }
    else
        call crash_windows();
}
Get 3D Models, 3D Content, and 3D Software at DAZ3D!
<http://www.daz3d.com/index.php?refid=16130551>
From: clipka
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 2 Nov 2009 14:58:33
Message: <4aef39e9@news.povray.org>
Warp wrote:
> Right. Keep up inventing excuses why you won't switch to newer, proven
> technology.
You're talking like not switching would be an act of utmost evil. Is it?
Will the world end in 2012 if I don't switch?
From: Warp
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 2 Nov 2009 15:24:39
Message: <4aef4007@news.povray.org>
clipka <ano### [at] anonymous org> wrote:
> Warp wrote:
> > Right. Keep up inventing excuses why you won't switch to newer, proven
> > technology.
> You're talking like not switching would be an act of utmost evil. Is it?
> Will the world end in 2012 if I don't switch?
Most likely, yes.
--
- Warp
From: Nicolas Alvarez
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 11 Nov 2009 11:24:11
Message: <4afae52b$1@news.povray.org>
Orchid XP v8 wrote:
>> Of all the versions of Linux I've used, OpenSuSE has been most friendly
>> in terms of admin. (Even better than Ubuntu in my experience.)
>
> I read somewhere that "ubuntu" is an ancient African word meaning "I
> can't get Debian to work"...
It was probably in the email signature of a Debian developer. I've seen many
of those :)
From: Nicolas Alvarez
Subject: Re: Random griping about Debian "Lenny", Gnome gdb and XDMCP...
Date: 11 Nov 2009 11:26:45
Message: <4afae5c5$1@news.povray.org>
Orchid XP v8 wrote:
> Neeum Zawan wrote:
>
>> If your Internet connection is slow, then perhaps it makes no
>> difference. On some servers, downloading an ISO is as slow as 100 KB/s.
>> Using Bittorrent, I can max out my connection (over a MB/s).
>
> Well, in theory BT makes it easier to get good download rates. In
> reality, using BT is like downloading anything else; sometimes you get
> good speeds, and sometimes you don't. It depends what you're trying to
> download. (E.g., try obtaining some old version of Debian. You'll find
> it has, like, 3 seeds and no other clients, and it still takes weeks to
> download. Try getting the latest version and you should have no
> trouble.) It's not much different from the variable download rates
> single servers provide.
But you're being nicer to the mirror providers, because you're not helping
overload a single server.