POV-Ray : Newsgroups : povray.off-topic : PIPA and SOPA : Re: PIPA and SOPA
  Re: PIPA and SOPA  
From: Invisible
Date: 1 Feb 2012 11:45:18
Message: <4f296c1e$1@news.povray.org>
>>> The big advantage to adding disks virtually instead of physically is
>>> that you are not limited by the form factor of the server, the power
>>> output of the PSUs and most importantly, they can be done on the fly,
>>> without having to take an outage.
>>
>> But you still need to (at a minimum) reboot the server after you change
>> the SAN mapping anyway.
>
> Obviously, Windows may complain if you try to pull the C: drive from
> under its feet, but you shouldn't actually boot from a SAN drive under
> normal circumstances anyway.

Oh, I see. Well, if you're not changing the OS partition, then yes, that 
should work.

>>> In your environment, it may not be that big of an issue, but when you
>>> have contracted service level agreements that you will have 99.99%
>>> uptime, you have no other choice. Do the math, that means 1m42s of
>>> downtime per month... try powering off a server, taking it out of the
>>> rack, adding a disk and powering it back on in less than 10 times that!
>>
>> Uhuh. And when one server physically dies, it's going to be down a tad
>> longer than 1m42s. So if you don't have the spare capacity to handle
>> that, a SAN still hasn't solved your problem.
>
> Need help moving those goalposts? They look heavy! We weren't talking
> about a server dying. We were talking about needing more disk space.

That was not clear to me. You made it sound like "hey, if you have a 
SAN, then when one data centre dies, you can just use the other one!" 
You're going to need more than just a SAN to do that.

I would have thought that needing more disk space is such a crushingly 
rare event that it makes almost no sense to optimise for it. If you have 
to take a server offline once every 5 years to add another disk, that's 
still 99.99% uptime.
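For what it's worth, the arithmetic is easy to check (a quick back-of-the-envelope in Python, my own numbers rather than anything from the thread, ignoring leap years):

```python
# Downtime budget for a given availability target.

SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring leap years

def downtime_budget_seconds(availability: float, period_seconds: int) -> float:
    """Seconds of downtime allowed over a period at the given availability."""
    return period_seconds * (1.0 - availability)

# Four nines (99.99%) over a year:
per_year = downtime_budget_seconds(0.9999, SECONDS_PER_YEAR)
print(f"99.99% over one year allows {per_year / 60:.1f} minutes of downtime")
print(f"...which is {5 * per_year / 3600:.1f} hours over five years")
```

About 52 minutes a year, or roughly 4.4 hours over five years - plenty for one scheduled outage to fit a disk.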

>> This makes no sense at all. How the hell can a 6 Gbit/s SATA link
>> perform the same as a 100 Mbit/sec Ethernet link? Never mind a 10
>> Mbit/sec Internet link. That makes no sense at all. (Unless your actual
>> disks are so feeble that all of them combined deliver less than 10
>> Mbit/sec of data transfer speed...)
>
> You're making a strawman argument. No one ever said that SANs run over
> 10Mbit ethernet.

Neither did I. I said 100 Mbit Ethernet. (It's quoted right there.)

You claimed that it's not insane to run a SAN over the Internet, despite 
the fact that typical Internet speeds are roughly 10 Mbit/sec or slower.
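To put numbers on the comparison (nominal line rates only, ignoring protocol overhead - my own figures, not from the thread):

```python
# Time to move 1 GB over each link type, at nominal line rate.

LINKS_MBIT = {
    "SATA III": 6000,        # 6 Gbit/s
    "Fast Ethernet": 100,    # 100 Mbit/s
    "typical Internet": 10,  # ~10 Mbit/s, the figure above
}

gigabyte_bits = 8 * 1000**3  # 1 GB in bits (decimal units)
for name, mbit in LINKS_MBIT.items():
    seconds = gigabyte_bits / (mbit * 1e6)
    print(f"{name:16s}: {seconds:8.1f} s per GB")
```

That's about 1.3 seconds per GB on SATA versus 80 seconds on Fast Ethernet and 800 seconds - over 13 minutes - on a 10 Mbit link.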

> most SAN implementations run dedicated protocols over
> fibre at Gbps speeds.

It's news to me that such things even exist yet - but perhaps that was 
your point?

Still, it looks like the UK site will soon be in possession of their 
very own SAN, so I guess I'll be able to watch it fail up close and 
personal. o_O

> Four 9s and Five 9s uptime is expensive.

Hell yes.

Actually, I think we can simplify this to "uptime is expensive". I've 
yet to see a method of improving uptime that's cheap.

>> For that, you would need [at least] two geographically remote sites
>> which duplicate everything - disk and other hardware as well. I'm
>> struggling to think of a situation where the volume of data produced per
>> hour is so low that you can actually keep it synchronised over the
>> Internet. And if it isn't in sync, then a failover from primary to
>> secondary system entails data loss.
>
> Not the Internet, multiple dedicated 10Gbps DWDM links. You'd be
> surprised to know that most Fortune 1000 enterprises actually do this
> already.

So what you're saying is that a handful of the richest companies on 
Earth can afford to do this? Yeah, I guess that'll be why I haven't seen 
it before. :-P
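It does explain why the dedicated links make the difference, though. A rough sketch of the replication capacity (nominal rates, my own arithmetic):

```python
# Data volume a replication link can carry per hour, at nominal rate.

def gb_per_hour(link_gbit_per_s: float) -> float:
    """Decimal gigabytes a link can move in one hour."""
    return link_gbit_per_s / 8 * 3600

print(f"10 Mbit/s Internet link: {gb_per_hour(0.010):7.1f} GB/hour")
print(f"10 Gbit/s DWDM link:     {gb_per_hour(10):7.1f} GB/hour")
```

A single 10 Gbit/s link can in principle move about 4.5 TB an hour, versus 4.5 GB an hour over a 10 Mbit Internet connection - a factor of a thousand, which is the gap between "can't keep in sync" and "can".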

>>> That's because you keep forgetting that other people may have other
>>> needs than yours.
>>
>> Perhaps. But saying "we work in the financial industry" doesn't tell me
>> how your needs are different than mine. Really, the only way you'd truly
>> come to understand this is by actually /working/ in that industry. And
>> that's just not possible.
>
> Are you saying it's impossible to work in the banking industry? Or are
> you saying that the banking industry is so secretive about their work
> that you can't even find out how they operate without working there?

I'm saying it's not possible to go work in every random industry just to 
find out what makes it tick.

> Of course, you can also decide that it's not worth it to learn about
> these things and simply shrug it off, but don't complain that you can't
> get jobs outside of the little hole you dug yourself into.

I didn't say it's not possible to get a job, I said it's not possible to 
get insight without getting a job.

