Subject: Re: PIPA and SOPA
From: Francois Labreque
Date: 1 Feb 2012 12:53:20
Message: <4f297c10@news.povray.org>
On 2012-02-01 11:45, Invisible wrote:
>>>> The big advantage to adding disks virtually instead of physically is
>>>> that you are not limited by the form factor of the server, the power
>>>> output of the PSUs and most importantly, they can be done on the fly,
>>>> without having to take an outage.
>>>
>>> But you still need to (at a minimum) reboot the server after you change
>>> the SAN mapping anyway.
>>
>> Obviously, Windows may complain if you try to pull the C: drive from
>> under its feet, but you shouldn't actually boot from a SAN drive under
>> normal circumstances anyway.
>
> Oh, I see. Well, if you're not changing the OS partition, then yes, that
> should work.
>
>>>> In your environment, it may not be that big of an issue, but when you
>>>> have contracted service level agreements that you will have 99.99%
>>>> uptime, you have no other choice. Do the math, that means 1m42s of
>>>> downtime per month... try powering off a server, taking it out of the
>>>> rack, adding a disk and powering it back on in less than 10 times that!
>>>
>>> Uhuh. And when one server physically dies, it's going to be down a tad
>>> longer than 1m42s. So if you don't have the spare capacity to handle
>>> that, a SAN still hasn't solved your problem.
>>
>> Need help moving those goalposts? They look heavy! We weren't talking
>> about a server dying. We were talking about needing more disk space.
>
> That was not clear to me. You made it sound like "hey, if you have a
> SAN, then when one data centre dies, you can just use the other one!"
> You're going to need more than just a SAN to do that.
>

No. I said that if you have two SANs in separate data centres, with data 
replicated across both data centres, then it greatly reduces your downtime 
in case of a disaster.  I never claimed that all SANs were designed that 
way, nor that it was the only requirement for instant recovery.

> I would have thought that needing more disk space is such a crushingly
> rare event that it makes almost no sense to optimise for it.

No, it's not.  You may need to run an app in debug mode for a while and 
need extra space to store the dumps.  You may run into seasonal peaks 
and need extra storage just for that period.  Etc.  Sizing hundreds of 
servers for a worst-case scenario is not an efficient use of the 
company's money.  You are much better off having some amount of slack 
space that you can swing around when needed.
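
As a back-of-the-envelope illustration, here's the shape of the trade-off 
(a quick Python sketch; every figure in it is made up):

# Hypothetical capacity-planning comparison: sizing each server for its
# own worst case versus keeping a shared pool of slack space on the SAN.
# All numbers are illustrative, not from any real environment.
SERVERS = 200
BASELINE_GB = 200        # space each server typically needs
WORST_CASE_GB = 500      # peak need (debug dumps, seasonal spike, ...)
PEAK_FRACTION = 0.05     # assume ~5% of servers hit their peak at once

# Option 1: buy worst-case storage for every server, up front.
per_server_sizing = SERVERS * WORST_CASE_GB

# Option 2: buy baseline storage plus a shared slack pool sized for the
# servers that actually peak simultaneously.
shared_pool_sizing = (SERVERS * BASELINE_GB
                      + SERVERS * PEAK_FRACTION * (WORST_CASE_GB - BASELINE_GB))

print(per_server_sizing)   # 100000 GB bought "just in case"
print(shared_pool_sizing)  # 43000 GB for the same level of service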

> If you have
> to take a server offline once every 5 years to add another disk, that's
> still 99.99% uptime.
>

Not if your contract says "monthly uptime of 99.99%". ;-)
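
To put rough numbers on that (a quick Python sketch; the 30-day month and 
the 5-year comparison window are my assumptions):

# Downtime budget implied by a 99.99% uptime target, for different
# measurement windows.
def downtime_budget_seconds(uptime_pct, window_seconds):
    """Seconds of downtime permitted in the given window."""
    return window_seconds * (1.0 - uptime_pct / 100.0)

MONTH = 30 * 24 * 3600            # 2,592,000 s
FIVE_YEARS = 5 * 365 * 24 * 3600  # 157,680,000 s

print(downtime_budget_seconds(99.99, MONTH))       # ~259 s: about 4m19s per month
print(downtime_budget_seconds(99.99, FIVE_YEARS))  # ~15,768 s: about 4.4 hours over 5 years

# A single hour-long maintenance outage every 5 years keeps you above
# 99.99% overall, but it blows the entire budget for that month if the
# SLA is measured monthly.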

>>> This makes no sense at all. How the hell can a 6 Gbit/s SATA link
>>> perform the same as a 100 Mbit/sec Ethernet link? Never mind a 10
>>> Mbit/sec Internet link. That makes no sense at all. (Unless your actual
>>> disks are so feeble that all of them combined deliver less than 10
>>> Mbit/sec of data transfer speed...)
>>
>> You're making a strawman argument. No one ever said that SANs run over
>> 10Mbit ethernet.
>
> Neither did I. I said 100 Mbit Ethernet. (It's quoted right there.)
>
> You claimed that it's not insane to run a SAN over the Internet,

When did I say that?

> despite
> the fact that typical Internet speeds are roughly 10 Mbit/sec or slower.
>
>> most SAN implementations run dedicated protocols over
>> fibre at Gbps speeds.
>
> It's news to me that such things even exist yet - but perhaps that was
> your point?
>

Actually, it shouldn't be news to you.  We went over this in great 
detail a few months ago.  Remember my nice ASCII-graphics chart with 
the servers, SAN switches, drive enclosures and tape units?
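
For anyone who missed it, the general shape was something like this 
(redrawn from memory and heavily simplified, not the original chart):

  +---------+      +---------------+      +------------------+
  | servers |======| SAN switch A  |======| disk enclosures  |
  | (2 HBAs |      +---------------+      +------------------+
  |  each)  |======| SAN switch B  |======| tape library     |
  +---------+      +---------------+      +------------------+
  (both switches actually have paths to all the storage; the extra
   links are left out here for readability)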

> Still, it looks like the UK site will soon be in possession of their
> very own SAN, so I guess I'll be able to watch it fail up close and
> personal. o_O
>

Not that SANs are infallible, but why do you assume that it will fail?

>> Four 9s and Five 9s uptime is expensive.
>
> Hell yes.
>
> Actually, I think we can simplify this to "uptime is expensive". I've
> yet to see a method of improving uptime that's cheap.
>
>>> For that, you would need [at least] two geographically remote sites
>>> which duplicate everything - disk and other hardware as well. I'm
>>> struggling to think of a situation where the volume of data produced per
>>> hour is so low that you can actually keep it synchronised over the
>>> Internet. And if it isn't in sync, then a failover from primary to
>>> secondary system entails data loss.
>>
>> Not the Internet, multiple dedicated 10Gbps DWDM links. You'd be
>> surprised to know that most Fortune 1000 enterprises actually do this
>> already.
>
> So what you're saying is that a handful of the richest companies on
> Earth can afford to do this?

There's more than a handful of companies who can afford it.  A few £M in 
extra telco costs per year is nothing compared to the prospect of going 
out of business because your data centre had a 110-story building crash 
on top of it.
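
The raw bandwidth is also what makes keeping the two copies in sync 
realistic in the first place.  Roughly (a Python back-of-the-envelope, 
ignoring protocol overhead and assuming the link is dedicated to 
replication):

# Terabytes of replication traffic a link can carry per hour, at line rate.
def tb_per_hour(link_gbps):
    bits_per_hour = link_gbps * 1e9 * 3600
    return bits_per_hour / 8 / 1e12   # bits -> bytes -> TB

print(tb_per_hour(0.01))  # ~0.0045 TB/h (4.5 GB/h) on a 10 Mbit/s Internet link
print(tb_per_hour(10))    # ~4.5 TB/h on a single dedicated 10 Gbit/s DWDM link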

> Yeah, I guess that'll be why I haven't seen it before. :-P

I have never seen the Merrill Lynch data centre first-hand, either, but 
I don't need to have seen it to know that they were back up and running 
hours after the WTC towers fell... Reading the story of how they restarted 
their operations from their disaster recovery location was enough.  Then 
the "Interesting...how'd they manage that?" questions popped up in my 
head, and I started digging...

Which is the main point of this whole discussion: reading the newspaper 
and other news-related web sites once in a while is not a bad thing.  
Even if the event in question doesn't affect you directly, there may be 
bits of insight to be gathered.

-- 
/*Francois Labreque*/#local a=x+y;#local b=x+a;#local c=a+b;#macro P(F//
/*    flabreque    */L)polygon{5,F,F+z,L+z,L,F pigment{rgb 9}}#end union
/*        @        */{P(0,a)P(a,b)P(b,c)P(2*a,2*b)P(2*b,b+c)P(b+c,<2,3>)
/*   gmail.com     */}camera{orthographic location<6,1.25,-6>look_at a }

