On 2012-02-01 09:03, Invisible wrote:
>>>> Having a general idea of how enterprise apps like SAP, BEA
>>>> Weblogic, or Websphere work is never a bad thing
>>>
>>> Sure. But (for example) I gather that some people use SAN technology. I
>>> cannot for the life of me begin to imagine why you would accept such a
>>> massive performance hit in exchange for the mere ability to plug and
>>> unplug disks virtually rather than physically. But apparently everybody
>>> is doing it, for reasons unknown.
>>
>> The big advantage to adding disks virtually instead of physically is
>> that you are not limited by the form factor of the server, the power
>> output of the PSUs and most importantly, they can be done on the fly,
>> without having to take an outage.
>
> But you still need to (at a minimum) reboot the server after you change
> the SAN mapping anyway.
>
No. Do you need to reboot after swapping a CD or a USB key? OSes have
been able to handle adding and removing storage on the fly for a while
now. Obviously, Windows may complain if you try to pull the C: drive out
from under its feet, but you shouldn't be booting from a SAN drive under
normal circumstances anyway.
I'm no Windows Server expert, so I don't know whether it lets you extend
disk space on the fly, but I do know that many flavors of Unix do.
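The idea can be sketched as a toy model (purely illustrative; real
systems use something like LVM's lvextend followed by a filesystem
resize, and the extent size here is an arbitrary assumption):

```python
# Toy illustration of online volume growth: a logical volume is just a
# list of fixed-size extents, and growing it is an append -- nothing
# about the "mounted" state has to change, so there is no reboot step.
EXTENT_MB = 512  # assumed extent size, purely for illustration

class LogicalVolume:
    def __init__(self, extents):
        self.extents = extents      # number of extents allocated
        self.mounted = True         # stays True the whole time

    @property
    def size_mb(self):
        return self.extents * EXTENT_MB

    def extend(self, extra_extents):
        # Grow the volume while it is in use -- no unmount, no reboot.
        self.extents += extra_extents
        return self.size_mb

lv = LogicalVolume(extents=20)      # about 10 GB to start
print(lv.size_mb)                   # 10240
lv.extend(4)                        # hot-add 2 GB carved from the SAN
print(lv.size_mb, lv.mounted)       # 12288 True
```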
>> In your environment, it may not be that big of an issue, but when you
>> have contracted service level agreements that you will have 99.99%
>> uptime, you have no other choice. Do the math: that's only about
>> 4m19s of downtime per month... try powering off a server, taking it
>> out of the rack, adding a disk and powering it back on in less than
>> 10 times that!
>
> Uhuh. And when one server physically dies, it's going to be down a tad
> longer than 1m42s. So if you don't have the spare capacity to handle
> that, a SAN still hasn't solved your problem.
>
Need help moving those goalposts? They look heavy! We weren't talking
about a server dying; we were talking about needing more disk space. A
SAN is not the remedy for every problem, obviously, just like onsite
generators can help you ride out power outages but, as we saw last
March, aren't much help if a tsunami takes the fuel tanks away.
>> We've also had the performance discussion before. Yes, the theoretical
>> access speed of a local SATA drive is much faster than that of a SAN
>> attached logical disk, but in actual real world practice, with real
>> world data, there's not much of a difference, even on SANs located
>> halfway across town in another building.
>
> This makes no sense at all. How the hell can a 6 Gbit/s SATA link
> perform the same as a 100 Mbit/sec Ethernet link? Never mind a 10
> Mbit/sec Internet link. That makes no sense at all. (Unless your actual
> disks are so feeble that all of them combined deliver less than 10
> Mbit/sec of data transfer speed...)
>
You're making a strawman argument. No one ever said that SANs run over
10 Mbit Ethernet. While it is possible to run iSCSI over a 10 or 100
Mbps Ethernet LAN, most SAN implementations run dedicated protocols such
as Fibre Channel at multi-gigabit speeds.
Last time we went over this, I pointed you to studies that showed that
over millions of "real-life" I/O operations, the SAN was performing no
worse than locally-connected disks.
See for example:
http://www.sqlteam.com/article/which-is-faster-san-or-directly-attached-storage
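A back-of-envelope calculation (my own assumed numbers, not taken from
that article) shows why the wire speed barely matters for this workload:
for random I/O on spinning disks, per-operation time is dominated by
seek and rotation, not by the link.

```python
# Back-of-envelope: time for one random 8 KB read over different links.
# Assumed figures for a typical 15k rpm drive; adjust to taste.
seek_ms = 3.5            # average seek time
rotate_ms = 2.0          # half a rotation at 15,000 rpm
io_bytes = 8 * 1024      # one 8 KB database page

def transfer_ms(link_gbps):
    # Time for io_bytes to cross the link, in milliseconds.
    return io_bytes * 8 / (link_gbps * 1e9) * 1000

for name, gbps in [("SATA 6 Gb/s (local)", 6.0), ("FC 4 Gb/s (SAN)", 4.0)]:
    total = seek_ms + rotate_ms + transfer_ms(gbps)
    print(f"{name}: {total:.3f} ms per random 8 KB read")

# The wire accounts for ~0.01-0.02 ms either way; the ~5.5 ms of
# mechanical latency dwarfs it, which is why SAN vs. direct-attached
# barely shows up in random-I/O benchmarks.
```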
>> Which brings us to the last big advantage of SANs. Instant disaster
>> recovery! Your main office is now a pile of smoldering ruins? No biggie,
>> just reassign the LUNs from that offsite SAN to another machine at the
>> business continuity location and power it up, and presto! your business
>> is back on its feet.
>>
>> Sometimes, taking a small performance hit on each I/O operation is
>> nothing compared to being able to get the company back up and running
>> even though a hurricane decided to pay your data centre a visit.
>
> That works if a hurricane takes out the data centre with your SAN in it.
> Not so much if it takes out the data centre with your compute devices in
> it. :-P
>
Most businesses will invest in redundant servers BEFORE investing in
redundant facilities. Therefore, if you have more than one data centre
with redundant SANs, you more than likely built your business continuity
plan so that you also have enough CPU capacity available at both sites
to run your entire operation from just one of them, and you probably run
monthly or quarterly disaster drills to make sure that your plan
actually works.
Four nines and five nines of uptime are expensive.
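The arithmetic behind those SLA budgets is simple; over a 30-day month,
four nines allow roughly 4.3 minutes of downtime:

```python
# Allowed downtime for a given uptime percentage over a 30-day month.
MONTH_SECONDS = 30 * 24 * 3600   # 2,592,000 s

def downtime_budget_s(uptime_pct):
    # The SLA's downtime budget is simply the complement of uptime.
    return MONTH_SECONDS * (1 - uptime_pct / 100)

for pct in (99.9, 99.99, 99.999):
    s = downtime_budget_s(pct)
    print(f"{pct}% uptime -> {s / 60:.1f} min of downtime per month")
# 99.9%   -> ~43 min/month
# 99.99%  -> ~4.3 min/month (259.2 s)
# 99.999% -> ~26 s/month
```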
> For that, you would need [at least] two geographically remote sites
> which duplicate everything - disk and other hardware as well. I'm
> struggling to think of a situation where the volume of data produced per
> hour is so low that you can actually keep it synchronised over the
> Internet. And if it isn't in sync, then a failover to from primary to
> secondary system entails data loss.
>
Not the Internet; multiple dedicated 10 Gbps DWDM links. You'd be
surprised to know that most Fortune 1000 enterprises already do this.
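To put numbers on it (the workload figures below are my own illustrative
assumptions, not anyone's real traffic): a single dedicated 10 Gbps link
carries far more per hour than most transactional workloads generate.

```python
# Capacity of one dedicated 10 Gbps replication link, per hour,
# versus an assumed heavy transactional workload.
link_gbps = 10
bytes_per_hour = link_gbps * 1e9 / 8 * 3600   # 4.5e12 B = 4.5 TB/h

# Assumption: 5,000 transactions/s at 2 KB of redo/log data each.
workload_bph = 5000 * 2 * 1024 * 3600         # ~37 GB/h

print(f"link capacity : {bytes_per_hour / 1e12:.1f} TB/h")
print(f"workload      : {workload_bph / 1e9:.1f} GB/h")
print(f"utilisation   : {workload_bph / bytes_per_hour:.1%}")
# Under these assumptions the link runs at under 1% utilisation, so
# keeping a remote copy in sync is not the bottleneck.
```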
All enterprise database systems (Oracle, DB2, MS-SQL, et al.) allow for
real-time duplication of data across multiple storage locations, so it's
easy to tell your DBMS to keep one copy of a table on disk E: and
another on disk F:, and to map those disks to LUNs that sit on different
physical SANs. Most SANs can also handle this replication internally,
without the server OS knowing anything about it, just like RAID is
transparent to the OS and applications.
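The principle, every logical write landing on two independent devices
transparently to the caller, can be sketched in a few lines (a toy
RAID-1-style mirror over plain files, not any vendor's API):

```python
import os
import tempfile

# Toy mirror: every write goes to both backing "devices" (plain files
# here), and the caller never knows there are two copies.
class MirroredDisk:
    def __init__(self, path_a, path_b):
        self.paths = (path_a, path_b)

    def write_block(self, offset, data):
        # Duplicate the write to both copies.
        for p in self.paths:
            with open(p, "r+b" if os.path.exists(p) else "wb") as f:
                f.seek(offset)
                f.write(data)

    def read_block(self, offset, size, failed=None):
        # Survive the loss of either copy by reading the other one.
        for p in self.paths:
            if p != failed:
                with open(p, "rb") as f:
                    f.seek(offset)
                    return f.read(size)

d = tempfile.mkdtemp()
disk = MirroredDisk(os.path.join(d, "san_a.img"),
                    os.path.join(d, "san_b.img"))
disk.write_block(0, b"ledger row 42")
# Pretend SAN A burned down along with the data centre:
print(disk.read_block(0, 13, failed=disk.paths[0]))  # b'ledger row 42'
```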
>>> So in this instance, I know what the world is doing, but I still have
>>> absolutely no insight at all. It hasn't helped.
>>
>> That's because you keep forgetting that other people may have other
>> needs than yours.
>
> Perhaps. But saying "we work in the financial industry" doesn't tell me
> how your needs are different than mine. Really, the only way you'd truly
> come to understand this is by actually /working/ in that industry. And
> that's just not possible.
>
Are you saying it's impossible to work in the banking industry? Or are
you saying that the banking industry is so secretive about their work
that you can't even find out how they operate without working there?
Google "Tandem Computers". Their story is very well known and was
instrumental to the rise of computers in the financial world. There's
no need to work at a bank to find out about that. Look at the job
offers at a big bank: if, for example, they're looking for someone with
Solaris and Oracle knowledge and "experience with high availability is
an asset", that should tell you enough about the products used at the
bank in question, and if you want to know more about how Solaris and
Oracle handle HA, you can simply go to their web sites and read white
papers, Gartner studies, etc.
Of course, you can also decide that it's not worth it to learn about
these things and simply shrug it off, but don't complain that you can't
get jobs outside of the little hole you dug yourself into.
--
/*Francois Labreque*/#local a=x+y;#local b=x+a;#local c=a+b;#macro P(F//
/* flabreque */L)polygon{5,F,F+z,L+z,L,F pigment{rgb 9}}#end union
/* @ */{P(0,a)P(a,b)P(b,c)P(2*a,2*b)P(2*b,b+c)P(b+c,<2,3>)
/* gmail.com */}camera{orthographic location<6,1.25,-6>look_at a }