  Re: Driving backwards  
From: Invisible
Date: 12 Aug 2011 11:55:48
Message: <4e454d04@news.povray.org>
>> SATA 3.0 connection: 6.0 GB/sec
>>
>> Fast Ethernet connection: 0.1 GB/sec
>>
>> That's, what, 6000x slower?
>
> No, that's 60 times slower.

Epic math fail. :'{
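
(Spelled out: 6.0 / 0.1 = 60, so 60x it is.)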

> Besides, SAN connections are NOT fast ethernet. Depending on the length
> and type of fibre, the speed will vary from 100 MBps to 1.6 GBps, which
> is still admittedly slower than a SATA 3 controller, but unless you are
> copying one large sequential file, you will also need to worry about
> drive seek time and file system layout to get actual throughput
> comparisons, and Google says it's a wash.

I rather doubt that drive seek time is longer than network latency. I 
also rather doubt that you can't saturate a single network link with 
half a dozen drives.
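
(Rough numbers: a typical disk seek is on the order of 10 ms against well 
under a millisecond of LAN latency, and half a dozen drives streaming at 
roughly 100 MB/s each is far more than a gigabit link's ~125 MB/s can carry.)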

>> I fail to see why RAID is no longer an issue. (Unless you mean that it's
>> now acceptable to lose data in the event of a hardware fault.)
>
> RAID is an issue for the SAN administrator, not the server
> administrator.

Oh, I see.

This applies only if the SAN administrator and the server administrator 
aren't the same person. In our setup, with a grand total of 4 IT staff, 
I doubt this is relevant.

Like I said, increased complexity.

>> I've also never seen a server with more than one disk controller, come to
>> mention it...
>
> Then your servers are far from optimal.

Well, I guess you can /always/ have more fault tolerance. But in 10 
years of working here, I've only seen one controller fail. Come to think 
of it, I've only seen about 6 drives fail (including desktop PCs and 
laptops).

Currently, I have two servers. One has 2 disks, the other has 3 active 
disks plus a hot spare. I doubt adding more controllers is worth the effort.

> With the exception of the OS drive (You don't want to boot off the SAN),

> *Don't run your domain controllers or DNSes from a SAN either.

Any specific reason?

>> I don't see how that works.
>
> The DVD or tape drives have direct backplane connections to the disk
> arrays so they can read the disks without impacting the network or
> competing with the server's own IO requests.

So how does it obtain a consistent image of the filesystem?

> Just like a hardware RAID controller can rebuild a failed drive without
> any noticeable performance degradation.

[RAID does it by constantly resynchronising.]

>>> You just had a very weird problem? Just clone the DB to a new set of
>>> disks, and assign those on a lab server so that the developers can work
>>> on the issue all weekend.
>>
>> You can trivially do that without a SAN. Disk imaging tools have existed
>> for decades. Either way, you still have to take the DB server offline
>> while you do this.
>
> No. That's the point. The SAN administrator simply mirrors the disks in
> the background, without the server even knowing about it.

Like I say, I don't see how you can obtain a consistent image of the 
filesystem without cooperation from the server.

>> My point is, if you have 10 servers, each with their own dedicated SATA
>> connection (or whatever), you have 10 lots of bandwidth. And if one
>> server decides to hog it all, it can't slow the others down.
>>
>> If you have a SAN, you have /one/ access channel, which all the servers
>> have to share between them. If one server decides to hit it really hard
>> (or just crashes and starts sending gibberish), all the others get
>> clobbered.
>
> Again, wrong. Each server has dedicated access to the SAN fabric and the
> SAN switches will dispatch I/O requests to the proper drive(s) without
> impacting other servers' I/O requests.

There is one network switch. All data must pass through this single 
device. If it only processes X bytes per second, then all the servers 
between them cannot access more than X bytes per second. The end.

Unless every single server and every single disk has its own dedicated 
channel to the switch (which is presumably infeasible), there will be 
additional bottlenecks for components sharing channels too.
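
Just to put numbers on it -- entirely made-up figures, Python only because 
it's handy as a calculator:

# Toy model: N servers behind one switch whose backplane moves X bytes/sec.
# No matter how fast each individual link is, the aggregate cannot exceed X,
# and one greedy server shrinks everyone else's share.

SWITCH_CAPACITY = 1.6e9      # bytes/sec through the single switch (hypothetical)
SERVERS = 10

fair_share = SWITCH_CAPACITY / SERVERS
print(f"Fair share per server: {fair_share / 1e6:.0f} MB/s")

# One misbehaving server grabs 80% of the fabric:
hog = 0.8 * SWITCH_CAPACITY
left_each = (SWITCH_CAPACITY - hog) / (SERVERS - 1)
print(f"Left for each of the other {SERVERS - 1}: {left_each / 1e6:.0f} MB/s")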

>>>> In all, this seems like an extremely bad idea.
>>>
>>> Have they actually stated that they would not be taking tape (or more
>>> than likely DVD) backups of that data for archival?
>>
>> Bizarro. Why would you back up to a DVD that holds a piffling 4.2 GB of
>> data when a single tape holds 400 GB? That makes no sense at all.
>
> Tapes degrade over time faster than DVDs do.

In which universe? I would have said the exact opposite was true. I've 
seen CDs and DVDs burned 6 months ago where the ink has faded to the 
point where the disk is utterly unusable. I've yet to see a tape fail to 
read.

> I/O is sequential on a tape. If you need a specific file, you need to
> read through the whole tape from the beginning.

That's true though. Fortunately, file restoration is a rare event, so it 
doesn't matter too much.

> If you only have a 10GB incremental per day, and need to send it to
> offsite storage every day (sometimes required by law), using 400GB tape
> is a serious waste of money.

The way we do it is to store all the tapes off-site, except today's 
tape. Rotate the tapes until they become full, then add in a new tape. 
All 400GB of capacity gets used, and yet at any time the majority of 
tapes are off-site.
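
For what it's worth, the rotation is simple enough to sketch as a toy 
simulation. The 400 GB tape size and the 10 GB daily incremental are the 
figures from this thread; the 5-tape starting pool is invented:

# Each day, one tape ("today's tape") is on-site and written to; the rest
# stay off-site. When the tape whose turn it is has no room left, rotate to
# one that does, and only add a brand-new tape once the whole pool is full.

TAPE_CAPACITY_GB = 400
DAILY_INCREMENTAL_GB = 10
tapes = [0] * 5                           # GB written to each tape in the pool

for day in range(365):
    idx = day % len(tapes)                # today's candidate tape
    if tapes[idx] + DAILY_INCREMENTAL_GB > TAPE_CAPACITY_GB:
        idx = next((i for i, used in enumerate(tapes)
                    if used + DAILY_INCREMENTAL_GB <= TAPE_CAPACITY_GB), None)
        if idx is None:                   # every tape is full: add a new one
            tapes.append(0)
            idx = len(tapes) - 1
    tapes[idx] += DAILY_INCREMENTAL_GB

used = sum(tapes)
print(f"{len(tapes)} tapes, {used} GB written, "
      f"{used / (len(tapes) * TAPE_CAPACITY_GB):.0%} of capacity used")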

> Each application has its preferred solution. Your mileage may vary.

Fair enough. I'm just saying, for our setup, this wouldn't be a good idea.

>>> In all likelihood, that remote copy will probably be backed up to
>>> offsite storage - if not, it should be, just for the sake of business
>>> continuity, but in many cases, for legal reasons as well.
>>
>> Nope. The idea, apparently, is that there will be two remote copies, at
>> two different sites. If one dies, we've still got the other site.
>> They're planning to have no tape at all, just spinning disk.
>
> That _is_ silly.

...which is what I said a week ago. ;-)

