  Re: Driving backwards  
From: Invisible
Date: 15 Aug 2011 09:29:48
Message: <4e491f4c$1@news.povray.org>
>> Oh, I see.
>>
>> This applies only if the SAN administrator and the server administrator
>> aren't the same person. In our setup, with a grand total of 4 IT staff,
>> I doubt this is relevant.
>
> This is why I said it would be silly to have a SAN for only a handful of
> devices.

Fair enough.

>> Well, I guess you can /always/ have more fault tolerance. But in 10
>> years of working here, I've only seen one controller fail. Come to think
>> of it, I've only seen about 6 drives fail (including desktop PCs and
>> laptops).
>
> Not fault tolerance. PERFORMANCE. You know, that thing you were saying
> would degrade with a SAN.

How does adding more controllers increase performance? The controller is 
usually the fastest component in the system. The bottleneck is usually 
either the bus to the CPU, or the connections to the HDs.
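
To put rough numbers on it, here's a quick Python back-of-envelope. Every 
figure below is an assumption picked for illustration, not a measurement 
from any real box:

GBIT = 1e9 / 8                      # bytes per second in one gigabit

# Assumed figures for a small direct-attached array:
controller = 4.8e9                  # controller backplane, ~4.8 GB/s
host_bus   = 4 * GBIT               # host-side link, 4 Gbit/s (~500 MB/s)
drives     = 8
per_drive  = 150e6                  # ~150 MB/s sustained per spinning disk

paths = {
    "controller backplane": controller,
    "bus to the host CPU":  host_bus,
    "drive connections":    drives * per_drive,
}
for name, rate in paths.items():
    print(f"{name:22s}: {rate / 1e6:6.0f} MB/s")

# With these (made-up) numbers the host-side bus saturates long before
# the controller does, so doubling up on controllers doesn't help.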

>>> The DVD or tape drives have direct backplane connections to the disk
>>> arrays so they can read the disks without impacting the network or
>>> competing with the server's own IO requests.
>>
>> So how does it obtain a consistent image of the filesystem?
>
> The same way you do with a local backup agent. The SAN is not a magical
> invention that removes all worldly constraints. In other words, you
> schedule the backup at the time where it's least likely there will be
> changes to the file system, and pray to the deity of your choice.

That's /not/ how a local backup agent works. Indeed, the entire 
/purpose/ of a local backup agent is to construct a consistent image (by 
whatever means that requires) to write to tape.
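
For concreteness, here's roughly the snapshot-then-copy dance such an 
agent performs, sketched in Python with LVM as the snapshot mechanism. 
The volume and path names are made up; the point is only that the agent 
freezes a point-in-time view of the filesystem and writes /that/ out, 
rather than copying live files and hoping nothing changes mid-run.

import os
import subprocess

def run(cmd):
    """Run a command and fail loudly if it doesn't succeed."""
    subprocess.run(cmd, check=True)

# Hypothetical volume names -- substitute your own volume group / LV.
ORIGIN   = "/dev/vg0/data"
SNAPSHOT = "/dev/vg0/data_snap"
MOUNT    = "/mnt/backup_snap"

os.makedirs(MOUNT, exist_ok=True)

# 1. Freeze a point-in-time view of the volume.
run(["lvcreate", "--snapshot", "--size", "2G", "--name", "data_snap", ORIGIN])
try:
    # 2. Mount the snapshot read-only and stream it to the backup target
    #    (a tar file here; a tape device in real life).
    run(["mount", "-o", "ro", SNAPSHOT, MOUNT])
    run(["tar", "-czf", "/backup/data.tar.gz", "-C", MOUNT, "."])
finally:
    # 3. Tear the snapshot down again; it only existed for the backup.
    subprocess.run(["umount", MOUNT])
    subprocess.run(["lvremove", "-f", SNAPSHOT])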

> In other words, in a data center for a multinational corporation where
> planes are up in the air, or trading is going on in a stock exchange
> somewhere 24 hrs a day, 7 days a week, you NEVER have a consistent copy.
> You simply hope to have a GOOD ENOUGH copy.

Nonsense. If you're doing something like that, you're probably using a 
relational database. And any half-decent product of that type provides a 
way to construct a consistent image without ever stopping the database. 
(You do, however, need the cooperation of the server that's running the 
thing. You can't just take a block-level image of the filesystem and 
expect to get anything sensible out of it.)
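
As one concrete example (the database name is illustrative), PostgreSQL's 
pg_dump opens a single transaction snapshot at the start of its run, so 
the resulting dump is internally consistent even while the database 
carries on accepting writes underneath it:

import subprocess

# "orders" is a made-up database name; pg_dump is standard PostgreSQL
# tooling. Everything it writes out reflects one moment in time, so
# there's no need to stop the database to get a consistent copy.
subprocess.run(
    ["pg_dump", "--format=custom", "--file=/backup/orders.dump", "orders"],
    check=True,
)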

>> There is one network switch. All data must pass through this single
>> device. If it only processes X bytes per second, then all the servers
>> between them cannot access more than X bytes per second. The end.
>
> There are multiple SAN switches (you definitely DON'T want a single
> point of failure). Each with enough backplane bandwidth, processor power
> and ports to handle I/O requests at wire speed.

So what you're saying is that you wire up two independent networks, and 
connect every device to both of them, so that if the switch running 
network A fails, you can use network B instead?

>> Unless every single server and every single disk has its own dedicated
>> channel to the switch (which is presumably infeasible), there will be
>> additional bottlenecks for components sharing channels too.
>
> You have presumed wrong. This is exactly what happens.

Damn. This stuff is more expensive than I thought... Still, great way to 
make your department look important, I guess.

>>> Tapes degrade over time faster than DVDs do.
>>
>> In which universe? I would have said the exact opposite was true. I've
>> seen CDs and DVDs burned 6 months ago where the ink has faded to the
>> point where the disk is utterly unusable. I've yet to see a tape fail to
>> read.
>
> Stop buying cheap DVDs then ;)

To be fair, this seems to have become less of a problem as CD-R 
technology has improved over the years. Much like anything else, I imagine.

> I've seen tapes fail in drives.
> I've seen tapes where someone had to actually change the plastic leader
> thingie because it had become so mangled from repeated use that it
> wouldn't engage in the drive any more. Etc...

Hell, one time I tried to destroy a tape on purpose and I actually 
couldn't do it. We soaked the tape in lab chemicals and it wouldn't 
degrade, and when I tried to cut it up it turned out to be nearly 
indestructible. It was really, really hard to get rid of this stuff. 
(Although it's probably a lot easier just to make the drive unable to 
read it any more.)

>>> I/O is sequential on a tape. If you need a specific file, you need to
>>> read through the whole tape from the beginning.
>>
>> That's true though. Fortunately, file restoration is a rare event, so it
>> doesn't matter too much.
>
> Again that depends on the application.

OK, for /our/ application, it's a rare event. And that's really what I'm 
interested in. Does this technology make any sense at all for us? From 
what I can see, no, it doesn't.

> I've since moved away from server support, so I can't comment on what
> the storage admins do nowadays. All I know is they have a really cool
> robot that fetches tapes and DVDs from the racks instead of relying on
> "tape monkeys" as we used to call them.

I have a small tape robot. If I wanted to, I could load a year's worth 
of tapes into it and never have to change tape until next year. 
(Unfortunately, I can't actually do that, because then none of the tapes 
would be "off site".) It even has a cute little barcode reader.

That was HQ's idea. They don't often have good ideas, but this was one 
of them. ;-)

Now of course, they're talking about taking that away and using our slow 
Internet connection to do the same job... *sigh*

