  Re: Driving backwards  
From: Invisible
Date: 16 Aug 2011 10:29:44
Message: <4e4a7ed8$1@news.povray.org>
>>>> So how does it obtain a consistent image of the filesystem?
>>>
>>> The same way you do with a local backup agent. The SAN is not a magical
>>> invention that removes all worldly constraints. In other words, you
>>> schedule the backup at the time when it's least likely there will be
>>> changes to the file system, and pray to the deity of your choice.
>>
>> That's /not/ how a local backup agent works. Indeed, the entire
>> /purpose/ of a local backup agent is to construct a consistent image (by
>> whatever means that requires) to write to tape.
>
> No. The purpose of a local backup agent is:
> 1) to have the means to back the files up at scheduled intervals, and
> 2) to talk to the backup unit (be it another locally-mounted file system,
> removable storage media, or file transfer over a network).
>
> Both of these functions are nowadays available directly from the OS, but
> most enterprises will have a separate tool to do this, as there are other
> advantages when it comes to centralized management, the ability to lock
> access to certain files or directories, or the ability to speak in a
> dialect understood by the application (e.g. the Oracle "live backup"
> agent). But in many cases it is not necessary, and the backup job can run
> while the users access their files. At that point, it becomes a management
> decision based on the relative cost penalty of scheduling downtime vs.
> losing absolute file consistency.

OK, first of all, let me split two things out:

First, we have consistency of the physical file system itself.

Second, we have consistency of the logical files contained on disk.

If the FS becomes inconsistent, usually that renders it completely 
unreadable without advanced recovery tools. An unreadable backup is useless.

If a logical file becomes inconsistent, that might just mean that Word 
crashes when you try to open your document. So the backup copy of that 
document is useless, but other documents on the disk are probably usable.

Either way, the point of having a remote backup agent is to prevent 
these kinds of inconsistencies. Firstly, by accessing the file system 
through the OS, you guarantee that what you see is a consistent image of 
the file system. Secondly, I know on Windows you can do things like 
Shadow Copy, which guarantees a consistent image of all logical files, 
even if people are editing them while you back them up.
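
To make that concrete, here's a rough sketch of the Shadow Copy step as a
backup script might drive it. This is only an illustration: it assumes a
Windows Server box with admin rights and the stock vssadmin tool, and the
exact output label it parses varies between Windows versions.

# Rough sketch only: snapshot a volume with Volume Shadow Copy so files can
# be copied in a consistent state even while users keep editing them.
# Assumes Windows Server with admin rights; "vssadmin create shadow" is not
# available on client editions of Windows.
import subprocess

def create_shadow_copy(volume="C:"):
    # Ask VSS for a point-in-time snapshot of the whole volume.
    result = subprocess.run(
        ["vssadmin", "create", "shadow", f"/for={volume}"],
        capture_output=True, text=True, check=True,
    )
    # The output includes the shadow copy's device path; the exact label
    # differs between Windows versions, so this parsing is illustrative.
    for line in result.stdout.splitlines():
        if "Shadow Copy Volume" in line:
            return line.split(":", 1)[1].strip()
    raise RuntimeError("shadow copy volume not found in vssadmin output")

if __name__ == "__main__":
    shadow_path = create_shadow_copy()
    # A backup agent would now read files from shadow_path instead of the
    # live volume, getting a frozen, consistent image of every open file.
    print("Back up from:", shadow_path)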

It's entirely possible to back up remote disks without installing a 
remote agent. But nobody does this, because the file consistency problems 
are too severe.
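
(And "backing up without an agent" really is just copying files over the
network, which is exactly why it falls over. A minimal sketch, with made-up
share and destination paths; anything open or locked gets skipped or copied
mid-write.)

# Rough sketch of an "agentless" backup: copy whatever can be read off a
# network share. Open or locked files are skipped (or would be copied
# mid-write), which is the consistency problem described above.
# The share and destination paths are made up for illustration.
import os
import shutil

def copy_share(src_root, dst_root):
    skipped = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        os.makedirs(os.path.join(dst_root, rel), exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            try:
                shutil.copy2(src, os.path.join(dst_root, rel, name))
            except OSError:
                skipped.append(src)  # locked/open file: no agent, no snapshot
    return skipped

if __name__ == "__main__":
    missed = copy_share(r"\\fileserver\share", r"D:\backup\share")
    print(len(missed), "files could not be copied")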

(And yes, then there are other advantages. A local backup agent can use 
the NTFS journal to work out which files have changed recently for 
incremental backups. It can compress data before sending it over the 
network. And so forth.)
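
As a sketch of that "work out what changed, then compress" step: a real
agent reads the NTFS USN change journal, but the simplified version below
just compares modification times against the previous run, which shows the
same idea without the Win32 plumbing. The paths are made up.

# Simplified incremental backup: find files changed since the last run and
# pack them into a compressed archive locally, so the raw data never has to
# cross the network uncompressed. A real agent would use the NTFS USN
# journal instead of modification times.
import os
import tarfile
import time

def incremental_backup(root, archive_path, last_run_epoch):
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > last_run_epoch:
                    changed.append(path)
            except OSError:
                pass  # file vanished or is locked; a real agent would log this
    with tarfile.open(archive_path, "w:gz") as tar:
        for path in changed:
            tar.add(path)
    return len(changed)

if __name__ == "__main__":
    one_day_ago = time.time() - 24 * 3600
    count = incremental_backup(r"C:\Shares\Docs", "incremental.tar.gz", one_day_ago)
    print(count, "changed files captured")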

> As I said, if it absolutely, positively, legally
> requires a consistent copy, means will be taken to achieve that. But if
> it's just a minor inconvenience for a few people that an older draft
> version of their weekly PowerPoint presentation to upper management was
> saved prior to the server failure, then so be it.

An older draft isn't so bad. When the file becomes completely 
unreadable, that generally upsets people.

>> So what you're saying is that you wire up two independent networks, and
>> connect every device to both of them, so that if the switch running
>> network A fails, you can use network B instead?
>
> The storage network hardware is all interconnected, but the storage
> network is independent from the user-traffic network.

Interesting that you've drawn the tape robot at the opposite end of the 
SAN. Usually that's /controlled by/ one of the servers.

> Typically, the SAN switches will have up to 16 or so "front-end" ports
> and 2 or more higher-speed "back-end" ports to the drive bays. Some
> high-load servers might have direct connections to the drive bays.

I still find it hard to believe that a shared fabric can come anywhere 
close to the performance of a dedicated fabric.

>>>> Unless every single server and every single disk has its own dedicated
>>>> channel to the switch (which is presumably infeasible), there will be
>>>> additional bottlenecks for components sharing channels too.
>>>
>>> You have presumed wrong. This is exactly what happens.
>>
>> Damn. This stuff is more expensive than I thought... Still, great way to
>> make your department look important, I guess.
>
> It's the only way to achieve "five 9s" uptime (i.e. 99.999%), which is
> what most banks and multinationals require. Again, for a small to medium
> shop with a dozen to fifty or so servers, or with no requirement for such
> high SLAs, it's overkill.

So people really are willing to sacrifice huge amounts of performance 
just for increased uptime?
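
(For scale, "five 9s" allows only about five minutes of downtime per year;
a quick back-of-the-envelope check:)

# How much downtime does 99.999% availability actually allow per year?
uptime = 0.99999
minutes_per_year = 365.25 * 24 * 60
print(f"{(1 - uptime) * minutes_per_year:.2f} minutes/year")  # ~5.26 minutes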

