On 3/8/2011 4:43 PM, Darren New wrote:
> Tom Austin wrote:
>> Unoptimized programs make it very slow. Programs written as quickly
>> as possible don't get the benefit of being efficient.
>
> Modern database systems are designed to take normalized tables and make
> accessing them very fast. That's the fundamental operation a relational
> database management system performs.
>
> How big is your total collection of active databases right now? If it
> fits in (say) a gigabyte, chances are you can't possibly make it too slow.
>
> If it *is* too slow when done "right", *then* you get out the query
> analyzer and look to see what it's spending its time doing, and you add
> the one index that cuts that time from minutes to milliseconds.
>
I think most of our slowdowns are currently in the applications that
access it: basically inefficient routines and use of objects. That's a
required approach when coding time is of the essence.
>> Thanks for the acronym - more info to go on :-)
>>
>> I don't know if we will try or need to set up transactions, but they
>> could be helpful if something is not a simple insert or update.
>
> You will need transactions. Trust me on this one. :-) If nothing else,
> you'll need transactions when you're archiving out an old job.
>
I've worked with transactions some in the past and they make a lot of
sense. I'll have to try to incorporate them as we move forward.
Does it make sense to set up a transaction wrapper on even simple single
updates?
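To make the question concrete, here is a minimal sketch (using Python's sqlite3 as a stand-in for MySQL; the table and column names are invented). A single UPDATE is already atomic on its own, so the wrapper mostly earns its keep when two or more statements have to succeed or fail together:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE poles (id INTEGER PRIMARY KEY, height REAL)")
conn.execute("INSERT INTO poles (id, height) VALUES (1, 35.0)")
conn.commit()

# Using the connection as a context manager wraps the statements in one
# transaction: commit on success, rollback on any exception.
try:
    with conn:
        conn.execute("UPDATE poles SET height = 40.0 WHERE id = 1")
        conn.execute("INSERT INTO poles (id, height) VALUES (2, 30.0)")
except sqlite3.Error:
    pass  # whole transaction rolled back; nothing half-applied

print(conn.execute("SELECT COUNT(*) FROM poles").fetchone()[0])  # 2
```

The cost of wrapping even a lone update is negligible, and it keeps the code uniform when a second statement is added later.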
> That said, it's surprisingly common that if you structure your queries
> right, you can make every operation essentially an "append to this
> table", followed by an eventual "update that table in a way that if it
> fails, we can reconstruct that table entirely from scratch". Think
> Journal -> General Ledger.
>
OK, I've been dealing with databases for 15 years now - on mostly a low
level. That just went right over my head. Makes sense, but I have no
concept of how it is implemented.
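My guess at what the Journal -> General Ledger idea looks like in practice (a hypothetical sketch, again with sqlite3 standing in for MySQL): every change is an append-only row in a journal table, and the ledger is just a derived summary that can be thrown away and rebuilt from scratch if an update ever goes wrong.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE journal (account TEXT, amount REAL)")
conn.execute("CREATE TABLE ledger (account TEXT PRIMARY KEY, balance REAL)")

def post(account, amount):
    # Appends only; existing journal rows are never updated or deleted.
    conn.execute("INSERT INTO journal VALUES (?, ?)", (account, amount))

def rebuild_ledger():
    # The ledger is derived entirely from the journal, so a failed
    # update costs nothing: delete it and recompute.
    with conn:
        conn.execute("DELETE FROM ledger")
        conn.execute(
            "INSERT INTO ledger "
            "SELECT account, SUM(amount) FROM journal GROUP BY account"
        )

post("cash", 100.0)
post("cash", -30.0)
rebuild_ledger()
print(conn.execute(
    "SELECT balance FROM ledger WHERE account = 'cash'").fetchone()[0])  # 70.0
```

The append-only journal is the source of truth; everything else is disposable, which is what makes the "if it fails, reconstruct from scratch" recovery possible.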
>
> That said, btw, it's often the case that you *can* find text that won't
> change often enough to make a difference. Stock ticker, street name,
> radio station call letters. If it changes rarely enough that you're
> willing to treat it as a "delete all the old, insert all the new" sort
> of transaction, using text is OK. One generally doesn't do this with
> people, but saying "if you change your stock ticker, you have to treat
> it as selling all the old shares and getting the same number of new
> shares from the new ticker name" isn't too unreasonable.
>
For some reason I am hesitant to present the key as something that the
end user interacts with. Maybe I'm a control freak :-&
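Keeping the key internal is easy to illustrate (table and column names here are invented for the example): the end user only ever sees and edits the text label, while the integer surrogate key that other tables reference never changes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stations (id INTEGER PRIMARY KEY, call_letters TEXT UNIQUE)")
conn.execute(
    "CREATE TABLE surveys (station_id INTEGER REFERENCES stations(id), note TEXT)")

conn.execute("INSERT INTO stations (call_letters) VALUES ('WXYZ')")
sid = conn.execute(
    "SELECT id FROM stations WHERE call_letters = 'WXYZ'").fetchone()[0]
conn.execute("INSERT INTO surveys VALUES (?, 'pole 17 inspected')", (sid,))

# Renaming the station touches one row; every referencing row is untouched.
conn.execute("UPDATE stations SET call_letters = 'WABC' WHERE id = ?", (sid,))
row = conn.execute(
    "SELECT s.call_letters, v.note FROM surveys v "
    "JOIN stations s ON s.id = v.station_id").fetchone()
print(row)  # ('WABC', 'pole 17 inspected')
```

With a text primary key, that rename would instead have to cascade through every table that mentions the station.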
>> OK, you want a job surveying utility poles :-) You have good insight.
>
> I've just done this dozens of times in the last couple decades.
>
What - survey telephone poles?
>
> Look at it this way: This is the data that is vital to your company, in
> the strictest sense. Your company's job is to manage and evolve this
> data. You don't have time to get it wrong.
That is very correct. Our first stab was fast and mostly dirty. We
spent a lot of time trying to get the data right, but we missed some
pieces of data.
>
> What about your current system is so broken that you can't afford the
> time to fix it?
>
It has layers upon layers of fixes. The end users had to fight with it
to get it to put out data correctly. It was a system so rich in ad-hoc
fixes that I refused to make any more changes because of the risk of
data corruption/loss and our client getting crap for a deliverable.
There were some cool features that I implemented - like a GIS-type
interface - before GIS systems were popular.
> The other possibility is to hire someone who actually does this sort of
> stuff for a living for long enough to show you how to set up the basics
> of the database.
>
It's a balance. Depending on our requirements, it could take just as
much of our time working with the guy as doing it ourselves.
>>>> Exporting the data for archival purposes could be a good idea. Even
>>>> exporting a fully functional database with only the particular job
>>>> data required could be good.
>>>
>>> All those things are not-too-hard if all the data is in the database.
>>>
>>
>> Yes, and if the database schema is identical, the same programs can
>> use any database - from mySQL to .mdb files.
>
> That's the goal. As you get more sophisticated, you find that breaks
> down. When you use views in your "real" database and you archive it to
> something that has no views, you find yourself with a problem.
>
Yes, one just has to pay attention and not set up things that break.
The applications could avoid relying on views (or other non-common
features), but then one could be crippling oneself.
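A small sketch of the view-portability problem being described (hypothetical table names, sqlite3 for illustration): if the application reads from a view, an archive copy made on a system without views breaks, so one workaround when exporting is to materialize each view as a plain table.

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE jobs (id INTEGER, status TEXT)")
src.executemany("INSERT INTO jobs VALUES (?, ?)", [(1, "open"), (2, "closed")])
src.execute("CREATE VIEW open_jobs AS SELECT id FROM jobs WHERE status = 'open'")

# The archive target has no views, so open_jobs becomes a plain table
# holding a frozen copy of what the view computed at export time.
archive = sqlite3.connect(":memory:")
archive.execute("CREATE TABLE open_jobs (id INTEGER)")
archive.executemany(
    "INSERT INTO open_jobs VALUES (?)",
    src.execute("SELECT id FROM open_jobs").fetchall(),
)
print(archive.execute("SELECT id FROM open_jobs").fetchall())  # [(1,)]
```

The trade-off is that the materialized copy is a snapshot, not a live query, which is usually fine for an archive.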
> That said, even a cheap machine nowadays has more than enough processing
> power to do whole boatloads of database processing.
>
> About how much data do you have, in total and in active jobs? Many
> people seriously underestimate just how much processing even a $1000
> machine can do on a SQL database.
>
Yes, I don't think it will take much of a machine. We have an old
server box (PIII 600 I think) that would likely do just that. Or I
might incorporate MySQL on our current file server (Linux) to simplify
things and use the old box for development.
>> Well, almost free - just takes time to set up. By the time all is said
>> and done I figure that mySQL will take about $1k - $2k in my time to
>> set up reliably. Need to get it installed on our server and set up
>> connection methods and backup methods. Not too difficult, but I don't
>> do it every day.
>
> Be sure to factor in some ongoing maintenance time. Occasionally your
> replication will break, or something like that, and you'll have to fix
> stuff in a panic. :-)
>
Yes, just like last week with our servers.
In the past 3 years I have had 1 or 2 incidents that required immediate
attention. The past week brought no fewer than 5 of them, from full OS
drives to failed RAID mirrors.
Luckily none of them were damaging. Just annoying and cost a couple of
hours of everyone's time.