POV-Ray : Newsgroups : povray.off-topic : Database Questions : Re: Database Questions
From: Tom Austin
Date: 9 Mar 2011 12:38:57
Message: <4d77bb31$1@news.povray.org>
On 3/8/2011 6:22 PM, Darren New wrote:
> Tom Austin wrote:
>> I think most of our slowdowns are currently in the applications that
>> access it. Basically inefficient routines and use of objects. A
>> required approach when coding time is of the essence.
>
> Yeah, but now you have some luxury to do it right, I'd wager, since you
> already have a working system.
>

Exactly - but just how much time we can allocate to it remains to be 
seen.  I think I can push for more time now, because skipping it will 
cost much more time later.  But that argument only works to a point.

>> Does it make sense to set up a transaction wrapper on even simple
>> single updates?
>
> If you're modifying more than one row at a time, yes. In most database
> engines, each statement you submit becomes its own transaction. This
> isn't true on all types of MySql tables. For example, if you have a
> MyISAM table and you say "update blah set x = 50 where y = 3" and it
> gets interrupted half way, you might update only half the rows where y = 3.
>
> A single append of a single row is atomic. You're not going to corrupt
> the database per se. Just the consistency of the data inside the database.
>
> So, yes, there's really so little overhead in using a transaction around
> anything that affects more than one row that it makes no sense not to do
> so.
>

Makes sense - anything that affects just one row in one table is fine 
as-is.  Transaction-wrap everything else :-)
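A minimal sketch of that transaction wrapper, using Python's built-in 
sqlite3 (a different engine than MySQL/MyISAM, but the pattern is the 
same; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE poles (y INTEGER, x INTEGER)")
conn.executemany("INSERT INTO poles VALUES (?, ?)", [(3, 1), (3, 2), (2, 9)])
conn.commit()

# Wrap the multi-row update so it applies to every matching row or to
# none of them -- never "only half the rows where y = 3".
try:
    with conn:  # begins a transaction; commits on success, rolls back on error
        conn.execute("UPDATE poles SET x = 50 WHERE y = 3")
except sqlite3.Error:
    pass  # rollback already restored a consistent state

print(conn.execute("SELECT x FROM poles WHERE y = 3").fetchall())
```

With InnoDB-style tables the `with conn:` block corresponds to an 
explicit BEGIN/COMMIT around the statement.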

>>> That said, it's surprisingly common that if you structure your queries
>>> right, you can make every operation essentially an "append to this
>>> table", followed by an eventual "update that table in a way that if it
>>> fails, we can reconstruct that table entirely from scratch". Think
>>> Journal -> General Ledger.
>>>
>>
>> OK, I've been dealing with databases for 15 years now - on mostly a
>> low level. that just went right over my head. Makes sense, but I have
>> no concept of how it is implemented.
>
> I've usually done financial databases. So, basically, all the history of
> everything in and out of the database is recorded in one set of tables,
> while the current state is in a second set of tables.
>
> Like your checkbook: Every check you write gets a row appended, and you
> never change that row except to set a flag that you never unset (the
> "this check cleared" flag). But there's another table that lists each
> account and says how much money is in it. In theory, you can construct
> the latter from the former, if you keep all your old checkbook registers
> around.
>

OK, makes sense - if you keep a 'transaction' history.  In the financial 
world, highly recommended.
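The checkbook pattern described above can be sketched like this (a toy 
example in Python's sqlite3; the table and account names are invented 
for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Append-only history: rows are inserted, never updated or deleted.
    CREATE TABLE journal (account TEXT, amount INTEGER);
    -- Current state, reconstructible from the journal at any time.
    CREATE TABLE balances (account TEXT PRIMARY KEY, balance INTEGER);
""")

def post(account, amount):
    with conn:  # the append and the balance update succeed or fail together
        conn.execute("INSERT INTO journal (account, amount) VALUES (?, ?)",
                     (account, amount))
        conn.execute("""INSERT INTO balances VALUES (?, ?)
                        ON CONFLICT(account) DO UPDATE
                        SET balance = balance + excluded.balance""",
                     (account, amount))

post("checking", 1000)
post("checking", -250)

# If 'balances' were ever lost or corrupted, rebuild it from 'journal':
rebuilt = conn.execute(
    "SELECT account, SUM(amount) FROM journal GROUP BY account").fetchall()
print(rebuilt)  # same totals as the balances table
```

The point is that the journal is the source of truth; the balances 
table is just a cache of it that can be regenerated from scratch.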

I don't think we will have to go that far.

Our bigger problem is getting the data that is entered to be correct - 
not just structurally correct, but actually correct.  That comes down to 
the interfaces, which were assembled with too much haste.

>> for some reason I am hesitant to present the key as something that the
>> end user interacts with. Maybe I'm a control freak :-&
>
> No, there's almost never a reason to present the key that way. That
> doesn't make it less useful as a key. :-)
>

True.  I believe that if the system is built right the user will NEVER 
have to look at the raw data.  The interfaces should give the user all 
of the information they need.  And if they need to muck with raw data 
for some kind of reporting, the system can output some form of CSV that 
the user can manipulate at will.

The keys can be almost anything without changing the functionality of 
the database or how the user interacts with it.  A readable key does, 
however, make things easier for the person trying to track down a 
problem that somehow got into the data.
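One way to keep a surrogate key purely internal - a sketch in Python's 
sqlite3, with hypothetical pole/owner tables loosely based on the 
utility-pole work mentioned in this thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE owners (owner_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE poles  (pole_id  INTEGER PRIMARY KEY,
                         tag      TEXT,   -- the field tag printed on the pole
                         owner_id INTEGER REFERENCES owners(owner_id));
""")
conn.execute("INSERT INTO owners (name) VALUES ('Acme Power')")
conn.execute("INSERT INTO poles (tag, owner_id) VALUES ('P-1017', 1)")

# The interface (and any CSV export) joins on the key but never shows it;
# the user only ever sees the tag and the owner name.
rows = conn.execute("""
    SELECT p.tag, o.name
    FROM poles p JOIN owners o ON o.owner_id = p.owner_id
""").fetchall()
print(rows)
```

The surrogate `owner_id` stays an implementation detail, so it can be an 
integer, a GUID, or anything else without the user ever noticing.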

>>>> OK, you want a job surveying utility poles :-) You have good insight.
>>> I've just done this dozens of times in the last couple decades.
>> what - survey telephone poles?
>
> No, I only did that once. :-) Design databases from scratch.
>
>> that is very correct. Our first stab was fast and mostly dirty. We
>> spent a lot of time to try to get the data right, but we missed some
>> pieces of data.
>
> That's not unusual. The trick is to have the database structured in a
> way you can add the missed data later without *changing* what's already
> there. The best way to do that is to primarily record into the database
> exactly what's outside the database, reflecting reality as closely as
> possible.
>

Yes, agreed.  Depending on how quickly the thing was built, it can be 
very difficult if a complete aspect was missed.  Also, one has to define 
the base rules for *what* is stored and *how* before adding the data, or 
you get a mess - which we are approaching.

>
>>> What about your current system is so broken that you can't afford the
>>> time to fix it?
>>>
>>
>> It has layers upon layers of fixes. The end users had to fight with it
>> to get it to put out data correctly. it was an ad-hoc rich system to
>> the point where I refused to make any more changes because of the risk
>> of data corruption/loss and our client getting crap for a deliverable.
>
> Been there done that. Along with "before I start, has this ever been
> backed up?" Answered by "No, it has never been reliable enough we could
> get a backup of the data." Followed by "Thanks, but I like my current
> job enough."
>

We are not quite that far :-)  But it is nice to be starting with trying 
to make a working system instead of forcing a broken one to continue to 
work.


>>> The other possibility is to hire someone who actually does this sort of
>>> stuff for a living for long enough to show you how to set up the basics
>>> of the database.
>>>
>>
>> It's a balance. Depending on our requirements it could take just as
>> much of our time working with the guy and doing it ourselves.
>
> This is true. I don't think it would take any less time, but you'd
> probably have a more robust system when you were done.
>

Very true - worth considering.

>> yes, one just has to pay attention and not set up things that break.
>> It could be that applications do not rely on the views (or other
>> non-common features) But then one could be crippling themselves.
>
> Yes. It's better to just say "if we need multiple archives of this
> database, we'll use the same engine."
>

I just have to get this through to the rest of the guys.

>> yes, I don't think it will take much of a machine. We have an old
>> server box (PIII 600 I think) that would likely do just that. Or I
>> might incorporate MySQL on our current file server (Linux) to simplify
>> things and use the old box for development.
>
> Yeah. And you'll likely want a replicated server as well.
>

Not a bad idea, though it does get a bit more complicated.  I will take 
it as a strong suggestion.

>> Luckily none of them were damaging. Just annoying and cost a couple of
>> hours of everyone's time.
>
> Yeah, that's much better than the place catching on fire. :-)
>

We do have off-site backups :-)   Just 500GB in one big tarball tho :-(


