So where I work, we contract out the DBAs (Database Administrators) to a company in MA. The stupidity is astounding.
They’re trying to do a restore of live data onto a test server, except Avamar is throwing errors, and I get an email saying there must be something wrong with the config. Problem is, Avamar is running perfectly.
Then I get an email from the DBAs saying the load average on the test server is spiking to 50 or more. Now, I know from experience that if you look at the top processes and see an Oracle process that has been running for a hundred hours, you can pretty much be certain Oracle is the culprit. The issue with Oracle is that it’s gotten bloated over time, with all sorts of data redundancy, checks, etc. So it can take a server whose normal load average is 0.2 up to 50. Orders of magnitude.
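If you want to do the same check without staring at top, here’s a quick sketch (assuming a Linux box with GNU ps, as in this case) that sorts processes by cumulative CPU time, which is exactly where a runaway Oracle process with a hundred hours on the clock would surface:

```shell
# List the top CPU-time hogs: a runaway database process will show
# an enormous value in the TIME column (cumulative CPU time).
ps -eo pid,etime,time,pcpu,comm --sort=-time | head -n 10

# And check the load average itself (1, 5, and 15 minute figures).
uptime
```

Note that load average is a count of runnable/waiting processes, not a percentage, which is why a box that idles at 0.2 can legitimately report 50.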
So I captured an image of the top processes, brought it into MS Paint, drew a line around the offenders, and sent it back to them saying, “I wonder what could be causing the high load.” In essence, it’s the I.T. equivalent of “physician, heal thyself.” You’re smart enough to log in to a shell and pull up the top utility, but you can’t see that the product you’re responsible for is causing the problem? Though a more apt analogy might be that they can’t see the forest for the trees.
You see, I know that when Oracle can’t get what it wants, all that redundancy and checking panics and consumes nearly every CPU cycle on multiple cores. This is a common thread with database products: Oracle has this behavior, and MySQL chokes on bad SQL, but I’ve never run into a Postgres database that did.