How to catch a SQL Server cracker?

The guys on my team are working on a production database (SQL Server 2005). We've made various changes: added constraints, added triggers, and so on.

We've now found that someone or something has been resetting our changes at various times. The problem is that we all share a common admin login (stupid, yes, I know; we're fixing it). This is frustrating, and at this point we just want to know whodunnit.

How would you go about tracking down the culprit?

NOTE: I am NOT looking for a way to fix this; that's already done. I'm looking for a way to track down the culprit.

0




11 replies


Stay away from production databases. Create your scripts and email them to the DBA (if you don't have one, get one). Developers with access to a production database are a recipe for disaster; I don't have such access and don't want it.



+10




Tracking down your culprit is obviously treating a symptom rather than the cause, but: since this is a SQL Server 2005 database, there should be a "default" trace running out of the box. It is very lightweight, but it does capture object creation and deletion events. You can find it through the sys.traces view using the following query:

-- trace id 1 is the default trace; the path column gives its file location
SELECT *
FROM sys.traces
WHERE id = 1

It rolls over after a few MB, so how far back it reaches will depend on how much activity there is on the server.
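If the default trace still reaches back far enough, you can read the trace file directly. Here's a minimal sketch (the event and column names are standard to the default trace; the file path is taken from sys.traces rather than hard-coded):

DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

-- List object create/alter/drop events with the login and host that issued them
SELECT t.StartTime,
       te.name AS EventName,
       t.DatabaseName,
       t.ObjectName,
       t.LoginName,
       t.HostName,
       t.ApplicationName
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE te.name IN ('Object:Created', 'Object:Altered', 'Object:Deleted')
ORDER BY t.StartTime DESC;

Note that with a shared admin login the LoginName column will be the same for everyone, but HostName and ApplicationName can still narrow it down to a particular machine.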



Presumably the real root cause is a lack of change control and version control over your schema changes.

I agree with the other posters who have mentioned that all changes to the production database should be made only by an administrator, not by individual developers.

+4




I hope you have an audit trail or some change-capture mechanism in place; that will record who made each change, what it was, and when it happened.

Are the resets intermittent or consistent? Any chance someone turned off auto-commit and forgot to commit their changes?
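That failure mode is easy to reproduce. A minimal sketch, assuming a hypothetical dbo.Orders table: with implicit transactions enabled, DDL opens a transaction that has to be committed explicitly, and a dropped connection silently rolls the change back.

SET IMPLICIT_TRANSACTIONS ON;

-- hypothetical change to a hypothetical table
ALTER TABLE dbo.Orders
    ADD CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0);

-- If the session disconnects before this COMMIT, the constraint is
-- rolled back and the change quietly vanishes.
COMMIT;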

There can't be that many people with sufficient rights to do this. Find out who they are and ask them; that beats any technology you could put in place.

A hacker? It's more likely someone on the inside. If anyone outside your firewall has access to this database, you need to talk to your network people.

Try putting a monitor on the server's address and port to see what requests are coming through.

+2




What you need to watch out for is that if someone with admin access is changing the database maliciously, you have to assume they are smart enough to cover their tracks. At that point you can stop further damage, but if the attacker is actually good, the logs may have been altered so that you blame the wrong person, or so that no evidence points to the right one.

The best way to handle this is to ensure that no one has standing admin access to the production database. We have a system set up so that no account has administrative access by default and everyone has their own individual account. Nobody can use the sa account.

Admin access has to be granted by someone, and it is automatically revoked 24 hours after it is granted. Ideally, the person granting access should not be the same person receiving administrative access to the database.

That way, two people are always involved in making changes to the system, and the second person can check what the first is doing. (It's easy to make mistakes at 10 a.m. after a few hours of work.)

People will push back against this, saying that sometimes they "need" to be able to make quick changes. In most cases they don't. It may take an extra 10 minutes to get the second person involved and explain the situation; it takes years to shed a reputation for stealing or modifying corporate data.

+2




Start by adding the user-level security you should already have: give each person an individual login so that every action can be attributed to someone.
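A minimal sketch of what that looks like, assuming a hypothetical developer login jsmith and a production database named ProdDB:

USE [master];
-- one login per person instead of the shared admin account
CREATE LOGIN jsmith WITH PASSWORD = 'choose-a-strong-password-here';
GO
USE [ProdDB];
CREATE USER jsmith FOR LOGIN jsmith;
-- grant only the rights actually needed; read-only here
EXEC sp_addrolemember N'db_datareader', N'jsmith';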

+1




Can you cross-reference the times of the resets with the whereabouts of the people on the team?

Or, alternatively, just ask everyone?

+1




SQL Server 2005 added DDL triggers (alongside the existing DML triggers), so you can track who modifies the schema as well as who modifies the data; a sketch follows.
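Here's a minimal DDL-trigger sketch, assuming a hypothetical audit table dbo.SchemaChangeLog; it records every database-level DDL statement along with who issued it and from where:

CREATE TABLE dbo.SchemaChangeLog (
    EventTime   DATETIME      NOT NULL DEFAULT GETDATE(),
    LoginName   SYSNAME       NOT NULL,
    HostName    NVARCHAR(128) NULL,
    EventType   NVARCHAR(100) NULL,
    ObjectName  NVARCHAR(256) NULL,
    TsqlCommand NVARCHAR(MAX) NULL
);
GO
CREATE TRIGGER trgAuditDDL
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e XML;
    SET @e = EVENTDATA();
    INSERT INTO dbo.SchemaChangeLog (LoginName, HostName, EventType, ObjectName, TsqlCommand)
    VALUES (ORIGINAL_LOGIN(),
            HOST_NAME(),
            @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'));
END;

Even with a shared login, ORIGINAL_LOGIN() plus HOST_NAME() will at least tell you which machine each change came from.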

+1




If you're fixing it, and by "fixing it" I mean locking down the production database and following some of the other practices mentioned here, then don't worry about hunting for the culprit. If it was accidental, then once you lock things down, whoever did it will start wondering why something isn't working.

Tracking the user who did this won't solve anything. If it was malicious, they will lie and say it was accidental.

The root cause is the security of the database, so the team at fault is the one that left this database so vulnerable.

+1




Asking everyone is not helpful: people lie, and/or don't know that they screwed it up. You have to assume it is malicious, but hope it is not.

0




Wow, you have a real problem. If you can't trust your own people ...

Temporarily disable all logins but one. Make sure the person using it knows what they are doing and is not lying.

0




In addition to what you've already been told in the replies, my vote is that it is nobody; you are simply mistaken about how the system is being used.

Now, don't get me wrong, I am not talking about incompetence here. What I mean is that there may be scripts that run periodically, and someone rightly mentioned that auto-commit can be turned off and quietly fool someone into thinking their change stuck.

I also believe you are asking for trouble by mixing ANY development into a production environment. Disk space is cheap (terabytes are currently under $300!), and in most cases you don't need whiz-bang performance for development ...

0

