Is it possible to prevent batch updates at the SQL database level?

A simple, stupid

UPDATE table SET something = another WHERE (always true)

can easily destroy everything in the database. It could be a human mistake, an SQL injection / overflow / truncation attack, or a bug in the code that builds the WHERE clause.

Do popular databases offer a feature that protects tables by limiting the maximum number of rows a single SQL statement can update?

I mean some kind of defensive setting enforced right in the database, before access: no way to work around it, less code in the application, and no human error (provided you lack the permissions to change the setting).



7 replies


You can add a trigger that checks how many rows are being updated (count the rows in the inserted pseudo-table) and RAISERROR if there are too many rows.
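
A minimal sketch of this for SQL Server, assuming an illustrative table dbo.myTable and an arbitrary 1000-row cap (both the names and the limit are placeholders, not values from the answer):

-- Roll back any single UPDATE or DELETE that touches more than 1000 rows.
CREATE TRIGGER dbo.trg_myTable_MaxRows ON dbo.myTable AFTER UPDATE, DELETE
AS
IF (SELECT COUNT(*) FROM deleted) > 1000
BEGIN
    RAISERROR('Statement affects too many rows; rolling back.', 16, 1)
    ROLLBACK TRANSACTION
END

Counting the deleted pseudo-table covers both cases, since it holds the old image of the rows for updates as well as the removed rows for deletes.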





I don't know of any.

I'm not sure this would solve anything, either. How can the database distinguish between an SQL injection attack and a legitimate overnight batch update that exceeds the limit?



That assumes auto-commit is set to true. If an SQL injection attack does get through, you still have the option of rolling it back, assuming you are watching the logs, etc.

I think the real answer is better application layering: validation, parameter binding, and so on. SQL injection cannot happen if those measures are in place.
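
As an illustration of binding on the database side, here is a sketch using SQL Server's sp_executesql; the table, columns, and values are made up for the example:

-- The user-supplied value is bound as a parameter, never concatenated
-- into the statement text, so it cannot change the shape of the WHERE clause.
DECLARE @name nvarchar(50) = N'value taken from user input';
EXEC sp_executesql
    N'UPDATE dbo.myTable SET something = @val WHERE name = @name',
    N'@val varchar(100), @name nvarchar(50)',
    @val = 'new value', @name = @name;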



The short answer is no...

Oracle lets you set up profiles, which can be assigned to users to limit the use of resources such as CPU time and logical reads. However, this is not intended for your purpose; it has more to do with resource management in a multi-user environment.
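
A sketch of such a profile, with an invented name and arbitrary limits (note that enforcing resource limits also requires the RESOURCE_LIMIT initialization parameter to be TRUE):

CREATE PROFILE limited_app LIMIT
    CPU_PER_CALL 3000              -- hundredths of a second per call
    LOGICAL_READS_PER_CALL 10000;  -- database blocks read per call

ALTER USER app_user PROFILE limited_app;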

Perhaps more importantly, Oracle also has Flashback Table, so that unintended changes can be easily reversed.
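
For example, a sketch of undoing a bad update with Flashback Table (the table name is illustrative; row movement must be enabled on the table and the undo data must still be available):

ALTER TABLE myTable ENABLE ROW MOVEMENT;
FLASHBACK TABLE myTable TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;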

Most of your scenarios should be addressed by other means:

  • Human error: most users should not be granted update privileges on tables; they should have to go through an API (usually via an application) to perform updates. DBAs need to be very careful when they work on databases directly, and a row limit would not restrain them in any way: they could drop the table entirely!
  • Injection attacks: these can and should be prevented outright.
  • Code errors: these should be caught by proper testing.

If your data is important, it should be properly protected and verified as described above, and then there is no need for a maximum-rows-per-update limit; if your data is not important enough to protect that way, then why bother?



As David B first pointed out, you can do this with a trigger. It's good practice to start triggers with a @@ROWCOUNT test anyway. So, imagine:

CREATE TRIGGER dbo.trg_myTrigger_UD ON dbo.myTable FOR UPDATE, DELETE
AS
IF @@ROWCOUNT <= 1 RETURN  -- zero or one row affected: let it through
RAISERROR('Statement affects more than one row.', 16, 1)
ROLLBACK TRANSACTION

This will block (by rolling back) any update or delete that affects more than one row.

Generally, I start with a test for a row count of 0. The point is that if the trigger was fired by a statement that didn't actually touch any rows (UPDATE myTable SET col1 = 'hey' WHERE 1 = 0), there is no point in running the rest of the trigger code, since there is nothing for it to act on.



I understand your reasons, but how do you want to handle batch updates that are legitimate?

If you make some changes by hand and want to be able to "undo" them, use transactions. If you want to be able to reconstruct your data afterwards, use a change log. But you cannot build a "correct / incorrect batch" check that works with 100% accuracy from the batch alone.
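
A sketch of the change-log idea as a SQL Server trigger; the audit table and its columns are invented for the example:

-- Copy the old version of every updated row into an audit table.
CREATE TRIGGER dbo.trg_myTable_Audit ON dbo.myTable AFTER UPDATE
AS
INSERT INTO dbo.myTable_audit (id, something, changed_at)
SELECT d.id, d.something, GETDATE()
FROM deleted AS d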



Just write stored procedures and expose only those to your users, and don't run under a privileged account in normal situations. Connect as an administrator only when necessary.
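
A sketch of that setup in SQL Server; the procedure, table, and role names are invented for the example:

-- Users can call the procedure but cannot touch the table directly.
CREATE PROCEDURE dbo.usp_SetSomething
    @id int,
    @value varchar(100)
AS
UPDATE dbo.myTable SET something = @value WHERE id = @id
GO
GRANT EXECUTE ON dbo.usp_SetSomething TO app_role
DENY UPDATE ON dbo.myTable TO app_role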



You can wrap the update in a transaction and prompt the user (telling them how many rows will be updated) before committing.
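
A sketch of doing this manually in SQL Server (the UPDATE itself is only a placeholder):

BEGIN TRANSACTION
UPDATE dbo.myTable SET something = 'new value' WHERE created < '2019-01-01'
PRINT CAST(@@ROWCOUNT AS varchar(10)) + ' row(s) would be updated.'
-- After reviewing the count, run exactly one of:
-- COMMIT     -- keep the change
-- ROLLBACK   -- discard it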







