Can I sacrifice database fundamentals in this case for speed?

I work for a company that uses an Access database for outbound CMS, and I have migrated it to a SQL Server based system. There is a data list table (not normalized) and a call table. The system currently takes about one update per second. All call results, along with date, time, and agent ID, are stored in the call table. Agents have a predefined set of records that they will call every day (this includes records from different data lists, sorted to distribute evenly across their whole set). Note that each data list entry is called at most once a day.

To keep things fast, the current day's updates are stored in duplicated call fields in the data list table. They are then copied to the call table in a batch process at the end of the day.

The reason for this is not the rate at which new records are added to the live call table. Rather, when the user application is closed and reopened and reloads the user's dataset, I need to check which records have not been called today. Originally that meant running a stored proc on the server that fetched the most recent call from the call table and checked whether its call date was out of date, and I found that to be a more expensive query than checking whether a field in the data list table is NULL.
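Roughly, the two checks look like this (table and column names here are simplified stand-ins for my real schema):

    -- Expensive version: latest call per record from the call table,
    -- compared against the start of today
    SELECT dl.id
    FROM data_list AS dl
    LEFT JOIN (
        SELECT data_list_id, MAX(call_date) AS last_called
        FROM calls
        GROUP BY data_list_id
    ) AS c ON c.data_list_id = dl.id
    WHERE c.last_called IS NULL
       OR c.last_called < DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0);

    -- Cheap version: the duplicated field is reset to NULL overnight,
    -- so NULL simply means "not called today"
    SELECT id
    FROM data_list
    WHERE last_call_date IS NULL;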

With this setup, the expensive query runs only once, at the end of each day.
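The end-of-day batch is essentially this (again with simplified names):

    -- Copy the day's duplicated call fields into the call table...
    INSERT INTO calls (data_list_id, call_date, agent_id, result)
    SELECT id, last_call_date, last_agent_id, last_result
    FROM data_list
    WHERE last_call_date IS NOT NULL;

    -- ...then clear them so the NULL check works again tomorrow
    UPDATE data_list
    SET last_call_date = NULL,
        last_agent_id  = NULL,
        last_result    = NULL;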

There are probably many pitfalls in this design; the main cause is my inexperience, as this is my first SQL Server system. The system is critical to the business, so I had to make sure it worked and that I could easily dump the data back into the Access DB during a live failure. It has run for 11 months (no live failure, and less downtime than the old system).

I've designed fairly well normalized databases for other things (with far fewer users), but I'm hesitant to do the same for this calling database.

Specifically, I would like your thoughts on whether duplicating the call fields in the data list table is necessary in my current setup, or whether I should rely on the call table alone. Please try to answer from my point of view. I know you DBAs may cringe!

+2




5 answers


I think whether to normalize depends on how much the system is likely to change and what you might need from it.

For example, as Jan pointed out, it has been running this long already: are there features you want to add that will affect the database schema?

If not, just leave it as it is. But if you need to add new features that change the database, that is the point at which to look at normalizing it.



You should not need a stored procedure for this; a plain SELECT statement taking MAX(id) per user ID, or MAX(id) over the whole table, whichever you need, will do.
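For example, something like this, assuming the call table has an identity column id and an agent_id column (adjust to your actual names):

    -- Most recent call per data list record for one agent
    SELECT data_list_id, MAX(id) AS latest_call_id
    FROM calls
    WHERE agent_id = @agent_id
    GROUP BY data_list_id;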

Before you decide to normalize or make any significant architectural change, first look at why you are doing it. If you are doing it only because you feel it ought to be done, stop and consider whether there is anything else you could do instead; perhaps add some timing tests so you have numbers for how long the operation actually takes. Numbers are good to have before making major changes, so you can tell whether there is any real benefit.
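In SQL Server you can get those numbers directly in Management Studio; for instance (using the question's simplified names):

    -- Report CPU/elapsed time and logical reads for whatever runs next
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    SELECT data_list_id, MAX(call_date) AS last_called
    FROM calls
    GROUP BY data_list_id;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;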

0




Redesigning an already working database can be a major undertaking. Rather than starting from scratch, try optimizing what you currently have: think about indexes, referential integrity, key allocation techniques, proper use of joins, and so on.
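For instance, an index shaped to the "most recent call per record" lookup (a sketch, with column names assumed from the question) may make the normalized query cheap enough on its own:

    -- Lets SQL Server find the latest call for a record without scanning the table
    CREATE INDEX IX_calls_record_date
        ON calls (data_list_id, call_date DESC);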

Actually, take a look here:



Database development mistakes made by application developers

It gives some very useful pointers.

+2




What the Normalization Nazis forget is that database design usually has two stages: logical design and physical design. Logical design is where normalization happens; physical design is where you "get the job done", weighing, among other things, the benefits of normalization against the benefits of breaking it.

The classic example is an Order table and an Order-Detail table, where the order header carries a "total price" derived from the order details and related tables. Keeping the total order price there can still make sense, even though it breaks normalization.
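In schema terms the example looks something like this (a sketch):

    CREATE TABLE OrderHeader (
        order_id    int PRIMARY KEY,
        order_date  datetime NOT NULL,
        total_price money NOT NULL  -- derivable from OrderDetail; stored for speed
    );

    CREATE TABLE OrderDetail (
        order_id   int NOT NULL REFERENCES OrderHeader (order_id),
        line_no    int NOT NULL,
        quantity   int NOT NULL,
        unit_price money NOT NULL,
        PRIMARY KEY (order_id, line_no)
    );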

A normalized design is meant to protect your data's integrity and keep your database flexible, but performance is one of the considerations that physical design weighs in. Look at reporting databases, for example. And don't get me started on storing time-series data.

Ask yourself: how confident am I that maintainability or flexibility is significantly hampered by this decision? Does it force a lot of code changes or data redesign when something changes? If not, and you are happy the design works as needed, I wouldn't bother.

+2




I would ask you to be a little clearer about the particular dilemma you are facing: if your system has performed this well for 11 months, what makes you think it needs any changes?

0




I'm not sure whether you realize that "database fundamentals" can refer to the fundamentals of logical database design as well as the fundamentals of physical database design, or whether you are aware of the difference between them.

The fundamentals of logical database design should not (and indeed cannot) be "sacrificed" for speed, precisely because speed is determined only by physical design choices, and in physical design the primary deciding factor is precisely speed and performance.

0








