Massive View vs History table

I am working on an Oracle 11g database. The system currently has many history tables for transaction-related records, and the problem we are having is that transaction history records span multiple tables. As an example, we have a table TransactionDetail that has a version column and a table TransactionQuestions that has its own version column. When historical information is required, a massive view is used to retrieve that data from the database. It is incredibly slow, and because of its size and the complexity of resolving versions across many joins, we constantly find bugs in it.
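
For concreteness, here is a minimal sketch of the kind of layout and combining view being described. Only the table names TransactionDetail and TransactionQuestions come from the question; the column names, the exact version-matching rule, and the view itself are assumptions for illustration:

-- Hypothetical versioned history tables (column names are assumptions).
CREATE TABLE TransactionDetail (
    transaction_id  NUMBER        NOT NULL,
    version         NUMBER        NOT NULL,
    detail_data     VARCHAR2(200),
    CONSTRAINT pk_trans_detail PRIMARY KEY (transaction_id, version)
);

CREATE TABLE TransactionQuestions (
    transaction_id  NUMBER        NOT NULL,
    version         NUMBER        NOT NULL,
    question_data   VARCHAR2(200),
    CONSTRAINT pk_trans_questions PRIMARY KEY (transaction_id, version)
);

-- The "massive view": for each detail version, find the matching
-- question version. With many history tables this pattern repeats for
-- every one of them, which is where the size, slowness, and bugs come from.
CREATE OR REPLACE VIEW transaction_history_v AS
SELECT d.transaction_id,
       d.version,
       d.detail_data,
       q.question_data
FROM   TransactionDetail    d,
       TransactionQuestions q
WHERE  q.transaction_id = d.transaction_id
AND    q.version = (SELECT MAX(q2.version)
                    FROM   TransactionQuestions q2
                    WHERE  q2.transaction_id = d.transaction_id
                    AND    q2.version       <= d.version);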

The approach we are going to take is, instead of storing the data in multiple history tables whose sole purpose is to be combined later into a bulk view, to keep the consistent state of the system for each transaction in one massive table with many columns. Put an index on the PK and the problem is solved.
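
A minimal sketch of what that flat table could look like, assuming a composite primary key of transaction id plus version (all names here are illustrative, not from the actual schema):

-- Hypothetical flat history table: one wide row per transaction version,
-- populated by application code at write time.
CREATE TABLE TransactionHistory (
    transaction_id  NUMBER        NOT NULL,
    version         NUMBER        NOT NULL,
    detail_data     VARCHAR2(200),
    question_data   VARCHAR2(200),
    -- ...one column for every attribute the massive view exposes today
    CONSTRAINT pk_trans_history PRIMARY KEY (transaction_id, version)
);

-- Reading a transaction's history becomes a single indexed range scan,
-- with no joins and no version resolution.
SELECT *
FROM   TransactionHistory
WHERE  transaction_id = :txn_id
ORDER  BY version;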

This will solve the performance problem, since the data will be retrieved without any joins or version resolution.

The biggest flaw of the table approach, in my eyes: if there is an error in the structure of a SQL view, it can be corrected and the actual history data is not affected; but if there is an error in the engine that writes data to the history table (the logic essentially moves out of the view and into the code), the data becomes corrupt and impossible to fix.

What other disadvantages does a massive history table have compared to a view that combines data from multiple history tables?

1 answer


Have you considered creating a materialized view? Materialized views give significantly better read characteristics while maintaining the normalized table structure. They have a couple of trade-offs, including higher disk utilization (essentially materializing the "massive history table"). This basically creates your massive history table and lets Oracle do the heavy lifting of serializing changes from the normalized tables.
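
For example (a sketch only, with assumed names; it also assumes the combining logic can be expressed as a plain join, since subqueries like the version-resolution shown above generally rule out fast refresh and would force a complete refresh on demand instead):

-- Materialized view logs let Oracle apply base-table changes incrementally.
CREATE MATERIALIZED VIEW LOG ON TransactionDetail    WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON TransactionQuestions WITH ROWID;

-- A fast-refreshable join MV must select the ROWID of every base table.
CREATE MATERIALIZED VIEW transaction_history_mv
    BUILD IMMEDIATE
    REFRESH FAST ON COMMIT
AS
SELECT d.ROWID AS d_rid,
       q.ROWID AS q_rid,
       d.transaction_id,
       d.version,
       d.detail_data,
       q.question_data
FROM   TransactionDetail    d,
       TransactionQuestions q
WHERE  q.transaction_id = d.transaction_id
AND    q.version        = d.version;

-- Queries read the precomputed rows directly:
SELECT * FROM transaction_history_mv WHERE transaction_id = :txn_id;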


