A design pattern that allows you to inject code at specific points
I am trying to allow developers to extend my code at specific points of execution.
My specific example is a database transaction wrapper. The wrapper takes care of many details that we wanted to take away from the developer and is used in several projects.
In every project, however, there are certain things they would like to happen automatically during a transaction. I would like to add hook points that each project can wire up to run its own code.
For example, every table in our database has a Date Entered field that is updated every time a record is changed. We want all of those dates to be identical within a single transaction, no matter how many records are affected (e.g. 4 records in table A, 1 record in table B, ...).
My idea is to define the interception points "TransactionStarting", "TransactionStarted", "StatementExecuting", "StatementExecuted", ... and pass a context object to each point.
The project could then define an "EnteredDateManager" class that stores the current date at the "TransactionStarted" point and applies it to each record's EnteredDate at the "StatementExecuting" point.
I would like to configure this in the web/app.config file and allow registration of multiple interceptor classes. If more than one class is registered, they should run in the order in which they were registered.
I was thinking about just raising events, but I want the order to matter. I also want to be able to share state between the different points: in my example above, the EnteredDate value is captured at the TransactionStarted point and used at the StatementExecuting point.
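To make it concrete, this is roughly the shape I have in mind (ITransactionInterceptor and TransactionContext are just placeholder names, not existing types):

using System.Collections.Generic;

// Placeholder interface each project would implement.
public interface ITransactionInterceptor
{
    void TransactionStarting(TransactionContext context);
    void TransactionStarted(TransactionContext context);
    void StatementExecuting(TransactionContext context);
    void StatementExecuted(TransactionContext context);
}

// Shared state passed to every interception point, so a value stored
// at TransactionStarted is still visible at StatementExecuting.
public class TransactionContext
{
    private readonly IDictionary<string, object> items = new Dictionary<string, object>();
    public IDictionary<string, object> Items { get { return items; } }
}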
Is this a Chain of Responsibility? AOP? It is similar to how the ASP.NET pipeline works, but that uses events and, as far as I know, does not guarantee order.
Any direction / examples would be great.
Thanks!
Sounds like Aspect-Oriented Programming (AOP) to me. Check out PostSharp.
Here's a tracing example from their website:
public class TraceAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        Trace.TraceInformation("Entering {0}.", eventArgs.Method);
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        Trace.TraceInformation("Leaving {0}.", eventArgs.Method);
    }
}
I use it for logging / tracing, caching and performance monitoring.
One way to do this is to simply use the basic Strategy pattern. With a strategy, you push the functionality out into a separate class that your class calls, instead of implementing the logic directly in the original class. Strategy classes can be plugged into the original class through interface-typed properties or constructor arguments (or both). That way, the user can inject different behaviour into the processing flow defined by the original class.
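As a rough sketch (none of these names come from an existing library), the transaction wrapper could accept an ordered list of hook strategies and call each one at every interception point:

using System.Collections.Generic;

// Hypothetical strategy interface for the hook points described in the question.
public interface ITransactionHook
{
    void TransactionStarted(IDictionary<string, object> state);
    void StatementExecuting(IDictionary<string, object> state);
}

// The wrapper calls each registered hook in order and shares one state bag between them.
public class TransactionWrapper
{
    private readonly List<ITransactionHook> hooks;

    public TransactionWrapper(IEnumerable<ITransactionHook> hooks)
    {
        this.hooks = new List<ITransactionHook>(hooks);
    }

    public void Run()
    {
        var state = new Dictionary<string, object>();

        foreach (var hook in hooks)
            hook.TransactionStarted(state);    // e.g. an EnteredDateManager stores DateTime.Now here

        // ... for each statement about to be executed ...
        foreach (var hook in hooks)
            hook.StatementExecuting(state);    // e.g. the EnteredDateManager applies the stored date

        // ... execute the statements and commit or roll back ...
    }
}

The list preserves registration order, and the shared dictionary gives the hooks a place to pass state from one point to the next, which is exactly what a plain event-based design makes awkward.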
If it's .NET specific (you mentioned ASP.NET), I would highly recommend looking into the System.Transactions namespace and reading up on creating resource managers and enlisting them in a transaction.
Using TransactionScope, you can create an ambient transaction, and a resource manager running in that context can detect the transaction and enlist in it (roughly the equivalent of your TransactionStarted point). Once enlisted, each resource manager gets to vote: it can commit its changes, roll them back, or force the whole transaction to roll back.
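A minimal sketch of how that enlistment could look for the Date Entered example from the question (EnteredDateManager is the question's name, and a volatile enlistment is assumed to be sufficient here):

using System;
using System.Transactions;

// A volatile resource manager that notices the ambient transaction and enlists in it.
public class EnteredDateManager : IEnlistmentNotification
{
    public DateTime EnteredDate { get; private set; }

    public void Enlist()
    {
        if (Transaction.Current != null)
        {
            EnteredDate = DateTime.Now;   // one date for the whole transaction
            Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
        }
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared();   // vote to commit
    }

    public void Commit(Enlistment enlistment) { enlistment.Done(); }
    public void Rollback(Enlistment enlistment) { enlistment.Done(); }
    public void InDoubt(Enlistment enlistment) { enlistment.Done(); }
}

public static class TransactionDemo
{
    public static void Run()
    {
        using (var scope = new TransactionScope())
        {
            var manager = new EnteredDateManager();
            manager.Enlist();
            // ... execute statements, stamping each record with manager.EnteredDate ...
            scope.Complete();   // Prepare/Commit fire on the enlisted manager
        }
    }
}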
The System.Transactions namespace, introduced in .NET 2.0, offers some pretty powerful tools for creating transactions and managing transactional resources. You can use lightweight transactions as well as fully distributed transactions managed by the MSDTC service. Transactions can be single-phase or two-phase, which gives you more flexibility and resilience if a transaction fails.