Cost of creating dbcontext for web request in ASP.Net

I am using the Unit of Work and Repository patterns together with EF6 in my ASP.NET web application. The DbContext object is created and destroyed on every request.

I suspect it is costly to create a new DbContext for every request, but I haven't done any performance measurements.

Is it okay to ignore the cost of creating a DbContext on every request? Has anyone done any benchmarking?

+3




2 answers


Entity Framework is not thread safe, meaning you cannot use the same context across multiple threads. IIS uses a thread for every request sent to the server. With this in mind, you should have one context per request. Sharing a context across requests would also expose you to serious risks: unexplained, seemingly random exceptions and potentially incorrect data being saved to the database.
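
To make the per-request lifetime concrete, here is a minimal sketch of one way to do it in classic ASP.NET MVC with EF6. The context, entity, and controller names (MyAppContext, Order, OrdersController) are made up for illustration; the question does not show its actual types.

```csharp
using System.Data.Entity;
using System.Linq;
using System.Web.Mvc;

// Hypothetical EF6 entity and context; the names are illustrative only.
public class Order
{
    public int Id { get; set; }
}

public class MyAppContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrdersController : Controller
{
    // One context per request: created when MVC instantiates the controller...
    private readonly MyAppContext _db = new MyAppContext();

    public ActionResult Index()
    {
        return View(_db.Orders.ToList());
    }

    // ...and disposed when MVC disposes the controller at the end of the request,
    // so the context never outlives the request or gets shared across threads.
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _db.Dispose();
        }
        base.Dispose(disposing);
    }
}
```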



Finally, creating the context is not that expensive an operation. If you are experiencing slow requests (not on first launch, but after the site has warmed up), your problem probably lies elsewhere.

+3




Creating a new context is ridiculously cheap, around 137 ticks on average (0.0000137 seconds) in my application.
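
If you want to check that figure in your own application, a rough Stopwatch sketch along these lines would do it. MyAppContext here is just a stand-in for your own EF6 context type.

```csharp
using System;
using System.Data.Entity;
using System.Diagnostics;

// Stand-in context type; substitute your own EF6 DbContext.
public class MyAppContext : DbContext { }

class ContextCreationBenchmark
{
    static void Main()
    {
        // Warm up once so any one-time initialization is excluded from the timing.
        using (var warmup = new MyAppContext()) { }

        const int iterations = 10000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Construct and dispose a fresh context, which is the cost in question.
            using (var ctx = new MyAppContext()) { }
        }
        sw.Stop();

        // TimeSpan ticks are 100 ns each, matching the "137 ticks" figure above.
        Console.WriteLine("Average ticks per context: {0:F1}",
            sw.Elapsed.Ticks / (double)iterations);
    }
}
```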

On the other hand, hanging on to a context can be incredibly expensive, so dispose of it often.

The more entities you query, the more entities are tracked in the context. Since the entities are POCOs, Entity Framework has no idea which ones you changed other than examining every entity in the context and marking it accordingly.

Of course, once they are marked, it will only issue database calls for the ones that need updating, but determining which ones need updating is expensive when there are many tracked entities, because it has to check every POCO against the known values to see whether it has changed.

This change tracking on SaveChanges is so expensive that if you are just reading and updating one record at a time, you are better off disposing of the context after each record and creating a new one. The alternative, hanging on to the context, means every record you read adds another tracked entity, and every call to SaveChanges gets a little slower.



And yes, it really is slower. If you update, say, 10,000 entities by loading them one at a time into the same context, the first SaveChanges takes about 30 ticks, but each subsequent call takes longer, until the last one takes more than 30,000 ticks. In contrast, creating a new context each time gives a consistent 30 ticks per update. In the end, because of the cumulative slowdown from hanging on to the context and all its tracked entities, disposing of the context and re-creating it before each commit ends up taking only about 20% of the total time (five times faster).

This is why you really should only call SaveChanges once per context, ever, and then dispose of it. If you are calling SaveChanges more than once with a lot of entities in the context, you may not be using it correctly. The obvious exception is when you are doing something transactional.
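
To make the two approaches concrete, here is a rough sketch of "one long-lived context with repeated SaveChanges" versus "a fresh context per record". The ShopContext and Customer types are invented for the example and are not from the original answer.

```csharp
using System.Data.Entity;
using System.Linq;

// Hypothetical entity and context, used only for illustration.
public class Customer
{
    public int Id { get; set; }
    public bool IsActive { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

static class UpdateStrategies
{
    // Slow with many records: every loaded entity stays tracked, so each
    // SaveChanges has to scan an ever-growing set of POCOs for changes.
    public static void UpdateAllInOneContext(int[] ids)
    {
        using (var db = new ShopContext())
        {
            foreach (var id in ids)
            {
                var customer = db.Customers.Single(c => c.Id == id);
                customer.IsActive = false;
                db.SaveChanges();   // gets slower as the tracked set grows
            }
        }
    }

    // Consistent cost: a fresh context per record keeps the tracked set tiny,
    // so each SaveChanges only has to inspect one entity.
    public static void UpdateWithFreshContexts(int[] ids)
    {
        foreach (var id in ids)
        {
            using (var db = new ShopContext())
            {
                var customer = db.Customers.Single(c => c.Id == id);
                customer.IsActive = false;
                db.SaveChanges();
            }
        }
    }
}
```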

If you need to do something transactional, you have to manually open your own SqlConnection and either begin a transaction on it or open it inside a TransactionScope. Then you can create your DbContext by passing that same open connection to it. You can do this over and over, disposing the DbContext each time while leaving the connection open. Normally the DbContext handles opening and closing the connection for you, but if you pass it an already open connection, it will not try to close it automatically.
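
Here is a minimal sketch of that pattern using EF6's DbContext(DbConnection, contextOwnsConnection) constructor and Database.UseTransaction. The BillingContext and Invoice types and the connection string are placeholders, not anything from the original answer.

```csharp
using System.Data.Entity;
using System.Data.SqlClient;

// Illustrative entity; substitute your own mapped types.
public class Invoice
{
    public int Id { get; set; }
    public bool Paid { get; set; }
}

public class BillingContext : DbContext
{
    // contextOwnsConnection: false tells EF6 not to close or dispose
    // the connection we hand it when the context is disposed.
    public BillingContext(SqlConnection connection)
        : base(connection, contextOwnsConnection: false) { }

    public DbSet<Invoice> Invoices { get; set; }
}

static class TransactionalWork
{
    public static void Run(string connectionString, int[] invoiceIds)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                // Create and dispose as many contexts as we like on the
                // same open connection, all inside one transaction.
                foreach (var id in invoiceIds)
                {
                    using (var db = new BillingContext(connection))
                    {
                        db.Database.UseTransaction(transaction);
                        var invoice = db.Invoices.Find(id);
                        invoice.Paid = true;
                        db.SaveChanges();
                    }
                }
                transaction.Commit();
            }
        }
    }
}
```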

So think of the DbContext as a helper that tracks changes to objects over an open connection. You can create and dispose it as many times as you like on the same connection, which is also where you start the transaction. It is very important to understand what is going on under the hood.

+1








