Is this a good solution for clearing a C# MemoryCache?

I read the questions and answers I could find about clearing a MemoryCache in C#. There have been many recommendations, such as:

  • Enumerate the cache and remove all items. According to others this is a bad idea, since forcing the enumerator blocks everything and all sorts of apocalypse happen; they quote part of the documentation I could not find, and show a warning I could not reproduce. In any case, I don't think this is a very efficient solution.

  • Store the keys in a separate collection and iterate over them to remove the items from the cache. This is not very thread safe either, and it is also not very efficient.

  • Dispose of the old cache and create a new one. This sounds good; the obvious problem that comes to mind, and that several comments point out, is that existing references to the old cache can cause trouble. Of course, the order of operations matters: keep a reference to the old cache, swap in a new one in its place, and only then dispose of the old one. Not everyone seemed to notice this little nuance.

So now what? One of my colleagues suggested MemoryCache for my problem, and he wrapped the cache object in another class that can get a key (loading it from the DB if needed), delete a key, or clear the whole cache. The first two are not important now; the third is the interesting one. I used the dispose-and-replace approach for it, relying on the principle that "it is guaranteed that there are no references to the MemoryCache object other than the one inside my own implementation". Relevant code:

Constructor:

public MyCache()
{
    _cache = new MemoryCache("MyCache");
}


ClearCache:

public void ClearCacheForAllUsers()
{
    var oldCache = _cache;
    _cache = new MemoryCache("MyCache");
    oldCache.Dispose();
}


_cache is a private MemoryCache field.

Could this cause problems in a multithreaded environment? I have reads and writes running in parallel. Should I implement some kind of locking mechanism that allows concurrent reads, but makes the cache-clearing method wait until the in-flight reads have finished?

My guess is that yes, this is necessary, but I would like some input before getting to it.
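For what it's worth, the locking scheme described above (concurrent reads, exclusive clear) maps directly onto ReaderWriterLockSlim. This is only a sketch of that idea, not the poster's actual code; the Get method is an assumed simplification of the real wrapper:

```csharp
using System.Runtime.Caching;
using System.Threading;

public class MyCache
{
    private MemoryCache _cache = new MemoryCache("MyCache");
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public object Get(string key)
    {
        _lock.EnterReadLock();      // many readers may hold this at once
        try { return _cache[key]; }
        finally { _lock.ExitReadLock(); }
    }

    public void ClearCacheForAllUsers()
    {
        _lock.EnterWriteLock();     // waits until all current readers finish
        try
        {
            var oldCache = _cache;
            _cache = new MemoryCache("MyCache");
            oldCache.Dispose();     // safe: no reader can still hold the old reference
        }
        finally { _lock.ExitWriteLock(); }
    }
}
```

The write lock guarantees that Dispose only runs when no reader is inside the cache, at the cost of blocking reads for the duration of the swap.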

Robert

EDIT 1, in response to Voo's answer: It is guaranteed that nothing outside the wrapper (MyCache) gets a reference to the _cache object. The following still worries me:

T1:
MyCache.ClearAll()
    var oldCache = _cache
T2:
MyCache.GetOrReadFromDb("stuff")
T1:
    _cache=new MemoryCache("MyCache")
    oldCache.Dispose()
T2:
    *something bad*


Apart from the T2 thread still using the old cache, which is not ideal but something I can live with, could there be a situation where the Get method somehow accesses the old cache in a disposed state, or reads from the new one before it has any data?

Imagine that the GetOrReadFromDb function looks like this:

public object GetOrReadFromDb(string key)
{
    if (_cache[key] == null)
    {
        // Read the value from the DB and insert it into the cache
    }
    return _cache[key];
}


I think it is possible that control is taken away from the reading thread and passed to the clearing thread, for example immediately after reading from the DB and before returning the _cache[key] value, and that this could cause problems. Is this a real concern?
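One thing worth noting: GetOrReadFromDb as written reads the _cache field up to three times, so a concurrent swap can make the lookup, the insert, and the return hit different MemoryCache instances. A sketch of a narrower version that captures the field once (ReadFromDb is a hypothetical placeholder for the DB call, not a method from the original post):

```csharp
public object GetOrReadFromDb(string key)
{
    var cache = _cache;            // read the field exactly once
    var value = cache[key];
    if (value == null)
    {
        value = ReadFromDb(key);   // hypothetical DB lookup
        cache[key] = value;        // insert into the same instance we read from
    }
    return value;
}
```

This removes the field-read race (everything happens against one instance), but it does not by itself protect against that instance being disposed mid-call; that still needs one of the synchronization or no-dispose schemes discussed in the answers.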

+3




2 answers


The problem with your solution is that it introduces a possible race condition that would require ugly locks or other thread synchronization solutions.

Use the built-in solution instead.

You can use your own ChangeMonitor subclass to evict your items from the cache. You add a ChangeMonitor instance to the ChangeMonitors property of the CacheItemPolicy that you pass when you call MemoryCache.Set.

For example:



class MyCm : ChangeMonitor
{
    private readonly string uniqueId = Guid.NewGuid().ToString();

    public MyCm()
    {
        // Required: every ChangeMonitor constructor must end with this call
        InitializationComplete();
    }

    protected override void Dispose(bool disposing) { }

    public override string UniqueId
    {
        get { return uniqueId; }
    }

    public void Stuff()
    {
        // Signals the cache, which evicts the entry this monitor is attached to
        this.OnChanged(null);
    }
}


Usage example:

        var cache = new MemoryCache("MyFancyCache");
        var obj = new object();
        var cm = new MyCm();
        var cip = new CacheItemPolicy();
        cip.ChangeMonitors.Add(cm);

        cache.Set("MyFancyObj", obj, cip);

        var o = cache.Get("MyFancyObj");
        Console.WriteLine(o != null);   // True

        o = cache.Get("MyFancyObj");
        Console.WriteLine(o != null);   // True

        cm.Stuff();                     // fires OnChanged; the entry is evicted

        o = cache.Get("MyFancyObj");
        Console.WriteLine(o != null);   // False

        o = cache.Get("MyFancyObj");
        Console.WriteLine(o != null);   // False
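To turn this into a "clear all" operation like the one the question asks for, one monitor per entry can be wired to a shared signal, so a single call evicts everything without ever touching the MemoryCache instance itself. The class names here (ClearSignal, SignalledChangeMonitor) are my own illustration, not part of the framework:

```csharp
using System;
using System.Runtime.Caching;

// Shared token: every cached entry gets a monitor subscribed to it.
class ClearSignal
{
    public event EventHandler Cleared;
    public void Clear() => Cleared?.Invoke(this, EventArgs.Empty);
}

class SignalledChangeMonitor : ChangeMonitor
{
    private readonly string uniqueId = Guid.NewGuid().ToString();
    public override string UniqueId { get { return uniqueId; } }

    public SignalledChangeMonitor(ClearSignal signal)
    {
        // When the shared signal fires, this entry is evicted
        signal.Cleared += (s, e) => OnChanged(null);
        InitializationComplete();
    }

    protected override void Dispose(bool disposing) { }
}
```

On every Set, add a new SignalledChangeMonitor(signal) to the entry's CacheItemPolicy; calling signal.Clear() then evicts all entries at once, with no Dispose and no cache swap, so the race in the question never arises.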


+2




I've never used the MemoryCache class, but there is an obvious race condition here. It is possible that thread T1 is calling GetElement (or whatever you call it) while another thread T2 is executing ClearCacheForAllUsers.

Then the following can happen:

T1:
reg = _cache
T2:
oldCache = _cache
_cache = new MemoryCache("MyCache")
oldCache.Dispose()
T1:
reg.Get()   // Ouch, bad! The Dispose documentation tells us this results in "unexpected behavior"




A trivial solution would be to introduce synchronization.

If blocking is not acceptable you can use some clever tricks, but the simplest solution, I would say, is to rely on finalization: don't call Dispose at all. Generally a bad idea, of course, but assuming the only resource the cache holds is memory, it doesn't hurt to let the abandoned cache be reclaimed under memory pressure, and you avoid the race condition entirely by letting the GC worry about it. Note that _cache should be volatile.
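A minimal sketch of that no-Dispose approach, assuming (as this answer does) that it is acceptable to let the garbage collector reclaim the abandoned instance rather than disposing it deterministically:

```csharp
using System.Runtime.Caching;

public class MyCache
{
    // volatile so that readers always observe the latest swap
    private volatile MemoryCache _cache = new MemoryCache("MyCache");

    public object Get(string key)
    {
        return _cache[key];   // may still hit a just-replaced instance, which stays valid
    }

    public void ClearCacheForAllUsers()
    {
        // No Dispose() call: threads that already grabbed the old instance
        // can finish safely, and the old cache is reclaimed by the GC once
        // nothing references it any longer.
        _cache = new MemoryCache("MyCache");
    }
}
```

The trade-off is that the old cache's memory (and any timers or handles it holds) is released nondeterministically, which is exactly the cost this answer says you accept in exchange for lock-free reads.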

Another possible approach relies on an implementation detail: Get seems to always return null after the cache has been disposed. In that case you could avoid blocking in all but rare situations. That is an implementation detail, though, and personally I wouldn't want to rely on it.

0








