Profiling Individual Methods Using Attributes

Is it possible to profile individual methods using attributes in .NET?

I am currently trying to find some of the bottlenecks in a large legacy application that makes heavy use of static methods. Framework integration is simply not an option at the moment. Since most calls are to static methods, interfaces and dependency injection are not available. Tearing the code apart just to add diagnostic logging is not a viable solution either.

I know there are profiling tools on the market, but they are not currently in the budget. Ideally, I could create my own custom attribute that logs some basic information on method entry and method exit. I've never worked with custom attributes before, so please keep any examples as clear as possible.

If possible, I would like to be able to enable profiling via a config file. That would make profiling available during unit and integration tests as well.

+2




6 answers


You cannot use attributes by themselves for what you are describing, since nothing executes an attribute at runtime. However, you have several options:

First, many of the profiling tools on the market (like RedGate ANTS) are relatively inexpensive ($200-300), easy to use, and most offer free evaluation periods of a couple of weeks, so you can see whether they give you what you need before deciding whether to buy. In addition, Microsoft's CLR Profiler is a free download.

If that is not an option, PostSharp is probably the easiest way to inject this kind of logic into your code.

Finally, if you can't use PostSharp for whatever reason, and you're willing to go in and touch each method anyway, you can add simple instrumentation in the form of a using block:



public void SomeMethodToProfile()
{
    // the following line captures the currently executing method
    // and logs timing information when the metric tracker is disposed
    using(MetricTracker.Track(MethodBase.GetCurrentMethod()))
    { 
        // original code here...
    }
}


A typical MetricTracker implementation looks something like this:

using System;
using System.Diagnostics;
using System.Reflection;

public sealed class MetricTracker : IDisposable
{
    private readonly string m_MethodName;
    private readonly Stopwatch m_Stopwatch;

    private MetricTracker( string methodName ) 
       { m_MethodName = methodName; m_Stopwatch = Stopwatch.StartNew(); }

    void IDisposable.Dispose()
       { m_Stopwatch.Stop(); LogToSomewhere(); }

    private void LogToSomewhere()
       { /* supply your own implementation here...*/ }

    public static MetricTracker Track( MethodBase mb )
       { return new MetricTracker( mb.Name ); }
}
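
Since the question also asks about turning profiling on and off via a config file: one simple approach is to have Track consult a configuration switch and return null when profiling is disabled; the using statement tolerates a null resource and simply skips Dispose. A minimal sketch, assuming a hypothetical "EnableProfiling" appSettings key:

using System;
using System.Configuration; // requires a reference to System.Configuration.dll

// Sketch of a config-file switch; the "EnableProfiling" key name is an
// assumption, not part of the answer above.
public static class ProfilingConfig
{
    public static readonly bool Enabled = string.Equals(
        ConfigurationManager.AppSettings["EnableProfiling"],
        "true",
        StringComparison.OrdinalIgnoreCase);
}

Track would then read: return ProfilingConfig.Enabled ? new MetricTracker( mb.Name ) : null;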


+4




You can use PostSharp to do the weaving, basically turning:

[Profiled]
public void Foo()
{
     DoSomeStuff();
}


into:



public void Foo()
{
    Stopwatch sw = Stopwatch.StartNew();
    try
    {
        DoSomeStuff();
    }
    finally
    {
        sw.Stop();
        ProfileData.AddSample("Foo", sw.Elapsed);
    }
}


Indeed, looking at the PostSharp documentation, you could use Gibraltar (together with PostSharp) for this, if you can afford it. Otherwise, you might well spend a day or so getting PostSharp hooked up, but it could still be worth it.

Note that I know you said you can't afford to integrate a framework into your codebase, but you wouldn't really be "integrating" anything so much as letting PostSharp perform a post-compilation step over your code.
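
For reference, the [Profiled] aspect itself could look roughly like this, assuming PostSharp's OnMethodBoundaryAspect API; ProfileData.AddSample is the same hypothetical sink as in the example above:

using System;
using System.Diagnostics;
using PostSharp.Aspects;

// A minimal sketch of a profiling aspect; adapt it to your PostSharp version.
[Serializable]
public class ProfiledAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // Stash a stopwatch in the per-invocation state slot.
        args.MethodExecutionTag = Stopwatch.StartNew();
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        // OnExit runs whether the method returned normally or threw.
        Stopwatch sw = (Stopwatch)args.MethodExecutionTag;
        sw.Stop();
        ProfileData.AddSample(args.Method.Name, sw.Elapsed);
    }
}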

+3




I was looking for something very similar to what you describe. I couldn't find a framework like that, so I rolled my own. I should point out that it is very simple, but sometimes simple is just fine!

I would describe it as benchmarking's answer to unit testing. The concept is to isolate sections of code in order to measure or compare their speed.

A typical example of using the attribute would look like this:

[ProfileClass]
public class ForEachLoopBenchmarks
{
    [ProfileMethod]
    public void ForLoopBenchmark()
    {
        List<int> list = GetNumberList();

        for (int i = 0; i < list.Count; i++)
        {
            // loop body intentionally empty: only the iteration itself is measured
        }
    }

    [ProfileMethod]
    public void ForEachLoopBenchmark()
    {
        List<int> list = GetNumberList();

        foreach (int i in list)
        {
            // loop body intentionally empty: only the iteration itself is measured
        }
    }

    private List<int> GetNumberList()
    {
        List<int> list = new List<int>();
        for (int i = 0; i < 1000; i++)
        {
            list.Add(i);
        }
        return list;
    }
}


Then create a console application, add the code below, and add a reference to the assembly that contains the class decorated with the attributes. The execution time for each method (each runs 1000 times) will be printed to the console.

class Program
{
    static void Main(string[] args)
    {
        ProfileRunner rp = new ProfileRunner();
        rp.Run();
    }
}


The console output lists each profiled method together with its measured execution time (screenshot omitted).

You need to add a reference to pUnit.dll in both the console application and the class library that contains the attribute-marked methods.

You can get it as a package from NuGet:

NuGet command: PM> Install-Package pUnit

If you prefer the complete source code, you can find it on GitHub.

The method that actually measures the execution time is based on this question: fooobar.com/questions/30272/...
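
For a rough idea of what such a measurement loop looks like, here is a sketch (not pUnit's actual implementation):

using System;
using System.Diagnostics;

public static class Timing
{
    // Returns the average time per call over the given number of iterations.
    public static TimeSpan Measure(Action action, int iterations)
    {
        action(); // warm-up call so JIT compilation is not included

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            action();
        }
        sw.Stop();

        return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
    }
}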

I'll cover the implementation in more detail in the next post.

+1




There are also some free profiling tools worth looking at.

0




You cannot implement this via attributes unless you want to use Aspect Oriented Programming with PostSharp to achieve it.

You could, however, put conditional logic in there based on a compilation symbol (potentially defined in the build configuration). That would enable or disable the timing and logging depending on the current compilation settings.
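
For example, the standard [Conditional] attribute from System.Diagnostics causes calls to a method to be removed at compile time when the named symbol is not defined. A minimal sketch (the PROFILING symbol name is arbitrary):

using System;
using System.Diagnostics;

public static class ProfileLog
{
    // Calls to this method are compiled out entirely unless PROFILING is
    // defined, e.g. via <DefineConstants>PROFILING</DefineConstants> in the
    // project file or /define:PROFILING on the compiler command line.
    [Conditional("PROFILING")]
    public static void Write(string methodName, TimeSpan elapsed)
    {
        Console.WriteLine("{0}: {1} ms", methodName, elapsed.TotalMilliseconds);
    }
}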

0




I do performance tuning in C#, and this technique is all I need.

It's based on a simple idea: if the program is taking much longer than necessary, then some part of it is spending much longer than necessary doing something that doesn't really need to be done.

And how does it spend that time? Almost always by sitting at some call site on the call stack.

So if you simply pause the program while you are waiting on it and look at the call stack, you can see what it is waiting for, and if that work doesn't need to happen (which is usually the case), you can see why right away.

Don't trust just one sample; take several. Anything that shows up on multiple stack samples is something that, if you can do something about it, will save you a lot of time.

So you can see this is not about timing functions or counting how many times they are called. It's about interrupting the program several times and asking it what it is doing and why. If something is responsible for 80% (or 20%, or whatever) of the time, then that is roughly the fraction of samples it will appear on, so just go and look. You don't need precise measurements.

It works on big problems. It also works on small ones. And if you repeat the process as the program gets faster, the small problems become relatively large and easier to find.

0








