With so many system resources available, how confident are you that your code is optimized?

Processors get faster, hard drives spin quicker, and bits fly down ever-faster networks, yet it is not as easy to tell bad code from good code as it used to be.

I remember a time when you could optimize a piece of code and perceive the performance improvement beyond any doubt. Those days are almost over. Instead, we now have a set of rules to follow, like "don't declare variables inside loops," and so on. It's good to stick to them so you write decent code by default. But how do you know whether your code can still be improved, without some kind of tool?

Some might argue that a couple of nanoseconds won't make much of a difference these days. In truth, we stack so many layers on top of each other that small inefficiencies compound into a startling overall effect.

I am not saying we have to squeeze every last millisecond out of our code; that would be expensive and impractical. I do believe we should do our best, within our time constraints, to write efficient code.

I'm just curious to know what tools you use, if any, to profile and measure the performance of your code.

0




6 answers


There is a big difference between "good" code and "fast" code. They are not completely independent of each other, but "fast" code does not mean "good" code. In fact, "fast" often means "bad," because readability has to be sacrificed to gain speed.

The way I look at it, hardware is cheap and programmers are expensive. Unless there is a significant performance problem with a piece of code, you should rarely worry about speed. If there are performance problems, you will notice them. Only when you notice a performance problem on good hardware should you worry about optimization (in my opinion).



If you get to the point where your code is slow but you can't figure out why, I would use a profiler like ANTS or dotTrace if you're in the .NET world (I'm sure there are equivalents for other platforms and languages). They are very useful, but I have only had one situation where I needed a profiler to pinpoint the problem. And now that I know what that problem was, I don't need a profiler to find it again, because I will never forget the time it took to optimize it.
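To make this concrete, here is a made-up example (not from the answer; count_lower is an invented name) of the kind of hidden hot spot a profiler catches immediately, regardless of platform:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Calling strlen() in the loop condition re-scans the whole string on
       every iteration, turning a linear pass into O(n^2). (Some compilers
       hoist the strlen at high optimization levels; build with -O0 to see
       the effect clearly.) */
    static size_t count_lower(const char *s) {
        size_t n = 0;
        for (size_t i = 0; i < strlen(s); i++)
            if (s[i] >= 'a' && s[i] <= 'z')
                n++;
        return n;
    }

    int main(void) {
        size_t len = 100000;
        char *s = malloc(len + 1);
        memset(s, 'x', len);
        s[len] = '\0';
        printf("%zu\n", count_lower(s)); /* dominates any profile of this program */
        free(s);
        return 0;
    }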

+3




I think optimization should be seen less as poring over every line of code and more as a question of the asymptotic complexity of your algorithm. For example, bubble sort is probably one of the worst sorting algorithms you can choose from an optimization standpoint; it has one of the worst running times. Quicksort and mergesort are asymptotically faster and should almost always be preferred over bubble sort.

If you keep optimization in mind as you develop a solution, you should still be able to write readable code that other developers will approve of. Also, if you are programming in a higher-level language that is compiled before it runs, remember that today's compilers perform some awesome optimizations that you or I might never think of, and (more importantly) don't need to worry about.

Stick to a good, low big-O and your code will be reasonably well optimized by default. If you are working with datasets of millions of items or more, look for an O(log n) algorithm. Such algorithms are great for large tasks and will keep your code fast.
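A quick illustrative sketch of the point (not a benchmark-grade measurement): the same random data sorted with an O(n^2) bubble sort and with the C library's qsort, which is typically O(n log n).

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    static void bubble_sort(int *a, size_t n) {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j + 1 < n - i; j++)
                if (a[j] > a[j + 1]) {
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
    }

    static int cmp_int(const void *x, const void *y) {
        return (*(const int *)x > *(const int *)y) - (*(const int *)x < *(const int *)y);
    }

    int main(void) {
        size_t n = 20000;
        int *a = malloc(n * sizeof *a), *b = malloc(n * sizeof *b);
        for (size_t i = 0; i < n; i++) a[i] = rand();
        memcpy(b, a, n * sizeof *a);   /* sort identical data both ways */

        clock_t t0 = clock();
        bubble_sort(a, n);
        clock_t t1 = clock();
        qsort(b, n, sizeof *b, cmp_int);
        clock_t t2 = clock();

        printf("bubble: %.3fs  qsort: %.3fs\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }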



Let the compiler do the line-by-line optimization so that you can focus on the solution.

There are times that call for line-by-line optimization, and if you truly need that speed, you might want to drop down to assembly so you control every instruction that is executed.
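If you do reach that point, a low-effort first step (a sketch; dot3 is just a stand-in function) is to read the assembly the compiler already produces before writing any by hand:

    /* Inspect the generated assembly with GCC or Clang:
           gcc -O2 -S -fverbose-asm dot3.c    (writes dot3.s)
       or disassemble the compiled object:
           gcc -O2 -c dot3.c && objdump -d dot3.o */
    int dot3(const int *a, const int *b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }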

+4


source


This is a perfectly valid question, but not a pressing one for most developers. Most developers are concerned with delivering a product that works for their employer. Optimized code is rarely demanded.

The best way to make sure your code is fast is to test or profile it. Compiler optimizers can produce unintuitive quirks in the performance of a programmer's code, so in the end it is the measurement that matters.
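Even without a full profiler, a crude measurement loop settles arguments. A minimal sketch (work_under_test is a placeholder for whatever you want to measure):

    #include <stdio.h>
    #include <time.h>

    static volatile long sink;   /* keeps the compiler from deleting the work */

    static void work_under_test(void) {
        long s = 0;
        for (long i = 0; i < 1000000; i++) s += i;
        sink = s;
    }

    int main(void) {
        struct timespec t0, t1;
        int reps = 100;   /* repeat enough times to rise above timer noise */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < reps; i++) work_under_test();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.6f s per call\n", secs / reps);
        return 0;
    }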

0




In my experience, Rational Quantify has given me the best results in terms of performance tuning. It is not free, but it is very full-featured and seems to give me the most useful output.

As for free tools, check out gprof or oprofile if you're in a Unix environment. They're not as good as some of the commercial tools, but they can often point you in the right direction.
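For anyone who hasn't used gprof, the whole workflow fits in three commands. A toy program to try it on (cheap and costly are invented names, there only so the profile has something to show):

    /* Typical gprof session:
           gcc -pg demo.c -o demo     # -pg adds profiling instrumentation
           ./demo                     # writes gmon.out in the current directory
           gprof demo gmon.out        # prints flat profile and call graph */
    #include <stdio.h>

    static double cheap(void)  { double s = 0; for (int i = 0; i < 1000;    i++) s += i; return s; }
    static double costly(void) { double s = 0; for (int i = 0; i < 1000000; i++) s += i; return s; }

    int main(void) {
        double total = 0;
        for (int i = 0; i < 500; i++) total += cheap() + costly();
        printf("%f\n", total);   /* costly() should dominate the flat profile */
        return 0;
    }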

That said, I am almost always amazed at what profilers turn up the first time I use them. You may have an intuition about where the bottleneck is, and it can often be completely wrong.

0




Almost all the code I write is fast enough. On the rare occasion that it isn't, for C, C++ and Objective Caml I use the venerable gprof and the excellent valgrind with its superb visualizer kcachegrind (part of the KDE SDK; don't be fooled by the legacy code on SourceForge).
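For reference, that valgrind-to-kcachegrind pipeline is just two commands around an ordinary binary; a minimal sketch:

    /* No recompilation is required; -g only adds source-level annotations:
           gcc -g demo.c -o demo
           valgrind --tool=callgrind ./demo   # writes callgrind.out.<pid>
           kcachegrind callgrind.out.<pid>    # browse costs per function and call graph */
    int main(void) {
        long s = 0;
        for (long i = 0; i < 10000000; i++) s += i % 7;   /* something for callgrind to count */
        return (int)(s & 1);
    }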

The MLton Standard ML compiler and the Glasgow Haskell Compiler both ship with excellent profilers.

I wish there were a better Lua profiler.

0




A profiler, maybe? There is one available for almost every platform and language.

-1








