32-bit and 64-bit performance

I have a .NET assembly (targeting the 3.5 framework) containing a set of custom controls that do things like plotting points. The assembly is compiled as "AnyCPU", so it can run on both 32-bit and 64-bit. When I compared the performance of an application using this assembly on 32-bit and 64-bit, I saw interesting results. The application's work has two parts: the back end, which does a lot of computation and data copying, and the actual drawing done by the controls. From the results, the first part is faster on 32-bit and the second is faster on 64-bit. Can anyone explain this behavior? Is the compute-and-copy part slower in the 64-bit case because floating-point arithmetic is slower on 64-bit than on 32-bit?

PS: I evaluated the app on a 64-bit AMD machine with 8 GB of RAM, with both 32-bit and 64-bit Vista installed.
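
For reference, a minimal sketch of the kind of timing harness that isolates the compute/copy part and reports which bitness an AnyCPU build actually runs as. The arithmetic and copy below are placeholders for the real back-end work, not the actual control code:

    using System;
    using System.Diagnostics;

    class BitnessBenchmark
    {
        static void Main()
        {
            // IntPtr.Size is 4 under the 32-bit CLR and 8 under the 64-bit CLR,
            // so an AnyCPU build can report which runtime it actually got.
            Console.WriteLine("Running as {0}-bit", IntPtr.Size * 8);

            const int n = 10000000;
            double[] data = new double[n];
            double[] copy = new double[n];
            Random rand = new Random(42);
            for (int i = 0; i < n; i++)
                data[i] = rand.NextDouble();

            // Time the "back end" style work: floating-point math plus a bulk copy.
            Stopwatch sw = Stopwatch.StartNew();
            double sum = 0.0;
            for (int i = 0; i < n; i++)
            {
                copy[i] = data[i] * 1.0001 + 0.5; // stand-in floating-point computation
                sum += copy[i];
            }
            Array.Copy(copy, data, n);            // stand-in bulk data copy
            sw.Stop();

            Console.WriteLine("Compute/copy: {0} ms (checksum {1})",
                              sw.ElapsedMilliseconds, sum);
        }
    }

Running the same binary under the 32-bit and 64-bit CLR gives a like-for-like comparison of just this part.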

1 answer


If the computational part of your code holds a lot of object references, each reference takes twice as much space in the 64-bit CLR, which leads to increased memory usage and therefore more garbage collection. That is the main difference I can think of, but the two runtimes also have different JIT compilers; it could be that the computational part of your application happens to hit code paths where the 32-bit JIT generates better code.
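
As a rough illustration of the reference-size point, a small sketch like the following (the Node type and the one-million count are arbitrary) shows the per-object cost by measuring managed heap growth with GC.GetTotalMemory when run under each CLR. The exact numbers depend on object headers, alignment, and GC behavior:

    using System;

    class ReferenceSizeDemo
    {
        // One reference-typed field: each instance carries an object header,
        // a method-table pointer, and one reference -- all pointer-sized.
        class Node { public Node Next; }

        static void Main()
        {
            const int count = 1000000;

            long before = GC.GetTotalMemory(true);  // force a full collect first
            Node[] nodes = new Node[count];         // the array itself holds 'count' references
            for (int i = 0; i < count; i++)
            {
                nodes[i] = new Node();
                nodes[i].Next = (i > 0) ? nodes[i - 1] : null;
            }
            long after = GC.GetTotalMemory(true);

            Console.WriteLine("{0}-bit CLR: roughly {1} bytes per Node (including its array slot)",
                              IntPtr.Size * 8, (after - before) / count);
            GC.KeepAlive(nodes);                    // keep everything live until measured
        }
    }

The per-Node figure should come out noticeably larger under the 64-bit CLR, which translates directly into more GC pressure for reference-heavy workloads.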


