Is it inefficient to frequently create short-lived instances of a class?

I have a C# program that keeps track of a player's position in a game. In this program I have a class called Waypoint(X, Y, Z), which represents a location on the game map. In one of the threads I create, I keep checking the player's distance from a certain target point, in quick succession inside a while (true) loop. The Waypoint class has a method, public double Distance(Waypoint wp), that calculates the distance from the current waypoint to the waypoint passed as a parameter.

Question: Is it OK to create a new Waypoint for the player's position every time I want to check the distance from the player to the target point? The program would then, inside the while (true) loop, create this player waypoint over and over again, just for the purpose of calculating the distance.
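For concreteness, here is a minimal sketch of what such a class and loop might look like (the field layout and the demo values are assumptions; the original code is not shown):

```csharp
using System;

public class Waypoint
{
    public double X, Y, Z;

    public Waypoint(double x, double y, double z) { X = x; Y = y; Z = z; }

    // Euclidean distance to another waypoint.
    public double Distance(Waypoint wp)
    {
        double dx = X - wp.X, dy = Y - wp.Y, dz = Z - wp.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}

class Demo
{
    static void Main()
    {
        var target = new Waypoint(10, 0, 0);
        // The pattern in question: a fresh Waypoint allocated on every check.
        for (int i = 0; i < 3; i++)   // stands in for the while (true) loop
        {
            var player = new Waypoint(i, 0, 0);         // new short-lived object
            Console.WriteLine(player.Distance(target)); // prints 10, 9, 8
        }
    }
}
```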

PS: My program probably needs to be resource-efficient, as it runs multiple threads with continuous loops doing various jobs, such as displaying the player's X, Y, Z in the UI.

Thank you so much!

2 answers


The actual creation of an object with a very short lifetime is negligible. Allocating a new object pretty much just involves bumping the heap pointer by the object's size and zeroing those bytes. This won't be a problem.

As for actually collecting these objects: when the garbage collector runs a collection, it takes all the objects that are still alive and copies them. Any objects that are not "live" are not touched by the GC, so they add no work to collections. If the objects you create are never, or only very rarely, alive during a GC collection, they add no overhead there.

The only thing they can do is reduce the amount of memory currently available, so that the GC must collect more frequently than it otherwise would. The GC runs a collection when it needs more memory, so if you are constantly consuming the available memory by creating these short-lived objects, you can increase the frequency of collections in your program.



Of course, it would take a lot of objects to significantly influence how often collections occur. If you're worried, you should spend some time looking at how often your program runs collections with and without this block of code to see what effect it has. If collections really do run much more often than they otherwise would, and you notice performance issues as a result, then consider addressing it.
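One way to make that measurement, sketched below, is GC.CollectionCount, which reports how many collections a given generation has had so far. The allocation loop here is just a stand-in for the suspect block of code; run the real program with and without that block and compare the deltas:

```csharp
using System;

class GcCountDemo
{
    static void Main()
    {
        int before = GC.CollectionCount(0); // gen-0 collections so far

        long sum = 0;
        // Stand-in for the code under suspicion:
        // allocate many short-lived objects.
        for (int i = 0; i < 10_000_000; i++)
        {
            var tmp = new byte[64];
            sum += tmp.Length; // keep the allocation observable
        }

        int after = GC.CollectionCount(0);
        Console.WriteLine($"gen-0 collections triggered: {after - before}");
    }
}
```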

There are two possibilities that come to mind if you do find a measurable increase in the number of collections. One is to investigate using value types instead of reference types. This may or may not make sense conceptually in your context, and it may or may not actually help; it depends too much on specifics not mentioned here, but it is at least something to look at. The other possibility is to aggressively cache the objects so that they can be reused over time. This also needs to be weighed carefully, as it can significantly increase the complexity of the program and make it harder to write code that is correct, maintainable, and easy to validate, but it can be an effective tool for reducing memory churn if used correctly.
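A minimal sketch of the value-type option: declared as a struct, Waypoint becomes a value type, so a local instance lives inline (on the stack here) rather than on the GC heap, and the per-iteration distance check allocates nothing. Whether this fits depends on how Waypoint is used elsewhere in the program; the sketch assumes it is small and can be immutable:

```csharp
using System;

// As a readonly struct, Waypoint is a value type: locals of this type are
// not heap-allocated, so creating one per loop iteration adds no GC work.
public readonly struct Waypoint
{
    public readonly double X, Y, Z;

    public Waypoint(double x, double y, double z) { X = x; Y = y; Z = z; }

    public double Distance(Waypoint wp)
    {
        double dx = X - wp.X, dy = Y - wp.Y, dz = Z - wp.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}

class Demo
{
    static void Main()
    {
        var target = new Waypoint(3, 4, 0);
        var player = new Waypoint(0, 0, 0); // no heap allocation
        Console.WriteLine(player.Distance(target)); // prints 5
    }
}
```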


What the other answers say boils down to:
- maybe you should just create the short-lived instances, because it shouldn't be expensive, and
- maybe you shouldn't, because memory allocation can be expensive.
It's guesswork - educated guesswork - but still guesswork.

You are the only one who can answer the question, by actually finding out (not guessing) whether these calls to new take a large enough percentage of wall-clock time to worry about.

The method I (and many others) rely on to answer such questions is random pausing.

The idea is simple. Suppose eliminating the new calls would save some percentage of time - pick one, say 20%. That means if you just hit the pause button and display the call stack, you have at least a 20% chance of catching it in the act. So if you do it 20 times, you will see it doing that about 4 times, give or take.

If you do this, you will see what actually accounts for the time.
- If it's the new calls, you will see it.
- If it's something else, you'll see that instead.
You don't know exactly how much it costs, but you don't need to know.
What you do need to know is where the problem is, and that's what the samples tell you.




ADDED: If you'll bear with me through an explanation of how this kind of performance tuning can go, here is an illustration of a hypothetical situation:
[Image: hypothetical breakdown of time spent in three routines, (A), (B), and (C), over successive tuning passes]
When you take stack samples, you may find a number of things that could be improved, one of which could be memory allocation, and it may not even be very big - in this case it (C) only takes 14%. The samples tell you that something else is taking much longer, namely (A).

So, if you fix (A), you get a 1.67x speedup. Not bad.

Now, if you repeat the process, it tells you that (B) would save you a lot of time. So you fix it and (in this example) get another 1.67x, for a total speedup of 2.78x.

Now you do it again, and you see that the thing you originally suspected, memory allocation, is indeed a large fraction of the time. So you fix it and (in this example) get another 1.67x, for a total speedup of 4.63x. Now that is serious speedup.
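The arithmetic behind those hypothetical factors is just the rule that removing a fraction f of the running time gives a speedup of 1/(1 - f), and the factors multiply. A small sketch (the 40% fractions are an assumption, chosen only to reproduce the figures above):

```csharp
using System;

public class SpeedupDemo
{
    // Speedup from eliminating a fraction f of total running time.
    public static double Speedup(double f) => 1.0 / (1.0 - f);

    static void Main()
    {
        double total = 1.0;
        // Hypothetically, each fix removes 40% of the then-remaining time.
        foreach (var f in new[] { 0.40, 0.40, 0.40 })
        {
            total *= Speedup(f);
            Console.WriteLine($"cumulative speedup: {total:F2}x");
        }
        // Prints 1.67x, 2.78x, 4.63x
    }
}
```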

So the points are: 1) don't guess at what to speed up - let the diagnosis tell you what to fix, and 2) repeat the process to stack up speedups. The way you get really big gains is that things that were small to start with become much more significant once you have removed the other things.
