Main game loop stutters occasionally

I ran into an issue where my game loop stuttered roughly once a second (at varying intervals): one frame takes more than 60 ms, while all the others take less than 1 ms.

After simplifying things, I ended up with the following program, which reproduces the problem. It only measures the frame time and prints it.

#include <iostream>
#include <windows.h>

int main()
{
    unsigned long long frequency, tic, toc;
    QueryPerformanceFrequency((LARGE_INTEGER*)&frequency);
    QueryPerformanceCounter((LARGE_INTEGER*)&tic);
    double deltaTime = 0.0;
    while( true )
    {
        // Uncommenting this guard makes the stutter go away:
        //if(deltaTime > 0.01)
        std::cerr << deltaTime << std::endl;
        QueryPerformanceCounter((LARGE_INTEGER*)&toc);
        deltaTime = (toc - tic) / double(frequency);
        tic = toc;
        if(deltaTime < 0.01) deltaTime = 0.01;
    }
}

      

Again, every so often one frame is much slower than the others. Once I uncomment the if(deltaTime > 0.01) guard, the problem goes away (and cerr is then never called). My original problem did not involve cerr / cout at all, but I consider this a reproduction of the same issue.

cerr is flushed on every iteration (std::endl), so buffering is not what creates the single slow frames. I know from the profiler (Very Sleepy) that the stream internally uses a lock / critical section, but that shouldn't matter because the program is single-threaded.

What makes single iterations stall so badly?

Edit: I did some more tests:

  • Adding std::this_thread::sleep_for( std::chrono::milliseconds(7) ); and thereby reducing the CPU load does not change anything.

  • Replacing the stream output with printf("%f\n", deltaTime); makes the problem go away (possibly because printf, unlike the stream, does not use a mutex and memory allocation).
+3




2 answers


By design, Windows does not guarantee an upper bound on any execution time, because it dynamically allocates runtime resources to all programs according to its scheduling logic; for example, the scheduler will allocate resources to a high-priority process and, in some cases, starve lower-priority processes. Programs that run tight loops and consume a lot of CPU are statistically more likely to be affected by this eventually, because, eventually, the scheduler will temporarily boost the priority of programs that are being starved and/or lower the priority of programs that are starving others (in your case, by running a tight loop).

Doing the output to std::cerr conditionally does not change the fact that this happens; it only changes the probability that it happens within a given time interval, since it changes how the program uses system resources inside the loop and therefore how it interacts with the system scheduler, its policies, and so on.



Things like this affect programs on all non-real-time operating systems, although the exact impact depends on how each OS is implemented (scheduling strategies, other policies that control program access to resources, and so on). There is always a nonzero (even if small) probability of such stalls occurring.

If you want an absolute guarantee that there are no stalls of this kind, you need a real-time operating system. These systems are designed to be predictable in terms of timing, but that comes with trade-offs: your programs must be designed with the intent that they MUST complete certain work within specified time windows. Real-time operating systems use different strategies to enforce this, and that enforcement can make a program fail outright if it was not designed with such constraints in mind.

+2




I'm not sure about this, but it may be that the system preempts your main thread to let other threads run, and since that takes a while (I remember that on my Windows XP machine the quantum was 10 ms), it stalls the frame.

This is especially noticeable because this is a single-threaded application. If you use multiple threads, they are usually spread across multiple CPU cores (if available), and the stalls will still be there but less significant (if you have implemented your application logic correctly).



Edit: here you can find more information about the Windows and Linux schedulers. Basically, Windows uses quanta (ranging from a few milliseconds up to 120 ms on Windows Server).

Edit 2: You can find a more detailed description of the Windows scheduler here.

+1








