Synchronizing OpenGL with application code

I am making a game using OpenGL ES 2.0 on Android with C/C++ (NDK).

For the main loop of the game, I have laid out two schemes:

Scheme 1:


Step A: The game reads input and updates the game state and physics.

Step B: After step A is complete, the graphics are drawn using the data produced in step A.

Scheme 2 (parallel):


Step A: One thread (thread A) continuously reads input and updates the game state and physics.

Step B: Another thread (thread B) renders the data produced by thread A using OpenGL draw calls (at maximum FPS).

Problem with scheme 1:

The problem with scheme 1 is that I don't know what happens when there are many objects in the scene, so that the actual drawing work on the GPU (not the GL API calls) takes longer than the target frame time (say, 1/60 second).

Since most OpenGL API calls return immediately, this can create the illusion that the next frame can be drawn, while in fact the previous frame is still being rendered as the loop issues the next frame's draw calls.

So draw calls will queue up and may eventually hit some limit. What happens at that limit? Will further API calls block, will they be rejected, or something else?

Problem with scheme 2:

In scheme 2, the problem is similar. While issuing draw calls, you have to put the input/update thread to sleep to keep the game state from changing in the middle of a draw. So again, when the draw operation takes longer than the target frame time, how would you implement frame dropping, given that draw calls return immediately?

EDIT: There are so many "can"s, "should"s and "most"s on this official page; how can you be sure of anything?

It looks like the OpenGL spec has no platform- or implementation-independent rules about parallelism or synchronization at all! How could they miss this?



1 answer


In terms of GPU execution, your two schemes are essentially equivalent. As the comments indicate, a driver can (and should) stall execution of GL commands if the command buffer becomes full (for example, because you have issued too many draw calls or state changes). This is highly dependent on the driver implementation, and it is difficult to determine in advance when these stalls will occur, because you have no insight into the driver's command-buffer implementation. You just have to rely on it; the details are hidden from you.



If you are concerned about CPU/GPU synchronization, you can use glFenceSync + glClientWaitSync ( https://www.khronos.org/opengles/sdk/docs/man3/html/glFenceSync.xhtml ) to determine when the GPU has finished processing commands up to the point where the fence was inserted, and (optionally) make the CPU wait for those commands to complete. For example, you can stall the CPU if the fence from the Nth previous frame has not yet been signaled. Note that glFenceSync requires OpenGL ES 3.0. Generally this is not necessary in the case you describe; it is usually only needed when the CPU and GPU have unsynchronized access to the same resource (e.g. with glMapBufferRange + GL_MAP_UNSYNCHRONIZED_BIT).
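A sketch of that fence-throttling pattern might look like the following. This is only an illustration under stated assumptions: it requires a current OpenGL ES 3.0 context, so it cannot run standalone, and `end_frame` and `kMaxFramesInFlight` are hypothetical names of my choosing.

```cpp
#include <GLES3/gl3.h>
#include <cstdint>

// Sketch only: needs a current OpenGL ES 3.0 context (glFenceSync is not
// in core ES 2.0). Throttles the CPU so it never runs more than
// kMaxFramesInFlight frames ahead of the GPU.
constexpr int kMaxFramesInFlight = 2;
static GLsync fences[kMaxFramesInFlight] = {};
static int frame_index = 0;

void end_frame() {
    int slot = frame_index % kMaxFramesInFlight;

    // If a fence from kMaxFramesInFlight frames ago exists, block until
    // the GPU has consumed all commands issued before it.
    if (fences[slot]) {
        glClientWaitSync(fences[slot], GL_SYNC_FLUSH_COMMANDS_BIT,
                         UINT64_MAX);  // wait until signaled
        glDeleteSync(fences[slot]);
    }

    // Insert a new fence after this frame's draw calls.
    fences[slot] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    frame_index++;
}
```

Calling `end_frame()` once per frame, after the frame's draw calls and before (or after) the buffer swap, caps how far the CPU can outrun the GPU, which directly addresses the "draw calls will queue up" concern in the question.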







