Calculating Maximum Parallel Work Groups

I was wondering if there is a standard way to programmatically determine the maximum number of workgroups that can run in parallel on a GPU.

For example, on an NVIDIA card with 5 compute units (or SMs), there can be no more than 8 workgroups (or blocks) per compute unit, so the maximum number of workgroups that can run simultaneously is 40.

Since I can find the number of compute units with clGetDeviceInfo, I only need the maximum number of workgroups that can run on a compute unit.
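For reference, here is a minimal sketch of the relevant queries in plain OpenCL host code. It assumes a cl_device_id has already been obtained, and the print_device_limits name is just illustrative. Note that OpenCL exposes the compute-unit count, local memory size, and maximum workgroup size, but not the number of workgroups a compute unit can hold.

```c
#include <CL/cl.h>
#include <stdio.h>

/* Print the per-device limits that OpenCL does expose. */
void print_device_limits(cl_device_id device)
{
    cl_uint  compute_units = 0;
    cl_ulong local_mem     = 0;
    size_t   max_wg_size   = 0;

    /* Number of compute units (SMs on NVIDIA, subslices on Intel). */
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(compute_units), &compute_units, NULL);

    /* Local (shared) memory available per compute unit. */
    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                    sizeof(local_mem), &local_mem, NULL);

    /* Maximum work-items in a single workgroup -- not the number of
     * workgroups per compute unit; OpenCL has no query for that. */
    clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                    sizeof(max_wg_size), &max_wg_size, NULL);

    printf("compute units: %u, local mem: %llu bytes, max WG size: %zu\n",
           compute_units, (unsigned long long)local_mem, max_wg_size);
}
```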

Thanks!

+3




2 answers


The maximum number of workgroups per subslice (or SM) is limited by hardware resources. Take the Intel Gen8 GPU as an example: it has 16 barrier registers per subslice, so no more than 16 workgroups can be resident on a subslice at the same time.



In addition, the amount of shared local memory available per subslice (64 KB) limits concurrency. If, for example, a workgroup requires 32 KB of shared local memory, only two such workgroups can run concurrently on a subslice, regardless of the workgroup size.
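As a rough illustration of that arithmetic (not part of the original answer), the sketch below bounds the number of concurrent workgroups per compute unit by the local-memory requirement and by a hardware cap that you must supply yourself, since OpenCL does not report it. The max_groups_per_cu helper is hypothetical.

```c
#include <CL/cl.h>

/* Estimate how many workgroups fit on one compute unit, limited by
 * local memory and by a hardware cap (e.g. 16 on Intel Gen8, 8 on
 * older NVIDIA parts -- values taken from the discussion above, not
 * queried from the driver). */
static cl_uint max_groups_per_cu(cl_device_id device,
                                 cl_ulong local_mem_per_group,
                                 cl_uint hw_group_cap)
{
    cl_ulong local_mem_size = 0;
    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                    sizeof(local_mem_size), &local_mem_size, NULL);

    cl_uint by_memory = (local_mem_per_group > 0)
        ? (cl_uint)(local_mem_size / local_mem_per_group)
        : hw_group_cap;

    return by_memory < hw_group_cap ? by_memory : hw_group_cap;
}

/* Example: 32 KB of local memory per group on a 64 KB subslice with a
 * 16-group cap -> min(64/32, 16) = 2 concurrent workgroups. */
```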

+3




I usually use the number of compute units as the number of workgroups. I prefer to increase the workgroup size to saturate the hardware rather than forcing the GPU to schedule multiple workgroups to run concurrently (sketched below).



I don't know of a way to determine the maximum number of workgroups per compute unit without consulting the vendor's specifications.
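Here is a minimal sketch of that launch strategy, assuming a command queue, kernel, and device are already set up; the local size of 256 is only an assumed value and must respect the device and kernel workgroup-size limits.

```c
#include <CL/cl.h>

/* One workgroup per compute unit, with a large workgroup size to
 * saturate each unit. */
void enqueue_one_group_per_cu(cl_command_queue queue, cl_kernel kernel,
                              cl_device_id device)
{
    cl_uint compute_units = 0;
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(compute_units), &compute_units, NULL);

    size_t local_size  = 256;  /* assumed; tune per device and kernel */
    size_t global_size = (size_t)compute_units * local_size;

    clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                           &global_size, &local_size, 0, NULL, NULL);
}
```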

-1

