How to use a GPU with OpenMP?

I am trying to get some code to run on a GPU using OpenMP, but I am not succeeding. In my code, I perform matrix multiplication using for loops: once with OpenMP pragma tags and once without. (This is so that I can compare the runtimes.) After the first loop I call omp_get_num_devices() (this is my main test to see whether I am actually connecting to the GPU). No matter what I try, omp_get_num_devices() always returns 0.

The computer I am using has two NVIDIA Tesla K40M GPUs. CUDA 7.0 and CUDA 7.5 are available on the computer as modules, with the CUDA 7.5 module usually active. gcc 4.9.3, 5.1.0, and 7.1.0 are available as modules, with the gcc 7.1.0 module usually active. I am compiling my code with $ g++ -fopenmp -omptargets=nvptx64sm_35-nvidia-linux ParallelExperimenting.cpp -o ParallelExperimenting. I have OpenMP code that parallelizes successfully on the CPU, but not on the GPU.

My main goal here is to get omp_get_num_devices() to return 2 as proof that I can detect and use the GPUs with OpenMP.
Any help I get here would be greatly appreciated.

Here is the code I am using to check if the GPU is being used correctly or not:

#include <omp.h>
#include <fstream>
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>
#include <iomanip>
#include <iostream>
using namespace std;

double A [501][501];
double B [501][501];
double C [501][501][501];
double D [501][501];
double E [501][501];
double F [501][501][501];
double dummyvar;
int Mapped [501];

int main() {
    int i, j, k, l, N, StallerGPU, StallerCPU;

    N = 500;

    // Variables merely used to make the execution take longer and to
    //   exaggerate the difference in performance between the first and
    //   second calculation
    StallerGPU = 200;
    StallerCPU = 200;

    std::cout << " N = " << N << "\n";
    // generate matrix to be used in first calculation
    for (i=0; i<N; i++) {
        for (k=0; k<N; k++) {
            if (i == k) {
                A[i][k] = i+1;
            } else {
                A[i][k] = i * k / N;
            }
        }
    }
    // generate other matrix to be used for the first calculation
    for (k=0; k<N; k++) {
        for (j=0; j<N; j++) {
            B[k][j] = 2*(N-1)-k-j;
        }
    }

//    Slightly adjusted matrices for second calculation
    for (i=0; i<N; i++) {
        for (k=0; k<N; k++) {
            if (i == k) {
                D[i][k] = i+2;
            } else {
                D[i][k] = i * k / N - 1;
            }
        }
    }

    for (k=0; k<N; k++) {
        for (j=0; j<N; j++) {
            E[k][j] = 2*(N+1)-k-j;
        }
    }

    dummyvar = 0;

    //Run the multiplication in parallel using GPUs

    double diff;
    time_t time1;
    time1 = time( NULL ); // CPU time counter
    cout << endl << " GPU section begins at " << ctime(&time1) << endl;

        //    This pragma is frequently changed to try different tags
        #pragma omp for collapse(4) private(i, j, k, l)

        for (i=0; i<N; i++) {
//            Mapped[i] = omp_is_initial_device();
            for (j=0; j<N; j++) {
                for (k=0; k<N; k++) {
                    for(l = 0; l < StallerGPU; l++ ) {
                        C[i][j][k] = A[i][k] * B[k][j] ;
                        dummyvar += A[i][k] * B[k][j] * (l + 1);
                    }
                }
//            cout << " i " << i << endl;
            }
        }


    //record the time it took to run the multiplication    
    time_t time2 = time( NULL );
    cout << " number of devices: " << omp_get_num_devices() << endl;
    cout << " dummy variable: " << dummyvar << endl;

    float cpumin = difftime(time2,time1);
    diff = difftime(time2,time1);
    cout << " stopping at delta GPU time: " << cpumin << endl; 
    cout << " terminating at " << ctime(&time2) << endl;
    cout << " GPU time elasped " << diff << " s" << endl;
    cout << endl;

    dummyvar = 0;
    time_t time3 = time( NULL );
    cout << endl << " CPU section begins at " << ctime(&time3) << endl;
//    #pragma omp single
    for (i=0; i<N; i++) {
        for (j=0; j<N; j++) {
            for (k=0; k<N; k++) {
                for (int l=0; l<StallerCPU; l++) {
                    F[i][j][k] = D[i][k] * E[k][j];
                    dummyvar += D[i][k] * E[k][j] * (l - 1);
                }
            }
        }
    }
    // the sum to complete the matrix calculation is left out here, but would
    // only be used to check if the result of the calculation is correct

    time_t time4 = time( NULL );
    cpumin = difftime(time4,time3);
    diff = difftime(time4,time3);
    cout << " dummy variable: " << dummyvar << endl;
    cout << " stopping at delta CPU time: " << cpumin << endl; 
    cout << " terminating at " << ctime(&time4) << endl;
    cout << " CPU time elasped " << diff << " s" << endl;
    //Compare the time it took to confirm that we actually used GPUs to parallelize.
}


Here is the output from running CUDA's deviceQuery sample:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)

Device 0: "Tesla K40m"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11520 MBytes (12079136768 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Max Clock rate:                            745 MHz (0.75 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 130 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "Tesla K40m"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11520 MBytes (12079136768 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Max Clock rate:                            745 MHz (0.75 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 131 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from Tesla K40m (GPU0) -> Tesla K40m (GPU1) : Yes
> Peer access from Tesla K40m (GPU1) -> Tesla K40m (GPU0) : Yes

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 2, Device0 = Tesla K40m, Device1 = Tesla K40m
Result = PASS


3 answers
GCC 4.9.3 and 5.1.0 definitely do not support OpenMP offloading to GPUs. GCC 7.1.0 does support it, but it must be built with specific configure options, as described here.
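As a sketch of how you might verify this with the gcc 7.1.0 module (assuming an offload-enabled build; note that GCC does not recognize the -omptargets flag used in the question, it uses -foffload instead):

# An offload-enabled GCC prints an OFFLOAD_TARGET_NAMES line here.
g++ -v 2>&1 | grep -i offload

# Compile with OpenMP offloading to NVPTX.
g++ -fopenmp -foffload=nvptx-none ParallelExperimenting.cpp -o ParallelExperimenting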




I may be wrong, but I think your code needs some fixes (which you may already know about). To actually run on a GPU target with OpenMP, you need to replace:

#pragma omp for collapse(4) private(i, j, k, l)


with:

#pragma omp target teams distribute parallel for collapse(4) private(i, j, k, l)
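For context, here is a minimal self-contained sketch of that construct (my own illustration, not code from the question). Note that global arrays referenced in a target region must also be mapped explicitly (or placed in a declare target directive), so this sketch maps the inputs and the result:

#include <omp.h>
#include <cstdio>

const int N = 256;
double A[N][N], B[N][N], C[N][N];

int main() {
    // Fill the inputs on the host.
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = i + j;
            B[i][j] = i - j;
        }

    // Offload the multiplication: map the inputs to the device, map the
    // result back, and spread the collapsed i/j iterations over teams/threads.
    #pragma omp target teams distribute parallel for collapse(2) \
            map(to: A, B) map(from: C)
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }

    printf("C[1][1] = %f\n", C[1][1]);
    return 0;
}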




You can check whether the kernel actually runs on the GPU by profiling your executable with nvprof; it should show the kernel executing on the GPU. You can also change the number of teams and threads in the target region with the num_teams and thread_limit clauses, and you should see corresponding changes in the profile.
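For example (assuming the nvprof from the CUDA 7.5 module is on your PATH):

nvprof ./ParallelExperimenting

If the target region was actually offloaded, the profile should list the generated GPU kernel; if everything ran on the host, nvprof should report that no kernels were profiled.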

To check programmatically whether the target region is actually executing on the target device, I use the omp_is_initial_device() call, which returns 0 when called from the accelerator. Here's an example:

#include <omp.h>
#include <stdio.h>

int main() {
    int A[1] = {-1};

    // A is a non-scalar, so it is implicitly mapped tofrom and the
    // device's result is copied back to the host copy.
    #pragma omp target
    {
        A[0] = omp_is_initial_device();
    }

    if (!A[0]) {
        printf("Able to use offloading!\n");
    }
    return 0;
}
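Note that if there is no usable device (or the compiler was built without offload support), the target region silently falls back to running on the host, where omp_is_initial_device() returns 1, so this test also catches that fallback.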




Perhaps I'm looking in the wrong direction, but I want to help, since I was once stuck in a strange situation with a GPU.

You need to be in the Linux "video" group to be able to use the GPU; otherwise all results returned from the GPU will be 0.

So I would suggest running the CUDA sample code to check whether you are stuck in the same situation I was. It's a strange failure mode, and I'm not sure I've described it correctly, but I hope it helps.

According to https://wiki.gentoo.org/wiki/NVidia/nvidia-drivers, the user(s) who need to access the video card must be added to the video group.
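As a sketch of how to check and fix this on a typical Linux box (the group name may differ by distribution):

# Show which groups the current user belongs to.
groups

# Add the current user to the video group (log in again for it to take effect).
sudo usermod -aG video $USER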
