Is there a workaround for the inflexible use of texture references in CUDA?

I have some textures that I want to read during kernel execution. The documentation says texture references need to be defined globally. The problem is that I want to call the same kernel functions (which are quite complex) with different texture sources.

texture<unsigned char, 2, cudaReadModeElementType> g_tex_a;
texture<unsigned char, 2, cudaReadModeElementType> g_tex_b;

__global__ void gpu_kernel_1()
{
    // long, complicated kernel
    unsigned char foo = tex2D(g_tex_a, x, y);
}

__global__ void gpu_kernel_2()
{
    // long, complicated kernel
    unsigned char bar = tex2D(g_tex_a, x, y);
}

int main()
{
    gpu_kernel_1<<<grid, block>>>();
    gpu_kernel_2<<<grid, block>>>();
}

      

Is it not possible to pass to the kernel which texture it should read with tex2D, e.g. g_tex_a or g_tex_b? It seems the compiler needs to know the texture reference at compile time. I want to be able to reuse the gpu_kernel_1 and gpu_kernel_2 code so that it works with different textures.

I don't really understand how the texture reference object is used by the host and device code. At the moment I have a terrible workaround: I replicate all the code for each kernel, with the only change being that the tex2D call uses a different texture reference, e.g. gpu_kernel_1_with_tex_a(), gpu_kernel_1_with_tex_b().

Is there any other solution? Thanks.

+3




2 answers


Before a texture reference can be used, it must be bound to memory, for example with cudaBindTextureToArray(). It is in the binding that you decide which data the kernel should operate on: you can bind different arrays to the same texture reference at different times, so one kernel can serve several data sources.
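A minimal sketch of this approach, assuming two already-populated hypothetical cudaArrays d_array_a and d_array_b (not from the original post): a single global reference is re-bound on the host before each launch, so the same kernel reads different data.

```cuda
// One global texture reference serves all launches.
texture<unsigned char, 2, cudaReadModeElementType> g_tex;

__global__ void gpu_kernel()
{
    // Reads whatever array is currently bound to g_tex, e.g.:
    // unsigned char v = tex2D(g_tex, x, y);
}

// Host-side helper: bind the chosen array, launch, unbind.
void launch_with(cudaArray *arr, dim3 grid, dim3 block)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<unsigned char>();
    cudaBindTextureToArray(g_tex, arr, desc);  // this binding selects the data
    gpu_kernel<<<grid, block>>>();
    cudaUnbindTexture(g_tex);
}

// Usage: the same kernel, two data sources
// launch_with(d_array_a, grid, block);
// launch_with(d_array_b, grid, block);
```

Because binding happens on the host between launches, this avoids duplicating the kernel per texture; the cost is that the binding is global state, so launches reading different arrays through the same reference cannot overlap.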



+1




In addition to the texture reference API, CUDA 5.0 supports the texture object API. Instead of defining a texture reference globally, with this newer API you can define a texture object locally at run time and pass it to the kernel as a function argument. Have a look at the CUDA Programming Guide (Section 3.2.10.1.1).
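A sketch of the texture object approach (requires CUDA 5.0+ and a compute capability 3.0+ device; the helper name make_tex and the array d_array_a are illustrative, not from the original post):

```cuda
// The kernel takes the texture as an ordinary argument, so one kernel
// works with any texture object the host passes in.
__global__ void gpu_kernel(cudaTextureObject_t tex, int x, int y)
{
    unsigned char v = tex2D<unsigned char>(tex, x, y);
    // ... long, complicated kernel using v ...
}

// Host-side helper: create a texture object over a given cudaArray.
cudaTextureObject_t make_tex(cudaArray *arr)
{
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeArray;
    resDesc.res.array.array = arr;

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);
    return tex;
}

// Usage: same kernel, different sources; destroy objects when done with
// cudaDestroyTextureObject().
// gpu_kernel<<<grid, block>>>(make_tex(d_array_a), x, y);
// gpu_kernel<<<grid, block>>>(make_tex(d_array_b), x, y);
```

Unlike rebinding a global reference, texture objects carry no hidden global state, so kernels reading different textures can be launched concurrently.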



+1








