Can CUDA kernels be virtual functions?
The question is pretty simple, but let me give you an overview of my structure. I have an abstract class AbstractScheme
representing the type of computation (a kind of discretization for the equation, but that doesn't matter here). Every implementation must provide a method returning the scheme's name and must implement a protected function that is the CUDA kernel. The base abstract class provides a public method that launches the CUDA kernel and returns the time it takes to complete.
#include <string>

class AbstractScheme
{
public:
    /**
     * @return The name of the scheme.
     */
    virtual std::string name() const = 0;

    /**
     * Copies the input to the device,
     * computes the number of blocks and threads,
     * launches the kernel,
     * copies the output back to the host,
     * and measures the time to do all of this.
     *
     * @return The number of milliseconds taken to perform
     *         the whole operation.
     */
    double doComputation(const float* input, float* output, int nElements)
    {
        // Does a lot of things and calls this->kernel().
    }

protected:
    /**
     * CUDA kernel which does the computation.
     * Must be implemented by every concrete scheme.
     */
    virtual __global__ void kernel(const float*, float*, int) = 0;
};
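The point of this design is that calling code can handle every scheme uniformly through the base class, along these lines (a hypothetical usage sketch; runAll and the variable names are my own, not part of the actual code):

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical driver: each scheme is benchmarked through the
// AbstractScheme interface, without knowing the concrete type.
void runAll(const std::vector<AbstractScheme*>& schemes,
            const float* input, float* output, int nElements)
{
    for (std::size_t i = 0; i < schemes.size(); ++i)
    {
        double ms = schemes[i]->doComputation(input, output, nElements);
        std::cout << schemes[i]->name() << ": " << ms << " ms" << std::endl;
    }
}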
I also have several implementations of this base class. But when I try to compile with nvcc 7.0, I get the following error, referring to the line where kernel is declared in AbstractScheme (the last line of the listing above):
myfile.cu(60): error: illegal combination of memory qualifiers
I couldn't find any resource saying that kernels cannot be virtual functions, but I have a feeling this is the problem. Can you explain the reason for this? I clearly understand how and why __device__ functions cannot be virtual (a virtual call is dispatched through a pointer stored in the object, which points to an actual [host] function, and you cannot call such a function from device code), but I am not sure about __global__ functions.
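For what it's worth, the only workaround I can think of is to keep the virtual dispatch entirely on the host side: replace the pure virtual __global__ member with an ordinary virtual host method that launches a non-member kernel. A minimal sketch of what I mean, assuming AbstractScheme declared virtual void launchKernel(const float*, float*, int, int, int) = 0; instead of kernel() (UpwindScheme, upwindKernel, and launchKernel are made-up names):

// Non-member kernel: a free __global__ function instead of a
// virtual member (placeholder body for illustration).
__global__ void upwindKernel(const float* input, float* output, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        output[i] = input[i]; // the real scheme computation goes here
}

class UpwindScheme : public AbstractScheme
{
public:
    virtual std::string name() const { return "upwind"; }

protected:
    // Ordinary host-side virtual function: doComputation() would
    // call this instead of invoking a virtual __global__ member.
    virtual void launchKernel(const float* input, float* output,
                              int nElements, int blocks, int threads)
    {
        upwindKernel<<<blocks, threads>>>(input, output, nElements);
    }
};

This compiles because the __global__ function is a free function, and the only thing overridden is a normal host member.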
EDIT: Part of the reasoning I sketched above is wrong. Please read the comments to understand why.