Change OpenMP to TBB
#pragma omp parallel for private(x, y)
for (int j = 0; j < nDstSizeY; j++)
{
    for (int i = 0; i < nDstSizeX; i++)
    {
        x = MapX.at<float>(j, i);
        y = MapY.at<float>(j, i);
        if (nSrcType == CV_8UC1)
        {
            Dst.at<uchar>(j, i) = Bilinear8UC1(Src, x, y);
        }
        else
        {
            Dst.at<Vec3b>(j, i) = Bilinear8UC3(Src, x, y);
        }
    }
}
I want to convert this to TBB, but I don't know how to handle the local variables (private(x, y) in OpenMP), and my program doesn't run any faster. My TBB code looks like this:
tbb::parallel_for(0, nDstSizeY, [&](int j) {
    for (int i = 0; i < nDstSizeX; i++)
    {
        x = MapX.at<float>(j, i);
        y = MapY.at<float>(j, i);
        if (nSrcType == CV_8UC1)
        {
            Dst.at<uchar>(j, i) = Bilinear8UC1(Src, x, y);
        }
        else
        {
            Dst.at<Vec3b>(j, i) = Bilinear8UC3(Src, x, y);
        }
    }
});
How can I fix this? Sorry for my bad English.
This translation to TBB is incorrect because x and y are shared between all threads through the [&] capture. If you want to keep the effect of private(x, y) when translating to TBB, add them explicitly to the lambda capture by value, [&, x, y](int j) (and mark the lambda mutable so the per-thread copies can be assigned to). Or, better, simply declare x and y as local variables inside the lambda. Otherwise you get a data race on the shared x and y.
Another tip is to use blocked_range2d, which can enable additional cache optimizations:
tbb::parallel_for(tbb::blocked_range2d<int>(0, nDstSizeY, 0, nDstSizeX),
    [&](const tbb::blocked_range2d<int>& r) {
        for (int j = r.rows().begin(); j < r.rows().end(); j++)
            for (int i = r.cols().begin(); i < r.cols().end(); i++)
            {
                float x = MapX.at<float>(j, i);  // note: locally declared variables,
                float y = MapY.at<float>(j, i);  // and float, not int, to match the map type
                if (nSrcType == CV_8UC1)
                    Dst.at<uchar>(j, i) = Bilinear8UC1(Src, x, y);
                else
                    Dst.at<Vec3b>(j, i) = Bilinear8UC3(Src, x, y);
            }
    });