Stabilizing video in OpenCV
I am trying to implement video stabilization using the OpenCV videostab module. Since I am processing a stream, I need the motion between two consecutive frames. After studying the documentation, I decided to do it this way:
estimator = new cv::videostab::MotionEstimatorRansacL2(cv::videostab::MM_TRANSLATION);
keypointEstimator = new cv::videostab::KeypointBasedMotionEstimator(estimator);
bool res;
auto motion = keypointEstimator->estimate(this->firstFrame, thisFrame, &res);
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
where firstFrame and thisFrame are fully initialized frames. The problem is that estimate always returns a 3x3 matrix in which only the last value (matrix[8]) changes from frame to frame. Am I using the videostab objects correctly, and how can I apply this matrix to a frame to get the stabilized result?
I'm new to OpenCV, but this is how I solved the problem. The bug is in the line:
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
For me the motion matrix is of 64-bit double type (check here), so copying its raw bytes into a std::vector<float> of 32-bit floats garbles the values. To fix the problem, replace that line with:
std::vector<float> matrix;
for (auto row = 0; row < motion.rows; row++) {
    for (auto col = 0; col < motion.cols; col++) {
        // motion is a double matrix, so read doubles and narrow to float.
        matrix.push_back(static_cast<float>(motion.at<double>(row, col)));
    }
}
I tested the estimator by running it on a duplicated set of points, and it gives the expected result: most entries are close to 0.0, while matrix[0], matrix[4], and matrix[8] are 1.0 (the identity transform). Running the original float-vector code on the same input reproduces the garbage values shown in the question.