OpenCV: reading video with multiple cores

I am working on a program that does simple analysis (color, etc.) of videos, in particular films. Since movies are often thousands of frames long, I figured that simply looping over the whole file with cap.read() to capture each frame one by one would be far too slow.

I would like to use multiple cores or processes to read and store the relevant information from each frame for specific sections of the video: for example, one core reads and analyzes frames from the first quarter of the video, while other cores do the same for the remaining parts, and the information is merged once all frames have been read. How can I do that?

I am relatively new to OpenCV and to multithreading/multiprocessing, but I really want to learn. Even just pointing me to some resources would be appreciated!
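
To make the idea concrete, here is a rough, untested sketch of what I have in mind, using Python's multiprocessing module. The video path and the mean-color "analysis" are just placeholders, and I am not sure whether seeking with CAP_PROP_POS_FRAMES in each worker is the right way to split the work:

```python
import cv2
from multiprocessing import Pool

VIDEO_PATH = "movie.mp4"   # placeholder path
NUM_WORKERS = 4

def analyze_range(frame_range):
    """Open the video in this process and analyze one contiguous frame range."""
    start, end = frame_range
    cap = cv2.VideoCapture(VIDEO_PATH)        # each process needs its own capture
    cap.set(cv2.CAP_PROP_POS_FRAMES, start)   # seek to the first frame of this chunk
    results = []
    for _ in range(start, end):
        ok, frame = cap.read()
        if not ok:
            break
        # placeholder analysis: mean BGR color of the frame
        results.append(frame.mean(axis=(0, 1)))
    cap.release()
    return results

if __name__ == "__main__":
    cap = cv2.VideoCapture(VIDEO_PATH)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()

    # split the frame indices into NUM_WORKERS roughly equal chunks
    step = total // NUM_WORKERS
    ranges = [(i * step, total if i == NUM_WORKERS - 1 else (i + 1) * step)
              for i in range(NUM_WORKERS)]

    with Pool(NUM_WORKERS) as pool:
        chunks = pool.map(analyze_range, ranges)

    # merge per-chunk results back into frame order
    all_results = [r for chunk in chunks for r in chunk]
    print(len(all_results), "frames analyzed")
```

Is something like this reasonable, or is there a better-established pattern for reading one video from several processes?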
