Why do VideoClip frames change when written to a video file?
I wrote the following code:
from moviepy.editor import *
from PIL import Image

clip = VideoFileClip("video.mp4")
video = CompositeVideoClip([clip])
video.write_videofile("video_new.mp4", fps=clip.fps)
Then, to check whether the frames had changed (and if so, which function changed them), I extracted the first frame of "clip", "video", and "video_new.mp4" and compared them:
clip1 = VideoFileClip("video_new.mp4")
img1 = clip.get_frame(0)
img2 = video.get_frame(0)
img3 = clip1.get_frame(0)
a = img1[0, 0, 0]
b = img2[0, 0, 0]
c = img3[0, 0, 0]
I found that a = 24 and b = 24, but c = 26. In fact, when I ran a loop comparing the full arrays, "img1" and "img2" were identical, but "img3" was different. I suspect write_videofile is responsible for the change, but I don't know why. Can someone explain this, and suggest a way to write clips to disk without changing their frames?
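As an aside, NumPy can compare whole frames without a manual loop. A small sketch with synthetic arrays standing in for the decoded frames (the 24 → 26 shift mimics the one in the question):

```python
import numpy as np

# Synthetic stand-ins for two decoded frames (height x width x RGB).
frame_a = np.full((4, 4, 3), 24, dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[0, 0, 0] = 26  # one channel nudged, like the 24 -> 26 shift

print(np.array_equal(frame_a, frame_a.copy()))  # True: bit-identical frames
print(np.array_equal(frame_a, frame_b))         # False: the frame drifted
# Largest per-channel difference between the two frames:
print(int(np.abs(frame_a.astype(int) - frame_b.astype(int)).max()))  # 2
```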
PS: I have read the documentation for 'VideoFileClip', 'FFMPEG_VideoWriter', and 'FFMPEG_VideoReader' but could not find anything useful. The code I'm working on needs to read back exactly the pixel data that was written, so please suggest a way to do that.
Like JPEG, MPEG-4 uses lossy compression, so it's no surprise that the frames read from "video_new.mp4" aren't exactly identical to those in "video.mp4". Beyond the loss from compression itself, there are also variations that arise from the wide variety of encoding parameters used by the programs that write MPEG data.
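The JPEG analogy is easy to verify with Pillow: round-tripping a noisy image through JPEG changes its pixels, while a PNG round-trip is bit-exact. A sketch, independent of the question's files:

```python
import io

import numpy as np
from PIL import Image

# A random RGB image: worst case for lossy compression.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
img = Image.fromarray(pixels)

def roundtrip(image, fmt):
    """Encode the image to `fmt` in memory and decode it back to an array."""
    buf = io.BytesIO()
    image.save(buf, format=fmt)
    buf.seek(0)
    return np.asarray(Image.open(buf))

print(np.array_equal(pixels, roundtrip(img, "JPEG")))  # False: lossy
print(np.array_equal(pixels, roundtrip(img, "PNG")))   # True: lossless
```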
If you really need to be able to read the same frame data that you write, you will have to use a different file format, but be careful: your files will be huge!
Choosing a video format depends in part on what the image data is and what you want to do with it. If your data uses 256 colors or fewer, and you're not going to perform transformations that change those colors, a simple GIF is a good choice. Keep in mind, though, that even something like non-integer scaling changes colors.
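The 256-color limit is easy to see with Pillow: saving a many-colored RGB image as GIF forces palette quantization. A sketch:

```python
import io

import numpy as np
from PIL import Image

# Build a 256x256 gradient with 65,536 distinct colors.
pixels = np.zeros((256, 256, 3), dtype=np.uint8)
pixels[..., 0] = np.arange(256, dtype=np.uint8)[None, :]  # red varies by column
pixels[..., 1] = np.arange(256, dtype=np.uint8)[:, None]  # green varies by row
img = Image.fromarray(pixels)
print(len(set(img.getdata())))  # 65536 distinct colors

buf = io.BytesIO()
img.save(buf, format="GIF")     # GIF forces a palette of at most 256 colors
buf.seek(0)
back = Image.open(buf).convert("RGB")
print(len(set(back.getdata())) <= 256)  # True: quantized down to the palette
```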
If you want to analyze image data and transform it in various ways, it makes sense to use a format with better color support than GIF, such as a PNG image stream, which I believe is what Zulko mentions in his answer. FWIW, there is a PNG-related animation format called MNG, but it is neither widely supported nor well known.
Another option is to use a PPM image stream, or perhaps even a YUV stream, which is useful for certain types of analysis and is convenient if you intend to encode as MPEG for final consumption. The PPM format is very simple and easy to work with; YUV is a bit messier, since it is a raw format with no header data, so you need to keep track of the image dimensions and resolution yourself.
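PPM really is trivial: a short ASCII header followed by raw RGB bytes, and a round-trip through it is exact. A sketch using Pillow, which reads and writes PPM natively:

```python
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=(24, 32, 3), dtype=np.uint8)

buf = io.BytesIO()
Image.fromarray(pixels).save(buf, format="PPM")
data = buf.getvalue()
print(data[:2])  # b'P6': the binary-PPM magic number

buf.seek(0)
back = np.asarray(Image.open(buf))
print(np.array_equal(pixels, back))  # True: no compression, no loss
```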
PPM files and YUV streams are large, as they are entirely uncompressed, but of course they can be compressed with standard compression tools if you want to save some space when storing them on disk. OTOH, typical video workflows that use such streams often never write them to disk at all: they are sent through pipes (possibly named pipes), so file size (mostly) doesn't matter.
Although these formats take up a lot of space compared to MPEG-based files, they are far better suited for use as intermediate formats when analyzing and transforming image data, because every time you write and read MPEG you lose a little quality.
I am assuming that you intend to analyze and transform your image data using PIL/Pillow, but you can also work with PPM and YUV streams using the ffmpeg/avconv command-line programs; the ffmpeg family also works happily with sets of individual image files and with GIFs.