PTS and DTS calculation for video and audio frames
I am receiving H.264 encoded video and G.711 PCM encoded audio data from two different streams, to mux / write into a MOV container.
The writer function signatures are as follows:
bool WriteAudio(const unsigned char *pEncodedData, size_t iLength);
bool WriteVideo(const unsigned char *pEncodedData, size_t iLength, bool const bIFrame);
And the function for adding audio and video streams looks like this:
AVStream* AudioVideoRecorder::AddMediaStream(enum AVCodecID codecID) {
    Log("Adding stream: %s.", avcodec_get_name(codecID));
    AVCodecContext* pCodecCtx;
    AVStream* pStream;

    /* find the encoder */
    AVCodec* codec = avcodec_find_encoder(codecID);
    if (!codec) {
        LogErr("Could not find encoder for %s", avcodec_get_name(codecID));
        return NULL;
    }

    pStream = avformat_new_stream(m_pFormatCtx, codec);
    if (!pStream) {
        LogErr("Could not allocate stream.");
        return NULL;
    }
    pStream->id = m_pFormatCtx->nb_streams - 1;
    pStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
    pCodecCtx = pStream->codec;

    switch (codec->type) {
    case AVMEDIA_TYPE_VIDEO:
        pCodecCtx->codec_id = codecID;
        pCodecCtx->bit_rate = VIDEO_BIT_RATE;
        pCodecCtx->width    = PICTURE_WIDTH;
        pCodecCtx->height   = PICTURE_HEIGHT;
        pCodecCtx->gop_size = VIDEO_FRAME_RATE;
        pCodecCtx->pix_fmt  = AV_PIX_FMT_YUV420P;
        m_pVideoStream = pStream;
        break;

    case AVMEDIA_TYPE_AUDIO:
        pCodecCtx->codec_id    = codecID;
        pCodecCtx->sample_fmt  = AV_SAMPLE_FMT_S16;
        pCodecCtx->bit_rate    = 64000;
        pCodecCtx->sample_rate = 8000;
        pCodecCtx->channels    = 1;
        /* Audio packets are timed in samples, so the sample rate is a more
           natural time base than the video frame rate set above. */
        pStream->time_base = (AVRational){1, pCodecCtx->sample_rate};
        m_pAudioStream = pStream;
        break;

    default:
        break;
    }

    /* Some formats want stream headers to be separate. Note the flag
       belongs on the codec context, not the format context. */
    if (m_pOutputFmt->flags & AVFMT_GLOBALHEADER)
        pCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return pStream;
}
Inside WriteAudio(..) and WriteVideo(..), I create an AVPacket with av_init_packet(...) and assign pEncodedData and iLength to packet.data and packet.size. When I print packet.pts and packet.dts, both come out as AV_NOPTS_VALUE.
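Roughly like this (a sketch of the WriteVideo(..) body as described above, not the poster's exact code):

AVPacket packet;
av_init_packet(&packet);
packet.data = (uint8_t*)pEncodedData;
packet.size = (int)iLength;
/* av_init_packet() initializes pts and dts to AV_NOPTS_VALUE, and nothing
   here assigns real timestamps yet -- hence the values printed below. */
Log("pts=%lld dts=%lld", (long long)packet.pts, (long long)packet.dts);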
Now, how do I correctly calculate packet.pts, packet.dts and packet.duration for the audio and video packets so that the two streams stay in sync and play back correctly? I've seen many examples on the internet, but none of them make sense to me. I am new to ffmpeg, and my understanding may be wrong in places; I want to do this properly.
Thanks in advance!
EDIT: There are no B-frames in my video streams, so I think PTS and DTS can be kept the same here.
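If that holds, the DTS can simply mirror the PTS once it has been computed; a one-line sketch:

packet.dts = packet.pts; /* safe only while the stream contains no B-frames */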
PTS / DTS are timestamps; they must be set to the timestamps of the input data. I don't know where your data is coming from, but every input has some timestamp associated with it: typically the timestamps of the input media, or readings from the system clock if you are recording live from a sound card + webcam, etc. You must convert those numbers into the expected form and then assign them to AVPacket.pts/dts.
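For instance, here is a minimal sketch of how the packets above could be stamped. It is built on stated assumptions, not the poster's actual code: captureTimeMs is a hypothetical per-frame capture time in milliseconds supplied by the video source, m_iSamplesWritten is a hypothetical running counter of audio samples already written, and for G.711 iLength equals the sample count because the codec stores one byte per sample:

/* Video packet, inside WriteVideo(..), after packet.data/size are set. */
packet.pts = av_rescale_q(captureTimeMs,          /* hypothetical ms clock */
                          (AVRational){1, 1000},
                          m_pVideoStream->time_base);
packet.dts = packet.pts;                          /* no B-frames */
packet.duration = av_rescale_q(1, (AVRational){1, VIDEO_FRAME_RATE},
                               m_pVideoStream->time_base);
packet.stream_index = m_pVideoStream->index;
av_interleaved_write_frame(m_pFormatCtx, &packet);

/* Audio packet, inside WriteAudio(..): G.711 at 8 kHz, 1 byte per sample. */
packet.pts = av_rescale_q(m_iSamplesWritten,      /* samples written so far */
                          (AVRational){1, 8000},
                          m_pAudioStream->time_base);
packet.dts = packet.pts;
packet.duration = av_rescale_q(iLength,           /* samples in this packet */
                               (AVRational){1, 8000},
                               m_pAudioStream->time_base);
packet.stream_index = m_pAudioStream->index;
m_iSamplesWritten += iLength;
av_interleaved_write_frame(m_pFormatCtx, &packet);

av_rescale_q(a, bq, cq) rescales a tick count a from time base bq to time base cq; the essential requirement is that both streams are stamped from clocks sharing a common origin, otherwise audio and video drift apart on playback.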