
muxing.c memory leak

Postby mytree » Tue Feb 19, 2013 6:21 am

Hi, I'm posting to ask about the ffmpeg example "muxing.c".
Here is my build environment:

OS : Windows 7
Platform : Visual Studio 2010
FFmpeg version: 2013-02-17 git-b8bb661 ( Zeranoe's FFmpeg Builds Home Page )

I found that memory usage increased gradually when I ran the following code, which is based on "muxing.c".

Even the encoding open & close functions by themselves increase the memory usage.

Is there a way to solve this problem?

----------------------------------------------------------------------------
source code

Code: Select all
#ifndef WIN32
#error   "This test code is only supported on Windows."
#endif   //   WIN32

#define   _CRT_SECURE_NO_WARNINGS

//--------------------------------------------------------------------------
//      ffmpeg sample settings

#include <Windows.h>
#include <crtdbg.h>                                       //   _CrtSetDbgFlag (CRT debug heap)

#include <stdio.h>
#include <iostream>
#include "stdint.h"                                       //   Integer & "inline" define

#ifndef INT64_C
#define INT64_C(val)   val##i64
#define UINT64_C(val)   val##ui64
#endif   //   INT64_C

extern "C"
{
#include "libavutil/mathematics.h"
#include "libavutil/log.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
}

static char g_szErrorBuf[ AV_ERROR_MAX_STRING_SIZE ] = "";         //   Error String Temp Buffer

//   av_err2str() is redefined here because the stock macro uses a C99 compound
//   literal, which the Visual C++ compiler does not accept.
#ifdef av_err2str
#undef av_err2str
#define av_err2str( errcode )   av_make_error_string( g_szErrorBuf, AV_ERROR_MAX_STRING_SIZE, errcode )
#endif      //   av_err2str

#pragma comment( lib, "avutil.lib" )      //   _av_free
#pragma comment( lib, "avformat.lib" )      //   _avio_close
#pragma comment( lib, "avcodec.lib" )      //   _avcodec_*
#pragma comment( lib, "swscale.lib" )      //   _sws_*

//--------------------------------------------------------------------------

/* 5 seconds stream duration */
#define STREAM_DURATION   5.0   //200.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

static int sws_flags = SWS_BICUBIC;

static float t, tincr, tincr2;
static int16_t *samples;
static int audio_input_frame_size;

/* Add an output stream. */
//   (The original muxing.c version of add_stream() is omitted here; the function
//    below is identical except that it takes AVFormatContext** instead of AVFormatContext*.)

static AVStream *add_stream(AVFormatContext** ppOC, AVCodec **codec, enum AVCodecID codec_id)
{
   AVFormatContext* pOC = *ppOC;
    AVCodecContext *c;
    AVStream *st;

    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n", avcodec_get_name(codec_id));
        exit(1);
    }

    st = avformat_new_stream( pOC, *codec);
    if (!st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    st->id = pOC->nb_streams-1;
    c = st->codec;

    switch ((*codec)->type) {
    case AVMEDIA_TYPE_AUDIO:
        st->id = 1;
        c->sample_fmt  = AV_SAMPLE_FMT_S16;
        c->bit_rate    = 64000;
        c->sample_rate = 44100;
        c->channels    = 2;
        break;

    case AVMEDIA_TYPE_VIDEO:
        avcodec_get_context_defaults3(c, *codec);
        c->codec_id = codec_id;

        c->bit_rate = 400000;
        /* Resolution must be a multiple of two. */
        c->width    = 352;
        c->height   = 288;
        /* timebase: This is the fundamental unit of time (in seconds) in terms
         * of which frame timestamps are represented. For fixed-fps content,
         * timebase should be 1/framerate and timestamp increments should be
         * identical to 1. */
        c->time_base.den = STREAM_FRAME_RATE;
        c->time_base.num = 1;
        c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
        c->pix_fmt       = STREAM_PIX_FMT;

      c->qmin            =   2;
      c->qmax            =   31;

        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
            /* just for testing, we also add B frames */
            c->max_b_frames = 2;
        }
        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
            /* Needed to avoid using macroblocks in which some coeffs overflow.
             * This does not happen with normal video, it just happens here as
             * the motion of the chroma plane does not match the luma plane. */
            c->mb_decision = 2;
        }
    break;

    default:
        break;
    }

    /* Some formats want stream headers to be separate. */
    if (pOC->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return st;
}

static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    AVCodecContext *c;
    int ret;

    c = st->codec;

   /* open it */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
      exit(1);
    }

    /* init signal generator */
    t     = 0;
    tincr = (float)( 2 * M_PI * 110.0 / c->sample_rate );
    /* increment frequency by 110 Hz per second */
    tincr2 = (float)( 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate );

    if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
        audio_input_frame_size = 10000;
    else
        audio_input_frame_size = c->frame_size;
    samples = (int16_t*)av_malloc(audio_input_frame_size *
                        av_get_bytes_per_sample(c->sample_fmt) *
                        c->channels);
    if (!samples) {
        fprintf(stderr, "Could not allocate audio samples buffer\n");
        exit(1);
    }
}

static void close_audio(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);

    av_free(samples);
}

static AVFrame *frame;
static AVPicture src_picture, dst_picture;
static int frame_count;

static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    int ret;
    AVCodecContext *c = st->codec;

   /* open the codec */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* allocate and init a re-usable frame */
    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    /* Allocate the encoded raw picture. */
    ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
        exit(1);
    }

    /* If the output format is not YUV420P, then a temporary YUV420P
     * picture is needed too. It is then converted to the required
     * output format. */
    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate temporary picture: %s\n",
                    av_err2str(ret));
            exit(1);
        }
    }

    /* copy data and linesize picture pointers to frame */
    *((AVPicture *)frame) = dst_picture;
}

static void close_video(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);
   avpicture_free( &src_picture );   //av_free(src_picture.data[0]);
   avpicture_free( &dst_picture );   //av_free(dst_picture.data[0]);
    avcodec_free_frame(&frame);      //av_free(frame);
}

/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
 * 'nb_channels' channels. */
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{
    int j, i, v;
    int16_t *q;

    q = samples;
    for (j = 0; j < frame_size; j++) {
        v = (int)(sin(t) * 10000);
        for (i = 0; i < nb_channels; i++)
            *q++ = v;
        t     += tincr;
        tincr += tincr2;
    }
}

static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0;
    AVFrame *frame = avcodec_alloc_frame();
    int got_packet, ret;

    av_init_packet(&pkt);
    c = st->codec;

    get_audio_frame(samples, audio_input_frame_size, c->channels);
    frame->nb_samples = audio_input_frame_size;
    avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                             (uint8_t *)samples,
                             audio_input_frame_size *
                             av_get_bytes_per_sample(c->sample_fmt) *
                             c->channels, 1);

    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (ret < 0) {
        fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
        exit(1);
    }

    /* Note: returning here skips the avcodec_free_frame(&frame) call at the end
     * of this function, so the AVFrame allocated above is not released on this path. */
    if (!got_packet)
        return;

    pkt.stream_index = st->index;

    /* Write the compressed frame to the media file. */
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret != 0) {
        fprintf(stderr, "Error while writing audio frame: %s\n",
                av_err2str(ret));
        exit(1);
    }
    avcodec_free_frame(&frame);
}

/* Prepare a dummy image. */
static void fill_yuv_image(AVPicture *pict, int frame_index, int width, int height)
{
    int x, y, i;

    i = frame_index;

    /* Y */
    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++)
            pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

    /* Cb and Cr */
    for (y = 0; y < height / 2; y++) {
        for (x = 0; x < width / 2; x++) {
            pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
            pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
        }
    }
}

static void write_video_frame(AVFormatContext *oc, AVStream *st)
{
    int ret;
    static struct SwsContext *sws_ctx;
    AVCodecContext *c = st->codec;

    if (frame_count >= STREAM_NB_FRAMES) {
        /* No more frames to compress. The codec has a latency of a few
         * frames if using B-frames, so we get the last frames by
         * passing the same picture again. */
    } else {
        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
            /* as we only generate a YUV420P picture, we must convert it
             * to the codec pixel format if needed */
            if (!sws_ctx) {
                sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                                         c->width, c->height, c->pix_fmt,
                                         sws_flags, NULL, NULL, NULL);
                if (!sws_ctx) {
                    fprintf(stderr,
                            "Could not initialize the conversion context\n");
                    exit(1);
                }
            }
            fill_yuv_image(&src_picture, frame_count, c->width, c->height);
            sws_scale(sws_ctx,
                      (const uint8_t * const *)src_picture.data, src_picture.linesize,
                      0, c->height, dst_picture.data, dst_picture.linesize);
        } else {
            fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
        }
    }

    if (oc->oformat->flags & AVFMT_RAWPICTURE) {
        /* Raw video case - directly store the picture in the packet */
        AVPacket pkt;
        av_init_packet(&pkt);

        pkt.flags        |= AV_PKT_FLAG_KEY;
        pkt.stream_index  = st->index;
        pkt.data          = dst_picture.data[0];
        pkt.size          = sizeof(AVPicture);

        ret = av_interleaved_write_frame(oc, &pkt);
    } else {
        /* encode the image */
        AVPacket pkt;
        int got_output;

        av_init_packet(&pkt);
        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;

        ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
            exit(1);
        }

        /* If size is zero, it means the image was buffered. */
        if (got_output) {
            if (c->coded_frame->key_frame)
                pkt.flags |= AV_PKT_FLAG_KEY;

            pkt.stream_index = st->index;

            /* Write the compressed frame to the media file. */
            ret = av_interleaved_write_frame(oc, &pkt);
        } else {
            ret = 0;
        }
    }
    if (ret != 0) {
        fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
        exit(1);
    }
    frame_count++;
}

class CTest
{
private:
   AVOutputFormat*            fmt;
    AVFormatContext*         oc;
    AVStream*               audio_st;
   AVStream*               video_st;
    AVCodec*               audio_codec;
   AVCodec*               video_codec;

   double                  audio_pts, video_pts;

public:
   CTest( void ) : fmt( NULL ), oc( NULL ), audio_st( NULL ), video_st( NULL ), audio_codec( NULL ), video_codec( NULL )
      ,audio_pts( 0.0 ), video_pts( 0.0 )
   {
   }

   virtual ~CTest( void )
   {

   }

   bool Open( std::string strFileName )
   {
      const char* pszFileName = strFileName.c_str();
      int iRet;

      /* allocate the output media context */
      avformat_alloc_output_context2(&oc, NULL, NULL, pszFileName);
      if (!oc) {
         printf("Could not deduce output format from file extension: using MPEG.\n");
         avformat_alloc_output_context2(&oc, NULL, "mpeg", pszFileName);
      }
      if (!oc) {
         return false;
      }
      fmt = oc->oformat;

      /* Add the audio and video streams using the default format codecs and initialize the codecs. */
      video_st = NULL;
      audio_st = NULL;

      if (fmt->video_codec != AV_CODEC_ID_NONE) {
         video_st = add_stream(&oc, &video_codec, fmt->video_codec);
      }
      if (fmt->audio_codec != AV_CODEC_ID_NONE) {
         audio_st = add_stream(&oc, &audio_codec, fmt->audio_codec);
      }

      /* Now that all the parameters are set, we can open the audio and
       * video codecs and allocate the necessary encode buffers. */
      if (video_st)
         open_video(oc, video_codec, video_st);
      if (audio_st)
         open_audio(oc, audio_codec, audio_st);

      av_dump_format(oc, 0, pszFileName, 1);

      // open the output file, if needed
      if (!(fmt->flags & AVFMT_NOFILE)) {
         iRet = avio_open(&oc->pb, pszFileName, AVIO_FLAG_WRITE);
         if (iRet < 0) {
            fprintf(stderr, "Could not open '%s': %s\n", pszFileName, av_err2str(iRet));
            return false;
         }
      }

      // Write the stream header, if any.
      iRet = avformat_write_header(oc, NULL);
      if (iRet < 0) {
         fprintf(stderr, "Error occurred when opening output file: %s\n", av_err2str(iRet));
         return false;
      }

      return true;
   }

   void Close( void )
   {
      /* Write the trailer, if any. The trailer must be written before you
       * close the CodecContexts open when you wrote the header; otherwise
       * av_write_trailer() may try to use memory that was freed on
       * av_codec_close(). */
      av_write_trailer(oc);

      /* Close each codec. */
      if (video_st)
         close_video(oc, video_st);
      if (audio_st)
         close_audio(oc, audio_st);

      /* Free the streams. */
      for (unsigned int i = 0; i < oc->nb_streams; i++) {
         av_freep(&oc->streams[i]->codec);
         av_freep(&oc->streams[i]);

      }

      if (!(fmt->flags & AVFMT_NOFILE))
         /* Close the output file. */
         avio_close(oc->pb);

      /* free the stream */
      av_free(oc);
      
   }

   void Write( void )
   {
      if (frame)
         frame->pts = 0;
      for (;;) {
         /* Compute current audio and video time. */
         if (audio_st)
            audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
         else
            audio_pts = 0.0;

         if (video_st)
            video_pts = (double)video_st->pts.val * video_st->time_base.num /
                     video_st->time_base.den;
         else
            video_pts = 0.0;

         if ((!audio_st || audio_pts >= STREAM_DURATION) &&
            (!video_st || video_pts >= STREAM_DURATION))
            break;

         /* write interleaved audio and video frames */
         if (!video_st || (video_st && audio_st && audio_pts < video_pts)) {
            write_audio_frame(oc, audio_st);
         } else {
            write_video_frame(oc, video_st);
            frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
         }
      }
   }
};

int main(int argc, char **argv)
{
   _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);      // Memory Leak Check
   //_CrtSetBreakAlloc( 143 );

   av_register_all();         // Initialize libavcodec, and register all codecs and formats.
   
   const char *filename = "Muxing.mpg";
   const DWORD dwLimitedTick = 30000;

   char szEnd;
   bool bIsOpen;
   CTest test;
   DWORD dwStartTick, dwEndTick, dwDiffTick;
   
   dwStartTick = GetTickCount();
   
   do
   {
      bIsOpen = test.Open( filename );
   
      if ( bIsOpen != false )
      {
         test.Write();
         test.Close();
      }

      dwEndTick = GetTickCount();
      dwDiffTick = dwEndTick - dwStartTick;

      Sleep( 1 );

   } while ( dwDiffTick < dwLimitedTick );

   std::cin >> szEnd;
   
    return 0;
}
mytree
 
Posts: 3
Joined: Tue Feb 19, 2013 5:51 am

Re: muxing.c memory leak

Postby burek » Wed Feb 20, 2013 10:21 pm

Hi,

Is there any way you could shorten your test source code? I'm afraid no one will want to analyze that much code, unfortunately. Help us to help you :)
burek
 
Posts: 867
Joined: Mon May 09, 2011 10:16 pm
Location: Serbia

Re: muxing.c memory leak

Postby mytree » Fri Feb 22, 2013 12:01 am

Hi, thank you for responding to my message.
The functions above are the same as in "muxing.c", the ffmpeg encoding example;
I only adapted the code to my environment.
Then I repeated the encoding open & close functions for a period of time.

The test code has the following structure:

void main( void )
{
    repeat for 30 seconds
    {
        encoding open
        data encoding
        encoding close
    }
}

Memory usage increased gradually while the encoding was repeated, but when the program terminated, no memory leak was reported.
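
One way to put a number on that growth per iteration (just a sketch; the PrintWorkingSet helper and the Psapi.lib dependency are not part of the original test) is to log the process working set, which is roughly what Task Manager reports:

Code: Select all
#include <Windows.h>
#include <Psapi.h>
#include <stdio.h>

#pragma comment( lib, "Psapi.lib" )

//   Print the current working set of this process, in KB.
static void PrintWorkingSet( const char* pszLabel )
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof( pmc ) };

    if ( GetProcessMemoryInfo( GetCurrentProcess(), &pmc, sizeof( pmc ) ) )
        printf( "%s: working set = %u KB\n", pszLabel, (unsigned)( pmc.WorkingSetSize / 1024 ) );
}

Calling it once per loop iteration in main() shows whether each open/write/close cycle leaves memory behind.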
mytree
 
Posts: 3
Joined: Tue Feb 19, 2013 5:51 am

Re: muxing.c memory leak

Postby burek » Tue Mar 05, 2013 12:07 am

I've created a quick bug tracker issue, so you can add anything you find missing and monitor the issue for resolution.
burek
 
Posts: 867
Joined: Mon May 09, 2011 10:16 pm
Location: Serbia

Re: muxing.c memory leak

Postby mytree » Wed Mar 06, 2013 7:11 am

Hi, sorry for the late reply. I checked the link.
I saw that cehoyos requested valgrind output for the memory leak.
At first I considered compiling my sample on Linux,
but I'm a Windows user and I'm not experienced with valgrind or Linux,
so it would be hard for me to compile the ffmpeg library on Linux,
adapt my sample code to Linux, and run it under valgrind.

Also, no memory leak is reported when the program terminates;
the memory increase only shows up while the program is running.
I wonder whether it is possible to compare memory snapshots taken with valgrind
while the program is running.
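
On Windows, the closest equivalent (a sketch; MeasureOneCycle is a made-up name, and this assumes a _DEBUG build with the _CrtSetDbgFlag call already in main()) is to diff CRT debug-heap snapshots taken around one open/write/close cycle:

Code: Select all
#include <crtdbg.h>

//   Diff two CRT debug-heap snapshots taken around one encode cycle.
//   'test' is the CTest instance from main() above.
static void MeasureOneCycle( CTest& test )
{
    _CrtMemState before, after, diff;

    _CrtMemCheckpoint( &before );               //   snapshot before the cycle

    if ( test.Open( "Muxing.mpg" ) )
    {
        test.Write();
        test.Close();
    }

    _CrtMemCheckpoint( &after );                //   snapshot after the cycle

    if ( _CrtMemDifference( &diff, &before, &after ) )
        _CrtMemDumpStatistics( &diff );         //   dump how much this cycle still holds
}

Note that this only sees allocations made through the program's own CRT, so memory allocated inside the ffmpeg DLLs may not show up here even though Task Manager shows the overall growth.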

I wanted to describe the leak on the ticket,
but I don't know how to use the Trac system and I have no account on that site.

Please tell me your email and I will send my sample project.

It is a Visual Studio 2010 project.
When the program terminates, no memory leak is reported,
but while the program is running, memory usage increases gradually.
All I run are the encoding open, write, and close functions from ffmpeg's muxing.c example.
I ran the program and observed the problem in Windows Task Manager.

Please help me.
mytree
 
Posts: 3
Joined: Tue Feb 19, 2013 5:51 am

Re: muxing.c memory leak

Postby burek » Wed Mar 06, 2013 12:24 pm

Check the ticket again :)
Fixed by Nicolas George.


Now, if you are not compiling your own ffmpeg, you'll have to wait for the Windows builds to be built from this latest version, so that you get an ffmpeg with the issue fixed on Windows.
burek
 
Posts: 867
Joined: Mon May 09, 2011 10:16 pm
Location: Serbia

Re: muxing.c memory leak

Postby randyboy » Tue Dec 03, 2013 8:02 am

After the following line:
Code: Select all
ret = av_interleaved_write_frame(formatContext, &pkt);


add this:
Code: Select all
av_free_packet(&pkt);
randyboy
 
Posts: 1
Joined: Tue Dec 03, 2013 7:53 am


Return to Tutorials

Who is online

Users browsing this forum: No registered users and 1 guest