Real-Time Video Capture with FFmpeg

Posted: June 11th, 2013

Working on a distributed team means that the best way to share new results is often via video captures of simulations. Previously I would do this by dumping uncompressed frames from OpenGL to disk and then compressing them with FFmpeg. I prefer this over tools like Fraps because it gives more control over compression quality, and has no watermarking or time limits.

The problem with this method is simply that saving uncompressed frames generates a large amount of data that quickly fills up the write cache and slows down the whole system during capture; it also makes FFmpeg disk-bound on reads during encoding.

Thankfully there is a better alternative: by using a direct pipe between the app and FFmpeg you can avoid this disk I/O entirely. I couldn't find a concise example of this on the web, so here's how to do it in a Win32 GLUT app.

At startup:

#include <stdio.h>

// start ffmpeg telling it to expect raw rgba 720p-60hz frames
// -i - tells it to read frames from stdin
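// -crf 21 sets the quality (lower values mean higher quality),
// and -vf vflip flips each frame vertically, since glReadPixels
// returns rows bottom-to-top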
const char* cmd = "ffmpeg -r 60 -f rawvideo -pix_fmt rgba -s 1280x720 -i - "
                  "-threads 0 -preset fast -y -pix_fmt yuv420p -crf 21 -vf vflip output.mp4";

// open pipe to ffmpeg's stdin in binary write mode
FILE* ffmpeg = _popen(cmd, "wb");

int* buffer = new int[width*height]; // one int per RGBA pixel (4 bytes)
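One thing the snippet above doesn't do is check for failure: _popen() returns NULL if the pipe can't be created (for example, if ffmpeg isn't on the PATH), so it's worth a quick guard before writing any frames:

if (!ffmpeg) // _popen() returns NULL on failure
{
    fprintf(stderr, "failed to open pipe to FFmpeg\n");
    exit(1); // exit() needs <stdlib.h>
}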

After rendering each frame, grab back the framebuffer and send it straight to the encoder:

glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

fwrite(buffer, sizeof(int)*width*height, 1, ffmpeg);
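In a GLUT app these two calls typically sit at the end of the display callback, after drawing and before the buffer swap, so you capture the freshly rendered back buffer. Roughly like this, where drawScene() stands in for whatever rendering your app does:

void display()
{
    drawScene(); // placeholder for the app's own rendering

    // glReadPixels reads the back buffer by default in a
    // double-buffered context, so capture before swapping
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    fwrite(buffer, sizeof(int)*width*height, 1, ffmpeg);

    glutSwapBuffers();
}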

When you're done, just close the stream as follows:

_pclose(ffmpeg);

_pclose() closes the pipe and waits for FFmpeg to finish encoding before returning.
With these settings FFmpeg generates a nice H.264 compressed mp4 file, and almost manages to keep up with my real-time simulations.

This has vastly improved my workflow, so I hope someone else finds it useful.

Update: Added -pix_fmt yuv420p to the output params to generate files compatible with Windows Media Player and Quicktime.


5 Comments on “Real-Time Video Capture with FFmpeg”

  1. Anvesh Manne said at 2:08 pm on July 8th, 2013:

    Once I am done writing all the frames, how do I tell the ffmpeg program that I have reached the end of my frames?


  2. mmack said at 10:14 pm on July 8th, 2013:

    Hi Anvesh, you just need to call _pclose() on the stream. I updated the post to include this step.


  3. Mads Ravn said at 6:38 pm on July 28th, 2013:

    Video capture can also be done by having FFmpeg hook directly onto the X server.

    It is explained here
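    For example, an illustrative command using FFmpeg's x11grab input device (assuming display :0.0 and a 1280x720 capture region) might look like:

    ffmpeg -f x11grab -video_size 1280x720 -framerate 60 -i :0.0 output.mp4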

  4. GL: How to capture OpenGL to video on the fly | Here be dragons said at 6:33 am on January 29th, 2015:

    […] However, there is a way to save both time and space a lot with a minimal code change. Please see Real-Time Video Capture with FFmpeg by Miles Macklin for the […]

  5. Dries said at 10:31 am on February 12th, 2015:

    Thanks for this post. I do have a question: I want to do this with multiple inputs (one for video like you, and one for audio). Both would need to be written in the same way with fwrite(), because both streams are generated in real time.

    Do you know how I can do this?
