Working on a distributed team means that often the best way to share new results is via video captures of simulations. Previously I would do this by dumping uncompressed frames from OpenGL to disk, and then compressing with FFmpeg. I prefer this over tools like Fraps because it gives more control over compression quality, and has no watermarking or time limits.
The problem with this method is that saving uncompressed frames generates a large amount of data, which quickly fills the write cache and slows down the whole system during capture; it also makes FFmpeg disk-bound on reads during encoding.
Thankfully there is a better alternative: by opening a direct pipe between the app and FFmpeg you can avoid this disk I/O entirely. I couldn't find a concise example of this on the web, so here's how to do it in a Win32 GLUT app.
#include <stdio.h>

// start ffmpeg telling it to expect raw rgba 720p-60hz frames
// -i - tells it to read frames from stdin
const char* cmd = "ffmpeg -r 60 -f rawvideo -pix_fmt rgba -s 1280x720 -i - "
                  "-threads 0 -preset fast -y -pix_fmt yuv420p -crf 21 -vf vflip output.mp4";

// open pipe to ffmpeg's stdin in binary write mode
FILE* ffmpeg = _popen(cmd, "wb");

int* buffer = new int[width*height];
After rendering each frame, grab back the framebuffer and send it straight to the encoder:
glutSwapBuffers();

glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
fwrite(buffer, sizeof(int)*width*height, 1, ffmpeg);
When you're done, just close the stream as follows:
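Since the pipe was opened with `_popen`, the matching call is `_pclose`, which flushes the stream and waits for FFmpeg to finish encoding:

```cpp
_pclose(ffmpeg);   // flush the pipe and wait for ffmpeg to exit
delete[] buffer;   // free the frame buffer allocated earlier
```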
With these settings FFmpeg generates a nice H.264 compressed mp4 file, and almost manages to keep up with my real-time simulations.
This has vastly improved my workflow, so I hope someone else finds it useful.
Update: Added -pix_fmt yuv420p to the output params to generate files compatible with Windows Media Player and Quicktime.
Update: For OSX / Linux, change:
FILE* ffmpeg = _popen(cmd, "wb");

into
FILE* ffmpeg = popen(cmd, "w");
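The same pattern works anywhere `popen()` is available. A minimal self-contained sketch of the pipe loop, with `cat > /dev/null` as a hypothetical stand-in for the real FFmpeg command line and a zeroed buffer in place of `glReadPixels`:

```cpp
#include <stdio.h>
#include <stdlib.h>

// Stream `frames` zero-filled RGBA frames of width x height into the
// stdin of the child process launched by `cmd`. Returns the total
// number of bytes written, or -1 if the pipe could not be opened.
long stream_frames(const char* cmd, int frames, int width, int height) {
    FILE* sink = popen(cmd, "w");   // POSIX: "w", not "wb"
    if (!sink) return -1;

    size_t frame_bytes = (size_t)width * height * 4;   // 4 bytes per RGBA pixel
    unsigned char* buffer = (unsigned char*)calloc(frame_bytes, 1);

    long written = 0;
    for (int i = 0; i < frames; ++i) {
        // In the real capture, glReadPixels() would fill `buffer` here.
        written += (long)fwrite(buffer, 1, frame_bytes, sink);
    }

    free(buffer);
    pclose(sink);   // flush the pipe and wait for the child to exit
    return written;
}
```

The key point is that the frames never touch the disk: the OS hands them directly from the writer's buffer to the child process.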