Is there a way of using ffmpeg in a C# app?

I am using ffmpeg from ffmpeg.org. When I run `ffmpeg -y -f vfwcap -r 25 -i 0 out.mp4` on the command line, I can grab the video from my webcam and write it to the out.mp4 file. However, I can't see that stream anywhere while it is being captured. I thought about writing a simple wrapper in C# built on ffmpeg's functionality. So far I have found the post mentioned on Stack before, but nothing about displaying the data live rather than saving it to a file. Does anyone have any experience with this? For example, can I 'draw' the data received from the webcam onto a PictureBox or some other component? Thanks!

You can use the MediaElement or MediaPlayer controls.

MediaElement is a UIElement that is supported by the layout system and can be consumed as the content of many controls. It is also usable in Extensible Application Markup Language (XAML) as well as code. MediaPlayer, on the other hand, is designed for Drawing objects and lacks layout support. Media loaded using a MediaPlayer can only be presented using a VideoDrawing or by directly interacting with a DrawingContext. MediaPlayer cannot be used in XAML.

  • MediaElement

Sample XAML:

<MediaElement Source="path\to\out.mp4" Name="myMediaElement" 
     Width="450" Height="250" LoadedBehavior="Manual" UnloadedBehavior="Stop" Stretch="Fill" 
     MediaOpened="Element_MediaOpened" MediaEnded="Element_MediaEnded"/>
  • MediaPlayer
    // Create a MediaPlayer and open the media file.
    MediaPlayer player = new MediaPlayer();
    player.Open(new Uri(@"sampleMedia\xbox.wmv", UriKind.Relative));

    // Create a VideoDrawing that renders the player's output into a
    // 100x100 rectangle. To show it on screen, host the drawing in a
    // DrawingImage or DrawingBrush, or render it in a DrawingContext.
    VideoDrawing aVideoDrawing = new VideoDrawing();
    aVideoDrawing.Rect = new Rect(0, 0, 100, 100);
    aVideoDrawing.Player = player;

    // Play the video.
    player.Play();

Multimedia Overview on MSDN

One of the comments on the post you linked says:

How about writing a C++/CLI wrapper around ffmpeg's native interface and then calling your wrapper interface from your application?

I think that is exactly what you want to do. (Note that FFmpeg has built fine with recent versions of Visual Studio for years now, so the response to that comment in the linked post no longer applies.)

You would basically create a camera input (this lives in libavdevice), and then encode it to H.264 in an MP4 container (see output_example.c). To get a live display, you would take the data generated by the vfwcap source and decode it (using the "rawvideo" decoder in libavcodec). This gives you an AVFrame, which has the data pointers needed to display the image in any native UI element in your application, typically using Direct3D or OpenGL. Read the documentation to learn more about all of this.