How to use FFmpeg on Python/Windows 10 with a pipe for screen recording?

I want to record the screen with ffmpeg, since it seems to be the only tool that can record a region of the screen together with the mouse cursor.

The code below is adapted from "i want to display mouse pointer in my recording", but it does not work on a Windows 10 (x64) setup (with Python 3.6).

#!/usr/bin/env python3

# ffmpeg -y -pix_fmt bgr0 -f avfoundation -r 20 -t 10 -i 1 -vf scale=w=3840:h=2160 -f rawvideo /dev/null

import sys
import cv2
import time
import subprocess
import numpy as np

w,h = 100, 100

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""

    #ffmpeg -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 640x480 -show_region 1 -i desktop output.mkv #CODE THAT ACTUALLY WORKS WITH FFMPEG CLI

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -show_region 1 -i desktop -f image2pipe, -pix_fmt bgr24 -vcodec rawvideo -an -sn' 

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    #out, err = proc.communicate()
    while True:
        frame = proc.stdout.read(w*h*3)
        yield np.frombuffer(frame, dtype=np.uint8).reshape((h,w,3))

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames/(time.time()-start)
    print(f'FPS: {fps}')


cv2.destroyAllWindows()

Using 'cmd' as shown above, I get the following error:

b"ffmpeg version git-2020-08-31-4a11a6f Copyright (c) 2000-2020 the FFmpeg developers\r\n  built with gcc 10.2.1 (GCC) 20200805\r\n  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --enable-libsvtav1 --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\r\n  libavutil      56. 58.100 / 56. 58.100\r\n  libavcodec     58.101.101 / 58.101.101\r\n  libavformat    58. 51.101 / 58. 51.101\r\n  libavdevice    58. 11.101 / 58. 11.101\r\n  libavfilter     7. 87.100 /  7. 87.100\r\n  libswscale      5.  8.100 /  5.  8.100\r\n  libswresample   3.  8.100 /  3.  8.100\r\n  libpostproc    55.  8.100 / 55.  8.100\r\nTrailing option(s) found in the command: may be ignored.\r\n[gdigrab @ 0000017ab857f100] Capturing whole desktop as 100x100x32 at (10,20)\r\nInput #0, gdigrab, from 'desktop':\r\n  Duration: N/A, start: 1599021857.538752, bitrate: 9612 kb/s\r\n    Stream #0:0: Video: bmp, bgra, 100x100, 9612 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc\r\n**At least one output file must be specified**\r\n"

This is the content of proc (and also of proc.communicate). The program crashes right after trying to reshape this message into a 100x100 image.

I don't want an output file. I need to use a Python subprocess with a pipe so that the screen frames are delivered directly to my Python code, with no file I/O at all.

If I try the following instead:

cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe'

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)

then 'frame', inside the 'while True' loop, is filled with b''.

I also tried the following libraries without success, since I simply could not find a way to capture the mouse cursor, or the screen at all: https://github.com/abhiTronix/vidgear, https://github.com/kkroening/ffmpeg-python

What am I missing? Thanks.

You are missing the - (or pipe:, or pipe:1) that tells ffmpeg to write to the pipe, as in:

ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe -

See the FFmpeg pipe protocol documentation.
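Putting that fix together, a minimal sketch of a working capture loop might look like the following. The ffmpeg path, frame rate, and capture region are assumptions taken from the question; note also that stderr should not be merged into stdout (as the question's code does with stderr=subprocess.STDOUT), or ffmpeg's log text gets mixed into the raw frame bytes. Passing the command as a list avoids shell quoting issues.

```python
import subprocess
import numpy as np

W, H = 100, 100  # capture region size, must match -video_size

# Assumption: ffmpeg is on PATH; otherwise use the full path to ffmpeg.exe.
cmd = [
    "ffmpeg",
    "-f", "gdigrab",            # Windows screen-capture input device
    "-framerate", "30",
    "-offset_x", "10",
    "-offset_y", "20",
    "-video_size", f"{W}x{H}",
    "-show_region", "1",
    "-i", "desktop",
    "-pix_fmt", "bgr24",        # 3 bytes per pixel, OpenCV channel order
    "-vcodec", "rawvideo",
    "-an", "-sn",
    "-f", "rawvideo",           # raw frames, so each read is exactly W*H*3 bytes
    "-",                        # the missing piece: write to stdout
]

def frame_bytes(w, h):
    """Bytes per bgr24 frame: 3 bytes per pixel."""
    return w * h * 3

def grab_frames():
    """Yield HxWx3 uint8 BGR frames read from ffmpeg's stdout."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    n = frame_bytes(W, H)
    try:
        while True:
            buf = proc.stdout.read(n)
            if len(buf) < n:  # ffmpeg exited or the pipe closed
                break
            yield np.frombuffer(buf, dtype=np.uint8).reshape((H, W, 3))
    finally:
        proc.kill()

if __name__ == "__main__":
    import cv2
    for frame in grab_frames():
        cv2.imshow("screenshot", frame)
        if cv2.waitKey(1) == ord("q"):
            break
    cv2.destroyAllWindows()
```

Using -f rawvideo instead of image2pipe also removes the need to parse image headers: every read of exactly W*H*3 bytes is one complete frame.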

@Trmotta IDK, I'm surprised you couldn't use vidgear in the first place, since it is the easiest Python framework available for video processing. I can implement your code more cleanly and in fewer lines with the vidgear APIs, as follows:

# import required libraries
from vidgear.gears import ScreenGear
from vidgear.gears import WriteGear
import cv2


# define dimensions of screen w.r.t to given monitor to be captured
options = {'top': 10, 'left': 20, 'width': 100, 'height': 100}

# define suitable FFmpeg parameters(such as framerate) for writer
output_params = {"-input_framerate":30,}

# open video stream with defined parameters
stream = ScreenGear(monitor=1, logging=True, **options).start()

# Define writer with defined parameters and suitable output filename for e.g. `Output.mp4`
writer = WriteGear(output_filename = 'Output.mp4', logging = True, **output_params)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break


    # {do something with the frame here}

    # write frame to writer
    writer.write(frame)

    # Show output window
    cv2.imshow("Screenshot", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

# safely close writer
writer.close()

The relevant documentation is here: https://abhitronix.github.io/vidgear/gears/screengear/overview/

VidGear documentation: https://abhitronix.github.io/vidgear/gears
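Since the question explicitly wants no output file, the WriteGear step above can simply be dropped. A minimal display-only sketch with ScreenGear alone (same assumed capture region and monitor index as above) would be:

```python
# Capture region mirroring the question's 100x100 area at offset (10, 20).
options = {"top": 10, "left": 20, "width": 100, "height": 100}

def main():
    # Imported here so the sketch can be read without vidgear installed.
    from vidgear.gears import ScreenGear
    import cv2

    # Open the screen-capture stream; no file is ever written.
    stream = ScreenGear(monitor=1, logging=True, **options).start()
    try:
        while True:
            frame = stream.read()
            if frame is None:  # stream ended or failed
                break

            # {do something with the frame here}

            cv2.imshow("Screenshot", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cv2.destroyAllWindows()
        stream.stop()

if __name__ == "__main__":
    main()
```

Each frame arrives as a NumPy array, so it can be fed straight into OpenCV or any other processing code, which matches the question's "no IO at all" requirement.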