Pipe numpy array to virtual video device

I want to pipe images to a virtual video device (e.g. /dev/video0); the images are created inside a loop at the desired frame rate.

In this minimal example, I simply alternate between two arrays in a cv2 window. Now I am looking for a good solution to pipe those arrays to the virtual device.

I have seen that ffmpeg-python can run asynchronously with ffmpeg.run_async(), but so far I have not been able to get anything working with this package.
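
The kind of pipeline I was trying looks roughly like this (an untested sketch; I am not sure the rawvideo/v4l2 options are even right):

import ffmpeg
import numpy as np

# untested sketch: feed raw BGR frames through stdin and let ffmpeg
# forward them to the loopback device
process = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='bgr24',
           s='1440x1080', framerate=25)
    .output('/dev/video0', format='v4l2', pix_fmt='yuv420p')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

img = np.random.uniform(0, 255, (1080, 1440, 3)).astype('uint8')
process.stdin.write(img.tobytes())  # one frame; repeat inside the loop

process.stdin.close()
process.wait()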

Example code without the ffmpeg part:

#!/usr/bin/env python3

import cv2
import numpy as np
import time

window_name = 'virtual-camera'
cv2.namedWindow(window_name, cv2.WINDOW_GUI_EXPANDED)

# two random noise frames to alternate between
img1 = np.random.uniform(0, 255, (1080, 1440, 3)).astype('uint8')
img2 = np.random.uniform(0, 255, (1080, 1440, 3)).astype('uint8')

# show 125 frames at roughly 25 fps (0.04 s per frame)
for i in range(125):
    time.sleep(0.04)
    if i % 2:
        img = img1
    else:
        img = img2
    cv2.imshow(window_name, img)
    cv2.waitKey(1)
cv2.destroyAllWindows()

First, you have to set up a virtual camera, for example with v4l2loopback. See here for how to install it (ignore the usage example).
Then, you can write to the virtual camera as if it were a normal file (that is, have openCV write the images to, say, /dev/video0; how to do that you will have to find out yourself, as I am not an openCV expert).
Finally, you can use ffmpeg-python with /dev/video0 as the input file, do whatever you want with the video, and that's it!
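
A rough sketch of that last step could look like this (untested; the file name 'capture.mp4' and the five-second duration are just placeholders for whatever you actually want to do with the video):

import ffmpeg

# untested sketch: read the loopback device and record five seconds
# of it to a file (any other processing would go here instead)
(
    ffmpeg
    .input('/dev/video0', format='v4l2')
    .output('capture.mp4', t=5)
    .overwrite_output()
    .run()
)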

As was written in the answer above, it is possible to create a dummy device with the package v4l2loopback. Publishing images, videos or the desktop to the dummy device was already easy with ffmpeg, but I want to pipe it directly from the python script - where I capture the images - to the dummy device. I still think it is possible with ffmpeg-python, but I found this great answer from Alp, which sheds light on the darkness. The package pyfakewebcam solves the problem perfectly.

For the sake of completeness, here is my extended minimal working example:

#!/usr/bin/env python3

import time

import cv2
import numpy as np
import pyfakewebcam

WIDTH = 1440
HEIGHT = 1080
DEVICE = '/dev/video0'

fake_cam = pyfakewebcam.FakeWebcam(DEVICE, WIDTH, HEIGHT)

window_name = 'virtual-camera'
cv2.namedWindow(window_name, cv2.WINDOW_GUI_EXPANDED)

# two random noise frames to alternate between
img1 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')
img2 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')

# push 125 frames at roughly 25 fps to the preview window and the virtual device
for i in range(125):
    time.sleep(0.04)
    if i % 2:
        img = img1
    else:
        img = img2
    fake_cam.schedule_frame(img)
    cv2.imshow(window_name, img)
    cv2.waitKey(1)
cv2.destroyAllWindows()
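
One caveat, as far as I understand pyfakewebcam: schedule_frame expects RGB-ordered frames, while openCV works with BGR. It does not matter for the random noise above, but with real images you would convert first, e.g.:

# assumption: pyfakewebcam wants RGB, openCV delivers BGR
fake_cam.schedule_frame(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))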