Capturing video from two cameras in OpenCV at once
How do I capture video from two or more cameras simultaneously (or nearly so) with OpenCV, using the Python API?
I have three webcams, all capable of video streaming, located at /dev/video0, /dev/video1 and /dev/video2.
Using the tutorial as an example, capturing an image from a single camera is simply:
import cv2
cap0 = cv2.VideoCapture(0)
ret0, frame0 = cap0.read()
cv2.imshow('frame', frame0)
cv2.waitKey()
That works fine.
However, if I try to initialize a second camera, attempting to read() from it returns None:
import cv2
cap0 = cv2.VideoCapture(0)
cap1 = cv2.VideoCapture(1)
ret0, frame0 = cap0.read()
assert ret0 # succeeds
ret1, frame1 = cap1.read()
assert ret1 # fails?!
To make sure I hadn't accidentally given OpenCV a bad camera index, I tested each camera index individually, and each one works on its own. For example:
import cv2
#cap0 = cv2.VideoCapture(0)
cap1 = cv2.VideoCapture(1)
#ret0, frame0 = cap0.read()
#assert ret0
ret1, frame1 = cap1.read()
assert ret1 # now it works?!
What am I doing wrong?
Edit: My hardware is a MacBook Pro running Ubuntu. Researching the problem specifically on MacBooks, I found others who have run into it as well, both on OSX and with different kinds of cameras. If I try to access the iSight, both calls in my code fail.
Yes, you are certainly limited by USB bandwidth. Trying to read from both devices at full resolution, you are likely to hit an error like:
libv4l2: error turning on stream: No space left on device
VIDIOC_STREAMON: No space left on device
Traceback (most recent call last):
File "p.py", line 7, in <module>
assert ret1 # fails?!
AssertionError
Whereas if you drop the resolution down to 160x120:
import cv2
cap0 = cv2.VideoCapture(0)
cap0.set(3, 160)  # 3 == cv2.CAP_PROP_FRAME_WIDTH
cap0.set(4, 120)  # 4 == cv2.CAP_PROP_FRAME_HEIGHT
cap1 = cv2.VideoCapture(1)
cap1.set(3, 160)
cap1.set(4, 120)
ret0, frame0 = cap0.read()
assert ret0 # succeeds
ret1, frame1 = cap1.read()
assert ret1 # fails?!
Now it seems to work! I would bet that both of your cameras are attached to the same USB card. You can run the lsusb command to check; it should list something like:
Bus 001 Device 006: ID 046d:081b Logitech, Inc. Webcam C310
Bus 001 Device 004: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 001 Device 007: ID 046d:0990 Logitech, Inc. QuickCam Pro 9000
Bus 001 Device 005: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 001 Device 003: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 001 Device 002: ID 1058:0401 Western Digital Technologies, Inc.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
(Note that both cameras are on the same bus.) If possible, you could add another USB card to your machine to get more bandwidth. I have done this before in order to run multiple cameras at full resolution on one machine, although that was a tower workstation with free motherboard slots; unfortunately you probably don't have that option on a MacBook laptop.
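Another knob worth trying before adding hardware (not mentioned in the answer above, so treat it as an untested sketch): many UVC webcams can deliver a compressed MJPG stream instead of raw YUYV, which needs far less USB bandwidth per camera. In OpenCV you can request it like this, though whether the driver honors the request depends on the camera:
import cv2

cap0 = cv2.VideoCapture(0)
cap1 = cv2.VideoCapture(1)

# Ask both cameras for a compressed MJPG stream; raw YUYV at the same
# resolution needs several times more USB bandwidth.
for cap in (cap0, cap1):
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ret0, frame0 = cap0.read()
ret1, frame1 = cap1.read()
print(ret0, ret1)  # both should be True if the combined bandwidth now fits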
I have used "imutils" and display the webcam feeds as images in a Tkinter window.
import imutils
import cv2
from PIL import Image, ImageTk
Capture the video frames:
#--- WebCam1
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH,300)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT,300)
#--- WebCam2
cap1 = cv2.VideoCapture(1)
cap1.set(cv2.CAP_PROP_FRAME_WIDTH,300)
cap1.set(cv2.CAP_PROP_FRAME_HEIGHT,300)
#--- WebCam3
cap2 = cv2.VideoCapture(2)
cap2.set(cv2.CAP_PROP_FRAME_WIDTH,300)
cap2.set(cv2.CAP_PROP_FRAME_HEIGHT,300)
#--- WebCame4
cap3 = cv2.VideoCapture(3)
cap3.set(cv2.CAP_PROP_FRAME_WIDTH,300)
cap3.set(cv2.CAP_PROP_FRAME_HEIGHT,300)
I create a function read_frame that passes each capture result to Image.fromarray and displays it:
def read_frame():
    webCameShow(cap.read(), display1)
    webCameShow(cap1.read(), display2)
    webCameShow(cap2.read(), display6)
    webCameShow(cap3.read(), display7)
    window.after(10, read_frame)
The final function displays the video on the "imageFrame":
def webCameShow(N, Display):
    _, frameXX = N
    cv2imageXX = cv2.cvtColor(frameXX, cv2.COLOR_BGR2RGBA)
    imgXX = Image.fromarray(cv2imageXX)
    imgtkXX = ImageTk.PhotoImage(image=imgXX)
    Display.imgtk = imgtkXX  # keep a reference so Tkinter does not garbage-collect the image
    Display.configure(image=imgtkXX)
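The snippets above assume a Tkinter window, an "imageFrame" container, and Label widgets named display1, display2, display6 and display7 that the answer never shows; the following is only a guessed minimal sketch of that scaffolding:
import tkinter as tk

window = tk.Tk()
window.title("Multi-webcam preview")

# Container frame and one Label per camera feed (names match the code above;
# the 2x2 grid layout is an assumption).
imageFrame = tk.Frame(window)
imageFrame.grid(row=0, column=0)

display1 = tk.Label(imageFrame)
display1.grid(row=0, column=0)
display2 = tk.Label(imageFrame)
display2.grid(row=0, column=1)
display6 = tk.Label(imageFrame)
display6.grid(row=1, column=0)
display7 = tk.Label(imageFrame)
display7.grid(row=1, column=1)

read_frame()      # start the periodic update loop defined above
window.mainloop()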
Example: a 4-webcam demo (video linked on YouTube in the original answer).
Using OpenCV and two standard USB cameras, I was able to do this with multithreading. Essentially, define one function that opens an OpenCV window and a VideoCapture element. Then create two threads, taking the camera ID and window name as inputs.
import cv2
import threading
class camThread(threading.Thread):
    def __init__(self, previewName, camID):
        threading.Thread.__init__(self)
        self.previewName = previewName
        self.camID = camID
    def run(self):
        print("Starting " + self.previewName)
        camPreview(self.previewName, self.camID)

def camPreview(previewName, camID):
    cv2.namedWindow(previewName)
    cam = cv2.VideoCapture(camID)
    if cam.isOpened():  # try to get the first frame
        rval, frame = cam.read()
    else:
        rval = False

    while rval:
        cv2.imshow(previewName, frame)
        rval, frame = cam.read()
        key = cv2.waitKey(20)
        if key == 27:  # exit on ESC
            break
    cv2.destroyWindow(previewName)

# Create two threads as follows
thread1 = camThread("Camera 1", 1)
thread2 = camThread("Camera 2", 2)
thread1.start()
thread2.start()
A good resource for learning how to do threading in Python: https://www.tutorialspoint.com/python/python_multithreading.htm
This has been bugging me for a while, so I made a library on top of OpenCV that handles multiple cameras and viewports. I ran into a bunch of problems, such as video not being compressed by default, or windows only displaying from the main thread. So far I have been able to display two 720p webcams in real time on Windows.
Try:
pip install CVPubSubs
Then, in Python:
import cvpubsubs.webcam_pub as w
from cvpubsubs.window_sub import SubscriberWindows
t1 = w.VideoHandlerThread(0)
t2 = w.VideoHandlerThread(1)
t1.start()
t2.start()
SubscriberWindows(window_names=['cammy', 'cammy2'],
video_sources=[0,1]
).loop()
t1.join()
t2.join()
It is still fairly new, though, so please let me know about any bugs or unoptimized code.
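If you would rather not pull in a library, here is a minimal sketch (mine, not taken from CVPubSubs) of the pattern that answer hints at: capture in worker threads, but call imshow/waitKey only from the main thread, which is the safe choice for HighGUI backends that dislike GUI calls from other threads:
import queue
import threading

import cv2

def capture_worker(index, q):
    """Grab frames from one camera and keep only the newest one in the queue."""
    cap = cv2.VideoCapture(index)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        try:
            q.get_nowait()  # drop the stale frame so display lag never backs up capture
        except queue.Empty:
            pass
        q.put(frame)
    cap.release()

queues = {0: queue.Queue(maxsize=1), 1: queue.Queue(maxsize=1)}
for idx, q in queues.items():
    threading.Thread(target=capture_worker, args=(idx, q), daemon=True).start()

# All GUI calls stay in the main thread.
while True:
    for idx, q in queues.items():
        try:
            cv2.imshow('cam%d' % idx, q.get_nowait())
        except queue.Empty:
            pass
    if cv2.waitKey(20) == 27:  # ESC to quit
        break
cv2.destroyAllWindows()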
Try this code... it works as expected...
This is for two cameras; if you want more cameras, just create additional "VideoCapture()" objects... for example, a third camera would have cv2.VideoCapture(3) and the corresponding code in the while loop.
import cv2
frame0 = cv2.VideoCapture(1)
frame1 = cv2.VideoCapture(2)
while 1:
    ret0, img0 = frame0.read()
    ret1, img00 = frame1.read()
    if ret0:
        img1 = cv2.resize(img0, (360, 240))
        cv2.imshow('img1', img1)
    if ret1:
        img2 = cv2.resize(img00, (360, 240))
        cv2.imshow('img2', img2)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
frame0.release()
frame1.release()
cv2.destroyAllWindows()
All the best!
frame0 = cv2.VideoCapture(1)
frame1 = cv2.VideoCapture(2)
must be:
frame0 = cv2.VideoCapture(0) # index 0
frame1 = cv2.VideoCapture(1) # index 1
so that it runs.
Adding a little to what @TheoreticallyNick posted earlier:
import cv2
import threading
class camThread(threading.Thread):
    def __init__(self, previewName, camID):
        threading.Thread.__init__(self)
        self.previewName = previewName
        self.camID = camID
    def run(self):
        print("Starting " + self.previewName)
        camPreview(self.previewName, self.camID)

def camPreview(previewName, camID):
    cv2.namedWindow(previewName)
    cam = cv2.VideoCapture(camID)
    if cam.isOpened():
        rval, frame = cam.read()
    else:
        rval = False

    while rval:
        cv2.imshow(previewName, frame)
        rval, frame = cam.read()
        key = cv2.waitKey(20)
        if key == 27:  # exit on ESC
            break
    cv2.destroyWindow(previewName)
# Create threads as follows
thread1 = camThread("Camera 1", 0)
thread2 = camThread("Camera 2", 1)
thread3 = camThread("Camera 3", 2)
thread1.start()
thread2.start()
thread3.start()
print()
print("Active threads", threading.activeCount())
This opens a new thread for each webcam you have. In my case I wanted to open three different feeds. Tested on Python 3.6. Let me know if you have any questions, and thanks again to TheoreticallyNick for the readable, working code!
A bit late, but you could use the CamGear API from my VidGear library, which provides multi-threading internally and lets you write the same thing in fewer lines. In addition, all the camera streams will be fully synchronized.
Here is example code for two camera streams:
# import required libraries
from vidgear.gears import VideoGear
import cv2
import time
# define and start the stream on first source ( For e.g #0 index device)
stream1 = VideoGear(source=0, logging=True).start()
# define and start the stream on second source ( For e.g #1 index device)
stream2 = VideoGear(source=1, logging=True).start()
# infinite loop
while True:

    # read frames from stream1
    frameA = stream1.read()

    # read frames from stream2
    frameB = stream2.read()

    # check if either frame is None
    if frameA is None or frameB is None:
        # if True, break the infinite loop
        break

    # do something with both frameA and frameB here

    # Show output windows of stream1 and stream2 separately
    cv2.imshow("Output Frame1", frameA)
    cv2.imshow("Output Frame2", frameB)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed, break out
        break

    if key == ord("w"):
        # if 'w' key-pressed, save both frameA and frameB at the same time
        cv2.imwrite("Image-1.jpg", frameA)
        cv2.imwrite("Image-2.jpg", frameB)
        # break  # uncomment this line to break out after taking the images
cv2.destroyAllWindows()
# close output window
# safely close both video streams
stream1.stop()
stream2.stop()
More usage examples can be found here.
One option for getting around the USB bandwidth limitation is to release the first camera before you start using the second one, as in:
import cv2
cap0 = cv2.VideoCapture(0)
ret0, frame0 = cap0.read()
assert ret0 # succeeds
cap0.release()
cap1 = cv2.VideoCapture(1)
ret1, frame1 = cap1.read()
assert ret1 # succeeds as well
For me, releasing one camera and opening the other takes 0.5-1 seconds; whether that is an acceptable delay depends on your use case.
Apart from that, and from lowering the cameras' output resolution (if the cameras allow it...), the only remaining option seems to be adding a PCI USB board per camera (which is only possible on a desktop computer).
Multithreading will not let you get around the bandwidth limitation.
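For completeness, a minimal sketch of extending that release/reopen idea into a loop that alternates between the two cameras; the 0.5-1 second reopen cost applies to every switch, so this only suits very low frame rates:
import cv2

def grab_one(index):
    """Open camera `index`, grab a single frame, and release the device again."""
    cap = cv2.VideoCapture(index)
    ret, frame = cap.read()
    cap.release()
    return frame if ret else None

for _ in range(10):  # ten frame pairs, roughly 1-2 s per pair because of the reopening
    frame0 = grab_one(0)
    frame1 = grab_one(1)
    if frame0 is not None:
        cv2.imshow('cam0', frame0)
    if frame1 is not None:
        cv2.imshow('cam1', frame1)
    if cv2.waitKey(1) == 27:  # ESC to stop early
        break

cv2.destroyAllWindows()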