v4l2 Python - streaming video - mapping buffers

I am writing a video-capture script in Python for Raspbian (Raspberry Pi 2), and I am having trouble with the Python bindings for v4l2: I have had no success memory-mapping the buffers.

What I need:

What I have tried:

What I have read:

My questions:

  1. Is there a better way? Or, if not...
  2. Can I tell OpenCV not to decompress the image? I would prefer to keep using OpenCV to allow for future extensions. I found here that this is not allowed.
  3. How do I solve the mapping step in Python? (Any working example?)

Here is my (slow) working example using OpenCV:

import cv2
import time

video = cv2.VideoCapture(0)

print('Starting video-capture test...')

t0 = time.time()
for i in range(100):
    success, image = video.read()
    ret, jpeg = cv2.imencode('.jpg', image)

t1 = time.time()
t = (t1 - t0) / 100.0
fps = 1.0 / t

print('Test finished. ' + str(t) + ' sec. per img.')
print(str(fps) + ' fps reached')

video.release()

And here is what I have done with v4l2:

FRAME_COUNT = 5

import v4l2
import fcntl
import mmap
import errno

def xioctl(fd, request, arg):
    # Unlike C's ioctl(), fcntl.ioctl raises OSError on failure instead
    # of returning -1, so retry only when the call was interrupted (EINTR).
    while True:
        try:
            return fcntl.ioctl(fd, request, arg)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise

class buffer_struct:
    start  = 0
    length = 0

# Open camera driver
fd = open('/dev/video1','r+b')

BUFTYPE = v4l2.V4L2_BUF_TYPE_VIDEO_CAPTURE
MEMTYPE = v4l2.V4L2_MEMORY_MMAP

# Set format
fmt = v4l2.v4l2_format()
fmt.type = BUFTYPE
fmt.fmt.pix.width       = 640
fmt.fmt.pix.height      = 480
fmt.fmt.pix.pixelformat = v4l2.V4L2_PIX_FMT_MJPEG
fmt.fmt.pix.field       = v4l2.V4L2_FIELD_NONE # progressive

xioctl(fd, v4l2.VIDIOC_S_FMT, fmt)

buffer_size = fmt.fmt.pix.sizeimage
print('buffer_size = ' + str(buffer_size))

# Request buffers
req = v4l2.v4l2_requestbuffers()

req.count  = 4
req.type   = BUFTYPE
req.memory = MEMTYPE

xioctl(fd, v4l2.VIDIOC_REQBUFS, req)

if req.count < 2:
    print('req.count < 2')
    quit()

n_buffers = req.count

buffers = list()
for i in range(req.count):
    buffers.append( buffer_struct() )

# Initialize buffers. What should I do here? This doesn't work at all.
# I've tried with USRPTR (pointers) but I know no way for that in Python.
for i in range(n_buffers):

    buf = v4l2.v4l2_buffer()

    buf.type      = BUFTYPE
    buf.memory    = MEMTYPE
    buf.index     = i

    xioctl(fd, v4l2.VIDIOC_QUERYBUF, buf)

    buffers[i].length = buf.length
    buffers[i].start  = mmap.mmap(fd.fileno(), buf.length,
                                  flags  = mmap.MAP_SHARED,  # flags takes MAP_* constants
                                  prot   = mmap.PROT_READ,   # prot takes PROT_*; read-only suffices
                                  offset = buf.m.offset )

I would appreciate any help or advice. Thank you very much!

I found the answer myself, as part of the code for another question. It was not the main topic of that question, but in this source code you can see how he uses mmap in Python (line 159). Furthermore, I found out that I do not need write permissions.
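To illustrate the mapping step in isolation, here is a minimal sketch of how Python's `mmap.mmap` is called, using an ordinary temporary file as a stand-in for the V4L2 buffer (a real capture would pass the video device's `fileno()` and the offset returned by `VIDIOC_QUERYBUF`). Note the keyword order: `flags` takes the `MAP_*` constants and `prot` takes the `PROT_*` constants, and a read-only mapping is enough for capture:

```python
import mmap
import os
import tempfile

# Stand-in for a V4L2 buffer: a regular file of known size.
fd, path = tempfile.mkstemp()
os.write(fd, b"frame-data" * 100)   # 1000 bytes

length = 1000
# Keyword order matters: flags takes MAP_*, prot takes PROT_*.
buf = mmap.mmap(fd, length,
                flags=mmap.MAP_SHARED,
                prot=mmap.PROT_READ,   # read-only suffices for capture
                offset=0)              # offsets must be page-aligned

head = buf[:10]
print(head)  # b'frame-data'

buf.close()
os.close(fd)
os.unlink(path)
```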

Why can't you use the python picamera lib that comes with the Raspberry distribution?

    import io
    import socket
    import struct
    import time
    import picamera


    # create socket and bind host
    client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client_socket.connect(('192.168.1.101', 8000))
    connection = client_socket.makefile('wb')

    try:
        with picamera.PiCamera() as camera:
            camera.resolution = (320, 240)      # pi camera resolution
            camera.framerate = 15               # 15 frames/sec
            time.sleep(2)                       # give 2 secs for camera to initialize
            start = time.time()
            stream = io.BytesIO()

            # send jpeg format video stream
            for foo in camera.capture_continuous(stream, 'jpeg', use_video_port = True):
                connection.write(struct.pack('<L', stream.tell()))
                connection.flush()
                stream.seek(0)
                connection.write(stream.read())
                if time.time() - start > 600:
                    break
                stream.seek(0)
                stream.truncate()
        connection.write(struct.pack('<L', 0))
    finally:
        connection.close()
        client_socket.close()
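The sender above frames each JPEG with a `struct.pack('<L', n)` length prefix and terminates the stream with a zero-length marker. A hypothetical receiver for that framing (not part of the original answer) can be sketched like this, with `io.BytesIO` standing in for the socket file object:

```python
import io
import struct

def read_frames(stream):
    """Yield payloads from a '<L'-length-prefixed stream.
    A zero length marks end-of-stream, matching the sender's protocol."""
    while True:
        header = stream.read(4)
        if len(header) < 4:
            break
        (size,) = struct.unpack('<L', header)
        if size == 0:          # sender's end-of-stream marker
            break
        yield stream.read(size)

# Simulate the wire format with BytesIO instead of a real socket.
wire = io.BytesIO()
for payload in (b'\xff\xd8frame1\xff\xd9', b'\xff\xd8frame2\xff\xd9'):
    wire.write(struct.pack('<L', len(payload)))
    wire.write(payload)
wire.write(struct.pack('<L', 0))
wire.seek(0)

frames = list(read_frames(wire))
print(len(frames))  # 2
```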

Adding here another option that I just discovered: you can also use the V4L2 backend with OpenCV.

You simply need to specify it when opening the VideoCapture. For example:

cap = cv2.VideoCapture()

cap.open(0, apiPreference=cv2.CAP_V4L2)

cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 960)
cap.set(cv2.CAP_PROP_FPS, 30.0)

When not explicitly specified, OpenCV will often use another camera API (e.g. gstreamer), which is often slower and more cumbersome. In this example I went from being limited to 4-5 FPS to up to 15 at 720p (using an Intel Atom Z8350).
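For reference, `cv2.VideoWriter_fourcc` just packs the four ASCII characters into a little-endian 32-bit integer; a pure-Python equivalent (illustrative only, not the OpenCV source) shows the value that `CAP_PROP_FOURCC` actually receives:

```python
def fourcc(a, b, c, d):
    # Pack four ASCII chars little-endian, like cv2.VideoWriter_fourcc.
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

mjpg = fourcc('M', 'J', 'P', 'G')
print(hex(mjpg))  # 0x47504a4d
```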

If you want to use it with a ring buffer (or other memory-mapped buffers), take a look at the following resources:

https://github.com/Battleroid/seccam

https://github.com/bslatkin/ringbuffer
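As a starting point for the ring-buffer idea, here is a minimal sketch (hypothetical, not taken from either linked repo) that keeps only the N most recent frames using `collections.deque` with a fixed `maxlen`:

```python
from collections import deque

class FrameRingBuffer:
    """Keep the N most recent frames; the oldest are dropped automatically."""
    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)

    def push(self, frame):
        self.frames.append(frame)   # evicts the oldest entry when full

    def latest(self, n):
        return list(self.frames)[-n:]

ring = FrameRingBuffer(capacity=3)
for i in range(5):
    ring.push('frame-%d' % i)

print(ring.latest(3))  # ['frame-2', 'frame-3', 'frame-4']
```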