Correctly converting base64 bytes into a string and displaying it with cv2.imshow

I am struggling to find a solution:

I am trying to build an image streaming system in which I can grab every frame and pass it to a neural network, but somehow I cannot get the base64 image string right with the functions below. The code works perfectly if I simply decode the image from the stream and display it directly, instead of passing it through my functions that convert it to base64, read it back from memory and have cv2 display it.

My server-side functions responsible for converting to and decoding from base64 are described below:

Converts the image object from the stream into base64 BYTES and then into a STRING (this works as expected):

def convertImgBase64(image):
    try:
        imgString = base64.b64encode(image).decode('utf-8')
        print('converted successfully')
        return imgString
    except os.error as err :
        print(f"Erro:'{err}'")

The base64 decoder that should turn the string back into a readable, cv2-compatible frame (this is where the error starts):

def readb64(base64_string):
    storage = '/home/caio/Desktop/img/'
    try:
        sbuf = BytesIO()
        sbuf.write(base64.b64decode(str(base64_string)))
        pimg = im.open(sbuf)
        out = open('arq.jpeg', 'wb')
        out.write(sbuf.read())
        out.close()
        print('read the b64 string')
        return cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)
    except os.error as err :
        print(f"Erro:'{err}'")

This is the server I am currently building, but I need to get the frame capture working correctly before moving on.

from io import BytesIO, StringIO
import numpy as np
import cv2
from imutils.video import FPS
import imagezmq
import base64
import darknet
import os
from PIL import Image as im
from numpy import asarray
from time import sleep

#imagezmq protocol receiver from client
image_hub = imagezmq.ImageHub() 

def convertImgBase64(image):
    try:
        imgString = base64.b64encode(image).decode('utf-8')
        return imgString
    except os.error as err :
        print(f"Error:'{err}'")

def readb64(base64_string):
    try:
        sbuf = BytesIO()
        sbuf.write(base64.b64decode(str(base64_string)))
        pimg = im.open(sbuf)
        return cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)
    except os.error as err :
        print(f"Error:'{err}'")

def capture_img():
    while True:
        camera, jpg_buffer = image_hub.recv_jpg()
        buffer = np.frombuffer(jpg_buffer, dtype='uint8')
        imagedecoder = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
        img = im.fromarray(imagedecoder)
        try:
            string = convertImgBase64(imagedecoder)
            cvimg = readb64(string)
            #cv2.imshow(camera, cvimg)  # this is the line that is not working!
        except os.error as err :
            print(f"Error:'{err}'")

        cv2.imshow(camera, imagedecoder)
        cv2.waitKey(1)  # cv2 won't work without this

        image_hub.send_reply(b'OK')  # imageZMQ needs an acknowledgement that it's OK
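
Once imagedecoder holds the decoded frame, it can in principle be handed to darknet directly; no base64 step is required in between. A rough sketch of that stage, assuming the darknet.py wrapper from the AlexeyAB repository (the cfg/data/weights paths and the detect_frame helper are placeholders, not part of the original code):

# Rough sketch: feeding the decoded frame to darknet.
# Relies on the cv2 and darknet imports at the top of the server script;
# the cfg/data/weights paths below are placeholders.
network, class_names, class_colors = darknet.load_network(
    'cfg/yolov4.cfg', 'cfg/coco.data', 'yolov4.weights', batch_size=1)
net_w = darknet.network_width(network)
net_h = darknet.network_height(network)
darknet_image = darknet.make_image(net_w, net_h, 3)  # reused for every frame

def detect_frame(frame):
    # darknet expects an RGB image at the network's input resolution
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (net_w, net_h), interpolation=cv2.INTER_LINEAR)
    darknet.copy_image_from_bytes(darknet_image, resized.tobytes())
    return darknet.detect_image(network, class_names, darknet_image, thresh=0.25)

With something like this, detect_frame(imagedecoder) could be called inside capture_img right after cv2.imdecode.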

The client code (running on the Raspberry Pi) is below:

import sys

import socket
import time
import cv2
from imutils.video import VideoStream
import imagezmq
import argparse

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--server-ip", required=True,
    help="ip address of the server to which the client will connect")
args = vars(ap.parse_args())
# initialize the ImageSender object with the socket address of the
# server
sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(
    args["server_ip"]))
# use either of the formats below to specify the address of the display computer
# sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.190:5555')

rpi_name = socket.gethostname()  # send RPi hostname with each image
vs = VideoStream(usePiCamera=True, resolution=(800, 600)).start()
time.sleep(2.0)  # allow camera sensor to warm up
jpeg_quality = 95  # 0 to 100, higher is better quality, 95 is cv2 default
while True:  # send images as stream until Ctrl-C
    image = vs.read()
    ret_code, jpg_buffer = cv2.imencode(
        ".jpg", image, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    sender.send_jpg(rpi_name, jpg_buffer)

My current error output looks like this (screenshot omitted).

I have also been trying the solutions from a couple of similar questions, without success.

If you know of another, better way to pass image objects that I can then process inside the yolo/darknet neural network, that would be great!!

Thank you!

The answer provided by @Christoph Rackwitz is correct. ImageZMQ is designed to send and receive OpenCV images WITHOUT any base64 encoding. The ImageSender class sends OpenCV images. The ImageHub class receives OpenCV images. Optionally, ImageZMQ can send a jpg buffer (which is what your Raspberry Pi client code is doing).
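
For comparison, a minimal sketch of that plain-image path (no jpg buffer and no base64), using imagezmq's default REQ/REP mode; the IP address below is only illustrative:

# client script: send raw OpenCV frames
import socket
import imagezmq
from imutils.video import VideoStream

sender = imagezmq.ImageSender(connect_to='tcp://192.168.0.100:5555')  # illustrative address
rpi_name = socket.gethostname()
vs = VideoStream(usePiCamera=True).start()
while True:
    frame = vs.read()
    sender.send_image(rpi_name, frame)  # sends the numpy array as-is

# server script: receives numpy arrays, ready for cv2.imshow or a neural network
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
while True:
    sender_name, frame = image_hub.recv_image()  # frame is already an OpenCV image
    cv2.imshow(sender_name, frame)
    cv2.waitKey(1)
    image_hub.send_reply(b'OK')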

Your Raspberry Pi client code is based on the ImageZMQ "send jpg" example.

Therefore, your server code should use the matching ImageZMQ "receive jpg" example.

The essence of the ImageZMQ "receive jpg" example code is:

import numpy as np
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
while True:  # show streamed images until Ctrl-C
    rpi_name, jpg_buffer = image_hub.recv_jpg()
    image = cv2.imdecode(np.frombuffer(jpg_buffer, dtype='uint8'), -1)
    # see opencv docs for info on -1 parameter
    cv2.imshow(rpi_name, image)  # 1 window for each RPi
    cv2.waitKey(1)
    image_hub.send_reply(b'OK')

There is no need for base64 decoding. The variable image already contains an OpenCV image. (FYI, I am the author of ImageZMQ.)
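
One side note: the likely reason the original readb64 path fails is that convertImgBase64 base64-encodes the raw numpy buffer of the decoded frame rather than an encoded image, so PIL cannot identify the bytes after decoding. If a base64 string is ever genuinely needed (for example to embed a frame in a JSON payload), a minimal sketch of a working round-trip using only cv2, numpy and base64 would look like this (the function names are only illustrative):

import base64
import numpy as np
import cv2

def frame_to_b64(frame):
    # encode to an actual image format first, then base64 the jpg bytes
    ok, jpg = cv2.imencode('.jpg', frame)
    if not ok:
        raise ValueError('jpg encoding failed')
    return base64.b64encode(jpg.tobytes()).decode('utf-8')

def b64_to_frame(b64_string):
    # reverse path: base64 -> jpg bytes -> numpy buffer -> decoded BGR image
    jpg_bytes = base64.b64decode(b64_string)
    return cv2.imdecode(np.frombuffer(jpg_bytes, dtype='uint8'), cv2.IMREAD_COLOR)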