Python bytestream to image
I'm trying to achieve the same result as this question, but with a color image:
Here is my input image
This is what my code displays
C++ side:
cv::Mat frame = cv::imread("/home/victor/Images/Zoom.png");
int height = frame.rows; //480
int width = frame.cols; // 640
zmq_send(static_cast<void *>(pubSocket), frame.data, (height*width*3*sizeof(uint8_t)), ZMQ_NOBLOCK);
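(For testing without the C++ publisher, a roughly equivalent sender can be sketched in Python; this is a hypothetical stand-in using pyzmq and OpenCV, with the endpoint made up:)

import cv2
import zmq

# Hypothetical equivalent of the C++ publisher above.
context = zmq.Context()
pub_socket = context.socket(zmq.PUB)
pub_socket.bind("tcp://*:5556")  # assumed endpoint; adjust to your setup

frame = cv2.imread("/home/victor/Images/Zoom.png")  # BGR, shape (480, 640, 3)
# Like frame.data in C++, this sends height*width*3 raw BGR bytes, row-major.
pub_socket.send(frame.tobytes(), flags=zmq.NOBLOCK)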
Python side:
try:
    image_bytes = self._subsocketVideo.recv(flags=zmq.NOBLOCK)
    width = 480
    height = 640
    try:
        temp = numpy.frombuffer(image_bytes, dtype=numpy.uint8)
        self.currentFrame = temp.reshape(height, width, 3)
    except Exception as e:
        print("Failed to create frame:")
        print(e)
except zmq.Again as e:
    raise e
Python code to display the image (this part works; I tried it with a static image instead of one coming over the network):
def videoCB(self):
    try:
        self._socket.subVideoReceive()
        print("Creating QImg")
        qimg = QImage(self._socket.currentFrame.data, 480, 640, 3*480, QImage.Format_RGB888)
        print("Creating pixmap")
        pixmap = QtGui.QPixmap.fromImage(qimg)
        print("Setting pixmap")
        self.imageHolder.setPixmap(pixmap)
        self.imageHolder.show()
    except Exception as e:
        print(e)
        pass
I think I have 2 or 3 problems:
- Why is my output image's width greater than its height? I tried swapping height and width in the reshape, with no effect.
- There seems to be some RGB confusion somewhere.
- Overall, I feel the data is all there, but I'm not putting it together correctly.
The reshape call seems to do nothing; my output is the same without it.
Any ideas?
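(For reference: all three symptoms are consistent with the dimensions being swapped on the Python side. Per the C++ comments, the image is 480 rows (height) by 640 columns (width); OpenCV stores pixels as BGR; and the QImage raw-data constructor takes width, height and bytesPerLine in that order, so passing 480 as the width with a stride of 3*480 shears the rows. A minimal sketch of a consistent receive path, assuming the raw-BGR sender above and PyQt5 as the Qt binding:)

import cv2
import numpy
from PyQt5.QtGui import QImage  # assumed binding; adjust to yours

height, width = 480, 640  # rows, cols, matching the C++ side

temp = numpy.frombuffer(image_bytes, dtype=numpy.uint8)
frame = temp.reshape(height, width, 3)          # row-major: (rows, cols, channels)
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV is BGR; Format_RGB888 expects RGB
qimg = QImage(frame.data, width, height, 3 * width, QImage.Format_RGB888)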
Use OpenCV's cv2.imdecode. Note that cv2.imdecode expects an encoded image stream (e.g. PNG or JPEG bytes), so the sender would need to transmit encoded data (cv::imencode on the C++ side) rather than raw pixels; see the sketch after the code below.
import cv2

try:
    image_bytes = self._subsocketVideo.recv(flags=zmq.NOBLOCK)
    width = 480
    height = 640
    try:
        temp = numpy.frombuffer(image_bytes, dtype=numpy.uint8)
        self.currentFrame = cv2.imdecode(temp, flags=cv2.IMREAD_COLOR)
    except Exception as e:
        print("Failed to create frame:")
        print(e)
except zmq.Again as e:
    raise e
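(This assumes the bytes on the wire are an encoded image, not raw pixels. A hypothetical Python sender for comparison, which encodes the frame before sending:)

import cv2

frame = cv2.imread("/home/victor/Images/Zoom.png")
ok, encoded = cv2.imencode(".png", frame)  # compress the frame to PNG bytes
if ok:
    pub_socket.send(encoded.tobytes(), flags=zmq.NOBLOCK)  # pub_socket: a zmq PUB socket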
Or Pillow:
import io
import numpy as np
from PIL import Image
...
img = Image.open(io.BytesIO(image_bytes))  # Image.open needs a file-like object with encoded data
self.currentFrame = np.asarray(img)[..., :-1]  # no need for the last (alpha) channel
...
Managed to make it work: changed the Python side to
image2 = Image.frombytes('RGB', (height,width), image_bytes)
self.currentFrame = ImageQt(image2)
and displayed it with
qimg = QImage(self._socket.currentFrame)
pixmap = QtGui.QPixmap.fromImage(qimg)
self.imageHolder.setPixmap(pixmap)
self.imageHolder.show()
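(For context on why this works: Image.frombytes takes its size argument as (width, height), so passing (height, width) is only correct here because the variables were swapped earlier (height = 640, width = 480). A less fragile sketch of the same path, with the imports spelled out and the OpenCV BGR order handled, assuming the raw-BGR sender:)

from PIL import Image
from PIL.ImageQt import ImageQt

width, height = 640, 480  # actual dimensions from the C++ side

image_bytes = self._subsocketVideo.recv(flags=zmq.NOBLOCK)
img = Image.frombytes('RGB', (width, height), image_bytes)  # frombytes wants (width, height)
# The sender's pixels are BGR (OpenCV); swap channels if the colours look off:
b, g, r = img.split()
img = Image.merge('RGB', (r, g, b))
self.currentFrame = ImageQt(img)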