How to share a numpy array between multiple threads in Python?
I'm actually trying to modify some yolov5 scripts. Here I'm trying to pass an array between threads.
def detection(out_q):
    while cam.isOpened():
        ref, img = cam.read()
        img = cv2.resize(img, (640, 320))
        result = model(img)
        yoloBbox = result.xywh[0].numpy()  # yolo format
        bbox = result.xyxy[0].numpy()      # pascal format
        for i in bbox:
            out_q.put(i)  # 'i' is a list of length 6

def resultant(in_q):
    while cam.isOpened():
        ref, img = cam.read()
        img = cv2.resize(img, (640, 320))
        qbbox = in_q.get()
        print(qbbox)

if __name__ == '__main__':
    q = Queue(maxsize=10)
    t1 = threading.Thread(target=detection, args=(q,))
    t2 = threading.Thread(target=resultant, args=(q,))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
I tried this, but it gives me an error like this:
Assertion fctx->async_lock failed at libavcodec/pthread_frame.c:155
So is there any other method to pass the array? Any kind of tutorial/solution is appreciated. If there's any misunderstanding in my question, please let me know. Thanks a lot!!
Update:::
I'm trying it like this:
def detection(ns, event):
    ## a = np.array([1, 2, 3])  # --+
    ## a = list(a)              #   | This is working
    ## ns.value = a             #   |
    ## event.set()              # --+
    while cam.isOpened():
        ref, img = cam.read()
        img = cv2.resize(img, (640, 320))
        result = model(img)
        yoloBbox = result.xywh[0].numpy()  # yolo format
        bbox = result.xyxy[0].numpy()      # pascal format
        for i in bbox:
            arr = np.squeeze(np.array(i))
            print("bef: ", arr)  # --+
            ns.value = arr       #   | This is not working
            event.set()          # --+
def transfer(ns, event):
    event.wait()
    print(ns.value)

if __name__ == '__main__':
    ## detection()
    manager = multiprocessing.Manager()
    namespace = manager.Namespace()
    event = multiprocessing.Event()
    p1 = multiprocessing.Process(target=detection, args=(namespace, event))
    p2 = multiprocessing.Process(target=transfer, args=(namespace, event))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
The output from the above "arr" = [ 0  1.8232  407.98  316.46  0.92648  0]
But all I get is blank output. No errors, no warnings, just blank. I have verified that arr holds values. I tested both a list and an np array in the lines marked "This is working", and both shared their data. So why is the data in the "arr" array blank after sharing? What should I do?
> so is there any other method to pass the array?

Yes, you can use multiprocessing.shared_memory. It has been part of the standard library since Python 3.8, and PyPI has a backport that allows using it in Python 3.6 and Python 3.7. See the examples in the linked documentation to learn how to use multiprocessing.shared_memory together with numpy.ndarray.
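A minimal sketch of that pattern (the sample row values are my own stand-in for one yolov5 detection; any dtype/shape works as long as both processes agree on them):

```python
import numpy as np
from multiprocessing import shared_memory

# Hypothetical detection row: class, x1, y1, x2, y2, confidence.
src = np.array([0.0, 1.8232, 407.98, 316.46, 0.92648, 0.0])
shm = shared_memory.SharedMemory(create=True, size=src.nbytes)

# View the shared buffer as an ndarray and copy the data in.
shared_arr = np.ndarray(src.shape, dtype=src.dtype, buffer=shm.buf)
shared_arr[:] = src[:]

# Another process would attach by name instead of creating:
#   existing = shared_memory.SharedMemory(name=shm.name)
#   view = np.ndarray((6,), dtype=np.float64, buffer=existing.buf)

result = shared_arr.copy()   # read back before tearing down
shm.close()                  # every process closes its own handle
shm.unlink()                 # exactly one process unlinks the block
print(result)
```

Note that the writer must keep the block alive (not unlink it) for as long as any reader is attached.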
The answer provided by @Daweo suggesting use of shared memory is correct.

However, it is also worth considering using a lock to 'protect' access to the numpy array (which is not thread-safe).

See:- this
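A runnable sketch of what that locking looks like (names are hypothetical; the writer stands in for the detection loop):

```python
import threading
import numpy as np

latest_bbox = np.zeros(6)      # shared state: one detection row
bbox_lock = threading.Lock()   # guards every read/write of latest_bbox

def writer():
    global latest_bbox
    for step in range(3):
        new_row = np.full(6, float(step))   # stand-in for model output
        with bbox_lock:                     # no reader sees a half-written row
            latest_bbox = new_row

def reader(out):
    with bbox_lock:
        out.append(latest_bbox.copy())      # copy while holding the lock

results = []
t = threading.Thread(target=writer)
t.start()
t.join()
reader(results)
print(results[0])
```

The key point is that both sides take the same lock, and the reader copies the array inside the critical section instead of keeping a reference to the shared object.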
OK, thanks for the help. I used a multiprocessing queue to share the data. Then I moved my program from multiprocessing to threading.
def capture(q):
    cap = cv2.VideoCapture(0)
    while True:
        ref, frame = cap.read()
        frame = cv2.resize(frame, (640, 480))
        q.put(frame)

def det(q):
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')
    model.conf = 0.30    # model confidence level
    model.classes = [0]  # model classes (where 0 = person, 2 = car)
    model.iou = 0.55     # bounding box accuracy
    while True:
        mat = q.get()
        det = model(mat)
        bbox = det.xyxy[0].numpy()
        for i in bbox:
            print(i)
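For completeness, the thread wiring those two functions need can be sketched like this (camera and model replaced with stand-ins so the pattern runs as-is; a bounded queue keeps capture from racing ahead of detection, and None acts as a shutdown sentinel):

```python
import queue
import threading

def capture(q, frames):
    for frame in frames:          # stand-in for the cam.read() loop
        q.put(frame)              # blocks when the queue is full
    q.put(None)                   # tell the consumer to stop

def det(q, results):
    while True:
        mat = q.get()
        if mat is None:
            break
        results.append(mat * 2)   # stand-in for model(mat)

q = queue.Queue(maxsize=10)
results = []
t1 = threading.Thread(target=capture, args=(q, [1, 2, 3]))
t2 = threading.Thread(target=det, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```

Since each frame is a plain object handed over through the queue, no lock is needed: only one thread touches a given frame at a time.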