Object tracking with a Kinect on a Raspberry Pi
I am doing object tracking with a Kinect on a Raspberry Pi.
I combined two pieces of code, because I need to find nearby objects with the Kinect first, and then use an OpenCV filter to convert to grayscale and track the gray object.
But I can't get it working! Please help me.
import freenect
import cv2
import numpy as np

"""
Grabs a depth map from the Kinect sensor and creates an image from it.
"""
def getDepthMap():
    depth, timestamp = freenect.sync_get_depth()
    np.clip(depth, 0, 2**10 - 1, depth)
    depth >>= 2
    depth = depth.astype(np.uint8)
    return depth

while True:
depth = getDepthMap()
#text_file = codecs.open("log2.txt", "a","utf-8-sig")
#text_file.write(str(depth)+'\n')
depth = getDepthMap()
blur = cv2.GaussianBlur(depth, (5, 5), 0)
cv2.imshow('image', blur)
This code can show objects in two colors, black and white.
Black is almost ---
I want to merge this code with the object tracking below, but I don't know how.
# find contours in the mask and initialize the current
# (x, y) center of the ball
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)[-2]
center = None

# only proceed if at least one contour was found
if len(cnts) > 0:
    # find the largest contour in the mask, then use
    # it to compute the minimum enclosing circle and
    # centroid
    c = max(cnts, key=cv2.contourArea)
    ((x, y), radius) = cv2.minEnclosingCircle(c)
    M = cv2.moments(c)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

    # only proceed if the radius meets a minimum size
    if radius > 10:
        # draw the circle and centroid on the frame,
        # then update the list of tracked points
        cv2.circle(frame, (int(x), int(y)), int(radius),
            (0, 255, 255), 2)
        cv2.circle(frame, center, 5, (0, 0, 255), -1)

# update the points queue
pts.appendleft(center)
http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/
The logic in your code seems correct; however, I noticed a few implementation errors.
First, you should indent the block after while True:. You should also add a call to waitKey() so that OpenCV does not get stuck in imshow():
while True:
    depth = getDepthMap()
    blur = cv2.GaussianBlur(depth, (5, 5), 0)
    cv2.imshow('image', blur)
    cv2.waitKey(1)
Finally, you should connect the input of the next block (mask) to the output of the previous block (blur):
mask = blur
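
For completeness, here is a rough sketch of how the two blocks could be wired into a single loop. This is only an illustration, not your exact setup: the depth cutoff NEAR_CUTOFF, the deque length, the conversion of the depth image to BGR before drawing, the zero-moment guard, and the q-to-quit check are assumptions of mine, not part of your code or the linked tutorial. Also note that blur is a full grayscale depth map, so the sketch binarizes it with cv2.threshold first so that only near objects end up in mask.

import freenect
import cv2
import numpy as np
from collections import deque

NEAR_CUTOFF = 80          # assumed 8-bit depth cutoff: smaller values = closer to the sensor
pts = deque(maxlen=64)    # queue of tracked centers, as in the linked tutorial

def getDepthMap():
    depth, timestamp = freenect.sync_get_depth()
    np.clip(depth, 0, 2**10 - 1, depth)
    depth >>= 2                       # scale 11-bit depth down to 8 bits
    return depth.astype(np.uint8)

while True:
    depth = getDepthMap()
    blur = cv2.GaussianBlur(depth, (5, 5), 0)

    # keep only pixels closer than the cutoff; everything else becomes black
    _, mask = cv2.threshold(blur, NEAR_CUTOFF, 255, cv2.THRESH_BINARY_INV)

    # draw on a BGR copy so the colored circle and centroid are visible
    frame = cv2.cvtColor(blur, cv2.COLOR_GRAY2BGR)

    # find contours in the mask and the (x, y) center of the nearest blob
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)[-2]
    center = None

    if len(cnts) > 0:
        c = max(cnts, key=cv2.contourArea)
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        M = cv2.moments(c)
        # guard against a zero moment and tiny contours before drawing
        if M["m00"] > 0 and radius > 10:
            center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
            cv2.circle(frame, center, 5, (0, 0, 255), -1)
            pts.appendleft(center)

    cv2.imshow('image', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

THRESH_BINARY_INV makes pixels with small depth values (i.e. close to the sensor) white in the mask, which is what the contour step expects; you will likely need to tune the cutoff for your scene.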