Making an automatic annotation tool

I want to make an automatic annotation tool for YOLO object detection that uses a previously trained model to find the detections. I managed to put some code together, but I'm a bit stuck. As far as I know, the labels need to be in YOLO's annotation format:

18 0.154167 0.431250 0.091667 0.612500

With my code I get:

0.5576068858305613, 0.5410404056310654, -0.7516528169314066, 0.33822181820869446

I'm not sure why I get a negative sign on the third number, or whether I need to shorten my floats. If anyone can help, I'll post the code below. Once I finish this project I'll post the whole thing in case anyone wants to use it.
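On the float-precision point: label files in the style shown above are written with a fixed number of decimals rather than full float precision. A small sketch of my own (the numbers are taken from the example label line above), using Python's fixed-point formatting:

```python
# Format a converted (x, y, w, h) tuple as a YOLO label line:
# class id first, then each coordinate with six decimal places.
class_id = 18
x, y, w, h = 0.154167, 0.43125, 0.091667, 0.6125
line = f"{class_id} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"
print(line)
# → 18 0.154167 0.431250 0.091667 0.612500
```

`:.6f` pads and rounds to six decimals, which matches the usual label-file layout.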

def convert(size, box):
    dw = 1./size[0]
    dh = 1./size[1]
    x = (box[0] + box[1])/2.0
    y = (box[2] + box[3])/2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x, y, w, h)

The code above is the function that converts coordinates to YOLO format. `size` needs to be passed as `(w, h)`, and `box` needs to be passed as `(x, x+w, y, y+h)`.
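As a quick sanity check (my own example, not from the original post): when the function is fed corner coordinates in the `(x, x+w, y, y+h)` order it expects, every output is positive; feeding it YOLO's raw `(cx, cy, w, h)` output instead makes `box[1] - box[0]` compute `cy - cx`, which is where a negative third number can come from:

```python
def convert(size, box):
    # size = (image_width, image_height); box = (xmin, xmax, ymin, ymax)
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0   # x-center in pixels
    y = (box[2] + box[3]) / 2.0   # y-center in pixels
    w = box[1] - box[0]           # width in pixels
    h = box[3] - box[2]           # height in pixels
    return (x * dw, y * dh, w * dw, h * dh)

# A 256x64 box with its top-left corner at (64, 32) in a 512x256 image:
x, y, w, h = 64, 32, 256, 64
print(convert((512, 256), (x, x + w, y, y + h)))
# → (0.375, 0.25, 0.5, 0.25)
```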

net = cv2.dnn.readNetFromDarknet(config_path, weights_path)
# path_name = "images/city_scene.jpg"
path_name = image
image = cv2.imread(path_name)
file_name = os.path.basename(path_name)
filename, ext = file_name.split(".")

h, w = image.shape[:2]
# create 4D blob
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)

# sets the blob as the input of the network
net.setInput(blob)

# get all the layer names
ln = net.getLayerNames()
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]
# feed forward (inference) and get the network output
# measure how much it took in seconds
start = time.perf_counter()
layer_outputs = net.forward(ln)
time_took = time.perf_counter() - start
print(f"Time took: {time_took:.2f}s")

boxes, confidences, class_ids = [], [], []
b = []
a = []
# loop over each of the layer outputs
for output in layer_outputs:
    # loop over each of the object detections
    for detection in output:
        # extract the class id (label) and confidence (as a probability) of
        # the current object detection
        scores = detection[5:]
        class_id = np.argmax(scores)
        confidence = scores[class_id]
        # discard weak predictions by ensuring the detected
        # probability is greater than the minimum probability
        if confidence > CONFIDENCE:
            # scale the bounding box coordinates back relative to the
            # size of the image, keeping in mind that YOLO actually
            # returns the center (x, y)-coordinates of the bounding
            # box followed by the boxes' width and height
            box = detection[0:4] * np.array([w, h, w, h])
            (centerX, centerY, width, height) = box.astype("float")

            # use the center (x, y)-coordinates to derive the top
            # and left corner of the bounding box
            x = int(centerX - (width / 2))
            y = int(centerY - (height / 2))
            a = w, h
            convert(a, box)
            boxes.append([x, y, int(width), int(height)])

            confidences.append(float(confidence))
            class_ids.append(class_id)

idxs = cv2.dnn.NMSBoxes(boxes, confidences, SCORE_THRESHOLD, IOU_THRESHOLD)

font_scale = 1
thickness = 1

# ensure at least one detection exists
if len(idxs) > 0:
    # loop over the indexes we are keeping
    for i in idxs.flatten():
        # extract the bounding box coordinates
        x, y = boxes[i][0], boxes[i][1]
        w, h = boxes[i][2], boxes[i][3]
        # draw a bounding box rectangle and label on the image
        color = [int(c) for c in colors[class_ids[i]]]
        ba = w, h
        print(w, h)

        cv2.rectangle(image, (x, y), (x + w, y + h), color=color, thickness=thickness)
        text = "{}".format(labels[class_ids[i]])
        conf = "{:.3f}".format(confidences[i], x, y)
        int1, int2 = (x, y)
        print(text)
        # print(convert(ba, box))

        # b = w, h
        # print(convert(b, boxes))
        # print(convert(a, box))  # coordinates
        ivan = str(int1)

        b.append([text, ivan])
        # a.append(float(conf))
        # print(a)

        # calculate text width & height to draw the transparent boxes as background of the text
        (text_width, text_height) = \
            cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, fontScale=font_scale, thickness=thickness)[0]
        text_offset_x = x
        text_offset_y = y - 5
        box_coords = ((text_offset_x, text_offset_y), (text_offset_x + text_width + 2, text_offset_y - text_height))
        overlay = image.copy()
        cv2.rectangle(overlay, box_coords[0], box_coords[1], color=color, thickness=cv2.FILLED)
        # add opacity (transparency to the box)
        image = cv2.addWeighted(overlay, 0.6, image, 0.4, 0)
        # now put the text (label: confidence %)
        cv2.putText(image, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                    fontScale=font_scale, color=(0, 0, 0), thickness=thickness)

    text = "{}".format(labels[class_ids[i]], x, y)
    conf = "{:.3f}".format(confidences[i])

The problem is the indexing in the function. YOLO's output already gives you:

box[0]=>center x
box[1]=>center y
box[2]=>width of your bbox
box[3]=>height of your bbox

According to the documentation, a YOLO label looks like this:

<object-class> <x> <y> <width> <height>

where x and y are the center of the bounding box, so your code should look like this:

def convert(size, box):
    dw = 1./size[0]
    dh = 1./size[1]
    x = box[0]*dw
    y = box[1]*dh
    w = box[2]*dw
    h = box[3]*dh
    return (x, y, w, h)

Maybe this can help you:

def bounding_box_2_yolo(obj_detections, frame, index):
    yolo_info = []
    for object_det in obj_detections:
        left_x, top_y, right_x, bottom_y = object_det.boxes
        xmin = left_x
        xmax = right_x
        ymin = top_y
        ymax = bottom_y

        xcen = float((xmin + xmax)) / 2 / frame.shape[1]
        ycen = float((ymin + ymax)) / 2 / frame.shape[0]

        w = float((xmax - xmin)) / frame.shape[1]
        h = float((ymax - ymin)) / frame.shape[0]

        yolo_info.append((index, xcen, ycen, w, h))

    return yolo_info
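A minimal usage sketch, assuming each detection exposes a `.boxes` tuple of pixel coordinates in `(left, top, right, bottom)` order and `frame` is an image array whose shape is `(height, width, channels)`; the `Detection` and `Frame` stand-ins here are mine, not part of the original code:

```python
from collections import namedtuple

Detection = namedtuple("Detection", ["boxes"])
Frame = namedtuple("Frame", ["shape"])  # stand-in for an image array

def bounding_box_2_yolo(obj_detections, frame, index):
    yolo_info = []
    for object_det in obj_detections:
        left_x, top_y, right_x, bottom_y = object_det.boxes
        # normalise the center and size by the frame width/height
        xcen = float(left_x + right_x) / 2 / frame.shape[1]
        ycen = float(top_y + bottom_y) / 2 / frame.shape[0]
        w = float(right_x - left_x) / frame.shape[1]
        h = float(bottom_y - top_y) / frame.shape[0]
        yolo_info.append((index, xcen, ycen, w, h))
    return yolo_info

frame = Frame(shape=(256, 512, 3))            # height 256, width 512
dets = [Detection(boxes=(64, 32, 320, 96))]   # left, top, right, bottom
print(bounding_box_2_yolo(dets, frame, 18))
# → [(18, 0.375, 0.25, 0.5, 0.25)]
```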

labelImg also has a lot of things you can use: https://github.com/tzutalin/labelImg/blob/master/libs/yolo_io.py