OpenCV 4 TypeError: Expected cv::UMat for argument 'labels'

I'm writing a facial recognition program, and I keep getting this error when I try to train my recognizer:

TypeError: Expected cv::UMat for argument 'labels'

My code is:

def detect_face(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5);
    if (len(faces)==0):
        return None, None
    (x, y, w, h) = faces[0]
    return gray[y:y+w, x:x+h], faces[0]

def prepare_training_data():
    faces = []
    labels = []
    for img in photo_name_list: #a collection of file locations as strings
        image = cv2.imread(img)
        face, rect = detect_face(image)
        if face is not None:
            faces.append(face)
            labels.append("me")
    return faces, labels

def test_photos():
    face_recognizer = cv2.face.LBPHFaceRecognizer_create()
    faces, labels = prepare_training_data()
    face_recognizer.train(faces, np.ndarray(labels))

labels is a list with a label for each photo in the image list returned from prepare_training_data. I convert it to a numpy array because I read that this is what train() expects.

Solution: labels should be a list of integers, and you should convert it with numpy.array(labels) (or np.array(labels)).

A dummy example to check that the error goes away:

labels=[0]*len(faces)
face_recognizer.train(faces, np.array(labels))
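Note that the original call failed for a second reason as well: np.ndarray is the low-level array constructor, whose first argument is a shape tuple, not the data, while np.array is the function that converts a Python list into an array. A quick sketch of the difference:

```python
import numpy as np

# np.array converts a Python list into an array of its elements
labels = [0, 0, 1]
arr = np.array(labels)
print(arr)        # the three labels, with an integer dtype

# np.ndarray(shape) is the raw constructor: its argument is a shape,
# not the data, so np.ndarray(labels) tries to allocate an array of
# that shape (and fails outright if the list contains strings)
raw = np.ndarray((3,))
print(raw.shape)  # (3,)
```

So even with integer labels, np.ndarray(labels) would not have produced the array you wanted.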

I didn't find any Python documentation for the OpenCV face recognizer, so I started looking through the C++ documentation and samples. According to that documentation, the library takes the labels for train as a std::vector&lt;int&gt;. The C++ example provided by the OpenCV docs likewise uses vector&lt;int&gt; labels, and the library even raises an error for non-integer input.