Pass Image Bitmap to Azure Face SDK detectWithStream()
I am trying to write a React app that grabs a frame from the webcam and passes it to the Azure Face SDK (documentation) to detect the faces in the image and get attributes of those faces - in this case, emotion and head pose.
I got the quickstart example code here working, which makes a call to the detectWithUrl() method. However, the image that I have in my code is a bitmap, so I thought I would try calling detectWithStream() instead. The documentation for this method says it needs to be passed something of type msRest.HttpRequestBody - I found some documentation for this type, which looks like it wants a Blob, string, ArrayBuffer, or ArrayBufferView. The problem is, I don't really understand what those are or how I might get from a bitmap image to an HttpRequestBody of that type. I have worked with HTTP requests before, but I don't quite understand why one would be passed to this method, or how to do it.
I have found some similar examples and answers for what I am trying to do, such as . Unfortunately, they are either in a different language, or they call the Face API instead of using the SDK.
Edit: I had previously forgotten to bind the detectFaces() method, so I was initially getting a different error related to that. Now that I have fixed that issue, I am getting the following error:
Uncaught (in promise) Error: image must be a string, Blob, ArrayBuffer, ArrayBufferView, or a function returning NodeJS.ReadableStream
Inside the constructor():
this.detectFaces = this.detectFaces.bind(this);

const msRest = require("@azure/ms-rest-js");
const Face = require("@azure/cognitiveservices-face");

const key = <key>;
const endpoint = <endpoint>;
const credentials = new msRest.ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key } });
const client = new Face.FaceClient(credentials, endpoint);

this.state = {
    client: client
}

// get video
const constraints = {
    video: true
}
navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
    let videoTrack = stream.getVideoTracks()[0];
    const imageCapture = new ImageCapture(videoTrack);
    // arrow function so `this` still refers to the component inside the callback
    imageCapture.grabFrame().then((imageBitmap) => {
        // detect faces
        this.detectFaces(imageBitmap);
    });
})
The detectFaces() method:
async detectFaces(imageBitmap) {
    const detectedFaces = await this.state.client.face.detectWithStream(
        imageBitmap,
        {
            returnFaceAttributes: ["Emotion", "HeadPose"],
            detectionModel: "detection_01"
        }
    );
    console.log(detectedFaces.length + " face(s) detected");
}
Can anyone help me understand what to pass to the detectWithStream() method, or maybe help me understand which method would be better to use to detect faces in an image from the webcam?
I figured it out, thanks to this page under the header "Image to blob"! Here is the code I added before the call to detectFaces():
// convert image frame into blob
let canvas = document.createElement('canvas');
canvas.width = imageBitmap.width;
canvas.height = imageBitmap.height;
let context = canvas.getContext('2d');
context.drawImage(imageBitmap, 0, 0);
canvas.toBlob((blob) => {
    // detect faces
    this.detectFaces(blob);
})
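As a side note, canvas.toBlob() encodes the frame as image/png by default; it also accepts an optional MIME type and quality argument if you want a smaller upload. A minimal sketch (the JPEG quality value here is just an example):

// encode the frame as JPEG at roughly 90% quality instead of the default PNG
canvas.toBlob((blob) => {
    this.detectFaces(blob);
}, 'image/jpeg', 0.9);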
This code converts the bitmap image into a Blob, then passes the Blob to detectFaces(). I also changed detectFaces() to accept blob instead of imageBitmap, like this, and then everything worked:
async detectFaces(blob) {
    const detectedFaces = await this.state.client.face.detectWithStream(
        blob,
        {
            returnFaceAttributes: ["Emotion", "HeadPose"],
            detectionModel: "detection_01"
        }
    );
    ...
}
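In case it helps anyone: detectWithStream() resolves to an array of DetectedFace objects, and the attributes requested above come back on each face under faceAttributes. A minimal sketch of reading them, assuming at least one face was detected:

const face = detectedFaces[0];
// emotion scores come back as confidences between 0 and 1
console.log("happiness: " + face.faceAttributes.emotion.happiness);
// head pose is reported in degrees (pitch, roll, yaw)
console.log("yaw: " + face.faceAttributes.headPose.yaw);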