Google Cloud Vision - How to Send Request Properties with Node.js

I am using Google Cloud Vision to detect text in images. This works about 80% of the time. The other 20%, I get this error:

Error: 3 INVALID_ARGUMENT: Request must specify image and features.
    at Object.callErrorFromStatus (C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\call.js:31:26)
    at Object.onReceiveStatus (C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\client.js:180:52)
    at Object.onReceiveStatus (C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\client-interceptors.js:336:141)
    at Object.onReceiveStatus (C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\client-interceptors.js:299:181)
    at C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\call-stream.js:160:78
    at processTicksAndRejections (node:internal/process/task_queues:78:11) {
  code: 3,
  details: 'Request must specify image and features.',
  metadata: Metadata { internalRepr: Map(0) {}, options: {} },
  note: 'Exception occurred in retry method that was not classified as transient'

When I google this issue, it seems that I need to send specific request properties to fix it, basically as specified here: https://cloud.google.com/vision/docs/ocr#specify_the_language_optional

However, I can't figure out how to send these request parameters with the Node.js code I'm using, and I haven't been able to find any examples anywhere. Can someone help me figure out how to do this? My current code looks like this:

// Performs text detection on the image file using GCV
(async () => {
    // Pre-process the image with Jimp before running OCR
    await Jimp.read(attachment.url).then(image => {
        return image
            .invert()
            .contrast(0.5)
            .brightness(-0.25)
            .write('temp.png');
    });

    const [result] = await googleapis.textDetection('temp.png');
    const fullImageResults = result.textAnnotations;
    // ...
})();

Thanks!

If you are using Node.js with the Vision API, you can refer to the sample quickstart code for using the Node.js client library with the Vision API.

For the error you are encountering, you can refer to the following code to add the request parameters:

index.js:

async function quickstart() {
  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();

  // The request must specify both "image" and "features";
  // "imageContext" carries the optional language hints.
  const request = {
    "requests": [
      {
        "image": {
          "source": {
            "imageUri": "gs://bucket1/download.png"
          }
        },
        "features": [
          {
            "type": "TEXT_DETECTION"
          }
        ],
        "imageContext": {
          "languageHints": ["en"]
        }
      }
    ]
  };

  const [result] = await client.batchAnnotateImages(request);
  const detections = result.responses[0].fullTextAnnotation;
  console.log(detections.text);
}

quickstart().catch(console.error);

In the code above, I stored the image in Cloud Storage (GCS) and used the path to that image in my code.
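If you first need to move your locally written temp.png into a bucket, a minimal sketch using the @google-cloud/storage client could look like this (the bucket name my-bucket is just a placeholder for your own bucket):

const { Storage } = require('@google-cloud/storage');

async function uploadToGcs() {
  const storage = new Storage();
  // Upload the locally written temp.png; it can then be referenced
  // in the Vision request as gs://my-bucket/temp.png
  await storage.bucket('my-bucket').upload('temp.png');
}

uploadToGcs().catch(console.error);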

Image:

Output:

It was the best of
times, it was the worst
of times, it was the age
of wisdom, it was the
age of foolishness...

If you want to use an image file stored on your local system, you can refer to the code below.

Since your file is on your local system, you first need to convert it to a base64-encoded string and pass that string in the request parameters, as shown in the sketch after the code below.

index.js:

async function quickstart() {
  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();

  const request = {
    "requests": [
      {
        "image": {
          // Base64-encoded contents of the local image file
          "content": "/9j/7QBEUGhvdG9...image contents...eYxxxzj/Coa6Bax//Z"
        },
        "features": [
          {
            "type": "TEXT_DETECTION"
          }
        ],
        "imageContext": {
          "languageHints": ["en"]
        }
      }
    ]
  };

  const [result] = await client.batchAnnotateImages(request);
  const detections = result.responses[0].fullTextAnnotation;
  console.log(detections.text);
}

quickstart().catch(console.error);
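To produce the base64 content string from a local file, a minimal sketch using Node's built-in fs module (assuming temp.png is the file written by your Jimp step):

const fs = require('fs');

// Read the local image and encode it as a base64 string for the
// "content" field of the Vision request.
const imageContent = fs.readFileSync('temp.png').toString('base64');

const request = {
  "requests": [
    {
      "image": { "content": imageContent },
      "features": [{ "type": "TEXT_DETECTION" }],
      "imageContext": { "languageHints": ["en"] }
    }
  ]
};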