Google Cloud Vision API using Cloud Shell: How can I run the API for multiple images? What should my request.json look like?
I ran a test on a single image [using Cloud Shell], with the request.json shown below. How can I run the Vision API on an entire folder of images?
Also, why do the image permissions need to be public for the API to run?
Thank you.
{
  "requests": [
    {
      "image": {
        "source": {
          "gcsImageUri": "gs://visionapitest/landmark/test.jpeg"
        }
      },
      "features": [
        {
          "type": "LABEL_DETECTION",
          "maxResults": 10
        }
      ]
    }
  ]
}
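For context, with that request.json in place, one common way to submit it from Cloud Shell is the standard REST call to the v1 images:annotate endpoint (a sketch assuming the Vision API is enabled in your project and gcloud is already authenticated):

```shell
# Submit request.json to the Vision API from Cloud Shell.
# Assumes the Vision API is enabled and gcloud auth is set up.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://vision.googleapis.com/v1/images:annotate"
```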
If you want to execute the request using Cloud Shell, you have to structure it as follows:
{
  "requests": [
    {
      "image": {
        "source": {
          "gcsImageUri": "gs://visionapitest/landmark/test.jpeg"
        }
      },
      "features": [
        {
          "type": "LABEL_DETECTION",
          "maxResults": 10
        }
      ]
    },
    {
      "image": {
        "source": {
          "gcsImageUri": "gs://visionapitest/landmark/test2.jpeg"
        }
      },
      "features": [
        {
          "type": "LABEL_DETECTION",
          "maxResults": 10
        }
      ]
    },
    …
  ]
}
Note that this is not a way to specify a whole folder: as you can see, the "requests" field is an array of AnnotateImageRequest objects, so you have to list each image in the JSON file one by one.
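Rather than typing that array out by hand, a short script can generate request.json from a list of URIs. A minimal sketch, using the two example objects from above as the URI list:

```python
import json

# Image URIs to annotate; replace with your own GCS objects.
uris = [
    "gs://visionapitest/landmark/test.jpeg",
    "gs://visionapitest/landmark/test2.jpeg",
]

# Build one AnnotateImageRequest entry per image.
body = {
    "requests": [
        {
            "image": {"source": {"gcsImageUri": uri}},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 10}],
        }
        for uri in uris
    ]
}

# Write the file that curl will send to the API.
with open("request.json", "w") as f:
    json.dump(body, f, indent=2)

print(len(body["requests"]))  # prints 2
```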
On the other hand, you can use one of the available Vision Client Libraries to read all the images within the folder. I would like to share a Python code snippet I took from the Vision API documentation that builds the "requests" array dynamically; it originally handled only a single image, but I modified it to read an entire folder.
from google.cloud import vision_v1
from google.cloud.vision_v1 import enums
from google.cloud import storage
from google.cloud.vision_v1 import types
from re import search


def sample_async_batch_annotate_images(bucket_name, output_uri):
    """Perform async batch image annotation."""
    client = vision_v1.ImageAnnotatorClient()
    storage_client = storage.Client()
    blobs = storage_client.list_blobs(
        bucket_name, prefix='vision/label/', delimiter='/'
    )
    requests = []
    for blob in blobs:
        if search('jpg', blob.name):
            input_image_uri = 'gs://' + bucket_name + '/' + blob.name
            print(input_image_uri)
            source = {"image_uri": input_image_uri}
            image = {"source": source}
            features = [
                {"type": enums.Feature.Type.LABEL_DETECTION},
            ]
            request = types.AnnotateImageRequest(image=image, features=features)
            requests.append(request)
    gcs_destination = {"uri": output_uri}
    # The max number of responses to output in each JSON file
    batch_size = 2
    output_config = {"gcs_destination": gcs_destination,
                     "batch_size": batch_size}
    operation = client.async_batch_annotate_images(requests, output_config)
    print("Waiting for operation to complete...")
    response = operation.result(90)
    # The output is written to GCS with the provided output_uri as prefix
    gcs_output_uri = response.output_config.gcs_destination.uri
    print("Output written to GCS with prefix: {}".format(gcs_output_uri))
You can use this as a reference, but it will depend on your use case and language preference.
Regarding the permissions question, I assume you are referring to the Cloud Storage bucket. To my understanding, there is no need to make your images public; you only have to grant read/write Cloud Storage permissions on the bucket to the service account that is executing the requests.
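For instance, granting the service account read access on the bucket can be done with gsutil (a sketch; the service-account email below is a placeholder for your own):

```shell
# Grant the service account read access to the bucket's objects,
# instead of making the images public.
gsutil iam ch \
  serviceAccount:my-sa@my-project.iam.gserviceaccount.com:roles/storage.objectViewer \
  gs://visionapitest
```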