How to call google vision legacy models?
I want to use the legacy text_detection and document_text_detection models (reference: https://cloud.google.com/vision/docs/service-announcements).
I tried passing a features argument like this:
import io
from google.cloud import vision

vision_client = vision.ImageAnnotatorClient()

with io.open("/mnt/d/snap.png", 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = vision_client.document_text_detection(image=image)
# print(response) --> uses the stable models, works fine

feature = vision.Feature(model="builtin/legacy")
response = vision_client.document_text_detection(image=image, features=feature)
# print(response) --> throws the error shown below
I get the following error:
TypeError: dict() got multiple values for keyword argument 'features'
What am I doing wrong?
Try this:
import io
from google.cloud import vision

vision_client = vision.ImageAnnotatorClient()
with io.open("/mnt/d/snap.png", 'rb') as image_file:
    content = image_file.read()

# Build the request dict directly so the feature can carry the legacy model.
response = vision_client.annotate_image({
    'image': {'content': content},
    'features': [{'type_': vision.Feature.Type.DOCUMENT_TEXT_DETECTION,
                  'model': "builtin/legacy"}],
})
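Why the original call fails: the document_text_detection convenience method already builds a features entry internally, so passing features= on top of it most likely collides and raises the duplicate-keyword TypeError. Going through annotate_image lets you construct the feature, including its model field, yourself.

As a minimal sketch (assuming the same file path as in the question), the legacy text_detection model can be requested the same way, with the result read from text_annotations:

import io
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with io.open("/mnt/d/snap.png", 'rb') as image_file:
    content = image_file.read()

# Ask for the legacy TEXT_DETECTION model via an explicit feature entry.
response = client.annotate_image({
    'image': {'content': content},
    'features': [{'type_': vision.Feature.Type.TEXT_DETECTION,
                  'model': "builtin/legacy"}],
})

if response.error.message:
    raise RuntimeError(response.error.message)
if response.text_annotations:
    # The first entity annotation contains the full detected text block.
    print(response.text_annotations[0].description)

The same pattern with DOCUMENT_TEXT_DETECTION (as in the answer above) returns the structured result in response.full_text_annotation.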