Where to use language hints in the google-vision text-detection API?
So I know the google-vision API supports text detection in multiple languages. Using the code below, I can detect English text in an image. But according to Google, I can use the language-hints parameter to detect other languages. Where exactly should I put this parameter in the code below?
def detect_text(path):
    """Detects text in the file."""
    from google.cloud import vision
    import io

    imageContext = 'bn'
    client = vision.ImageAnnotatorClient(imageContext)
    with io.open(path, 'rb') as image_file:
        content = image_file.read()
    image = vision.types.Image(content=content)
    response = client.text_detection(image=image)
    texts = response.text_annotations
    print('Texts:')
    for text in texts:
        print('\n"{}"'.format(text.description))
        vertices = (['({},{})'.format(vertex.x, vertex.y)
                     for vertex in text.bounding_poly.vertices])
        print('bounds: {}'.format(','.join(vertices)))

detect_text('Outline-of-the-Bangladesh-license-plates_Q320.jpg')
Like this:
response = client.text_detection(
    image=image,
    image_context={"language_hints": ["bn"]},  # Bengali
)
See "ImageContext" for more details.
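Putting that into the original function might look like the sketch below. The helper `make_language_context` is my own addition for illustration; the Vision calls follow the older `vision.types` client style used in the question and need google-cloud-vision installed plus valid credentials to actually run:

```python
import io

def make_language_context(hints):
    # Hypothetical helper: builds the ImageContext payload carrying language hints.
    return {"language_hints": list(hints)}

def detect_text(path, hints=("bn",)):
    """Detects text in the file, biasing OCR toward the hinted languages."""
    # Imported inside the function so the module loads without the library installed.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()  # note: the hint does NOT go here
    with io.open(path, 'rb') as image_file:
        content = image_file.read()
    image = vision.types.Image(content=content)

    # The hint goes on the request itself, via image_context:
    response = client.text_detection(
        image=image,
        image_context=make_language_context(hints),
    )
    for text in response.text_annotations:
        print('"{}"'.format(text.description))

# Example call (uncomment once credentials are configured):
# detect_text('Outline-of-the-Bangladesh-license-plates_Q320.jpg', hints=['bn'])
```

The key point is that `language_hints` belongs to the per-request `ImageContext`, not to the `ImageAnnotatorClient` constructor, so different requests on the same client can hint different languages.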