Google Cloud Video Intelligence API in Python - Unable to run object tracking on multiple videos in a folder
I am trying to run object tracking on a folder that contains multiple videos. There are 5 videos in my bucket, and the documentation here suggests using the wildcard (*) operator. However, when I run the entire script, only 1 video gets annotated and not the whole folder of 5 videos. Also, response2.json never gets created in my GCS bucket as the output_uri.
To identify multiple videos, a video URI may include wildcards in the object id. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character.
https://googleapis.dev/python/videointelligence/latest/gapic/v1/types.html
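For example (hypothetical bucket and file names, just to illustrate the documented syntax):

gs://my-bucket/videos/*.mp4       # '*' would match every .mp4 object under videos/
gs://my-bucket/videos/clip?.mp4   # '?' would match clip1.mp4, clip2.mp4, and so on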
This is what I did for the input_uri in my code:
gcs_uri = 'gs://video_intel/*'
If you look at the screenshot, it shows the bucket ID name with multiple videos in the same folder.
Can anyone help with this issue? Thanks.
Full script:
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS']='poc-video-intelligence-da5d4d52cb97.json'
"""Object tracking in a video stored on GCS."""
from google.cloud import videointelligence
video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.OBJECT_TRACKING]
gcs_uri = 'gs://video_intel/*'
output_uri = 'gs://video_intel/response2.json'
operation = video_client.annotate_video(input_uri=gcs_uri, features=features, output_uri=output_uri)
print("\nProcessing video for object annotations.")
result = operation.result(timeout=300)
print("\nFinished processing.\n")
# The first result is retrieved because a single video was processed.
object_annotations = result.annotation_results[0].object_annotations
for object_annotation in object_annotations:
print("Entity description: {}".format(object_annotation.entity.description))
if object_annotation.entity.entity_id:
print("Entity id: {}".format(object_annotation.entity.entity_id))
print(
"Segment: {}s to {}s".format(
object_annotation.segment.start_time_offset.seconds
+ object_annotation.segment.start_time_offset.nanos / 1e9,
object_annotation.segment.end_time_offset.seconds
+ object_annotation.segment.end_time_offset.nanos / 1e9,
)
)
print("Confidence: {}".format(object_annotation.confidence))
# Here we print only the bounding box of the first frame in the segment
frame = object_annotation.frames[0]
box = frame.normalized_bounding_box
print(
"Time offset of the first frame: {}s".format(
frame.time_offset.seconds + frame.time_offset.nanos / 1e9
)
)
print("Bounding box position:")
print("\tleft : {}".format(box.left))
print("\ttop : {}".format(box.top))
print("\tright : {}".format(box.right))
print("\tbottom: {}".format(box.bottom))
print("\n")
Please modify gcs_uri = 'gs://video_intel/*' to gcs_uri = 'gs://video_intel/.*'.
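If the wildcard does match several videos, result.annotation_results should then contain one entry per video, so the script also needs to loop over every result instead of reading only annotation_results[0]. A minimal sketch of that loop, reusing the names from the script above (input_uri on each result is assumed here to identify which video the result belongs to):

for annotation_result in result.annotation_results:
    # One VideoAnnotationResults entry per matched video
    print("Results for video: {}".format(annotation_result.input_uri))
    for object_annotation in annotation_result.object_annotations:
        print("Entity description: {}".format(object_annotation.entity.description))
        print("Confidence: {}".format(object_annotation.confidence))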