Libraries cannot be found on Dataflow/Apache-beam job launched from CircleCI
I'm having serious problems running a Python Apache Beam pipeline with the GCP Dataflow runner, launched from CircleCI. I'd really appreciate any hints on how to tackle this; I've tried everything, but nothing seems to work.

Basically, I'm running a Python Apache Beam pipeline on Dataflow that uses `google-api-python-client-1.12.3`. If I run the job from my machine (`python3 main.py --runner dataflow --setup_file /path/to/my/file/setup.py`), it works fine. If I run that same job from CircleCI, the Dataflow job is created, but it fails with the message `ImportError: No module named 'apiclient'`.
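For context, `apiclient` is only the legacy alias that `google-api-python-client` ships for the canonical package, `googleapiclient`. A minimal sketch of the import in question (the exact module my code pulls in may differ):

```python
# Sketch of the import that fails on the workers. `apiclient` is a
# deprecated alias for `googleapiclient`, so the canonical form is safer:
from googleapiclient import discovery  # rather than: from apiclient import discovery
```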
Looking at this documentation, I think I should probably explicitly use a `requirements.txt` file. If I run that same pipeline from CircleCI but add the `--requirements_file` argument pointing to a requirements file containing a single line (`google-api-python-client==1.12.3`), the Dataflow job fails because the workers fail too. In the logs, there's first an info-level message `ERROR: Could not find a version that satisfies the requirement wheel (from versions: none)`, which results in a later error message `"Error syncing pod somePodIdHere (\"dataflow-myjob-harness-rl84_default(somePodIdHere)\"), skipping: failed to \"StartContainer\" for \"python\" with CrashLoopBackOff: \"back-off 40s restarting failed container=python pod=dataflow-myjob-harness-rl84_default(somePodIdHere)\"`. I found a similar question, but the solution doesn't seem to work in my case.
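For reference, the `setup.py` I pass via `--setup_file` is essentially the standard minimal one (the name and version below are placeholders, not my real project):

```python
# setup.py -- minimal sketch of the file passed via --setup_file.
import setuptools

setuptools.setup(
    name="my-beam-pipeline",  # placeholder
    version="0.0.1",          # placeholder
    packages=setuptools.find_packages(),
    install_requires=[
        # Declared here so the Dataflow workers install it at startup.
        "google-api-python-client==1.12.3",
    ],
)
```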
Any help would be very, very much appreciated. Thanks a lot!
This question looks very similar to yours. The solution seemed to be to explicitly include your requirements' dependencies in the `requirements.txt` file.
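In other words, instead of the single `google-api-python-client==1.12.3` line, pin the whole transitive set (plus `wheel`, since that's what the worker log complains about). A sketch of what that file might look like; the versions are illustrative rather than verified, so regenerate the real pins yourself:

```
# requirements.txt -- illustrative sketch; regenerate the actual pins with
# `pip freeze` (or pip-compile) in a clean virtualenv.
wheel==0.35.1
google-api-python-client==1.12.3
google-auth==1.22.1
google-auth-httplib2==0.0.4
httplib2==0.18.1
six==1.15.0
uritemplate==3.0.1
```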