Failed to update work status Exception in Python Cloud Dataflow

I have a Python Cloud Dataflow job that works fine on smaller subsets, but seems to fail for no apparent reason on the complete dataset.

The only error I see in the Dataflow interface is the standard error message:

A work item was attempted 4 times without success. Each time the worker eventually lost contact with the service.

Analyzing the Stackdriver logs only shows this error:

Exception in worker loop: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 736, in run
    deferred_exception_details=deferred_exception_details)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 590, in do_work
    exception_details=exception_details)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/utils/retry.py", line 167, in wrapper
    return fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 454, in report_completion_status
    exception_details=exception_details)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 266, in report_status
    work_executor=self._work_executor)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/workerapiclient.py", line 364, in report_status
    response = self._client.projects_jobs_workItems.ReportStatus(request)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/clients/dataflow/dataflow_v1b3_client.py", line 210, in ReportStatus
    config, request, global_params=global_params)
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 723, in _RunMethod
    return self.ProcessHttpResponse(method_config, http_response, request)
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 729, in ProcessHttpResponse
    self.__ProcessHttpResponse(method_config, http_response, request))
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 599, in __ProcessHttpResponse
    http_response.request_url, method_config, request)
HttpError: HttpError accessing <https://dataflow.googleapis.com/v1b3/projects//jobs/2017-05-03_03_33_40-3860129055041750274/workItems:reportStatus?alt=json>: response: <{'status': '400', 'content-length': '360', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Wed, 03 May 2017 16:46:11 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json; charset=UTF-8'}>, content <{
  "error": {
    "code": 400,
    "message": "(2a7b20b33659c46e): Failed to publish the result of the work update. Causes: (2a7b20b33659c523): Failed to update work status. Causes: (8a8b13f5c3a944ba): Failed to update work status., (8a8b13f5c3a945d9): Work \"4047499437681669251\" not leased (or the lease was lost).",
    "status": "INVALID_ARGUMENT"
  }
}>

I assume this Failed to update work status error is related to the Cloud Runner? But since I couldn't find any information about this error online, I was wondering whether somebody else has encountered it and has a better explanation?

I am using Google Cloud Dataflow SDK for Python 0.5.5.

One of the main causes of lease expirations is memory pressure on the VM. You can try running your job on machines with more memory; in particular, a highmem machine type should do the trick (a sketch of setting this option is shown below).

For more information on machine types, please check out the GCE Documentation.
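
As an illustration, here is a minimal, hypothetical sketch of requesting highmem workers through pipeline options. The project, bucket paths, and transforms are placeholders, not the asker's actual pipeline, and the option/runner spellings follow later Apache Beam releases; the 0.5.5 SDK used slightly different names (e.g. DataflowPipelineRunner):

import apache_beam as beam

# Hypothetical launcher script -- project, bucket, and transforms are
# placeholders for illustration only.
pipeline_args = [
    '--runner=DataflowRunner',              # older SDKs: DataflowPipelineRunner
    '--project=my-project',
    '--temp_location=gs://my-bucket/temp',
    '--staging_location=gs://my-bucket/staging',
    # Request highmem workers instead of the default machine type;
    # n1-highmem-8 provides 8 vCPUs and 52 GB of RAM.
    '--worker_machine_type=n1-highmem-8',
]

p = beam.Pipeline(argv=pipeline_args)
(p
 | 'Read' >> beam.io.ReadFromText('gs://my-bucket/input/*')
 | 'Process' >> beam.Map(lambda line: line.upper())   # stand-in transform
 | 'Write' >> beam.io.WriteToText('gs://my-bucket/output/result'))
p.run()

The same --worker_machine_type flag can equally be passed on the command line when launching the pipeline script, rather than hard-coding it as above.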

The next Dataflow release (2.0.0) should be able to handle these cases better.