Request had insufficient authentication scopes (403) when writing crawled data to BigQuery from a Scrapy pipeline
I am trying to build a Scrapy crawler: the spider crawls the data, and then in pipeline.py the data is saved to BigQuery. I build it with Docker, set up a crontab job, and push it to a Google Cloud server so it runs daily.
The problem is that when crontab runs the Scrapy crawler, I get "google.api_core.exceptions.Forbidden: 403 GET https://www.googleapis.com/bigquery/v2/projects/project_name/datasets/dataset_name/tables/table_name: Request had insufficient authentication scopes.".
For more detail: when I go into the container (docker exec -it ... /bin/bash) and run the crawler manually (scrapy crawl spider_name), it works like a charm and the data shows up in BigQuery.
I use a service account (JSON file) with the bigquery.admin role to set GOOGLE_APPLICATION_CREDENTIALS.
# spider file is fine
# pipeline.py
from google.cloud import bigquery
import logging
from scrapy.exceptions import DropItem
...
class SpiderPipeline(object):
    def __init__(self):
        # BIGQUERY
        # Setup GOOGLE_APPLICATION_CREDENTIALS in docker file
        self.client = bigquery.Client()
        table_ref = self.client.dataset('dataset').table('data')
        self.table = self.client.get_table(table_ref)

    def process_item(self, item, spider):
        if item['key']:
            # BIGQUERY
            '''Order: key, source, lang, created, previous_price, lastest_price, rating, review_no, booking_no'''
            rows_to_insert = [(item['key'], item['source'], item['lang'])]
            error = self.client.insert_rows(self.table, rows_to_insert)
            if error == []:
                logging.debug('...Save data to bigquery {}...'.format(item['key']))
                # raise DropItem("Missing %s!" % item)
            else:
                logging.debug('[Error upload to Bigquery]: {}'.format(error))
            return item
        raise DropItem("Missing %s!" % item)
In the Dockerfile:
FROM python:3.5-stretch
WORKDIR /app
COPY requirements.txt ./
RUN pip install --trusted-host pypi.python.org -r requirements.txt
COPY . /app
# For Bigquery
# key.json is already in right location
ENV GOOGLE_APPLICATION_CREDENTIALS='/app/key.json'
# Scheduler cron
RUN apt-get update && apt-get -y install cron
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/s-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/s-cron
# Apply cron job
RUN crontab /etc/cron.d/s-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
In the crontab:
# Run once every day at midnight. Need empty line at the end to run.
0 0 * * * cd /app && /usr/local/bin/scrapy crawl spider >> /var/log/cron.log 2>&1
To sum up: how can I get the crontab job to run the crawler without the 403 error? Thank you all for your support.
I suggest you load the service account directly in your code instead of relying on the environment variable. Cron runs its jobs with a stripped-down environment, so the GOOGLE_APPLICATION_CREDENTIALS variable set in the Dockerfile is most likely not visible to the cron job; the client then falls back to the machine's default credentials, which do not carry the BigQuery scopes, hence the 403. Load the key file explicitly like this:
from google.cloud import bigquery

service_account_file_path = "/app/key.json"  # your service account auth file
client = bigquery.Client.from_service_account_json(service_account_file_path)
The rest of your code can stay exactly as it is, since you have already verified that it works.
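For reference, here is a minimal sketch of how the pipeline's __init__ could look with that change. It reuses the dataset and table names from the question and assumes the key file lives at /app/key.json, as in the Dockerfile above:

# pipeline.py -- sketch of the adjusted constructor
from google.cloud import bigquery

class SpiderPipeline(object):
    def __init__(self):
        # Load the key file explicitly so the client no longer depends on
        # GOOGLE_APPLICATION_CREDENTIALS being present in cron's environment
        self.client = bigquery.Client.from_service_account_json('/app/key.json')
        table_ref = self.client.dataset('dataset').table('data')
        self.table = self.client.get_table(table_ref)

process_item and the rest of the pipeline remain unchanged.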