Scrapy - NameError: global name 'base_search_url' is not defined
I'm trying to reference a class attribute from inside a Scrapy spider's class body, but I get NameError: global name 'base_search_url' is not defined.
import datetime

import scrapy


class MySpider(scrapy.Spider):
    name = "mine"
    allowed_domains = ["www.example.com"]
    base_url = "https://www.example.com"
    start_date = "2011-01-01"
    today = datetime.date.today().strftime("%Y-%m-%d")
    # (a stray trailing comma here would also turn this string into a tuple)
    base_search_url = 'https://www.example.com/?city={}&startDate={}&endDate={}&page=1'
    city_codes = ['on', 'bc', 'ab']
    # NameError is raised from here, once the generator body runs:
    start_urls = (base_search_url.format(city_code, start_date, today) for city_code in city_codes)
I tried using self.base_search_url instead, but that did not work either. Does anyone know how to fix this?
FYI, I'm using Python 2.7.
Solved! I ended up fixing it with an __init__() method:
def __init__(self):
    self.start_urls = (self.base_search_url.format(city_code, self.start_date, self.today)
                       for city_code in self.city_codes)
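For completeness, here is a self-contained sketch of that fix with Scrapy stripped out (plain object base class, example URL pattern taken from the question). It shows that the class attributes resolve fine through self once you are inside a method:

```python
import datetime

class MySpider(object):
    base_search_url = 'https://www.example.com/?city={}&startDate={}&endDate={}&page=1'
    start_date = "2011-01-01"
    today = datetime.date.today().strftime("%Y-%m-%d")
    city_codes = ['on', 'bc', 'ab']

    def __init__(self):
        # Inside a method, self.* looks the names up on the instance/class,
        # so no NameError is raised when the URLs are built.
        self.start_urls = [
            self.base_search_url.format(code, self.start_date, self.today)
            for code in self.city_codes
        ]

spider = MySpider()
```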
From the docs:
start_urls: a list of URLs where the Spider will begin to crawl from.
The first pages downloaded will be those listed here. The subsequent
URLs will be generated successively from data contained in the start
URLs.
start_urls should be a list. Setting it in the __init__ method solves it:
def __init__(self):
    # Build a list of URLs; use self.* so the class attributes are found.
    # (Appending a bare generator would add it as a single element.)
    self.start_urls = [
        self.base_search_url.format(city_code, self.start_date, self.today)
        for city_code in self.city_codes
    ]
Or at class level (as you show in your question):
# Works on Python 2.7, where a list comprehension shares the class scope;
# on Python 3 the comprehension body gets its own scope and this fails again.
start_urls = [base_search_url.format(city_code, start_date, today)
              for city_code in city_codes]
Note: make sure the URLs you add are well-formed and start with http:// or https://.
Python has only four scopes: LEGB (Local, Enclosing, Global, Built-in). A class body and a generator expression each get their own local scope, and since neither is a nested function, neither acts as an Enclosing scope for the other. They are two independent local scopes that cannot access each other's names. (Only the outermost iterable of the generator expression is evaluated in the class scope, which is why city_codes resolves but base_search_url in the body does not.)
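A minimal demonstration of that rule, with no Scrapy involved: range(3), the outermost iterable, is evaluated in the class scope and resolves fine, while the name base in the generator body raises NameError the moment the generator is consumed, on Python 2.7 and Python 3 alike:

```python
class Demo(object):
    base = 'x{}'
    try:
        # list() consumes the generator; its body runs in a fresh local
        # scope that cannot see the class-local name `base`.
        urls = list(base.format(i) for i in range(3))
    except NameError as exc:
        error = str(exc)
```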
Three solutions:
1. global base_search_url
2. def __init__(self) ...
3. start_urls = ('https://www.example.com/?city={}&startDate={}&endDate={}&page=1'.format ... )
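Solution 1 can be sketched as follows (hypothetical shortened URL pattern): once base_search_url lives at module level, the comprehension body finds it via the Global step of LEGB, while the outermost iterable city_codes is still read from the class scope:

```python
# Module-level name: visible via the Global scope of LEGB.
base_search_url = 'https://www.example.com/?city={}&page=1'

class MySpider(object):
    city_codes = ['on', 'bc', 'ab']
    # city_codes (the outermost iterable) is evaluated in the class scope;
    # base_search_url is looked up in the module's Global scope.
    start_urls = [base_search_url.format(code) for code in city_codes]
```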