Python scrapy ReactorNotRestartable substitute
I have been trying to use Scrapy to build an application in Python with the following feature:
- A REST API (I built it with Flask) that listens for all crawl/scrape requests and returns the response after crawling. (The crawl part is short enough that the connection can be kept open until crawling finishes.)
I can do this with the following code:
items = []

def add_item(item):
    items.append(item)

# set up crawler
crawler = Crawler(SpiderClass, settings=get_project_settings())
crawler.signals.connect(add_item, signal=signals.item_passed)

# This is added to make the reactor stop; if I don't use this, the code gets stuck at the reactor.run() line.
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)  #@UndefinedVariable

crawler.crawl(requestParams=requestParams)

# start crawling
reactor.run()  #@UndefinedVariable
return str(items)
The problem I am now facing comes after stopping the reactor (which seems necessary to me, because I don't want to stay stuck at reactor.run()). After the first request I cannot accept any further requests. Once the first request completes, I get the following error:
Traceback (most recent call last):
File "c:\python27\lib\site-packages\flask\app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "c:\python27\lib\site-packages\flask\app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "c:\python27\lib\site-packages\flask\app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "c:\python27\lib\site-packages\flask\app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "c:\python27\lib\site-packages\flask\app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "F:\my_workspace\jobvite\jobvite\com\jobvite\web\RequestListener.py", line 38, in submitForm
reactor.run() #@UndefinedVariable
File "c:\python27\lib\site-packages\twisted\internet\base.py", line 1193, in run
self.startRunning(installSignalHandlers=installSignalHandlers)
File "c:\python27\lib\site-packages\twisted\internet\base.py", line 1173, in startRunning
ReactorBase.startRunning(self)
File "c:\python27\lib\site-packages\twisted\internet\base.py", line 684, in startRunning
raise error.ReactorNotRestartable()
ReactorNotRestartable
This is obvious, since we cannot restart the reactor.
So my questions are:
1) How can I serve the next crawl request?
2) Is there any way to move on to the next line after reactor.run() without stopping the reactor?
I suggest you use a queue system such as RQ (for simplicity, but there are a few others).
You can have a crawl function:
from twisted.internet import reactor
import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

from spiders import MySpider

def runCrawler(url, keys, mode, outside, uniqueid):
    runner = CrawlerRunner(get_project_settings())
    d = runner.crawl(MySpider, url=url, param1=value1, ...)
    d.addBoth(lambda _: reactor.stop())
    reactor.run()
Then, in your main code, use an RQ queue to collect the crawler executions:
import redis
from rq import Queue
# other imports

pool = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, db=your_redis_db_number)
redis_conn = redis.Redis(connection_pool=pool)
q = Queue('parse', connection=redis_conn)

# urlSet is a list of http:// or https:// URLs
for url in urlSet:
    job = q.enqueue(runCrawler, url, param1, ..., timeout=600)
Don't forget to start an rq worker process working on the same queue name (here parse). Because each job is executed by the worker in its own process, the reactor is started fresh for every crawl, which is what avoids ReactorNotRestartable. For example, run this in a terminal session:
rq worker parse
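To tie this back to the REST API in the question, the Flask endpoint can simply enqueue the job and return a job id instead of blocking on the reactor, and the result can be fetched once the worker has finished. The sketch below is my own illustration, not part of the original answer: the /crawl and /result/<job_id> routes, the Redis connection details, and the assumption that runCrawler returns the scraped items (its extra parameters are omitted here) are all hypothetical.

import redis
from flask import Flask, jsonify, request
from rq import Queue
from rq.job import Job

from crawl_jobs import runCrawler  # hypothetical module containing the runCrawler function above

app = Flask(__name__)
redis_conn = redis.Redis(host='localhost', port=6379, db=0)
q = Queue('parse', connection=redis_conn)

@app.route("/crawl")
def crawl():
    # Enqueue the crawl instead of running the reactor inside the web process.
    url = request.args.get("url")
    job = q.enqueue(runCrawler, url)
    return jsonify({"job_id": job.get_id()})

@app.route("/result/<job_id>")
def result(job_id):
    # job.result holds whatever runCrawler returned once the worker has finished the job.
    job = Job.fetch(job_id, connection=redis_conn)
    if job.is_finished:
        return jsonify({"status": "finished", "items": job.result})
    return jsonify({"status": job.get_status()})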
Here is a simple solution to your problem:
from flask import Flask
import threading
import subprocess
import sys

app = Flask(__name__)

class myThread(threading.Thread):
    def __init__(self, target):
        threading.Thread.__init__(self)
        self.target = target

    def run(self):
        start_crawl()

def start_crawl():
    # Launch the spider in a separate Python process, so each crawl gets a fresh reactor.
    pid = subprocess.Popen([sys.executable, "start_request.py"])
    return

@app.route("/crawler/start")
def start_req():
    print(":request")
    threadObj = myThread("run_crawler")
    threadObj.start()
    return "Your crawler is in running state"

if __name__ == "__main__":
    app.run(port=5000)
In the solution above, I assume you can start your crawler from the command line by running the start_request.py file in a shell/command prompt (a possible sketch of such a file is shown below).
What we are doing here is using Python threading to start a new thread for each incoming request, so you can easily run your crawler instances in parallel for every hit.
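The answer does not show start_request.py itself; a minimal sketch of what it might look like, assuming a project spider class named MySpider, could be the following. CrawlerProcess manages the reactor on its own, which is safe here because every crawl runs in a brand-new Python process.

# start_request.py (hypothetical sketch; a fresh process per crawl avoids ReactorNotRestartable)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from spiders import MySpider  # assumed spider module, named as in the first answer

if __name__ == "__main__":
    process = CrawlerProcess(get_project_settings())
    process.crawl(MySpider)
    process.start()  # starts the reactor and blocks until the crawl finishes, then the process exits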
Just use threading.activeCount() to control your thread count.
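For example, a crude way to cap the number of concurrent crawls in a modified start_req view might look like this (the limit of 10 and the 429 response are my own choices, not part of the original answer):

import threading

MAX_CRAWLER_THREADS = 10  # arbitrary cap, tune as needed

@app.route("/crawler/start")
def start_req():
    # Refuse new crawls while too many threads are still alive (the main thread counts as one).
    if threading.activeCount() > MAX_CRAWLER_THREADS:
        return "Too many crawlers running, try again later", 429
    threadObj = myThread("run_crawler")
    threadObj.start()
    return "Your crawler is in running state"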