Scraping javascript with 'data-reactid' content using Scrapy and Splash
I'm using Scrapy + Splash to scrape some financial data from a dynamic website, but the site's content is rendered client-side (the markup is full of dynamic 'data-reactid' attributes), so I don't know how to extract it.
Here is my spider:
import scrapy
from scrapy_splash import SplashRequest


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    allowed_domains = ['gu.qq.com']
    start_urls = ['http://gu.qq.com/hk00700/gp/income/']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(
                url=url,
                callback=self.parse,
                args={'wait': 0.5},
                endpoint='render.html',
            )

    def parse(self, response):
        for data in response.css("div.mod-detail write gb_con submodule finance-report"):
            yield {
                'table': data.css("table.fin-table.tbody.tr.td::text").extract()
            }
I tried to export the results to a CSV file with the following command, but nothing was written to the CSV:
scrapy crawl stocks -o stocks.csv
Here is the log after running this command:
root@localhost:~/finance/finance/spiders# scrapy crawl stocks -o stocks.csv
2018-06-09 10:09:59 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: finance)
2018-06-09 10:09:59 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 18.4.0, Python 2.7.12 (default, Dec 4 2017, 14:50:18) - [GCC 5.4.0 20160609], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h 27 Mar 2018), cryptography 2.2.2, Platform Linux-4.15.13-x86_64-linode106-x86_64-with-Ubuntu-16.04-xenial
2018-06-09 10:09:59 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'finance.spiders', 'FEED_URI': 'stocks.csv', 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'SPIDER_MODULES': ['finance.spiders'], 'BOT_NAME': 'finance', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'csv'}
2018-06-09 10:09:59 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2018-06-09 10:09:59 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy_splash.SplashCookiesMiddleware',
'scrapy_splash.SplashMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-06-09 10:09:59 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy_splash.SplashDeduplicateArgsMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-06-09 10:09:59 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-06-09 10:09:59 [scrapy.core.engine] INFO: Spider opened
2018-06-09 10:10:00 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-06-09 10:10:00 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-06-09 10:10:00 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://gu.qq.com/robots.txt> (referer: None)
2018-06-09 10:10:00 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://localhost:8050/robots.txt> (referer: None)
2018-06-09 10:10:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://gu.qq.com/hk00700/gp/income/ via http://localhost:8050/render.html> (referer: None)
2018-06-09 10:10:17 [scrapy.core.engine] INFO: Closing spider (finished)
2018-06-09 10:10:17 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 962,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 2,
'downloader/request_method_count/POST': 1,
'downloader/response_bytes': 184825,
'downloader/response_count': 3,
'downloader/response_status_count/200': 2,
'downloader/response_status_count/404': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 6, 9, 10, 10, 17, 510745),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'memusage/max': 51392512,
'memusage/startup': 51392512,
'response_received_count': 3,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'splash/render.html/request_count': 1,
'splash/render.html/response_count/200': 1,
'start_time': datetime.datetime(2018, 6, 9, 10, 10, 0, 4160)}
2018-06-09 10:10:17 [scrapy.core.engine] INFO: Spider closed (finished)
Below is the link to the page whose structure I want to scrape:
http://gu.qq.com/hk00700/gp/income
I'm very new to web scraping. Could anyone explain how I should extract this data?
There's no need for Splash anywhere here. Just take a look at the page's network traffic: if you change the query parameters in the URL, you get a JSON response. Remove the Splash browser entirely; it isn't useful here at all and only adds to your response time.
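As an illustration, here is a minimal sketch of what a Splash-free spider could look like. The URL below is a placeholder, not the real endpoint, and the JSON key names are assumptions; the actual request (including its query parameters) has to be copied from the XHR calls visible in the browser's network tab.

import json

import scrapy


class StocksJsonSpider(scrapy.Spider):
    # Splash-free spider: request the JSON endpoint directly instead of
    # rendering the page in a headless browser.
    name = 'stocks_json'

    # Placeholder URL -- replace it with the JSON request (and its query
    # parameters) observed in the browser's network tab.
    start_urls = ['http://gu.qq.com/hk00700/gp/income/?format=json']

    def parse(self, response):
        # The body is already JSON, so parse it directly; there is no
        # need for CSS selectors or a 'wait' for client-side rendering.
        data = json.loads(response.text)
        # 'data' as the top-level key is an assumption about the payload
        # shape; adjust it to match the real response.
        for row in data.get('data', []):
            yield {'table': row}

With this approach the scrapy_splash middleware, the SplashRequest, and the wait argument can all be dropped, and scrapy crawl stocks_json -o stocks.csv exports the items as before.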