Scrapy login to vBulletin guidance needed

I've read a lot of posts on the subject (including the Scrapy documentation), but for some reason I can't log in to a vBulletin site. Let me clarify up front: I'm not a developer and my knowledge of programming/scraping is very basic, so if anyone decides to help, please be as specific as you can.

Now let me explain in detail:

I'm trying to log in to our company's forum, scrape information from it and organize it into an Excel spreadsheet. The login URL is: https://forums.chaosgroup.com/auth/login-form

Besides the username (scrapy) and password (12345) fields, there are a few hidden values/fields in the page source:

<input type="hidden" name="url" value="aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v" />
<input type="hidden" id="vb_loginmd5" name="vb_login_md5password" value="">
<input type="hidden" id="vb_loginmd5_utf8" name="vb_login_md5password_utf" value="">

When I submit the data from the website itself, I see the following POST request in the Chrome inspector:

url:aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v
username:scrapy
password:
vb_login_md5password:827ccb0eea8a706c4c34a16891f84e7b
vb_login_md5password_utf:827ccb0eea8a706c4c34a16891f84e7b

Most of this information is static. Occasionally I've seen the hidden url value change its last character, but overall everything stays the same.
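Out of curiosity I also checked where those values come from. A quick sanity check in plain Python (just the two literals copied from above; I'm assuming nothing else goes into them) shows that the hidden url field is the forum's base URL in Base64, and the two md5 fields are simply the MD5 of the plain-text password, apparently computed by the page's javascript before the form is submitted:

import base64
import hashlib

# The hidden "url" value is just the forum's base URL, Base64-encoded.
print(base64.b64decode('aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v').decode())
# -> https://forums.chaosgroup.com/

# The two md5password fields are the MD5 hex digest of the password "12345".
print(hashlib.md5(b'12345').hexdigest())
# -> 827ccb0eea8a706c4c34a16891f84e7b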

Now, I'm trying to submit the same data from a Scrapy spider (code below) in order to log in, but the spider returns to the login page instead of opening the actual forum.

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import FormRequest
from scrapy.utils.response import open_in_browser


class ForumsSpider(scrapy.Spider):
    name = 'forums'
    start_urls = ['https://forums.chaosgroup.com/auth/login-form/']

    def parse(self, response):
        # Fill in the login form found on the page and submit it.
        return FormRequest.from_response(
            response,
            formdata={'url': 'aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v',
                      'username': 'scrapy',
                      'password': '',
                      'vb_login_md5password': '827ccb0eea8a706c4c34a16891f84e7b',
                      'vb_login_md5password_utf': '827ccb0eea8a706c4c34a16891f84e7b'},
            callback=self.scrape_home_page)

    def scrape_home_page(self, response):
        # Open the response in a browser for inspection and print the first <h1>.
        open_in_browser(response)
        a = response.css('h1::text').extract_first()
        print(a)
        yield {'heading': a}  # spiders must yield dicts/items, not bare strings

The log I get from Scrapy is here: https://pastebin.com/XtPHnBcF (for easier reading)

D:\Scrapy\forum>scrapy crawl forums
2018-02-24 11:42:10 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: forum)
2018-02-24 11:42:10 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.3.1, w3lib 1.19.0, Twisted 17.9.0, Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g  2 Nov 2017), cryptography 2.1.4, Platform Windows-8.1-6.3.9600-SP0
2018-02-24 11:42:10 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'forum', 'COOKIES_DEBUG': True, 'DOWNLOAD_DELAY': 3, 'NEWSPIDER_MODULE': 'forum.spiders', 'SPIDER_MODULES': ['forum.spiders']}
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled item pipelines: []
2018-02-24 11:42:10 [scrapy.core.engine] INFO: Spider opened
2018-02-24 11:42:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-02-24 11:42:10 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-02-24 11:42:11 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 https://forums.chaosgroup.com/auth/login-form/>
Set-Cookie: bbsessionhash=97ed47f40f0376dd5c33276eefe2cb53; path=/; secure; HttpOnly
Set-Cookie: bblastvisit=1519465318; path=/; secure; HttpOnly
Set-Cookie: bblastactivity=1519465318; path=/; secure; HttpOnly
2018-02-24 11:42:11 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://forums.chaosgroup.com/auth/login-form/> (referer: None)
2018-02-24 11:42:11 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <POST https://forums.chaosgroup.com/auth/login>
Cookie: bbsessionhash=97ed47f40f0376dd5c33276eefe2cb53; bblastvisit=1519465318; bblastactivity=1519465318
2018-02-24 11:42:13 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 https://forums.chaosgroup.com/auth/login>
Set-Cookie: bblastactivity=1519465321; path=/; secure; HttpOnly
Set-Cookie: bbsessionhash=58e04286cf781704ef718c38d4dbb0a2; path=/; secure; HttpOnly
2018-02-24 11:42:13 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://forums.chaosgroup.com/auth/login> (referer: https://forums.chaosgroup.com/auth/login-form/)
None
2018-02-24 11:42:13 [scrapy.core.engine] INFO: Closing spider (finished)
2018-02-24 11:42:13 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 862,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 1,
 'downloader/request_method_count/POST': 1,
 'downloader/response_bytes': 3538,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 2, 24, 9, 42, 13, 954670),
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2018, 2, 24, 9, 42, 10, 928535)}
2018-02-24 11:42:13 [scrapy.core.engine] INFO: Spider closed (finished)

I've tried to figure out what I'm doing wrong by comparing my code with other, similar code and by trying (successfully) to log in to other sites, but I can't manage to make it work with our vBulletin site.

What am I doing wrong, what am I missing? If anyone could point me in the right direction I'd really appreciate it, and I'll do my best to return the help.

Thanks in advance, everyone.

Your login data is POSTed to https://forums.chaosgroup.com/auth/login.

If you look at the source of that page (response.text in your scrape_home_page()), you'll see:

<div class="redirectMessage-wrapper">
        <div id="redirectMessage">Logging in...</div>
</div>


<script type="text/javascript">
(function()
{
        var url = "https://forums.chaosgroup.com" || "/";

        //remove hash from the url of the top most window (if any)
        var a = document.createElement('a');
        a.setAttribute('href', url);
        if (a.hash) {
                url = url.replace(a.hash, '');
        }
        else if (url.lastIndexOf('#') != -1) { //a.hash with just # returns empty
                url = url.replace('#', '');
        }



        window.open(url, '_top');
})();
</script>

That shows the login actually did succeed and you are simply being redirected to the index page via javascript.
So you're already logged in; all you have to do to keep scraping is go to the index page.
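In concrete terms, the only thing left is to follow that redirect yourself from the callback. A minimal sketch (scrape_forum_index is a name I made up, and the regex is just one way of pulling the target out of the inline script shown above; simply requesting https://forums.chaosgroup.com/ directly would work too):

import re
import scrapy


class ForumsSpider(scrapy.Spider):
    name = 'forums'
    start_urls = ['https://forums.chaosgroup.com/auth/login-form/']

    # parse() stays exactly as in your spider: it submits the login form with
    # FormRequest.from_response and sets callback=self.scrape_home_page.

    def scrape_home_page(self, response):
        # The login has already succeeded at this point; the response only
        # contains the javascript redirect. Pull its target out of the inline
        # script (falling back to the site root) and follow it with the same
        # session cookies.
        match = re.search(r'var url = "([^"]+)"', response.text)
        url = match.group(1) if match else response.urljoin('/')
        yield scrapy.Request(url, callback=self.scrape_forum_index)

    def scrape_forum_index(self, response):
        # You are on the forum index now; scrape whatever you need,
        # e.g. the first heading.
        yield {'heading': response.css('h1::text').extract_first()}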