User Agent/Cookie workaround to web-scraping in MATLAB
I've been trying for a couple of days (using this site and MathWorks) to get around the crumb that Yahoo Finance adds at the end of a link to download a CSV file. For example, for a CSV with Nasdaq 100 data, a Chrome browser produces the link: https://query1.finance.yahoo.com/v7/finance/download/%5ENDX?period1=496969200&period2=1519513200&interval=1d&events=history&crumb=dnhBC8SRS9G (obtained by clicking the "Download Data" button on the Yahoo Finance page).
This crumb=dnhBC8SRS9G
apparently changes with the cookie and user agent, so I tried to configure MATLAB accordingly to disguise itself as a Chrome browser (copying the cookie/user agent from Chrome):
useragent = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36';
cookie ='PRF=t%3D%255ENDX; expires=Thu, 11-Jun-2020 09:06:31 GMT; path=/; domain=.finance.yahoo.com';
opts = weboptions('UserAgent',useragent,'KeyName','WWW_Authenticate','KeyValue','dnhBC8SRS9G','KeyName','Cookie','KeyValue',cookie)
url = 'https://query1.finance.yahoo.com/v7/finance/download/^NDX?period1=496969200&period2=1519513200&interval=1d&events=history&crumb=dnhBC8SRS9G' ;
response = webread(url,opts)
But no matter what I do (using webread
or the additional function urlread2
), the response I get is "unauthorized." The MATLAB code above gives the response:
Error using readContentFromWebService (line 45)
The server returned the status 401 with message "Unauthorized" in response to the request to URL
https://query1.finance.yahoo.com/v7/finance/download/%5ENDX?period1=496969200&period2=1519513200&interval=1d&events=history&crumb=dnhBC8SRS9G.
Error in webread (line 122)
[varargout{1:nargout}] = readContentFromWebService(connection, options);
Error in TEST2 (line 22)
response = webread(url,opts)
Any help would be greatly appreciated; I just want to get the basics working, even if that means I have to manually copy the crumb
from the Chrome browser into MATLAB before each request. (I've seen it solved in Python, C#, etc., and I followed those solutions as closely as I could, so it should be doable in MATLAB too, right?)
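For comparison, the header spoofing that the Python solutions rely on boils down to attaching the same User-Agent and Cookie headers to the request. A minimal stdlib sketch (the request is only constructed here, not sent; the cookie value is the illustrative one from above):

```python
import urllib.request

useragent = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36")
cookie = "PRF=t%3D%255ENDX"  # session cookie copied from the browser

url = ("https://query1.finance.yahoo.com/v7/finance/download/%5ENDX"
       "?period1=496969200&period2=1519513200&interval=1d"
       "&events=history&crumb=dnhBC8SRS9G")

# Construct (but do not send) a request carrying the spoofed headers.
req = urllib.request.Request(url, headers={
    "User-Agent": useragent,
    "Cookie": cookie,
})
print(req.get_header("User-agent"))
```

Whether the server accepts such a request is a separate matter, as the rest of this question shows; the sketch only illustrates where the two headers go.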
EDIT: In case it helps, when I run urlread2
instead of webread
at the end of the code, i.e.:
[output,extras] = urlread2(url,'GET');
extras.firstHeaders
I get the following output from MATLAB:
ans =
struct with fields:
Response: 'HTTP/1.1 401 Unauthorized'
X_Content_Type_Options: 'nosniff'
WWW_Authenticate: 'crumb'
Content_Type: 'application/json;charset=utf-8'
Content_Length: '136'
Date: 'Tue, 12 Jun 2018 13:07:38 GMT'
Age: '0'
Via: 'http/1.1 media-router-omega4.prod.media.ir2.yahoo.com (ApacheTrafficServer [cMsSf ]), http/1.1 media-ncache-api17.prod.media.ir2.yahoo.com (ApacheTrafficServer [cMsSf ]), http/1.1 media-ncache-api15.prod.media.ir2.yahoo.com (ApacheTrafficServer [cMsSf ]), http/1.1 media-router-api12.prod.media.ir2.yahoo.com (ApacheTrafficServer [cMsSf ]), https/1.1 e3.ycpi.seb.yahoo.com (ApacheTrafficServer [cMsSf ])'
Server: 'ATS'
Expires: '-1'
Cache_Control: 'max-age=0, private'
Strict_Transport_Security: 'max-age=15552000'
Connection: 'keep-alive'
Expect_CT: 'max-age=31536000, report-uri="http://csp.yahoo.com/beacon/csp?src=yahoocom-expect-ct-report-only"'
Public_Key_Pins_Report_Only: 'max-age=2592000; pin-sha256="2fRAUXyxl4A1/XHrKNBmc8bTkzA7y4FB/GLJuNAzCqY="; pin-sha256="2oALgLKofTmeZvoZ1y/fSZg7R9jPMix8eVA6DH4o/q8="; pin-sha256="Gtk3r1evlBrs0hG3fm3VoM19daHexDWP//OCmeeMr5M="; pin-sha256="I/Lt/z7ekCWanjD0Cvj5EqXls2lOaThEA0H2Bg4BT/o="; pin-sha256="JbQbUG5JMJUoI6brnx0x3vZF6jilxsapbXGVfjhN8Fg="; pin-sha256="SVqWumuteCQHvVIaALrOZXuzVVVeS7f4FGxxu6V+es4="; pin-sha256="UZJDjsNp1+4M5x9cbbdflB779y5YRBcV6Z6rBMLIrO4="; pin-sha256="Wd8xe/qfTwq3ylFNd3IpaqLHZbh2ZNCLluVzmeNkcpw="; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="cAajgxHlj7GTSEIzIYIQxmEloOSoJq7VOaxWHfv72QM="; pin-sha256="dolnbtzEBnELx/9lOEQ22e6OZO/QNb6VSSX2XHA3E7A="; pin-sha256="i7WTqTvh0OioIruIfFR4kMPnBqrS2rdiVPl/s2uC/CY="; pin-sha256="iduNzFNKpwYZ3se/XV+hXcbUonlLw09QPa6AYUwpu4M="; pin-sha256="lnsM2T/O9/J84sJFdnrpsFp3awZJ+ZZbYpCWhGloaHI="; pin-sha256="r/mIkG3eEpVdm+u/ko/cwxzOMo1bk4TyHIlByibiA5E="; pin-sha256="uUwZgwDOxcBXrQcntwu+kYFpkiVkOaezL0WYEZ3anJc="; includeSubdomains; report-uri="http://csp.yahoo.com/beacon/csp?src=yahoocom-hpkp-report-only"'
My weboptions
output is:
opts =
weboptions with properties:
CharacterEncoding: 'auto'
UserAgent: 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36'
Timeout: 5
Username: ''
Password: ''
KeyName: ''
KeyValue: ''
ContentType: 'auto'
ContentReader: []
MediaType: 'application/x-www-form-urlencoded'
RequestMethod: 'auto'
ArrayFormat: 'csv'
HeaderFields: {'Cookie' 'PRF=t%3D%255ENDX; expires=Thu, 11-Jun-2020 09:06:31 GMT; path=/; domain=.finance.yahoo.com'}
CertificateFilename: '/opt/matlab/r2017a/sys/certificates/ca/rootcerts.pem'
Well, after some attempts with curl, it looks like what you are trying to do is not possible at the specified URL. Notably, the crumb and cookie change frequently, so my script has to parse the responses of two GET requests each time to obtain their values.
I will walk you through my attempt.
- GET request and save the cookie file.
- Parse the cookie file for the cookie.
- Print the cookie to a file.
- GET request and save the HTML.
- Parse the HTML and obtain the crumb.
- Form the URL.
- Form the curl command.
- Execute the request.
Code:
%Get cookie.
command = 'curl -s --cookie-jar cookie.txt https://finance.yahoo.com/quote/GOOG?p=GOOG';
%Execute request.
system(command);
%Read file.
cookie_file = fileread('cookie.txt');
%regexp the cookie.
cookie = regexp(cookie_file,'B\s*(.*)','tokens');
cookie = cell2mat(cookie{1});
%Print cookie to file (for curl purposes only).
file = fopen('mycookie.txt','w');
fprintf(file,'%s',cookie);
%Close the file so the cookie is flushed to disk before curl reads it.
fclose(file);
%Get request.
command = 'curl https://finance.yahoo.com/quote/GOOG?p=GOOG > goog.txt';
%Execute request.
system(command);
%Read file.
crumb_file = fileread('goog.txt');
%regexp the crumb.
crumb = regexp(crumb_file,'(?<="CrumbStore":{"crumb":")(.*)(?="},"UserStore":)','tokens');
crumb = crumb{:};
%Form the URL.
url = 'https://query1.finance.yahoo.com/v7/finance/download/AAPL?period1=1492524105&period2=1495116105&interval=1d&events=history&crumb=';
url = strcat(url,crumb);
%Form the curl command.
command = strcat('curl',{' '},'-v -L -b',{' '},'mycookie.txt',{' '},'-H',{' '},'"User-Agent:',{' '},'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36','"',{' '},'"',url,'"');
command = command{1};
system(command);
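The cookie- and crumb-parsing steps above can be sketched outside MATLAB as well. A minimal Python sketch of the crumb extraction, run against a hypothetical snippet of the quote page's source (the real page would be fetched with a GET request first, as in the script above):

```python
import re

# Hypothetical sample of the page source around the CrumbStore blob,
# for illustration only.
sample_html = '..."CrumbStore":{"crumb":"dnhBC8SRS9G"},"UserStore":...'

def extract_crumb(html: str) -> str:
    """Pull the crumb out of the quote page's embedded JSON, mirroring
    the MATLAB regexp with lookbehind/lookahead anchors."""
    match = re.search(r'(?<="CrumbStore":\{"crumb":")(.*?)(?="\})', html)
    if match is None:
        raise ValueError("no CrumbStore found in page source")
    return match.group(1)

crumb = extract_crumb(sample_html)
print(crumb)  # dnhBC8SRS9G
```

The lazy `(.*?)` matters: a greedy match could run past the closing quote if the page embeds more JSON on the same line.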
The final curl request:
curl -v -L -b mycookie.txt -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36" "https://query1.finance.yahoo.com/v7/finance/download/^NDX?period1=496969200&period2=1519513200&interval=1d&events=history&crumb=dSpwQstrQDp"
In the final curl request, I used the following flags:
-v: verbose output
-L: follow redirects
-b: read cookies from the given file
-H: extra header field (used here to spoof my browser's user agent)
For every attempt, the response was the following:
{
"finance": {
"error": {
"code": "Unauthorized",
"description": "Invalid cookie"
}
}
}
I studied the server responses, and the client successfully sends every header value, but it always results in the same error. At this point I suspect you simply cannot do this anymore, as explained here. So, as that user pointed out, you may need to perform the web scraping from a different endpoint. Perhaps if you find a working URL you can ask a new question, and I'd be happy to help.
The following is a script that downloads the last month of data for the AAPL stock and creates a .csv file named AAPL_14-05-2018_14-06-2018 containing the date, open, high, low, close, adjusted close, and volume information as found here.
%Choose any ticker.
ticker = 'AAPL'; %'FB','AMZN'...
%Base url.
url = 'https://query1.finance.yahoo.com/v8/finance/chart/GOOG?symbol=';
%weboption constructor.
opts = weboptions();
%Start retrieving data from today.
today = datetime('now');
today.TimeZone = 'America/New_York';
%Convert dates to unix timestamp.
todayp = posixtime(today);
%Last week.
weekp = posixtime(datetime(addtodate(datenum(today),-7,'day'),'ConvertFrom','datenum'));
%Last month.
monthp = posixtime(datetime(addtodate(datenum(today),-1,'month'),'ConvertFrom','datenum'));
%Last year.
yearp = posixtime(datetime(addtodate(datenum(today),-1,'year'),'ConvertFrom','datenum'));
%Add ticker.
url = strcat(url,ticker);
%Construct url, add time intervals. The following url is for last month worth of data.
url = strcat(url,'&period1=',num2str(monthp,'%.10g'),'&period2=',num2str(todayp,'%.10g'),'&interval=','1d');
%Execute HTTP request.
data = webread(url,opts);
%Get data.
dates = flipud(datetime(data.chart.result.timestamp,'ConvertFrom','posixtime'));
high = flipud(data.chart.result.indicators.quote.high);
low = flipud(data.chart.result.indicators.quote.low);
vol = flipud(data.chart.result.indicators.quote.volume);
open = flipud(data.chart.result.indicators.quote.open);
close = flipud(data.chart.result.indicators.quote.close);
adjclose = flipud(data.chart.result.indicators.adjclose.adjclose);
%Create table.
t = table(dates,open,high,low,close,adjclose,vol);
%Format filename: ticker, start date, end date.
namefile = strcat(ticker,'_',char(datetime(monthp,'Format','dd-MM-yyyy','ConvertFrom','posixtime')),...
'_',char(datetime(todayp,'Format','dd-MM-yyyy','ConvertFrom','posixtime')),'.csv');
%Write table to file.
writetable(t,namefile);
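The timestamp arithmetic in the script above (posixtime for period1/period2) translates directly to other languages. A minimal Python sketch of building an equivalent v8 chart-API URL, with the ticker placed in the path (a simplification of the base URL used above); the dates are fixed so the result is reproducible:

```python
from datetime import datetime, timedelta, timezone

def chart_url(ticker: str, start: datetime, end: datetime,
              interval: str = "1d") -> str:
    """Build a v8 chart-API URL: the period parameters are Unix
    timestamps (seconds since the epoch, UTC)."""
    base = "https://query1.finance.yahoo.com/v8/finance/chart/"
    return (f"{base}{ticker}?period1={int(start.timestamp())}"
            f"&period2={int(end.timestamp())}&interval={interval}")

end = datetime(2018, 6, 14, tzinfo=timezone.utc)
start = end - timedelta(days=30)  # roughly "last month"
url = chart_url("AAPL", start, end)
print(url)
```

Changing `timedelta(days=30)` to `days=7` or `days=365` gives the last-week and last-year variants, matching the weekp/monthp/yearp values computed in the MATLAB script.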
.csv file output (only the most recent days shown):
dates open high low close adjclose vol
14/06/2018 16:46 191.5500031 191.5700073 190.2200012 190.7599945 190.7599945 10252639
13/06/2018 13:30 192.4199982 192.8800049 190.4400024 190.6999969 190.6999969 21431900
12/06/2018 13:30 191.3899994 192.6100006 191.1499939 192.2799988 192.2799988 16911100
11/06/2018 13:30 191.3500061 191.9700012 190.2100067 191.2299957 191.2299957 18308500
08/06/2018 13:30 191.1699982 192 189.7700043 191.6999969 191.6999969 26656800
07/06/2018 13:30 194.1399994 194.1999969 192.3399963 193.4600067 193.4600067 21347200
06/06/2018 13:30 193.6300049 194.0800018 191.9199982 193.9799957 193.9799957 20933600
05/06/2018 13:30 193.0700073 193.9400024 192.3600006 193.3099976 193.3099976 21566000
Above, I retrieved the last month's worth of data. In the comments I showed how to adjust it for the last week and the last year. I can easily adapt the code and turn it into a function for you to use with any stock and time interval; you only need to tell me which time intervals you are interested in.
Yahoo performs a number of checks to make sure the request comes from a web browser.
Check out this function, https://www.mathworks.com/matlabcentral/fileexchange/68361-yahoo-finance-data-downloader, which convinces Yahoo Finance that the request comes from a browser.
Here are a couple of examples of how to use it to download and analyze market data:
https://github.com/Lenskiy/market-data-functions