Not able to crawl a URL because it contains a special character
I am trying to crawl with Nutch 1.17, but the URL is rejected because it contains `#!`.

Example: xxmydomain.com/xxx/#!/xxx/abc.html

I have also tried adding

+^/
+^#!

to my regex-urlfilter.
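For reference, a likely culprit on the filter side is the stock rule in conf/regex-urlfilter.txt that skips URLs containing characters commonly used in query strings; in a default Nutch distribution it looks roughly like this (a sketch, check your local copy):

```
# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]
```

Because filter rules are evaluated top to bottom and the first match wins, this `-` rule rejects any URL containing `!` before your added `+` lines are ever reached. Note also that `^` anchors at the start of the full URL, so a rule like `+^#!` can never match an http/https URL anyway.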
- If you look specifically at the regex-normalize.xml file, this rule file is applied as part of the urlnormalizer-regex plugin, which is included by default via the plugin.includes property in nutch-site.xml. As part of URL normalization, the following rule truncates the URL when anything appears after a `#`:
<!-- removes interpage href anchors such as site.com#location -->
<regex>
<pattern>#.*?(\?|&|$)</pattern>
<substitution></substitution>
</regex>
You can disable this rule by commenting it out (the recommended way), OR you can remove urlnormalizer-regex from the plugin.includes property in nutch-site.xml.
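For the recommended option, disabling the rule in conf/regex-normalize.xml is just a matter of wrapping it in an XML comment (a sketch of the edited file; note that in the file itself the `&` in the pattern is escaped as `&amp;`, even though it renders as `&` above):

```xml
<!-- Disabled so that #! URLs keep their fragment:
<regex>
  <pattern>#.*?(\?|&amp;|$)</pattern>
  <substitution></substitution>
</regex>
-->
```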
- There is one more place in URL normalization where the URL fragment is dropped: urlnormalizer-basic. BasicURLNormalizer applies general normalization to URLs (i.e. removing redundant slashes and encoding the path correctly with percent-encoding):
public String normalize(String urlString, String scope)
throws MalformedURLException {
if ("".equals(urlString)) // permit empty
return urlString;
urlString = urlString.trim(); // remove extra spaces
URL url = new URL(urlString);
String protocol = url.getProtocol();
String host = url.getHost();
int port = url.getPort();
String file = url.getFile();
boolean changed = false;
boolean normalizePath = false;
if (!urlString.startsWith(protocol)) // protocol was lowercased
changed = true;
if ("http".equals(protocol) || "https".equals(protocol)
|| "ftp".equals(protocol)) {
if (host != null && url.getAuthority() != null) {
String newHost = normalizeHostName(host);
if (!host.equals(newHost)) {
host = newHost;
changed = true;
} else if (!url.getAuthority().equals(newHost)) {
// authority (http://<...>/) contains other elements (port, user,
// etc.) which will likely cause a change if left away
changed = true;
}
} else {
// no host or authority: recompose the URL from components
changed = true;
}
if (port == url.getDefaultPort()) { // uses default port
port = -1; // so don't specify it
changed = true;
}
normalizePath = true;
if (file == null || "".equals(file)) {
file = "/";
changed = true;
normalizePath = false; // no further path normalization required
} else if (!file.startsWith("/")) {
file = "/" + file;
changed = true;
normalizePath = false; // no further path normalization required
}
if (url.getRef() != null) { // remove the ref
changed = true;
}
} else if (protocol.equals("file")) {
normalizePath = true;
}
// properly encode characters in path/file using percent-encoding
String file2 = unescapePath(file);
file2 = escapePath(file2);
if (!file.equals(file2)) {
changed = true;
file = file2;
}
if (normalizePath) {
// check for unnecessary use of "/../", "/./", and "//"
if (changed) {
url = new URL(protocol, host, port, file);
}
file2 = getFileWithNormalizedPath(url);
if (!file.equals(file2)) {
changed = true;
file = file2;
}
}
if (changed) {
url = new URL(protocol, host, port, file);
urlString = url.toString();
}
return urlString;
}
As you can see from the code, it completely ignores **url.getRef**, which carries the URL fragment information.

So what we can do is simply replace

url = new URL(protocol, host, port, file);

at the end of the normalize method with

url = new URL(protocol, host, port, file + "#" + url.getRef());
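One caveat worth noting: url.getRef() returns null when a URL has no fragment, so the straight replacement above would produce a trailing "#null" for fragment-free URLs. A minimal guarded sketch (the class and method names here are mine for illustration, not Nutch's):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class FragmentPreservingRebuild {

    // Rebuild the URL from its components, re-appending the fragment only
    // when one is present; file + "#" + url.getRef() alone would yield
    // "...#null" for URLs without a fragment.
    static String rebuild(URL url, String protocol, String host, int port,
                          String file) throws MalformedURLException {
        String ref = url.getRef();
        String fileWithRef = (ref == null) ? file : file + "#" + ref;
        return new URL(protocol, host, port, fileWithRef).toString();
    }

    public static void main(String[] args) throws MalformedURLException {
        URL u = new URL("https://www.codepublishing.com/CA/AlisoViejo/#!/AlisoViejo01/AlisoViejo01.html");
        // fragment is retained
        System.out.println(rebuild(u, u.getProtocol(), u.getHost(), u.getPort(), u.getFile()));

        URL plain = new URL("https://example.com/a/b.html");
        // no spurious "#null" appended
        System.out.println(rebuild(plain, plain.getProtocol(), plain.getHost(), plain.getPort(), plain.getFile()));
    }
}
```

The same null check belongs in the patched normalize method, since most URLs passing through the normalizer carry no fragment at all.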
How did I verify this?
scala> val url = new URL("https://www.codepublishing.com/CA/AlisoViejo/#!/AlisoViejo01/AlisoViejo01.html");
url: java.net.URL = https://www.codepublishing.com/CA/AlisoViejo/#!/AlisoViejo01/AlisoViejo01.html
scala> val protocol = url.getProtocol();
protocol: String = https
scala> val host = url.getHost();
host: String = www.codepublishing.com
scala> val port = url.getPort();
port: Int = -1
scala> val file = url.getFile();
file: String = /CA/AlisoViejo/
scala> // when we reconstruct the URL from the above components, we end up losing the fragment information, as shown below
scala> new URL(protocol, host, port, file).toString
res69: String = https://www.codepublishing.com/CA/AlisoViejo/
scala> // if we use url.getRef when reconstructing the URL, we retain the fragment information, as shown below
scala> new URL(protocol, host, port, file+"#"+url.getRef).toString
res70: String = https://www.codepublishing.com/CA/AlisoViejo/#!/AlisoViejo01/AlisoViejo01.html
scala> // so we can change the URL construction as explained above to retain the fragment information
Note: a URL fragment is a local reference within a page. In most cases it makes no sense to crawl such URLs (which is why Nutch normalizes them away with the rule above), since the underlying HTML remains the same.