Fetch as Google is not able to call the API
I have a React application. For SEO, I'm previewing how Google sees our React app.
In our application, if an API call fails, we display an error message on the page.
So when we do a Fetch as Google, we see that same error message. This means the API call failed, even though it works fine in the browser.
I can't figure out why Google gets an error when fetching the API.
We have split the API responses into their corresponding sections and error messages, so the error we're seeing means the status code is between 300 and 400, or anything above 403.
Is it because of CORS? Or something React-related? Any suggestions?
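For context, a minimal sketch of the kind of status-code-based error mapping described above (the function name and messages are hypothetical, not the app's actual code):

```javascript
// Hypothetical sketch: map an HTTP status code to the error message
// shown on the page, per the ranges described in the question.
function errorMessageFor(status) {
  if (status >= 300 && status < 400) {
    // Unexpected redirect from the API
    return "Redirect error (HTTP " + status + ").";
  }
  if (status > 403) {
    // Any client/server error above 403, e.g. 404 or 500
    return "Request failed (HTTP " + status + ").";
  }
  if (status === 401 || status === 403) {
    return "Authorization error (HTTP " + status + ").";
  }
  // Successful responses produce no error message
  return null;
}
```

Seeing one of these messages in the rendered Fetch as Google output only tells you *that* the request failed from Googlebot's side, not *why*.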
You disallow requests to the API URLs in your robots.txt file. This blocks Google from executing your JavaScript code, because the API is "unreachable".
See the help center article that is linked from the "Fetch as Google" page:
Google got a response from your site and fetched the URL, but could not reach all resources referenced by the page because they were blocked by robots.txt files. If this is a fetch only, do a fetch and render. Examine the rendered page to see if any significant resources were blocked that could prevent Google from properly analyzing the meaning of the page. If significant resources were blocked, unblock the resources on robots.txt files that you own. For resources blocked by robots.txt files that you don't own, reach out to the resource site owners and ask them to unblock those resources to Googlebot.
You can also see that these pages are only "partial[ly]" fetched, and the details page tells you exactly which URLs were blocked and why.
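As an illustration (the `/api/` path is hypothetical; substitute the endpoint your React app actually calls), a robots.txt like this would cause exactly the symptom described:

```
# Problem: this blocks every crawler, including Googlebot,
# from the API, so the rendered page shows the error state.
User-agent: *
Disallow: /api/
```

The fix is to stop disallowing the API paths the page depends on, for example:

```
# Fix: explicitly allow the API paths the page needs to render.
User-agent: *
Allow: /api/
```

After changing robots.txt, do a "fetch and render" again and check the details page to confirm no significant resources are still listed as blocked.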