Something like GROUP BY with Logs Explorer
I am trying to find out details about suspicious traffic on my website running on Google Cloud (Google App Engine with Java, to be specific). One idea is to analyze which IP addresses send requests particularly often. In SQL I would do something like:
SELECT
protoPayload.ip,
COUNT(protoPayload.ip) AS `ip_occurrence`
FROM
foo /* TODO replace foo with correct table name */
WHERE
protoPayload.ip NOT LIKE '66.249.77.%' /* ignore Google bots */
GROUP BY
protoPayload.ip
ORDER BY
`ip_occurrence` DESC
LIMIT 100
But I don't know how to do this with Logs Explorer. "Log Analytics" seems to allow SQL like this, but it asks to be used only on non-production projects.
I also tried downloading the logs from Logs Explorer, but there is a limit of 10,000 log entries, which is nowhere near enough.
Is there a simple way to do this?
The bigger picture: I am trying to get my AdSense account reopened. So far I have failed. Maybe the evidence I provided, my Google Analytics data, was not strong enough. The field description on the form mentions IP addresses, but I don't see any IP addresses in Google Analytics...
Logs Explorer allows you to create some easy Logs Explorer queries for filtering, but you won't have any GROUP BY capability.
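For illustration, a filter like the following (a minimal sketch; the IP value is just a placeholder) narrows the results down to App Engine entries from one address, but there is no way to count or group the matches inside Logs Explorer:

resource.type="gae_app"
protoPayload.ip="203.0.113.7"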
To achieve something similar, you can use a sink:
Sinks control how Cloud Logging routes logs. Using sinks, you can route some or all of your logs to supported destinations. Some of the reasons that you might want to control how your logs are routed include the following:
- To store logs that are unlikely to be read but that must be retained for compliance purposes.
- To organize your logs in buckets in a format that is useful to you.
- To use big-data analysis tools on your logs.
- To stream your logs to other applications, other repositories, or third parties.
- Cloud Storage: JSON files stored in Cloud Storage buckets.
- Pub/Sub: JSON messages delivered to Pub/Sub topics. Supports third-party integrations, such as Splunk, with Logging.
- BigQuery: Tables created in BigQuery datasets.
- Another Cloud Logging bucket: Log entries held in Cloud Logging log buckets.
For your case, the best option is a BigQuery sink.
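As a rough sketch (assuming a BigQuery dataset named app_logs already exists in your project, and that you only care about App Engine request logs; the sink name, dataset and filter are placeholders), the sink could be created with gcloud like this:

gcloud logging sinks create suspicious-traffic-sink \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/app_logs \
  --log-filter='resource.type="gae_app"'

After creating the sink, remember to grant the writer service account that gcloud prints write access to the dataset, otherwise nothing gets exported.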
The documentation has a step-by-step guide on how to Create a Sink.
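Once log entries start arriving in BigQuery, your original query should work almost unchanged. A sketch, assuming the export creates date-sharded tables named after the App Engine request log (the exact table name and protoPayload column layout depend on the export schema, so check the dataset after the first entries land):

SELECT
  protoPayload.ip,
  COUNT(protoPayload.ip) AS ip_occurrence
FROM
  `PROJECT_ID.app_logs.appengine_googleapis_com_request_log_*` /* assumed table name - verify in your dataset */
WHERE
  protoPayload.ip NOT LIKE '66.249.77.%' /* ignore Google bots */
GROUP BY
  protoPayload.ip
ORDER BY
  ip_occurrence DESC
LIMIT 100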
Useful links: