How to send multiple log files to Amazon CloudWatch?
I have a monolith that is moving to AWS (using ECS/Fargate) and will later be broken up into microservices. It is based on an Amazon Linux 1 image and provides Apache, PHP, and all of my production website data. Currently it writes its logs to multiple files under /etc/httpd/logs and /var/www/vhosts/logs.
Supposedly I can do something with the log configuration and volumes in the ECS task definition, but I haven't found anything that explains the details of how to do it.
For containers, I would never recommend writing logs to files; it is best for the container to write its logs to stdout and stderr.
Another thing worth thinking about: if you move to Fargate, how would you get at log files anyway? So don't write logs to files, and don't treat a container like an instance machine.
The beauty of the awslogs driver is that it pushes the logs to CloudWatch Logs, and from CloudWatch it is also very easy to push them on to ELK.
Choose the awslogs log driver and design your entrypoint so that it writes logs to stdout and stderr inside the container. This is usually straightforward: when you run the process in the foreground, it automatically writes its logs to the container's stdout.
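For Apache specifically, here is a minimal sketch of that idea (the /proc/self/fd paths and the vhost_combined format name are illustrative assumptions, not your actual layout): point ErrorLog and the access log at the container's stderr/stdout and include the virtual host name (%v) in the log format, so several vhosts stay distinguishable inside a single stream.

# Sketch only: send Apache's error log to the container's stderr and a
# combined access log to its stdout, tagging each line with the vhost (%v).
ErrorLog /proc/self/fd/2
LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
CustomLog /proc/self/fd/1 vhost_combined

This is essentially the same trick the official httpd Docker image uses, and it removes the need for the per-vhost files under /var/www/vhosts/logs.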
Just add this block to your task definition and attach the CloudWatch role:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-wordpress",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-example"
}
}
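For context, here is roughly where that block sits in a Fargate task definition. This is only a sketch: the family, image, account ID, and role name are placeholders, and the execution role must be allowed to call logs:CreateLogStream and logs:PutLogEvents (the managed AmazonECSTaskExecutionRolePolicy covers that).

{
    "family": "wordpress",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "wordpress",
            "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/wordpress:latest",
            "essential": true,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "awslogs-wordpress",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "awslogs-example"
                }
            }
        }
    ]
}

The log group has to exist already, unless you also add "awslogs-create-group": "true" to the options and give the role logs:CreateLogGroup.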
Once the configuration is in place, you will see the logs in CloudWatch Logs.
Using the awslogs log driver
You can configure the containers in your tasks to send log information to CloudWatch Logs. If you are using the Fargate launch type for your tasks, this allows you to view the logs from your containers. If you are using the EC2 launch type, this enables you to view different logs from your containers in one convenient location, and it prevents your container logs from taking up disk space on your container instances. This topic helps you get started using the awslogs log driver in your task definitions.
Note
The type of information that is logged by the containers in your task depends mostly on their ENTRYPOINT command. By default, the logs that are captured show the command output that you would normally see in an interactive terminal if you ran the container locally, which are the STDOUT and STDERR I/O streams. The awslogs log driver simply passes these logs from Docker to CloudWatch Logs. For more information on how Docker logs are processed, including alternative ways to capture different file data or streams, see View logs for a container or service in the Docker documentation.
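To verify from the command line that events are actually arriving, something like this should work with the group and prefix from the example above (replace <task-id> with a real task ID; stream names follow the pattern prefix-name/container-name/ecs-task-id):

# List the streams in the group, then fetch events from one of them.
aws logs describe-log-streams --log-group-name awslogs-wordpress
aws logs get-log-events \
    --log-group-name awslogs-wordpress \
    --log-stream-name awslogs-example/wordpress/<task-id>

# With AWS CLI v2 you can also follow the whole group:
aws logs tail awslogs-wordpress --follow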