Issue monitoring Hadoop response

I am using Ganglia to monitor Hadoop. gmond and gmetad are running fine, but when I telnet to the gmond port (8649), and likewise when I telnet to gmetad on its XML answer port, I get no Hadoop data. How can that be? Here is my gmond.conf:
cluster {
  name = "my cluster"
  owner = "Master"
  latlong = "unspecified"
  url = "unspecified"
}
host {
  location = localhost
}
udp_send_channel {
  #bind_hostname = yes
  #mcast_join = 239.2.11.71
  host = localhost
  port = 8649
  ttl = 1
}
udp_recv_channel {
  #mcast_join = 239.2.11.71
  port = 8649
  retry_bind = true
  # Size of the UDP buffer. If you are handling lots of metrics you really
  # should bump it up to e.g. 10MB or even higher.
  # buffer = 10485760
}
tcp_accept_channel {
  port = 8649
  # If you want to gzip XML output
  gzip_output = no
}
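Before digging into the Hadoop side, it can help to confirm what gmond's XML answer actually contains. A minimal sketch of such a check (the host/port match the config above; the `<METRIC NAME=…>` layout is the standard shape of gmond's XML report, but treat the filter substring as an assumption about how your Hadoop metrics are named):

```python
import socket
import xml.etree.ElementTree as ET

def fetch_gmond_xml(host="localhost", port=8649, timeout=5):
    """Read the full XML report that gmond serves on its tcp_accept_channel."""
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def list_metric_names(xml_text, substring=""):
    """Return the NAME of every <METRIC> element, optionally filtered
    by a case-insensitive substring (e.g. "dfs" or "mapred")."""
    root = ET.fromstring(xml_text)
    return [m.get("NAME") for m in root.iter("METRIC")
            if substring.lower() in (m.get("NAME") or "").lower()]

if __name__ == "__main__":
    report = fetch_gmond_xml()
    # If this prints only host metrics (load_one, cpu_user, ...) and nothing
    # Hadoop-related, gmond is fine and the problem is on the Hadoop side.
    print(list_metric_names(report, "dfs"))
```

If the report contains only the default host metrics, gmond itself is healthy and Hadoop is simply not emitting anything to it.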
I found the problem. It was related to the Hadoop metrics properties: I had configured Ganglia in hadoop-metrics.properties, but I actually had to set it up in the hadoop-metrics2.properties configuration file. Now Ganglia reports the correct metrics.
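For reference, a minimal sketch of what the metrics2 wiring might look like (the sink class `org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31` is Hadoop's standard sink for Ganglia 3.1; the host/port and the choice of daemons are assumptions matching the gmond.conf above):

```properties
# hadoop-metrics2.properties — route Hadoop metrics to Ganglia 3.1
# (use GangliaSink30 instead for Ganglia < 3.1)
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
namenode.sink.ganglia.servers=localhost:8649
datanode.sink.ganglia.servers=localhost:8649
```

The daemons must be restarted after editing this file before the metrics show up in gmond's XML answer.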