Monitoring a Hadoop multi-node cluster with Ganglia
I want to monitor a Hadoop multi-node cluster (Hadoop version 0.20.2) with Ganglia. My Hadoop is running properly. I installed Ganglia after reading the following blog posts:
http://hakunamapdata.com/ganglia-configuration-for-a-small-hadoop-cluster-and-some-troubleshooting/
http://hokamblogs.blogspot.in/2013/06/ganglia-overview-and-installation-on.html
I have also studied Monitoring with Ganglia (Appendix B, Ganglia and Hadoop/HBase).
I have modified only the following lines in **hadoop-metrics.properties** (the same on all Hadoop nodes):
// Configuration of the "dfs" context for ganglia
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=192.168.1.182:8649
// Configuration of the "mapred" context for ganglia
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
mapred.period=10
mapred.servers=192.168.1.182:8649:8649
// Configuration of the "jvm" context for ganglia
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.period=10
jvm.servers=192.168.1.182:8649
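As a quick sanity check of this file, something like the following sketch can catch the usual slips (a missing key, or a malformed host:port in a servers entry). The script and its name are hypothetical, not part of Hadoop or Ganglia:

# check_metrics_props.py -- hypothetical sanity check for hadoop-metrics.properties
import re
import sys

def check(path):
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    ok = True
    for ctx in sorted({key.split(".")[0] for key in props}):
        # every context needs a class, a period, and a servers list
        for suffix in ("class", "period", "servers"):
            if ctx + "." + suffix not in props:
                print(ctx + ": missing " + ctx + "." + suffix)
                ok = False
        for server in props.get(ctx + ".servers", "").split(","):
            if server and not re.fullmatch(r"[\w.\-]+:\d+", server.strip()):
                print(ctx + ": malformed server entry " + repr(server))
                ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if check(sys.argv[1]) else 1)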
**gmetad.conf** (only on the Hadoop master node)
data_source "Hadoop-slaves" 5 192.168.1.182:8649
RRAs "RRA:AVERAGE:0.5:1:302400" //Because i want to analyse one week data.
**gmond.conf** (on all Hadoop slave nodes and the Hadoop master)
globals {
daemonize = yes
setuid = yes
user = ganglia
debug_level = 0
max_udp_msg_len = 1472
mute = no
deaf = no
allow_extra_data = yes
host_dmax = 0 /*secs */
cleanup_threshold = 300 /*secs */
gexec = no
send_metadata_interval = 0
}
cluster {
name = "Hadoop-slaves"
owner = "Sandeep Priyank"
latlong = "unspecified"
url = "unspecified"
}
/* The host section describes attributes of the host, like the location */
host {
location = "CASL"
}
/* Feel free to specify as many udp_send_channels as you like. Gmond
used to only support having a single channel */
udp_send_channel {
host = 192.168.1.182
port = 8649
ttl = 1
}
/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
port = 8649
}
/* You can specify as many tcp_accept_channels as you like to share
an xml description of the state of the cluster */
tcp_accept_channel {
port = 8649
}
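Since the tcp_accept_channel serves the whole cluster state as XML, a short script can confirm that gmond is answering and show which metric names it currently holds; if the Hadoop contexts were arriving, names starting with dfs., mapred., or jvm. would show up. This is a minimal sketch using the host and port from the configs above:

# dump_gmond_state.py -- minimal sketch: read gmond's XML state over TCP
import socket
import xml.etree.ElementTree as ET

HOST, PORT = "192.168.1.182", 8649   # the tcp_accept_channel above

chunks = []
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    while True:
        data = sock.recv(8192)
        if not data:
            break                    # gmond closes after sending the XML
        chunks.append(data)

root = ET.fromstring(b"".join(chunks))
for host in root.iter("HOST"):
    names = sorted({m.get("NAME") for m in host.iter("METRIC")})
    hadoop = [n for n in names if n.split(".")[0] in ("dfs", "mapred", "jvm")]
    print(host.get("NAME"), "-", len(names), "metrics,", len(hadoop), "Hadoop metrics")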
Right now Ganglia only shows the system metrics (memory, disk, etc.) for all nodes, but it does not show the Hadoop metrics (jvm, mapred, and so on) on the web interface. How can I fix this?
I do use Ganglia with Hadoop, and yes, I see plenty of Hadoop metrics in Ganglia (containers, map tasks, vmem). In fact, Hadoop reports several hundred metrics of its own to Ganglia. The hokamblogs post was enough for this.
I edited hadoop-metrics2.properties on the master node with the following content:
namenode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
namenode.sink.ganglia.period=10
namenode.sink.ganglia.servers=gmetad_hostname_or_ip:8649
resourcemanager.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
resourcemanager.sink.ganglia.period=10
resourcemanager.sink.ganglia.servers=gmetad_hostname_or_ip:8649
I also edited the same file on the slaves:
datanode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
datanode.sink.ganglia.period=10
datanode.sink.ganglia.servers=gmetad_hostname_or_ip:8649
nodemanager.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
nodemanager.sink.ganglia.period=10
nodemanager.sink.ganglia.servers=gmetad_hostname_or_ip:8649
Remember to restart Hadoop and Ganglia after changing the files.
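One way to confirm the sinks are really arriving end to end is to look for the Hadoop metric RRD files a minute or two after the restart. A sketch, assuming the common default rrd_rootdir of /var/lib/ganglia/rrds (check your gmetad.conf if yours differs):

# find_hadoop_rrds.py -- sketch: list Hadoop metric RRDs under gmetad's store
import pathlib

RRD_ROOT = pathlib.Path("/var/lib/ganglia/rrds")   # assumed default rrd_rootdir
PREFIXES = ("dfs.", "jvm.", "mapred.", "yarn.", "rpc.")

for rrd in sorted(RRD_ROOT.rglob("*.rrd")):
    if rrd.name.startswith(PREFIXES):
        print(rrd.relative_to(RRD_ROOT))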
I hope this helps.
Thanks everyone. If you are using an older version of Hadoop, take the following files (from a newer Hadoop version):
GangliaContext31.java
GangliaContext.java
and put them under hadoop/src/core/org/apache/hadoop/metrics/ganglia in your source tree.
Compile Hadoop with ant (setting the appropriate proxy while compiling, if you need one). If you get errors such as missing function definitions, copy the missing definitions (from the newer version) into the right Java files and compile Hadoop again. It will work.
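After the rebuild, it is worth checking that the backported classes actually ended up in the jar. A sketch (the jar path depends on where your ant build puts hadoop-core):

# check_ganglia_classes.py -- sketch: verify the GangliaContext classes in the jar
import sys
import zipfile

jar = sys.argv[1]   # e.g. build/hadoop-0.20.2-core.jar, depending on your build layout
with zipfile.ZipFile(jar) as zf:
    hits = [n for n in zf.namelist() if "metrics/ganglia/GangliaContext" in n]

for name in hits:
    print(name)
if not any("GangliaContext31" in n for n in hits):
    print("GangliaContext31 is missing -- the backport did not make it in")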