Job tracker is not starting up
I installed CDH 4.6.0 with the help of this site, and I run start-all.sh to start the following services:
/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-hdfs-secondarynamenode start
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start
/bin/bash [to start a bash prompt after the services are started]
These commands are executed as part of a Dockerfile, via
CMD ["start-all.sh"]
which is supposed to start all the services.
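(For reference, a minimal sketch of a Dockerfile that wires the script in this way; the base image name my-cdh4-base is hypothetical and is assumed to already contain the CDH 4.6.0 packages from the guide:

# Sketch only: "my-cdh4-base" is a placeholder for an image with CDH 4.6.0 installed.
FROM my-cdh4-base
COPY start-all.sh /usr/local/bin/start-all.sh
RUN chmod +x /usr/local/bin/start-all.sh
CMD ["start-all.sh"]
)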
But when I run jps, I can see only:
jps
Namenode
Datanode
Secondary Namenode
Tasktracker
The JobTracker has not started. Its log is given below:
2015-01-23 07:26:46,706 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-01-23 07:26:46,735 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 8021
2015-01-23 07:26:46,735 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2015-01-23 07:26:47,725 INFO org.apache.hadoop.mapred.JobTracker: Creating the system directory
2015-01-23 07:26:47,750 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://localhost:8020/var/lib/hadoop-hdfs/cache/mapred/mapred/system) because of permissions.
2015-01-23 07:26:47,750 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
2015-01-23 07:26:47,751 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
But when I start it again manually from the bash prompt, it works. Why is that? Any suggestions?
Also, as can be seen from the log, the JobTracker comes up on port 8021, so why is it trying to operate on port 8020? Is that a problem? If so, how do I fix it?
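As for the two ports in the log: 8021 is the JobTracker's own RPC port, while the 8020 in the mapred.system.dir URI (hdfs://localhost:8020/...) is the NameNode's address, i.e. the HDFS filesystem the JobTracker writes its system directory to, so contacting 8020 is expected. A quick way to confirm which addresses the two daemons are configured with, assuming the standard CDH4 config location /etc/hadoop/conf:

grep -A1 'mapred.job.tracker' /etc/hadoop/conf/mapred-site.xml   # JobTracker address, e.g. localhost:8021
grep -A1 'fs.default.name' /etc/hadoop/conf/core-site.xml        # NameNode / HDFS URI, e.g. hdfs://localhost:8020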
It seems the mapred user does not have permission to write files/directories in the root directory of HDFS.
Switch to the hdfs user and grant the necessary permissions to the mapred user before starting the MapReduce services:
sudo -su hdfs
hadoop fs -chmod 777 /
/etc/init.d/hadoop-0.20-mapreduce-jobtracker stop; /etc/init.d/hadoop-0.20-mapreduce-jobtracker start
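In the Docker setup from the question, the same fix needs to run inside start-all.sh itself, before the JobTracker is started, since CMD ["start-all.sh"] is the only thing executed when the container comes up. A minimal sketch along those lines (the safe-mode wait is an extra precaution, not part of the commands above; adjust paths to your image):

#!/bin/bash
# Bring up HDFS first.
/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-hdfs-secondarynamenode start
# Wait until the NameNode has left safe mode before touching HDFS.
sudo -u hdfs hdfs dfsadmin -safemode wait
# Give the mapred user write access, as in the fix above.
sudo -u hdfs hadoop fs -chmod 777 /
# Now start the MRv1 daemons.
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start
# Keep a shell in the foreground so the container stays alive.
/bin/bash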