Apache Ambari: Datanode installation failed while installing in existing cluster
I created a Hadoop cluster with Apache Ambari 2.1.0 and three datanodes. Now, when I try to add another datanode to the existing cluster, it throws this error:
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'hadoop_2_3_*'' returned 1. No Presto metadata available for base
Delta RPMs reduced 3.6 M of updates to 798 k (78% saved)
Here is the console log from the web UI:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 153, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 34, in install
    self.install_packages(env, params.exclude_packages)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 376, in install_packages
    Package(name)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 157, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 45, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'hadoop_2_3_*'' returned 1. No Presto metadata available for base
Delta RPMs reduced 3.6 M of updates to 798 k (78% saved)
Error downloading packages:
  hadoop_2_3_4_0_3485-yarn-proxyserver-2.7.1.2.3.4.0-3485.el6.x86_64: [Errno 256] No more mirrors to try.
It looks like there are two issues with yum and your repositories.
First, I see the message:
No Presto metadata available for base
Delta RPMs reduced 3.6 M of updates to 798 k (78% saved)
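For context, the "No Presto metadata" line comes from yum's delta-RPM support (the yum-presto plugin on RHEL/CentOS 6, which this host appears to be running, given the el6 package name). It is usually informational rather than fatal, but if you want to rule it out, you can disable the plugin. A minimal sketch, assuming the plugin's usual default config path (verify it exists on your host first):

    # Disable the yum-presto (delta RPM) plugin by flipping its
    # enabled flag; the path is the common default on RHEL/CentOS 6.
    sudo sed -i 's/^enabled=1/enabled=0/' /etc/yum/pluginconf.d/presto.conf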
To fix the first issue, try running the following command on the host you are trying to add as a datanode:
sudo yum clean all
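After cleaning, you can also rebuild the yum metadata cache up front, so that any metadata download problems surface now rather than midway through the Ambari install:

    # Re-download repo metadata for all enabled repositories.
    sudo yum makecache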
Then see whether this command runs successfully:
sudo yum -v install hadoop_2_3_*
If you get a (y/n) prompt asking whether you want to install, it was successful; choose the no option, then retry the add-datanode operation from Ambari. If you get errors or failures, look at the verbose output to troubleshoot further.
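For the second issue, the "[Errno 256] No more mirrors to try" error usually means the HDP repository's baseurl is unreachable from this host. A quick check, assuming Ambari wrote the repo definitions to HDP*.repo files under /etc/yum.repos.d/ (adjust the glob if your files are named differently):

    # Print each configured baseurl and test that it actually serves
    # repo metadata (repodata/repomd.xml exists at every yum baseurl).
    grep -H '^baseurl' /etc/yum.repos.d/HDP*.repo
    for url in $(grep -h '^baseurl' /etc/yum.repos.d/HDP*.repo | cut -d= -f2-); do
        curl -sSf -o /dev/null "$url/repodata/repomd.xml" && echo "OK: $url" || echo "FAIL: $url"
    done

If any URL fails, fix the network/proxy problem or correct the repo definition before retrying the operation from Ambari.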