DFS Used%: 100.00% Slave VMs down in Hadoop

My slave VMs went down, and I suspect it is because DFS Used is at 100%. Can you suggest a systematic way to troubleshoot this? Is it a firewall issue, a capacity issue, or something else, and how do I fix it?

ubuntu@anmol-vm1-new:~$  hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/12/13 22:25:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 845446217728 (787.38 GB)
Present Capacity: 797579996211 (742.80 GB)
DFS Remaining: 794296401920 (739.75 GB)
DFS Used: 3283594291 (3.06 GB)
DFS Used%: 0.41%
Under replicated blocks: 1564
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (4 total, 2 dead)

Live datanodes:
Name: 10.0.1.190:50010 (anmol-vm1-new)
Hostname: anmol-vm1-new
Decommission Status : Normal
Configured Capacity: 422723108864 (393.69 GB)
DFS Used: 1641142625 (1.53 GB)
Non DFS Used: 25955075743 (24.17 GB)
DFS Remaining: 395126890496 (367.99 GB)
DFS Used%: 0.39%
DFS Remaining%: 93.47%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:25:51 UTC 2015


Name: 10.0.1.193:50010 (anmol-vm4-new)
Hostname: anmol-vm4-new
Decommission Status : Normal
Configured Capacity: 422723108864 (393.69 GB)
DFS Used: 1642451666 (1.53 GB)
Non DFS Used: 21911145774 (20.41 GB)
DFS Remaining: 399169511424 (371.76 GB)
DFS Used%: 0.39%
DFS Remaining%: 94.43%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:25:51 UTC 2015


Dead datanodes:
Name: 10.0.1.191:50010 (anmol-vm2-new)
Hostname: anmol-vm2-new
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 0 (0 B)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 21:20:12 UTC 2015


Name: 10.0.1.192:50010 (anmol-vm3-new)
Hostname: anmol-vm3-new
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 0 (0 B)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:09:27 UTC 2015
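
Note that the "DFS Used%: 100.00%" on the two dead nodes is reported against a configured capacity of 0 B, so it reflects the datanodes being unreachable rather than full disks. Before (or alongside) cleaning up disk space, it is worth checking the dead nodes directly. A hedged diagnostic sketch, assuming Hadoop 2.x with HADOOP_HOME set on the cluster machines (the commands are guarded so the script is a no-op on a machine without Hadoop):

```shell
#!/bin/sh
# Diagnostic sketch for the dead datanodes (anmol-vm2-new, anmol-vm3-new).
# Assumes Hadoop 2.x; each step is skipped where the tool is unavailable.

if command -v hdfs >/dev/null 2>&1; then
    # On the namenode: list only the dead datanodes
    hdfs dfsadmin -report -dead
fi

if command -v jps >/dev/null 2>&1; then
    # On each dead VM: is the DataNode JVM still running?
    jps | grep -i datanode || echo "no DataNode process found"
fi

if [ -n "$HADOOP_HOME" ]; then
    # The tail of the DataNode log usually names the failure
    # (full disk, port already in use, namespace-ID mismatch, ...)
    tail -n 50 "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log
    # Restart it once the underlying problem (e.g. a full disk) is fixed
    "$HADOOP_HOME"/sbin/hadoop-daemon.sh start datanode
fi
```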

There is only one filesystem on the VM. Log in as root:

  1. Run df -h (one of the mount points will show ~100% usage; note that df has no -s flag).
  2. Run du -sh /* (it lists the size of each top-level directory; du -sh / alone prints only a single total).
  3. If any directory other than your namenode and datanode directories is taking too much space, you can start cleaning up there.
  4. You can also run hadoop fs -du -s -h /user/hadoop to see the usage of directories inside HDFS.
  5. Identify all unneeded directories and start cleaning up by running hadoop fs -rm -R /user/hadoop/raw_data (-rm deletes, -R makes it recursive; be careful with -R).
  6. Run hadoop fs -expunge to empty the trash immediately (sometimes it needs to be run multiple times).
  7. Run hadoop fs -du -s -h / (it gives the HDFS usage of the entire filesystem), or run hdfs dfsadmin -report again, to confirm that the storage has been reclaimed.
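
The steps above can be sketched as one script. This is a sketch, not a drop-in fix: /user/hadoop/raw_data is only the example path from step 5, so substitute the directories you actually identified; the HDFS steps are guarded so the local checks still run on a machine without Hadoop:

```shell
#!/bin/sh
# Cleanup workflow for a datanode whose disk is ~100% full.
# Run the local checks as root on the affected VM.

# 1. Which mount point is ~100% full?
df -h

# 2. Per-directory breakdown of the root filesystem
#    (du -sh / would print only a single total)
du -sh /* 2>/dev/null

if command -v hadoop >/dev/null 2>&1; then
    # 4. Usage of a directory inside HDFS
    hadoop fs -du -s -h /user/hadoop
    # 5. Recursively delete an unneeded directory -- double-check the path,
    #    /user/hadoop/raw_data is just the example from the steps above
    hadoop fs -rm -R /user/hadoop/raw_data
    # 6. Empty the trash so the blocks are actually freed
    hadoop fs -expunge
    # 7. Confirm the space was reclaimed
    hadoop fs -du -s -h /
    hdfs dfsadmin -report
fi
```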