Unable to create/read files in HDFS deployed with AWS EBS in an EKS cluster

I have an EKS cluster with an EBS storage class/volume. I was able to deploy the HDFS namenode and datanode images (bde2020/hadoop-xxx) successfully using StatefulSets. When I try to put a file into HDFS from my machine using hdfs://:, the command reports success, but nothing is actually written to the datanodes.
In the namenode logs I see the errors below.
Could this be related to the EBS volumes? I cannot even upload/download files from the namenode GUI. Could it be because the datanode hostnames hdfs-data-X.hdfs-data.pulse.svc.cluster.local are not resolvable from my local machine?
Please help.
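For reference, the failing client-side put can be sketched as follows; the namenode hostname and RPC port here are assumptions based on the ClusterIP Service definition further down:

```shell
# Assumed namenode address; matches the hdfs-name ClusterIP Service (port 8020).
NN_HOST=hdfs-name.pulse.svc.cluster.local
NN_PORT=8020
TARGET="hdfs://${NN_HOST}:${NN_PORT}/vault/a.json"
echo "$TARGET"
# hdfs dfs -put a.json "$TARGET"   # appears to succeed, but no blocks land on the datanodes
```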

2020-05-12 17:38:51,360 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=10.8.29.112:9866, 10.8.29.176:9866, 10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:13,036 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:13,036 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:13,036 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:13,036 INFO hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=10.8.29.176:9866, 10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:34,607 INFO namenode.FSEditLog: Number of transactions: 11 Total time for transactions(ms): 23 Number of transactions batched in Syncs: 3 Number of syncs: 8 SyncTimes(ms): 23 
2020-05-12 17:39:35,146 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:35,146 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:35,146 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:35,147 INFO hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:57,319 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:57,319 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 3 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:57,319 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:57,320 INFO ipc.Server: IPC Server handler 5 on default port 8020, call Call#12 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.254.40.95:59328
java.io.IOException: File /vault/a.json could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)

My namenode web page shows the following:

Node                                                 Http Address                                                Last contact  Last Block Report  Capacity  Blocks  Block pool used  Version
hdfs-data-0.hdfs-data.pulse.svc.cluster.local:9866   http://hdfs-data-0.hdfs-data.pulse.svc.cluster.local:9864   1s            0m                 975.9 MB  0       24 KB (0%)       3.2.1
hdfs-data-1.hdfs-data.pulse.svc.cluster.local:9866   http://hdfs-data-1.hdfs-data.pulse.svc.cluster.local:9864   2s            0m                 975.9 MB  0       24 KB (0%)       3.2.1
hdfs-data-2.hdfs-data.pulse.svc.cluster.local:9866   http://hdfs-data-2.hdfs-data.pulse.svc.cluster.local:9864   1s            0m                 975.9 MB  0       24 KB (0%)       3.2.1

My deployment manifests are below. Namenode:

#clusterIP service of namenode
apiVersion: v1
kind: Service
metadata:
  name: hdfs-name
  namespace: pulse
  labels:
    component: hdfs-name
spec:
  ports:
    - port: 8020
      protocol: TCP
      name: nn-rpc
    - port: 9870
      protocol: TCP
      name: nn-web
  selector:
    component: hdfs-name
  type: ClusterIP
---
#namenode stateful deployment 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-name
  namespace: pulse
  labels:
    component: hdfs-name
spec:
  serviceName: hdfs-name
  replicas: 1
  selector:
    matchLabels:
      component: hdfs-name
  template:
    metadata:
      labels:
        component: hdfs-name
    spec:
      initContainers:
      - name: delete-lost-found
        image: busybox
        command: ["sh", "-c", "rm -rf /hadoop/dfs/name/lost+found"]
        volumeMounts:
        - name: hdfs-name-pv-claim
          mountPath: /hadoop/dfs/name
      containers:
      - name: hdfs-name
        image: bde2020/hadoop-namenode
        env:
        - name: CLUSTER_NAME
          value: hdfs-k8s
        - name: HDFS_CONF_dfs_permissions_enabled
          value: "false"
        ports:
        - containerPort: 8020
          name: nn-rpc
        - containerPort: 9870
          name: nn-web
        volumeMounts:
        - name: hdfs-name-pv-claim
          mountPath: /hadoop/dfs/name
          #subPath: data    # subPath is required because a lost+found folder created at the volume root prevents namenode -format from running
  volumeClaimTemplates:
  - metadata:
      name: hdfs-name-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ebs
      resources:
        requests:
          storage: 1Gi

Datanode:

#headless service of datanode
apiVersion: v1
kind: Service
metadata:
  name: hdfs-data
  namespace: pulse
  labels:
    component: hdfs-data
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    component: hdfs-data
  clusterIP: None
  type: ClusterIP
---
#datanode stateful deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-data
  namespace: pulse
  labels:
    component: hdfs-data
spec:
  serviceName: hdfs-data
  replicas: 3
  selector:
    matchLabels:
      component: hdfs-data
  template:
    metadata:
      labels:
        component: hdfs-data
    spec:
      containers:
      - name: hdfs-data
        image: bde2020/hadoop-datanode
        env:
        - name: CORE_CONF_fs_defaultFS
          value: hdfs://hdfs-name:8020
        volumeMounts:
        - name: hdfs-data-pv-claim
          mountPath: /hadoop/dfs/data 
  volumeClaimTemplates:
  - metadata:
      name: hdfs-data-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ebs
      resources:
        requests:
          storage: 1Gi

The problem seems to be that my client machine cannot reach the datanodes on their RPC port, although it can reach their HTTP port. After mapping the datanode pod names to their IPs in my hosts file and using webhdfs:// (instead of hdfs://), the upload succeeded.
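The working approach above (WebHDFS over HTTP, with the datanode hostnames mapped in the local hosts file) can be sketched with only the Python standard library. The namenode web port 9870 is an assumption matching the Service above; the two-step CREATE flow (namenode redirect, then upload to the datanode) is how the WebHDFS REST API works:

```python
# Sketch of writing a file via WebHDFS instead of the native HDFS RPC protocol.
# Assumption: datanode hostnames (hdfs-data-N.hdfs-data.pulse.svc.cluster.local)
# resolve locally, e.g. via /etc/hosts entries.
import urllib.error
import urllib.request


def webhdfs_create_url(namenode_host: str, path: str, port: int = 9870) -> str:
    """Build the first-step WebHDFS CREATE URL; the namenode answers it with a
    307 redirect pointing at the datanode that should receive the data."""
    return f"http://{namenode_host}:{port}/webhdfs/v1{path}?op=CREATE&overwrite=true"


def put_file(namenode_host: str, hdfs_path: str, data: bytes) -> None:
    url = webhdfs_create_url(namenode_host, hdfs_path)
    # Step 1: ask the namenode where to write (no body is sent yet).
    # urllib does not follow redirects for PUT, so read the Location header.
    req = urllib.request.Request(url, method="PUT")
    try:
        resp = urllib.request.urlopen(req)
        location = resp.geturl()
    except urllib.error.HTTPError as err:
        if err.code != 307:
            raise
        location = err.headers["Location"]
    # Step 2: send the file contents to the datanode named in the redirect.
    datanode_req = urllib.request.Request(location, data=data, method="PUT")
    urllib.request.urlopen(datanode_req)
```

Note that step 2 is exactly why the hosts-file mapping matters: the redirect target uses the datanode's advertised hostname, which must be resolvable from the client.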