How do I fix "File could only be replicated to 0 nodes instead of minReplication (=1)."?

I thought I had fixed this issue, but it turns out it only went away because I was working with a smaller dataset.

A lot of people have asked this question before, and I have gone through every internet post I could find, but I still haven't made any progress.

What I'm trying to do: I have an external table browserdata in Hive that references about 1 GB of data. I'm trying to insert that data into a partitioned table partbrowserdata, which is defined like this:

CREATE EXTERNAL TABLE IF NOT EXISTS partbrowserdata (                                                                                                                                                              
    BidID string,                                                                                                                                                                                                  
    Timestamp_ string,                                                                                                                                                                                             
    iPinYouID string,                                                                                                                                                                                              
    UserAgent string,                                                                                                                                                                                              
    IP string,                                                                                                                                                                                                     
    RegionID int,                                                                                                                                                                                                  
    AdExchange int,                                                                                                                                                                                                
    Domain string,                                                                                                                                                                                                 
    URL string,                                                                                                                                                                                                    
    AnonymousURL string,                                                                                                                                                                                           
    AdSlotID string,                                                                                                                                                                                               
    AdSlotWidth int,                                                                                                                                                                                               
    AdSlotHeight int,                                                                                                                                                                                              
    AdSlotVisibility string,                                                                                                                                                                                       
    AdSlotFormat string,                                                                                                                                                                                           
    AdSlotFloorPrice decimal,                                                                                                                                                                                      
    CreativeID string,                                                                                                                                                                                             
    BiddingPrice decimal,                                                                                                                                                                                          
    AdvertiserID string,                                                                                                                                                                                           
    UserProfileIDs array<string>                                                                                                                                                                                   
)                                                                                                                                                                                                                  
PARTITIONED BY (CityID int)                                                                                                                                                                                        
ROW FORMAT DELIMITED                                                                                                                                                                                               
FIELDS TERMINATED BY '\t'                                                                                                                                                                                          
STORED AS TEXTFILE                                                                                                                                                                                                 
LOCATION '/user/maria_dev/data2';

Using this query:

insert into table partbrowserdata partition(cityid)
select BidID, Timestamp_, iPinYouID, UserAgent, IP, RegionID, AdExchange, Domain, URL, AnonymousURL, AdSlotID, AdSlotWidth, AdSlotHeight, AdSlotVisibility, AdSlotFormat, AdSlotFloorPrice, CreativeID, BiddingPrice, AdvertiserID, UserProfileIDs, CityID
from browserdata;
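
(A side note for anyone reproducing this: a fully dynamic-partition insert like the one above generally also needs dynamic partitioning enabled in the session. The statements below are a sketch of the usual settings, not copied from my exact setup:)

SET hive.exec.dynamic.partition=true;           -- allow dynamic partitions at all
SET hive.exec.dynamic.partition.mode=nonstrict; -- allow every partition column (here cityid) to be dynamic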

Every time, on every platform, whether hortonworks or cloudera, I get this message:

Caused by: 

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/maria_dev/data2/.hive-staging_hive_2019-02-06_18-58-39_333_7627883726303986643-1/_task_tmp.-ext-10000/cityid=219/_tmp.000000_3 could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1720)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3389)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:683)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:214)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:495)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2217)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)

        at org.apache.hadoop.ipc.Client.call(Client.java:1504)
        at org.apache.hadoop.ipc.Client.call(Client.java:1441)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:413)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1814)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1610)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:773)

What do I do? I can't figure out why this is happening. It does seem like a memory issue, though, because I can insert a few rows, but for some reason not all of them. Note that I have plenty of space on HDFS, so an extra gig of data is pennies on the dollar to it, so it's probably a RAM issue?

This is my dfs report output:

I have tried this on all execution engines: spark, tez, mr.
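
For reference, switching between them was just a matter of changing the execution engine per Hive session, roughly like this (a sketch; the property name is the standard one, but these aren't the literal sessions I ran):

SET hive.execution.engine=mr;     -- plain MapReduce
SET hive.execution.engine=tez;    -- Tez
SET hive.execution.engine=spark;  -- Hive on Spark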

Please don't suggest solutions that require formatting the namenode, because they don't work, and they aren't really solutions anyway.

Update:

After looking through the namenode's logs, I noticed this, in case it helps:

Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}

These logs say:

For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology

How do I do that?

I also noticed a similar unresolved post here:

HDP 2.2@Linux/CentOS@OracleVM (Hortonworks) fails on remote submission from Eclipse@Windows

Update 2:

I just tried partitioning it with spark and it worked! So this has to be a Hive bug...

Update 3:

Just tested this on MapR and it worked, but MapR doesn't use HDFS. This is definitely some kind of HDFS + Hive combination bug.

Proof:

I ended up reaching out to the cloudera forums and they answered my question in a matter of minutes: http://community.cloudera.com/t5/Storage-Random-Access-HDFS/Why-can-t-I-partition-a-1-gigabyte-dataset-into-300/m-p/86554#M3981 I tried what Harsh J suggested and it worked perfectly!

Here is what he said:

If you are dealing with unordered partitioning from a data source, you can end up creating a lot of files in parallel as the partitioning is attempted.

In HDFS, when a file (or more specifically, its block) is open, the DataNode performs a logical reservation of its target block size. So if your configured block size is 128 MiB, then every concurrently open block will deduct that value (logically) from the available remaining space the DataNode publishes to the NameNode.

This reservation is done to help manage space and guarantees of a full block write to a client, so that a client that's begun writing its file never runs into an out of space exception mid-way.

Note: When the file is closed, only the actual length is persisted, and the reservation calculation is adjusted to reflect the reality of used and available space. However, while the file block remains open, it's always considered to be holding a full block size.

The NameNode further will only select a DataNode for a write if it can guarantee full target block size. It will ignore any DataNodes it deems (based on its reported values and metrics) unfit for the requested write's parameters. Your error shows that the NameNode has stopped considering your only live DataNode when trying to allocate a new block request.

As an example, 70 GiB of available space will prove insufficient if there will be more than 560 concurrent, open files (70 GiB divided into 128 MiB block sizes). So the DataNode will 'appear full' at the point of ~560 open files, and will no longer serve as a valid target for further file requests.

It appears per your description of the insert that this is likely, as each of the 300 chunks of the dataset may still carry varied IDs, resulting in a lot of open files requested per parallel task, for insert into several different partitions.

You could 'hack' your way around this by reducing the request block size within the query (set dfs.blocksize to 8 MiB for ex.), influencing the reservation calculation. However, this may not be a good idea for larger datasets as you scale, since it will drive up the file:block count and increase memory costs for the NameNode.

A better way to approach this would be to perform a pre-partitioned insert (sort first by partition and then insert in a partitioned manner). Hive for example provides this as an option: hive.optimize.sort.dynamic.partition, and if you use plain Spark or MapReduce then their default strategy of partitioning does exactly this.
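
To make the 'hack' option above concrete in HiveQL, it would look roughly like this (my own sketch, not part of Harsh J's reply; 8 MiB = 8388608 bytes, and whether dfs.blocksize can be overridden per session depends on your cluster's configuration):

SET dfs.blocksize=8388608;  -- request 8 MiB blocks so each concurrently open file reserves far less space
-- then run the same insert into table partbrowserdata partition(cityid) ... as before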

So, at the end of the day I did set hive.optimize.sort.dynamic.partition=true; and everything started working. There was one other thing I did, though.

Here's something from a post I made earlier while investigating this issue: I ran into a problem where Hive couldn't partition my dataset, because hive.exec.max.dynamic.partitions was set to 100, so I googled the issue and somewhere on the hortonworks forums I saw an answer saying I should just do this:

SET hive.exec.max.dynamic.partitions=100000; 
SET hive.exec.max.dynamic.partitions.pernode=100000;

That turned out to be another problem: Hive may try to open as many concurrent writers as you allow with hive.exec.max.dynamic.partitions, so my insert query didn't start working until I decreased these values to 500.
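
For completeness, this is roughly what the final working session looked like, pieced together from the steps above (a sketch, not a verbatim copy of what I ran):

SET hive.optimize.sort.dynamic.partition=true;    -- sort by the partition column first, so far fewer files are open at once (Harsh J's suggestion)
SET hive.exec.max.dynamic.partitions=500;         -- lowered back down from 100000
SET hive.exec.max.dynamic.partitions.pernode=500; -- lowered back down from 100000

insert into table partbrowserdata partition(cityid)
select BidID, Timestamp_, iPinYouID, UserAgent, IP, RegionID, AdExchange, Domain, URL, AnonymousURL, AdSlotID, AdSlotWidth, AdSlotHeight, AdSlotVisibility, AdSlotFormat, AdSlotFloorPrice, CreativeID, BiddingPrice, AdvertiserID, UserProfileIDs, CityID
from browserdata;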