Apache Kafka error while appending records to topic
I am trying to ingest a 10-million-row CSV file (about 600 MB) through the Connect API. Connect starts consuming and completes about 3.7 million records, after which I get the following error.
[2018-11-01 07:28:49,889] ERROR Error while appending records to topic-test-0 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
java.io.IOException: No space left on device
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)
at org.apache.kafka.common.record.MemoryRecords.writeFullyTo(MemoryRecords.java:95)
at org.apache.kafka.common.record.FileRecords.append(FileRecords.java:151)
at kafka.log.LogSegment.append(LogSegment.scala:138)
at kafka.log.Log.$anonfun$append(Log.scala:868)
at kafka.log.Log.maybeHandleIOException(Log.scala:1837)
at kafka.log.Log.append(Log.scala:752)
at kafka.log.Log.appendAsLeader(Log.scala:722)
at kafka.cluster.Partition.$anonfun$appendRecordsToLeader(Partition.scala:634)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:622)
at kafka.server.ReplicaManager.$anonfun$appendToLocalLog(ReplicaManager.scala:745)
at scala.collection.TraversableLike.$anonfun$map(TraversableLike.scala:234)
at scala.collection.mutable.HashMap.$anonfun$foreach(HashMap.scala:138)
at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:236)
at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:229)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:138)
at scala.collection.TraversableLike.map(TraversableLike.scala:234)
at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:733)
at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:472)
at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:489)
at kafka.server.KafkaApis.handle(KafkaApis.scala:106)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
[2018-11-01 07:28:49,893] INFO [ReplicaManager broker=0] Stopping serving replicas in dir /tmp/kafka-logs (kafka.server.ReplicaManager)
[2018-11-01 07:28:49,897] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,topic-test-0,__consumer_offsets-25,__consumer_offsets
I have a single topic named topic-test.
Machine specs:
- OS: CentOS 7
- RAM: 16 GB
- HDD: 80 GB
I have seen some blogs mention the log.dirs setting in server.properties, but it is not clear to me what value it needs. Do I also have to create the partitions myself? I have not done so, assuming it is all one data file.
Error while appending records to topic-test-0 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
java.io.IOException: No space left on device

This happens when you push a huge file or stream into a Kafka topic and the log directory fills up. Go to the default log directory, /tmp/kafka-logs, and check the available space:
[root@ENT-CL-015243 kafka-logs]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_rhel6u4x64-lv_root 61G 8.4G 49G 15% /
tmpfs 7.7G 0 7.7G 0% /dev/shm
/dev/sda1 485M 37M 423M 9% /boot
/dev/mapper/vg_rhel6u4x64-lv_home 2.0G 68M 1.9G 4% /home
/dev/mapper/vg_rhel6u4x64-lv_tmp 4.0G 315M 3.5G 9% /tmp
/dev/mapper/vg_rhel6u4x64-lv_var 7.9G 252M 7.3G 4% /var
As you can see, in my case only 3.5 GB of /tmp space was available, which is why I was facing this problem. I created a /klogs directory at the root of the filesystem and changed log.dirs=/klogs/kafka-logs in server.properties.
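For reference, a minimal sketch of that fix as shell steps. It assumes Kafka is run from its installation directory with the bundled scripts; the /klogs path is just the example above, and restarting via the bundled scripts rather than a service manager is an assumption about your setup.

# create a log directory on a volume with enough free space
mkdir -p /klogs/kafka-logs

# in config/server.properties, point the broker at the new directory
# (multiple directories can be given, comma-separated)
log.dirs=/klogs/kafka-logs

# restart the broker so it picks up the new log.dirs
bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties

If you need to keep the data already written, stop the broker first and copy the contents of /tmp/kafka-logs into the new directory before restarting; otherwise the broker simply starts with an empty log directory.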