Kafka: ZstdIOException: Cannot get ByteBuffer of size 131075 from the BufferPool


We have a 3-node Kafka cluster running the bitnami/kafka:3.6.0 image with the KRaft protocol.

We also have zstd compression configured on both the topics and the producers.
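
For context, a setup like this is typically configured as follows (the topic name, partition and replication counts are illustrative, not taken from our actual cluster):

```shell
# Topic-level zstd compression (illustrative topic name and counts)
kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --topic topic_name --partitions 3 --replication-factor 3 \
  --config compression.type=zstd
```

with the matching producer setting in the client configuration:

```properties
# Producer-side compression for outgoing batches
compression.type=zstd
```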

Suddenly, one of the nodes started spamming the following error in its logs:

[2024-02-03 10:01:12,699] ERROR [ReplicaManager broker=1] Error processing append operation on partition topic_name (kafka.server.ReplicaManager)
org.apache.kafka.common.KafkaException: com.github.luben.zstd.ZstdIOException: Cannot get ByteBuffer of size 131075 from the BufferPool
    at org.apache.kafka.common.compress.ZstdFactory.wrapForInput(ZstdFactory.java:70)
    at org.apache.kafka.common.record.CompressionType$5.wrapForInput(CompressionType.java:155)
    at org.apache.kafka.common.record.DefaultRecordBatch.recordInputStream(DefaultRecordBatch.java:273)
    at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:277)
    at org.apache.kafka.common.record.DefaultRecordBatch.skipKeyValueIterator(DefaultRecordBatch.java:352)
    at org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsetsCompressed(LogValidator.java:358)
    at org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsets(LogValidator.java:165)
    at kafka.log.UnifiedLog.$anonfun$append$2(UnifiedLog.scala:805)
    at kafka.log.UnifiedLog.append(UnifiedLog.scala:1845)
    at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
    at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
    at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
    at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1210)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
    at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
    at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
    at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
    at scala.collection.TraversableLike.map(TraversableLike.scala:286)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
    at scala.collection.AbstractTraversable.map(Traversable.scala:108)
    at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1198)
    at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:754)
    at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:686)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:180)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:149)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: com.github.luben.zstd.ZstdIOException: Cannot get ByteBuffer of size 131075 from the BufferPool
    at com.github.luben.zstd.ZstdInputStreamNoFinalizer.<init>(ZstdInputStreamNoFinalizer.java:67)
    at org.apache.kafka.common.compress.ZstdFactory.wrapForInput(ZstdFactory.java:68)
    ... 27 more

This caused many producer errors on the client side:

Unknown broker error
However, it seems all messages were retried against another broker; no data loss was noticed.
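
The retry behavior described above is consistent with recent Java client defaults; a sketch of the relevant producer settings (values shown are the client defaults, not anything specific to this cluster):

```properties
# Producer settings relevant to retrying failed sends (Java client defaults)
retries=2147483647          # effectively unlimited retries since client 2.1
delivery.timeout.ms=120000  # overall upper bound per record, including retries
acks=all                    # default since Kafka 3.0; waits for all in-sync replicas
```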

I couldn't find any mention of this kind of error on the internet.

Restarting the broker fixed the problem, at least temporarily.

But what if the problem comes back, and on 2 brokers at the same time?

apache-kafka zstd bitnami-kafka
1 Answer

I had the same problem. In my case, upgrading Kafka to 3.6.1 helped; it was most likely resolved by this bugfix.
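
For anyone running the same Bitnami image as the question, the upgrade amounts to bumping the image tag; a minimal sketch (service name and surrounding compose structure are illustrative):

```yaml
# docker-compose.yml fragment (illustrative): bump the Kafka image tag
services:
  kafka:
    image: bitnami/kafka:3.6.1   # was bitnami/kafka:3.6.0
```

A rolling restart of the brokers after the tag change would pick up the new version without cluster downtime.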
