I have a Spring Cloud application that uses Spring Reactive Core to listen on two topics, each with 10 partitions.
In the consumer I simply read each message and print its topic, partition, and offset, and some messages are never read.
I have tried two approaches: auto-commit and manual acknowledgment.
Test setup: push 30K messages each into Topic1 and Topic2, then start the application. It reads only 59,999 records instead of 60,000.
The lag on all partitions of both topics is 0, which suggests that all the data was consumed.
@Bean
public Consumer<Flux<Message<String>>> receiver() {
    return sink -> sink
            .doOnNext(record -> {
                String topic = (String) record.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC);
                Long offset = (Long) record.getHeaders().get(KafkaHeaders.OFFSET);
                Integer partition = (Integer) record.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID);
                log.startstate(() -> format("Register Topic %s partition %d offset %d", topic, partition, offset));
            })
            .subscribe();
}
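The second approach I tried (manual acknowledgment) was along these lines. This is a sketch, assuming autoCommitOffset: false on the binding; the ACKNOWLEDGMENT header is set by the Kafka binder, and the null check is only defensive:

```java
// Manual-acknowledgment variant (sketch): commit each record's offset
// explicitly after logging it, instead of relying on auto-commit.
@Bean
public Consumer<Flux<Message<String>>> receiver() {
    return sink -> sink
            .doOnNext(record -> {
                String topic = (String) record.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC);
                Long offset = (Long) record.getHeaders().get(KafkaHeaders.OFFSET);
                Integer partition = (Integer) record.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID);
                log.startstate(() -> format("Register Topic %s partition %d offset %d", topic, partition, offset));
                // Acknowledge the record so the binder commits its offset.
                Acknowledgment ack = record.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
                if (ack != null) {
                    ack.acknowledge();
                }
            })
            .subscribe();
}
```

Both variants show the same gap in the logged offsets.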
My application.yml contains the following:
spring:
  cloud:
    stream:
      kafka:
        default:
          consumer:
            autoCommitOffset: true
        bindings:
          receiver-in-0:
            consumer:
              autoCommitOffset: true
        binder:
          brokers: localhost:9092
          autoAddPartitions: true
          minPartitionCount: 10
          auto-offset-reset: earliest
      bindings:
        receiver-in-0:
          binder: kafka
          destination: Topic1,Topic2
          content-type: text/plain;charset=UTF-8
          group: input-group-1
          max-attempts: 5
          back-off-initial-interval: 10000
          back-off-max-interval: 30000
        emitter-out-0:
          binder: kafka
          producer:
            partition-count: 2
            partition-key-extractor-name: EmitterPartitionKey
        erroremitter-out-0:
          binder: kafka
          destination: error
        error:
          binder: kafka
          destination: error

spring.cloud.stream.function.definition: receiver;emitter;erroremitter
My log file shows that the consumer never read Topic1, partition 2, offset 76031; it jumps from 76030 straight to 76032:
[STARTSTATE] 2020-05-22 11:40:01.033 [KafkaConsumerDestination{consumerDestinationName='Topic1', partitions=10, dlqName='null'}.container-0-C-1][52] CloudConsumer - Register Topic Topic1 partition 2 offset 76030
[STARTSTATE] 2020-05-22 11:40:01.034 [KafkaConsumerDestination{consumerDestinationName='Topic1', partitions=10, dlqName='null'}.container-0-C-1][52] CloudConsumer - Register Topic Topic2 partition 7 offset 86149
[STARTSTATE] 2020-05-22 11:40:01.034 [KafkaConsumerDestination{consumerDestinationName='Topic1', partitions=10, dlqName='null'}.container-0-C-1][52] CloudConsumer - Register Topic Topic2 partition 7 offset 86150
[STARTSTATE] 2020-05-22 11:40:01.034 [KafkaConsumerDestination{consumerDestinationName='Topic1', partitions=10, dlqName='null'}.container-0-C-1][52] CloudConsumer - Register Topic Topic1 partition 2 offset 76032
The relevant sections of the pom.xml:
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <java.version>11</java.version>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <spring-boot.version>2.1.7.RELEASE</spring-boot.version>
    <spring-cloud.version>Hoxton.SR4</spring-cloud.version>
    <skipTests>true</skipTests>
</properties>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-webflux</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-logging</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream-binder-kafka</artifactId>
        <version>3.0.4.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream-reactive</artifactId>
        <version>2.2.1.RELEASE</version>
    </dependency>
</dependencies>
I reproduced your problem and will open an issue against SCSt (not sure whether it is a stream problem or a Reactor problem).
I see no missing records when Reactor is not involved, i.e. with a plain
Consumer<Message<String>>
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/906
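For comparison, a minimal sketch of that non-reactive form, assuming the same binding name and the logger from the question:

```java
// Non-reactive consumer: the binder invokes this once per record,
// so no Reactor pipeline sits between the binder and the logging.
@Bean
public Consumer<Message<String>> receiver() {
    return record -> {
        String topic = (String) record.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC);
        Long offset = (Long) record.getHeaders().get(KafkaHeaders.OFFSET);
        Integer partition = (Integer) record.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID);
        log.startstate(() -> format("Register Topic %s partition %d offset %d", topic, partition, offset));
    };
}
```

With this signature every offset appears in the log with no gaps.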