I created a minimal JHipster application with Kafka 1, using the 5.0.1 generator: 3 commits.
I followed the tutorial here [2].
However, the application does not start correctly in my CentOS 7.5 environment.
Starting Kafka: docker-compose -f src/main/docker/kafka.yml up -d
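A quick way to confirm the containers actually came up before starting the app (service names depend on the kafka.yml that JHipster generated):
docker-compose -f src/main/docker/kafka.yml ps     # both kafka and zookeeper should be "Up"
docker-compose -f src/main/docker/kafka.yml logs   # broker should report it is started and registered with zookeeper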
Starting my application: ./mvnw
2018-06-25 10:56:40.883 WARN 15375 --- [ restartedMain] o.s.c.s.b.k.p.KafkaTopicProvisioner : The number of expected partitions was: 1, but 0 has been found instead.There will be 1 idle consumers
2018-06-25 10:56:40.886 ERROR 15375 --- [ restartedMain] o.s.cloud.stream.binding.BindingService : Failed to create consumer binding; retrying in 30 seconds
org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer:
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:326)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:77)
at org.springframework.cloud.stream.binder.AbstractBinder.bindConsumer(AbstractBinder.java:129)
at org.springframework.cloud.stream.binding.BindingService.doBindConsumer(BindingService.java:134)
at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:116)
at org.springframework.cloud.stream.binding.BindableProxyFactory.createAndBindInputs(BindableProxyFactory.java:234)
at org.springframework.cloud.stream.binding.InputBindingLifecycle.doStartWithBindable(InputBindingLifecycle.java:52)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:47)
at org.springframework.cloud.stream.binding.InputBindingLifecycle.start(InputBindingLifecycle.java:31)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:52)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:157)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:121)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:885)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:161)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:553)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:395)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:327)
at org.flfmitlab.jhipster5kafka.Jhipster5KafkaApp.main(Jhipster5KafkaApp.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: java.lang.IllegalArgumentException: A list of partitions must be provided
at org.springframework.util.Assert.isTrue(Assert.java:116)
at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createConsumerEndpoint(KafkaMessageChannelBinder.java:354)
at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createConsumerEndpoint(KafkaMessageChannelBinder.java:126)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:279)
... 29 common frames omitted
2018-06-25 10:56:41.285 INFO 15375 --- [ restartedMain] o.f.jhipster5kafka.Jhipster5KafkaApp : Started Jhipster5KafkaApp in 14.489 seconds (JVM running for 14.981)
2018-06-25 10:56:41.304 INFO 15375 --- [ restartedMain] o.f.jhipster5kafka.Jhipster5KafkaApp :
----------------------------------------------------------
Application 'jhipster5kafka' is running! Access URLs:
Local: http://localhost:8080
External: http://127.0.0.1:8080
Profile(s): [dev, swagger]
----------------------------------------------------------
2018-06-25 10:57:10.861 ERROR 15375 --- [ad | producer-3] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{84, 101, 115, 116, 32, 109, 101, 115, 115, 97, 103, 101, 32, 102, 114, 111, 109, 32, 74, 72, 105, 1...' to topic topic-jhipster:
org.apache.kafka.common.errors.TimeoutException: Expiring 31 record(s) for topic-jhipster-0: 30077 ms has passed since batch creation plus linger time
2018-06-25 10:57:10.864 ERROR 15375 --- [ad | producer-3] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{84, 101, 115, 116, 32, 109, 101, 115, 115, 97, 103, 101, 32, 102, 114, 111, 109, 32, 74, 72, 105, 1...' to topic topic-jhipster:
Some details of my configuration:
[INFO] +- org.springframework.cloud:spring-cloud-stream-binder-kafka:jar:2.0.0.RELEASE:compile
[INFO] | +- org.springframework.cloud:spring-cloud-stream-binder-kafka-core:jar:2.0.0.RELEASE:compile
[INFO] | | \- org.springframework.integration:spring-integration-kafka:jar:3.0.3.RELEASE:compile
[INFO] | +- org.apache.kafka:kafka-clients:jar:1.0.1:compile
[INFO] | | +- org.lz4:lz4-java:jar:1.4.1:compile
[INFO] | | \- org.xerial.snappy:snappy-java:jar:1.1.4:compile
[INFO] | \- org.springframework.kafka:spring-kafka:jar:2.1.7.RELEASE:compile
And the Kafka configuration:
spring:
  cloud:
    stream:
      bindings:
        messageChannel:
          destination: greetings
          content-type: application/json
        subscribableChannel:
          destination: greetings
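For completeness, the binder also needs to know where the broker lives; in the dev profile this is set through the standard Spring Cloud Stream Kafka binder property, roughly like this (a sketch assuming a local broker on the default port):
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092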
This looks like a case of Kafka/ZooKeeper not being shut down cleanly. If the Kafka process/container is killed or crashes instead of being stopped, it does not close its ZooKeeper connection properly.
Before stopping ZooKeeper, try stopping the Kafka container gracefully with docker stop <container_id>. Then try starting Kafka again with docker-compose, as sketched below.
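A minimal sketch of that sequence (the container ids are placeholders; look them up first):
docker ps                                             # find the kafka and zookeeper container ids
docker stop <kafka_container_id>                      # stop kafka first so it can close its zookeeper session
docker stop <zookeeper_container_id>                  # then stop zookeeper
docker-compose -f src/main/docker/kafka.yml up -d     # bring both back up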
This is a Kafka problem. If you are containerizing with Docker, stop Kafka, remove the Kafka container, and start a fresh one. Also check the IP address configured for Kafka.
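With the compose file from the question, that roughly amounts to the following (a sketch; the inspect command only reports the container's network IP, and any advertised-listener variable depends on the image used in kafka.yml):
docker-compose -f src/main/docker/kafka.yml down      # stop and remove the kafka/zookeeper containers
docker-compose -f src/main/docker/kafka.yml up -d     # recreate them from scratch
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <kafka_container_id>   # IP the broker container actually got
If kafka.yml sets an advertised host or listener (for example KAFKA_ADVERTISED_HOST_NAME or KAFKA_ADVERTISED_LISTENERS, depending on the image), make sure it matches the address the application connects to; otherwise producers and the binder may time out or fail to find the topic's partitions.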