Kafka consumer running on Mesos gets "Failed to add leader for partitions" error

dsi*_*mie 4 apache-kafka mesos apache-zookeeper

I am running a 6-broker Kafka cluster using the mesos/kafka library. I am able to add and start the brokers on 6 different machines and publish messages to the cluster with both the Python SimpleProducer and the kafka-console-producer.sh script.
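
For reference, the producer side that works looks roughly like this (a sketch against kafka-python's SimpleProducer API of that era; the broker address and message are placeholders, not copied from my actual script):

    # Sketch only: publish a test message with kafka-python's SimpleProducer.
    # The broker address is a placeholder; any reachable broker in the cluster works.
    from kafka import KafkaClient, SimpleProducer

    client = KafkaClient('192.168.1.199:9092')
    producer = SimpleProducer(client)
    producer.send_messages('test', b'hello from python')
    client.close()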

However, I cannot get a consumer to work. I am running the following consumer command:

bin/kafka-console-consumer.sh --zookeeper 192.168.1.199:2181 --topic test --from-beginning --consumer.config config/consumer.properties --delete-consumer-offsets
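The relevant part of config/consumer.properties looks roughly like this (a sketch; the additional ZooKeeper addresses are placeholders for the other nodes of the ensemble):

    group.id=my.group
    # All nodes of the ZooKeeper ensemble; the addresses after .199 are placeholders.
    zookeeper.connect=192.168.1.199:2181,192.168.1.200:2181,192.168.1.201:2181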

In the consumer.properties file I set group.id to my.group and zookeeper.connect to several nodes of the ZooKeeper ensemble. Running this consumer produces the following warning messages:

    [2015-09-24 16:01:06,609] WARN [my.group_my_host-1443106865779-b5a3a1e1-leader-finder-thread], Failed to add leader for partitions [test,4],[test,1],[test,5],[test,2],[test,0],[test,3]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
    java.nio.channels.ClosedChannelException
            at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
            at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
            at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
            at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
            at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:166)
            at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:60)
            at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:177)
            at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:172)
            at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
            at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
            at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
            at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:172)
            at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:87)
            at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:77)
            at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
            at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
            at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
            at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
            at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:77)
            at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
            at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
    {'some':2}
    [2015-09-24 16:20:02,362] WARN [my.group_my_host-1443108001180-fa0c93e4-leader-finder-thread], Failed to add leader for partitions [test,4],[test,1],[test,5],[test,2],[test,0],[test,3]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
    java.nio.channels.ClosedChannelException
            at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
            at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
            at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
            at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
            at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:166)
            at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:60)
            at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:177)
            at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:172)
            at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
            at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
            at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
            at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:172)
            at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:87)
            at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:77)
            at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
            at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
            at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
            at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
            at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:77)
            at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
            at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
    ...
    // Lots more of this
    ...
    Consumed 1 messages

I am not sure why it fails to add the leaders, since the leaders already appear to be registered in ZooKeeper. Apart from all these error messages, only a single message made it through to the consumer. The string {'some':2} is the message I sent from the console producer.

I found this error in the server.log on one of the Mesos slaves; I am not sure whether it is related:

[2015-09-24 17:09:41,926] ERROR Closing socket for /192.168.1.199 because of error (kafka.network.Processor)
java.io.IOException: Broken pipe
            at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
            at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
            at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
            at sun.nio.ch.IOUtil.write(IOUtil.java:65)
            at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
            at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
            at kafka.network.MultiSend.writeTo(Transmission.scala:101)
            at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
            at kafka.network.Processor.write(SocketServer.scala:472)
            at kafka.network.Processor.run(SocketServer.scala:342)
            at java.lang.Thread.run(Thread.java:745)

Any suggestions as to what might be going wrong with the consumer, or where I should look to troubleshoot?

ZooKeeper broker/partition state for one of the topic's partitions:

[zk: localhost:2181(CONNECTED) 166] get /brokers/topics/test/partitions/0/state
{"controller_epoch":1,"leader":0,"version":1,"leader_epoch":0,"isr":[0]}

OS: Ubuntu 14.04, Mesos: 0.23, Kafka: 2.10-0.8.2.1

Update: After some further testing with kafka-console-consumer.sh, the messages do seem to be getting through. The error messages are so constant that you simply cannot see all of the consumed messages on stdout. The Python KafkaConsumer, however, fails immediately with a FailedPayloadsError.
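
For completeness, the failing Python consumer is invoked roughly like this (a sketch against the kafka-python KafkaConsumer API of that time; the broker address and option names are assumptions rather than a copy of my script):

    # Sketch only: consume the 'test' topic with kafka-python's KafkaConsumer.
    # The broker address is a placeholder for one of the six brokers.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer('test',
                             group_id='my.group',
                             bootstrap_servers=['192.168.1.199:9092'])
    for message in consumer:
        print(message)  # fails almost immediately with FailedPayloadsError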

小智 5

I think you need to look at the value of the advertised.host.name property. I ran into this problem recently as well and fixed it with that property.
Make sure you set the correct IP address for each broker. Let me know if that does not work.
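
For example, each broker's server.properties (or however broker properties are overridden under mesos/kafka) would carry something along these lines; the address here is a placeholder and must be an IP that consumers can actually reach:

    # Placeholder values: advertise the externally reachable address of each broker,
    # not a host-internal name that clients cannot resolve.
    advertised.host.name=192.168.1.201
    advertised.port=9092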