Why doesn't Kafka keep working when one of the brokers fails?

Sum*_*nha 7 apache-kafka

I was under the impression that with two brokers running in sync, my Kafka setup should keep working even if one of the brokers fails.

To test this, I created a new topic named topicname. Its description is as follows:

Topic:topicname    PartitionCount:1 ReplicationFactor:1 Configs:
Topic: topicname    Partition: 0    Leader: 0   Replicas: 0 Isr: 0

Then I ran producer.sh and consumer.sh as follows:

bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9095 --sync --topic topicname

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topicname --from-beginning

As long as both brokers were running, the consumer received messages, but when I killed one of the broker instances with the kill command, the consumer stopped showing any new messages. Instead, it showed the following error:

WARN [ConsumerFetcherThread-console-consumer-57116_ip-<internalipvalue>-1438604886831-603de65b-0-0], Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 865; ClientId: console-consumer-57116; ReplicaId: -1; MaxWait: 100 ms; MinBytes: 1 bytes; RequestInfo: [topicname,0] -> PartitionFetchInfo(9,1048576). Possible cause: java.nio.channels.ClosedChannelException (kafka.consumer.ConsumerFetcherThread)
[2015-08-03 12:29:36,341] WARN Fetching topic metadata with correlation id 1 for topics [Set(topicname)] from broker [id:0,host:<hostname>,port:9092] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

use*_*864 0

For a topic with replication factor N, Kafka can tolerate up to N-1 server failures. For example, a replication factor of 3 lets you survive up to 2 server failures. Your topic description shows ReplicationFactor:1, so the partition lives on a single broker with no replicas; killing that broker takes the partition offline, and a replication factor of 1 tolerates zero failures.
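To make the two-broker setup survive a single failure, the topic needs to be created with a replication factor of 2. A minimal sketch, assuming the same two-broker cluster and ZooKeeper at localhost:2181 from the question (the topic name here is illustrative):

```shell
# Create a topic whose single partition is replicated to both brokers,
# so one broker can fail and the partition stays available.
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
    --topic topicname-replicated --replication-factor 2 --partitions 1

# Verify the assignment: Replicas and Isr should now list both broker ids.
bin/kafka-topics.sh --zookeeper localhost:2181 --describe \
    --topic topicname-replicated
```

Note that the replication factor is fixed at creation time; an existing topic with ReplicationFactor:1 is not upgraded automatically, so the test topic from the question would need to be recreated (or reassigned) with the higher factor.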