Tags: apache-kafka, kafka-consumer-api, apache-kafka-connect
I have just deployed my Kafka Connect application (I only use a Connect source for MQTT) on a two-instance cluster (2 containers on 2 machines), and it now seems to be stuck in a rebalancing loop: a little data came through at the start, but no new data appears. Here is what I get in the logs:
[2017-08-11 07:27:35,810] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-592bcc91-9d99-4c54-b707-3f52d0f8af50', leaderUrl='http://10.120.233.78:9040/', offset=2, connectorIds=[SourceConnector1], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1009)
[2017-08-11 07:27:35,810] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:679)
[2017-08-11 07:27:35,810] INFO Current config state offset 1 is behind group assignment 2, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:723)
[2017-08-11 07:27:36,310] INFO Finished reading to end of log and updated config snapshot, new config log offset: 1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:727)
[2017-08-11 07:27:36,310] INFO Current config state offset 1 does not match group assignment 2. Forcing rebalance. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:703)
[2017-08-11 07:27:36,311] INFO Rebalance started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1030)
[2017-08-11 07:27:36,311] INFO Wasn't unable to resume work after last rebalance, can skip stopping connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1056)
[2017-08-11 07:27:36,311] INFO (Re-)joining group source-connector11234 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:381)
[2017-08-11 07:27:36,315] INFO Successfully joined group source-connector11234 with generation 28 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2017-08-11 07:27:36,317] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-592bcc91-9d99-4c54-b707-3f52d0f8af50', leaderUrl='http://10.120.233.78:9040/', offset=2, connectorIds=[SourceConnector1], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1009)
[2017-08-11 07:27:36,317] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:679)
[2017-08-11 07:27:36,317] INFO Current config state offset 1 is behind group assignment 2, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:723)
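(For reference: the "config state offset" in these messages is the worker's position in Connect's internal config topic. A minimal distributed-mode worker config looks something like the sketch below. The topic names are assumptions, not taken from the post, though the group.id matches the one in the logs; note that Connect requires config.storage.topic to have exactly one partition, otherwise workers can disagree about config offsets much like the logs above show.)

# connect-distributed.properties -- minimal sketch; topic names are assumed
bootstrap.servers=kafka:9092
# Workers sharing this group.id form one Connect cluster (matches the logs)
group.id=source-connector11234
# Internal coordination topics; config.storage.topic needs exactly 1 partition
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

You can verify the partition count with the stock topic tool (Kafka 0.11-era syntax; the ZooKeeper address is an assumption): kafka-topics.sh --zookeeper zk:2181 --describe --topic connect-configs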
Answer (score 0):
I ran into a similar problem running two separate containers on a Mesos cluster. The eventual fix was an annoying one that doesn't seem to be documented anywhere:
Use an odd number of containers!
Some distributed systems rely on their workers to elect a leader. If there are two, they each vote for the other and get stuck in a loop. That appears to be what is happening here as well.
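To make the arithmetic behind that advice concrete: a strict majority of n voters is floor(n/2) + 1, and only an even n can split into two equal, deadlocked halves. A minimal sketch of that general quorum intuition (illustrative only; Kafka Connect itself delegates leader election to the broker-side group coordinator):

// Majority-quorum arithmetic: why odd worker counts avoid split votes.
public class QuorumMath {
    // Smallest number of votes that constitutes a strict majority of n.
    static int majority(int n) {
        return n / 2 + 1;
    }

    public static void main(String[] args) {
        for (int n = 2; n <= 5; n++) {
            boolean evenSplitPossible = n % 2 == 0; // two equal halves => no majority
            System.out.printf("workers=%d neededForMajority=%d evenSplitPossible=%b%n",
                    n, majority(n), evenSplitPossible);
        }
    }
}

With n=2, a 1-vs-1 split reaches no majority, which matches the looping behavior described; with n=3, any split yields a 2-vote majority.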