Kafka Schema Registry error: Failed to write Noop record to kafka store

eug*_*ned 5 apache-kafka

I am trying to start the Kafka Schema Registry but get the following error: Failed to write Noop record to kafka store. The stack trace is below. I checked the connections to ZooKeeper and the Kafka brokers, and everything is fine; I can send messages to Kafka. I tried deleting the _schemas topic and even reinstalling Kafka, but the problem persists. Everything worked fine yesterday, but today, after restarting my Vagrant box, this problem appeared. Is there anything I can do? Thanks.

[2015-11-19 19:12:25,904] INFO SchemaRegistryConfig values: 
master.eligibility = true
port = 8081
kafkastore.timeout.ms = 500
kafkastore.init.timeout.ms = 60000
debug = false
kafkastore.zk.session.timeout.ms = 30000
request.logger.name = io.confluent.rest-utils.requests
metrics.sample.window.ms = 30000
schema.registry.zk.namespace = schema_registry
kafkastore.topic = _schemas
avro.compatibility.level = none
shutdown.graceful.ms = 1000
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
metrics.jmx.prefix = kafka.schema.registry
host.name = 12bac2a9529f
metric.reporters = []
kafkastore.commit.interval.ms = -1
kafkastore.connection.url = master.mesos:2181
metrics.num.samples = 2
response.mediatype.default = application/vnd.schemaregistry.v1+json
kafkastore.topic.replication.factor = 3
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)

[2015-11-19 19:12:26,535] INFO Initialized the consumer offset to -1        (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2015-11-19 19:12:27,167] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic.   (io.confluent.kafka.schemaregistry.storage.KafkaStore:172)
[2015-11-19 19:12:27,262] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-11-19 19:13:27,350] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at   io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:104)
at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:363)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:220)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
... 5 more
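For reference, the connectivity checks and topic deletion described above can be done with ZooKeeper's four-letter commands and Kafka's bundled CLI. This is a sketch only; the broker host, port, and test topic name are placeholders:

# ZooKeeper health check: should answer "imok"
echo ruok | nc master.mesos 2181

# Verify a message can be produced to a broker (host/port are placeholders)
echo hello | bin/kafka-console-producer.sh --broker-list broker1:9092 --topic test

# Delete the registry's backing topic (brokers need delete.topic.enable=true)
bin/kafka-topics.sh --zookeeper master.mesos:2181 --delete --topic _schemas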

小智 6

The error message is misleading, as other developers have noted in other posts. I recommend the following.

1) Make sure ZooKeeper is running (check the log files and that the process is alive).

2) Make sure the individual nodes in the Kafka cluster can reach each other (telnet to each host and port).

3) If 1 and 2 check out, I do not recommend creating another topic (such as the _schema2 suggested by some people in other posts) and updating kafkastore.topic in the schema registry config file to point at it.
Instead: 3.1) stop the processes (ZooKeeper, the Kafka servers); 3.2) clean out the data in the ZooKeeper data directory; 3.3) restart ZooKeeper and the Kafka servers, and finally restart the schema registry service (it should work!). See the sketch after this list.
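A minimal sketch of steps 3.1-3.3, assuming a Confluent-platform layout and a ZooKeeper dataDir of /var/lib/zookeeper (both are assumptions; adjust paths to your installation):

# 3.1) stop the services
bin/schema-registry-stop
bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh

# 3.2) wipe the ZooKeeper data directory (dataDir from zookeeper.properties;
#      note this erases all ZooKeeper state, including Kafka's broker metadata)
rm -rf /var/lib/zookeeper/*

# 3.3) restart in order, schema registry last
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &
bin/schema-registry-start etc/schema-registry/schema-registry.properties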

PS: If you do try to create another topic instead, you may well get stuck later when you try to consume data from the Kafka topic. (It happened to me and cost me several hours to figure out!)


32c*_*upo 0

I ran into the same error. The problem was that I expected Kafka to use a /kafka namespace (chroot) in ZooKeeper, so I set this in schema-registry.properties:

kafkastore.connection.url=localhost:2181/kafka

But in Kafka's server.properties I had not set it at all; the config contained

zookeeper.connect=localhost:2181

So I just added the ZooKeeper namespace to this property and restarted Kafka:

zookeeper.connect=localhost:2181/kafka

Maybe your problem is the reverse: your schema registry expects the root '/' namespace, but your Kafka config defines something else. Can you post your Kafka config?
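In other words, the chroot suffix on the registry's kafkastore.connection.url must match the one on the broker's zookeeper.connect. A minimal sketch, with illustrative hosts, where both sides agree on the /kafka chroot:

# schema-registry.properties
kafkastore.connection.url=localhost:2181/kafka

# server.properties
zookeeper.connect=localhost:2181/kafka

If they disagree, the registry looks for brokers under a ZooKeeper path where none are registered, and initialization times out with the Noop-record error shown in the question.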

Alternatively, you can use zkCli.sh to find out where in ZooKeeper Kafka stores its topic information.

bin/zkCli.sh -server localhost:2181
Welcome to ZooKeeper!
ls /kafka
[cluster, controller, controller_epoch, brokers, admin, isr_change_notification, consumers, latest_producer_id_block, config]
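If the broker is configured with a chroot, the topic metadata lives under it, so you can confirm the namespace matches what the registry expects (session output is illustrative):

ls /kafka/brokers/topics
[_schemas, test]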