Embedded Kafka: java.io.FileNotFoundException: /tmp/kafka-7785736914220873149/replication-offset-checkpoint.tmp

qas*_*smi 8 cassandra spring-boot spring-kafka

I am using KafkaEmbedded in my integration tests and I get a FileNotFoundException:

java.io.FileNotFoundException: /tmp/kafka-7785736914220873149/replication-offset-checkpoint.tmp 
at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_141]
at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_141]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_141]
at java.io.FileOutputStream.<init>(FileOutputStream.java:162) ~[na:1.8.0_141]
at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:43) ~[kafka_2.11-0.11.0.0.jar:na]
at kafka.server.checkpoints.OffsetCheckpointFile.write(OffsetCheckpointFile.scala:58) ~[kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:1118) [kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:1115) [kafka_2.11-0.11.0.0.jar:na]
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733) [scala-library-2.11.11.jar:na]
at scala.collection.immutable.Map$Map1.foreach(Map.scala:116) [scala-library-2.11.11.jar:na]
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) [scala-library-2.11.11.jar:na]
at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1115) [kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$1.apply$mcV$sp(ReplicaManager.scala:211) [kafka_2.11-0.11.0.0.jar:na]
at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) [kafka_2.11-0.11.0.0.jar:na]
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57) [kafka_2.11-0.11.0.0.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_141]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_141]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]

My tests pass successfully, but this error appears at the end of the build.

After hours of research, I found the following:

  • Kafka's TestUtils.tempDirectory method is used to create the temporary directory for the embedded Kafka broker. It also registers a shutdown hook that deletes this directory when the JVM exits.
  • When the unit tests finish executing, the JVM calls System.exit, which in turn runs all registered shutdown hooks.

If the Kafka broker is still running at the end of the unit tests, it tries to write/read data in the already-deleted directory and produces various FileNotFound exceptions.
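The race described above can be reproduced with plain JDK code, independent of Kafka (a minimal sketch; the directory and file names here merely mimic the ones from the stack trace):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.nio.file.Files;

public class CheckpointRaceDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for TestUtils.tempDirectory(): the broker's temp log dir
        File logDir = Files.createTempDirectory("kafka-demo-").toFile();
        File checkpoint = new File(logDir, "replication-offset-checkpoint.tmp");

        // Stand-in for the shutdown hook firing first: the directory is deleted
        Files.delete(logDir.toPath());

        // The still-running checkpoint scheduler now tries to write its file
        try (FileOutputStream out = new FileOutputStream(checkpoint)) {
            out.write(0);
        } catch (FileNotFoundException e) {
            // Same failure mode as the one seen at the end of the build
            System.out.println("FileNotFoundException: " + checkpoint.getName());
        }
    }
}
```

Opening a FileOutputStream on a path whose parent directory no longer exists fails with exactly this FileNotFoundException, which is why the error only shows up after the tests have already passed.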

My configuration class:

@Configuration
public class KafkaEmbeddedConfiguration {

    private final KafkaEmbedded kafkaEmbedded;

    public KafkaEmbeddedConfiguration() throws Exception {
        kafkaEmbedded = new KafkaEmbedded(1, true, "topic1");
        kafkaEmbedded.before();
    }

    @Bean
    public KafkaTemplate<String, Message> sender(ProtobufSerializer protobufSerializer,
            KafkaListenerEndpointRegistry registry) throws Exception {
        KafkaTemplate<String, Message> sender = KafkaTestUtils.newTemplate(kafkaEmbedded,
                new StringSerializer(), protobufSerializer);
        for (MessageListenerContainer listenerContainer : registry.getListenerContainers()) {
            ContainerTestUtils.waitForAssignment(listenerContainer,
                    kafkaEmbedded.getPartitionsPerTopic());
        }
        return sender;
    }
}

The test class:

@RunWith(SpringRunner.class)
public class DeviceEnergyKafkaListenerIT {
    ...
    @Autowired
    private KafkaTemplate<String, Message> sender;

    @Test
    public void test() {
        ...
        sender.send(topic, msg);
        sender.flush();
    }
}

Any ideas how to solve this?

Gar*_*ell 10

With a @ClassRule broker, add an @AfterClass method ...

@AfterClass
public static void tearDown() {
    embeddedKafka.getKafkaServers().forEach(b -> b.shutdown());
    embeddedKafka.getKafkaServers().forEach(b -> b.awaitShutdown());
}

For a @Rule or a bean, use an @After method.
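For the bean-based setup in the question, the @After variant would look roughly like this (a sketch, assuming the `kafkaEmbedded` bean is autowired into the test class; the method name is illustrative):

```java
@Autowired
private KafkaEmbedded kafkaEmbedded;

@After
public void tearDown() {
    // Stop the brokers before the JVM shutdown hook deletes their log dirs,
    // so the checkpoint scheduler never writes into a deleted directory
    kafkaEmbedded.getKafkaServers().forEach(b -> b.shutdown());
    kafkaEmbedded.getKafkaServers().forEach(b -> b.awaitShutdown());
}
```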

  • For those using the `@EmbeddedKafka` annotation (available since spring-kafka 2.0), you can add `controlledShutdown = true` to the annotation to get the same effect Gary describes. (3 upvotes)
  • `controlledShutdown` did not work for me; `@DirtiesContext` did (I put it on a method, but this depends on your case). (2 upvotes)
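The annotation-based variant mentioned in the first comment would look roughly like this (spring-kafka 2.0+; the topic name is taken from the question and otherwise illustrative):

```java
@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(controlledShutdown = true, topics = "topic1")
public class DeviceEnergyKafkaListenerIT {
    // The broker is shut down in a controlled way before
    // the shutdown hook removes its temp log directories
}
```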