How to check received messages with EmbeddedKafka in a unit test

der*_*itz 6 junit spring apache-kafka spring-kafka

I created a Spring Boot application that sends messages to a Kafka topic. I am using spring-integration-kafka: a KafkaProducerMessageHandler<String, String> is subscribed to a channel (SubscribableChannel) and pushes every message it receives to a topic. The application itself works fine; I can see the messages arriving in Kafka via the console consumer (local Kafka).
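
A minimal sketch of what such a wiring could look like (the channel name producingChannel, the topic name and the bean layout here are just assumptions, not the actual configuration):

import org.springframework.context.annotation.Bean;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.messaging.MessageHandler;
import org.springframework.messaging.SubscribableChannel;

// Hypothetical configuration: every message sent to "producingChannel"
// is forwarded to the Kafka topic "myTopic" by the handler.
@Bean
public SubscribableChannel producingChannel() {
    return new DirectChannel();
}

@Bean
@ServiceActivator(inputChannel = "producingChannel")
public MessageHandler kafkaMessageHandler(KafkaTemplate<String, String> kafkaTemplate) {
    KafkaProducerMessageHandler<String, String> handler =
            new KafkaProducerMessageHandler<>(kafkaTemplate);
    handler.setTopicExpression(new LiteralExpression("myTopic"));
    return handler;
}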

I also created an integration test that uses KafkaEmbedded. There I check for the expected messages by subscribing to the channel in the test - that part works fine.
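
Checking the channel side in a test can be as simple as registering an additional subscriber (a sketch; producingChannel is the assumed channel name from above):

// Hypothetical test snippet: collect every message that passes through
// the channel and assert on the collected messages afterwards.
List<Message<?>> received = new CopyOnWriteArrayList<>();
producingChannel.subscribe(received::add);

// ... exercise the application code that sends messages ...

assertFalse(received.isEmpty());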

But I would like the test to also check the messages that end up on Kafka. Sadly, Kafka's JavaDoc is not the best. What I have tried so far:

@ClassRule
public static KafkaEmbedded kafkaEmbedded = new KafkaEmbedded(1, true, "myTopic");
//...
@Before
public void init() throws Exception {

    mockConsumer = new MockConsumer<>( OffsetResetStrategy.EARLIEST );
    kafkaEmbedded.consumeFromAnEmbeddedTopic( mockConsumer,"sikom" );

}
//...

@Test
public void endToEnd() throws Exception {
//  ...

    ConsumerRecords<String, String> records = mockConsumer.poll( 10000 );

    StreamSupport.stream(records.spliterator(), false).forEach( record -> log.debug( "record: " + record.value() ) );


}

The problem is that I don't see any records. I am not sure whether my KafkaEmbedded setup is correct, but the messages are received by the channel.

pvp*_*ran 7

This works for me. Give it a try:

@RunWith(SpringRunner.class)
@SpringBootTest
public class KafkaEmbeddedTest {

    private static String SENDER_TOPIC = "testTopic";

    @ClassRule
    // By default it creates two partitions.
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC); 

    @Test
    public void testSend() throws InterruptedException, ExecutionException {

        Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
        //If you wish to send it to partitions other than 0 and 1,
        //then you need to specify the number of partitions in the declaration

        KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);
        producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 0, "message00")).get();
        producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 1, "message01")).get();
        producer.send(new ProducerRecord<>(SENDER_TOPIC, 1, 0, "message10")).get();


        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("sampleRawConsumer", "false", embeddedKafka);
        // Make sure you set the offset as earliest, because by the 
        // time consumer starts, producer might have sent all messages
        consumerProps.put("auto.offset.reset", "earliest");

        final List<String> receivedMessages = Lists.newArrayList();
        final CountDownLatch latch = new CountDownLatch(3);
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        executorService.execute(() -> {
            KafkaConsumer<Integer, String> kafkaConsumer = new KafkaConsumer<>(consumerProps);
            kafkaConsumer.subscribe(Collections.singletonList(SENDER_TOPIC));
            try {
                while (true) {
                    ConsumerRecords<Integer, String> records = kafkaConsumer.poll(100);
                    records.iterator().forEachRemaining(record -> {
                        receivedMessages.add(record.value());
                        latch.countDown();
                    });
                }
            } finally {
                kafkaConsumer.close();
            }
        });

        latch.await(10, TimeUnit.SECONDS);
        assertTrue(receivedMessages.containsAll(Arrays.asList("message00", "message01", "message10")));
    }
}

I am using a CountDownLatch because Producer.send(..) is an asynchronous operation. So what I am doing here is waiting in an infinite loop, polling Kafka every 100 milliseconds for new records; if there are any, I add them to a List for the later assertion and count the latch down. In total I wait 10 seconds just to be sure.
You can also use a simple loop and exit after a few minutes, if you don't want the CountDownLatch and ExecutorService stuff; a sketch of that variant follows.
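
For completeness, a sketch of that simpler variant, assuming the same embeddedKafka rule and SENDER_TOPIC as above (consumeFromAnEmbeddedTopic declares throws Exception):

// Sketch without ExecutorService/CountDownLatch: consume on the test thread.
Map<String, Object> consumerProps =
        KafkaTestUtils.consumerProps("sampleRawConsumer", "false", embeddedKafka);
consumerProps.put("auto.offset.reset", "earliest");

try (KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps)) {
    // subscribes the consumer and waits until partitions are assigned
    embeddedKafka.consumeFromAnEmbeddedTopic(consumer, SENDER_TOPIC);

    List<String> values = new ArrayList<>();
    // a single poll is not guaranteed to return everything, so keep
    // polling until all three messages arrived or ~10 seconds passed
    long deadline = System.currentTimeMillis() + 10_000;
    while (values.size() < 3 && System.currentTimeMillis() < deadline) {
        ConsumerRecords<Integer, String> records = consumer.poll(100);
        records.forEach(record -> values.add(record.value()));
    }
    assertTrue(values.containsAll(Arrays.asList("message00", "message01", "message10")));
}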

  • The key here is `consumerProps.put("auto.offset.reset", "earliest");` otherwise the consumer connects only after the messages have been sent and starts consuming from the end of the topic. (3 upvotes)