Kafka Consumer not receiving messages

not*_*x0x 4 kafka-consumer-api

I am new to Kafka. I have read many instructions on the Internet for writing a Kafka Producer and a Kafka Consumer. I finished the former successfully and it can send messages to the Kafka cluster, but I could not get the latter to work. Please help me solve this problem. My issue looks like some posts on StackOverflow, but I want to describe it more clearly. I run Kafka and Zookeeper on an Ubuntu server inside VirtualBox, using the simplest (almost default) configuration: one Kafka broker and one ZooKeeper instance.

1. When I use Kafka's command-line tools as both producer and consumer, for example:

* Case 1: It works. I can see the message "Hello, World" on the screen.

$~/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic helloKafka --partitions 1 --replication-factor 1
$echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null
$~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TutorialTopic --from-beginning

2. When I use my Kafka Producer program and the command-line consumer, for example:

* Case 2: It works. I can see the messages sent from the Producer on the screen.

$~/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic helloKafka --partitions 1 --replication-factor 1
$~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TutorialTopic --from-beginning
$java -cp target/Kafka_Producer_Program-0.0.1-SNAPSHOT.jar AddLab_Producer

3. When I use my Producer and Consumer programs, for example:

* Case 3: Only the Producer works perfectly. The Consumer runs but does not show any messages.

$~/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic helloKafka --partitions 1 --replication-factor 1
$java -cp target/Kafka_Producer_Program-0.0.1-SNAPSHOT.jar AddLab_Producer
$java -cp target/Kafka_Consumer_Program-0.0.1-SNAPSHOT.jar AddLab_Consumer

Here is the code of my Producer and Consumer. Actually, I copied it from some Kafka tutorial websites.

* Producer program

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
public class AddLab_Producer {
    public static void main(String args[]) throws InterruptedException, ExecutionException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

        boolean sync = false;
        String topic = args[0];
        String key = "mykey";

        for (int i = 1; i <= 3; i++) {
            String value = args[1] + " " + i;
            ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(topic, value);
            if (sync) {
                producer.send(producerRecord).get();
            } else {
                producer.send(producerRecord);
            }
        }
        producer.close();
    }
}

* Consumer program

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AddLab_Consumer {

    public static class KafkaPartitionConsumer implements Runnable {

        private int tnum ;
        private KafkaStream kfs ;

        public KafkaPartitionConsumer(int id, KafkaStream ks) {
            tnum = id ;
            kfs = ks ;
        }   
        public void run() {
            // TODO Auto-generated method stub
            System.out.println("This is thread " + tnum) ;

            ConsumerIterator<byte[], byte[]> it = kfs.iterator();
            int i = 1;
            while (it.hasNext()) {
                System.out.println(tnum + " " + i + ": " + new String(it.next().message()));
                ++i;
            }
        }
    }

    public static class MultiKafka {    
        public void run() {
        }   
    }

    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "mygroupid2");
        props.put("zookeeper.session.timeout.ms", "413");
        props.put("zookeeper.sync.time.ms", "203");
        props.put("auto.commit.interval.ms", "1000");

        ConsumerConfig cf = new ConsumerConfig(props) ;    
        ConsumerConnector consumer = Consumer.createJavaConsumerConnector(cf) ;      
        String topic = "mytopic" ;     
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

        ExecutorService executor = Executors.newFixedThreadPool(1);

        int threadnum = 0 ;     
        for(KafkaStream<byte[],byte[]> stream  : streams) { 
            executor.execute(new KafkaPartitionConsumer(threadnum,stream));
            ++threadnum ;
        }
    }
}

* My POM.xml file

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>HelloJava</groupId>
    <artifactId>HelloJava</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.10</artifactId>
            <version>0.9.0.0</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src</sourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>1.4</version>
                <configuration>
                    <createDependencyReducedPom>true</createDependencyReducedPom>
                    <filters>
                        <filter>
                            <artifact>*:*</artifact>
                            <excludes>
                                <exclude>META-INF/*.SF</exclude>
                                <exclude>META-INF/*.DSA</exclude>
                                <exclude>META-INF/*.RSA</exclude>
                            </excludes>
                        </filter>
                    </filters>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

I really appreciate your help. Thank you very much.

The Consumer's screen: it seems to run but cannot receive any messages from the Producer.

Ayl*_*ake 7

I ran into the same problem as you. After trying for a long time, here is the answer.

There are two ways to use the new Kafka consumer API; you can choose either one:

consumer.assign(...)

consumer.subscribe(...)

and use them as follows:

    // set these properties, or else start the consumer before running the producer
    props.put("enable.auto.commit", "false");
    props.put("auto.offset.reset", "earliest");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

    boolean assign = false;
    if(assign) {
        TopicPartition tp = new TopicPartition(topic, 0);
        List<TopicPartition> tps = Arrays.asList(tp);
        consumer.assign(tps);
        consumer.seekToBeginning(tps);
    }else {
        consumer.subscribe(Arrays.asList(topic));
    }

http://kafka.apache.org/documentation.html#newconsumerconfigs
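
To make the subscribe() path concrete, here is a minimal, self-contained sketch of a new-API consumer with the pieces the snippet above omits (bootstrap servers, deserializers, and a poll loop). It assumes the kafka-clients new consumer API (0.9 or later, as in the question's POM); the class name AddLab_NewConsumer and the topic name TutorialTopic are just placeholders:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AddLab_NewConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // the new consumer talks to the broker, not ZooKeeper
        props.put("group.id", "mygroupid2");
        props.put("enable.auto.commit", "true");
        props.put("auto.offset.reset", "earliest"); // for a group with no committed offset, start from the beginning
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("TutorialTopic")); // placeholder topic name

        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("offset " + record.offset() + ": " + record.value());
                }
            }
        } finally {
            consumer.close();
        }
    }
}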

If you use the old consumer API, the property configuration is almost the same. If you want the consumer to see messages that were produced before it started, remember to add the following two lines:

props.put("enable.auto.commit", "false");
props.put("auto.offset.reset", "earliest");
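
Note that the old high-level (ZooKeeper-based) consumer used in the question's AddLab_Consumer spells these settings slightly differently; the lines below are a sketch based on the 0.9-era old-consumer configuration names:

// Old high-level consumer: roughly equivalent settings.
// The old consumer uses "auto.commit.enable" and the values "smallest"/"largest".
props.put("auto.commit.enable", "false");
props.put("auto.offset.reset", "smallest"); // start from the earliest available offset for a new group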

Hope this helps someone else.