What is the difference between broker-list servers and bootstrap servers?

Ama*_*man 8 apache-kafka

What is the difference in Kafka between broker-list and bootstrap servers?

Lak*_*utu 10

I also hate reading Kafka documentation that reads like a wall of text :P
As far as I understand it:

  • broker-list

    • The full list of servers; the producer may not work if some are missing
    • Related to producer commands (see the example pair after this list)
  • bootstrap-server

    • One server is enough to discover all the others
    • Related to consumer commands
    • Zookeeper is involved
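
As a minimal pair of example commands (the broker address localhost:9092 is assumed here; bets is the topic name used in the example further down):

kafka-console-producer.sh --broker-list localhost:9092 --topic bets
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic bets --from-beginning

Both flags accept the same HOST1:PORT1,HOST2:PORT2 list format; what differs is which tool requires which flag.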

Sorry for being so... brief. Next time I will pay more attention to the details to be clearer. To illustrate the point, I will use the Kafka 1.0.1 console scripts.

kafka-console-consumer.sh

The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option                                   Description
------                                   -----------
--blacklist <String: blacklist>          Blacklist of topics to exclude from
                                           consumption.
--bootstrap-server <String: server to    REQUIRED (unless old consumer is
  connect to>                              used): The server to connect to.
--consumer-property <String:             A mechanism to pass user-defined
  consumer_prop>                           properties in the form key=value to
                                           the consumer.
--consumer.config <String: config file>  Consumer config properties file. Note
                                           that [consumer-property] takes
                                           precedence over this config.
--csv-reporter-enabled                   If set, the CSV metrics reporter will
                                           be enabled
--delete-consumer-offsets                If specified, the consumer path in
                                           zookeeper is deleted when starting up
--enable-systest-events                  Log lifecycle events of the consumer
                                           in addition to logging consumed
                                           messages. (This is specific for
                                           system tests.)
--formatter <String: class>              The name of a class to use for
                                           formatting kafka messages for
                                           display. (default: kafka.tools.
                                           DefaultMessageFormatter)
--from-beginning                         If the consumer does not already have
                                           an established offset to consume
                                           from, start with the earliest
                                           message present in the log rather
                                           than the latest message.
--group <String: consumer group id>      The consumer group id of the consumer.
--isolation-level <String>               Set to read_committed in order to
                                           filter out transactional messages
                                           which are not committed. Set to
                                           read_uncommitted to read all
                                           messages. (default: read_uncommitted)
--key-deserializer <String:
  deserializer for key>
--max-messages <Integer: num_messages>   The maximum number of messages to
                                           consume before exiting. If not set,
                                           consumption is continual.
--metrics-dir <String: metrics           If csv-reporter-enable is set, and
  directory>                               this parameter is set, the csv
                                           metrics will be output here
--new-consumer                           Use the new consumer implementation.
                                           This is the default, so this option
                                           is deprecated and will be removed in
                                           a future release.
--offset <String: consume offset>        The offset id to consume from (a non-
                                           negative number), or 'earliest'
                                           which means from beginning, or
                                           'latest' which means from end
                                           (default: latest)
--partition <Integer: partition>         The partition to consume from.
                                           Consumption starts from the end of
                                           the partition unless '--offset' is
                                           specified.
--property <String: prop>                The properties to initialize the
                                           message formatter.
--skip-message-on-error                  If there is an error when processing a
                                           message, skip it instead of halt.
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is
                                           available for consumption for the
                                           specified interval.
--topic <String: topic>                  The topic id to consume on.
--value-deserializer <String:
  deserializer for values>
--whitelist <String: whitelist>          Whitelist of topics to include for
                                           consumption.
--zookeeper <String: urls>               REQUIRED (only when using old
                                           consumer): The connection string for
                                           the zookeeper connection in the form
                                           host:port. Multiple URLS can be
                                           given to allow fail-over.

kafka-console-producer.sh
Read data from standard input and publish it to Kafka.
Option                                   Description
------                                   -----------
--batch-size <Integer: size>             Number of messages to send in a single
                                           batch if they are not being sent
                                           synchronously. (default: 200)
--broker-list <String: broker-list>      REQUIRED: The broker list string in
                                           the form HOST1:PORT1,HOST2:PORT2.
--compression-codec [String:             The compression codec: either 'none',
  compression-codec]                       'gzip', 'snappy', or 'lz4'. If
                                           specified without value, then it
                                           defaults to 'gzip'
--key-serializer <String:                The class name of the message encoder
  encoder_class>                           implementation to use for
                                           serializing keys. (default: kafka.
                                           serializer.DefaultEncoder)
--line-reader <String: reader_class>     The class name of the class to use for
                                           reading lines from standard in. By
                                           default each line is read as a
                                           separate message. (default: kafka.
                                           tools.
                                           ConsoleProducer$LineMessageReader)
--max-block-ms <Long: max block on       The max time that the producer will
  send>                                    block for during a send request
                                           (default: 60000)
[...]

As you can see, the bootstrap-server parameter appears only for the consumer. On the other hand, broker-list appears only in the producer's option list.

Moreover:

kafka-console-consumer.sh --zookeeper localhost:2181 --topic bets
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].

So, as cricket-007 noticed, bootstrap-server and zookeeper appear to serve a similar purpose. The difference is that --zookeeper should point to Zookeeper nodes on one end, while --bootstrap-server points to Kafka nodes and ports.
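
To make the port distinction concrete, here is the same console consumer started both ways (2181 is Zookeeper's default client port and 9092 is Kafka's default broker port; the host and topic names are assumed):

kafka-console-consumer.sh --zookeeper localhost:2181 --topic bets
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic bets

The first form uses the deprecated old consumer and triggers the warning quoted above; the second uses the new consumer.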

To reiterate: bootstrap-server is used as a consumer parameter, and broker-list as a producer parameter.

  • Can you explain "Zookeeper is involved"? Given that the new consumer API can use `--zookeeper` or `--bootstrap-server` (3 upvotes)

Ati*_*ain 5

This answer is just for reference: I had not used --broker-list, so I was confused, until I realized it has been deprecated.

I am currently using Kafka version 2.6.0.

Now, for both the producer and the consumer, we have to use --bootstrap-server instead of --broker-list, since the latter is deprecated.

You can check this in the Kafka console scripts.

bin/kafka-console-producer.sh

[Screenshot: kafka-console-producer.sh usage output, with --broker-list marked as deprecated]

As you can see, kafka-console-producer.sh has deprecated --broker-list.

bin/kafka-console-consumer.sh

[Screenshot: kafka-console-consumer.sh usage output, with --bootstrap-server as the connection option]
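
For completeness, a sketch of the equivalent modern invocations on Kafka 2.6.0, where both tools accept --bootstrap-server (the broker address localhost:9092 and the topic name bets are assumed):

bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic bets
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic bets --from-beginning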