I'm currently running a Spark job on Dataproc and am getting errors when trying to re-join the group and read data from a Kafka topic. I've done some digging but am not sure what the issue is. I have auto.offset.reset set to earliest, so it should read from the earliest available uncommitted offset. Initially my Spark logs look like this (a rough sketch of the consumer setup follows the log):
19/04/29 16:30:30 INFO org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer clientId=consumer-1, groupId=demo-group] Resetting offset for partition demo.topic-11 to offset 5553330.
19/04/29 16:30:30 INFO org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer clientId=consumer-1, groupId=demo-group] Resetting offset for partition demo.topic-2 to offset 5555553.
19/04/29 16:30:30 INFO org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer clientId=consumer-1, groupId=demo-group] Resetting offset for partition demo.topic-3 to offset 5555484.
19/04/29 16:30:30 INFO org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer clientId=consumer-1, groupId=demo-group] Resetting offset for partition demo.topic-4 to offset 5555586.
19/04/29 16:30:30 INFO org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer clientId=consumer-1, groupId=demo-group] Resetting offset …
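For context, here is roughly how the consumer is configured. This is a minimal sketch, assuming the DStream-based spark-streaming-kafka-0-10 integration (the fixed groupId in the log suggests a caller-supplied group.id); the broker address, topic name and batch interval are placeholders, not the actual values from my job:

```scala
// Minimal sketch, not the actual job: broker, topic and batch interval are
// placeholders; only group.id and auto.offset.reset mirror the real setup.
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object DemoStream {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("demo-stream"), Seconds(10))

    val kafkaParams = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "broker-1:9092", // placeholder
      ConsumerConfig.GROUP_ID_CONFIG -> "demo-group",
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      // With no committed offset for the group, "earliest" should start the
      // consumer at the beginning of each partition instead of failing.
      ConsumerConfig.AUTO_OFFSET_RESET_CONFIG -> "earliest",
      ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Seq("demo.topic"), kafkaParams)
    )

    // Just count records per batch to confirm data is flowing.
    stream.foreachRDD(rdd => println(s"records in batch: ${rdd.count()}"))

    ssc.start()
    ssc.awaitTermination()
  }
}
```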
Is it possible to do some kind of conditional check in a Helm chart before declaring a variable for a deployment? For example, suppose I have:
- name: EXAMPLE_VAR
  valueFrom:
    secretKeyRef:
      name: "name"
      key: "key"
but I only want to include it in deployments for a specific configuration (based on an environment variable), and I don't want to maintain a separate YAML config just for this one option.
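Something like the sketch below is what I have in mind; the .Values.includeExampleVar flag and the surrounding env list are made up purely to illustrate the idea, and I'm not sure whether this is the idiomatic approach:

```yaml
# Hypothetical fragment of templates/deployment.yaml; includeExampleVar is an
# invented values flag toggled per environment, e.g. --set includeExampleVar=true
env:
  - name: ALWAYS_PRESENT_VAR
    value: "some-static-value"
  {{- if .Values.includeExampleVar }}
  - name: EXAMPLE_VAR
    valueFrom:
      secretKeyRef:
        name: "name"
        key: "key"
  {{- end }}
```

If there is a more standard pattern than guarding the single entry with an if block, that would work for me too.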