I'm using this chart: https://github.com/helm/charts/tree/master/stable/prometheus-mongodb-exporter

The chart requires the MONGODB_URI environment variable, or mongodb.uri set in the values.yaml file. Since this is a connection string, I don't want to check it into git. I'm thinking of using a Kubernetes Secret and supplying the connection string from it, but I haven't been able to find a working solution to this problem.

I also tried creating another Helm chart and making this chart a dependency of it, supplying the value for MONGODB_URI from a secrets.yaml, but that didn't work either: in the prometheus-mongodb-exporter chart, MONGODB_URI is defined as a required value which is then passed into that chart's secrets.yaml template, so the dependent chart never gets installed.

What is the best way to achieve this?
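A sketch of one way to do it (the secret name mongodb-exporter-uri and key uri below are placeholders, not anything the chart defines): create the Secret out of band, then read it back and pass it to helm with --set at install time, so the connection string never touches a file in git. Some later revisions of this chart also added an existingSecret value that points the Deployment at a pre-existing Secret directly; check the values.yaml of the chart version you are using.

# Create the secret once, outside version control (names are examples):
kubectl create secret generic mongodb-exporter-uri \
    --from-literal=uri='mongodb://user:password@mongodb:27017'

# At install time (Helm 2 syntax), decode it and hand it to the chart
# via --set, so the URI never appears in values.yaml or git history:
helm install stable/prometheus-mongodb-exporter \
    --name mongodb-exporter \
    --set mongodb.uri="$(kubectl get secret mongodb-exporter-uri \
        -o jsonpath='{.data.uri}' | base64 --decode)"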
I'm using the https://github.com/helm/charts/tree/master/stable/airflow helm chart. I built a v1.10.8 puckel/docker-airflow image, installed kubernetes on top of it, and used that image in the helm chart, but I keep getting:
File "/usr/local/bin/airflow", line 37, in <module>
args.func(args)
File "/usr/local/lib/python3.7/site-packages/airflow/bin/cli.py", line 1140, in initdb
db.initdb(settings.RBAC)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/db.py", line 332, in initdb
dagbag = models.DagBag()
File "/usr/local/lib/python3.7/site-packages/airflow/models/dagbag.py", line 95, in __init__
executor = get_default_executor()
File "/usr/local/lib/python3.7/site-packages/airflow/executors/__init__.py", line 48, in get_default_executor
DEFAULT_EXECUTOR = _get_executor(executor_name)
File "/usr/local/lib/python3.7/site-packages/airflow/executors/__init__.py", line 87, in _get_executor
return KubernetesExecutor()
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 702, in __init__
self.kube_config = KubeConfig()
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 283, in __init__
self.kube_client_request_args = json.loads(kube_client_request_args)
File "/usr/local/lib/python3.7/json/__init__.py", line …Run Code Online (Sandbox Code Playgroud) 我试图让这个蜘蛛工作,如果要求分别刮下它的组件,它可以工作,但是当我尝试使用Srapy回调函数来接收参数后,我会崩溃.目标是在输出json文件中以格式写入时抓取多个页面并刮取数据:
I'm trying to get this spider to work, and it works when asked to scrape its components separately, but when I try to use Scrapy callback functions to pass arguments between them it crashes. The goal is to crawl across multiple pages, scrape the data, and write it to the output JSON file in the format:

author | album | title | lyrics
Each piece of data is located on a different web page, which is why I want to use Scrapy callback functions to achieve this.

Also, each of the items above is defined in Scrapy's items.py as:
import scrapy

class TutorialItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    author = scrapy.Field()
    album = scrapy.Field()
    title = scrapy.Field()
    lyrics = scrapy.Field()
The spider code starts here:
import scrapy
import re
import json
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from tutorial.items import TutorialItem

# urls class
class DomainSpider(scrapy.Spider):
    name = "domainspider"
    allowed_domains = ['www.domain.com']
    start_urls = [
        'http://www.domain.com',
    ]
    rules = (
        Rule(LinkExtractor(allow='www\.domain\.com/[A-Z][a-zA-Z_/]+$'),
             'parse', follow=True,
        ), …
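For the "pass data between callbacks" part, here is a minimal sketch of the usual request.meta pattern (the URLs, CSS selectors, and callback names are invented for illustration, not taken from the real site): each callback extracts its piece of the item and forwards the partial data to the next request, and only the last callback yields the finished TutorialItem.

import scrapy

from tutorial.items import TutorialItem


class LyricsSpider(scrapy.Spider):
    """Hypothetical spider: domain, paths, and selectors are placeholders."""
    name = "lyricsspider"
    allowed_domains = ['www.domain.com']
    start_urls = ['http://www.domain.com/artists']

    def parse(self, response):
        # Artist page: remember the author, follow every album link.
        author = response.css('h1.artist::text').get()
        for href in response.css('a.album::attr(href)').getall():
            yield response.follow(href, callback=self.parse_album,
                                  meta={'author': author})

    def parse_album(self, response):
        # Album page: add the album name, follow every song link.
        meta = {'author': response.meta['author'],
                'album': response.css('h2.album::text').get()}
        for href in response.css('a.song::attr(href)').getall():
            yield response.follow(href, callback=self.parse_song, meta=meta)

    def parse_song(self, response):
        # Song page: everything carried this far plus title and lyrics.
        item = TutorialItem()
        item['author'] = response.meta['author']
        item['album'] = response.meta['album']
        item['title'] = response.css('h1.title::text').get()
        item['lyrics'] = '\n'.join(response.css('div.lyrics ::text').getall())
        yield item

Running it with scrapy crawl lyricsspider -o output.json then produces one JSON object per song with the four fields; on Scrapy 1.7+ the cb_kwargs request argument is a cleaner alternative to meta for this. One caveat relevant to the posted code: when a spider subclasses CrawlSpider and uses rules, the rule callback must not be named parse, because CrawlSpider reserves parse for its own internal logic; renaming the callback (e.g. to parse_item) is the documented fix.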