Kubeadm 1.24 with containerd: kubeadm init fails (CentOS 7)

awo*_*t83 5 centos7 kubernetes kubeadm containerd

I am trying to install a single-node cluster on CentOS 7, using kubeadm 1.24 and containerd. I followed the installation steps.


I ran:

containerd config default > /etc/containerd/config.toml

and set SystemdCgroup = true in it.
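As a quick sanity check that the change took effect, a sketch (assuming containerd 1.6's default config layout, where SystemdCgroup appears once, under the runc runtime options of the CRI plugin):

grep SystemdCgroup /etc/containerd/config.toml
# Expected after the edit:
#     SystemdCgroup = true

# containerd must be restarted to pick up the edit:
systemctl restart containerd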


But kubeadm init fails with:

[root@master-node .kube]# kubeadm init
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
        [WARNING HTTPProxy]: Connection to "https://10.XXXXXXXX" uses proxy "http://proxy-XXXXXXXXX.com:8080/". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.96.XXXXXXXX" uses proxy "http://proxy-XXXXXXXXX.com:8080/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 10.XXXXXXXX]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [10.XXXXXX 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [10.XXXXXXX 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

systemctl status kubelet reports: Active: active (running)


And the logs from journalctl -xeu kubelet:

mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.715751    8685 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reas
mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.809523    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.910121    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.010996    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.111729    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.185461    8685 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://10.3
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.212834    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.313367    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.413857    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: I0520 17:07:06.433963    8685 kubelet_node_status.go:70] "Attempting to register node" node="master-node"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.434313    8685 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.
mai 20 17:07:06 master-node kubelet[8685]: W0520 17:07:06.451759    8685 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDr
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.451831    8685 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSID
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.514443    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573293    8685 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Un
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573328    8685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573353    8685 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573412    8685 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574220    8685 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Un
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574254    8685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574279    8685 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574321    8685 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.615512    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.716168    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.816764    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"

/var/log/messages contains many lines like:

May 22 12:50:00 master-node kubelet: E0522 12:50:00.616324   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
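For context, the "cni plugin not initialized" message usually means containerd found no CNI network configuration yet. A hedged check of the conventional default paths:

ls /etc/cni/net.d/   # CNI network configs; empty until a network add-on (e.g. Calico) installs one
ls /opt/cni/bin/     # CNI plugin binaries shipped by the kubernetes-cni package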


[root@master-node .kube]# systemctl status containerd

● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/containerd.service.d
           └─http_proxy.conf
   Active: active (running) since dim. 2022-05-22 12:28:59 CEST; 22min ago
     Docs: https://containerd.io
 Main PID: 18416 (containerd)
    Tasks: 111
   Memory: 414.6M
   CGroup: /system.slice/containerd.service
           ├─18416 /usr/bin/containerd
           ├─19025 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id c7bc656d43ab9b01e546e4fd4ad88634807c836c4e86622cd0506a0b2216c89a -address /run/container...
           ├─19035 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id b9097bd741e5b87042b4592d26b46cce5f14a24e609e03c91282a438c2dcd7f8 -address /run/container...
           ├─19047 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 979ac32bd88c094dae25964159066202bab919ca2aea4299827807c0829c3fa2 -address /run/container...
           ├─19083 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id a6bcd2c83034531d9907defce5eda846dbdfcf474cbfe0eba7464bb670d5b73d -address /run/container...
           ├─kubepods-burstable-pod07444178f947cc274160582c2d92fd91.slice:cri-containerd:27b2a5932689d1d62fa03024b9b9542e24bc5fda8d5088cbeecf72f66afd4251
           │ └─19266 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-ad...
           ├─kubepods-burstable-pod817561003fea443230cdbdc318133c3d.slice:cri-containerd:c5c8abc23cb256e2b7f01e767ea18ba6b78f851b68f594349cb6449e2c2c2409
           │ └─19259 kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/contro...
           ├─kubepods-burstable-pod68dc7c99c505d2f1495ca6aaa1fe2ba6.slice:cri-containerd:231b0ecd5ad9e49e2276770f235a753b4bac36d0888ef0d1cb24af56e89fa23e
           │ └─19246 etcd --advertise-client-urls=https://10.32.67.20:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var...
           ├─kubepods-burstable-podc5c33a178f011135df400feb1027e3a5.slice:cri-containerd:9cf36107d9881a5204f01bdc6a45a097a3130ae5c3a237b02dfa03978b21dc42
           │ └─19233 kube-apiserver --advertise-address=10.32.67.20 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca...
           ├─kubepods-burstable-pod817561003fea443230cdbdc318133c3d.slice:cri-containerd:a6bcd2c83034531d9907defce5eda846dbdfcf474cbfe0eba7464bb670d5b73d
           │ └─19140 /pause
           ├─kubepods-burstable-pod07444178f947cc274160582c2d92fd91.slice:cri-containerd:c7bc656d43ab9b01e546e4fd4ad88634807c836c4e86622cd0506a0b2216c89a
           │ └─19133 /pause
           ├─kubepods-burstable-pod68dc7c99c505d2f1495ca6aaa1fe2ba6.slice:cri-containerd:b9097bd741e5b87042b4592d26b46cce5f14a24e609e03c91282a438c2dcd7f8
           │ └─19124 /pause
           └─kubepods-burstable-podc5c33a178f011135df400feb1027e3a5.slice:cri-containerd:979ac32bd88c094dae25964159066202bab919ca2aea4299827807c0829c3fa2
             └─19117 /pause

mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.146209618+02:00" level=info msg="StartContainer for \"231b0ecd5ad9e49e2276770f23...9fa23e\""
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.151240012+02:00" level=info msg="CreateContainer within sandbox \"c7bc656d43ab9b01e546e4f...
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.151540207+02:00" level=info msg="StartContainer for \"27b2a5932689d1d62fa03024b9...fd4251\""
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.164666904+02:00" level=info msg="CreateContainer within sandbox \"a6bcd2c83034531d9907def...
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.166282219+02:00" level=info msg="StartContainer for \"c5c8abc23cb256e2b7f01e767e...2c2409\""
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.277928704+02:00" level=info msg="StartContainer for \"9cf36107d9881a5204f01bdc6a...essfully"
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.288703134+02:00" level=info msg="StartContainer for \"c5c8abc23cb256e2b7f01e767e...essfully"
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.290631867+02:00" level=info msg="StartContainer for \"231b0ecd5ad9e49e2276770f23...essfully"
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.293864738+02:00" level=info msg="StartContainer for \"27b2a5932689d1d62fa03024b9...essfully"
mai 22 12:46:55 master-node containerd[18416]: time="2022-05-22T12:46:55.476960835+02:00" level=error msg="ContainerStatus for \"58ef67cb3c64c5032bf0dac6f1913e53e...
Hint: Some lines were ellipsized, use -l to show in full.

[root@master-node .kube]# systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since dim. 2022-05-22 12:45:55 CEST; 6min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 18961 (kubelet)
    Tasks: 16
   Memory: 44.2M
   CGroup: /system.slice/kubelet.service
           └─18961 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kube...

mai 22 12:51:25 master-node kubelet[18961]: E0522 12:51:25.632732   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:30 master-node kubelet[18961]: E0522 12:51:30.633996   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:35 master-node kubelet[18961]: E0522 12:51:35.634586   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:40 master-node kubelet[18961]: E0522 12:51:40.635415   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:45 master-node kubelet[18961]: E0522 12:51:45.636621   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:50 master-node kubelet[18961]: E0522 12:51:50.637966   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:55 master-node kubelet[18961]: E0522 12:51:55.639255   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:52:00 master-node kubelet[18961]: E0522 12:52:00.640514   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:52:05 master-node kubelet[18961]: E0522 12:52:05.641452   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:52:10 master-node kubelet[18961]: E0522 12:52:10.642237   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
Hint: Some lines were ellipsized, use -l to show in full.

[root@master-node yum.repos.d]# rpm -qa|grep containerd
containerd.io-1.6.4-3.1.el7.x86_64

[root@master-node yum.repos.d]# rpm -qa |grep kube
kubeadm-1.24.0-0.x86_64
kubectl-1.24.0-0.x86_64
kubelet-1.24.0-0.x86_64
kubernetes-cni-0.8.7-0.x86_64

I also tried to install Calico:

[root@master-node .kube]# kubectl apply -f calico.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
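The localhost:8080 error is kubectl falling back to its default server because it has no kubeconfig; it is not a Calico problem as such. Once kubeadm init succeeds, pointing kubectl at the admin kubeconfig is the standard post-init step:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes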


[root@master-node ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_KUBEADM_ARGS=--node-ip=10.XXXXXX --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --cgroup-driver=systemd
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

I was not sure whether:


[Edit: I answer the questions below]

  • Because of containerd, do I have to run kubeadm init with a --config file? Answer: => [No] (for reference, a sketch of such a config follows this list)
  • Do I have to install a CNI like Calico first? Answer: => [No, kubeadm init works without one]
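Even though a config file turned out not to be required, a minimal kubeadm config expressing the same settings (containerd socket, systemd cgroup driver) might look like the sketch below; v1beta3 is the config API kubeadm 1.24 uses, but the values here are illustrative:

cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
kubeadm init --config kubeadm-config.yaml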

[Edit] The same installation works fine with Google DNS and without the corporate proxy.
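That points at the proxy as the likely culprit, which matches the two preflight warnings above. A common mitigation is to exempt the node IP and the cluster service/pod ranges from the proxy, both in the shell that runs kubeadm and in containerd's existing http_proxy.conf drop-in. A sketch; the CIDRs below are the kubeadm and Calico defaults used as placeholders, so adjust them to your network:

mkdir -p /etc/systemd/system/containerd.service.d
cat > /etc/systemd/system/containerd.service.d/http_proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy-XXXXXXXXX.com:8080/"
Environment="HTTPS_PROXY=http://proxy-XXXXXXXXX.com:8080/"
# NO_PROXY must cover the API server/node address, the service CIDR and the pod CIDR:
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16"
EOF
systemctl daemon-reload
systemctl restart containerd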


小智 0

Make sure containerd is running before you run kubeadm. If you have nerdctl, try:

nerdctl run -it --rm gcr.io/google-samples/env-show:1.1
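If nerdctl is not installed, the ctr client that ships with containerd can serve as a rough smoke test instead (a sketch; the image choice is arbitrary):

ctr images pull docker.io/library/hello-world:latest
ctr run --rm docker.io/library/hello-world:latest smoke-test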

Problems? The CRI integration is probably not configured. Try:

containerd config default > /etc/containerd/config.toml 
systemctl restart containerd
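After the restart, one way to confirm the CRI plugin is actually loaded is to query the socket with crictl (assuming crictl is installed); it reports runtime status, cgroup driver and CNI state:

crictl --runtime-endpoint unix:///run/containerd/containerd.sock info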

That should help you sort it out, but you may need to provide more debugging information.