Bal*_*u R · score: -1 · tags: kubernetes, kubelet, kubeadm
Execution log from kubeadm init:
$ kubeadm init --kubernetes-version="v1.18.0" --pod-network-cidr="10.244.0.0/16"

W0519 21:08:48.180818  913499 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [host422 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.180.40.75]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host422 localhost] and IPs [10.180.40.75 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host422 localhost] and IPs [10.180.40.75 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0519 21:08:50.681218  913499 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0519 21:08:50.681948  913499 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

kubelet status (note that "Setting node annotation to enable volume controller attach/detach" keeps being printed in the log):
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2021-05-19 21:08:48 IST; 17min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 913672 (kubelet)
    Tasks: 18 (limit: 101228)
   Memory: 33.7M
   CGroup: /system.slice/kubelet.service
           └─913672 /home0/kubernetes/kubernetes/server/bin/kubelet --root-dir=/home0/kubernetes/workdir

May 19 21:24:49 InBlrbnc422 kubelet[913672]: I0519 21:24:49.379623  913672 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 19 21:24:59 InBlrbnc422 kubelet[913672]: I0519 21:24:59.425035  913672 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach

What I have tried:
(1) Ran swapoff -a
(2) Tried updating the cgroup driver of both docker and the kubelet to systemd, but somehow the kubelet did not pick up the change (see the check below). I would also hope that kubeadm init is able to run with the cgroupfs driver.
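For reference, the two drivers can be compared like this (a minimal check; it assumes the default file locations that kubeadm logged above, and with a custom --root-dir like mine the kubelet config may live elsewhere):

# Driver Docker is actually running with (should match the kubelet's)
docker info --format '{{.CgroupDriver}}'
# Flags kubeadm handed to the kubelet
cat /var/lib/kubelet/kubeadm-flags.env
# cgroupDriver value in the kubelet config file, if set
grep -i cgroupdriver /var/lib/kubelet/config.yaml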
Please let me know what else I need to check.
Update: stack trace
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:203
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1357
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:203
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1357
Usually this problem is caused by a misconfigured VM or misconfigured packages. Try the following steps and it should work for you (all commands need to be run as root):
First, reset the kubeadm cluster with the reset command and flush iptables (to avoid any networking problems):
kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
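If kube-proxy was ever run in IPVS mode, its virtual-server table survives the iptables flush; assuming the ipvsadm utility is installed, it can be cleared as well:

ipvsadm --clear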
Second, you need to change the Docker cgroup driver to systemd (the driver recommended by default for the kubelet's container runtime configuration), then restart the docker service:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
systemctl daemon-reload
systemctl restart docker
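Before re-running kubeadm init, it is worth confirming that Docker actually picked up the new driver; the following should now print systemd instead of cgroupfs:

docker info --format '{{.CgroupDriver}}'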
Finally, you need to turn swap off and start (and enable) the kubelet service:
swapoff -a
systemctl enable kubelet
systemctl start kubelet
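Note that swapoff -a only lasts until the next reboot. A common way to make it permanent (an assumption on my part, not something the steps above strictly require) is to comment out the swap entry in /etc/fstab:

# Hypothetical one-liner: comment out every fstab entry that mounts swap (review your fstab first)
sed -i '/ swap / s/^#*/#/' /etc/fstab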
I installed Kubernetes with exactly the same packages, but on Kubernetes v1.21.0, and it works fine for me; if the above does not work for you, perhaps you should upgrade to that version.
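To see which versions are currently installed before deciding to upgrade, something like the following works (assuming the binaries are on your PATH):

kubeadm version -o short
kubelet --version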