I can't start Minikube

fei*_*eiz 17 ubuntu docker kubernetes minikube

I have installed Minikube, but when I run minikube start I get the following error:

    minikube v1.17.1 on Ubuntu 20.04
    ✨  Using the docker driver based on existing profile
    Starting control plane node minikube in cluster minikube
    Updating the running docker "minikube" container ...
    Preparing Kubernetes v1.20.2 on Docker 20.10.0 ...
    Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
        ▪ Generating certificates and keys ...
        ▪ Booting up control plane ...
    initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
    stdout:
    [init] Using Kubernetes version: v1.20.2
    [preflight] Running pre-flight checks
    [preflight] The system verification failed. Printing the output from the verification:
    KERNEL_VERSION: 5.8.0-40-generic
    DOCKER_VERSION: 20.10.0
    OS: Linux
    CGROUPS_CPU: enabled
    CGROUPS_CPUACCT: enabled
    CGROUPS_CPUSET: enabled
    CGROUPS_DEVICES: enabled
    CGROUPS_FREEZER: enabled
    CGROUPS_MEMORY: enabled
    CGROUPS_PIDS: enabled
    CGROUPS_HUGETLB: enabled
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/var/lib/minikube/certs"
    [certs] Using existing ca certificate authority
    [certs] Using existing apiserver certificate and key on disk
    [certs] Using existing apiserver-kubelet-client certificate and key on disk
    [certs] Using existing front-proxy-ca certificate authority
    [certs] Using existing front-proxy-client certificate and key on disk
    [certs] Using existing etcd/ca certificate authority
    [certs] Using existing etcd/server certificate and key on disk
    [certs] Using existing etcd/peer certificate and key on disk
    [certs] Using existing etcd/healthcheck-client certificate and key on disk
    [certs] Using existing apiserver-etcd-client certificate and key on disk
    [certs] Using the existing "sa" key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
    [the two kubelet-check lines above repeat four more times]

        Unfortunately, an error has occurred:
            timed out waiting for the condition

        This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'

    stderr:

        ▪ Generating certificates and keys ...
        ▪ Booting up control plane ...

    Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=[same list as above]": Process exited with status 1
    [the same kubeadm init stdout and error text as above is repeated verbatim]

    minikube is exiting due to an error. If the above message is not useful, open an issue:
    https://github.com/kubernetes/minikube/issues/new/choose

    ❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=[same list as above]": Process exited with status 1
    [the same kubeadm init stdout and error text as above is repeated verbatim]

    Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
    Related issue: https://github.com/kubernetes/minikube/issues/4172

I don't understand what the problem is here. It worked once, but later I ran into a similar error. It says:


    Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...| ❌  Unable to load cached images: loading cached images: stat /home/feiz-nouri/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4: no such file or directory


I uninstalled it and reinstalled it, but I still get the error.


How can I fix this?


小智 16

You can remove the old cluster with minikube delete, then start Minikube again with minikube start.
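As a sketch, the two steps above look like this (assuming minikube is installed and on your PATH; the docker driver is implied by the question's log, so no driver flag is needed):

```shell
# Delete the old cluster and all of its state
# (profile, certificates, the "minikube" docker container).
minikube delete

# Start a fresh cluster; minikube re-detects the driver
# from the environment.
minikube start
```

This helps because the log shows stale certificates and an existing profile being reused; minikube delete wipes that state so the next start begins from a clean slate.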


小智 5

I followed these steps:

docker system prune
minikube delete
minikube start --container-runtime=containerd
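If a clean restart still fails the kubelet health check, the error output in the question itself suggests one more thing to try: forcing the kubelet's cgroup driver to systemd. A hedged variant of the same delete/start sequence would be:

```shell
# Reset, then start with the cgroup-driver override that the
# minikube error output ("Suggestion:" line) recommends.
minikube delete
minikube start --extra-config=kubelet.cgroup-driver=systemd
```

This matches the kubelet's cgroup driver to systemd, which is what Ubuntu 20.04's Docker setup typically uses; a mismatch is one known cause of the "kubelet isn't running or healthy" loop shown in the log.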