kubelet cannot get cgroup stats for the docker and kubelet services

Jér*_*Pin 12 cgroups kubernetes

I am running Kubernetes on bare-metal Debian (3 masters, 2 workers; it is a PoC for now). I followed k8s-hard-way, and I am hitting the following problem on my kubelet:

Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

I get the same message for kubelet.service.

I do have some files for these cgroups:

$ ls /sys/fs/cgroup/systemd/system.slice/docker.service
cgroup.clone_children  cgroup.procs  notify_on_release  tasks

$ ls /sys/fs/cgroup/systemd/system.slice/kubelet.service/
cgroup.clone_children  cgroup.procs  notify_on_release  tasks
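The "unknown container" error likely means the accounting controllers (cpu, cpuacct, memory) have no `system.slice/docker.service` directory, even though the `name=systemd` hierarchy shown above does. A small sketch to check this, using the controller names and paths from the question (adjust to your layout):

```shell
# Check whether each accounting controller has a docker.service cgroup;
# the listings above only show it under the name=systemd hierarchy.
for ctrl in cpu cpuacct memory; do
  if [ -d "/sys/fs/cgroup/$ctrl/system.slice/docker.service" ]; then
    echo "present: $ctrl"
  else
    echo "missing: $ctrl"
  fi
done
```

If the directories are missing from those hierarchies, kubelet cannot read stats from them, which is consistent with the workaround below of pointing it at the systemd hierarchy instead.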

cAdvisor tells me:

$ curl http://127.0.0.1:4194/validate
cAdvisor version: 

OS version: Debian GNU/Linux 8 (jessie)

Kernel version: [Supported and recommended]
    Kernel version is 3.16.0-4-amd64. Versions >= 2.6 are supported. 3.0+ are recommended.


Cgroup setup: [Supported and recommended]
    Available cgroups: map[cpu:1 memory:1 freezer:1 net_prio:1 cpuset:1 cpuacct:1 devices:1 net_cls:1 blkio:1 perf_event:1]
    Following cgroups are required: [cpu cpuacct]
    Following other cgroups are recommended: [memory blkio cpuset devices freezer]
    Hierarchical memory accounting enabled. Reported memory usage includes memory used by child containers.


Cgroup mount setup: [Supported and recommended]
    Cgroups are mounted at /sys/fs/cgroup.
    Cgroup mount directories: blkio cpu cpu,cpuacct cpuacct cpuset devices freezer memory net_cls net_cls,net_prio net_prio perf_event systemd 
    Any cgroup mount point that is detectible and accessible is supported. /sys/fs/cgroup is recommended as a standard location.
    Cgroup mounts:
    cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
    cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
    cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
    cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
    cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
    cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
    cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
    cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
    cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0


Managed containers: 
    /kubepods/burstable/pod76099b4b-af57-11e7-9b82-fa163ea0076a
    /kubepods/besteffort/pod6ed4ee49-af53-11e7-9b82-fa163ea0076a/f9da6bf60a186c47bd704bbe3cc18b25d07d4e7034d185341a090dc3519c047a
            Namespace: docker
            Aliases:
                    k8s_tiller_tiller-deploy-cffb976df-5s6np_kube-system_6ed4ee49-af53-11e7-9b82-fa163ea0076a_1
                    f9da6bf60a186c47bd704bbe3cc18b25d07d4e7034d185341a090dc3519c047a
    /kubepods/burstable/pod76099b4b-af57-11e7-9b82-fa163ea0076a/956911118c342375abfb7a07ec3bb37451bbc64a1e141321b6284cf5049e385f

EDIT

Disabling the cAdvisor port on the kubelet (--cadvisor-port=0) does not fix the issue.

小智 22

Try starting the kubelet with:

--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
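On systemd-based distributions, one way to make these flags stick across restarts is a drop-in for the kubelet unit. This is only a sketch: the drop-in filename is made up here, and `KUBELET_EXTRA_ARGS` is an assumed variable name that must actually be referenced in your unit's `ExecStart` line (inspect it with `systemctl cat kubelet`):

```shell
# Hypothetical drop-in; the Environment variable name must match one
# that your kubelet unit's ExecStart line already expands.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/20-cgroups.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```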

I use this solution on RHEL 7, with Kubelet 1.8.0 and Docker 1.12.


mer*_*rea 11

angeloxx's workaround also works on kops' default AWS image (k8s-1.8-debian-jessie-amd64-hvm-ebs-2017-12-02 (ami-bd229ec4)):

sudo vim /etc/sysconfig/kubelet

Add at the end of the DAEMON_ARGS string:

 --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
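If you prefer to script that edit, here is a sketch of the same change. It deliberately works on a copy (or a sample file when /etc/sysconfig/kubelet is absent) so it is safe to try anywhere; on the real node you would run the same sed against /etc/sysconfig/kubelet with sudo, after backing it up:

```shell
# Work on a copy (or a sample file) so the example is safe to run;
# the sed appends the two cgroup flags just before the closing quote
# of the DAEMON_ARGS="..." line.
cp /etc/sysconfig/kubelet kubelet.sysconfig 2>/dev/null \
  || printf 'DAEMON_ARGS="--v=2"\n' > kubelet.sysconfig
sed -i 's|^\(DAEMON_ARGS=.*\)"$|\1 --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"|' kubelet.sysconfig
grep 'DAEMON_ARGS' kubelet.sysconfig
```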

And finally:

sudo systemctl restart kubelet

  • On CentOS 7 I had to edit a different file instead: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (2 upvotes)