Posts by use*_*081

Invalid x509 certificate for the Kubernetes master

I'm trying to reach my k8s master from my workstation. I can access the master from inside the LAN, but not from my workstation. The error message is:

% kubectl --context=employee-context get pods
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.161.233.80, not 114.215.201.87

How do I add 114.215.201.87 to the certificate? Do I need to delete the old cluster ca.crt, recreate it, restart the whole cluster, and then re-sign the client certificates? I deployed my cluster with kubeadm and I'm not sure how to perform these steps manually.
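The cluster CA does not need to be recreated; only the apiserver serving certificate has to be re-issued with the extra SAN. A minimal sketch, assuming a recent kubeadm (v1.15+; older releases spelled this `kubeadm alpha phase certs apiserver`) and the default /etc/kubernetes/pki layout:

```shell
# Back up and remove only the apiserver serving cert; the cluster CA stays.
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.bak
mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.bak

# Re-issue the cert with the public IP as an extra SAN.
kubeadm init phase certs apiserver --apiserver-cert-extra-sans=114.215.201.87

# Restart the kube-apiserver container so it picks up the new certificate.
```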

kubernetes kubeadm

21 votes · 5 answers · 20k views

Not authorized for query on admin.system.namespaces in MongoDB

I start a new mongo instance, create a user, and grant it privileges, but when I run "show collections" the shell says the user is not authorized. Why is that?

# mongo admin
MongoDB shell version: 2.4.3
connecting to: admin
Server has startup warnings:
Thu May 23 18:23:56.735 [initandlisten]
Thu May 23 18:23:56.735 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
Thu May 23 18:23:56.735 [initandlisten] **       32 bit builds are limited to less than 2GB of data (or less with --journal).
Thu May 23 18:23:56.735 [initandlisten] **       See http://dochub.mongodb.org/core/32bit
Thu May 23 18:23:56.735 [initandlisten]
> db = db.getSiblingDB("admin")
admin
> db.addUser({user:"sa",pwd:"sa",roles:["userAdminAnyDatabase"]})
{
        "user" : "sa", …
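In MongoDB 2.4, userAdminAnyDatabase only grants user management, not data access, so `show collections` fails even after authenticating as sa. A hedged sketch (the "reader" account is illustrative) that adds a read-capable user:

```shell
# Authenticate as the user admin, then create a user with a read role;
# "userAdminAnyDatabase" by itself cannot list collections in 2.4.
mongo admin --eval 'db.auth("sa", "sa");
    db.addUser({user: "reader", pwd: "reader", roles: ["readAnyDatabase"]});'

# Reconnect with the read-capable user and list collections.
mongo admin -u reader -p reader --eval 'printjson(db.getCollectionNames())'
```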

linux mongodb

13 votes · 4 answers · 30k views

Kubernetes cluster master node not ready

I don't know why, but my master node is in the NotReady state even though all pods on the cluster are running normally. I'm using Kubernetes v1.7.5, the network plugin is calico, and the OS version is "centos7.2.1511".

# kubectl get nodes
NAME        STATUS     AGE       VERSION
k8s-node1   Ready      1h        v1.7.5
k8s-node2   NotReady   1h        v1.7.5




# kubectl get all --all-namespaces
NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE
kube-system   po/calico-node-11kvm                           2/2       Running   0          33m
kube-system   po/calico-policy-controller-1906845835-1nqjj   1/1       Running   0          33m
kube-system   po/calicoctl                                   1/1       Running   0          33m
kube-system   po/etcd-k8s-node2                              1/1       Running   1          15m
kube-system   po/kube-apiserver-k8s-node2                    1/1       Running   1          15m
kube-system   po/kube-controller-manager-k8s-node2           1/1       Running   2          15m
kube-system   po/kube-dns-2425271678-2mh46                   3/3       Running   0          1h
kube-system   po/kube-proxy-qlmbx                            1/1       Running   1          1h
kube-system   po/kube-proxy-vwh6l                            1/1 …
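When a node reports NotReady, the kubelet's node conditions and events usually name the cause (a missing or broken CNI configuration is common with calico). A sketch of the first diagnostic steps, using the NotReady node from the listing above:

```shell
# Show the node's conditions (NetworkUnavailable, DiskPressure, Ready, ...)
# and recent events; CNI problems usually surface here.
kubectl describe node k8s-node2

# On the affected node, check the kubelet log for CNI/calico errors.
journalctl -u kubelet --no-pager | tail -n 50
```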

kubernetes

9 votes · 2 answers · 20k views

Starting etcd fails with "bind: cannot assign requested address"

I run etcd as a docker container, where 10.132.41.234 is the IP of the host running the container. I get the error message below and I don't know whether my setup is correct. I'm not yet familiar with etcd; can anyone help? Thanks!

2017-09-13 08:55:03.339612 I | etcdmain: etcd Version: 3.0.17
2017-09-13 08:55:03.339891 I | etcdmain: Git SHA: cc198e2
2017-09-13 08:55:03.339902 I | etcdmain: Go Version: go1.6.4
2017-09-13 08:55:03.339912 I | etcdmain: Go OS/Arch: linux/amd64
2017-09-13 08:55:03.339921 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2017-09-13 08:55:03.340059 I | etcdmain: peerTLS: cert = /etc/ssl/certs/server.pem, key = /etc/ssl/certs/server-key.pem, ca = , trusted-ca = /etc/ssl/certs/ca.pem, client-cert-auth …
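Inside a container with port mapping, the host IP 10.132.41.234 is not a local address, so etcd cannot bind to it. The usual pattern is to listen on 0.0.0.0 inside the container and only advertise the host IP. A simplified sketch (TLS certificates and volume mounts omitted):

```shell
# Bind inside the container to 0.0.0.0; advertise the host IP to clients
# and peers. The host address itself is not assignable in the container's
# network namespace unless --net=host is used.
docker run -d --name etcd -p 2379:2379 -p 2380:2380 \
  quay.io/coreos/etcd:v3.0.17 \
  /usr/local/bin/etcd \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://10.132.41.234:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --initial-advertise-peer-urls http://10.132.41.234:2380
```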

etcd3

5 votes · 1 answer · 8854 views

coredns cannot resolve service names correctly

I'm using Kubernetes v1.11.3, which uses coredns to resolve hosts and service names, but I find that inside a pod, resolution does not work properly.

# kubectl get services --all-namespaces -o wide
NAMESPACE     NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE       SELECTOR
default       kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP          50d       <none>
kube-system   calico-etcd   ClusterIP   10.96.232.136   <none>        6666/TCP         50d       k8s-app=calico-etcd
kube-system   kube-dns      ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP    50d       k8s-app=kube-dns
kube-system   kubelet       ClusterIP   None            <none>        10250/TCP        32d       <none>
testalex      grafana       NodePort    10.96.51.173    <none>        3000:30002/TCP   2d        app=grafana
testalex      k8s-alert     NodePort    10.108.150.47   <none>        9093:30093/TCP   13m       app=alertmanager
testalex      prometheus    NodePort    10.96.182.108   <none>        9090:30090/TCP   16m       app=prometheus

The following command gets no response:

# …
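A quick way to check in-cluster DNS is to resolve a service name from a throwaway pod. A hedged sketch; busybox:1.28 is chosen because nslookup in later busybox releases is known to misbehave:

```shell
# Resolve a service FQDN through the cluster DNS from inside a pod.
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local

# If that fails, check the CoreDNS pods and their logs.
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns
```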

kubernetes coredns

5 votes · 1 answer · 10k views

How do I make calico use the K8s etcd?

I read the calico docs, which say calico will start an etcd instance at startup, but I noticed that a K8s cluster already starts an etcd pod when the cluster comes up. I want calico to use that etcd node, so I did the following:

Testing with calicoctl, I create a config file:

# cat myconfig.yml
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: etcdv3
  etcdEndpoints: https://10.100.1.20:2379
  etcdKeyFile: /etc/kubernetes/pki/etcd/server.key
  etcdCertFile: /etc/kubernetes/pki/etcd/server.crt
  etcdCACertFile: /etc/kubernetes/pki/etcd/ca.crt

The etcd configuration comes from /etc/kubernetes/manifests/etcd.yaml:

# cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://127.0.0.1:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://127.0.0.1:2380
    - --initial-cluster=t-k8s-a1=https://127.0.0.1:2380 …
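With the CalicoAPIConfig above, calicoctl can be pointed at the kubeadm-managed etcd. Note one likely mismatch: the manifest only advertises https://127.0.0.1:2379, so the https://10.100.1.20:2379 endpoint in myconfig.yml may not be served; either run calicoctl on the master against 127.0.0.1 or change etcd's advertise URL. A sketch:

```shell
# Query the kubeadm etcd through the config file created above.
calicoctl get nodes --config=myconfig.yml
```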

calico kubernetes

4 votes · 1 answer · 3669 views

What does "argv['A']" mean in C?

I found the following code, but I don't understand what it is or how it works. I've only ever seen argv[n] (argv with an integer index) in C, never anything like argv['A'].

if(argc != 100) return 0;
if(strcmp(argv['A'],"\x00")) return 0;
if(strcmp(argv['B'],"\x20\x0a\x0d")) return 0;
printf("Stage 1 clear!\n");

What does this do? Can you explain why it works?
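In C, a character constant such as 'A' is just an int (65 in ASCII), so argv['A'] is argv[65] and argv['B'] is argv[66]; the argc == 100 check guarantees indexes 0..99 are all valid argv entries. The character-code correspondence can be checked from the shell:

```shell
# POSIX printf: a leading quote yields the character's numeric code.
printf '%d\n' "'A"   # prints 65 - argv['A'] in C is simply argv[65]
printf '%d\n' "'B"   # prints 66 - argv['B'] is argv[66]
```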

c

3 votes · 1 answer · 878 views

Why do pods remain in 'pending' status?

I am very confused about why my pods are staying in pending status.

Vitess seems to have a problem scheduling the vttablet pods on the nodes. I built a 2-worker-node Kubernetes cluster (nodes A & B) and started vttablets on it, but only two vttablets start normally; the other three stay in the Pending state.

When I allow the master node to schedule pods, the three pending vttablets all start on the master (they error at first, then run normally), and I create …
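Pending means the scheduler could not place the pod; `kubectl describe pod` prints the reason in its Events section (insufficient CPU/memory, node taints, unbound volume claims). A sketch, with placeholder names for the pod and node:

```shell
# The Events section at the bottom explains why scheduling failed.
kubectl describe pod <pending-vttablet-pod>

# If the master should accept workloads, remove its NoSchedule taint
# (the taint key shown is the common pre-1.24 one; it varies by version).
kubectl taint nodes <master-node> node-role.kubernetes.io/master:NoSchedule-
```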

kubernetes vitess

2 votes · 1 answer · 971 views

Tag statistics

kubernetes ×5

c ×1

calico ×1

coredns ×1

etcd3 ×1

kubeadm ×1

linux ×1

mongodb ×1

vitess ×1