Deploying k8s from binaries (v1.18 - containerd)

Preface

Goal: deploy an experimental-use Kubernetes cluster from binary releases.

Scope: an experimental write-up -- no high availability, etcd is single-node too; one master, two worker nodes.

System: CentOS 7.2 (Alibaba Cloud ECS; no security groups, iptables, or SELinux; time synchronized; 2 CPU / 4 GB)

Server plan (also written into each node's hosts file):

172.17.89.23  master     etcd / kube-apiserver / kube-controller-manager / kube-scheduler
172.17.89.17  k8s-node7  containerd / kubelet / kube-proxy
172.17.89.7   k8s-node8  containerd / kubelet / kube-proxy

Software versions:

- Kubernetes v1.18.3
- etcd 3.3.11
- containerd 1.4.4
- flannel v0.13.0
- CNI plugins v0.9.1
- cfssl R1.2

Add kernel parameters (all nodes):

$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sysctl --system

In the code blocks below, a `$` prompt marks a command followed by output or continuation lines (its absence means a bare command), `#` marks a comment, and everything is generally run as root.

Operate on the Master

1. Install etcd

$ yum install etcd
$ systemctl start etcd
$ systemctl enable etcd

$ etcd --version
etcd Version: 3.3.11
Git SHA: 2cf9e51
Go Version: go1.10.3
Go OS/Arch: linux/amd64

$ etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy

Yes, a single-node etcd: it listens on 127.0.0.1:2379 with no certificate authentication.

2. Install the cfssl certificate-generation tools

wget -c https://static.saintic.com/download/k8s/cfssl-R1.2.tar.gz
tar zxf cfssl-R1.2.tar.gz
chmod +x cfssl cfssljson cfssl-certinfo
mv cfssl cfssljson cfssl-certinfo /usr/bin/

3. Generate the k8s certificates

# Self-signed certificate authority (CA)
$ mkdir -p ~/TLS/k8s && cd ~/TLS/k8s
$ cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

$ cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[INFO] generating a new CA key and certificate from CSR
[INFO] generate received request
[INFO] received CSR
[INFO] generating key: rsa-2048
[INFO] encoded CSR
[INFO] signed certificate with serial number 218586274008070489012647351630930893980576584553

$ ls *pem
ca-key.pem  ca.pem

# Sign the kube-apiserver HTTPS certificate with the self-signed CA
# The hosts field must list every Master/LB/VIP IP -- not one may be missing! A few spare IPs can be added to make later scale-out easier
$ cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "172.17.89.23",
      "172.17.89.17",
      "172.17.89.7",
      "172.17.89.10",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
[INFO] generate received request
[INFO] received CSR
[INFO] generating key: rsa-2048
[INFO] encoded CSR
[INFO] signed certificate with serial number 276237244257869550803211619152022588342922126725

$ ls server*pem
server-key.pem  server.pem
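It is worth confirming that the SANs really made it into server.pem (e.g. with cfssl-certinfo -cert server.pem, or plain openssl). The snippet below is a self-contained demo against a throwaway certificate so it can be run anywhere; in practice, point the final openssl command at server.pem. The -addext option assumes OpenSSL 1.1.1 or newer.

```shell
# Create a throwaway self-signed cert with SANs (demo stand-in for server.pem),
# then print the SANs back out. Requires OpenSSL >= 1.1.1 for -addext.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem -out demo.pem \
  -days 1 -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:10.0.0.1,IP:127.0.0.1,DNS:kubernetes.default.svc.cluster.local"

# Show the Subject Alternative Name extension -- every IP/DNS name from the
# hosts list should appear here.
openssl x509 -in demo.pem -noout -text | grep -A1 "Subject Alternative Name"
```

Run the same `openssl x509 ... | grep` pipeline against server.pem and compare the output with the hosts field of server-csr.json.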

4. Download and install the Kubernetes binaries

cd /usr/local/src
wget -c https://static.saintic.com/download/k8s/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
mkdir -p /opt/k8s/{bin,cfg,ssl,logs}
cp kube-apiserver kube-scheduler kube-controller-manager /opt/k8s/bin
cp kubectl /usr/bin/

5. Deploy the kube-apiserver component

cat > /opt/k8s/cfg/kube-apiserver.conf << 'EOF'
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--etcd-servers=http://localhost:2379 \
--bind-address=172.17.89.23 \
--secure-port=6443 \
--advertise-address=172.17.89.23 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/k8s/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/k8s/ssl/server.pem \
--kubelet-client-key=/opt/k8s/ssl/server-key.pem \
--tls-cert-file=/opt/k8s/ssl/server.pem \
--tls-private-key-file=/opt/k8s/ssl/server-key.pem \
--client-ca-file=/opt/k8s/ssl/ca.pem \
--service-account-key-file=/opt/k8s/ssl/ca-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/k8s/logs/k8s-audit.log"
EOF

Option notes:

--etcd-servers: etcd endpoint(s)

--bind-address: listen address

--advertise-address: address advertised to the cluster

--service-cluster-ip-range: virtual IP range for Services

--enable-admission-plugins: admission control plugins

--authorization-mode: authorization modes; enables RBAC and node self-management

--enable-bootstrap-token-auth: enables the TLS bootstrap mechanism

--token-auth-file: bootstrap token file
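Two of these ranges interact: the clusterDNS address later handed to kubelet (10.0.0.2) must fall inside --service-cluster-ip-range, and the service range must not overlap the pod CIDR (10.244.0.0/16). For a /24 service range a plain-shell containment check is enough; in_24 below is a hypothetical helper written for this sketch, not a k8s tool.

```shell
# in_24 IP NETWORK -- prints "inside" if IP shares the first three octets
# with the /24 network, "outside" otherwise. Only valid for /24 ranges.
in_24() { [ "${1%.*}" = "${2%.*}" ] && echo inside || echo outside; }

in_24 10.0.0.2   10.0.0.0   # clusterDNS vs service range
in_24 10.244.0.2 10.0.0.0   # a pod IP vs service range
```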

Copy the certificates

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/k8s/ssl/

Enable the TLS Bootstrapping mechanism

# Generate a token
$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
cf2cdbdd07d06094e598d83d0b89006b

# Format: token,username,UID,group
$ cat > /opt/k8s/cfg/token.csv << EOF
cf2cdbdd07d06094e598d83d0b89006b,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
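The two steps above can be combined and the resulting file sanity-checked. A sketch -- it writes to ./token.csv rather than /opt/k8s/cfg/token.csv, and whichever token ends up in the file must also be used in the bootstrap kubeconfig later.

```shell
# Generate a fresh 32-hex-char token and write the bootstrap csv in one go.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > token.csv

# Sanity check: exactly 4 comma-separated fields, first one a 32-char hex token.
awk -F, 'NF==4 && $1 ~ /^[0-9a-f]{32}$/ {print "token.csv format OK"}' token.csv
```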

Manage the apiserver with systemd

$ cat > /lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-apiserver.conf
ExecStart=/opt/k8s/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Start it and enable it at boot
$ systemctl daemon-reload
$ systemctl enable kube-apiserver
$ systemctl start kube-apiserver

Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

6. Deploy the kube-controller-manager component

cat > /opt/k8s/cfg/kube-controller-manager.conf << 'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/k8s/ssl/ca.pem \
--cluster-signing-key-file=/opt/k8s/ssl/ca-key.pem  \
--root-ca-file=/opt/k8s/ssl/ca.pem \
--service-account-private-key-file=/opt/k8s/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Manage controller-manager with systemd

$ cat > /lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-controller-manager.conf
ExecStart=/opt/k8s/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Start it and enable it at boot
$ systemctl daemon-reload
$ systemctl start kube-controller-manager
$ systemctl enable kube-controller-manager

7. Deploy the kube-scheduler component

cat > /opt/k8s/cfg/kube-scheduler.conf << 'EOF'
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF

Manage the scheduler with systemd

$ cat > /usr/lib/systemd/system/kube-scheduler.service << 'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-scheduler.conf
ExecStart=/opt/k8s/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Start it and enable it at boot
$ systemctl daemon-reload
$ systemctl start kube-scheduler
$ systemctl enable kube-scheduler

8. Check cluster status

# Output like the following means the Master components are running normally
$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok

9. Prepare, on the Master, what the Worker environment needs

Generate the kubeconfig file that bootstraps kubelet's first join

KUBE_CONFIG="/opt/k8s/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://172.17.89.23:6443"
TOKEN="cf2cdbdd07d06094e598d83d0b89006b"   # keep consistent with token.csv

# Generate the kubelet bootstrap kubeconfig file
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
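The four commands produce a file of roughly this shape (abridged -- certificate-authority-data is a long base64 blob of ca.pem):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 of ca.pem>
    server: https://172.17.89.23:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
users:
- name: kubelet-bootstrap
  user:
    token: cf2cdbdd07d06094e598d83d0b89006b
```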

Generate the kube-proxy.kubeconfig file

cd ~/TLS/k8s

# Create the certificate signing request file
$ cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Generate the kubeconfig file
KUBE_CONFIG="/opt/k8s/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://172.17.89.23:6443"   # can be skipped in the same terminal, already set above

$ kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Operate on the Worker Node

The Master is not used as a Worker here. The following steps run on the k8s-node7 node; if resources are tight they can also run on the Master.

$ mkdir -p /opt/k8s/{bin,cfg,ssl,logs}

# Sync kubelet and kube-proxy from kubernetes/server/bin/ on the Master to the Worker
$ cp kubelet kube-proxy /opt/k8s/bin/

# Sync the Node files generated on the Master (under /opt/k8s/cfg/) to the Worker
$ cp bootstrap.kubeconfig kube-proxy.kubeconfig /opt/k8s/cfg/

# Sync ca.pem from the Master (under /opt/k8s/ssl/) to the Worker
$ cp ca.pem /opt/k8s/ssl/

1. Install containerd

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd.io
containerd config default | sudo tee /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause#docker.io/staugur/pause#' /etc/containerd/config.toml
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd

// (Optional) install the crictl tool
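A sketch of that optional step, assuming the crictl binary is downloaded from the kubernetes-sigs/cri-tools GitHub releases page: crictl reads /etc/crictl.yaml, which just needs to point at the same containerd socket kubelet will use below:

```yaml
# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
```

After that, crictl ps and crictl images work against containerd much like their docker counterparts.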

2. Deploy kubelet

cat > /opt/k8s/cfg/kubelet.conf << 'EOF'
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--hostname-override=k8s-node7 \
--kubeconfig=/opt/k8s/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/k8s/cfg/bootstrap.kubeconfig \
--config=/opt/k8s/cfg/kubelet-config.yml \
--cert-dir=/opt/k8s/ssl \
--pod-infra-container-image=docker.io/staugur/pause:3.2 \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock"
EOF

Option notes:

--hostname-override: display name, unique within the cluster

--kubeconfig: empty path; generated automatically and later used to connect to the apiserver

--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver

--config: configuration parameter file

--cert-dir: directory where kubelet certificates are generated

--pod-infra-container-image: image of the container that manages the Pod network

--container-runtime: container runtime; the default is still docker, so it must be set to remote

--container-runtime-endpoint: endpoint of the remote runtime service, i.e. containerd's sock path

Configuration parameter file

cat > /opt/k8s/cfg/kubelet-config.yml << 'EOF'
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/k8s/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

Manage kubelet with systemd

$ cat > /lib/systemd/system/kubelet.service << 'EOF'
[Unit]
Description=Kubernetes Kubelet
After=containerd.service

[Service]
EnvironmentFile=/opt/k8s/cfg/kubelet.conf
ExecStart=/opt/k8s/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Start it and enable it at boot
$ systemctl daemon-reload
$ systemctl start kubelet
$ systemctl enable kubelet

Approve the kubelet certificate request and join the cluster

Switch to the Master node for this step!

# Check pending kubelet certificate requests
$ kubectl get csr
NAME           AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-xxx   2m26s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
$ kubectl certificate approve node-csr-xxx

# Check the node (it will be NotReady because no network plugin is deployed yet)
$ kubectl get node
NAME        STATUS     ROLES    AGE    VERSION
k8s-node7   NotReady   <none>   3m6s   v1.18.3

3. Deploy kube-proxy

cat > /opt/k8s/cfg/kube-proxy.conf << 'EOF'
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--config=/opt/k8s/cfg/kube-proxy-config.yml"
EOF

Configuration parameter file

cat > /opt/k8s/cfg/kube-proxy-config.yml << 'EOF'
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/k8s/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node7
clusterCIDR: 10.244.0.0/16   # pod CIDR, matching --cluster-cidr on controller-manager
EOF

Manage kube-proxy with systemd

$ cat > /lib/systemd/system/kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-proxy.conf
ExecStart=/opt/k8s/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Start it and enable it at boot
$ systemctl daemon-reload
$ systemctl start kube-proxy
$ systemctl enable kube-proxy

4. Install cni-plugins

mkdir -p /opt/cni/bin
version="0.9.1"
wget -c https://github.com/containernetworking/plugins/releases/download/v${version}/cni-plugins-linux-amd64-v${version}.tgz
tar zxf cni-plugins-linux-amd64-v${version}.tgz -C /opt/cni/bin/
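The containernetworking/plugins release page publishes .sha256 files alongside the tarballs, so verifying the download is one extra command. The sha256sum -c pattern, shown here self-contained with stand-in file names (in practice download the real cni-plugins-...tgz.sha256 asset and run only the final command):

```shell
# Self-contained demo of checksum verification; plugins.tgz and its .sha256
# file are stand-ins created on the spot.
echo "demo archive" > plugins.tgz            # stand-in for the real tarball
sha256sum plugins.tgz > plugins.tgz.sha256   # stand-in for the published file
sha256sum -c plugins.tgz.sha256              # reports OK when the hash matches
```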

At this point the Worker node is essentially done; to add more nodes, repeat this part of the procedure.

// (Omitted) install the k8s-node8 node the same way; remember to change the hostname (hostname-override)


Deploy the CNI network and other optional components (on the Master)

Deploy the flannel network component

flannel is used, deployed as pods (version v0.13)

KUBE_FLANNEL=https://raw.githubusercontent.com/flannel-io/flannel/v0.13.0/Documentation/kube-flannel.yml
kubectl apply -f $KUBE_FLANNEL
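No edits to kube-flannel.yml are needed here because the manifest's default pod network already matches the --cluster-cidr configured on kube-controller-manager earlier; the relevant fragment of the manifest's ConfigMap:

```yaml
# Fragment of kube-flannel.yml (cni/net-conf ConfigMap data)
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```

If a different pod CIDR were used, both this fragment and --cluster-cidr would have to change together.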

After the pods are deployed, kubectl get node should report Ready

$ kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
k8s-node7   Ready    <none>   13m   v1.18.3

Authorize the apiserver to access kubelet

This lets the apiserver call the kubelet API on the nodes

# Can live in a central yaml directory
$ cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

$ kubectl apply -f apiserver-to-kubelet-rbac.yaml

Deploy the CoreDNS component

CoreDNS handles Service name resolution inside the cluster.

# Install the jq dependency
$ yum install jq -y

# Download the CoreDNS deployment project
$ git clone https://github.com/coredns/deployment.git
$ cd deployment/kubernetes
$ ./deploy.sh -i 10.0.0.2 | kubectl apply -f -

# Check the result
$ kubectl get svc -n kube-system -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   3m26s   k8s-app=kube-dns

$ kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
coredns-6ff445f54-cg2zj   1/1     Running   0          9m16s   10.244.0.2   k8s-node7   <none>           <none>

By default the script reads the cluster IP of an existing kube-dns service, but since kube-dns was never deployed here, a cluster IP (the address planned for kubelet's clusterDNS earlier) must be specified by hand, otherwise it errors out.

Wrap-up

The overall file layout and service status at this point:

# On the Master
$ tree /opt/k8s/
/opt/k8s/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   └── token.csv
├── logs
│   ├── ignore...
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
# On a Worker (k8s-node7)
$ tree /opt/k8s/
/opt/k8s/
├── bin
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kubelet.kubeconfig
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   └── kube-proxy.kubeconfig
├── logs
│   └── ignore...
└── ssl
    ├── ca.pem
    ├── kubelet-client-2021-05-11-17-14-05.pem
    ├── kubelet-client-current.pem -> /opt/k8s/ssl/kubelet-client-2021-05-11-17-14-05.pem
    ├── kubelet.crt
    └── kubelet.key

$ tree /opt/cni
/opt/cni/
└── bin
    ├── bandwidth
    ├── bridge
    ├── dhcp
    ├── firewall
    ├── flannel
    ├── host-device
    ├── host-local
    ├── ipvlan
    ├── loopback
    ├── macvlan
    ├── portmap
    ├── ptp
    ├── sbr
    ├── static
    ├── tuning
    ├── vlan
    └── vrf
$ kubectl get cs,node -o wide
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/scheduler            Healthy   ok                  

NAME             STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
node/k8s-node7   Ready    <none>   18h    v1.18.3   172.17.89.17   <none>        CentOS Linux 7 (Core)   3.10.0-514.6.2.el7.x86_64   containerd://1.4.4
node/k8s-node8   Ready    <none>   116m   v1.18.3   172.17.89.7    <none>        CentOS Linux 7 (Core)   3.10.0-514.6.2.el7.x86_64   containerd://1.4.4

$ kubectl get pod -n kube-system -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
coredns-6ff445f54-cg2zj   1/1     Running   0          9m45s   10.244.0.2     k8s-node7   <none>           <none>
kube-flannel-ds-7cssx     1/1     Running   0          18h     172.17.89.17   k8s-node7   <none>           <none>
kube-flannel-ds-nj6tj     1/1     Running   2          114m    172.17.89.7    k8s-node8   <none>           <none>

$ kubectl get svc --all-namespaces 
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP                  18h
kube-system   kube-dns     ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   10m
This article was AMP-transcoded by Readfog; copyright belongs to the original author.
Source: https://mp.weixin.qq.com/s/o7YvS4K4C87KcPyPUCSRFA