Kubernetes binary installation


I. Lab environment

OS         | Hostname | IP              | Specs            | Services running                                                                                               | Role
CentOS 7.4 | master01 | 192.168.100.202 | 4 GB RAM, 2 CPUs | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, kube-nginx, flannel  | master node
CentOS 7.4 | master02 | 192.168.100.203 | 4 GB RAM, 2 CPUs | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, kube-nginx, flannel  | master node
CentOS 7.4 | worker01 | 192.168.100.205 | 2 GB RAM, 1 CPU  | docker, etcd, kubelet, proxy, flannel                                                                          | worker node
CentOS 7.4 | worker02 | 192.168.100.206 | 2 GB RAM, 1 CPU  | docker, etcd, kubelet, proxy, flannel                                                                          | worker node

All of the hosts above use a bridged NIC. The virtual IP is 192.168.100.204; two masters are deployed to make the Kubernetes control plane highly available.

II. Procedure

1. Basic configuration

These operations must be performed on all four servers.

#master01
[root@Centos7 ~]# hostnamectl set-hostname master01
[root@Centos7 ~]# su
[root@master01 ~]# cat <<aaa>> /etc/hosts
192.168.100.202 master01
192.168.100.203 master02
192.168.100.205 worker01
192.168.100.206 worker02
aaa
#master02
[root@Centos7 ~]# hostnamectl set-hostname master02
[root@Centos7 ~]# su
[root@master02 ~]# cat <<aaa>> /etc/hosts
> 192.168.100.202 master01
> 192.168.100.203 master02
> 192.168.100.205 worker01
> 192.168.100.206 worker02
> aaa
#worker01
[root@Centos7 ~]# hostnamectl set-hostname worker01
[root@Centos7 ~]# su
[root@worker01 ~]# cat <<aaa>> /etc/hosts
> 192.168.100.202 master01
> 192.168.100.203 master02
> 192.168.100.205 worker01
> 192.168.100.206 worker02
> aaa
#worker02
[root@Centos7 ~]# hostnamectl set-hostname worker02
[root@Centos7 ~]# su
[root@worker02 ~]# cat <<aaa>> /etc/hosts
> 192.168.100.202 master01
> 192.168.100.203 master02
> 192.168.100.205 worker01
> 192.168.100.206 worker02
> aaa

2. Write an initialization script

All operations in step 2 are performed on master01 only!

#Write the script on master01
[root@master01 ~]# vim k8sinit.sh
#!/bin/sh
#****************************************************************#
# ScriptName: k8sinit.sh
# Initialize the machine. This needs to be executed on every machine.
# Mkdir k8s directory
yum -y install wget ntpdate && ntpdate ntp1.aliyun.com
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
 yum -y install epel-release
mkdir -p /opt/k8s/bin/
mkdir -p /data/k8s/k8s
mkdir -p /data/k8s/docker
# Disable the SELinux.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Turn off and disable the firewalld.
systemctl stop firewalld
systemctl disable firewalld
# Modify related kernel parameters & Disable the swap.
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf >&/dev/null
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
modprobe br_netfilter

# Add ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

# Install rpm
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel vim
# ADD k8s bin to PATH
echo 'export PATH=/opt/k8s/bin:$PATH' >> /root/.bashrc
#Save and exit
[root@master01 ~]# chmod +x k8sinit.sh
#Configure passwordless SSH from master01 to the other hosts
[root@master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:hjslVhnFN3ZeWAhJR0xXQavf1L1OyF0L2USEqoELTgo root@master01
The key's randomart image is:
+---[RSA 2048]----+
|        .o..o*BO*|
|         o. =o*.o|
|        +  o.+ + |
| E   o + . .  * o|
|  . + = S o  + .=|
|   . o * .  . =.=|
|      o      o *.|
|       .      o  |
|               . |
+----[SHA256]-----+
[root@master01 ~]# ssh-copy-id 192.168.100.202
[root@master01 ~]# ssh-copy-id 192.168.100.203
[root@master01 ~]# ssh-copy-id 192.168.100.205
[root@master01 ~]# ssh-copy-id 192.168.100.206
#Write the script that sets the environment variables
[root@master01 ~]# vim environment.sh   #remember to adjust the node IPs and NIC name below; no changes are needed if they match this environment
#!/bin/bash
# Generate the encryption key required by EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Array of master node IPs
export MASTER_IPS=(192.168.100.202 192.168.100.203)

# Hostnames corresponding to the master IPs
export MASTER_NAMES=(master01 master02)

# Array of worker node IPs
export NODE_IPS=(192.168.100.205 192.168.100.206)

# Hostnames corresponding to the worker IPs
export NODE_NAMES=(worker01 worker02)

# IPs of all cluster machines
export ALL_IPS=(192.168.100.202 192.168.100.203 192.168.100.205 192.168.100.206)

# Hostnames corresponding to all cluster IPs
export ALL_NAMES=(master01 master02 worker01 worker02)

# etcd cluster client endpoint list
export ETCD_ENDPOINTS="https://192.168.100.202:2379,https://192.168.100.203:2379"

# IPs and ports used for communication between etcd cluster members
export ETCD_NODES="master01=https://192.168.100.202:2380,master02=https://192.168.100.203:2380"

# Address and port of the kube-apiserver reverse proxy (kube-nginx); use the virtual IP here
export KUBE_APISERVER="https://192.168.100.204:16443"

# Name of the network interface used for inter-node traffic
export IFACE="ens32"

# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"

# etcd WAL directory; an SSD partition, or at least a different partition from ETCD_DATA_DIR, is recommended
export ETCD_WAL_DIR="/data/k8s/etcd/wal"

# Data directory for the Kubernetes components
export K8S_DIR="/data/k8s/k8s"

# Docker data directory
export DOCKER_DIR="/data/k8s/docker"

## The parameters below normally do not need to be changed
# Token used for TLS bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"

# Preferably use currently unused ranges for the service and Pod networks
# Service network: unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.20.0.0/16"

# Pod network; a /16 range is recommended; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="10.10.0.0/16"

# Service port range (NodePort range)
export NODE_PORT_RANGE="1-65535"

# etcd prefix for the flanneld network configuration
export FLANNEL_ETCD_PREFIX="/kubernetes/network"

# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.20.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.20.0.254"

# Cluster DNS domain (without a trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"

# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH
#Save and exit
[root@master01 ~]# chmod +x environment.sh 
[root@master01 ~]# source /root/environment.sh  #run the script
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}; do echo $all_ip; done  #before continuing, run this and check that it prints the IPs of all servers
[root@master01 ~]# ll
total 12
-rw-------. 1 root root 1264 Jan 12 2021 anaconda-ks.cfg
-rwxr-xr-x  1 root root 2470 Aug  5 16:28 environment.sh
-rwxr-xr-x  1 root root 1627 Aug  5 16:19 k8sinit.sh
[root@master01 ~]# source environment.sh   #if not, source the script again
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}; do echo $all_ip; done  #like this: it should print the IPs of all servers
192.168.100.202
192.168.100.203
192.168.100.205
192.168.100.206

#Run this loop to prepare the environment on all four servers in one go
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}   
  do
    echo ">>> ${all_ip}"
    scp -rp /etc/hosts root@${all_ip}:/etc/hosts
    scp -rp k8sinit.sh root@${all_ip}:/root/
    ssh root@${all_ip} "bash /root/k8sinit.sh"
  done

This takes a while to run; make sure the servers have Internet access!
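After the loop finishes, a quick optional spot-check (not part of the original write-up) confirms that the script took effect on every node; the commands below are just one way to do it:
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "swapon -s | wc -l; sysctl -n net.bridge.bridge-nf-call-iptables; lsmod | grep -c ip_vs"
  done   #expect 0 swap entries, 1 for the sysctl, and a non-zero ip_vs module count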

3. Create the CA certificate and key

All operations in step 3 are performed on master01.

#Install the cfssl tool set
[root@master01 ~]# mkdir -p /opt/k8s/cert
[root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /opt/k8s/bin/cfssl   #download cfssl
[root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /opt/k8s/bin/cfssljson #download cfssljson
[root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /opt/k8s/bin/cfssl-certinfo
[root@master01 ~]# chmod u+x /opt/k8s/bin/*
[root@master01 ~]# cd /opt/k8s/bin/
[root@master01 bin]# ll
total 18808
-rwxr--r-- 1 root root 10376657 Aug  6 10:09 cfssl
-rwxr--r-- 1 root root  6595195 Aug  6 10:10 cfssl-certinfo
-rwxr--r-- 1 root root  2277873 Aug  6 10:10 cfssljson
#Create the root certificate config file
[root@master01 bin]# cd
[root@master01 ~]# mkdir -p /opt/k8s/work
[root@master01 ~]#  cd /opt/k8s/work
[root@master01 work]# cfssl print-defaults config > config.json
[root@master01 work]# cfssl print-defaults csr > csr.json
[root@master01 work]# cp config.json ca-config.json
[root@master01 work]# cat > ca-config.json <<EOF
 {
     "signing": {
         "default": {
             "expiry": "876000h"
         },
         "profiles": {
             "kubernetes": {
                 "expiry": "876000h",
                 "usages": [
                     "signing",
                     "key encipherment",
                     "server auth",
                     "client auth"
                 ]
             }
         }
     }
 }
EOF

#Field explanations:
config.json: multiple profiles can be defined, each with its own expiry, usage scenario, and other parameters; a specific profile is referenced later when signing certificates;
•	signing: the certificate can be used to sign other certificates; the generated ca.pem contains CA=TRUE;
•	server auth: clients can use this CA to verify certificates presented by servers;
•	client auth: servers can use this CA to verify certificates presented by clients;
•	"expiry": "876000h": sets the certificate validity to 100 years.
# Create the root certificate signing request file
[root@master01 work]# cp csr.json ca-csr.json
[root@master01 work]# cat > ca-csr.json <<EOF
 {
     "CN": "kubernetes",
     "key": {
         "algo": "rsa",
         "size": 2048
     },
     "names": [
         {
             "C": "CN",
             "ST": "Shanghai",
             "L": "Shanghai",
             "O": "k8s",
             "OU": "System"
         }
     ],
     "ca": {
         "expiry": "876000h"
  }
 }
EOF
#Field explanations:
•	CN: Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to verify whether a site is legitimate;
•	C: country;
•	ST: state;
•	L: city;
•	O: Organization; kube-apiserver extracts this field from the certificate as the Group the requesting user belongs to;
•	OU: organization unit.

#Generate the CA key (ca-key.pem) and certificate (ca.pem)
[root@master01 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca 
2021/08/06 10:15:01 [INFO] generating a new CA key and certificate from CSR
2021/08/06 10:15:01 [INFO] generate received request
2021/08/06 10:15:01 [INFO] received CSR
2021/08/06 10:15:01 [INFO] generating key: rsa-2048
2021/08/06 10:15:02 [INFO] encoded CSR
2021/08/06 10:15:02 [INFO] signed certificate with serial number 671027392584519656097263783341319452729816665502
[root@master01 work]# echo $?
0

Tip: Kubernetes uses mutual TLS, so once the certificates are generated, ca-key.pem and ca.pem can be copied to the /etc/kubernetes/ssl directory of every machine to be deployed. The CN, C, ST, L, O, OU combination in each certificate's CSR file must be unique, otherwise errors such as PEER'S CERTIFICATE HAS AN INVALID SIGNATURE may occur.
In the CSR files created later, only the CN differs (C, ST, L, O, OU stay the same), which is enough to tell the certificates apart.
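As a quick optional check on the new CA (not in the original steps), the cfssl-certinfo binary downloaded earlier can decode it; the exact JSON fields depend on the cfssl version:
[root@master01 work]# cfssl-certinfo -cert ca.pem   #prints the subject, issuer and validity of ca.pem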
#Distribute the certificates
[root@master01 work]# source /root/environment.sh
[root@master01 work]#  for all_ip in ${ALL_IPS[@]};   do     echo ">>> ${all_ip}";     ssh root@${all_ip} "mkdir -p /etc/kubernetes/cert";  scp ca*.pem ca-config.json root@${all_ip}:/etc/kubernetes/cert; done
>>> 192.168.100.202
ca-key.pem                                                                                             100% 1679     1.6MB/s   00:00    
ca.pem                                                                                                 100% 1367    56.8KB/s   00:00    
ca-config.json                                                                                         100%  388    75.1KB/s   00:00    
>>> 192.168.100.203
ca-key.pem                                                                                             100% 1679     1.1MB/s   00:00    
ca.pem                                                                                                 100% 1367     1.5MB/s   00:00    
ca-config.json                                                                                         100%  388   594.7KB/s   00:00    
>>> 192.168.100.205
ca-key.pem                                                                                             100% 1679     1.6MB/s   00:00    
ca.pem                                                                                                 100% 1367     1.4MB/s   00:00    
ca-config.json                                                                                         100%  388   429.7KB/s   00:00    
>>> 192.168.100.206
ca-key.pem                                                                                             100% 1679     1.6MB/s   00:00    
ca.pem                                                                                                 100% 1367     1.5MB/s   00:00    
ca-config.json                                                                                         100%  388   629.1KB/s   00:00    

4. Deploy the etcd cluster

All of step 4 is performed on master01.

#Install etcd
etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and coordination (leader election, distributed locks, and so on). Kubernetes stores all of its runtime data in etcd.
[root@master01 ~]# cd /opt/k8s/work
[root@master01 work]# wget https://github.com/coreos/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 work]# ll
total 11116
-rw-r--r-- 1 root    root       388 Aug  6 10:12 ca-config.json
-rw-r--r-- 1 root    root      1005 Aug  6 10:15 ca.csr
-rw-r--r-- 1 root    root       310 Aug  6 10:13 ca-csr.json
-rw------- 1 root    root      1679 Aug  6 10:15 ca-key.pem
-rw-r--r-- 1 root    root      1367 Aug  6 10:15 ca.pem
-rw-r--r-- 1 root    root       567 Aug  6 10:12 config.json
-rw-r--r-- 1 root    root       287 Aug  6 10:12 csr.json
drwxr-xr-x 3 6810230 users      123 Oct 11 2018 etcd-v3.3.10-linux-amd64
-rw-r--r-- 1 root    root  11353259 Mar 25 2020 etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 work]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
#Note: flanneld (v0.11.0/v0.12.0) does not support etcd v3.4.x, so this deployment uses etcd v3.3.10.
#Distribute etcd to the master nodes
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
   do
     echo ">>> ${master_ip}"
     scp etcd-v3.3.10-linux-amd64/etcd* root@${master_ip}:/opt/k8s/bin
     ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
   done
#Create the etcd certificate and key; create the etcd CA certificate request file
[root@master01 work]#  cat > etcd-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.100.202",
        "192.168.100.203"
  ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
#Explanation:
hosts: the list of etcd node IPs or domain names authorized to use this certificate; every etcd cluster member IP (the two master nodes here) must be listed.

##Generate the key and certificate
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2021/08/06 10:23:29 [INFO] generate received request
2021/08/06 10:23:29 [INFO] received CSR
2021/08/06 10:23:29 [INFO] generating key: rsa-2048
2021/08/06 10:23:29 [INFO] encoded CSR
2021/08/06 10:23:29 [INFO] signed certificate with serial number 613228402925097686112501293991749855067805987177
2021/08/06 10:23:29 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
#Distribute the certificate and private key
[root@master01 work]# source /root/environment.sh
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]};   do     echo ">>> ${master_ip}";     ssh root@${master_ip} "mkdir -p /etc/etcd/cert";     scp etcd*.pem root@${master_ip}:/etc/etcd/cert/;  done
#Create the etcd systemd unit template
[root@master01 work]#  source /root/environment.sh
[root@master01 work]# cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --enable-v2=true \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##MASTER_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##MASTER_IP##:2380 \\
  --initial-advertise-peer-urls=https://##MASTER_IP##:2380 \\
  --listen-client-urls=https://##MASTER_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##MASTER_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
#Explanation:
WorkingDirectory, --data-dir: the working directory and data directory are set to ${ETCD_DATA_DIR}; this directory must be created before starting the service;
--wal-dir: the WAL directory; for better performance, use an SSD or a different disk than --data-dir;
--name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
--cert-file, --key-file: the certificate and private key used for etcd server/client communication;
--trusted-ca-file: the CA certificate that signed the client certificates, used to verify client certificates;
--peer-cert-file, --peer-key-file: the certificate and private key used for etcd peer communication;
--peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify peer certificates.
#Substitute each node's name and IP into the systemd template
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for (( i=0; i < 2; i++ ))
   do
     sed -e "s/##MASTER_NAME##/${MASTER_NAMES[i]}/" -e "s/##MASTER_IP##/${MASTER_IPS[i]}/" etcd.service.template > etcd-${MASTER_IPS[i]}.service
   done
[root@master01 work]# ll
total 11144
-rw-r--r-- 1 root    root       388 Aug  6 10:12 ca-config.json
-rw-r--r-- 1 root    root      1005 Aug  6 10:15 ca.csr
-rw-r--r-- 1 root    root       310 Aug  6 10:13 ca-csr.json
-rw------- 1 root    root      1679 Aug  6 10:15 ca-key.pem
-rw-r--r-- 1 root    root      1367 Aug  6 10:15 ca.pem
-rw-r--r-- 1 root    root       567 Aug  6 10:12 config.json
-rw-r--r-- 1 root    root       287 Aug  6 10:12 csr.json
-rw-r--r-- 1 root    root      1383 Aug  6 10:26 etcd-192.168.100.202.service  #the service files for the two master nodes are generated
-rw-r--r-- 1 root    root      1383 Aug  6 10:26 etcd-192.168.100.203.service
-rw-r--r-- 1 root    root      1058 Aug  6 10:23 etcd.csr
-rw-r--r-- 1 root    root       354 Aug  6 10:21 etcd-csr.json
-rw------- 1 root    root      1679 Aug  6 10:23 etcd-key.pem
-rw-r--r-- 1 root    root      1436 Aug  6 10:23 etcd.pem
-rw-r--r-- 1 root    root      1382 Aug  6 10:25 etcd.service.template
drwxr-xr-x 3 6810230 users      123 Oct 11 2018 etcd-v3.3.10-linux-amd64
-rw-r--r-- 1 root    root  11353259 Mar 25 2020 etcd-v3.3.10-linux-amd64.tar.gz
#Distribute the etcd systemd units
[root@master01 work]#  source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
   do
     echo ">>> ${master_ip}"
     scp etcd-${master_ip}.service root@${master_ip}:/etc/systemd/system/etcd.service
   done
#Start etcd

[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
  done

#Check that etcd started
[root@master01 work]# source /root/environment.sh
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
   do
     echo ">>> ${master_ip}"
     ssh root@${master_ip} "systemctl status etcd|grep Active"
   done
>>> 192.168.100.202
   Active: active (running) since Fri 2021-08-06 10:28:57 CST; 48s ago  #both should show running
>>> 192.168.100.203
   Active: active (running) since Fri 2021-08-06 10:28:57 CST; 48s ago
   
#Verify service health
[root@master01 work]#  source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${master_ip}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
  done
>>> 192.168.100.202
https://192.168.100.202:2379 is healthy: successfully committed proposal: took = 2.190051ms #seeing "successfully" is what matters
>>> 192.168.100.203
https://192.168.100.203:2379 is healthy: successfully committed proposal: took = 1.756794ms

#Check the current etcd leader
[root@master01 work]# source /root/environment.sh
[root@master01 work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.100.202:2379 | 43bba1b27886ffbe |  3.3.10 |   20 kB |     false |         2 |          6 |
| https://192.168.100.203:2379 | c98d893073b7487d |  3.3.10 |   20 kB |      true |         2 |          6 |
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
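As an extra optional check, the cluster membership can be listed with the same client flags; the member IDs shown are whatever your cluster generated:
[root@master01 work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} member list   #should list master01 and master02 with their :2380 peer URLs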

5. Deploy Docker

All of step 5 is performed on master01.

Docker runs and manages containers; the kubelet interacts with it through the Container Runtime Interface (CRI).
#Download Docker
[root@master01 ~]# cd /opt/k8s/work 
[root@master01 work]# wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.12.tgz  
[root@master01 work]# ll
total 70464
-rw-r--r-- 1 root    root       388 Aug  6 10:12 ca-config.json
-rw-r--r-- 1 root    root      1005 Aug  6 10:15 ca.csr
-rw-r--r-- 1 root    root       310 Aug  6 10:13 ca-csr.json
-rw------- 1 root    root      1679 Aug  6 10:15 ca-key.pem
-rw-r--r-- 1 root    root      1367 Aug  6 10:15 ca.pem
-rw-r--r-- 1 root    root       567 Aug  6 10:12 config.json
-rw-r--r-- 1 root    root       287 Aug  6 10:12 csr.json
-rw-r--r-- 1 root    root  60741087 Jul  1 2020 docker-19.03.12.tgz  #this one
-rw-r--r-- 1 root    root      1383 Aug  6 10:26 etcd-192.168.100.202.service
-rw-r--r-- 1 root    root      1383 Aug  6 10:26 etcd-192.168.100.203.service
-rw-r--r-- 1 root    root      1058 Aug  6 10:23 etcd.csr
-rw-r--r-- 1 root    root       354 Aug  6 10:21 etcd-csr.json
-rw------- 1 root    root      1679 Aug  6 10:23 etcd-key.pem
-rw-r--r-- 1 root    root      1436 Aug  6 10:23 etcd.pem
-rw-r--r-- 1 root    root      1382 Aug  6 10:25 etcd.service.template
drwxr-xr-x 3 6810230 users      123 Oct 11 2018 etcd-v3.3.10-linux-amd64
-rw-r--r-- 1 root    root  11353259 Mar 25 2020 etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 work]# tar -xvf docker-19.03.12.tgz
#Install and deploy Docker
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp docker/*  root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done
#Configure the Docker systemd unit
[root@master01 work]# cat > docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
WorkingDirectory=##DOCKER_DIR##
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
#Explanation:
•	EOF is quoted, so bash does not expand variables such as $DOCKER_NETWORK_OPTIONS inside the document (systemd substitutes these environment variables instead);
•	dockerd invokes other Docker commands at runtime, such as docker-proxy, so the directory containing the Docker binaries must be added to PATH;
•	flanneld later writes its network configuration into /run/flannel/docker; dockerd reads the DOCKER_NETWORK_OPTIONS environment variable from that file at startup and uses it to set the docker0 bridge subnet;
•	if multiple EnvironmentFile options are specified, /run/flannel/docker must come last (to ensure docker0 uses the bip parameter generated by flanneld);
•	Docker must run as root;
•	starting with Docker 1.13, the default policy of the iptables FORWARD chain may be set to DROP, which breaks pinging Pod IPs on other nodes. In that case the policy must be set back to ACCEPT manually; it is recommended to add the command below to /etc/rc.local so the policy is not reset to DROP after a node reboot.


[root@master01 work]# for all_ip in ${ALL_IPS[@]} ; do  echo ">>> ${all_ip}"; ssh root@${all_ip} "echo '/sbin/iptables -P FORWARD ACCEPT' >> /etc/rc.local" ;  done
#Distribute the Docker systemd unit
[root@master01 work]# source /root/environment.sh
[root@master01 work]# sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp docker.service root@${all_ip}:/etc/systemd/system/
  done
#Create the Docker daemon configuration file
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > docker-daemon.json <<EOF
{
    "registry-mirrors": ["https://dbzucv6w.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "data-root": "${DOCKER_DIR}/data",
    "exec-root": "${DOCKER_DIR}/exec",
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
      "overlay2.override_kernel_check=true"
  ]
}
EOF
#Distribute the Docker configuration file
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "mkdir -p /etc/docker/ ${DOCKER_DIR}/{data,exec}"
    scp docker-daemon.json root@${all_ip}:/etc/docker/daemon.json
  done
#Start and verify
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
  done

#Check the status
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl status docker|grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since Fri 2021-08-06 10:39:59 CST; 36s ago  #all four nodes should show running
>>> 192.168.100.203
   Active: active (running) since Fri 2021-08-06 10:40:00 CST; 35s ago
>>> 192.168.100.205
   Active: active (running) since Fri 2021-08-06 10:40:02 CST; 33s ago
>>> 192.168.100.206
   Active: active (running) since Fri 2021-08-06 10:40:04 CST; 32s ago
   
#Check the docker0 bridge
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "/usr/sbin/ip addr show docker0"
  done
>>> 192.168.100.202
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:1d:99:54:e1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0   #the docker0 interface on all four nodes should have an IP address
       valid_lft forever preferred_lft forever
>>> 192.168.100.203
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:1c:98:5d:f3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
>>> 192.168.100.205
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:dd:1f:ba:ba brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
>>> 192.168.100.206
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:44:74:0d:84 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

#View Docker information
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "ps -elfH | grep docker | grep -v grep"
  done
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "docker info"
  done
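The full docker info output is long; as an optional shortcut, the two settings that come from daemon.json above can be checked directly (field names as printed by Docker 19.03; adjust the pattern for other versions):
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "docker info 2>/dev/null | grep -E 'Cgroup Driver|Docker Root Dir'"   #expect cgroupfs and /data/k8s/docker/data
  done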

6. Deploy flannel

All operations in step 6 are performed on master01.

Kubernetes requires that all cluster nodes (including the masters) can reach one another over the Pod network. flannel uses VXLAN to build an overlay Pod network between the nodes, using UDP port 8472.
The first time flanneld starts, it fetches the configured Pod network from etcd, allocates an unused subnet for the local node, and creates the flannel.1 network interface (the name may vary, e.g. flannel1).
flannel writes the Pod subnet allocated to the node into /run/flannel/docker; Docker later uses the environment variables in this file to configure the docker0 bridge, so every Pod container on the node gets an IP from that subnet.
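For reference, the file generated by mk-docker-opts.sh usually looks like the sketch below; the subnet and MTU are whatever flanneld allocates on each node, so treat these values as placeholders:
DOCKER_OPT_BIP="--bip=10.10.136.1/21"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.10.136.1/21 --ip-masq=false --mtu=1450"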

#Download flannel
[root@master01 ~]# cd /opt/k8s/work/
[root@master01 work]# mkdir flannel
[root@master01 work]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@master01 work]# ll
total 79816
-rw-r--r-- 1 root    root       388 Aug  6 10:12 ca-config.json
-rw-r--r-- 1 root    root      1005 Aug  6 10:15 ca.csr
-rw-r--r-- 1 root    root       310 Aug  6 10:13 ca-csr.json
-rw------- 1 root    root      1679 Aug  6 10:15 ca-key.pem
-rw-r--r-- 1 root    root      1367 Aug  6 10:15 ca.pem
-rw-r--r-- 1 root    root       567 Aug  6 10:12 config.json
-rw-r--r-- 1 root    root       287 Aug  6 10:12 csr.json
drwxrwxr-x 2    1000  1000      138 Jun 22 2020 docker
-rw-r--r-- 1 root    root  60741087 Jul  1 2020 docker-19.03.12.tgz
-rw-r--r-- 1 root    root       413 Aug  6 10:38 docker-daemon.json
-rw-r--r-- 1 root    root       487 Aug  6 10:37 docker.service
-rw-r--r-- 1 root    root      1383 Aug  6 10:26 etcd-192.168.100.202.service
-rw-r--r-- 1 root    root      1383 Aug  6 10:26 etcd-192.168.100.203.service
-rw-r--r-- 1 root    root      1058 Aug  6 10:23 etcd.csr
-rw-r--r-- 1 root    root       354 Aug  6 10:21 etcd-csr.json
-rw------- 1 root    root      1679 Aug  6 10:23 etcd-key.pem
-rw-r--r-- 1 root    root      1436 Aug  6 10:23 etcd.pem
-rw-r--r-- 1 root    root      1382 Aug  6 10:25 etcd.service.template
drwxr-xr-x 3 6810230 users      123 Oct 11 2018 etcd-v3.3.10-linux-amd64
-rw-r--r-- 1 root    root  11353259 Mar 25 2020 etcd-v3.3.10-linux-amd64.tar.gz
-rw-r--r-- 1 root    root   9565743 Mar 25 2020 flannel-v0.11.0-linux-amd64.tar.gz
[root@master01 work]# tar -xzvf flannel-v0.11.0-linux-amd64.tar.gz -C flannel
#Distribute flannel
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp flannel/{flanneld,mk-docker-opts.sh} root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done
#Create the flannel certificate and key; create the flanneld CA certificate request file
[root@master01 work]# cat > flanneld-csr.json <<EOF
{
    "CN": "flanneld",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
#Explanation:
this certificate is only used as a client certificate (flanneld presents it to etcd), so the hosts field is empty.

#Generate the key and certificate
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
> -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
> -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
#Distribute the certificate and private key
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "mkdir -p /etc/flanneld/cert"
    scp flanneld*.pem root@${all_ip}:/etc/flanneld/cert
  done	
#Write the cluster Pod network information into etcd
[root@master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/opt/k8s/work/ca.pem \
  --cert-file=/opt/k8s/work/flanneld.pem \
  --key-file=/opt/k8s/work/flanneld-key.pem \
  mk ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
#Note: this step only needs to be executed once.
The Pod network ${CLUSTER_CIDR} written here must have a prefix length smaller than SubnetLen (here a /16 network split into /21 subnets) and must match the --cluster-cidr parameter of kube-controller-manager.
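For example, with the values from environment.sh (CLUSTER_CIDR=10.10.0.0/16 and SubnetLen 21), flanneld can hand out 2^(21-16) = 32 per-node subnets such as 10.10.136.0/21, each giving that node 2^(32-21) = 2048 addresses for its Pods and docker0 bridge; these are exactly the subnets that appear later in this step.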
# Create the flanneld systemd unit
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \\
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \\
  -etcd-endpoints=${ETCD_ENDPOINTS} \\
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \\
  -iface=${IFACE} \\
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
#Explanation:
mk-docker-opts.sh: writes the Pod subnet allocated to flanneld into /run/flannel/docker; Docker reads the environment variables in this file at startup to configure the docker0 bridge;
flanneld: uses the interface of the system default route to communicate with other nodes; on nodes with multiple interfaces (e.g. internal and public), the -iface parameter selects the interface to use;
flanneld: must run as root;
-ip-masq: flanneld sets up SNAT rules for traffic leaving the Pod network and sets the --ip-masq variable passed to Docker (in /run/flannel/docker) to false, so Docker no longer creates its own SNAT rules. When Docker's --ip-masq is true, its SNAT rules are rather blunt: every request from a local Pod to any non-docker0 destination is SNATed, so requests to Pods on other nodes appear to come from the flannel.1 interface IP and the destination Pod cannot see the real source Pod IP. The SNAT rules created by flanneld are gentler: only traffic leaving the Pod network is SNATed.
#Distribute the flanneld systemd unit
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp flanneld.service root@${all_ip}:/etc/systemd/system/
  done
#Start and verify
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
  done

#Check that flannel started
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl status flanneld|grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since Fri 2021-08-06 10:52:31 CST; 26s ago #again, all four should show running
>>> 192.168.100.203
   Active: active (running) since Fri 2021-08-06 10:52:32 CST; 24s ago
>>> 192.168.100.205
   Active: active (running) since Fri 2021-08-06 10:52:33 CST; 23s ago
>>> 192.168.100.206
   Active: active (running) since Fri 2021-08-06 10:52:35 CST; 22s ago
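With flanneld running, the SNAT behaviour of -ip-masq described earlier can be seen in the nat table. This is an optional check, and the exact rule text differs between flannel and iptables versions; the point is simply that the MASQUERADE rules are scoped to the 10.10.0.0/16 Pod network rather than "everything leaving docker0":
[root@master01 work]# ssh root@192.168.100.205 "iptables -t nat -S POSTROUTING | grep MASQUERADE"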
   
#Check the Pod network information; view the cluster Pod network (/16)
[root@master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config
{"Network":"10.10.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}  #this network configuration should be printed

#List the Pod subnets that have been allocated (/21)
[root@master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets
/kubernetes/network/subnets/10.10.152.0-21
/kubernetes/network/subnets/10.10.136.0-21
/kubernetes/network/subnets/10.10.192.0-21
/kubernetes/network/subnets/10.10.168.0-21

#Look up the node IP and flannel interface address corresponding to a given Pod subnet
[root@master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/10.10.168.0-21
{"PublicIP":"192.168.100.205","BackendType":"vxlan","BackendData":{"VtepMAC":"66:73:0e:a2:bc:4e"}} #interface information
#Interpreting the output:
10.10.168.0/21 is allocated to node worker01 (192.168.100.205);
VtepMAC is the MAC address of the flannel.1 interface on worker01.

#Check the flannel network information
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
do 
echo ">>> ${all_ip}"
ssh root@${all_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0" 
done
>>> 192.168.100.202
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether ea:7b:96:a4:6a:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.10.136.0/32 scope global flannel.1   #each node's flannel.1 address corresponds to one of the Pod subnets listed above
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:1d:99:54:e1 brd ff:ff:ff:ff:ff:ff
    inet 10.10.136.1/21 brd 10.10.143.255 scope global docker0
       valid_lft forever preferred_lft forever
>>> 192.168.100.203
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether f2:74:0d:58:8e:48 brd ff:ff:ff:ff:ff:ff
    inet 10.10.192.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:1c:98:5d:f3 brd ff:ff:ff:ff:ff:ff
    inet 10.10.192.1/21 brd 10.10.199.255 scope global docker0
       valid_lft forever preferred_lft forever
>>> 192.168.100.205
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 66:73:0e:a2:bc:4e brd ff:ff:ff:ff:ff:ff
    inet 10.10.168.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:dd:1f:ba:ba brd ff:ff:ff:ff:ff:ff
    inet 10.10.168.1/21 brd 10.10.175.255 scope global docker0
       valid_lft forever preferred_lft forever
>>> 192.168.100.206
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 12:a4:53:8b:d3:4f brd ff:ff:ff:ff:ff:ff
    inet 10.10.152.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:44:74:0d:84 brd ff:ff:ff:ff:ff:ff
    inet 10.10.152.1/21 brd 10.10.159.255 scope global docker0
       valid_lft forever preferred_lft forever
#Explanation: the flannel.1 interface uses the first IP (.0) of the node's allocated Pod subnet, as a /32 address.

#View the route information
[root@master01 work]# ip route show |grep flannel.1
10.10.152.0/21 via 10.10.152.0 dev flannel.1 onlink 
10.10.168.0/21 via 10.10.168.0 dev flannel.1 onlink 
10.10.192.0/21 via 10.10.192.0 dev flannel.1 onlink 
#Explanation:
requests to Pod subnets on other nodes are forwarded through the flannel.1 interface;
flanneld uses the subnet entries in etcd, e.g. ${FLANNEL_ETCD_PREFIX}/subnets/10.10.168.0-21, to decide which node's underlay IP a request should be sent to.
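Optionally, the mapping from those routes to the other nodes' underlay addresses can be inspected via the VXLAN forwarding database of flannel.1 (requires the iproute2 bridge tool; output format varies by kernel):
[root@master01 work]# bridge fdb show dev flannel.1   #each entry pairs a peer VtepMAC with the owning node's 192.168.100.x IP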

#Verify flannel on every node
After deploying flannel, check on every node that the flannel interface was created (it may be named flannel0, flannel.0, flannel.1, etc.)
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "/usr/sbin/ip addr show flannel.1 | grep -w inet"
  done
>>> 192.168.100.202
    inet 10.10.136.0/32 scope global flannel.1
>>> 192.168.100.203
    inet 10.10.192.0/32 scope global flannel.1
>>> 192.168.100.205
    inet 10.10.168.0/32 scope global flannel.1
>>> 192.168.100.206
    inet 10.10.152.0/32 scope global flannel.1
    
#Ping all flannel interface IPs from every node and make sure they are reachable; the IPs you ping must match the subnets shown above
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh ${all_ip} "ping -c 1 10.10.136.0"
    ssh ${all_ip} "ping -c 1 10.10.152.0"
    ssh ${all_ip} "ping -c 1 10.10.192.0"
    ssh ${all_ip} "ping -c 1 10.10.168.0"
  done
#As long as the output shows the pings succeed, you are done

7. Deploy master node high availability

All of step 7 is performed on master01. This lab uses keepalived plus an nginx proxy to provide high availability.

#Install keepalived and create the keepalived directories
[root@master01 work]# cd
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh ${master_ip} "mkdir -p /opt/k8s/kube-keepalived/"
    ssh ${master_ip} "mkdir -p /etc/keepalived/"
  done
[root@master01 ~]# cd /opt/k8s/work  
[root@master01 work]# wget http://down.linuxsb.com:8888/software/keepalived-2.0.20.tar.gz
[root@master01 work]# ll | grep keepalived
-rw-r--r-- 1 root    root   1036063 Jul  1 2020 keepalived-2.0.20.tar.gz
[root@master01 work]# tar -zxvf keepalived-2.0.20.tar.gz
[root@master01 work]# cd keepalived-2.0.20/ && ./configure --sysconf=/etc --prefix=/opt/k8s/kube-keepalived/ && make && make install
#Distribute the keepalived binaries
[root@master01 keepalived-2.0.20]# cd ..
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp -rp /opt/k8s/kube-keepalived/ root@${master_ip}:/opt/k8s/
    scp -rp /usr/lib/systemd/system/keepalived.service  root@${master_ip}:/usr/lib/systemd/system/
    ssh ${master_ip} "systemctl daemon-reload && systemctl enable keepalived"
  done						
# Install Nginx
[root@master01 work]# wget http://nginx.org/download/nginx-1.19.0.tar.gz
[root@master01 work]# ll | grep nginx
-rw-r--r-- 1 root    root   1043748 Jul  1 2020 nginx-1.19.0.tar.gz
[root@master01 work]# tar -xzvf nginx-1.19.0.tar.gz
[root@master01 work]# cd /opt/k8s/work/nginx-1.19.0/
[root@master01 nginx-1.19.0]# mkdir nginx-prefix
[root@master01 nginx-1.19.0]# ./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
[root@master01 nginx-1.19.0]# make && make install
#Explanation:
--with-stream: enables layer-4 transparent forwarding (TCP proxy);
--without-xxx: disables all other features, so the resulting binary has minimal dynamic-link dependencies.
[root@master01 nginx-1.19.0]# ./nginx-prefix/sbin/nginx -v
nginx version: nginx/1.19.0  #check the version
#Verify the compiled Nginx and inspect the libraries it links against
[root@master01 nginx-1.19.0]#  ldd ./nginx-prefix/sbin/nginx
	linux-vdso.so.1 =>  (0x00007ffe23cdd000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007f5c49436000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f5c4921a000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f5c48e56000)
	/lib64/ld-linux-x86-64.so.2 (0x0000559ddee42000)
#Tip:
since only layer-4 forwarding is enabled, the binary depends only on core system libraries such as libc, with no dependencies on libz, libssl, and so on, which keeps the build minimal.
#Distribute the Nginx binary
[root@master01 nginx-1.19.0]# cd ..
[root@master01 work]# source /root/environment.sh
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
    scp /opt/k8s/work/nginx-1.19.0/nginx-prefix/sbin/nginx root@${master_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
    ssh root@${master_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
  done		
#Configure the Nginx systemd unit
[root@master01 work]# cat > kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
#Distribute the Nginx systemd unit
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-nginx.service  root@${master_ip}:/etc/systemd/system/
    ssh ${master_ip} "systemctl daemon-reload && systemctl enable kube-nginx.service"
  done
#Create the configuration files
[root@master01 work]# ll | grep config
-rw-r--r--  1 root    root       388 Aug  6 10:12 ca-config.json
drwxr-xr-x  6 root    root        92 Aug  6 11:25 config   #the config directory that was uploaded beforehand
-rw-r--r--  1 root    root       567 Aug  6 10:12 config.json
[root@master01 work]# vim binngkek8s.sh   #adjust the master node IPs, the VIP, and the master node NIC names
#!/bin/sh

echo """
    Remember to upload the config directory first
"""

if [ ! -d config ]
then
    sleep 30
    echo "see the message above..."
	exit 1
fi

#######################################
# set variables below to create the config files, all files will create at ./config directory
#######################################

# master keepalived virtual ip address
export K8SHA_VIP=192.168.100.204

# master01 ip address
export K8SHA_IP1=192.168.100.202

# master02 ip address
export K8SHA_IP2=192.168.100.203


# master01 hostname
export K8SHA_HOST1=master01

# master02 hostname
export K8SHA_HOST2=master02


# master01 network interface name
export K8SHA_NETINF1=ens32

# master02 network interface name
export K8SHA_NETINF2=ens32


# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d

# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0

# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0

##############################
# please do not modify anything below
##############################

mkdir -p config/$K8SHA_HOST1/{keepalived,nginx-lb}
mkdir -p config/$K8SHA_HOST2/{keepalived,nginx-lb}
mkdir -p config/keepalived
mkdir -p config/nginx-lb

# create all keepalived files
chmod u+x config/keepalived/check_apiserver.sh
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST1/keepalived
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST2/keepalived

sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF1}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP1}/g" \
-e "s/K8SHA_KA_PRIO/102/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST1/keepalived/keepalived.conf

sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF2}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP2}/g" \
-e "s/K8SHA_KA_PRIO/101/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST2/keepalived/keepalived.conf


echo "create keepalived files success. config/$K8SHA_HOST1/keepalived/"
echo "create keepalived files success. config/$K8SHA_HOST2/keepalived/"

# create all nginx-lb files
sed \
-e "s/K8SHA_IP1/$K8SHA_IP1/g" \
-e "s/K8SHA_IP2/$K8SHA_IP2/g" \
-e "s/K8SHA_IP3/$K8SHA_IP3/g" \
config/nginx-lb/bink8s-nginx-lb.conf.tpl > config/nginx-lb/nginx-lb.conf

echo "create nginx-lb files success. config/nginx-lb/nginx-lb.conf"

# cp all file to node
scp -rp config/nginx-lb/nginx-lb.conf root@$K8SHA_HOST1:/opt/k8s/kube-nginx/conf/kube-nginx.conf
scp -rp config/nginx-lb/nginx-lb.conf root@$K8SHA_HOST2:/opt/k8s/kube-nginx/conf/kube-nginx.conf

scp -rp config/$K8SHA_HOST1/keepalived/* root@$K8SHA_HOST1:/etc/keepalived/
scp -rp config/$K8SHA_HOST2/keepalived/* root@$K8SHA_HOST2:/etc/keepalived/

# chmod *.sh
chmod u+x config/*.sh
#Save and exit
[root@master01 work]# chmod u+x *.sh
[root@master01 work]# ./binngkek8s.sh
#Explanation:
Only master01 needs to run the steps above. After binngkek8s.sh finishes, the following configuration files are generated automatically:
•	keepalived: the keepalived configuration, placed in /etc/keepalived on each master node
•	nginx-lb: the nginx-lb load-balancer configuration, placed at /opt/k8s/kube-nginx/conf/kube-nginx.conf on each master node
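The generated kube-nginx.conf itself is not shown in this transcript because it comes from the uploaded config templates; a layer-4 proxy for the two apiservers generally looks something like the sketch below (the timeouts and exact directives are placeholders, not the literal generated file):
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    upstream apiserver {
        # the two kube-apiserver instances behind the 16443 proxy port
        server 192.168.100.202:6443 max_fails=3 fail_timeout=30s;
        server 192.168.100.203:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 16443;
        proxy_connect_timeout 1s;
        proxy_pass apiserver;
    }
}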
#Confirm the high-availability configuration
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    echo ">>>> check check sh"
    ssh root@${master_ip} "ls -l /etc/keepalived/check_apiserver.sh"
    echo ">>> check Keepalived config"
    ssh root@${master_ip} "cat /etc/keepalived/keepalived.conf"
    echo ">>> check Nginx config"
    ssh root@${master_ip} "cat /opt/k8s/kube-nginx/conf/kube-nginx.conf"
  done	
#Check the high-availability configuration; this prints the nginx and keepalived configuration files. Items to verify (be careful not to mix up which node each block of output belongs to):
mcast_src_ip       #the IP of this node
virtual_ipaddress  #the VIP address
upstream apiserver #the nginx load-balancing upstream

#Start the services
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl restart keepalived.service && systemctl enable keepalived.service"
    ssh root@${master_ip} "systemctl restart kube-nginx.service && systemctl enable kube-nginx.service"
    ssh root@${master_ip} "systemctl status keepalived.service | grep Active"
    ssh root@${master_ip} "systemctl status kube-nginx.service | grep Active"
    ssh root@${master_ip} "netstat -tlunp | grep 16443"
  done
>>> 192.168.100.202
   Active: active (running) since Fri 2021-08-06 11:31:55 CST; 292ms ago  #both services should be running, with nginx listening on 16443
   Active: active (running) since Fri 2021-08-06 11:31:55 CST; 255ms ago
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      14274/nginx: master 
>>> 192.168.100.203
   Active: active (running) since Fri 2021-08-06 11:31:55 CST; 342ms ago
   Active: active (running) since Fri 2021-08-06 11:31:55 CST; 309ms ago
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      8300/nginx: master  

#Confirm: wait a moment before pinging; the IP pinged below is the VIP, so substitute whatever your VIP is
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "ping -c1 192.168.100.204"
  done				
#Confirm that every node can ping the VIP

8. Deploy kubectl on the masters

All operations in step 8 are performed on master01.

#Download kubectl
[root@master01 work]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.3/kubernetes-client-linux-amd64.tar.gz
[root@master01 work]# ll | grep kubernetes-client
-rw-r--r--  1 root    root  13233170 Jul  1 2020 kubernetes-client-linux-amd64.tar.gz
[root@master01 work]# tar -xzvf kubernetes-client-linux-amd64.tar.gz
#Distribute kubectl
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kubernetes/client/bin/kubectl root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
  done
#Create the admin certificate and key; create the admin CA certificate request file
[root@master01 work]# cat > admin-csr.json <<EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF
#Explanation:
O is system:masters: when kube-apiserver receives a request with this certificate, it sets the request's Group to system:masters;
the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to every API;
this certificate is only used by kubectl as a client certificate, so the hosts field is empty.

#Generate the key and certificate
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
#Create the kubeconfig file
kubectl reads the kube-apiserver address and credentials from ~/.kube/config by default. The kubeconfig only needs to be generated once on a master node; the resulting file is generic and can be copied to any machine that needs to run kubectl, renamed to ~/.kube/config.
# Set the cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubectl.kubeconfig
  
# Set the client authentication parameters
[root@master01 work]# kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/work/admin.pem \
  --client-key=/opt/k8s/work/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

#Set the context parameters
[root@master01 work]# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig

#Set the default context
[root@master01 work]# kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
#Explanation:
--certificate-authority: the root certificate used to verify the kube-apiserver certificate;
--client-certificate, --client-key: the admin certificate and key just generated, used when connecting to kube-apiserver;
--embed-certs=true: embeds the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig (by default only the file paths are written, which would require copying the certificate files alongside the kubeconfig to other machines).
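Before distributing it, the generated kubeconfig can be inspected offline (an optional check; the apiserver does not need to be running):
[root@master01 work]# kubectl config view --kubeconfig=kubectl.kubeconfig   #check that server points at https://192.168.100.204:16443 and that the certificate fields are embedded (shown redacted)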
#Distribute the kubeconfig
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig root@${master_ip}:~/.kube/config
    ssh root@${master_ip} "echo 'export KUBECONFIG=\$HOME/.kube/config' >> ~/.bashrc"
    ssh root@${master_ip} "echo 'source <(kubectl completion bash)' >> ~/.bashrc"
  done

9. Deploy kube-apiserver

All of step 9 is executed on master01.

#Master node services
The Kubernetes master nodes run the following components:
•	kube-apiserver
•	kube-scheduler
•	kube-controller-manager
•	kube-nginx
kube-apiserver, kube-scheduler, and kube-controller-manager all run as multiple instances:
kube-scheduler and kube-controller-manager automatically elect one leader instance while the others block; when the leader fails, a new leader is elected, keeping the service available.
kube-apiserver is stateless and is accessed through the kube-nginx proxy, which keeps the service available.
#Install Kubernetes
[root@master01 work]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.3/kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# ll | grep kubernetes-server
-rw-r--r--  1 root    root  363654483 Jul  1 2020 kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# cd kubernetes
[root@master01 kubernetes]# tar -xzvf kubernetes-src.tar.gz
#Distribute Kubernetes
[root@master01 kubernetes]# cd ..
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp -rp kubernetes/server/bin/{apiextensions-apiserver,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
  done	
About the highly available apiserver
This lab deploys a multi-instance kube-apiserver cluster (two instances here) whose members are accessed through the kube-nginx proxy, which keeps the service available.
#Create the kube-apiserver certificate; create the Kubernetes certificate and key. Check that the IPs in the hosts field are correct: master01, master02, and the VIP.
[root@master01 work]# cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.100.202",
    "192.168.100.203",
    "192.168.100.204",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF	
#Explanation:
the hosts field lists the IPs and domain names authorized to use this certificate; here it contains the master node IPs plus the IP and domain names of the kubernetes service;
the kubernetes service IP is created automatically by the apiserver, usually the first IP of the range given by --service-cluster-ip-range; it can be checked later with:
kubectl get svc kubernetes

#Generate the key and certificate
[root@master01 work]#  cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
#Distribute the certificate and private key
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /etc/kubernetes/cert"
    scp kubernetes*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
#Configure kube-apiserver auditing
#Create the encryption configuration file
[root@master01 work]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
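This file makes the apiserver encrypt Secret objects with AES-CBC before writing them to etcd. Once the control plane is up (after the remaining steps), encryption at rest can be verified by reading a secret's raw key straight from etcd; this check is optional, and the secret name below is just an example you would have created yourself:
[root@master01 work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head   #the stored value should start with k8s:enc:aescbc:v1:key1 rather than plain text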
#Distribute the encryption configuration file
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp encryption-config.yaml root@${master_ip}:/etc/kubernetes/
  done
#Create the audit policy file
[root@master01 work]# cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io

  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF
#分发策略文件
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp audit-policy.yaml root@${master_ip}:/etc/kubernetes/audit-policy.yaml
  done
#配置metrics-server,创建metrics-server的CA证书请求文件
[root@master01 work]#  cat > proxy-client-csr.json <<EOF
{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
#Explanation:
The CN must appear in kube-apiserver's --requestheader-allowed-names parameter, otherwise later requests for metrics will be rejected as unauthorized.

#生成密钥和证书
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client 
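#Optional check (a sketch, assuming openssl is available on master01): confirm the CN of the issued certificate matches what --requestheader-allowed-names will contain
openssl x509 -in proxy-client.pem -noout -subject
#the subject should contain CN=system:metrics-server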
#分发证书和私钥
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp proxy-client*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
#创建kube-apiserver的systemd
[root@master01 work]# cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --insecure-port=0 \\
  --secure-port=6443 \\
  --bind-address=##MASTER_IP## \\
  --advertise-address=##MASTER_IP## \\
  --default-not-ready-toleration-seconds=360 \\
  --default-unreachable-toleration-seconds=360 \\
  --feature-gates=DynamicAuditing=true \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --default-watch-cache-size=200 \\
  --delete-collection-workers=2 \\
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --audit-dynamic-configuration \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-truncate-enabled=true \\
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --profiling \\
  --anonymous-auth=false \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --enable-bootstrap-token-auth=true \\
  --requestheader-allowed-names="system:metrics-server" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --event-ttl=168h \\
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --kubelet-https=true \\
  --kubelet-timeout=10s \\
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

——————————————————————————————————————————————————————————————————————————————————————————————————————————————————
#Pass the file containing the policy to kube-apiserver with the --audit-policy-file flag. If the flag is not set, no events are logged.
#Explanation:
•	--advertise-address: the IP the apiserver advertises to the cluster (the backend node IP of the kubernetes Service);
•	--default-*-toleration-seconds: thresholds for tolerating abnormal nodes;
•	--max-*-requests-inflight: maximum numbers of in-flight requests;
•	--etcd-*: the certificates used to access etcd and the etcd server addresses;
•	--encryption-provider-config: the configuration used to encrypt secrets stored in etcd;
•	--bind-address: the IP https listens on; it must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
•	--secure-port: the https listening port;
•	--insecure-port=0: disables the insecure http port (8080);
•	--tls-*-file: the certificate, private key and CA file used by the apiserver;
•	--audit-*: parameters for the audit policy and the audit log file;
•	--client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
•	--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
•	--requestheader-*: parameters for the kube-apiserver aggregator layer, needed by proxy-client & HPA;
•	--requestheader-client-ca-file: the CA used to sign the certificate given by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
•	--requestheader-allowed-names: must not be empty; a comma-separated list of CN names of the --proxy-client-cert-file certificate, set to "system:metrics-server" here;
•	--service-account-key-file: the public key used to verify ServiceAccount tokens; kube-controller-manager's --service-account-private-key-file specifies the matching private key, and the two are used as a pair;
•	--runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
•	--authorization-mode=Node,RBAC and --anonymous-auth=false: enable Node and RBAC authorization and reject unauthorized requests;
•	--enable-admission-plugins: enables some plugins that are off by default;
•	--allow-privileged: allows running privileged containers;
•	--apiserver-count=3: the number of apiserver instances;
•	--event-ttl: how long events are kept;
•	--kubelet-*: if set, the kubelet APIs are accessed over https; RBAC rules must be defined for the user of that certificate (the kubernetes*.pem certificate above uses the user "kubernetes"), otherwise calls to the kubelet API fail as unauthorized;
•	--proxy-client-*: the certificate the apiserver uses to access metrics-server;
•	--service-cluster-ip-range: the Service Cluster IP range;
•	--service-node-port-range: the NodePort port range.
#Tip: if the kube-apiserver host does not run kube-proxy, the --enable-aggregator-routing=true parameter must also be added.
#Note: the CA certificate specified by --requestheader-client-ca-file must have both client auth and server auth usages;
if --requestheader-allowed-names is empty, or the CN of the --proxy-client-cert-file certificate is not in allowed-names, later queries for node or pod metrics fail with an error such as:
[root@master01 ~]# kubectl top nodes       #do not run this here; it only illustrates the error message
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope
———————————————————————————————————————————————————————————————————————————————————————————————————————————————————
#分发systemd
[root@master01 work]# for (( i=0; i < 2; i++ ))
  do
    sed -e "s/##MASTER_NAME##/${MASTER_NAMES[i]}/" -e "s/##MASTER_IP##/${MASTER_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${MASTER_IPS[i]}.service
  done
[root@master01 work]# ls kube-apiserver*.service    #查看,会发现都替换了两个master节点的相应的ip地址
kube-apiserver-192.168.100.202.service  kube-apiserver-192.168.100.203.service
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-apiserver-${master_ip}.service root@${master_ip}:/etc/systemd/system/kube-apiserver.service
  done			
#启动kube-apiserver服务
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
  done

#检查kube-apiserver服务
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status kube-apiserver |grep 'Active:'"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 12:05:06 CST; 31s ago  #两个都是running即可
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 12:05:18 CST; 20s ago
   
#查看kube-apiserver写入etcd的数据
[root@master01 work]# ETCDCTL_API=3 etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --cacert=/opt/k8s/work/ca.pem \
    --cert=/opt/k8s/work/etcd.pem \
    --key=/opt/k8s/work/etcd-key.pem \
get /registry/ --prefix --keys-only
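#Optional check (a sketch): since --encryption-provider-config is enabled, verify that Secret data is really encrypted at rest. The secret name test-secret is made up here, and the k8s:enc:aescbc:v1: prefix assumes the encryption config created earlier uses the aescbc provider
kubectl create secret generic test-secret --from-literal=foo=bar
ETCDCTL_API=3 etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --cacert=/opt/k8s/work/ca.pem \
    --cert=/opt/k8s/work/etcd.pem \
    --key=/opt/k8s/work/etcd-key.pem \
    get /registry/secrets/default/test-secret | hexdump -C | head
#the stored value should start with k8s:enc:aescbc:v1: instead of plain text
kubectl delete secret test-secret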

#检查集群信息
[root@master01 work]# kubectl cluster-info  #集群ip
Kubernetes master is running at https://192.168.100.204:16443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master01 work]# kubectl get all --all-namespaces  
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.20.0.1    <none>        443/TCP   107s
[root@master01 work]# kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}                                                                           
etcd-1               Healthy     {"health":"true"}                                                                           
[root@master01 work]# netstat -lnpt|grep 6443
tcp        0      0 192.168.100.202:6443    0.0.0.0:*               LISTEN      17172/kube-apiserve 
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      14274/nginx: master 

#Tips
When running kubectl get componentstatuses, the apiserver sends the health checks to 127.0.0.1 by default. controller-manager and scheduler are not deployed yet, so they show as Unhealthy;
6443: the secure port accepting https requests; every request is authenticated and authorized;
16443: the port the Nginx reverse proxy listens on;
8080 is not listened on because the insecure port has been disabled.
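#Optional check (a sketch): probe the secure port's healthz directly, using the admin client certificate that is used elsewhere in this document; any master IP works
curl -s --cacert /opt/k8s/work/ca.pem \
     --cert /opt/k8s/work/admin.pem \
     --key /opt/k8s/work/admin-key.pem \
     https://192.168.100.202:6443/healthz; echo
#expected output: ok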
#Authorization
Grant kube-apiserver permission to access the kubelet API.
When kubectl exec, run, logs and similar commands are executed, the apiserver forwards the request to the kubelet's https port. This lab defines an RBAC rule that authorizes the user of the apiserver's certificate (kubernetes.pem, CN: kubernetes) to access the kubelet API:
[root@master01 ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created  #会提示已创建
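#Optional check (a sketch): confirm the binding exists and ties the user kubernetes to the clusterrole system:kubelet-api-admin
kubectl describe clusterrolebinding kube-apiserver:kubelet-apis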

10、部署kube-controller-manager

10步骤全部都在master01上执行

About the highly available kube-controller-manager
This lab runs kube-controller-manager on both master nodes; after startup, leader election picks one leader and the other instance blocks. If the leader becomes unavailable, the blocked instance is elected as the new leader, keeping the service available.
To secure communication, an x509 certificate and private key are generated first; kube-controller-manager uses the certificate in two cases:
•	communicating with kube-apiserver's secure port;
•	serving Prometheus-format metrics on its secure port (https, 10257).
#创建kube-controller-manager证书和私钥,创建kube-controller-manager的CA证书请求文件,注意修改ip
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > kube-controller-manager-csr.json <<EOF 
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "192.168.100.202",
    "192.168.100.203",
    "192.168.100.204"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF
#解释:
hosts 列表包含所有 kube-controller-manager 节点 IP;
CN 和 O 均为 system:kube-controller-manager,kubernetes 内置的 ClusterRoleBindings system:kube-controller-manager 赋予 kube-controller-manager 工作所需的权限。


#生成
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
#分发证书和私钥
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-controller-manager*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
#创建和分发kubeconfig
kube-controller-manager 使用 kubeconfig 文件访问 apiserver,该文件提供了 apiserver 地址、嵌入的 CA 证书和 kube-controller-manager 证书:
[root@master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
  
[root@master01 work]# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
  
[root@master01 work]#  kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

[root@master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-controller-manager.kubeconfig root@${master_ip}:/etc/kubernetes/
  done
#创建kube-controller-manager的systemd
[root@master01 work]# cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --secure-port=10257 \\
  --bind-address=127.0.0.1 \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials\\
  --concurrent-service-syncs=2 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="system:metrics-server" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=87600h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-cidr=${CLUSTER_CIDR} \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
#分发systemd
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-controller-manager.service.template root@${master_ip}:/etc/systemd/system/kube-controller-manager.service
  done
#启动kube-controller-manager 服务
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
  done

#检查kube-controller-manager 服务
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status kube-controller-manager|grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 12:17:44 CST; 25s ago  #全是running即可
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 12:17:45 CST; 25s ago
   
#查看输出的 metrics
[root@master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://127.0.0.1:10257/metrics | head

#查看权限
[root@master01 work]# kubectl describe clusterrole system:kube-controller-manager
#Tips
The ClusterRole system:kube-controller-manager itself has very few permissions: it can only create secrets, serviceaccounts and a few other objects; each controller's permissions live in the ClusterRole system:controller:XXX.
With --use-service-account-credentials=true in the kube-controller-manager startup parameters, the main controller creates a ServiceAccount XXX-controller for each controller, and the built-in ClusterRoleBinding system:controller:XXX grants each such ServiceAccount the matching ClusterRole system:controller:XXX.

[root@master01 ~]# kubectl get clusterrole | grep controller

#查看当前leader
[root@master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
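#If only the leader's identity is needed, the holderIdentity field inside the leader-election annotation is enough; a simple sketch using grep:
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity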

11、部署kube-scheduler

11步骤全部都在master01上执行

About the highly available kube-scheduler
This lab runs kube-scheduler on both master nodes; after startup, leader election picks one leader and the other instance blocks. If the leader becomes unavailable, the blocked instance is elected as the new leader, keeping the service available.
To secure communication, an x509 certificate and private key are generated first; kube-scheduler uses the certificate in two cases:
•	communicating with kube-apiserver's secure port;
•	serving Prometheus-format metrics on its secure port (https, 10259).
#创建kube-scheduler证书和私钥,注意修改ip
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.100.202",
    "192.168.100.203",
    "192.168.100.204"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF	
#解释:
hosts 列表包含所有 kube-scheduler 节点 IP;
CN 和 O 均为 system:kube-scheduler,kubernetes 内置的 ClusterRoleBindings system:kube-scheduler 将赋予 kube-scheduler 工作所需的权限。

#生成
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
#分发证书和私钥
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
#创建和分发kubeconfig
kube-scheduler 使用 kubeconfig 文件访问 apiserver,该文件提供了 apiserver 地址、嵌入的 CA 证书和 kube-scheduler 证书
[root@master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master01 work]#  kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master01 work]# kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master01 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.kubeconfig root@${master_ip}:/etc/kubernetes/
  done
#创建kube-scheduler 配置文件
[root@master01 work]# cat > kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
  leaderElect: true
metricsBindAddress: 127.0.0.1:10251
EOF
#解释:
--kubeconfig:指定 kubeconfig 文件路径,kube-scheduler 使用它连接和验证 kube-apiserver;
--leader-elect=true:集群运行模式,启用选举功能;被选为 leader 的节点负责处理工作,其它节点为阻塞状态。
#分发配置文件
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.yaml.template root@${master_ip}:/etc/kubernetes/kube-scheduler.yaml
  done
# 创建kube-scheduler的systemd
[root@master01 work]# cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --port=0 \\
  --secure-port=10259 \\
  --bind-address=127.0.0.1 \\
  --config=/etc/kubernetes/kube-scheduler.yaml \\
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="system:metrics-server" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
#分发systemd
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.service.template root@${master_ip}:/etc/systemd/system/kube-scheduler.service
  done
# 启动kube-scheduler 服务,启动服务前必须先创建工作目录
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  done		

# 检查kube-scheduler 服务
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status kube-scheduler | grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 12:30:52 CST; 22s ago
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 12:30:53 CST; 22s ago
   
# View the exported metrics
kube-scheduler listens on ports 10251 and 10259:
•	10251: http, the insecure port, no authentication or authorization required;
•	10259: https, the secure port, authentication and authorization required.
•	Both ports expose /metrics and /healthz.
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "netstat -lnpt | grep kube-sch"
done

#测试非安全端口
[root@master01 work]# curl -s http://127.0.0.1:10251/metrics | head

#测试安全端口
[root@master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://127.0.0.1:10259/metrics | head

#查看当前leader
[root@master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml

12、部署worker kubelet

12步骤全部都在master01上执行

kubelet runs on every node (in this lab on both masters and both workers); it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs.
On startup kubelet automatically registers its node information with kube-apiserver, and the built-in cadvisor collects and reports the node's resource usage.
For security, this deployment disables the kubelet's insecure http port and authenticates and authorizes every request, rejecting unauthorized access (for example unauthenticated apiserver or heapster requests).
#Tip: master01 already has the corresponding binaries downloaded; they can be distributed to the other nodes directly.
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp kubernetes/server/bin/kubelet root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done
#分发kubeconfig
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${all_name} \
      --kubeconfig ~/.kube/config)
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

   
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

    
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

    
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
  done
#Explanation:
What gets written into the kubeconfig is a token; after bootstrap finishes, kube-controller-manager creates the client and server certificates for the kubelet.
The token is valid for 1 day; once expired it can no longer be used to bootstrap a kubelet and is cleaned up by kube-controller-manager's tokencleaner;
when kube-apiserver receives a kubelet bootstrap token, it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers; a ClusterRoleBinding is created for this group later.

#查看 kubeadm 为各节点创建的 token
[root@master01 work]# kubeadm token list --kubeconfig ~/.kube/config

##查看各 token 关联的 Secret
[root@master01 work]# kubectl get secrets  -n kube-system|grep bootstrap-token
#分发bootstrap kubeconfig
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    scp kubelet-bootstrap-${all_name}.kubeconfig root@${all_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done
# 创建kubelet 参数配置文件
[root@master01 work]# cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##ALL_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##ALL_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
#分发kubelet 参数配置文件
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    sed -e "s/##ALL_IP##/${all_ip}/" kubelet-config.yaml.template > kubelet-config-${all_ip}.yaml.template
    scp kubelet-config-${all_ip}.yaml.template root@${all_ip}:/etc/kubernetes/kubelet-config.yaml
  done
#创建kubelet systemd
[root@master01 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cgroup-driver=cgroupfs \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##ALL_NAME## \\
  --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.2 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
#解释:
•	如果设置了 --hostname-override 选项,则 kube-proxy 也需要设置该选项,否则会出现找不到 Node 的情况;
•	--bootstrap-kubeconfig:指向 bootstrap kubeconfig 文件,kubelet 使用该文件中的用户名和 token 向 kube-apiserver 发送 TLS Bootstrapping 请求;
•	K8S approve kubelet 的 csr 请求后,在 --cert-dir 目录创建证书和私钥文件,然后写入 --kubeconfig 文件;
•	--pod-infra-container-image 不使用 redhat 的 pod-infrastructure:latest 镜像,它不能回收容器的僵尸。
#分发kubelet systemd
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    sed -e "s/##ALL_NAME##/${all_name}/" kubelet.service.template > kubelet-${all_name}.service
    scp kubelet-${all_name}.service root@${all_name}:/etc/systemd/system/kubelet.service
  done
#授权
On startup, kubelet checks whether the file pointed to by --kubeconfig exists; if it does not, kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
When kube-apiserver receives the CSR it authenticates the token inside it; on success it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers. This process is called Bootstrap Token Auth.
By default this user and group have no permission to create CSRs, so kubelet would fail to start; fix this by creating a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper, as follows.

[root@master01 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

#启动kubelet
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${all_name} "/usr/sbin/swapoff -a"
    ssh root@${all_name} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done
After kubelet starts, it uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates a TLS client certificate and private key for the kubelet and writes the file specified by --kubeconfig.
#Note: kube-controller-manager must be configured with --cluster-signing-cert-file and --cluster-signing-key-file, otherwise no certificate and key are created for TLS Bootstrap.
#Tip: the working directory must be created before starting the service;
#swap must be off, otherwise kubelet fails to start.
#Tip: this step only needs to be run from the master01 node.
#查看kubelet服务
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "systemctl status kubelet|grep active"
  done
>>> master01
   Active: active (running) since 五 2021-08-06 12:41:14 CST; 1min 9s ago  #全是running即可
>>> master02
   Active: active (running) since 五 2021-08-06 12:41:14 CST; 1min 8s ago
>>> worker01
   Active: active (running) since 五 2021-08-06 12:41:15 CST; 1min 8s ago
>>> worker02
   Active: active (running) since 五 2021-08-06 12:41:16 CST; 1min 7s ago
[root@master01 work]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
csr-mkkcx   92s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:hrdvi4   Pending
csr-pgx7g   94s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:32syq8   Pending
csr-v6cws   93s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:0top7h   Pending
csr-vhq62   93s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bf9va5   Pending
[root@master01 work]# kubectl get nodes
No resources found in default namespace.   
#自动 approve CSR 请求
创建三个 ClusterRoleBinding,分别用于自动 approve client、renew client、renew server 证书。
[root@master01 work]# cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF
[root@master01 work]# kubectl apply -f csr-crb.yaml
#解释:
auto-approve-csrs-for-group:自动 approve node 的第一次 CSR; 注意第一次 CSR 时,请求的 Group 为 system:bootstrappers;
node-client-cert-renewal:自动 approve node 后续过期的 client 证书,自动生成的证书 Group 为 system:nodes;
node-server-cert-renewal:自动 approve node 后续过期的 server 证书,自动生成的证书 Group 为 system:nodes。
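#Optional (a sketch): after the bindings are applied, the client CSRs above should move from Pending to Approved,Issued on their own within a few minutes; watch the change with
kubectl get csr -w   #press Ctrl-C to stop watching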
#查看 kubelet 的情况
[root@master01 work]#  kubectl get csr | grep boot
csr-mkkcx   4m8s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:hrdvi4   Pending
csr-pgx7g   4m10s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:32syq8   Pending
csr-v6cws   4m9s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:0top7h   Pending
csr-vhq62   4m9s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bf9va5   Pending
#等待一段时间(1-10 分钟),三个节点的 CSR 都被自动 approved
[root@master01 work]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2313 8月   6 12:41 /etc/kubernetes/kubelet.kubeconfig
[root@master01 work]# ls -l /etc/kubernetes/cert/ | grep kubelet
-rw------- 1 root root 1224 8月   6 12:48 kubelet-client-2021-08-06-12-48-03.pem
lrwxrwxrwx 1 root root   59 8月   6 12:48 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2021-08-06-12-48-03.pem


#手动 approve server cert csr
基于安全性考虑,CSR approving controllers 不会自动 approve kubelet server 证书签名请求,需要手动 approve。
[root@master01 work]# kubectl get csr | grep node  #现在还是pending
csr-874vr   23s     kubernetes.io/kubelet-serving                 system:node:master01      Pending
csr-dsc6z   22s     kubernetes.io/kubelet-serving                 system:node:master02      Pending
csr-t6wvz   22s     kubernetes.io/kubelet-serving                 system:node:worker02      Pending
csr-wc84r   22s     kubernetes.io/kubelet-serving                 system:node:worker01      Pending
[root@master01 work]# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/csr-874vr approved
certificatesigningrequest.certificates.k8s.io/csr-dsc6z approved
certificatesigningrequest.certificates.k8s.io/csr-t6wvz approved
certificatesigningrequest.certificates.k8s.io/csr-wc84r approved
[root@master01 work]# ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1224 8月   6 12:48 /etc/kubernetes/cert/kubelet-client-2021-08-06-12-48-03.pem
lrwxrwxrwx 1 root root   59 8月   6 12:48 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2021-08-06-12-48-03.pem
-rw------- 1 root root 1261 8月   6 12:48 /etc/kubernetes/cert/kubelet-server-2021-08-06-12-48-53.pem
lrwxrwxrwx 1 root root   59 8月   6 12:48 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2021-08-06-12-48-53.pem
[root@master01 work]# kubectl get csr | grep node
csr-874vr   64s     kubernetes.io/kubelet-serving                 system:node:master01      Approved,Issued
csr-dsc6z   63s     kubernetes.io/kubelet-serving                 system:node:master02      Approved,Issued
csr-t6wvz   63s     kubernetes.io/kubelet-serving                 system:node:worker02      Approved,Issued
csr-wc84r   63s     kubernetes.io/kubelet-serving                 system:node:worker01      Approved,Issued
[root@master01 work]# kubectl get nodes  #查看集群节点状态,都是ready就对了
NAME       STATUS   ROLES    AGE    VERSION
master01   Ready    <none>   103s   v1.18.3
master02   Ready    <none>   102s   v1.18.3
worker01   Ready    <none>   101s   v1.18.3
worker02   Ready    <none>   101s   v1.18.3
#kubelet 提供的 API 接口
[root@master01 work]# netstat -lnpt | grep kubelet  #查看监听的端口
tcp        0      0 127.0.0.1:35257         0.0.0.0:*               LISTEN      20914/kubelet       
tcp        0      0 192.168.100.202:10248   0.0.0.0:*               LISTEN      20914/kubelet       
tcp        0      0 192.168.100.202:10250   0.0.0.0:*               LISTEN      20914/kubelet 
#解释:
•	10248: healthz http 服务;
•	10250: https 服务,访问该端口时需要认证和授权(即使访问 /healthz 也需要);
•	未开启只读端口 10255;
•	从 K8S v1.10 开始,去除了 --cadvisor-port 参数(默认 4194 端口),不支持访问 cAdvisor UI & API。
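#Quick check (a sketch): 10248 is plain http, so the healthz endpoint can be probed without credentials; 192.168.100.202 is master01, substitute the node you are on
curl -s http://192.168.100.202:10248/healthz; echo
#expected output: ok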
#kubelet api 认证和授权
kubelet 配置了如下认证参数:
•	authentication.anonymous.enabled:设置为 false,不允许匿名访问 10250 端口;
•	authentication.x509.clientCAFile:指定签名客户端证书的 CA 证书,开启 HTTPs 证书认证;
•	authentication.webhook.enabled=true:开启 HTTPs bearer token 认证。
同时配置了如下授权参数:
authroization.mode=Webhook:开启 RBAC 授权。
kubelet 收到请求后,使用 clientCAFile 对证书签名进行认证,或者查询 bearer token 是否有效。如果两者都没通过,则拒绝请求,提示 Unauthorized,这里就是都没有通过
[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.100.202:10250/metrics
Unauthorized
[root@master01 work]#  curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.100.202:10250/metrics
Unauthorized
若通过认证后,kubelet 使用 SubjectAccessReview API 向 kube-apiserver 发送请求,查询证书或 token 对应的 user、group 是否有操作资源的权限(RBAC)。
#证书认证和授权,默认权限不足
[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.100.202:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

##使用最高权限的admin
[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.100.202:10250/metrics | head
#Explanation:
The values of --cacert, --cert and --key must be file paths; a certificate in the current directory must be written as ./admin.pem (the ./ cannot be omitted), otherwise curl returns 401 Unauthorized.


#创建bear token 认证和授权
[root@master01 work]#  kubectl create sa kubelet-api-test
serviceaccount/kubelet-api-test created
[root@master01 work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
clusterrolebinding.rbac.authorization.k8s.io/kubelet-api-test created
[root@master01 work]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@master01 work]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@master01 work]# echo ${TOKEN}   #查看token值
eyJhbGciOiJSUzI1NiIsImtpZCI6IjBMT1lOUFgycEpIOWpQajFoQUpNNHlWMkRxZDNmdUttcVBNVHpyajdTN2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVsZXQtYXBpLXRlc3QtdG9rZW4tNGI4Y3YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZWxldC1hcGktdGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQ2NDhkNzQ5LTg1NWUtNDI4YS1iZGE1LTJiMmQ5NmQwYjNjOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVsZXQtYXBpLXRlc3QifQ.t_h8MprNlDiSKXBUN_oTCm9KNXxxKbiqHBwBsrx4q7KvyPS10QCZ03nkNhbsjmrboXxSgkj7Ll7yBY_-DaXGI0-bULLA4v8fjK_c0UCWEC3jKHLpsCBIYIS9WKJliZNZb3NgXXGY33n8MEQpZccVz1IyTih0kFPgV4JgQxwYeqIH60mq4KieZv1gAEnnXg9rhU_AXm1bqB7QQEQafWMFO3XHfOo7HYvoVlFa0BzlfJgN8SAExdF6-V6BC6AqY6oRC6BIt7rPlnGIS9iHLEqH8rCm-lDgXlW3LQbn7sbSGZa-kGMnBwI10v7xOdxd18g_FfoNFO-Rcnpb6KbGQsPVbw


[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.100.202:10250/metrics | head


13、部署所有节点的kube-proxy

13步骤所有操作都在master01上执行

#kube-proxy 运行在所有节点上,它监听 apiserver 中 service 和 endpoint 的变化情况,创建路由规则以提供服务 IP 和负载均衡功能。
安装kube-proxy
#提示:master01 节点已下载相应二进制,可直接分发至worker节点。

#分发kube-proxy
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp kubernetes/server/bin/kube-proxy root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done
#创建kube-proxy证书和私钥
[root@master01 work]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF	
#Explanation:
•	CN: sets the certificate's User to system:kube-proxy;
•	the predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call the kube-proxy related kube-apiserver APIs;
•	the certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.

#生成
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# 创建和分发kubeconfig
kube-proxy 使用 kubeconfig 文件访问 apiserver,该文件提供了 apiserver 地址、嵌入的 CA 证书和 kube-proxy 证书:
[root@master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

[root@master01 work]# kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

[root@master01 work]#  kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

[root@master01 work]#  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp kube-proxy.kubeconfig root@${all_ip}:/etc/kubernetes/
  done
#创建kube-proxy 配置文件
从 v1.10 开始,kube-proxy 部分参数可以配置文件中配置。可以使用 --write-config-to 选项生成该配置文件。
[root@master01 work]# cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##ALL_IP##
healthzBindAddress: ##ALL_IP##:10256
metricsBindAddress: ##ALL_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##ALL_NAME##
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF
#解释:
•	bindAddress: 监听地址;
•	clientConnection.kubeconfig: 连接 apiserver 的 kubeconfig 文件;
•	clusterCIDR: kube-proxy 根据 --cluster-cidr 判断集群内部和外部流量,指定 --cluster-cidr 或 --masquerade-all 选项后 kube-proxy 才会对访问 Service IP 的请求做 SNAT;
•	hostnameOverride: 参数值必须与 kubelet 的值一致,否则 kube-proxy 启动后会找不到该 Node,从而不会创建任何 ipvs 规则;
•	mode: 使用 ipvs 模式。
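#Optional check (a sketch): because mode is set to ipvs, confirm the ipvs-related kernel modules are present before starting kube-proxy (the start loop below also modprobes ip_vs_rr)
lsmod | grep -E '^ip_vs|^nf_conntrack'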
#分发配置文件
[root@master01 work]# for (( i=0; i < 4; i++ ))
  do
    echo ">>> ${ALL_NAMES[i]}"
    sed -e "s/##ALL_NAME##/${ALL_NAMES[i]}/" -e "s/##ALL_IP##/${ALL_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${ALL_NAMES[i]}.yaml.template
    scp kube-proxy-config-${ALL_NAMES[i]}.yaml.template root@${ALL_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done
#创建kube-proxy的systemd
[root@master01 work]# cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
#分发kube-proxy systemd
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    scp kube-proxy.service root@${all_name}:/etc/systemd/system/
  done		
# 启动kube-proxy 服务,启动服务前必须先创建工作目录
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${all_ip} "modprobe ip_vs_rr"
    ssh root@${all_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done
#检查kube-proxy 服务
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl status kube-proxy | grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 13:03:59 CST; 42s ago  #都是running即可
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 13:04:00 CST; 42s ago
>>> 192.168.100.205
   Active: active (running) since 五 2021-08-06 13:04:00 CST; 41s ago
>>> 192.168.100.206
   Active: active (running) since 五 2021-08-06 13:04:01 CST; 41s ago
   
#查看监听端口
kube-proxy 监听 10249 和 10256 端口:
•	10249:对外提供 /metrics;
•	10256:对外提供 /healthz 的访问。
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "sudo netstat -lnpt | grep kube-prox"
  done
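#Beyond netstat, both ports are plain http and can be probed directly; a sketch using master01's IP as an example
curl -s http://192.168.100.202:10249/metrics | head
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.100.202:10256/healthz   #expect 200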

#查看ipvs 路由规则
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "/usr/sbin/ipvsadm -ln"
  done
#可见所有通过 https 访问 K8S SVC kubernetes 的请求都转发到 kube-apiserver 节点的 6443 端口。
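#To look at just the rule for the kubernetes Service (ClusterIP 10.20.0.1, as shown by kubectl get all earlier), ipvsadm can list a single virtual service; a sketch:
/usr/sbin/ipvsadm -ln -t 10.20.0.1:443
#the real servers behind it should be 192.168.100.202:6443 and 192.168.100.203:6443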


14、验证集群功能

#检查节点状态
[root@master01 work]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
[root@master01 work]# kubectl cluster-info 
Kubernetes master is running at https://192.168.100.204:16443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master01 work]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    <none>   20m   v1.18.3
master02   Ready    <none>   19m   v1.18.3
worker01   Ready    <none>   19m   v1.18.3
worker02   Ready    <none>   19m   v1.18.3
#创建测试文件
————————————————————————————————————————————————————————————————————————
运行config下的脚本可以自己下载镜像,但是需要联网
[root@master01 work]# bash config/baseimage.sh		#提前pull镜像
————————————————————————————————————————————————————————————————————————
[root@master01 work]# source /root/environment.sh

#Load the images; all four nodes need this step
[root@master01 work]# ll | grep nginx
-rw-r--r--  1 root    root        624 8月   6 11:19 kube-nginx.service
drwxr-xr-x 10    1001  1001       206 8月   6 11:16 nginx-1.19.0
-rw-r--r--  1 root    root    1043748 7月   1 2020 nginx-1.19.0.tar.gz
-rw-r--r--  1 root    root        586 8月   6 13:08 nginx-ds.yml
-rw-r--r--  1 root    root  136325120 7月   1 2020 nginx.tar.gz   #上传nginx1.19.0的镜像文件
[root@master01 work]# ll | grep pau
-rw-r--r--  1 root    root     692736 7月   1 2020 pause-amd64_3.2.tar  #上传pause-amd64_3.2镜像文件
[root@master01 work]# docker load -i nginx.tar.gz   #上传
13cb14c2acd3: Loading layer [==================================================>]  72.49MB/72.49MB
d4cf327d8ef5: Loading layer [==================================================>]   63.8MB/63.8MB
7c7d7f446182: Loading layer [==================================================>]  3.072kB/3.072kB
9040af41bb66: Loading layer [==================================================>]  4.096kB/4.096kB
f978b9ed3f26: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: nginx:1.19.0
[root@master01 work]# docker load -i pause-amd64_3.2.tar 
ba0dae6243cc: Loading layer [==================================================>]  684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause-amd64:3.2
[root@master01 work]# docker images        #查看
REPOSITORY          	TAG                 	IMAGE ID            CREATED             SIZE
nginx               	1.19.0              	2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64   3.2                 	80d28bedfe5d        17 months ago       683kB
[root@master01 work]# scp nginx.tar.gz root@192.168.100.203:/root
[root@master01 work]# scp nginx.tar.gz root@192.168.100.205:/root
[root@master01 work]# scp nginx.tar.gz root@192.168.100.206:/root  
[root@master01 work]# scp pause-amd64_3.2.tar root@192.168.100.203:/root
[root@master01 work]# scp pause-amd64_3.2.tar root@192.168.100.205:/root
[root@master01 work]# scp pause-amd64_3.2.tar root@192.168.100.206:/root

#在另外三台节点使用docker load进行导入
[root@master02 ~]# docker load -i nginx.tar.gz
[root@master02 ~]# docker load -i pause-amd64_3.2.tar

[root@worker01 ~]# docker load -i nginx.tar.gz
[root@worker01 ~]# docker load -i pause-amd64_3.2.tar

[root@worker02 ~]# docker load -i nginx.tar.gz
[root@worker02 ~]# docker load -i pause-amd64_3.2.tar
#在master01编写测试文件
[root@master01 work]# cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 8888
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.19.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF
[root@master01 work]# kubectl create -f nginx-ds.yml
service/nginx-svc created
daemonset.apps/nginx-ds created
#检查各节点的 Pod IP 连通性
[root@master01 work]# kubectl get pods  -o wide | grep nginx-ds
nginx-ds-8znb9   1/1     Running   0          17s   10.10.136.2   master01   <none>           <none>
nginx-ds-h2ssb   1/1     Running   0          17s   10.10.152.2   worker02   <none>           <none>
nginx-ds-pnjbf   1/1     Running   0          17s   10.10.192.2   master02   <none>           <none>
nginx-ds-wjx2z   1/1     Running   0          17s   10.10.168.2   worker01   <none>           <none>
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh ${all_ip} "ping -c 1 10.10.136.2"
    ssh ${all_ip} "ping -c 1 10.10.152.2"
    ssh ${all_ip} "ping -c 1 10.10.192.2"
    ssh ${all_ip} "ping -c 1 10.10.168.2"
  done
#能通即可
#检查服务 IP 和端口可达性
[root@master01 work]# kubectl get svc |grep nginx-svc
nginx-svc    NodePort    10.20.0.12   <none>        80:8888/TCP   88s
[root@master01 work]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "curl -s 10.20.0.12"
  done
#注意这个循环的ip要和上面查询到的ip相同,执行后会输出信息,可以看到nginx的页面,例如:
>>> 192.168.100.206
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>   #这样就是成功了
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
#解释:
Service Cluster IP:10.20.0.12
服务端口:80
NodePort 端口:8888
#检查服务的 NodePort 可达性
[root@master01 work]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "curl -s ${node_ip}:8888"
  done
#这个和上面的一样,可以看到nginx页面内容即可

15、部署集群插件coredns

 #下载解压
 [root@master01 work]# cd /opt/k8s/work/kubernetes/
 [root@master01 kubernetes]# tar -xzvf kubernetes-src.tar.gz
 
 # 修改配置
 [root@master01 ~]# cd /opt/k8s/work/kubernetes/cluster/addons/dns/coredns
[root@master01 coredns]# cp coredns.yaml.base coredns.yaml
[root@master01 coredns]# source /root/environment.sh
[root@master01 coredns]# sed -i -e "s/__PILLAR__DNS__DOMAIN__/${CLUSTER_DNS_DOMAIN}/" -e "s/__PILLAR__DNS__SERVER__/${CLUSTER_DNS_SVC_IP}/" -e "s/__PILLAR__DNS__MEMORY__LIMIT__/200Mi/" coredns.yaml
#在两台master上上传镜像
[root@master01 work]# docker load -i coredns_1.6.5.tar 
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
7c9b0f448297: Loading layer [==================================================>]  41.37MB/41.37MB
Loaded image: k8s.gcr.io/coredns:1.6.5
[root@master01 work]# docker load -i tutum-dnsutils.tar.gz 
5f70bf18a086: Loading layer [==================================================>]  1.024kB/1.024kB
3c9ca2b4b72a: Loading layer [==================================================>]  197.2MB/197.2MB
b83a6cb01503: Loading layer [==================================================>]  208.9kB/208.9kB
f5c259e37fdd: Loading layer [==================================================>]  4.608kB/4.608kB
47995420132c: Loading layer [==================================================>]  11.86MB/11.86MB
Loaded image: tutum/dnsutils:latest
[root@master01 work]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
nginx                    1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64   3.2                 80d28bedfe5d        17 months ago       683kB
k8s.gcr.io/coredns       1.6.5               70f311871ae1        21 months ago       41.6MB
tutum/dnsutils           latest              6cd78a6d3256        6 years ago         200MB

[root@master02 ~]# docker load -i coredns_1.6.5.tar
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
7c9b0f448297: Loading layer [==================================================>]  41.37MB/41.37MB
Loaded image: k8s.gcr.io/coredns:1.6.5
[root@master01 work]# docker load -i tutum-dnsutils.tar.gz 
5f70bf18a086: Loading layer [==================================================>]  1.024kB/1.024kB
3c9ca2b4b72a: Loading layer [==================================================>]  197.2MB/197.2MB
b83a6cb01503: Loading layer [==================================================>]  208.9kB/208.9kB
f5c259e37fdd: Loading layer [==================================================>]  4.608kB/4.608kB
47995420132c: Loading layer [==================================================>]  11.86MB/11.86MB
Loaded image: tutum/dnsutils:latest
[root@master02 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
nginx                    1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64   3.2                 80d28bedfe5d        17 months ago       683kB
k8s.gcr.io/coredns       1.6.5               70f311871ae1        21 months ago       41.6MB

#创建 coredns
设置调度策略
提示:对于非业务应用(即集群内部应用)建议仅部署在master节点,如coredns及dashboard。
[root@master01 coredns]# kubectl label nodes master01 node-role.kubernetes.io/master=true
node/master01 labeled
[root@master01 coredns]# kubectl label nodes master02 node-role.kubernetes.io/master=true
node/master02 labeled
[root@master01 coredns]# vi coredns.yaml
#找到下面这个项里面修改:
apiVersion: apps/v1
kind: Deployment
。。。。。。	  
   97   # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
   98   replicas: 2
   99   strategy:
。。。。。。
   118       nodeSelector:        #从这行添加,直接复制119到124行,把119行原来那一行删除
   119         node-role.kubernetes.io/master: "true"
   120       tolerations:
   121         - key: node-role.kubernetes.io/master
   122           operator: "Equal"
   123           value: ""
   124           effect: NoSchedule
#保存退出
#创建coredns并检查
[root@master01 coredns]# kubectl create -f coredns.yaml #都是created即可
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

#检查 coredns 功能
[root@master01 coredns]# kubectl get all -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
pod/coredns-7966bcdf9-hqx7t   1/1     Running   0          5s
pod/coredns-7966bcdf9-nvjk8   1/1     Running   0          5s

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.20.0.254   <none>        53/UDP,53/TCP,9153/TCP   5s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           5s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-7966bcdf9   2         2         2       5s
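Because the Deployment was pinned to the master nodes, you can also confirm where the coredns pods were scheduled (assuming the upstream manifest's standard k8s-app=kube-dns label; output not shown):
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide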
#Check the previously created applications
[root@master01 coredns]# cd /opt/k8s/work/
[root@master01 work]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
nginx-ds-8znb9   1/1     Running   0          106m   10.10.136.2   master01   <none>           <none>
nginx-ds-h2ssb   1/1     Running   0          106m   10.10.152.2   worker02   <none>           <none>
nginx-ds-pnjbf   1/1     Running   0          106m   10.10.192.2   master02   <none>           <none>
nginx-ds-wjx2z   1/1     Running   0          106m   10.10.168.2   worker01   <none>           <none>
[root@master01 work]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)       AGE     SELECTOR
kubernetes   ClusterIP   10.20.0.1    <none>        443/TCP       3h14m   <none>
nginx-svc    NodePort    10.20.0.12   <none>        80:8888/TCP   106m    app=nginx-ds
#Create a test pod
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > dnsutils-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dnsutils-ds
  labels:
    app: dnsutils-ds
spec:
  type: NodePort
  selector:
    app: dnsutils-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dnsutils-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: dnsutils-ds
  template:
    metadata:
      labels:
        app: dnsutils-ds
    spec:
      containers:
      - name: my-dnsutils
        image: tutum/dnsutils:latest
        imagePullPolicy: IfNotPresent
        command:
          - sleep
          - "3600"
        ports:
        - containerPort: 80
EOF
[root@master01 work]# kubectl create -f dnsutils-ds.yml
service/dnsutils-ds created
daemonset.apps/dnsutils-ds created
[root@master01 work]# kubectl get pods -lapp=dnsutils-ds
NAME                READY   STATUS    RESTARTS   AGE
dnsutils-ds-hcsw2   1/1     Running   0          6s
dnsutils-ds-msmnd   1/1     Running   0          6s
dnsutils-ds-qvcwp   1/1     Running   0          6s
dnsutils-ds-vrczl   1/1     Running   0          6s
#Check DNS resolution
[root@master01 work]# kubectl -it exec dnsutils-ds-hcsw2 -- /bin/sh  #use one of the pod names from the output above
# cat /etc/resolv.conf
nameserver 10.20.0.254    #this is the kube-dns ClusterIP, so cluster DNS is reachable
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
# exit
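Reading /etc/resolv.conf only shows that the pod points at kube-dns; to actually exercise resolution you can run nslookup inside the same pod (a quick check, substituting your own pod name; kubernetes.default and nginx-svc both exist in this cluster):
kubectl exec -it dnsutils-ds-hcsw2 -- nslookup kubernetes.default
kubectl exec -it dnsutils-ds-hcsw2 -- nslookup nginx-svc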

16. Deploy metrics and the dashboard

Introduction to Metrics
Early versions of Kubernetes relied on Heapster for performance data collection and monitoring. Starting with v1.8, performance data is exposed through the standardized Metrics API, and from v1.10 Metrics Server replaced Heapster. In the new monitoring architecture, Metrics Server provides the core metrics: CPU and memory usage for Nodes and Pods.
Custom metrics are monitored by other components such as Prometheus.
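Once Metrics Server is running (deployed below), the same core metrics can be read straight from the aggregated API; this is only an illustration of what the Metrics API exposes (piping through jq is optional, for readability):
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .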
#Fetch the deployment manifest
[root@master01 work]#  mkdir metrics
[root@master01 work]# cd metrics/
[root@master01 metrics]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
[root@master01 metrics]# ll
总用量 4
-rw-r--r-- 1 root root 3509 1月   6 2021 components.yaml
[root@master01 metrics]# vi components.yaml
Locate the following section and modify the configuration inside it:
     62 apiVersion: apps/v1
     63 kind: Deployment
。。。。。。
     69 spec:
     70   replicas: 2   #add this line
     71   selector:
。。。。。。
     79     spec:
     80       serviceAccountName: metrics-server
     81       hostNetwork: true    #add this line
。。。。。。
     87       - name: metrics-server     #add the lines below through line 94
     88         image: k8s.gcr.io/metrics-server-amd64:v0.3.6
     89         imagePullPolicy: IfNotPresent
     90         args:
     91           - --cert-dir=/tmp
     92           - --secure-port=4443
     93           - --kubelet-insecure-tls
     94           - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
     95         ports:
#Save and exit
#Load the image on both master nodes (copy the tar archive over first)
[root@master01 metrics]# docker load -i metrics-server-amd64_v0.3.6.tar   #load this archive
932da5156413: Loading layer [==================================================>]  3.062MB/3.062MB
7bf3709d22bb: Loading layer [==================================================>]  38.13MB/38.13MB
Loaded image: k8s.gcr.io/metrics-server-amd64:v0.3.6
[root@master01 metrics]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
nginx                             1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64            3.2                 80d28bedfe5d        17 months ago       683kB
k8s.gcr.io/coredns                1.6.5               70f311871ae1        21 months ago       41.6MB
k8s.gcr.io/metrics-server-amd64   v0.3.6              9dd718864ce6        22 months ago       39.9MB
tutum/dnsutils                    latest              6cd78a6d3256        6 years ago         200MB

[root@master02 ~]# docker load -i metrics-server-amd64_v0.3.6.tar 
932da5156413: Loading layer [==================================================>]  3.062MB/3.062MB
7bf3709d22bb: Loading layer [==================================================>]  38.13MB/38.13MB
Loaded image: k8s.gcr.io/metrics-server-amd64:v0.3.6
[root@master02 ~]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
nginx                             1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64            3.2                 80d28bedfe5d        17 months ago       683kB
k8s.gcr.io/coredns                1.6.5               70f311871ae1        21 months ago       41.6MB
k8s.gcr.io/metrics-server-amd64   v0.3.6              9dd718864ce6        22 months ago       39.9MB
tutum/dnsutils                    latest              6cd78a6d3256        6 years ago         200MB


#Deploy
[root@master01 metrics]# kubectl apply -f components.yaml  
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master01 metrics]# kubectl -n kube-system get pods -l k8s-app=metrics-server  #both pods should be Running
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-7b97647899-5tnn7   1/1     Running   0          2m44s
metrics-server-7b97647899-tr6qt   1/1     Running   0          2m45s
#Check resource monitoring
[root@master02 ~]# kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master01   111m         5%     2038Mi          53%       
master02   101m         5%     1725Mi          44%       
worker01   12m          1%     441Mi           23%       
worker02   14m          1%     439Mi           23%       
[root@master02 ~]# kubectl top pods --all-namespaces
NAMESPACE     NAME                              CPU(cores)   MEMORY(bytes)   
default       dnsutils-ds-hcsw2                 0m           0Mi             
default       dnsutils-ds-msmnd                 0m           1Mi             
default       dnsutils-ds-qvcwp                 0m           1Mi             
default       dnsutils-ds-vrczl                 0m           1Mi             
default       nginx-ds-8znb9                    0m           3Mi             
default       nginx-ds-h2ssb                    0m           2Mi             
default       nginx-ds-pnjbf                    0m           3Mi             
default       nginx-ds-wjx2z                    0m           2Mi             
kube-system   coredns-7966bcdf9-hqx7t           3m           9Mi             
kube-system   coredns-7966bcdf9-nvjk8           3m           14Mi            
kube-system   metrics-server-7b97647899-5tnn7   1m           13Mi            
kube-system   metrics-server-7b97647899-tr6qt   1m           11Mi  
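If kubectl top ever returns errors instead of output like the above, a standard troubleshooting step is to check that the Metrics API is registered and Available:
kubectl get apiservices v1beta1.metrics.k8s.io
kubectl describe apiservice v1beta1.metrics.k8s.io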
#Deploy the dashboard
#Set labels
[root@master01 ~]# kubectl label nodes master01 dashboard=yes
node/master01 labeled
[root@master01 ~]# kubectl label nodes master02 dashboard=yes
node/master02 labeled
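Note that the dashboard=yes label only matters if a manifest actually selects it; the recommended.yaml edits below do not add such a selector. If you want to pin the dashboard pods to the labeled masters, a minimal sketch of what could be added under the kubernetes-dashboard Deployment's pod spec would be:
      nodeSelector:
        dashboard: "yes"
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: "Equal"
          value: ""
          effect: NoSchedule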

#Create the certificate
[root@master01 metrics]#  mkdir -p /opt/k8s/work/dashboard/certs
[root@master01 metrics]#  cd /opt/k8s/work/dashboard/certs
[root@master01 certs]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=ZheJiang/L=HangZhou/O=Xianghy/OU=Xianghy/CN=k8s.odocker.com"
[root@master01 certs]# ls
tls.crt  tls.key
[root@master01 certs]# pwd
/opt/k8s/work/dashboard/certs
[root@master01 certs]# scp tls.* root@192.168.100.203:$PWD  #remember to create the same directory on master02 first
tls.crt                                                                                                100% 1346     1.2MB/s   00:00    
tls.key                                                                                                100% 1704     1.5MB/s   00:00   
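If the target directory does not yet exist on master02, the scp above will not land the files where intended; it can be created ahead of time from master01:
ssh root@192.168.100.203 "mkdir -p /opt/k8s/work/dashboard/certs"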
#Create the secret manually
#Dashboard v2 runs in its own dedicated namespace
[root@master01 certs]# kubectl create ns kubernetes-dashboard
namespace/kubernetes-dashboard created
[root@master01 certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=/opt/k8s/work/dashboard/certs -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created

#Check the new certificate secret
[root@master01 certs]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard -o yaml
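To confirm the secret really carries the self-signed certificate, you can decode it and inspect the subject and validity window (a quick sanity check; the tls.crt key name comes from the file created above):
kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates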
# Download the yaml file
[root@master01 certs]#  cd /opt/k8s/work/dashboard/
[root@master01 dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/recommended.yaml
[root@master01 dashboard]# ll
总用量 8
drwxr-xr-x 2 root root   36 8月   6 18:29 certs
-rw-r--r-- 1 root root 7767 7月   2 2020 recommended.yaml
 #Modify the yaml file
 [root@master01 dashboard]# vi recommended.yaml
 。。。。。。
     30 ---
     31 
     32 kind: Service
     33 apiVersion: v1
     34 metadata:
     35   labels:
     36     k8s-app: kubernetes-dashboard
     37   name: kubernetes-dashboard
     38   namespace: kubernetes-dashboard
     39 spec:
     40   type: NodePort   #add
     41   ports:
     42     - port: 443
     43       targetPort: 8443
     44       nodePort: 30001  #add
     45   selector:
     46     k8s-app: kubernetes-dashboard
     47 
     48 ---
 。。。。。。
 #Many browsers reject the auto-generated certificate, so we supply our own; comment out the kubernetes-dashboard-certs Secret declaration
     48 ---
     49 
     50 #apiVersion: v1            #comment out
     51 #kind: Secret
     52 #metadata:
     53 #  labels:
     54 #    k8s-app: kubernetes-dashboard
     55 #  name: kubernetes-dashboard-certs
     56 #  namespace: kubernetes-dashboard
     57 #type: Opaque
     58 
     59 ---
 。。。。。。
    189     spec:                  #starting from line 189
    190       containers:
    191         - name: kubernetes-dashboard
    192           image: kubernetesui/dashboard:v2.0.0-beta8
    193           imagePullPolicy: IfNotPresent
    194           ports:
    195             - containerPort: 8443
    196               protocol: TCP
    197           args:
    198             - --auto-generate-certificates
    199             - --namespace=kubernetes-dashboard
    200             - --tls-key-file=tls.key
    201             - --tls-cert-file=tls.crt
    202             - --token-ttl=3600
#Save and exit
#Deploy
[root@master01 dashboard]#  kubectl apply -f recommended.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

#Pull the images on both master01 and master02
[root@master02 ~]# docker pull kubernetesui/metrics-scraper:v1.0.1
[root@master02 ~]# docker pull kubernetesui/dashboard:v2.0.0-beta8
[root@master02 ~]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
nginx                             1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64            3.2                 80d28bedfe5d        17 months ago       683kB
kubernetesui/dashboard            v2.0.0-beta8        eb51a3597525        20 months ago       90.8MB  #this one
k8s.gcr.io/coredns                1.6.5               70f311871ae1        21 months ago       41.6MB
k8s.gcr.io/metrics-server-amd64   v0.3.6              9dd718864ce6        22 months ago       39.9MB
kubernetesui/metrics-scraper      v1.0.1              709901356c11        2 years ago         40.1MB  #this one
tutum/dnsutils                    latest              6cd78a6d3256        6 years ago         200MB

[root@master01 ~]# docker pull kubernetesui/metrics-scraper:v1.0.1
[root@master01 ~]# docker pull kubernetesui/dashboard:v2.0.0-beta8
[root@master01 ~]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
nginx                             1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64            3.2                 80d28bedfe5d        17 months ago       683kB
kubernetesui/dashboard            v2.0.0-beta8        eb51a3597525        20 months ago       90.8MB  #this one
k8s.gcr.io/coredns                1.6.5               70f311871ae1        21 months ago       41.6MB
k8s.gcr.io/metrics-server-amd64   v0.3.6              9dd718864ce6        22 months ago       39.9MB
kubernetesui/metrics-scraper      v1.0.1              709901356c11        2 years ago         40.1MB  #this one
tutum/dnsutils                    latest              6cd78a6d3256        6 years ago         200MB


#Check status
[root@master01 dashboard]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           18m
[root@master01 dashboard]#  kubectl get services -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.20.55.23    <none>        8000/TCP        18m
kubernetes-dashboard        NodePort    10.20.46.179   <none>        443:30001/TCP   18m
[root@master01 dashboard]# kubectl get pods -o wide -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-694557449d-l2cpn   1/1     Running   0          19m   10.10.192.5   master02   <none>           <none>
kubernetes-dashboard-df75cc4c7-xz8nt         1/1     Running   0          19m   10.10.136.5   master01   <none>           <none>
#Tip: NodePort 30001/TCP (reachable on every node) maps to the dashboard Service port 443, which forwards to the pod's 8443 port.
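Before opening a browser you can verify that the NodePort answers over TLS from the command line (-k is needed because the certificate is self-signed; any node IP or the VIP should work):
curl -k https://192.168.100.202:30001/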
#Create an admin account
Tip: dashboard v2 does not create an account with administrator privileges by default; create one as follows.
[root@master01 dashboard]# vi dashboard-admin.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
#Save and exit
[root@master01 dashboard]#  kubectl apply -f dashboard-admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
#Access the dashboard
[root@master01 dashboard]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')  #view the token value
Name:         admin-user-token-87x8n
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: e7006035-fdda-4c70-b5bd-c5342cf9a9e8

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1367 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjBMT1lOUFgycEpIOWpQajFoQUpNNHlWMkRxZDNmdUttcVBNVHpyajdTN2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTg3eDhuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlNzAwNjAzNS1mZGRhLTRjNzAtYjViZC1jNTM0MmNmOWE5ZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Rrhwup0uhoIYQAzXhz7VylRYk-cJR1oyAyi4HZc4mJGE6rFcpA6CsfcHtJJvdGticX9hS7TErPlN9nP-yK0dA2T-oxB5mG2RA2H6mMEqa9wU_X4JcQv0Aw7JwEPXzO62I3ue6iThnT8PpsxAN6PQM3fSG1qJqKL8hneyenNzS8J-S09isdZSCChYSk_DsJLf1ICuUMIJvcTAbIELKcmhsf2ixY6k1FAvttmzfB8-EV6I6Brua63pY-5wc3i6Ptrg1FofH5tyOAV5mYDGBZzl9Y1B9N5QM9oJDAffgHP5oAKLnflVeuMeU8miba5phrLh7DTdXT8nKSo34dhUA6CEKw
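An equivalent way to pull out just the token, without the surrounding describe output (the secret name is resolved the same way as above):
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d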

Open https://192.168.100.204:30001/ (the virtual IP) in a browser.


The dashboard is reachable; the binary installation of Kubernetes is complete!
