
k8s Binary High-Availability Cluster Deployment



This document applies to k8s 1.17+.

This article demonstrates a binary installation of a highly available k8s 1.20+ cluster on CentOS 7. The binary installation procedure differs little between versions; you only need to match up the versions of the individual components.

For production, use a Kubernetes release whose patch version is greater than 5; for example, only 1.19.5 and later are fit for production use.

Cluster Installation

Basic Environment Configuration

The server IP addresses must be changed to static IPs.

The virtual IP must not collide with any IP on the corporate LAN; ping it first, and use it only if it does not answer. The VIP must be on the same LAN as the hosts!

High-availability Kubernetes cluster plan

Hostname            IP Address               Description
k8s-master01 ~ 03   192.168.32.129 ~ 131     master nodes; 3 of them
k8s-master-lb       192.168.32.233           keepalived virtual IP (use a hardware load balancer if you have one; on the cloud, use the cloud vendor's own load balancer)
node01 ~ 02         192.168.32.132 ~ 133     worker nodes; 2 of them

Configuration    Notes
OS version       CentOS 7.x
Docker version   19.03.x
Pod CIDR         172.16.0.0/12
Service CIDR     10.96.0.0/12

Configure hosts on all nodes by editing /etc/hosts as follows:

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.32.129 k8s-master01
192.168.32.130 k8s-master02
192.168.32.131 k8s-master03
192.168.32.132 node01
192.168.32.133 node02
192.168.32.233 k8s-master-lb

Configure the yum repositories on CentOS 7 as follows:

yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo  # Aliyun base repo
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo    # Aliyun EPEL repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install essential tools

yum -y install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git

Disable the firewall, selinux, dnsmasq, and swap on all nodes

systemctl disable --now firewalld                               # --now also stops the service immediately
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition

swapoff -a && sysctl -w vm.swappiness=0    # turn swap off and temporarily set the swappiness weight to 0; disabling swap outright is generally discouraged, but for k8s clusters it is recommended

echo 'vm.swappiness = 0' >> /etc/sysctl.conf       # make the setting permanent

sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab            # comment out the swap entry in fstab
The kernel parameter vm.swappiness controls the relative weight given to swapping out runtime memory, and its value has a large effect on how the swap partition is used. The larger the value, the more aggressively swap is used; the smaller, the more physical memory is preferred. The default is swappiness=60, meaning the kernel starts using swap once memory usage exceeds 100-60=40%. swappiness=0 means physical memory is used as much as possible before swap; swappiness=100 means swap is used aggressively and in-memory data is moved to swap promptly. (Some sources say that on kernels 3.5+ and Red Hat kernels after 2.6.32, a value of 0 disables swap entirely and can trigger out-of-memory kills; in that case set it to 1.)

Tune the value to the kind of workload the server runs. For example, Oracle servers are usually set to 10; MySQL servers to 1, to avoid swap as much as possible.
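A quick way to confirm both the live value and the persisted setting (paths as configured above):

cat /proc/sys/vm/swappiness          # live value, should print 0
sysctl vm.swappiness                 # the same value via the sysctl interface
grep swappiness /etc/sysctl.conf     # the persisted line added earlier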

Install ntpdate

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum -y install ntpdate

Synchronize the time on all nodes; the time-sync configuration is as follows

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime       # set the local time zone
echo 'Asia/Shanghai' > /etc/timezone                          # record the machine's time zone
ntpdate time2.aliyun.com

Add it to crontab:
[root@k8s-master01 yum.repos.d]# crontab -l
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com

Configure limits on all nodes

ulimit -SHn 65535                 # set the max open-file limit of the current shell to 65535, effective immediately
vim /etc/security/limits.conf     # permanent; new shells read this file by default unless overridden by hand
# append the following at the end
[root@k8s-master01 yum.repos.d]# tail -6 /etc/security/limits.conf 
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
ulimit is a built-in Linux shell facility with a set of parameters that limit resource usage for the shell process it spawns and its children.
ulimit values are per-process: every process has its own limits.
Changes made with ulimit take effect immediately.
ulimit only affects the shell process and its children, and is lost once the user logs out.

# nofile     max number of open files
# soft       may be exceeded, with a warning
# hard       hard cap, cannot be exceeded
# memlock    max locked-in-memory address space
# nproc      max number of processes
# unlimited  no limit
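A new shell should then report the raised limits; a quick check against the limits.conf values above:

ulimit -n     # soft nofile, expect 655360
ulimit -Hn    # hard nofile, expect 655360
ulimit -u     # soft nproc, expect 655350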

Set up passwordless SSH from Master01 to the other nodes. The configuration files and certificates generated during installation (kubeadm does not need hand-generated certificates) are all created on Master01, and cluster administration is also done from Master01; on Alibaba Cloud or AWS you need a separate kubectl server. Key configuration is as follows:

ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 node01 node02;do ssh-copy-id -i /root/.ssh/id_rsa.pub $i;done

Download all the source files (running this on master01 is enough)

git clone https://gitee.com/luoluo160717/k8s-ha-install.git

Upgrade the OS on all nodes and reboot; this step does not upgrade the kernel, which is upgraded separately below

yum update -y --exclude=kernel* && reboot   # needed on CentOS 7

Kernel Configuration

CentOS 7 needs its kernel upgraded to 4.18+; this guide upgrades to 4.19.

Download the kernel on the master01 node

cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

for i in k8s-master01 k8s-master02 k8s-master03 node01 node02;do scp kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/;done   # distribute to the other nodes

Upgrade the kernel on all nodes

yum localinstall -y kernel-ml*

Change the kernel boot order on all nodes

grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg 
grubby --args="user_namespace.enable=1"  --update-kernel="$(grubby --default-kernel)"

Check that the default kernel is 4.19

[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

Reboot all nodes, then check that the running kernel is 4.19

[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Install ipvsadm on all nodes

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the ipvs modules on all nodes. On kernel 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18, use nf_conntrack_ipv4 instead.

vim /etc/modules-load.d/ipvs.conf   # does not exist by default
[root@k8s-master01 ~]# cat /etc/modules-load.d/ipvs.conf      # add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then run systemctl enable --now systemd-modules-load.service.

Enable some kernel parameters required by k8s clusters; configure them on all nodes

[root@k8s-master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf    
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
Run sysctl --system to load all the configuration files.
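To spot-check that the values took effect (any key written above will do):

sysctl net.ipv4.ip_forward net.core.somaxconn   # expect 1 and 16384
# note: the net.bridge.* keys only resolve once the br_netfilter module is loaded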

After configuring the kernel parameters on all nodes, reboot and make sure the modules are still loaded after the reboot

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

Basic Component Installation

docker-ce, the Kubernetes components, and so on

Install docker-ce 19.03 on all nodes

yum install docker-ce-19.03.* -y

Tip:

Newer kubelet versions recommend the systemd cgroup driver, so change Docker's CgroupDriver to systemd:

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://6h6ezoe5.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Enable docker to start on boot on all nodes

systemctl daemon-reload && systemctl enable --now docker

k8s and etcd Installation

Download the kubernetes server binaries tar.gz on Master01

GitHub - kubernetes/kubernetes: Production-Grade Container Scheduling and Management

Open the CHANGELOG-1.X.md file for the matching version

 

Click Server Binaries

Right-click to copy the link, or click to download

 

wget https://dl.k8s.io/v1.20.14/kubernetes-server-linux-amd64.tar.gz  # the copied link

If network problems keep you off github, replace dl.k8s.io with storage.googleapis.com/kubernetes-release/release

New download URL: wget https://storage.googleapis.com/kubernetes-release/release/v1.20.14/kubernetes-server-linux-amd64.tar.gz

Releases · etcd-io/etcd (github.com)

etcd binary package
URL: https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

If you cannot download it, it is also available from the netdisk:
https://pan.baidu.com/s/16k1WGvhRsdu2iro-GbkNBg  (code: 3941)

Unpack the archives

[root@k8s-master01 ~]# tar -xf  kubernetes-server-linux-amd64.tar.gz --strip-components 3 -C /usr/local/bin  kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
# extract only the listed binaries into /usr/local/bin/

[root@k8s-master01 ~]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz --strip-components 1 -C /usr/local/bin/ etcd-v3.4.13-linux-amd64/etcd{,ctl}

Check the versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.20.14
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.4.13
API version: 3.4

Copy the components to the other nodes

[root@k8s-master01 ~]# masternode='k8s-master02 k8s-master03'
[root@k8s-master01 ~]# worknode='node01 node02'

[root@k8s-master01 ~]# for i in $masternode;do scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $i:/usr/local/bin/; scp /usr/local/bin/etcd* $i:/usr/local/bin/;done

[root@k8s-master01 ~]# for i in $worknode;do scp /usr/local/bin/kube{let,-proxy} $i:/usr/local/bin/; done

Create the /opt/cni/bin directory on all nodes

mkdir -p /opt/cni/bin

On master01, switch to the manual-installation-v1.20.x branch (other versions have their own branches)

[root@k8s-master01 ~]# cd /root/k8s-ha-install/ 
[root@k8s-master01 k8s-ha-install]# git checkout manual-installation-v1.20.x
Branch manual-installation-v1.20.x set up to track remote branch manual-installation-v1.20.x from origin.
Switched to a new branch 'manual-installation-v1.20.x'

Generate Certificates

One-way and mutual authentication

What are SSL/TLS one-way and mutual authentication? One-way authentication means only one party validates the peer's certificate, usually the client validating the server: the client then needs ca.crt, while the server needs server.crt and server.key. Mutual authentication means both sides validate each other: the server validates every client and the client validates the server. The server needs server.key, server.crt, and ca.crt; the client needs client.key, client.crt, and ca.crt.
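As an illustration only (the file names are placeholders, not part of this setup), the two directions of validation map onto openssl like this:

openssl verify -CAfile ca.crt server.crt   # what the client does in one-way authentication
openssl verify -CAfile ca.crt client.crt   # what the server additionally does in mutual authentication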

Since Kubernetes v1.8 it has been recommended to enable mutual TLS authentication and RBAC authorization to harden cluster security. The popular way to enable TLS is to build a public key infrastructure (PKI) around an internally hosted certificate authority (CA); common PKI tools include CFSSL, OpenSSL, and others.

CFSSL is an open-source PKI/TLS tool from CloudFlare. It consists of a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates, and is written in Go.

CFSSL includes:

  • a set of tools for generating a custom TLS PKI

  • the cfssl program, CFSSL's command-line tool

  • the multirootca program, a certificate authority server that can use multiple signing keys

  • the mkbundle program, for building certificate pools

  • the cfssljson program, which takes the JSON output of the cfssl and multirootca programs and writes certificates, keys, CSRs, and bundles to disk

PKI provides trusted network identity via digital certificates and public-key cryptography. A certificate is typically a file containing the following identity information:

  • information about the organization that owns the certificate

  • the public key

  • information about the issuing organization

  • the rights granted by the issuer, such as the validity period, applicable hostnames, and usages

  • a digital signature created with the issuing organization's private key

Install cfssl

Here we only use the cfssl and cfssljson tools:

Download the certificate tools on Master01

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64  -O /usr/local/bin/cfssljson

[root@k8s-master01 ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

If network problems keep you from downloading them, they are also available from the netdisk:
https://pan.baidu.com/s/1VcGd_FQE5GoKsaU3Jx79QA  (code: 3941)

cfssl subcommands:

  • bundle: create a certificate bundle containing the client certificate

  • genkey: generate a key (private key) and CSR (certificate signing request)

  • scan: scan hosts for problems

  • revoke: revoke a certificate

  • certinfo: print certificate information for a given certificate; same as the cfssl-certinfo tool

  • gencrl: generate a new certificate revocation list

  • selfsign: generate a new self-signed key and signed certificate

  • print-defaults: print the default configuration, usable as a template

  • serve: start an HTTP API service

  • gencert: generate a new key and signed certificate

    • -ca: the CA certificate

    • -ca-key: the CA's private key file

    • -config: the JSON file describing the certificate request

    • -profile: matches a profile in -config; the certificate is generated according to that profile section

  • ocspdump

  • ocspsign

  • info: get information about a remote signer

  • sign: sign a client certificate with the given CA, CA key, and hostname

  • ocsprefresh

  • ocspserve

Common cfssl commands:

  • cfssl gencert -initca ca-csr.json | cfssljson -bare ca ## initialize a CA

  • cfssl gencert -initca -ca-key key.pem ca-csr.json | cfssljson -bare ca ## regenerate using an existing private key

  • cfssl certinfo -cert ca.pem

  • cfssl certinfo -csr ca.csr

Create the etcd certificate directory on all Master nodes

[root@k8s-master01 ~]# mkdir /etc/etcd/ssl -p

Create the kubernetes directories on all nodes

[root@k8s-master01 ~]# mkdir -p /etc/kubernetes/pki

Generate the etcd certificates on the Master01 node

Generate the certificates from CSR files: a CSR is a certificate signing request, configured with domain names, company, and organizational unit.

cd /root/k8s-ha-install/pki

## cfssl print-defaults config > config.json   # default signing-policy template
## cfssl print-defaults csr > csr.json         # default CSR template

Initialize the etcd CA root certificate and its key:

[root@k8s-master01 pki]# cfssl gencert -initca etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

[root@k8s-master01 pki]# ls /etc/etcd/ssl/
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem

This produces the etcd CA certificate (etcd-ca.pem), the etcd CA private key (etcd-ca-key.pem), and the etcd CA request file (etcd-ca.csr).

Generate the etcd client/server certificate:

[root@k8s-master01 pki]# cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.32.129,192.168.32.130,192.168.32.131,192.168.32.144,192.168.32.145 -profile=kubernetes etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

192.168.32.144 and 192.168.32.145 are reserved addresses; you can list a few extra for future members.
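To confirm the hostnames actually landed in the certificate, you can inspect its SAN list with openssl (a quick sanity check):

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
# should list the three master hostnames plus every IP passed via -hostname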

Copy the certificates to the other etcd nodes, i.e. master02 and master03 in this lab

The $masternode variable was defined earlier; make sure it is still set.

[root@k8s-master01 ~]# for i in $masternode;do ssh $i "mkdir -p /etc/etcd/ssl";for j in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem;do scp /etc/etcd/ssl/${j} $i:/etc/etcd/ssl/${j};done;done

Generate the kubernetes certificates on the Master01 node

Initialize the root CA for the k8s cluster:
[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
This certificate belongs to the apiserver. A k8s-master* name is added so internal private DNS resolution can use it (it may be removed). As for whether the several kubernetes* names can be deleted, the answer is no: once the cluster is up, a svc named kubernetes is created in the default namespace, and some components talk to the API directly through that svc; if the certificate does not cover those names, they may fail to connect. The other kubernetes-prefixed domains serve the same purpose.
    hosts defines the authorization scope; a node or service outside this list that uses the certificate will get a certificate-mismatch error.
    10.96.0.1 is the first IP of the service-cluster-ip-range passed to kube-apiserver.
    # -hostname works like a certificate's subject names, e.g. a certificate can cover *.youku.com or *.google.com
[root@k8s-master01 pki]# cfssl gencert   -ca=/etc/kubernetes/pki/ca.pem   -ca-key=/etc/kubernetes/pki/ca-key.pem   -config=ca-config.json   -hostname=10.96.0.1,192.168.32.233,127.0.0.1,k8s-master*,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.32.129,192.168.32.130,192.168.32.131   -profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
Initialize the apiserver aggregation (front-proxy) CA and client certificate:

[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

[root@k8s-master01 pki]# cfssl gencert   -ca=/etc/kubernetes/pki/front-proxy-ca.pem   -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   -config=ca-config.json   -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
Generate the controller-manager certificate:

[root@k8s-master01 pki]# cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Note: if this is not an HA cluster, change 192.168.32.233:8443 to master01's address, and 8443 to the apiserver port (6443 by default)
# set-cluster: define a cluster entry; 192.168.32.233 is the VIP
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.32.233:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-context: define a context entry
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig 

# set-credentials: define a user entry
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# use-context: make this context the default
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Generate the scheduler certificate:

[root@k8s-master01 pki]# cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# Note: if this is not an HA cluster, change 192.168.32.233:8443 to master01's address, and 8443 to the apiserver port (6443 by default)

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.32.233:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Generate the admin certificate:

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# Note: if this is not an HA cluster, change 192.168.32.233:8443 to master01's address, and 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true     --server=https://192.168.32.233:8443     --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin     --client-certificate=/etc/kubernetes/pki/admin.pem     --client-key=/etc/kubernetes/pki/admin-key.pem     --embed-certs=true     --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes     --cluster=kubernetes     --user=kubernetes-admin     --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes     --kubeconfig=/etc/kubernetes/admin.kubeconfig

Create the ServiceAccount key pair

[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048

[root@k8s-master01 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
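A quick way to confirm sa.key and sa.pub form a matching pair is to re-derive the public key and compare; no output means they match:

openssl rsa -in /etc/kubernetes/pki/sa.key -pubout 2>/dev/null | diff - /etc/kubernetes/pki/sa.pub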

Send the certificates to the other nodes

for NODE in k8s-master02 k8s-master03; do 
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do 
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done; 
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do 
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done;

Check the certificate files

[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr      apiserver.csr      ca.csr      controller-manager.csr      front-proxy-ca.csr      front-proxy-client.csr      sa.key         scheduler-key.pem
admin-key.pem  apiserver-key.pem  ca-key.pem  controller-manager-key.pem  front-proxy-ca-key.pem  front-proxy-client-key.pem  sa.pub         scheduler.pem
admin.pem      apiserver.pem      ca.pem      controller-manager.pem      front-proxy-ca.pem      front-proxy-client.pem      scheduler.csr
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ |wc -l
23

Kubernetes System Component Configuration

etcd configuration (all Master nodes)

The etcd configuration is largely the same everywhere; adjust the hostname and IP addresses in each Master node's etcd config.

Master01 node

[root@k8s-master01 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.32.129:2380'
listen-client-urls: 'https://192.168.32.129:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.32.129:2380'
advertise-client-urls: 'https://192.168.32.129:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.32.129:2380,k8s-master02=https://192.168.32.130:2380,k8s-master03=https://192.168.32.131:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

Master02 node

[root@k8s-master02 ~]#  vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.32.130:2380'
listen-client-urls: 'https://192.168.32.130:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.32.130:2380'
advertise-client-urls: 'https://192.168.32.130:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.32.129:2380,k8s-master02=https://192.168.32.130:2380,k8s-master03=https://192.168.32.131:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

Master03 node

[root@k8s-master03 ~]#  vim /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.32.131:2380'
listen-client-urls: 'https://192.168.32.131:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.32.131:2380'
advertise-client-urls: 'https://192.168.32.131:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.32.129:2380,k8s-master02=https://192.168.32.130:2380,k8s-master03=https://192.168.32.131:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

Create the service

Create the etcd service and start it (all Master nodes)

vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

Create the etcd certificate directory (all Master nodes)

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

Check the cluster status (any master)

[root@k8s-master01 ~]# export ETCDCTL_API=3
[root@k8s-master01 ~]# etcdctl --endpoints="192.168.32.129:2379,192.168.32.130:2379,192.168.32.131:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.32.129:2379 | c391821cb3f30b1a |  3.4.13 |   25 kB |     false |      false |      2315 |         10 |                 10 |        |
| 192.168.32.130:2379 | d65c253897bed4d4 |  3.4.13 |   20 kB |      true |      false |      2315 |         10 |                 10 |        |
| 192.168.32.131:2379 | f6e8226e0b0ed00e |  3.4.13 |   20 kB |     false |      false |      2315 |         10 |                 10 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
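Besides endpoint status, endpoint health confirms that each member answers within the timeout (same endpoints and TLS flags as above):

etcdctl --endpoints="192.168.32.129:2379,192.168.32.130:2379,192.168.32.131:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health
# each endpoint should report: is healthy: successfully committed proposal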

High-Availability Configuration

HA configuration (note: if this is not an HA cluster, haproxy and keepalived do not need to be installed). If you are installing on the cloud, skip this chapter as well and use the cloud provider's load-balancing service directly.

Install keepalived and haproxy (all Master nodes)

yum install keepalived haproxy -y

Configure HAProxy on all Master nodes; the configuration is identical everywhere

vim /etc/haproxy/haproxy.cfg 
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01    192.168.32.129:6443  check
  server k8s-master02    192.168.32.130:6443  check
  server k8s-master03    192.168.32.131:6443  check

Configure Keepalived (Master nodes)

Mind each node's IP and NIC (the interface parameter)

Master01

vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    mcast_src_ip 192.168.32.129
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.32.233
    }
    track_script {
      chk_apiserver 
} }

Master02

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
 
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    mcast_src_ip 192.168.32.130
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.32.233
    }
    track_script {
      chk_apiserver 
} }

Master03

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
 
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    mcast_src_ip 192.168.32.131
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.32.233
    }
    track_script {
      chk_apiserver 
} }

Health-check script (all master nodes)

cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# make it executable; note the quoted 'EOF' delimiter, which keeps the $(...) and $err expressions from expanding while the file is written
chmod +x /etc/keepalived/check_apiserver.sh
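The script can be exercised by hand before relying on it; with haproxy running it should exit 0 (stop haproxy briefly to see the failure path, which also stops keepalived):

bash /etc/keepalived/check_apiserver.sh; echo $?   # expect 0 while haproxy is up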

Start haproxy and keepalived (all master nodes)

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

VIP test (master01)

Important: if keepalived and haproxy are installed, verify that keepalived actually works.

# the VIP is now bound to the ens32 NIC
[root@k8s-master01 pki]# ip addr show ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ce:bd:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.129/24 brd 192.168.32.255 scope global dynamic ens32
       valid_lft 1164sec preferred_lft 1164sec
    inet 192.168.32.233/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fece:bdc6/64 scope link 
       valid_lft forever preferred_lft forever
# on any node, check haproxy
telnet 192.168.32.233 8443
If the VIP does not answer ping, or telnet never shows the "]" prompt, treat the VIP as broken and do not continue; troubleshoot keepalived, e.g. the firewall and selinux, the haproxy and keepalived status, and the listening ports:
All nodes: the firewall must be disabled and inactive: systemctl status firewalld
All nodes: selinux must be disabled: getenforce
Master nodes: check the haproxy and keepalived status: systemctl status keepalived haproxy
Master nodes: check the listening ports: netstat -lntp 

Kubernetes Component Configuration

Apiserver

Create the kube-apiserver service on all Master nodes. Note: if this is not an HA cluster, change 192.168.32.233 to master01's address.

Note: this document uses 10.96.0.0/12 as the k8s service CIDR; it must not overlap the host network or the Pod CIDR. Adjust as needed.

Master01 configuration

[root@k8s-master01 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.32.129 \            # host IP (remember to delete these inline comments before saving)
      --service-cluster-ip-range=10.96.0.0/12  \     # service CIDR
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.32.129:2379,https://192.168.32.130:2379,https://192.168.32.131:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target

Master02

[root@k8s-master02 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.32.130 \          # host IP (remember to delete these inline comments before saving)
      --service-cluster-ip-range=10.96.0.0/12  \   # service CIDR
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.32.129:2379,https://192.168.32.130:2379,https://192.168.32.131:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target

Master03

[root@k8s-master03 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.32.131 \          # host IP (remember to delete these inline comments before saving)
      --service-cluster-ip-range=10.96.0.0/12  \   # service CIDR
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.32.129:2379,https://192.168.32.130:2379,https://192.168.32.131:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target

Start the apiserver (all Master nodes)

systemctl daemon-reload && systemctl enable --now kube-apiserver

# check the kube-apiserver status
systemctl status kube-apiserver

Configure the kube-controller-manager service (all Master nodes)

Note: this document uses 172.16.0.0/12 as the k8s Pod CIDR; it must not overlap the host network or the k8s Service CIDR. Adjust as needed.

The configuration file is identical on all three Master nodes:
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
      
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target

Start it (all master nodes)

systemctl daemon-reload && systemctl enable --now kube-controller-manager

Check the status (all master nodes)

systemctl  status kube-controller-manager

Configure the kube-scheduler service (all Master nodes)

The configuration file is identical on all master nodes

vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
​
[Install]
WantedBy=multi-user.target

Start it

 systemctl daemon-reload && systemctl enable --now kube-scheduler

Check the status

systemctl status kube-scheduler

TLS Bootstrapping Configuration

Kubernetes introduced TLS bootstrapping in version 1.4 (as I recall); it mainly solves the following problem:

Once TLS authentication is enabled in the cluster, every node's kubelet must use a valid certificate signed by the CA the apiserver uses before it can talk to the apiserver; as the node count grows, signing a certificate for each node by hand becomes very tedious.

TLS bootstrapping lets the kubelet first connect to the apiserver as a predefined low-privilege user and request a certificate from it; the kubelet's certificate is then signed dynamically by the apiserver, working together with the RBAC authorization model.
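Once nodes start joining (after the kubelet section below), the bootstrap CSRs and their approval can be watched from master01; the output below is only illustrative:

kubectl get csr
# NAME        AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
# csr-xxxxx   10s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:c8ad9c   Approved,Issued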

Create the bootstrap (Master01 node)

Note: if this is not an HA cluster, change 192.168.32.233:8443 to master01's address, and 8443 to the apiserver port (6443 by default).

[root@k8s-master01 ~]# cd /root/k8s-ha-install/bootstrap
kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true     --server=https://192.168.32.233:8443     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user     --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes     --cluster=kubernetes     --user=tls-bootstrap-token-user     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

Note: if you modify the token-id and token-secret in bootstrap.secret.yaml, keep all the related fields consistent with each other, and with the --token field in the commands above.
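The correspondence can be checked directly in the file: the --token above is <token-id>.<token-secret> (c8ad9c.2e4d610cf3e7426e), the Secret must be named bootstrap-token-<token-id>, and its token-id and token-secret fields must match:

grep -E 'name: bootstrap-token-|token-id|token-secret' bootstrap/bootstrap.secret.yaml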

 

[root@k8s-master01 bootstrap]# mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml 

Node Configuration

Copy the certificates to the Node nodes (here the masters also act as nodes). Since 1.19 it is recommended to install kubelet and kube-proxy on each master as well; they consume few resources, and business pods can be kept off the masters with taints and similar mechanisms, as shown below.
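For example, once the masters have registered (after the kubelet section), business pods can be kept off them with the standard master taint; a minimal sketch:

kubectl taint nodes k8s-master01 k8s-master02 k8s-master03 node-role.kubernetes.io/master=:NoSchedule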

cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 node01 node02; do
     ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
     for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
       scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
     done
     for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
       scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
 done
 done

Kubelet Configuration

Create the needed directories (all nodes)

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

Configure the kubelet service (all nodes)

vim  /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

Configuration file for the kubelet service (all nodes)

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

kubelet's configuration file (all nodes); then start kubelet on every node

Note: if you changed the k8s service CIDR, update the clusterDNS: entry in kubelet-conf.yml to the tenth address of the k8s Service CIDR, e.g. 10.96.0.10 (the service CIDR configured at the start is 10.96.0.0/12); see the one-liner below.
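If you do change it, a one-line edit before starting kubelet is enough; 10.100.0.10 below is only an example value:

sed -i 's/10.96.0.10/10.100.0.10/' /etc/kubernetes/kubelet-conf.yml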

vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

Start kubelet (all nodes)

systemctl daemon-reload
systemctl enable --now kubelet

Check the cluster state (on master01)

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   <none>   6m19s   v1.20.14
k8s-master02   NotReady   <none>   5m57s   v1.20.14
k8s-master03   NotReady   <none>   5m57s   v1.20.14
node01         NotReady   <none>   6m48s   v1.20.14
node02         NotReady   <none>   6m48s   v1.20.14

kube-proxy Configuration

Note: if this is not an HA cluster, change 192.168.32.233:8443 to master01's address, and 8443 to the apiserver port (6443 by default).

Run on Master01

[root@k8s-master01 ~]# cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy         --clusterrole system:node-proxier         --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
    --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true     --server=https://192.168.32.233:8443     --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes     --token=${JWT_TOKEN}     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes     --cluster=kubernetes     --user=kubernetes     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

Send the kube-proxy systemd service file to the other nodes (on master01)

If you changed the cluster's Pod CIDR, update the clusterCIDR: 172.16.0.0/12 parameter in kube-proxy/kube-proxy.conf to your Pod CIDR.

[root@k8s-master01 ~]# vim /root/k8s-ha-install/kube-proxy/kube-proxy.conf
clusterCIDR: 172.16.0.0/12   # change this field to your own pod CIDR

Distribute the configuration files (on master01)

[root@k8s-master01 ~]# cd /root/k8s-ha-install
for NODE in k8s-master01 k8s-master02 k8s-master03; do
     scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
     scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
     scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
 done
for NODE in node01 node02; do
     scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
     scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
     scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
 done

Start kube-proxy (all nodes)

 systemctl daemon-reload && systemctl enable --now kube-proxy

Install Calico

[root@k8s-master01 ~]# cd /root/k8s-ha-install/calico/

# modify the following places in calico-etcd.yaml

sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.32.129:2379,https://192.168.32.130:2379,https://192.168.32.131:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

# change this to your own pod CIDR
POD_SUBNET="172.16.0.0/12"

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

Apply it

[root@k8s-master01 calico]# kubectl apply -f calico-etcd.yaml

Check the pod status

[root@k8s-master01 calico]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-6jp52   1/1     Running   0          3m27s
calico-node-2glmr                          1/1     Running   0          3m26s
calico-node-bvfkj                          1/1     Running   0          3m26s
calico-node-ffbwq                          1/1     Running   0          3m26s
calico-node-tpk2b                          1/1     Running   0          3m26s
calico-node-vv9r7                          1/1     Running   0          3m26s
[root@k8s-master01 calico]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   43m   v1.20.14
k8s-master02   Ready    <none>   43m   v1.20.14
k8s-master03   Ready    <none>   43m   v1.20.14
node01         Ready    <none>   43m   v1.20.14
node02         Ready    <none>   43m   v1.20.14

Install CoreDNS

If you changed the k8s service CIDR, change coredns's service IP to the tenth IP of the service CIDR.

cd /root/k8s-ha-install/
sed -i "s#10.96.0.10#x.x.x.10#g" CoreDNS/coredns.yaml

Install coredns

[root@k8s-master01 k8s-ha-install]# kubectl  create -f CoreDNS/coredns.yaml

Install Metrics Server

In recent Kubernetes versions, system resource metrics are collected by Metrics-server, which can report node and Pod memory, disk, CPU, and network usage.

[root@k8s-master01 ~]# cd /root/k8s-ha-install/metrics-server-0.4.x/
[root@k8s-master01 metrics-server-0.4.x]# kubectl  create -f . 

Install the Dashboard

The Dashboard displays the various resources in the cluster; it can also tail Pod logs and run commands inside containers in real time.

Install the pinned dashboard version

[root@k8s-master01 ~]# cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# kubectl  create -f .

Log in to the dashboard

# change the dashboard svc to NodePort
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change ClusterIP to NodePort.

# look up the port number
[root@k8s-master01 dashboard]#  kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.107.11.77   <none>        443:31214/TCP   2m49s

# with your own instance's port number, the dashboard is reachable through any host running kube-proxy, or through the VIP, at IP:port

Browse to: https://192.168.32.233:31214

Get the token value

[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

 

Cluster Validation

Install busybox (on master01)

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Validation steps (on master01)

1.  Pods must be able to resolve Services
2.  Pods must be able to resolve Services across namespaces
3.  Every node must be able to reach the kubernetes svc on 443 and the kube-dns service on 53
4.  Pod-to-Pod traffic must work
    a)  within the same namespace
    b)  across namespaces
    c)  across machines

Walkthrough (on master01)

# first check that the pod came up
[root@k8s-master01 ~]# kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          3m11s

# check that the svc is there
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   163m

# check that a Pod can resolve a Service
[root@k8s-master01 ~]# kubectl exec  busybox -n default -- nslookup kubernetes 
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

# check that a Pod can resolve a Service in another namespace
[root@k8s-master01 ~]# kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

# if your results match the above, it works

Verify with telnet

# install telnet on all nodes; skip if already present
yum install -y telnet

# on every machine: 10.96.0.1 443 is the kubernetes svc; 10.96.0.10 53 is the kube-dns service
# if the connection is not dropped immediately, it works
telnet 10.96.0.1 443
telnet 10.96.0.10 53

Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

Verify with curl (all machines)

[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server

Container checks (on master01)

[root@k8s-master01 ~]# kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-pq2qw   1/1     Running   0          62m
calico-node-75blv                          1/1     Running   0          62m
calico-node-hw27b                          1/1     Running   0          62m
calico-node-k2wdf                          1/1     Running   0          62m
calico-node-l58lz                          1/1     Running   0          62m
calico-node-v2qlq                          1/1     Running   0          62m
coredns-867d46bfc6-8vzrk                   1/1     Running   0          72m
metrics-server-595f65d8d5-kgn8c            1/1     Running   0          60m
[root@k8s-master01 ~]# kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5f6d4b864b-pq2qw   1/1     Running   0          63m   192.168.1.100   k8s-master01   <none>           <none>
calico-node-75blv                          1/1     Running   0          63m   192.168.1.103   k8s-node01     <none>           <none>
calico-node-hw27b                          1/1     Running   0          63m   192.168.1.101   k8s-master02   <none>           <none>
calico-node-k2wdf                          1/1     Running   0          63m   192.168.1.100   k8s-master01   <none>           <none>
calico-node-l58lz                          1/1     Running   0          63m   192.168.1.102   k8s-master03   <none>           <none>
calico-node-v2qlq                          1/1     Running   0          63m   192.168.1.104   k8s-node02     <none>           <none>
coredns-867d46bfc6-8vzrk                   1/1     Running   0          73m   172.161.125.2   k8s-node01     <none>           <none>
metrics-server-595f65d8d5-kgn8c            1/1     Running   0          62m   172.161.125.1   k8s-node01     <none>           <none>
# if you can get a shell, it is OK
[root@k8s-master01 ~]# kubectl exec -it calico-node-v2qlq -n  kube-system  -- sh
sh-4.4#
# exec in on node01, then being able to ping node02 is enough
[root@k8s-master01 ~]# kubectl exec -it calico-node-v2qlq -n  kube-system  -- bash
[root@k8s-node02 /]# ping 192.168.1.104
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=64 time=0.090 ms
^C
--- 192.168.1.104 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 46ms
rtt min/avg/max/mdev = 0.090/0.106/0.123/0.019 ms

Common Tweaks

# change this on all nodes
vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://6h6ezoe5.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
  "max-size": "300m",
  "max-file": "2"
},
"live-restore": true
}
# restart docker on all nodes after the change
systemctl daemon-reload && systemctl restart docker

max-concurrent-downloads # download concurrency: max threads used to pull an image
max-concurrent-uploads   # upload concurrency, same idea
max-size                 # rotate the log file once it reaches this size
max-file                 # number of log files to keep
live-restore             # with this enabled, restarting docker does not restart the containers

vim /usr/lib/systemd/system/kube-controller-manager.service
# on all three master nodes, add this flag somewhere in the ExecStart block
--experimental-cluster-signing-duration=876000h0m0s \

# restart after the change
systemctl daemon-reload && systemctl restart kube-controller-manager

## Certificates issued during TLS bootstrapping are actually signed by the kube-controller-manager component, so the certificate lifetime is controlled by kube-controller-manager.
## The default is 8760h0m0s; changing it to 87600h0m0s signs TLS bootstrapping certificates for 10 years.
## Bootstrapping automatically renews certificates shortly before they expire.
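To see the effect on a joined node, read the validity dates of the kubelet's client certificate (the path below is where kubelet's certificate manager keeps the rotated certificate by default):

openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates
# notBefore/notAfter show the window the controller-manager signed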

# on all nodes, switch to the following configuration file
[root@k8s-node02 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf 
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384    --image-pull-progress-deadline=30m "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
​
​
​
#增加了  --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        安全加固
        
 #增加了   --image-pull-progress-deadline=30m 
#镜像拉取进度最大时间,如果在这段时间拉取镜像没有任何进展,将取消拉取
