
K8S 1.22.3 Multi-Master Node Deployment




Cluster nodes:

master01    192.168.90.110
master02    192.168.90.111
master03    192.168.90.112
node01      192.168.90.113
node02      192.168.90.114

1. Deploy the K8S prerequisite environment

Reference: "Deploying K8S 1.22.3 with kubeadm: pitfalls" (ice_bird's column, CSDN blog)

2. Enable iptables forwarding on all master nodes

iptables -P FORWARD ACCEPT
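
This sets the default FORWARD policy for the current boot only. One way to persist it across reboots (an assumption; the exact mechanism is distro-dependent) is to append it to /etc/rc.d/rc.local on CentOS 7:

echo 'iptables -P FORWARD ACCEPT' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local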

3. Install the keepalived and haproxy services on all master nodes

yum -y install haproxy keepalived

Edit the haproxy configuration file /etc/haproxy/haproxy.cfg:


#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000


#---------------------------------------------------------------------
# apiserver frontend which proxys to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:16443       # the port can be adjusted
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
    server  master01 192.168.90.110:6443 check
    server  master02 192.168.90.111:6443 check
    server  master03 192.168.90.112:6443 check
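
Before enabling the service, the file can be syntax-checked with haproxy itself:

haproxy -c -f /etc/haproxy/haproxy.cfg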

Edit the keepalived configuration file /etc/keepalived/keepalived.conf:

! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}


vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"         # 检测脚本路径
    interval 3
    weight -2
    fall 10
    rise 2
}



vrrp_instance VI_1 {
    state MASTER             # MASTER on the primary node, BACKUP on the others
    interface ens192         # network interface name
    virtual_router_id 51     # must be identical on all nodes
    priority 100             # 100 on the primary; lower the value on each subsequent node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.90.115        # keepalived virtual IP, adjust as needed
    }

    track_script {
        check_apiserver       # health-check script defined above
    }
}

Create the /etc/keepalived/check_apiserver.sh health-check script that the keepalived configuration references:

#!/bin/sh

# APISERVER_VIP       - the virtual IP of the keepalived cluster
# APISERVER_DEST_PORT - the Kubernetes API Server port configured in haproxy.cfg
APISERVER_VIP=192.168.90.115
APISERVER_DEST_PORT=16443

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi

Make the script executable so keepalived can run it:

chmod +x /etc/keepalived/check_apiserver.sh

Adjust the keepalived configuration on each of the other master nodes; state, interface, and priority differ from node to node.
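
For example, on master02 the differing lines might look like the sketch below. The priority value is an assumption; note that the gap between node priorities should be smaller than the check script's weight deduction (2), otherwise a failed health check alone will not trigger failover:

    state BACKUP
    priority 99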

Enable and start keepalived and haproxy:

# systemctl enable haproxy --now
# systemctl enable keepalived --now

Once the services are running, the virtual IP address can be seen on the keepalived MASTER node.

You can stop haproxy to test failover; the virtual IP will migrate to the surviving node with the highest priority.
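
A quick verification sketch, assuming the ens192 interface name from the keepalived configuration:

# on master01 (current MASTER)
systemctl stop haproxy
# on master02, the VIP should appear within a few seconds
ip addr show ens192 | grep 192.168.90.115
# restore master01 afterwards
systemctl start haproxy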

4. Install kubeadm, kubelet, and kubectl on all nodes
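
A minimal install sketch for CentOS 7, assuming the Aliyun yum mirror (chosen to match the registry.aliyuncs.com image repository used in the config below); pin the versions to the release being deployed:

cat >/etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum -y install kubelet-1.22.3 kubeadm-1.22.3 kubectl-1.22.3
systemctl enable kubelet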

5. On master01, create kubeadm-config.yaml for initialization:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.90.110               # this node's address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.90.115:16443"      # the virtual IP address and haproxy port
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.22.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
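With the file in place, initialization on master01 typically looks like this; the --upload-certs flag uploads the control-plane certificates and prints the certificate key that the master join command below requires:

kubeadm init --config kubeadm-config.yaml --upload-certs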

After initialization completes, first run the following on master01:

# run
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
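
At this point kubectl on master01 should be able to reach the API server through the VIP; the node will show NotReady until a CNI plugin is installed, which is expected at this stage:

kubectl get nodes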

Then copy the following files to the other master nodes:

    scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} root@192.168.90.111:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@192.168.90.111:/etc/kubernetes/pki/etcd
    scp /etc/kubernetes/admin.conf root@192.168.90.111:/etc/kubernetes/

    scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} root@192.168.90.112:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@192.168.90.112:/etc/kubernetes/pki/etcd
    scp /etc/kubernetes/admin.conf root@192.168.90.112:/etc/kubernetes/
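
On a freshly provisioned master the /etc/kubernetes/pki/etcd directory may not exist yet; creating it first is a precaution (not part of the original steps) that avoids scp failures:

    ssh root@192.168.90.111 "mkdir -p /etc/kubernetes/pki/etcd"
    ssh root@192.168.90.112 "mkdir -p /etc/kubernetes/pki/etcd"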

After the files are copied, run the join command printed by the initialization on each of the other master nodes. The master join command differs from the node join command by the extra --control-plane and --certificate-key flags:

kubeadm join 192.168.90.115:16443 --token jbze8q.1hs7cgz1tso347j8 --discovery-token-ca-cert-hash sha256:7690c7d2e7e1727506174918fdf9abcee2e5dab2775ac47e24485991fd6abcde --control-plane --certificate-key 621b4cf9efbd111123dc17d1ed55ff5b14b49937442b09c1dcec4415530731b7

Run the following command on the worker nodes:

kubeadm join 192.168.90.115:16443 --token jbze8q.1hs7cgz1tso347j8 --discovery-token-ca-cert-hash sha256:7690c7d2e7e1727506174918fdf9abcee2e5dab2775ac47e24485991fd6abcde

If the token has expired by the time you run kubeadm join, regenerate the join command with:

kubeadm token create --print-join-command

When adding a master node, the --certificate-key is also needed; it can be regenerated as follows:

kubeadm init phase upload-certs --upload-certs
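
The two outputs can be combined into a complete control-plane join command; a small shell sketch (assuming, as kubeadm currently prints it, that the certificate key is the last line of the upload-certs output):

JOIN_CMD=$(kubeadm token create --print-join-command)
CERT_KEY=$(kubeadm init phase upload-certs --upload-certs | tail -1)
echo "$JOIN_CMD --control-plane --certificate-key $CERT_KEY"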

If coredns fails to start, check the kube-controller-manager static pod manifest /etc/kubernetes/manifests/kube-controller-manager.yaml and add the CIDR flags to its command section:

    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
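
Alternatively, declaring the pod subnet in the ClusterConfiguration before kubeadm init avoids the manual manifest edit; a sketch assuming the flannel default 10.244.0.0/16 implied by the flags above:

networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16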

References:

kubeadm ha-considerations.md (kubernetes/kubeadm, GitHub): https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#options

Creating Highly Available Clusters with kubeadm (Kubernetes docs): https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/high-availability/

Source: https://blog.csdn.net/ice_bird/article/details/121790384