Installing Kubernetes 1.22 on CentOS 7.9.2009

Published 2021-10-02 20:30:47 · 431 views · Source: Internet



Contents

1.1 Disable the swap partition

1.2 Install Docker/kubeadm/kubelet [all nodes]

1.3 Create a master node

1.4 Join the node machines to the cluster

1.5 Deploy the container network (CNI)

1.6 Test the Kubernetes cluster

1.7 Deploy the Web UI (Dashboard)

2 Troubleshooting


Role         IP address
k8s-master   192.168.237.5
k8s-node01   192.168.237.15
k8s-node02   192.168.237.25

1.1 Disable the swap partition

 # Disable the firewall
 $ systemctl stop firewalld.service
 $ systemctl disable firewalld.service

 # Disable SELinux
 $ sed -i.bak -r 's@(SELINUX=)enforcing@\1disabled@' /etc/selinux/config # permanent
 $ setenforce 0                                                          # temporary

 # Disable the swap partition
 $ swapoff -a     # temporary
 $ vim /etc/fstab # permanent: comment out the swap line

 # Set the hostname
 $ hostnamectl set-hostname <hostname>

 # Add hosts entries on the master (use the addresses from the topology table above)
 $ cat >> /etc/hosts <<EOF
 192.168.237.5  k8s-master
 192.168.237.15 k8s-node01
 192.168.237.25 k8s-node02
 EOF

 # Pass bridged IPv4 traffic to the iptables chains
 $ cat > /etc/sysctl.d/k8s.conf <<EOF
 net.bridge.bridge-nf-call-ip6tables = 1
 net.bridge.bridge-nf-call-iptables = 1
 EOF
 $ sysctl --system # apply

 # Synchronize the time
 $ yum install ntpdate -y
 $ ntpdate time.windows.com

 # Set the OS time zone
 $ ll /etc/localtime
 lrwxrwxrwx. 1 root root 38 Jul 11 08:51 /etc/localtime -> ../usr/share/zoneinfo/America/New_York
 $ ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
 ‘/etc/localtime’ -> ‘/usr/share/zoneinfo/Asia/Shanghai’
 $ date
 Tue Sep 21 12:54:40 CST 2021
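The SELinux sed rule above can be sanity-checked without touching the real system. A small sketch, run against a throwaway copy standing in for /etc/selinux/config:

```shell
# Run the same substitution against a scratch file instead of the real
# /etc/selinux/config, then confirm the result.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i.bak -r 's@(SELINUX=)enforcing@\1disabled@' "$cfg"
result=$(grep '^SELINUX=' "$cfg")
echo "$result"        # SELINUX=disabled
rm -f "$cfg" "$cfg.bak"
```

Note that the `-i.bak` suffix keeps a backup of the original file, so the change is easy to revert.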

CentOS 7 uses firewalld; CentOS 6 used iptables (a user-space front end).

1.2 Install Docker/kubeadm/kubelet [all nodes]

Kubernetes here defaults to Docker as its CRI (container runtime), so install Docker first.

1.2.1 Configure the Yum repositories

 $ wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
 $ wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

1.2.2 Install Docker

 $ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
 $ yum install -y docker-ce
 $ systemctl enable --now docker

Configure a registry mirror for faster image pulls and set the cgroup driver:

 $ sudo mkdir -pv /etc/docker
 # Newer Kubernetes releases default to the systemd cgroup driver, so switch Docker's
 # default cgroup driver from cgroupfs to systemd via "exec-opts": ["native.cgroupdriver=systemd"]
 $ sudo tee /etc/docker/daemon.json << 'EOF'
 {
   "registry-mirrors": ["https://po13h3y1.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"]
 }
 EOF
 ​
 $ sudo systemctl daemon-reload
 $ sudo systemctl restart docker
 $ docker info
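A malformed /etc/docker/daemon.json prevents the Docker daemon from starting at all, so it is worth validating the JSON before the restart. A sketch (assumes python3 is available; shown against a throwaway copy of the file written above):

```shell
# Validate the daemon.json contents before restarting Docker; any JSON
# syntax error here would stop the daemon from coming back up.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "registry-mirrors": ["https://po13h3y1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
ok=$(python3 -m json.tool "$f" > /dev/null 2>&1 && echo OK || echo BAD)
echo "$ok"            # OK
rm -f "$f"
```

After the restart, `docker info` should report `Cgroup Driver: systemd`.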

1.2.3 Add the Alibaba Cloud Kubernetes Yum repository

 # Configure the Kubernetes repository
 $ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
 [kubernetes]
 name=Kubernetes
 baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
 enabled=1
 gpgcheck=1
 repo_gpgcheck=1
 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
 EOF
 ​
 $ yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2 # pin to the version used below
 $ systemctl enable kubelet && systemctl start kubelet

Master node components: apiserver, kube-scheduler, kube-controller-manager, etcd

Worker node components: kubelet (not containerized), kube-proxy

kubeadm does more than simplify deployment: it runs the Kubernetes control-plane components themselves as containers.

Only the kubelet is deployed outside of containers.

1.3 Create a master node

kubeadm init | Kubernetes

Creating a cluster with kubeadm | Kubernetes

Run on 192.168.237.5 (the master node):

 $ kubeadm init \
 --apiserver-advertise-address=192.168.237.5 \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.22.2 \
 --service-cidr=10.96.0.0/12 \
 --pod-network-cidr=10.244.0.0/16 \
 --ignore-preflight-errors=all
  • --apiserver-advertise-address: the address the cluster advertises

  • --image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so use the Alibaba Cloud mirror instead

  • --kubernetes-version: the Kubernetes version; must match the packages installed above

  • --service-cidr: the cluster-internal virtual network for Services, the unified entry point to Pods

  • --pod-network-cidr: the Pod network; must match the CNI manifest deployed below

  • --ignore-preflight-errors: ignore preflight errors, e.g. IsPrivilegedUser or Swap
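One constraint worth checking before running kubeadm init: the --service-cidr and --pod-network-cidr ranges must not overlap. A quick sketch of that check (assumes python3 on the host):

```shell
# Verify that the Service CIDR and the Pod CIDR passed to kubeadm init
# are disjoint ranges.
res=$(python3 - <<'EOF'
import ipaddress
svc = ipaddress.ip_network("10.96.0.0/12")    # --service-cidr
pod = ipaddress.ip_network("10.244.0.0/16")   # --pod-network-cidr
print("overlap" if svc.overlaps(pod) else "ok")
EOF
)
echo "$res"           # ok
```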

Reference: "Docker / Kubernetes mirror sources unavailable? A few ways to fix it" (Tencent Cloud community)

If you see this error:

 The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
 # Fix:
 $ mkdir -pv /etc/systemd/system/kubelet.service.d/
 $ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
 # with the following content:
 # Note: This dropin only works with kubeadm and kubelet v1.11+
 [Service]
 Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
 Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
 # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
 EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
 # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
 # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
 EnvironmentFile=-/etc/default/kubelet
 ExecStart=
 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
 ​
 $ systemctl daemon-reload
 $ systemctl restart kubelet

What kubeadm init does:

1. [preflight]: environment checks and image pulls (kubeadm config images pull)

2. [certs]: generate the Kubernetes and etcd certificates (/etc/kubernetes/pki)

3. [kubeconfig]: generate the kubeconfig files

4. [kubelet-start]: start the kubelet and generate its configuration file (/var/lib/kubelet/config.yaml)

5. [control-plane]: deploy the control-plane components as containers (kubectl get pods -n kube-system)

6. [etcd]: deploy the etcd database as a container

7. [upload-config] [kubelet] [upload-certs]: upload configuration into the cluster

8. [mark-control-plane]: label the control-plane node with node-role.kubernetes.io/master='' and taint it with node-role.kubernetes.io/master:NoSchedule

9. [bootstrap-token]: automatically issue certificates to the kubelets (clients)

10. [addons]: deploy the CoreDNS and kube-proxy add-ons

11. Finally, copy the cluster credentials file to the default path

12. See Installing Addons | Kubernetes to install a network add-on, and use the generated kubeadm join command to join worker nodes to the master

 Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
   https://kubernetes.io/docs/concepts/cluster-administration/addons/
   
 kubeadm join 192.168.237.5:6443 --token cnblld.gbhjbgufpdrglady \
     --discovery-token-ca-cert-hash sha256:b9df12811b44f2cb6756ffd33e7c579ba951e0aa8a56a6da89d63cdca57d4a37

Alternatively, bootstrap with a configuration file:

$ mkdir -pv /var/lib/kubelet
mkdir: created directory ‘/var/lib/kubelet’
$ cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

$ systemctl restart kubelet
$ kubeadm init --config kubeadm.conf --ignore-preflight-errors=all

Copy the kubectl credentials file to the default path (step 11 above):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl get node
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   73m   v1.22.2

Example: related kubectl operations

[root@k8s-master ~]#kubectl get node
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   76m   v1.22.2
[root@k8s-master ~]#kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f6cbbb7b8-672w6             0/1     Pending   0          75m
coredns-7f6cbbb7b8-fjhtp             0/1     Pending   0          75m
etcd-k8s-master                      1/1     Running   0          76m
kube-apiserver-k8s-master            1/1     Running   0          76m
kube-controller-manager-k8s-master   1/1     Running   0          76m
kube-proxy-tkdj2                     1/1     Running   0          75m
kube-scheduler-k8s-master            1/1     Running   0          76m
[root@k8s-master ~]#kubectl get pod -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f6cbbb7b8-672w6             0/1     Pending   0          76m
kube-system   coredns-7f6cbbb7b8-fjhtp             0/1     Pending   0          76m
kube-system   etcd-k8s-master                      1/1     Running   0          76m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          76m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          76m
kube-system   kube-proxy-tkdj2                     1/1     Running   0          76m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          76m

 

1.4 Join the node machines to the cluster

Run on 192.168.237.15 and 192.168.237.25 (the nodes).

To add new nodes to the cluster, run the kubeadm join command printed by kubeadm init:

# If preflight errors occur, add --ignore-preflight-errors=all
$ kubeadm join 192.168.237.5:6443 --token 7ey31d.6ouv4qpcn1e8vqgn \
--discovery-token-ca-cert-hash sha256:156bdbb66dd7fc2f049bb06f0621d4229b43eb05afc83de61186e4c53353afd2

# If name resolution fails, add the entries to /etc/hosts

A token is valid for 24 hours by default and cannot be used after it expires. To create a new one:

$ kubeadm token create
$ kubeadm token list
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2> /dev/null | openssl dgst -sha256 -hex | sed 's@^.* @@'
156bdbb66dd7fc2f049bb06f0621d4229b43eb05afc83de61186e4c53353afd3
# use the newest token and certificate hash
$ kubeadm join 192.168.237.5:6443 --token 7ey31d.6ouv4qpcn1e8vqgm \
--discovery-token-ca-cert-hash sha256:156bdbb66dd7fc2f049bb06f0621d4229b43eb05afc83de61186e4c53353afd3

Or generate everything in one step: kubeadm token create --print-join-command

kubeadm join | Kubernetes
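The --discovery-token-ca-cert-hash value is just a SHA-256 digest of the cluster CA's DER-encoded public key. The derivation can be illustrated offline with a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt (a sketch; requires openssl):

```shell
# Generate a disposable CA certificate, then hash its public key exactly
# the way the openssl pipeline above does for the real cluster CA.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test-ca" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's@^.* @@')
echo "sha256:$hash"
rm -rf "$tmp"
```

Prefixed with `sha256:`, the resulting 64-hex-digit string has the same shape as the value kubeadm join expects.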

1.5 Deploy the container network (CNI)

Creating a cluster with kubeadm | Kubernetes

Note: deploy only one of the following; Calico is recommended.

Calico is a pure layer-3 data-center networking solution. It supports a wide range of platforms, including Kubernetes and OpenStack.

On each compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) responsible for packet forwarding, and each vRouter propagates the routes of the workloads running on its node to the rest of the Calico network over BGP.

Calico also implements the Kubernetes NetworkPolicy API, providing ACL functionality.

Quickstart for Calico on Kubernetes

Until a CNI is deployed, the kubelet reports a network error:

$ journalctl -u kubelet
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
$ wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, edit the Pod network setting (CALICO_IPV4POOL_CIDR) to match the value given to kubeadm init:

$ vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"		# must match the --pod-network-cidr used at cluster init (the Pod network)
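The same edit can be scripted instead of done in vim. The sketch below runs the substitution against a throwaway two-line excerpt of the manifest (in the real calico.yaml the CALICO_IPV4POOL_CIDR block may also be commented out and need uncommenting first):

```shell
# Rewrite the CALICO_IPV4POOL_CIDR value so it matches the
# --pod-network-cidr given to kubeadm init, then read it back.
f=$(mktemp)
printf -- '- name: CALICO_IPV4POOL_CIDR\n  value: "192.168.0.0/16"\n' > "$f"
sed -i 's@value: "192.168.0.0/16"@value: "10.244.0.0/16"@' "$f"
cidr=$(sed -n 's/.*value: "\(.*\)".*/\1/p' "$f")
echo "$cidr"          # 10.244.0.0/16
rm -f "$f"
```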

Apply the manifest after editing:

$ kubectl apply -f calico.yaml
$ kubectl get pods -n kube-system

Example: deploying the container network

[root@k8s-master ~]# cat calico.yaml | grep image
          image: docker.io/calico/cni:v3.20.1
          image: docker.io/calico/cni:v3.20.1
          image: docker.io/calico/pod2daemon-flexvol:v3.20.1
          image: docker.io/calico/node:v3.20.1
          image: docker.io/calico/kube-controllers:v3.20.1
# If pulls are slow, pre-pull the 4 Calico images manually on each node:
[root@k8s-master ~]#docker pull calico/cni:v3.20.1
[root@k8s-master ~]#docker pull calico/pod2daemon-flexvol:v3.20.1
[root@k8s-master ~]#docker pull calico/node:v3.20.1
[root@k8s-master ~]#docker pull calico/kube-controllers:v3.20.1

[root@k8s-master ~]# kubectl apply -f calico.yaml
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-855445d444-9xn9z   1/1     Running   0          31m
kube-system   calico-node-h94ns                          1/1     Running   0          31m
kube-system   calico-node-nkxlr                          1/1     Running   0          31m
kube-system   calico-node-x4zlb                          1/1     Running   0          31m
kube-system   coredns-6d56c8448f-l9ksb                   1/1     Running   0          34m
kube-system   coredns-6d56c8448f-lsljv                   1/1     Running   0          34m
kube-system   etcd-k8s-master                            1/1     Running   0          34m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          34m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          34m
kube-system   kube-proxy-qp9qm                           1/1     Running   0          34m
kube-system   kube-proxy-v6f9p                           1/1     Running   0          33m
kube-system   kube-proxy-vstpj                           1/1     Running   0          33m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          34m
[root@k8s-master ~]#kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   140m   v1.22.2
k8s-node01   Ready    <none>                 47m    v1.22.2
k8s-node02   Ready    <none>                 47m    v1.22.2

1.6 Test the Kubernetes cluster

  • Verify that Pods run

  • Verify Pod-to-Pod networking

  • Verify DNS resolution

Create a Pod in the cluster and verify that it runs normally:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc 

Access URL: http://NodeIP:Port (the NodePort shown by kubectl get svc)
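The Port in http://NodeIP:Port is the NodePort from the PORT(S) column of kubectl get svc. Extracting it can be sketched against a canned sample of that output (the service name and port below are made up):

```shell
# Pull the NodePort (the number after the colon) out of a captured
# `kubectl get svc` listing.
svc='NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.96.10.100   <none>        80:30080/TCP   1m'
port=$(echo "$svc" | awk '/NodePort/{split($5, a, "[:/]"); print a[2]}')
echo "$port"          # 30080
```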

1.7 Deploy the Web UI (Dashboard)

Download the manifest, then run kubectl apply -f recommended.yaml on the master node:

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

By default the Dashboard is reachable only from inside the cluster. Change the Service type to NodePort to expose it externally:

$ vim recommended.yaml
...snip...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  # add the following line
  type: NodePort
...snip...

$ kubectl apply -f recommended.yaml
$ kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS              RESTARTS   AGE
default                nginx-6799fc88d8-tpvxh                       1/1     Running             0          14m
kube-system            calico-kube-controllers-855445d444-9xn9z     1/1     Running             0          77m
kube-system            calico-node-h94ns                            1/1     Running             0          77m
kube-system            calico-node-nkxlr                            1/1     Running             0          77m
kube-system            calico-node-x4zlb                            1/1     Running             0          77m
kube-system            coredns-6d56c8448f-l9ksb                     1/1     Running             0          80m
kube-system            coredns-6d56c8448f-lsljv                     1/1     Running             0          80m
kube-system            etcd-k8s-master                              1/1     Running             0          80m
kube-system            kube-apiserver-k8s-master                    1/1     Running             0          80m
kube-system            kube-controller-manager-k8s-master           1/1     Running             0          80m
kube-system            kube-proxy-qp9qm                             1/1     Running             0          80m
kube-system            kube-proxy-v6f9p                             1/1     Running             0          79m
kube-system            kube-proxy-vstpj                             1/1     Running             0          79m
kube-system            kube-scheduler-k8s-master                    1/1     Running             0          80m
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-bmvdx   0/1     ContainerCreating   0          43s
kubernetes-dashboard   kubernetes-dashboard-5dbf55bd9d-rgtfw        0/1     ContainerCreating   0          44s
$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-bmvdx   1/1     Running   0          2m10s
kubernetes-dashboard-5dbf55bd9d-rgtfw        1/1     Running   0          2m11s

范例:测试 Kubernetes 集群

[root@k8s-master ~]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7b59f7d4df-bmvdx   1/1     Running   0          4m42s
pod/kubernetes-dashboard-5dbf55bd9d-rgtfw        1/1     Running   0          4m43s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.98.127.228   <none>        8000/TCP        4m43s
service/kubernetes-dashboard        NodePort    10.100.37.12    <none>        443:31744/TCP   4m43s

Access URL: https://NodeIP:31744 (the NodePort assigned in the Service output above)

Create a service account and bind it to the built-in cluster-admin role:

# Create the user
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant permissions
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Retrieve the user's token
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.
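The command substitution above just pulls the secret name matching dashboard-admin out of the kubectl get secret listing. What the awk filter does, shown on a canned sample (the token suffixes are made up):

```shell
# Extract the dashboard-admin token secret's name from a captured
# `kubectl -n kube-system get secret` listing.
secrets='NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-abc12   kubernetes.io/service-account-token   3      1m
default-token-xyz99           kubernetes.io/service-account-token   3      10m'
name=$(echo "$secrets" | awk '/dashboard-admin/{print $1}')
echo "$name"          # dashboard-admin-token-abc12
```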

2 Troubleshooting

  • 1. Clear the current initialization state

    • Fix: kubeadm reset

  • 2. Calico pods not ready

    • Fix:

    • $ grep image calico.yaml
      $ docker pull calico/xxx (cni:v3.20.1, pod2daemon-flexvol:v3.20.1, node:v3.20.1, kube-controllers:v3.20.1)

  • 3. A node reports: error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition

    • Fix:

    • error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
      To see the stack trace of this error execute with --v=5 or higher

    • $ swapoff -a
      $ kubeadm reset
      $ systemctl daemon-reload
      $ systemctl restart kubelet
      $ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

 

 

Source: https://blog.csdn.net/weixin_40274679/article/details/120493831
