
Setting up a Kubernetes environment



I recently heard from a friend that his company is moving to the cloud and migrating all of their services to k8s, which suddenly made our setup feel rather behind the times. Our servers have been running plain Docker all along, and moving to k8s didn't seem like it should be a big deal, so we started migrating too.

I referenced quite a few documents along the way; the originals are worth reading if you're interested: https://kubernetes.io/docs/setup/independent/install-kubeadm/

https://blog.csdn.net/networken/article/details/84991940

https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-with-kubeadm.html

k8s cluster plan

Hostname    IP              Role
kmaster     192.168.9.88    master
knode1      192.168.9.81    node
knode2      192.168.9.82    node
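
The steps below assume each machine's hostname already matches this plan; if not, set it first. A minimal sketch using hostnamectl:

# On 192.168.9.88
hostnamectl set-hostname kmaster
# On 192.168.9.81
hostnamectl set-hostname knode1
# On 192.168.9.82
hostnamectl set-hostname knode2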

Run the following on all three machines:


cat >> /etc/hosts <<EOF
192.168.9.88 kmaster
192.168.9.81 knode1
192.168.9.82 knode2
EOF

# Put SELinux into permissive mode so containers can access the host filesystem
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Also disable swap on every machine; the kubelet refuses to start while swap is enabled:

swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
grep -v swap /etc/fstab_bak > /etc/fstab
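
If you'd rather not rewrite /etc/fstab wholesale, an alternative sketch (assuming GNU sed) that only comments out the swap entry:

# Comment out the swap line so swap stays off across reboots
sed -i '/\sswap\s/s/^/#/' /etc/fstab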

Next, allow bridged traffic to pass through iptables and preload the IPVS kernel modules:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make the module script executable, run it, and verify the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Install the userspace tools needed for kube-proxy's IPVS mode
yum install ipset ipvsadm -y
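
The original steps assume nothing is blocking cluster traffic. On CentOS 7, either disable firewalld (fine for a lab) or open the kubeadm-required ports; a sketch:

# Lab shortcut: disable firewalld entirely
systemctl stop firewalld && systemctl disable firewalld

# Or, keeping firewalld, open the control-plane ports on the master:
firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, controller-manager, scheduler
firewall-cmd --reload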

Configure the Docker repository

# Add the Docker CE yum repo (yum-config-manager is provided by yum-utils)
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# List the available versions, then install a pinned one; 18.06 is used here
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl start docker && systemctl enable docker
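
Optionally, give the Docker daemon a registry mirror and log rotation before going further; a sketch (the mirror URL is only an example, substitute your own):

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
systemctl restart docker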

Install kubeadm, kubelet and kubectl

# Configure kubernetes.repo; the official repo is unreachable from mainland China, so use the Alibaba Cloud mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install pinned versions of kubelet, kubeadm and kubectl on all nodes
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1

# Enable and start kubelet (it will crash-loop until kubeadm init/join runs; that's expected)
systemctl enable kubelet && systemctl start kubelet

Deploy the master

kubeadm init \
    --apiserver-advertise-address=192.168.9.88 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

Note the --image-repository option: it makes kubeadm pull the control-plane images from the Alibaba Cloud registry instead of the unreachable default, k8s.gcr.io.
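
For repeatable installs, the same init can be expressed as a kubeadm config file; a minimal sketch for kubeadm v1.13 (config API v1beta1):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.9.88
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml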

 

This step is a bit slow. If you see output like the following, the init succeeded:

[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'


[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.9.88]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.9.88 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.9.88 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.007105 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kmaster" as an annotation
[mark-control-plane] Marking the node kmaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: hpvjuo.divmu5zdcqb7oysy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.9.88:6443 --token hpvjuo.divmu5zdcqb7oysy --discovery-token-ca-cert-hash sha256:a5e36c51c68ad1f1e07286c8c9c58bf5b8794c25182b18b15c1dcb6e99462eb2

  

# Create a regular user with password 123456
useradd k8s && echo "k8s:123456" | chpasswd

# Grant the user passwordless sudo
sed -i '/^root/a\k8s  ALL=(ALL)       NOPASSWD:ALL' /etc/sudoers
[root@kmaster ~]# su - k8s
[k8s@kmaster ~]$ mkdir -p $HOME/.kube
[k8s@kmaster ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@kmaster ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

[k8s@kmaster ~]$ # Enable kubectl bash completion (takes effect after logging out and back in)
[k8s@kmaster ~]$ echo "source <(kubectl completion bash)" >> ~/.bashrc



[k8s@kmaster ~]$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
kmaster   NotReady   master   6m51s   v1.13.1
[k8s@kmaster ~]$ kubectl describe node kmaster
Name:               kmaster
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=kmaster
                    node-role.kubernetes.io/master=

Check the pods:

[k8s@kmaster ~]$ kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-78d4cf999f-l9f7v          0/1     Pending   0          2m32s   <none>         <none>    <none>           <none>
coredns-78d4cf999f-n8g4g          0/1     Pending   0          2m32s   <none>         <none>    <none>           <none>
etcd-kmaster                      1/1     Running   0          6m51s   192.168.9.88   kmaster   <none>           <none>
kube-apiserver-kmaster            1/1     Running   0          6m48s   192.168.9.88   kmaster   <none>           <none>
kube-controller-manager-kmaster   1/1     Running   0          6m54s   192.168.9.88   kmaster   <none>           <none>
kube-proxy-57lvg                  1/1     Running   0          7m32s   192.168.9.88   kmaster   <none>           <none>
kube-scheduler-kmaster            1/1     Running   0          6m45s   192.168.9.88   kmaster   <none>           <none>

Deploy the pod network add-on (flannel)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
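
The --pod-network-cidr=10.244.0.0/16 passed to kubeadm init matches flannel's default network, which is why this manifest works unmodified. To watch the flannel pods roll out (the app=flannel label comes from that manifest):

kubectl -n kube-system get pods -l app=flannel -o wide --watch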

  

[k8s@kmaster ~]$ kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-78d4cf999f-l9f7v          1/1     Running   0          9m18s   10.244.0.3     kmaster   <none>           <none>
coredns-78d4cf999f-n8g4g          1/1     Running   0          9m18s   10.244.0.2     kmaster   <none>           <none>
etcd-kmaster                      1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-apiserver-kmaster            1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-controller-manager-kmaster   1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-flannel-ds-amd64-dkb2t       1/1     Running   0          2m44s   192.168.9.88   kmaster   <none>           <none>
kube-proxy-57lvg                  1/1     Running   0          14m     192.168.9.88   kmaster   <none>           <none>
kube-scheduler-kmaster            1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>

At this point the Kubernetes master node is fully deployed. If a single-node cluster is all you need, it is ready to use right now.

 

Deploy the worker nodes

[root@knode1 ~]# kubeadm join 192.168.9.88:6443 --token hpvjuo.divmu5zdcqb7oysy --discovery-token-ca-cert-hash sha256:a5e36c51c68ad1f1e07286c8c9c58bf5b8794c25182b18b15c1dcb6e99462eb2
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.9.88:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.9.88:6443"
[discovery] Requesting info from "https://192.168.9.88:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.9.88:6443"
[discovery] Successfully established connection with API Server "192.168.9.88:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "knode1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

  

# Join each remaining node to the cluster with the command printed by kubeadm init
# (the address, token and hash below are example values; use your own)
kubeadm join 192.168.92.56:6443 --token 67kq55.8hxoga556caxty7s --discovery-token-ca-cert-hash sha256:7d50e704bbfe69661e37c5f3ad13b1b88032b6b2b703ebd4899e259477b5be69

# If you didn't record the join command from kubeadm init, regenerate it:
kubeadm token create --print-join-command
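
The discovery hash can also be recomputed on the master; this pipeline is the one given in the kubeadm reference docs:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'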

Check the node status:

[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS     ROLES    AGE    VERSION
kmaster   Ready      master   19m    v1.13.1
knode1    NotReady   <none>   2m5s   v1.13.1
knode2    NotReady   <none>   2m9s   v1.13.1

After a short wait:

[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   24m     v1.13.1
knode1    Ready    <none>   7m46s   v1.13.1
knode2    Ready    <none>   7m50s   v1.13.1
[k8s@kmaster ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-78d4cf999f-l9f7v          1/1     Running   0          20m     10.244.0.3     kmaster   <none>           <none>
kube-system   coredns-78d4cf999f-n8g4g          1/1     Running   0          20m     10.244.0.2     kmaster   <none>           <none>
kube-system   etcd-kmaster                      1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-apiserver-kmaster            1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster   1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-flannel-ds-amd64-44x4d       1/1     Running   0          8m48s   192.168.9.81   knode1    <none>           <none>
kube-system   kube-flannel-ds-amd64-465pk       1/1     Running   2          8m51s   192.168.9.82   knode2    <none>           <none>
kube-system   kube-flannel-ds-amd64-dkb2t       1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-proxy-4rgz9                  1/1     Running   0          8m48s   192.168.9.81   knode1    <none>           <none>
kube-system   kube-proxy-57lvg                  1/1     Running   0          25m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-proxy-hbbqj                  1/1     Running   0          8m51s   192.168.9.82   knode2    <none>           <none>
kube-system   kube-scheduler-kmaster            1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>

Scheduling pods on the master node

[k8s@kmaster ~]$ kubectl taint node kmaster node-role.kubernetes.io/master-
node/kmaster untainted
To restore the master-only behavior, re-add the taint (an effect such as NoSchedule must be specified):
kubectl taint node kmaster node-role.kubernetes.io/master="":NoSchedule
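
To check whether the taint is currently set:

kubectl describe node kmaster | grep Taints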

Enable IPVS mode in kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

[k8s@kmaster ~]$ kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
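
After the edit, the relevant fragment of config.conf should look roughly like this (a sketch; every other field stays as kubeadm generated it):

kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: ""    # empty selects the default IPVS scheduler, round-robin (rr)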

Then restart the kube-proxy pods on each node; they are managed by a DaemonSet, so deleting them brings them back with the new config:

[k8s@kmaster ~]$ kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-4rgz9" deleted
pod "kube-proxy-57lvg" deleted
pod "kube-proxy-hbbqj" deleted

Check the logs of one of the recreated kube-proxy pods; the "Using ipvs Proxier" line confirms IPVS is active:
[k8s@kmaster ~]$ kubectl logs kube-proxy-6btv9 -n kube-system
I0125 07:52:50.004289       1 server_others.go:189] Using ipvs Proxier.
W0125 07:52:50.004834       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I0125 07:52:50.004997       1 server_others.go:216] Tearing down inactive rules.
I0125 07:52:50.052950       1 server.go:464] Version: v1.13.1
I0125 07:52:50.067533       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0125 07:52:50.067821       1 config.go:102] Starting endpoints config controller
I0125 07:52:50.069525       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0125 07:52:50.069231       1 config.go:202] Starting service config controller
I0125 07:52:50.070363       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0125 07:52:50.169786       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0125 07:52:50.170565       1 controller_utils.go:1034] Caches are synced for service config controller
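
Since ipvsadm was installed earlier, the IPVS virtual-server table can also be inspected directly on any node:

# List IPVS services (cluster service IPs) and their backend pod endpoints
ipvsadm -Ln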

 

Source: https://www.cnblogs.com/jackluo/p/10319409.html
