
Setting Up a Kubernetes Cluster with kubeadm



Prerequisites: three CentOS 7 virtual machines


1. Preparation: network connectivity, firewall off, hostnames set, and other groundwork

Change the hostname (one name per machine):

vi /etc/hostname

master

node1

node2
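Alternatively, hostnamectl applies the name immediately without editing the file by hand; a quick sketch (not a step from the original notes), run on the matching machine:

hostnamectl set-hostname master   # on the master
hostnamectl set-hostname node1    # on node1
hostnamectl set-hostname node2    # on node2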



Edit the hosts file on each host:

vi /etc/hosts

192.168.78.134 master
192.168.78.136 node1
192.168.78.137 node2



Disable the firewall and related services:

systemctl stop firewalld && systemctl disable firewalld 

Reset iptables:

iptables -F  && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT 

Disable swap:

swapoff -a 

Disable it permanently:

vi /etc/fstab 

Comment out the swap line.
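A sed one-liner can do the commenting for you; a small assumed shortcut rather than the author's manual edit:

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out every fstab line mentioning swap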

Disable SELinux:

setenforce 0 

vi /etc/sysconfig/selinux 

Set SELINUX to disabled: SELINUX=disabled

Synchronize the clocks:

Install ntpdate:

yum install ntpdate -y 

Add a cron job with crontab -e

Insert: 0-59/10 * * * * /usr/sbin/ntpdate us.pool.ntp.org | logger -t NTP

Run a manual sync first: ntpdate us.pool.ntp.org

 

2. Install Docker (I used version 20.10.12)


Remove any existing versions:
yum remove -y docker* container-selinux

Delete existing container data:
sudo rm -rf /var/lib/docker

Install the dependencies:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 

Add the Aliyun package repository:
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 

Install Docker CE:
sudo yum install docker-ce

Enable it on boot:
sudo systemctl enable docker 

Start the docker service:
sudo systemctl start docker
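One optional tweak worth knowing about (my addition, not a step from the original notes): the Kubernetes docs recommend running Docker with the systemd cgroup driver, and kubeadm will warn if the kubelet and Docker drivers disagree. kubeadm normally detects Docker's driver on its own, so treat this as optional:

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
docker info | grep -i cgroup   # verify the active driver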

 

3. Passwordless SSH login


[root@master ~]# ssh-keygen -t rsa    # generate a key pair; press Enter through all prompts



Copy the public key to the other hosts:

ssh-copy-id node1 

ssh-copy-id node2



Push the updated hosts file to the nodes:

scp /etc/hosts node1:/etc

scp /etc/hosts node2:/etc

4. Create the kubernetes.conf sysctl file

$ cat <<EOF > /etc/sysctl.d/kubernetes.conf
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
fs.inotify.max_user_watches=89100
EOF

# apply the configuration

$ sysctl -p /etc/sysctl.d/kubernetes.conf

Copy the IP-forwarding and iptables-bridging settings to the other hosts:

[root@master ~]# scp /etc/sysctl.d/kubernetes.conf node1:/etc/sysctl.d/

[root@master ~]# scp /etc/sysctl.d/kubernetes.conf node2:/etc/sysctl.d/

[root@master ~]# scp /etc/sysctl.conf node2:/etc/

[root@master ~]# scp /etc/sysctl.conf node1:/etc/

 

Remember to run the following on node1 and node2 as well:

[root@node1 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf && sysctl -p

[root@node2 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf && sysctl -p
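If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet. A small sketch (assumed, not in the original notes) that loads it now and on every boot:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf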

 

5. Configure the Kubernetes yum repository

Add the Kubernetes yum repo (run on all three machines):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF



[root@master ~]# yum repolist 

[root@master ~]# yum makecache fast



Install the required packages on each node.

On the master:
[root@master ~]# yum -y install kubeadm-1.15.0-0 kubelet-1.15.0-0 kubectl-1.15.0-0

On the worker nodes:
[root@node01 ~]# yum -y install kubeadm-1.15.0-0 kubelet-1.15.0-0



Enable kubelet on all three machines:

systemctl enable kubelet


On all machines:

mkdir -p /home/glory/working
cd /home/glory/working/

 

 

6. Generate the kubeadm.conf configuration file (an important step)

kubeadm config print init-defaults > kubeadm.conf

Edit the configuration file to change imageRepository and kubernetesVersion:

vi kubeadm.conf

Change imageRepository from the default k8s.gcr.io to the Aliyun mirror:

imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers



Change the Kubernetes version to v1.15.0:

kubernetesVersion: v1.15.0



Update the API server address in kubeadm.conf; this address is used frequently later as the API server endpoint:

localAPIEndpoint:
  advertiseAddress: 192.168.0.100
  bindPort: 6443

Note: 192.168.0.100 stands for the master host's IP address; in this setup it is 192.168.78.134.

Configure the cluster subnets:

networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Here 10.244.0.0/16 and 10.96.0.0/12 are the internal subnets for pods and services respectively. Keep these values if you can, because the flannel network installed later depends on the pod subnet. The generated file may be missing the podSubnet line; add it if so. YAML is strict about formatting, so take care not to add or drop spaces.
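Putting the edits together, the relevant parts of kubeadm.conf should look roughly like this. This is a sketch based on the v1beta2 schema that kubeadm 1.15 generates; your file will contain additional defaulted fields that you should leave in place:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.78.134   # your master's IP
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16     # must match the Network value in kube-flannel.yml
  serviceSubnet: 10.96.0.0/12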


Check which images need to be pulled:

$ kubeadm config images list --config kubeadm.conf
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.0
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

 

7. Pull the images, then initialize and start the cluster

Pull the images:

kubeadm config images pull --config ./kubeadm.conf

Initialize and start the control plane. Everything from here on runs on the master node; configure the master first, then join the worker nodes afterwards.

Initialize:

sudo kubeadm init --config ./kubeadm.conf

If it errors, fix whatever the [error] lines report, e.g. too few CPUs, swap not disabled, or bridge-nf-call-iptables not set to 1, then re-run it. To set the iptables bridge parameter to 1:

modprobe br_netfilter
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

On success, write down the token printed at the end. It is very important; copy it out somewhere safe.

 

The init output is long; what matters is the end. Copy out the join command with the token, which looks like this:

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.78.134:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cc4d47976f8ed28a7669399ca8d11b015bd6e26ff200c69dd6acdc11bab7c3cd
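If you lose this command or the token expires (kubeadm tokens are valid for 24 hours by default), you can print a fresh join command on the master at any time:

kubeadm token create --print-join-command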

 

Following the official prompt:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Enable kubelet to start on boot:

$ sudo systemctl enable kubelet

Start the kubelet service:

$ sudo systemctl start kubelet

 

Check the node status:

kubectl get nodes

NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   12m   v1.15.0

The node stays NotReady until the pod network (flannel) is installed in the next step.

 

$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

At this point the master node is configured successfully.

 

 

 

8. Install the kube-flannel network plugin and join the worker nodes

cd  /home/glory/working
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download fails, just create the file yourself:

vi kube-flannel.txt

mv kube-flannel.txt kube-flannel.yml
Here is a copy of the file I downloaded:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

 


docker pull registry.cn-hangzhou.aliyuncs.com/mygcrio/flannel:v0.11.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/mygcrio/flannel:v0.11.0-amd64 quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64

(Only needed if your kube-flannel.yml references the quay.io flannel:v0.11.0 image; the file above pulls rancher/mirrored-flannelcni-flannel images directly.)


kubectl apply -f kube-flannel.yml

This takes a while because images must be downloaded; just wait.
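To watch the flannel pods come up, you can filter on the app=flannel label from the manifest above (an optional check, not in the original notes):

kubectl get pods -n kube-system -l app=flannel -w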



Once the status flips from NotReady to Ready, you are good.

[root@master working]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   6m58s   v1.15.0

 

The master is now fully configured; time to join the worker nodes. On each worker node, enable and start kubelet:

  sudo systemctl enable kubelet

  sudo systemctl start kubelet

 

# copy admin.conf to node1

  sudo scp /etc/kubernetes/admin.conf root@192.168.78.136:/home/glory/

# copy admin.conf to node2

  sudo scp /etc/kubernetes/admin.conf root@192.168.78.137:/home/glory/

Run on each worker node:

  mkdir -p $HOME/.kube

  sudo cp -i admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Run the join command copied from the master's init output; both nodes must execute it:

kubeadm join 192.168.78.134:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cc4d47976f8ed28a7669399ca8d11b015bd6e26ff200c69dd6acdc11bab7c3cd

 

Copy kube-flannel.yml to node1:

  sudo scp kube-flannel.yml root@192.168.78.136:/home/glory/

 

Copy kube-flannel.yml to node2:

  sudo scp kube-flannel.yml root@192.168.78.137:/home/glory/

Start the flannel network on each node:

  root@node1:~$ kubectl apply -f kube-flannel.yml

  root@node2:~$ kubectl apply -f kube-flannel.yml


Wait a while for the images to download. Once all three nodes show Ready, the cluster is up.

 

 


Check how the image downloads are going (the commands below are extremely useful and used constantly; if a pull errors, don't worry, it retries automatically):

kubectl get pods --all-namespaces -o wide    # current status of pods in all namespaces

kubectl get pods -n kube-system              # pod status in the kube-system namespace

kubectl get nodes                            # list all nodes

Deleting a specific pod causes it to be recreated automatically:

  kubectl delete pods kube-flannel-ds-sqcl8 -n kube-system

 

9. Usage demo

Create a Deployment controller running the nginx web server:

  kubectl create deployment nginx --image=nginx

Create a Service to expose the application for web access:

  kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get pods                # list pods

kubectl get pods,svc            # pods and services with port details

kubectl get pods,svc -o wide    # full details

NAME                         READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
pod/nginx-554b9c67f9-98n27   1/1     Running   0          9m57s   10.244.1.2   node1   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        16h   <none>
service/nginx        NodePort    10.111.66.112   <none>        80:30848/TCP   47s   app=nginx

 

With the NodePort type, any node IP plus the port reaches the service:

   http://192.168.78.134:30848/

   http://192.168.78.136:30848/

   http://192.168.78.137:30848/

All three URLs serve the page.
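To watch the scheduler spread work across the nodes, you can scale the Deployment up; a quick optional experiment:

kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide    # the extra replicas should land on node1 and node2, since kubeadm taints the master NoSchedule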

 

10. Deploy the dashboard


wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Edit two things in the downloaded file: the Deployment image address (switch to a reachable mirror) and the Service type (set to NodePort).

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

 

Pull the image:

  docker pull lizhenliang/kubernetes-dashboard-amd64:v1.10.1


Create the resources:


  kubectl apply -f kubernetes-dashboard.yaml

The output is as follows:

 

secret/kubernetes-dashboard-certs created 

serviceaccount/kubernetes-dashboard created

role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

deployment.apps/kubernetes-dashboard created

service/kubernetes-dashboard created


Check the status:


kubectl get pods -n kube-system


Everything lands in the kube-system namespace by default.


[root@master working]# kubectl get pods -n kube-system

NAME                                READY   STATUS             RESTARTS   AGE
coredns-6967fb4995-qw8tl            1/1     Running            0          18h
coredns-6967fb4995-sk2b9            1/1     Running            0          18h
etcd-master                         1/1     Running            0          18h
kube-apiserver-master               1/1     Running            0          18h
kube-controller-manager-master      1/1     Running            0          18h
kube-flannel-ds-8vvbz               1/1     Running            0          170m
kube-flannel-ds-sqcl8               1/1     Running            0          152m
kube-flannel-ds-stl89               1/1     Running            0          152m
kube-proxy-ddctw                    1/1     Running            0          18h
kube-proxy-mp76x                    1/1     Running            0          152m
kube-proxy-w9ljh                    1/1     Running            0          152m
kube-scheduler-master               1/1     Running            0          18h
kubernetes-dashboard-79ddd5-27jzh   0/1     ImagePullBackOff   0          50s

 

Access the page:


[root@master working]# kubectl get pods,svc -n kube-system

NAME                                    READY   STATUS    RESTARTS   AGE
pod/coredns-6967fb4995-qw8tl            1/1     Running   0          18h
pod/coredns-6967fb4995-sk2b9            1/1     Running   0          18h
pod/etcd-master                         1/1     Running   0          18h
pod/kube-apiserver-master               1/1     Running   0          18h
pod/kube-controller-manager-master      1/1     Running   0          18h
pod/kube-flannel-ds-8vvbz               1/1     Running   0          172m
pod/kube-flannel-ds-sqcl8               1/1     Running   0          155m
pod/kube-flannel-ds-stl89               1/1     Running   0          155m
pod/kube-proxy-ddctw                    1/1     Running   0          18h
pod/kube-proxy-mp76x                    1/1     Running   0          155m
pod/kube-proxy-w9ljh                    1/1     Running   0          155m
pod/kube-scheduler-master               1/1     Running   0          18h
pod/kubernetes-dashboard-79ddd5-27jzh   1/1     Running   0          3m29s

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   18h
service/kubernetes-dashboard   NodePort    10.109.12.48   <none>        443:32124/TCP            3m29s


Browse to any node IP on port 32124 over HTTPS; you will need to tell the browser to trust the self-signed certificate.


On the login page, choose token authentication.

 

 

11. Create a dashboard user

Create a user (bound to the cluster-admin role, which has full access to the cluster):

kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin 


Find the token:

kubectl get secret -n kube-system

NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-6lzls              kubernetes.io/service-account-token   3      19h
bootstrap-signer-token-bhgmn                     kubernetes.io/service-account-token   3      19h
bootstrap-token-abcdef                           bootstrap.kubernetes.io/token         6      19h
certificate-controller-token-92z4t               kubernetes.io/service-account-token   3      19h
clusterrole-aggregation-controller-token-psghh   kubernetes.io/service-account-token   3      19h
coredns-token-cflwj                              kubernetes.io/service-account-token   3      19h
cronjob-controller-token-24vk8                   kubernetes.io/service-account-token   3      19h
daemon-set-controller-token-xjpl6                kubernetes.io/service-account-token   3      19h
dashboard-admin-token-smp5m                      kubernetes.io/service-account-token   3      2m53s
default-token-9fzh8                              kubernetes.io/service-account-token   3      19h
deployment-controller-token-2l5ns                kubernetes.io/service-account-token   3      19h
disruption-controller-token-pf75p                kubernetes.io/service-account-token   3      19h
endpoint-controller-token-g65g6                  kubernetes.io/service-account-token   3      19h
expand-controller-token-qd94d                    kubernetes.io/service-account-token   3      19h
flannel-token-8z26h                              kubernetes.io/service-account-token   3      3h9m
generic-garbage-collector-token-c2j6t            kubernetes.io/service-account-token   3      19h
horizontal-pod-autoscaler-token-784rw            kubernetes.io/service-account-token   3      19h
job-controller-token-ghzhm                       kubernetes.io/service-account-token   3      19h
kube-proxy-token-zrktf                           kubernetes.io/service-account-token   3      19h
kubernetes-dashboard-certs                       Opaque                                0      20m
kubernetes-dashboard-key-holder                  Opaque                                2      18m
kubernetes-dashboard-token-nxbpv                 kubernetes.io/service-account-token   3      20m
namespace-controller-token-rs6mj                 kubernetes.io/service-account-token   3      19h
node-controller-token-c5hvr                      kubernetes.io/service-account-token   3      19h
persistent-volume-binder-token-vccz6             kubernetes.io/service-account-token   3      19h
pod-garbage-collector-token-lfsgv                kubernetes.io/service-account-token   3      19h
pv-protection-controller-token-hxlpt             kubernetes.io/service-account-token   3      19h
pvc-protection-controller-token-kbtbd            kubernetes.io/service-account-token   3      19h
replicaset-controller-token-xz9f2                kubernetes.io/service-account-token   3      19h
replication-controller-token-nn5ql               kubernetes.io/service-account-token   3      19h
resourcequota-controller-token-qqzd9             kubernetes.io/service-account-token   3      19h
service-account-controller-token-jzd2l           kubernetes.io/service-account-token   3      19h
service-controller-token-6d5x7                   kubernetes.io/service-account-token   3      19h
statefulset-controller-token-t5wlw               kubernetes.io/service-account-token   3      19h
token-cleaner-token-pmk5h                        kubernetes.io/service-account-token   3      19h
ttl-controller-token-4tdh2                       kubernetes.io/service-account-token   3      19h

 

Find the secret whose name starts with dashboard-admin-token and describe it (the namespace flag is required):

kubectl describe secret dashboard-admin-token-smp5m -n kube-system

This prints the token information:

 

[root@master working]# kubectl describe secret dashboard-admin-token-smp5m -n kube-system
Name: dashboard-admin-token-smp5m
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 5d364c5a-3053-4c68-97d7-ea41910fc7c3

Type: kubernetes.io/service-account-token

Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc21wNW0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNWQzNjRjNWEtMzA1My00YzY4LTk3ZDctZWE0MTkxMGZjN2MzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.ZwH9bvSJXATQXoteoPqp40C39l24-Qvjq7YPGVmRMCBNU9S34ycBnlv5rjjAfEqnop8vXk8QqYrcF1zRqOf2BcrYaPG-LJoifiqixRIWuqXpIlFvWxw7ZlQeRE3uOlV5UJh5nHPWtQL-oAizDH9w1AGd93E7NyVix0cbMC_SIHnxcTeollrG1YZ9apYUks0V8ElBpjT7LfGqrBkjFGYgcGQ0zGh866oR6WrUAEcHusrRGIjhAyYpz6WZfk-rMzUrmyLaMaA_pNaaOW1VvIGIIccuNjDwEe8tcKIHRyKV4CmBx2QQ2eqbPL3ZsbR_2mO-gXzYzyfOJPPExTFAqfdrKA
ca.crt: 1025 bytes
namespace: 11 bytes
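Looking up the secret name and describing it can be combined into one line with standard shell tools; a convenience sketch, since the random suffix differs on every cluster:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')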

 

Copy the token, paste it into the login page, and click Sign in.

 

 

12. Miscellaneous

Resetting the configuration and viewing logs

 rm -rf /etc/kubernetes/*
 rm -rf ~/.kube/*
 rm -rf /var/lib/etcd/*
 
lsof -i :6443|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10251|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10252|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :10250|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :2379|grep -v "PID"|awk '{print "kill -9",$2}'|sh
lsof -i :2380|grep -v "PID"|awk '{print "kill -9",$2}'|sh

kubeadm reset


If startup fails, check the logs:

journalctl -u kubelet

journalctl -u kubelet |tail



View the system log:

cat /var/log/messages


View logs with kubectl:

# Note: when using kubectl describe, always include the namespace, otherwise you get an error like:
# [root@node2 ~]# kubectl describe pod coredns-6c65fc5cbb-8ntpv
# Error from server (NotFound): pods "coredns-6c65fc5cbb-8ntpv" not found

kubectl describe pod kubernetes-dashboard-849cd79b75-s2snt --namespace kube-system

kubectl logs -f pods/monitoring-influxdb-fc8f8d5cd-dbs7d -n kube-system

kubectl logs --tail 200 -f kube-apiserver -n kube-system |more

kubectl logs --tail 200 -f podname -n jenkins



journalctl is very handy for log viewing:

journalctl -u kube-scheduler

journalctl -xefu kubelet

journalctl -u kube-apiserver


journalctl -u kubelet |tail

journalctl -xe


View logs with docker:

docker logs c36c56e4cfa3    # container ID


Handling ImagePullBackOff errors:


kubectl describe pod nginx-554b9c67f9-98n27    # pod name

 
Docker registry mirror configuration
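The original notes left this section empty. A minimal sketch of /etc/docker/daemon.json with a registry mirror, merged with the optional cgroup driver setting from step 2; the mirror URL is a placeholder, substitute the accelerator address from your own Aliyun console:

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker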

