
K8s: Using a Ceph Cluster in a Kubernetes Cluster (Part 2)

2021-08-02 19:57:37



Using a Ceph Cluster in a Kubernetes Cluster

I: Creating and Using CephFS

CephFS allows users to mount a POSIX-compatible shared directory on multiple hosts; it is similar in spirit to NFS shared storage and CIFS shared directories.

1. Filesystem configuration

[root@master ~]#  cd /tmp/rook/cluster/examples/kubernetes/ceph
[root@master ceph]# sed -i 's/failureDomain: host/failureDomain: osd/g' filesystem.yaml
[root@master ceph]# kubectl apply -f filesystem.yaml
cephfilesystem.ceph.rook.io/myfs created
[root@master ceph]# kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-5bd6895d9-mbbm6    1/1     Running   0          7m21s
rook-ceph-mds-myfs-b-7d7b55684b-j5f5x   1/1     Running   0          7m4s
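
The sed command above switches the CRUSH failure domain from host to osd, which lets the pools place their replicas on a small test cluster where several OSDs share a node. For reference, a minimal sketch of the fields that matter in filesystem.yaml (values assumed to follow the Rook example manifest; check your copy):

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd        # changed from "host" by the sed above
    replicated:
      size: 3
  dataPools:
    - failureDomain: osd      # changed from "host" by the sed above
      replicated:
        size: 3
  metadataServer:
    activeCount: 1            # one active MDS; activeStandby adds the standby-replay pod seen above
    activeStandby: true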

2. Check the resource configuration

[root@master ceph]# NAME=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}')
[root@master ceph]# kubectl -n rook-ceph exec -it ${NAME} sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
sh-4.2# ceph status
  cluster:
    id:     fb3cdbc2-8fea-4346-b752-131fd1eb2baf
    health: HEALTH_ERR
            1 filesystem is offline
            1 filesystem is online with fewer MDS than max_mds
            1/3 mons down, quorum a,b

  services:
    mon: 3 daemons, quorum a,b (age 2h), out of quorum: c
    mgr: a(active, since 136y)
    mds: myfs:0
    osd: 4 osds: 3 up (since 136y), 3 in (since 136y)

  data:
    pools:   2 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 21 GiB / 24 GiB avail
    pgs:     64 active+clean

sh-4.2# ceph osd lspools
1 myfs-metadata
2 myfs-data0
sh-4.2# ceph mds stat
myfs:1 {0=myfs-b=up:active} 1 up:standby-replay
sh-4.2# ceph fs ls
name: myfs, metadata pool: myfs-metadata, data pools: [myfs-data0 ]
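
Immediately after creation, ceph status may report HEALTH_ERR with the filesystem offline, as in the output above; this clears once the MDS pods finish starting. From the same toolbox shell you can watch the filesystem come online, for example:

sh-4.2# ceph fs status myfs
sh-4.2# ceph health detail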

3. Create the corresponding StorageClass

To use CephFS, you must first create a StorageClass.

[root@master ceph]# cd /tmp/rook/cluster/examples/kubernetes/ceph/csi/cephfs/
[root@master cephfs]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/csi-cephfs created
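
The StorageClass ties PVC provisioning to the myfs filesystem created earlier. A sketch of the key fields in csi/cephfs/storageclass.yaml (names assumed to follow the Rook example manifest for this version; verify against your copy):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph        # namespace where the Rook cluster runs
  fsName: myfs                # the CephFilesystem created above
  pool: myfs-data0            # its data pool
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete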

4. View the result in the Kubernetes dashboard


5. CephFS test
Case 1: multiple containers share the same data directory. Deploy several private-registry replicas that share one data directory to test this.

[root@master cephfs]# ls
kube-registry.yaml  pod.yaml  pvc.yaml  storageclass.yaml
[root@master cephfs]# docker pull registry:2   
2: Pulling from library/registry
0a6724ff3fcd: Pull complete 
d550a247d74f: Pull complete 
1a938458ca36: Pull complete 
acd758c36fc9: Pull complete 
9af6d68b484a: Pull complete 
Digest: sha256:d5459fcb27aecc752520df4b492b08358a1912fcdfa454f7d2101d4b09991daa
Status: Downloaded newer image for registry:2
docker.io/library/registry:2
[root@master cephfs]# kubectl create -f kube-registry.yaml
persistentvolumeclaim/cephfs-pvc created
deployment.apps/kube-registry created
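
The sharing works because kube-registry.yaml requests a ReadWriteMany volume from the CephFS StorageClass, roughly like this (a sketch; the actual manifest in your checkout may differ):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany            # CephFS lets many pods mount the same volume read-write
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs # must match the StorageClass created in step 3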

6. Create data to verify sharing

A Deployment was created in the kube-system namespace to act as a private registry; it mounts /var/lib/registry from CephFS, and that mount is shared by all 3 replicas.

[root@k8s-master01 cephfs]# kubectl get pod -n kube-system -l k8s-app=kube-registry -o wide
[root@k8s-master01 cephfs]# kubectl -n kube-system exec -it kube-registry-65df7d789d-9bwzn sh
sh-4.2# df -hP|grep '/var/lib/registry'
sh-4.2# cd /var/lib/registry
sh-4.2# touch abc
sh-4.2# exit
[root@k8s-master01 cephfs]# kubectl -n kube-system exec -it kube-registry-65df7d789d-sf55j sh
sh-4.2# ls /var/lib/registry
abc
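
The file created in one replica is visible in the other, confirming the shared mount. To check every replica rather than spot-checking one, the pods can be iterated with the same label selector (a sketch, not from the original post):

[root@k8s-master01 cephfs]# for p in $(kubectl -n kube-system get pod -l k8s-app=kube-registry -o jsonpath='{.items[*].metadata.name}'); do kubectl -n kube-system exec "$p" -- ls /var/lib/registry; done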

II: Deploying Prometheus Monitoring on Kubernetes

1. Download kube-prometheus

[root@master opt]# cd /opt
[root@master opt]# git clone https://github.com/coreos/kube-prometheus.git
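
The master branch tracks recent Kubernetes releases, so its contents drift over time; for reproducible results it is safer to check out a release branch that matches your cluster version (the branch below is only an example; consult the repository's compatibility matrix):

[root@master opt]# cd kube-prometheus
[root@master kube-prometheus]# git checkout release-0.7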

2. Extract the required images

All image references have to be pulled out of the manifests so that Alibaba Cloud's registry can act as a relay mirror to speed up pulls. A simple extraction one-liner:

[root@master kube-prometheus]# find . -name "*.yaml" -exec grep 'quay.io' {} \;|awk '{print $NF}'|sort|uniq
--prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1
quay.io/brancz/kube-rbac-proxy:v0.8.0
quay.io/coreos/kube-state-metrics:v1.9.7
quay.io/fabxc/prometheus_demo_service
quay.io/prometheus/alertmanager:v0.21.0
quay.io/prometheus/blackbox-exporter:v0.18.0
quay.io/prometheus/node-exporter:v1.0.1
quay.io/prometheus-operator/prometheus-operator:v0.44.1
quay.io/prometheus/prometheus:v2.22.1
[root@master kube-prometheus]# find . -name "*.yaml" -exec grep 'image: ' {} \;|awk '{print $NF}'|sort|uniq
}}
directxman12/k8s-prometheus-adapter:v0.8.2
gcr.io/google_containers/metrics-server-amd64:v0.2.0
grafana/grafana:7.3.5
jimmidyson/configmap-reload:v0.4.0
quay.io/brancz/kube-rbac-proxy:v0.8.0
quay.io/coreos/kube-state-metrics:v1.9.7
quay.io/fabxc/prometheus_demo_service
quay.io/prometheus/alertmanager:v0.21.0
quay.io/prometheus/blackbox-exporter:v0.18.0
quay.io/prometheus/node-exporter:v1.0.1
quay.io/prometheus-operator/prometheus-operator:v0.44.1
quay.io/prometheus/prometheus:v2.22.1

3. Replace the images with Aliyun mirrors in the config files

Note that the image tags in the script below come from an older revision of kube-prometheus and do not all match the versions found in step 2 (e.g. prometheus-operator v0.44.1 there vs v0.38.0 here); adjust the sed patterns to the versions your checkout actually uses, otherwise they will match nothing.

[root@master kube-prometheus]# cat a.sh 
find . -name "*.yaml" -exec sed -i 's|gcr.io/google_containers/metrics-server-amd64:v0.2.0|registry.cn-hangzhou.aliyuncs.com/vinc-auto/metrics-server-amd64:v0.2.0|g' {} \;
find . -name "*.yaml" -exec sed -i 's|grafana/grafana:6.6.0|registry.cn-hangzhou.aliyuncs.com/vinc-auto/grafana:6.6.0|g' {} \;
find . -name "*.yaml" -exec sed -i 's|jimmidyson/configmap-reload:v0.3.0|registry.cn-hangzhou.aliyuncs.com/vinc-auto/configmap-reload:v0.3.0|g' {} \;
find . -name "*.yaml" -exec sed -i 's|luxas/autoscale-demo:v0.1.2|registry.cn-hangzhou.aliyuncs.com/vinc-auto/autoscale-demo:v0.1.2|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/coreos/k8s-prometheus-adapter-amd64:v0.5.0|registry.cn-hangzhou.aliyuncs.com/vinc-auto/k8s-prometheus-adapter-amd64:v0.5.0|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/coreos/kube-rbac-proxy:v0.4.1|registry.cn-hangzhou.aliyuncs.com/vinc-auto/kube-rbac-proxy:v0.4.1|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/coreos/kube-state-metrics:v1.9.5|registry.cn-hangzhou.aliyuncs.com/vinc-auto/kube-state-metrics:v1.9.5|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/coreos/prometheus-config-reloader:v0.38.0|registry.cn-hangzhou.aliyuncs.com/vinc-auto/prometheus-config-reloader:v0.38.0|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/coreos/prometheus-operator:v0.38.0|registry.cn-hangzhou.aliyuncs.com/vinc-auto/prometheus-operator:v0.38.0|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/fabxc/prometheus_demo_service|registry.cn-hangzhou.aliyuncs.com/vinc-auto/prometheus_demo_service:latest|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/prometheus/alertmanager:v0.20.0|registry.cn-hangzhou.aliyuncs.com/vinc-auto/alertmanager:v0.20.0|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/prometheus/node-exporter:v0.18.1|registry.cn-hangzhou.aliyuncs.com/vinc-auto/node-exporter:v0.18.1|g' {} \;
find . -name "*.yaml" -exec sed -i 's|quay.io/prometheus/prometheus:v2.15.2|registry.cn-hangzhou.aliyuncs.com/vinc-auto/prometheus:v2.15.2|g' {} \;
[root@master kube-prometheus]# bash a.sh
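
After running the script, it is worth re-running the extraction from step 2 to confirm that no manifest still references quay.io or gcr.io (a quick sanity check, not part of the original post):

[root@master kube-prometheus]# find . -name "*.yaml" -exec grep -l 'quay.io\|gcr.io' {} \;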

4. Install the prometheus-operator

[root@master kube-prometheus]# kubectl apply -f manifests/setup
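
manifests/setup creates the monitoring namespace and the operator's CustomResourceDefinitions. Before applying the remaining manifests, wait until the CRDs are registered; the kube-prometheus README suggests polling for the ServiceMonitor resource, roughly:

[root@master kube-prometheus]# until kubectl get servicemonitors --all-namespaces; do date; sleep 1; done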

5. Install the Prometheus metrics adapter and the remaining components

[root@master kube-prometheus]# kubectl apply -f manifests/

6. Check the running status

[root@master kube-prometheus]# kubectl get pods -n monitoring
[root@master kube-prometheus]# kubectl top pods -n monitoring

7. Expose the Prometheus service

Change the Service type from ClusterIP to NodePort:
[root@master kube-prometheus]# kubectl edit svc prometheus-k8s -n monitoring
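
The same change can be made non-interactively with a patch, which is equivalent to the interactive edit:

[root@master kube-prometheus]# kubectl -n monitoring patch svc prometheus-k8s -p '{"spec":{"type":"NodePort"}}'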

8. Access Prometheus from a browser
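
Look up the port that was allocated and open http://<node-ip>:<nodeport> in a browser; Prometheus serves its UI on service port 9090, and the assigned NodePort appears in the PORT(S) column:

[root@master kube-prometheus]# kubectl -n monitoring get svc prometheus-k8s

The same lookup applies to the Grafana service (port 3000) in steps 9 and 10.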

9. Expose the Grafana service

Change the Service type from ClusterIP to NodePort:
[root@master kube-prometheus]# kubectl edit svc grafana -n monitoring

10. Access Grafana from a browser

The default username and password are both admin.

Source: https://blog.csdn.net/weixin_55985097/article/details/119084391
