containerd Usage Notes

2022-06-28 15:02:25



# Installation
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

yum -y install containerd.io


Note: when installed via yum, the libseccomp and runc dependencies are installed automatically.

rpm -qa | grep containerd

systemctl enable containerd
systemctl start containerd
systemctl status containerd

ctr version

# Generate the default configuration file

containerd config default > /etc/containerd/config.toml
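One edit commonly made right after generating the default config (not covered in the original notes) is switching runc to the systemd cgroup driver, which is recommended when containerd backs a Kubernetes kubelet. A sketch applied to a throwaway sample file so the effect is visible; on a real host the target would be /etc/containerd/config.toml:

```shell
# Create a minimal sample containing the relevant key (a stand-in for
# /etc/containerd/config.toml as produced by `containerd config default`).
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = false' > "$cfg"

# Flip the cgroup driver to systemd in place, then show the result.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
grep SystemdCgroup "$cfg"
```

After editing the real file, restart containerd (`systemctl restart containerd`) for the change to take effect.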

sandbox_image = "docker.io/liuxincuit/pause:3.6"  => replaced here for network reasons (the default k8s.gcr.io pause image is unreachable from this host)
sandbox_image = "liuxincuit/pause:3.6"  => equivalent short form; docker.io is implied

    [plugins."io.containerd.grpc.v1.cri".cni] => the CNI network plugin is not installed yet
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

Add additional image registry mirrors:
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
            endpoint = ["https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
            endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
            endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
            endpoint = ["https://quay.mirrors.ustc.edu.cn"]
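Note (not in the original notes): since containerd 1.5 the registry.mirrors tables above are deprecated in favor of pointing config_path at a directory of per-registry hosts.toml files. A sketch of the equivalent setup for the docker.io mirror, assuming the conventional /etc/containerd/certs.d layout:

```toml
# In /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

# In /etc/containerd/certs.d/docker.io/hosts.toml:
server = "https://registry-1.docker.io"

[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull", "resolve"]
```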

# Check runc
runc -v

# Common commands
ctr --help

COMMANDS:
   plugins, plugin            provides information about containerd plugins
   version                    print the client and server versions
   containers, c, container   manage containers
   content                    manage content
   events, event              display containerd events
   images, image, i           manage images
   leases                     manage leases
   namespaces, namespace, ns  manage namespaces
   pprof                      provide golang pprof outputs for containerd
   run                        run a container
   snapshots, snapshot        manage snapshots
   tasks, t, task             manage tasks
   install                    install a new package
   oci                        OCI tools
   shim                       interact with a shim directly
   help, h                    Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug                      enable debug output in logs
   --address value, -a value    address for containerd's GRPC server (default: "/run/containerd/containerd.sock") [$CONTAINERD_ADDRESS]
   --timeout value              total timeout for ctr commands (default: 0s)
   --connect-timeout value      timeout for connecting to containerd (default: 0s)
   --namespace value, -n value  namespace to use with commands (default: "default") [$CONTAINERD_NAMESPACE]
   --help, -h                   show help
   --version, -v                print the version


ctr images ls

ctr images pull docker.io/library/nginx:alpine

Note: containerd supports OCI-standard images, so images from Docker Hub or built with a Dockerfile can be used directly. In other words, you can keep building images from Dockerfiles and use containerd to manage the containers.

ctr images mount docker.io/library/nginx:alpine /mnt
umount /mnt

ctr images export --all-platforms nginx.img docker.io/library/nginx:alpine

ctr images rm docker.io/library/nginx:alpine

ctr images import nginx.img

ctr images tag docker.io/library/nginx:alpine nginx:alpine

ctr images check

Note:
1. After `ctr container create`, the container is not running yet; it is only a static container.
The container object is just a data structure holding the resources and configuration needed to run a container:
the namespaces, rootfs, and container config are already initialized, but the user process (nginx in this example) has not started.
Use the `ctr tasks` commands to turn it into a running container.
2. `ctr run` creates a static container and starts it, running the container in one step.


ctr container ls => ctr c ls

ctr task ls => ctr t ls

ctr container create docker.io/library/nginx:alpine nginx1

ctr container info nginx1

ctr task start -d nginx1

# ctr task ps nginx1  (shows the container's processes, which are host PIDs) => the first PID is the parent process, as the listing below confirms
PID     INFO
1591    -
1629    -
1630    -
1631    -
1632    -

# ps -ef|grep 1591 
root       1591   1571  0 09:42 ?        00:00:00 nginx: master process nginx -g daemon off;
101        1629   1591  0 09:42 ?        00:00:00 nginx: worker process
101        1630   1591  0 09:42 ?        00:00:00 nginx: worker process
101        1631   1591  0 09:42 ?        00:00:00 nginx: worker process
101        1632   1591  0 09:42 ?        00:00:00 nginx: worker process


ctr task exec --exec-id 1 nginx1 /bin/sh  => no shell prompt appears; --exec-id assigns an id to the exec process and can be any value, as long as it is unique
Check the interfaces: by default there is only the lo loopback interface.

Exit the container with exit.

Note: add the -t flag to get a shell prompt: / #  (the leading / is the current directory)


ctr run -d --net-host docker.io/library/nginx:alpine nginx2  => --net-host gives the container the host's IP (equivalent to Docker's host network mode)

ctr task exec --exec-id 2 -t nginx2 /bin/sh  => a shell prompt appears
Check the interfaces: besides the lo loopback interface, the host's interfaces are visible.

ctr task pause nginx2
ctr task resume nginx2
ctr task kill nginx2
ctr task delete nginx2
ctr container delete nginx2

Note: a container started with -d may not stop with a plain ctr task kill (which sends SIGTERM); either kill the PID with the system kill command, or send SIGKILL with ctr task kill -s SIGKILL <name>, then run ctr task delete and ctr container delete.

# Add Harbor as a private registry


      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
            endpoint = ["https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
            endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
            endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
            endpoint = ["https://quay.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."www.myharbor.com"]
            endpoint = ["https://www.myharbor.com/"]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."www.myharbor.com".tls]
          #insecure_skip_verify = true
          insecure_skip_verify = false
          ca_file = "/etc/containerd/www.myharbor.com/ca.crt"
          cert_file = "/etc/containerd/www.myharbor.com/www.myharbor.com.crt"
          key_file = "/etc/containerd/www.myharbor.com/www.myharbor.com.key"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."www.myharbor.com".auth]
          username = "test"
          password = "Test123456"
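As an aside (not in the original notes), the CRI registry config also accepts an `auth` key holding base64("user:password") in place of the plaintext pair above, the same encoding Docker stores in ~/.docker/config.json. A sketch using this article's sample credentials:

```shell
# Encode the sample user:password pair from the config above; the result could
# be placed in the auth table as: auth = "<token>" (instead of username/password).
token=$(printf '%s' 'test:Test123456' | base64)
echo "auth = \"$token\""
```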


However, pulling an image with ctr reports an error:

# ctr images pull www.myharbor.com/library/busybox:v0.1
INFO[0000] trying next host                              error="failed to do request: Head \"https://www.myharbor.com/v2/library/busybox/manifests/v0.1\": x509: certificate signed by unknown authority" host=www.myharbor.com
ctr: failed to resolve reference "www.myharbor.com/library/busybox:v0.1": failed to do request: Head "https://www.myharbor.com/v2/library/busybox/manifests/v0.1": x509: certificate signed by unknown authority
[root@localhost ~]# ctr images pull www.myharbor.com/mytest/busybox:v0.1 
INFO[0000] trying next host                              error="failed to do request: Head \"https://www.myharbor.com/v2/mytest/busybox/manifests/v0.1\": x509: certificate signed by unknown authority" host=www.myharbor.com
ctr: failed to resolve reference "www.myharbor.com/mytest/busybox:v0.1": failed to do request: Head "https://www.myharbor.com/v2/mytest/busybox/manifests/v0.1": x509: certificate signed by unknown authority


Solution: pass the username, password, and CA certificate on the pull command line.
ctr images pull --help

OPTIONS:
   --skip-verify, -k                 skip SSL certificate validation
   --plain-http                      allow connections using plain HTTP
   --user value, -u value            user[:password] Registry user and password
   --refresh value                   refresh token for authorization server
   --hosts-dir value                 Custom hosts configuration directory
   --tlscacert value                 path to TLS root CA
   --tlscert value                   path to TLS client certificate
   --tlskey value                    path to TLS client key
   --http-dump                       dump all HTTP request/responses when interacting with container registry
   --http-trace                      enable HTTP tracing for registry interactions
   --snapshotter value               snapshotter name. Empty value stands for the default value. [$CONTAINERD_SNAPSHOTTER]
   --label value                     labels to attach to the image
   --platform value                  Pull content from a specific platform
   --all-platforms                   pull content and metadata from all platforms
   --all-metadata                    Pull metadata for all platforms
   --print-chainid                   Print the resulting image's chain ID
   --max-concurrent-downloads value  Set the max concurrent downloads for each pull (default: 0)
   

ctr images pull --user test:Test123456 --tlscacert /etc/containerd/www.myharbor.com/ca.crt  www.myharbor.com/mytest/busybox:v0.2

ctr images tag www.myharbor.com/mytest/busybox:v0.2 www.myharbor.com/mytest/busybox:v0.3

ctr images push --user test:Test123456 --tlscacert /etc/containerd/www.myharbor.com/ca.crt  www.myharbor.com/mytest/busybox:v0.3

ctr namespace --help

COMMANDS:
   create, c   create a new namespace
   list, ls    list namespaces
   remove, rm  remove one or more namespaces
   label       set and clear labels for a namespace

OPTIONS:
   --help, -h  show help
   

ctr namespace ls

ctr namespace create kubemsb

ctr -n kubemsb tasks ls

ctr -n kubemsb images pull docker.io/library/nginx:latest => an image pulled in a namespace is visible only within that namespace
ctr -n kubemsb container create docker.io/library/nginx:latest nginxapp
ctr -n kubemsb container ls

ctr -n kubemsb images ls => the image is visible
ctr -n default images ls => the image is not visible
ctr images ls => the image is not visible (ctr defaults to the default namespace)


With a yum installation, containers managed by containerd have only the lo interface by default and cannot reach anything outside the container. Installing a CNI network plugin lets containers connect to external networks.

https://github.com/containernetworking/cni
https://github.com/containernetworking/plugins


The containerd configuration file contains the following CNI settings by default:

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

Download the CNI source tarball: cni-1.1.1.tar.gz
Download the CNI plugins tarball: cni-plugins-linux-amd64-v1.1.1.tgz

tar xf cni-1.1.1.tar.gz && mv cni-1.1.1 cni

mkdir -p /opt/cni/bin ==> must match bin_dir in the config above
tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin

Create a network named mynet containing a bridge named cni0:

$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
    "cniVersion": "1.0.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF

$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
    "cniVersion": "1.0.0",
    "name": "lo",
    "type": "loopback"
}
EOF
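CNI can also chain several plugins into a single network with a .conflist file (hypothetical /etc/cni/net.d/10-mynet.conflist; portmap ships in the same cni-plugins tarball). A sketch combining the bridge network above with port mapping:

```json
{
    "cniVersion": "1.0.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": true,
            "ipMasq": true,
            "ipam": {
                "type": "host-local",
                "subnet": "10.22.0.0/16",
                "routes": [
                    { "dst": "0.0.0.0/0" }
                ]
            }
        },
        {
            "type": "portmap",
            "capabilities": { "portMappings": true }
        }
    ]
}
```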

Note: cniVersion in the two .conf files (10-mynet.conf and 99-loopback.conf) cannot be set to 1.1.1, which is the plugin release number; the v1.1.1 plugins only implement CNI spec versions up to 1.0.0:
mynet : error executing ADD: {
    "code": 1,
    "msg": "incompatible CNI versions",
    "details": "config is \"1.1.1\", plugin supports [\"0.1.0\" \"0.2.0\" \"0.3.0\" \"0.3.1\" \"0.4.0\" \"1.0.0\"]"
}
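The mismatch above can be caught before running the scripts by comparing the configured cniVersion against what the installed plugins implement. A small self-contained sketch (the supported-version list is the one printed in the error message; sed stands in for jq):

```shell
# Write a minimal CNI conf to a temp file and extract its cniVersion.
conf=$(mktemp)
printf '%s\n' '{ "cniVersion": "1.0.0", "name": "mynet", "type": "bridge" }' > "$conf"
ver=$(sed -n 's/.*"cniVersion": *"\([^"]*\)".*/\1/p' "$conf")

# Versions implemented by the v1.1.1 plugin binaries, per the error above.
case "$ver" in
  0.1.0|0.2.0|0.3.0|0.3.1|0.4.0|1.0.0) result="supported" ;;
  *)                                    result="unsupported" ;;
esac
echo "cniVersion $ver is $result"
rm -f "$conf"
```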

yum install epel-release -y && yum -y install jq

cd cni/scripts/

Run the script to create a container network from the *.conf files in /etc/cni/net.d/:

CNI_PATH=/opt/cni/bin ./priv-net-run.sh echo "Hello World"


3: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:23:71:8e:90:01 brd ff:ff:ff:ff:ff:ff
    inet 10.22.0.1/16 brd 10.22.255.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::5023:71ff:fe8e:9001/64 scope link 
       valid_lft forever preferred_lft forever

# ip route

10.22.0.0/16 dev cni0 proto kernel scope link src 10.22.0.1


ctr images pull docker.io/library/busybox:latest

ctr run -d docker.io/library/busybox:latest busybox

# ctr tasks exec --exec-id $RANDOM -t busybox /bin/sh
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

pid=$(ctr tasks ls | grep busybox | awk '{print $2}')
echo $pid

netnspath=/proc/$pid/ns/net
echo $netnspath
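The two lines above scrape the task PID out of `ctr tasks ls`; the same parsing shown self-contained against sample output in the shape ctr prints (TASK, PID, STATUS columns; the PID here is made up for illustration):

```shell
# Sample `ctr tasks ls` output; on a real host you would pipe the command itself.
sample='TASK       PID     STATUS
busybox    2797    RUNNING'

# Pick the row for the busybox task and take the second column (the PID),
# then derive the path to that process's network namespace.
pid=$(printf '%s\n' "$sample" | grep busybox | awk '{print $2}')
netnspath=/proc/$pid/ns/net
echo "$pid $netnspath"
```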


cd cni/scripts/
CNI_PATH=/opt/cni/bin ./exec-plugins.sh add $pid $netnspath

# ctr tasks exec --exec-id $RANDOM -t busybox /bin/sh  => enter the container to confirm the new interface was added
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 6e:66:b7:e1:8e:54 brd ff:ff:ff:ff:ff:ff
    inet 10.22.0.3/16 brd 10.22.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6c66:b7ff:fee1:8e54/64 scope link 
       valid_lft forever preferred_lft forever

Ping the host's IP from inside the container => works
Ping the gateway of the host's network from inside the container => works
Ping another host on the host's network from inside the container => works

/ # ping -c 3 www.baidu.com
ping: bad address 'www.baidu.com'

The host and the container can reach each other, but the container cannot reach the Internet by name: busybox's "bad address" means DNS resolution failed, because the container has no usable /etc/resolv.conf (ctr does not generate one the way Docker does). Adding a nameserver entry inside the container is a quick fix.

Mount a host directory into a containerd container for persistent data storage:
ctr container create docker.io/library/busybox:latest busybox3 --mount type=bind,src=/tmp,dst=/hostdir,options=rbind:rw


Docker can be integrated with containerd for container management. Modify the Docker service file so it uses the already-installed containerd:

# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock # original line
ExecStart=/usr/bin/dockerd --containerd  /run/containerd/containerd.sock --debug  # modified line
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target


systemctl daemon-reload && systemctl restart docker

After starting a container with docker, use ctr to check whether a new namespace has appeared. In this example a moby namespace shows up: the namespace Docker uses.
ctr namespace ls

Inspect the moby namespace: containers started with docker run appear in it.
ctr -n moby container ls

Seeing the running container through ctr confirms that containers started with docker run are managed by containerd.

After docker stop and docker rm, check again: the container is gone.

ctr -n moby container ls && ctr -n moby tasks ls

Source: https://www.cnblogs.com/sanduzxcvbnm/p/16419450.html