ICode9


Redis Cluster Deployment

2019-11-15 12:52:12  Source: Internet



Contents:

Compiling and Installing Redis

Deploying Redis Cluster

Scaling Out the Redis Cluster

Taking a Specified Node Offline

 

 Environment:

 Hostname   IP
 node1      192.168.10.1
 node2      192.168.10.2
 node3      192.168.10.3
 node4      192.168.10.4
 node5      192.168.10.5
 node6      192.168.10.6

 

Compile and install redis-4.0.14.tar.gz, and confirm time is synchronized (node1~node6)

 [root@node1 ~]# tar xf redis-4.0.14.tar.gz
 [root@node1 ~]# cd redis-4.0.14/
 [root@node1 redis-4.0.14]# make PREFIX=/apps/redis install
 ​
 ​
 [root@node1 redis-4.0.14]# mkdir -p /apps/redis/{etc,log,data,run}
 [root@node1 redis-4.0.14]# cp redis.conf /apps/redis/etc/
 [root@node1 redis-4.0.14]# sed -i 's@logfile ""@logfile "/apps/redis/log/redis.log"@' /apps/redis/etc/redis.conf

Create the redis user (node1~node6)

 [root@node1 ~]# useradd -r redis -s /sbin/nologin

Create the systemd service unit file (node1~node6)

 [root@node1 ~]# vim /usr/lib/systemd/system/redis.service
 [Unit]
 Description=Redis persistent key-value database
 After=network.target
 After=network-online.target
 Wants=network-online.target
 [Service]
 ExecStart=/apps/redis/bin/redis-server /apps/redis/etc/redis.conf --supervised systemd
 ExecReload=/bin/kill -s HUP $MAINPID
 ExecStop=/bin/kill -s QUIT $MAINPID
 Type=notify
 User=redis
 Group=redis
 RuntimeDirectory=redis
 RuntimeDirectoryMode=0755
 [Install]
 WantedBy=multi-user.target

Start the service (node1~node6)

 [root@node1 ~]# chown -R redis.redis /apps/redis
 [root@node1 ~]# systemctl daemon-reload
 [root@node1 ~]# systemctl start redis
 [root@node1 ~]# ss -ntl
 State       Recv-Q Send-Q Local Address:Port               Peer Address:Port              
 LISTEN     0     100   127.0.0.1:25                         *:*                
 LISTEN     0     128   127.0.0.1:6379                       *:*                
 LISTEN     0     128           *:111                       *:*                
 LISTEN     0     128           *:22                         *:*                
 LISTEN     0     100       [::1]:25                     [::]:*                
 LISTEN     0     128         [::]:111                     [::]:*                
 LISTEN     0     128         [::]:22                     [::]:*                

Check the log and resolve the startup warnings (node1~node6)

 [root@node1 ~]# cat /apps/redis/log/redis.log 
 41347:C 13 Nov 17:08:52.446 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
 41347:C 13 Nov 17:08:52.447 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=41347, just started
 41347:C 13 Nov 17:08:52.447 # Configuration loaded
 41347:C 13 Nov 17:08:52.447 * supervised by systemd, will signal readiness
 41347:M 13 Nov 17:08:52.455 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
 41347:M 13 Nov 17:08:52.455 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
 41347:M 13 Nov 17:08:52.455 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
                _._                                                  
            _.-``__ ''-._                                            
      _.-``   `. `_. ''-._           Redis 4.0.14 (00000000/0) 64 bit
  .-`` .-```. ```\/   _.,_ ''-._                                  
  (   '     ,       .-` | `,   )     Running in standalone mode
  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
  |   `-._   `._   /     _.-'   |     PID: 41347
  `-._   `-._ `-./ _.-'   _.-'                                  
  |`-._`-._   `-.__.-'   _.-'_.-'|                                  
  |   `-._`-._       _.-'_.-'   |           http://redis.io        
  `-._   `-._`-.__.-'_.-'   _.-'                                  
  |`-._`-._   `-.__.-'   _.-'_.-'|                                  
  |   `-._`-._       _.-'_.-'   |                                  
  `-._   `-._`-.__.-'_.-'   _.-'                                  
      `-._   `-.__.-'   _.-'                                      
          `-._       _.-'                                          
              `-.__.-'                                              
 ​
 41347:M 13 Nov 17:08:52.462 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
 41347:M 13 Nov 17:08:52.463 # Server initialized
 41347:M 13 Nov 17:08:52.463 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
 41347:M 13 Nov 17:08:52.463 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
 41347:M 13 Nov 17:08:52.464 * Ready to accept connections
 First warning: Redis sets a TCP backlog of 511, but the kernel's net.core.somaxconn is currently 128, lower than Redis requires.

 Second warning: the kernel's memory overcommit policy must be changed from 0 to 1 so that Redis background saves can allocate memory reliably.

 [root@node1 ~]# vim /etc/sysctl.conf
 [root@node1 ~]# sysctl -p
 net.core.somaxconn = 512
 vm.overcommit_memory = 1
 Third warning: disable the kernel's transparent huge pages (THP) feature; following the commands given in the warning message is sufficient.
 ​
 [root@node1 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
 [root@node1 ~]# echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
 [root@node1 ~]# chmod +x /etc/rc.local

After addressing the warnings, clear the Redis log and restart Redis; the somaxconn, overcommit, and THP warnings no longer appear (node1~node6). Note that the open-files warning remains, because the service's file-descriptor limit was not raised.

 [root@node1 ~]# rm -rf /apps/redis/log/redis.log
 [root@node1 ~]# systemctl restart redis
 [root@node1 ~]# cat /apps/redis/log/redis.log
 41545:C 13 Nov 19:02:23.537 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
 41545:C 13 Nov 19:02:23.537 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=41545, just started
 41545:C 13 Nov 19:02:23.537 # Configuration loaded
 41545:C 13 Nov 19:02:23.537 * supervised by systemd, will signal readiness
 41545:M 13 Nov 19:02:23.541 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
 41545:M 13 Nov 19:02:23.541 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
 41545:M 13 Nov 19:02:23.541 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
                _._                                                  
            _.-``__ ''-._                                            
      _.-``   `. `_. ''-._           Redis 4.0.14 (00000000/0) 64 bit
  .-`` .-```. ```\/   _.,_ ''-._                                  
  (   '     ,       .-` | `,   )     Running in standalone mode
  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
  |   `-._   `._   /     _.-'   |     PID: 41545
  `-._   `-._ `-./ _.-'   _.-'                                  
  |`-._`-._   `-.__.-'   _.-'_.-'|                                  
  |   `-._`-._       _.-'_.-'   |           http://redis.io        
  `-._   `-._`-.__.-'_.-'   _.-'                                  
  |`-._`-._   `-.__.-'   _.-'_.-'|                                  
  |   `-._`-._       _.-'_.-'   |                                  
  `-._   `-._`-.__.-'_.-'   _.-'                                  
      `-._   `-.__.-'   _.-'                                      
          `-._       _.-'                                          
              `-.__.-'                                              
 ​
 41545:M 13 Nov 19:02:23.550 # Server initialized
 41545:M 13 Nov 19:02:23.551 * Ready to accept connections
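The remaining maxclients warning comes from the service's open-files limit (4096). Under systemd this is usually raised with a LimitNOFILE directive in a drop-in unit; the sketch below is a hedged suggestion, not part of the original deployment. It writes to a local directory by default so it can be dry-run; on the real host set DROPIN to /etc/systemd/system/redis.service.d.

```shell
# Hedged fix: raise the redis service's file-descriptor limit so
# maxclients 10000 can take effect. LimitNOFILE is a standard systemd
# [Service] directive. DROPIN defaults to a local directory for a dry
# run; point it at /etc/systemd/system/redis.service.d on the server.
DROPIN=${DROPIN:-redis.service.d}
mkdir -p "$DROPIN"
cat > "$DROPIN/limits.conf" <<'EOF'
[Service]
LimitNOFILE=10240
EOF
# afterwards, on the real host: systemctl daemon-reload && systemctl restart redis
```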

Configure the PATH environment variable for Redis and test connections with redis-cli (node1~node6)

 [root@node1 ~]# vim /etc/profile.d/redis.sh
 #!/bin/bash
 export PATH=/apps/redis/bin:$PATH
 ​
 [root@node1 ~]# source /etc/profile.d/redis.sh

Edit the Redis configuration file; with all comments and blank lines filtered out it looks like this (node1~node6)

 [root@node1 ~]# vim /apps/redis/etc/redis.conf
 [root@node1 ~]# cat /apps/redis/etc/redis.conf | grep -Ev "^#|^$"
 bind 192.168.10.1 # set to each server's own IP address; everything else is identical on all nodes
 protected-mode yes
 port 6379
 tcp-backlog 511
 timeout 0
 tcp-keepalive 300
 daemonize yes
 supervised no
 pidfile /apps/redis/run/redis_6379.pid
 loglevel notice
 logfile "/apps/redis/log/redis.log"
 databases 16
 always-show-logo yes
 save 900 1
 save 300 10
 save 60 10000
 stop-writes-on-bgsave-error yes
 rdbcompression yes
 rdbchecksum yes
 dbfilename dump.rdb
 dir /apps/redis/data/
 masterauth 1
 slave-serve-stale-data yes
 slave-read-only yes
 repl-diskless-sync no
 repl-diskless-sync-delay 5
 repl-disable-tcp-nodelay no
 slave-priority 100
 requirepass 1
 lazyfree-lazy-eviction no
 lazyfree-lazy-expire no
 lazyfree-lazy-server-del no
 slave-lazy-flush no
 appendonly no
 appendfilename "appendonly.aof"
 appendfsync everysec
 no-appendfsync-on-rewrite no
 auto-aof-rewrite-percentage 100
 auto-aof-rewrite-min-size 64mb
 aof-load-truncated yes
 aof-use-rdb-preamble no
 lua-time-limit 5000
 cluster-enabled yes
 cluster-config-file nodes-6379.conf
 slowlog-log-slower-than 10000
 slowlog-max-len 128
 latency-monitor-threshold 0
 notify-keyspace-events ""
 hash-max-ziplist-entries 512
 hash-max-ziplist-value 64
 list-max-ziplist-size -2
 list-compress-depth 0
 set-max-intset-entries 512
 zset-max-ziplist-entries 128
 zset-max-ziplist-value 64
 hll-sparse-max-bytes 3000
 activerehashing yes
 client-output-buffer-limit normal 0 0 0
 client-output-buffer-limit slave 256mb 64mb 60
 client-output-buffer-limit pubsub 32mb 8mb 60
 hz 10
 aof-rewrite-incremental-fsync yes
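Since the file is identical on every node except for the bind line, per-node copies can be generated from one template. A minimal, hypothetical sketch (the template name and output file names are invented for illustration; in practice the template would be the full redis.conf shown above):

```shell
# Hypothetical helper: stamp each node's own IP into a shared template.
# A tiny stand-in template is created here for demonstration purposes.
TEMPLATE=redis.conf.template
printf 'bind 192.168.10.1\nport 6379\ncluster-enabled yes\n' > "$TEMPLATE"
for ip in 192.168.10.1 192.168.10.2 192.168.10.3 192.168.10.4 192.168.10.5 192.168.10.6; do
  sed "s/^bind .*/bind ${ip}/" "$TEMPLATE" > "redis-${ip}.conf"
done
# each redis-<ip>.conf now differs from the template only in its bind line
```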

Restart the Redis service and check the listening ports (node1~node6)

 [root@node1 ~]# systemctl restart redis
 [root@node1 ~]# ss -ntl
 State     Recv-Q Send-Q   Local Address:Port                 Peer Address:Port              
 LISTEN     0     100         127.0.0.1:25                               *:*    
 LISTEN     0     511       192.168.10.1:16379                           *:*    
 LISTEN     0     511       192.168.10.1:6379                             *:*    
 LISTEN     0     128                 *:111                             *:*    
 LISTEN     0     128                 *:22                               *:*    
 LISTEN     0     100             [::1]:25                           [::]:*    
 LISTEN     0     128               [::]:111                           [::]:*    
 LISTEN     0     128               [::]:22                           [::]:*                  

That completes the base configuration on every server. Next, set up the Redis Cluster itself.


 

 

 

Compile and install the Redis cluster management tool (Ruby and the redis gem) on node1

 [root@node1 ~]# yum -y install gcc make zlib-devel readline-devel gdbm-devel
 ​
 [root@node1 ~]# tar xf ruby-2.5.5.tar.gz
 [root@node1 ~]# cd ruby-2.5.5/
 [root@node1 ruby-2.5.5]# ./configure
 [root@node1 ruby-2.5.5]# make -j 2
 [root@node1 ruby-2.5.5]# make install
 ​
 [root@node1 ruby-2.5.5]# gem install redis
 ERROR: While executing gem ... (Gem::Exception)
    Unable to require openssl, install OpenSSL and rebuild Ruby (preferred) or use non-HTTPS sources
 ​
 Fix:
 [root@node1 ruby-2.5.5]# yum -y install openssl-devel
 [root@node1 ruby-2.5.5]# cd ext/openssl/
 [root@node1 openssl]# pwd
 /root/ruby-2.5.5/ext/openssl
 [root@node1 openssl]# ruby ./extconf.rb
 [root@node1 openssl]# make
 compiling openssl_missing.c
 make: *** No rule to make target `/include/ruby.h', needed by `ossl.o'. Stop.
 ​
 ​
 Add top_srcdir = ../.. at the top of the Makefile
 [root@node1 openssl]# vim Makefile
 [root@node1 openssl]# head -1 Makefile
 top_srcdir = ../..
 ​
 ​
 Run make again
 [root@node1 openssl]# make
 [root@node1 openssl]# make install
 ​
 ​
 [root@node1 openssl]# cd /root/ruby-2.5.5/
 [root@node1 ruby-2.5.5]# gem install redis
 Fetching: redis-4.1.3.gem (100%)
 Successfully installed redis-4.1.3
 Parsing documentation for redis-4.1.3
 Installing ri documentation for redis-4.1.3
 Done installing documentation for redis after 1 seconds
 1 gem installed
 ​
 ​
 Edit the gem's client configuration and change the default password to the Redis password (requirepass is 1 in this deployment)
 [root@node1 ruby-2.5.5]# vim /usr/local/lib/ruby/gems/2.5.0/gems/redis-4.1.3/lib/redis/client.rb
 [root@node1 ruby-2.5.5]# cat /usr/local/lib/ruby/gems/2.5.0/gems/redis-4.1.3/lib/redis/client.rb | grep ":password "
      :password => 1,
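The manual vim edit above can also be done with sed. A hedged sketch: it assumes the gem's DEFAULTS line originally reads ":password => nil," (as shipped in redis-4.1.3), and it is demonstrated on a stand-in file; run the same sed against /usr/local/lib/ruby/gems/2.5.0/gems/redis-4.1.3/lib/redis/client.rb on the real host.

```shell
# Stand-in for the gem's client.rb, containing the line to be patched
printf '      :password => nil,\n' > client.rb.sample
# Replace the default nil password with the cluster's requirepass value (1)
sed -i 's/:password => nil,/:password => 1,/' client.rb.sample
grep ":password " client.rb.sample   # the file now contains ":password => 1,"
```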

Use the cluster management tool to create the Redis cluster: create builds the cluster, --replicas sets how many slaves each master gets, and the remaining arguments are the addresses of all the Redis instances. Type yes when prompted.

 [root@node1 ~]# cp ./redis-4.0.14/src/redis-trib.rb /usr/sbin/
 [root@node1 ~]# redis-trib.rb
 Usage: redis-trib <command> <options> <arguments ...>
 ​
  create         host1:port1 ... hostN:portN
                  --replicas <arg>
  check           host:port
  info           host:port
  fix             host:port
                  --timeout <arg>
  reshard         host:port
                  --from <arg>
                  --to <arg>
                  --slots <arg>
                  --yes
                  --timeout <arg>
                  --pipeline <arg>
  rebalance       host:port
                  --weight <arg>
                  --auto-weights
                  --use-empty-masters
                  --timeout <arg>
                  --simulate
                  --pipeline <arg>
                  --threshold <arg>
  add-node       new_host:new_port existing_host:existing_port
                  --slave
                  --master-id <arg>
  del-node       host:port node_id
  set-timeout     host:port milliseconds
  call           host:port command arg arg .. arg
  import         host:port
                  --from <arg>
                  --copy
                  --replace
  help           (show this help)
 ​
 ​
 ​
 [root@node1 ~]# redis-trib.rb create --replicas 1 192.168.10.1:6379 192.168.10.2:6379 192.168.10.3:6379 192.168.10.4:6379 192.168.10.5:6379 192.168.10.6:6379
 >>> Creating cluster
 >>> Performing hash slots allocation on 6 nodes...
 Using 3 masters:
 192.168.10.1:6379
 192.168.10.2:6379
 192.168.10.3:6379
 Adding replica 192.168.10.5:6379 to 192.168.10.1:6379
 Adding replica 192.168.10.6:6379 to 192.168.10.2:6379
 Adding replica 192.168.10.4:6379 to 192.168.10.3:6379
 M: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
    slots:0-5460 (5461 slots) master
 M: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
    slots:5461-10922 (5462 slots) master
 M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
    slots:10923-16383 (5461 slots) master
 S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
    replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
 S: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
    replicates ea5b2c985511241879f17d84d462888a24d20590
 S: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
    replicates 0a45c94458e8a5d751275e252c40c280ff78527e
 Can I set the above configuration? (type 'yes' to accept): yes
 >>> Nodes configuration updated
 >>> Assign a different config epoch to each node
 >>> Sending CLUSTER MEET messages to join the cluster
 Waiting for the cluster to join...
 >>> Performing Cluster Check (using node 192.168.10.1:6379)
 M: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
    slots:0-5460 (5461 slots) master
    1 additional replica(s)
 M: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
    slots:5461-10922 (5462 slots) master
    1 additional replica(s)
 S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
    slots: (0 slots) slave
    replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
 M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
    slots:10923-16383 (5461 slots) master
    1 additional replica(s)
 S: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
    slots: (0 slots) slave
    replicates 0a45c94458e8a5d751275e252c40c280ff78527e
 S: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
    slots: (0 slots) slave
    replicates ea5b2c985511241879f17d84d462888a24d20590
 [OK] All nodes agree about slots configuration.
 >>> Check for open slots...
 >>> Check slots coverage...
 [OK] All 16384 slots covered.
 ​
 ​
 From the output above:
    Master          Slave
    node1 (10.1)    node5 (10.5)
    node2 (10.2)    node6 (10.6)
    node3 (10.3)    node4 (10.4)
 ​
 ​
 Log in to Redis and check the cluster state
 [root@node1 ~]# redis-cli -h 192.168.10.1
 192.168.10.1:6379> cluster info
 cluster_state:ok
 cluster_slots_assigned:16384
 cluster_slots_ok:16384
 cluster_slots_pfail:0
 cluster_slots_fail:0
 cluster_known_nodes:6
 cluster_size:3
 cluster_current_epoch:6
 cluster_my_epoch:1
 cluster_stats_messages_ping_sent:887
 cluster_stats_messages_pong_sent:932
 cluster_stats_messages_sent:1819
 cluster_stats_messages_ping_received:927
 cluster_stats_messages_pong_received:887
 cluster_stats_messages_meet_received:5
 cluster_stats_messages_received:1819

Test whether the cluster's master and slave nodes fail over automatically

 node5 is node1's slave. Stop the Redis service on node1 and check whether node5 is promoted to master.
 [root@node1 ~]# systemctl stop redis
 [root@node1 ~]# ss -ntl
 State     Recv-Q Send-Q Local Address:Port                 Peer Address:Port              
 LISTEN     0     100         127.0.0.1:25                             *:*      
 LISTEN     0     128                 *:111                             *:*      
 LISTEN     0     128                 *:22                             *:*      
 LISTEN     0     100             [::1]:25                           [::]:*      
 LISTEN     0     128             [::]:111                         [::]:*      
 LISTEN     0     128             [::]:22                           [::]:*      
 ​
 ​
 [root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
 [ERR] Sorry, can't connect to node 192.168.10.1:6379
 ​
 ​
 [root@node1 ~]# redis-trib.rb check 192.168.10.2:6379
 >>> Performing Cluster Check (using node 192.168.10.2:6379)
 M: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
    slots:5461-10922 (5462 slots) master
    1 additional replica(s)
 S: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
    slots: (0 slots) slave
    replicates 0a45c94458e8a5d751275e252c40c280ff78527e
 M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
    slots:0-5460 (5461 slots) master
    0 additional replica(s)
 S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
    slots: (0 slots) slave
    replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
 M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
    slots:10923-16383 (5461 slots) master
    1 additional replica(s)
 [OK] All nodes agree about slots configuration.
 >>> Check for open slots...
 >>> Check slots coverage...
 [OK] All 16384 slots covered.
 ​
 ​
 [root@node1 ~]# redis-cli -h 192.168.10.5
 192.168.10.5:6379> auth 1
 OK
 192.168.10.5:6379> info replication
 # Replication
 role:master
 connected_slaves:0
 master_replid:10965c763fab20cf1f99bfdd15cd0a8d66b9fd83
 master_replid2:5fc2e3dc70d7e127cf67f6b1da6b6ce65b592ca3
 master_repl_offset:1694
 second_repl_offset:1695
 repl_backlog_active:1
 repl_backlog_size:1048576
 repl_backlog_first_byte_offset:1
 repl_backlog_histlen:1694
 192.168.10.5:6379> exit
 ​
 ​
 [root@node1 ~]# systemctl start redis
 [root@node1 ~]# redis-trib.rb check 192.168.10.2:6379
 >>> Performing Cluster Check (using node 192.168.10.2:6379)
 M: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
    slots:5461-10922 (5462 slots) master
    1 additional replica(s)
 S: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
    slots: (0 slots) slave
    replicates 0a45c94458e8a5d751275e252c40c280ff78527e
 M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
    slots:0-5460 (5461 slots) master
    1 additional replica(s)
 S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
    slots: (0 slots) slave
    replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
 S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
    slots: (0 slots) slave
    replicates 9519fb412b0199e2764f1d14f951194eb758e967
 M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
    slots:10923-16383 (5461 slots) master
    1 additional replica(s)
 [OK] All nodes agree about slots configuration.
 >>> Check for open slots...
 >>> Check slots coverage...
 [OK] All 16384 slots covered.
 ​
 ​
 [root@node1 ~]# redis-cli -h 192.168.10.1
 192.168.10.1:6379> auth 1
 OK
 192.168.10.1:6379> info replication
 # Replication
 role:slave
 master_host:192.168.10.5
 master_port:6379
 master_link_status:up
 master_last_io_seconds_ago:2
 master_sync_in_progress:0
 slave_repl_offset:1792
 slave_priority:100
 slave_read_only:1
 connected_slaves:0
 master_replid:10965c763fab20cf1f99bfdd15cd0a8d66b9fd83
 master_replid2:0000000000000000000000000000000000000000
 master_repl_offset:1792
 second_repl_offset:-1
 repl_backlog_active:1
 repl_backlog_size:1048576
 repl_backlog_first_byte_offset:1695
 repl_backlog_histlen:98
 192.168.10.1:6379> exit
 ​
 By manually stopping the Redis service on master node1, we saw node5 automatically promoted to master, confirmed with the management tool's check command. After restarting the Redis service on node1, node1 rejoined as a slave of node5. This verifies the cluster's automatic failover.

Test writing data through node2's Redis

 [root@node1 ~]# redis-cli -h 192.168.10.2
 192.168.10.2:6379> auth 1
 OK
 192.168.10.2:6379> set name yinx1n
 OK
 192.168.10.2:6379> get name
 "yinx1n"
 192.168.10.2:6379> set addr beijing
 (error) MOVED 12790 192.168.10.3:6379
 192.168.10.2:6379> info replication
 # Replication
 role:master
 connected_slaves:1
 slave0:ip=192.168.10.6,port=6379,state=online,offset=2802,lag=1
 master_replid:5d118a1fa4379ac4c4b49ade8dfc4361ac08038e
 master_replid2:0000000000000000000000000000000000000000
 master_repl_offset:2802
 second_repl_offset:-1
 repl_backlog_active:1
 repl_backlog_size:1048576
 repl_backlog_first_byte_offset:1
 repl_backlog_histlen:2802
 192.168.10.2:6379> exit
 ​
 In the operations above, the key name maps to a hash slot owned by node2, so it can be written there; addr maps to a slot owned by node3 and therefore cannot be written through node2.
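The MOVED reply is how cluster nodes redirect clients. redis-cli can follow these redirects itself when started with -c, and the standard CLUSTER KEYSLOT command reports which slot a key hashes to. A sketch of both (these need the live cluster, so they are illustrative only):

```shell
# -c puts redis-cli in cluster mode: it follows MOVED/ASK redirects,
# so this write succeeds even though addr's slot belongs to node3
redis-cli -c -h 192.168.10.2 -a 1 set addr beijing

# show which of the 16384 slots a key hashes to (CRC16(key) mod 16384)
redis-cli -c -h 192.168.10.2 -a 1 cluster keyslot addr
```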
 ​
 ​
 [root@node1 ~]# redis-cli -h 192.168.10.3
 192.168.10.3:6379> auth 1
 OK
 192.168.10.3:6379> set addr beijing
 OK
 192.168.10.3:6379> get addr
 "beijing"
 192.168.10.3:6379> exit
 ​
 ​
 [root@node1 ~]# redis-cli -h 192.168.10.6
 192.168.10.6:6379> auth 1
 OK
 192.168.10.6:6379> get name
 (error) MOVED 12790 192.168.10.2:6379
 192.168.10.6:6379> exit
 ​
 The data written on node2 cannot be read from node6 (its slave). Next, stop node2's Redis service manually and query node6 again.
 ​
 ​
 [root@node2 ~]# systemctl stop redis
 [root@node1 ~]# redis-cli -h 192.168.10.6
 192.168.10.6:6379> auth 1
 OK
 192.168.10.6:6379> info replication
 # Replication
 role:master
 connected_slaves:0
 master_replid:45f8d7daf292fb3cb4cdd2aa6cb96449e0c45b63
 master_replid2:5d118a1fa4379ac4c4b49ade8dfc4361ac08038e
 master_repl_offset:3208
 second_repl_offset:3209
 repl_backlog_active:1
 repl_backlog_size:1048576
 repl_backlog_first_byte_offset:1
 repl_backlog_histlen:3208
 192.168.10.6:6379> get name
 "yinx1n"
 192.168.10.6:6379> exit
 ​
 ​
 This shows that, by default, cluster slaves do not serve client reads; they act only as data backups. Next, restart node2.
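For completeness: a cluster replica can serve reads if the client first issues the standard READONLY command on its connection; without it the replica answers with a MOVED redirect, which is what we observed. A hedged sketch against this cluster (READONLY is per-connection, so both commands must run in one session; requires the live cluster):

```shell
# redis-cli executes commands fed on stdin within a single connection;
# READONLY marks that connection read-only so the replica serves reads
# for its master's slots instead of redirecting
redis-cli -h 192.168.10.6 -a 1 <<'EOF'
readonly
get name
EOF
```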
 ​
 ​
 [root@node2 ~]# systemctl start redis
 ​
 Check the cluster state after the restart
 [root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
 >>> Performing Cluster Check (using node 192.168.10.1:6379)
 S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
    slots: (0 slots) slave
    replicates 9519fb412b0199e2764f1d14f951194eb758e967
 M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
    slots:0-5460 (5461 slots) master
    1 additional replica(s)
 S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
    slots: (0 slots) slave
    replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
 M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
    slots:5461-10922 (5462 slots) master
    1 additional replica(s)
 M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
    slots:10923-16383 (5461 slots) master
    1 additional replica(s)
 S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
    slots: (0 slots) slave
    replicates 461520796edb41baea82eadf8a99da26a2d77d34
 [OK] All nodes agree about slots configuration.
 >>> Check for open slots...
 >>> Check slots coverage...
 [OK] All 16384 slots covered.
 ​
 ​
 Notice that a node whose Redis service was stopped rejoins as a slave once the service is restarted.

 

 

 

Scaling Out the Redis Cluster

When running a Redis cluster you sometimes need to add nodes to improve capacity and performance. Prepare two new servers configured identically to the previous six:

node7: 192.168.10.7
node8: 192.168.10.8

Start the Redis service on node7 and node8
node7:
[root@node7 ~]# systemctl start redis
[root@node7 ~]# ss -ntl
State       Recv-Q Send-Q     Local Address:Port                   Peer Address:Port              
LISTEN      0      100            127.0.0.1:25                     		 *:*   
LISTEN      0      511         192.168.10.7:16379                        *:*   
LISTEN      0      511         192.168.10.7:6379                         *:*   
LISTEN      0      128                    *:111                          *:*   
LISTEN      0      128                    *:22                           *:*   
LISTEN      0      100                [::1]:25                        [::]:*   
LISTEN      0      128                 [::]:111                       [::]:*   
LISTEN      0      128                 [::]:22                        [::]:*   


node8:
[root@node8 ~]# systemctl start redis
[root@node8 ~]# ss -ntl
State      Recv-Q Send-Q       Local Address:Port                  Peer Address:Port              
LISTEN     0      100              127.0.0.1:25                          *:* 
LISTEN     0      511           192.168.10.8:16379                       *:* 
LISTEN     0      511           192.168.10.8:6379                        *:* 
LISTEN     0      128                      *:111                         *:* 
LISTEN     0      128                      *:22                          *:* 
LISTEN     0      100                  [::1]:25                       [::]:* 
LISTEN     0      128                   [::]:111                      [::]:* 
LISTEN     0      128                   [::]:22                       [::]:*



Check the cluster state
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


Add node7: the add-node command takes node7's address followed by the address of a node that is already in the cluster
[root@node1 ~]# redis-trib.rb add-node 192.168.10.7:6379 192.168.10.3:6379
>>> Adding node 192.168.10.7:6379 to cluster 192.168.10.3:6379
>>> Performing Cluster Check (using node 192.168.10.3:6379)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
[OK] All nodes agree about slots configuration.


Check the cluster state again; node7 has been added as a master, though it holds no slots yet
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 1 keys | 5462 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 1 keys | 5461 slots | 1 slaves.
192.168.10.7:6379 (39532b54...) -> 0 keys | 0 slots | 0 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.




Add node8 to the cluster
[root@node1 ~]# redis-trib.rb add-node 192.168.10.8:6379 192.168.10.4:6379
>>> Adding node 192.168.10.8:6379 to cluster 192.168.10.4:6379
>>> Performing Cluster Check (using node 192.168.10.4:6379)
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.10.8:6379 to make it join the cluster.
[OK] New node added correctly.


Check the cluster state; node8 is also a master with no slots
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379
   slots: (0 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


Log in to redis on node8 and set its master to node7
[root@node1 ~]# redis-cli -h 192.168.10.8
192.168.10.8:6379> auth 1
OK

192.168.10.8:6379> info replication
# Replication
role:master
connected_slaves:0
master_replid:09db3148943fdf8cacb8de0a73ad021833c99393
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

192.168.10.8:6379> cluster nodes
39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379@16379 master - 0 1573735372000 9 connected
bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379@16379 master - 0 1573735374000 3 connected 10923-16383
7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379@16379 myself,master - 0 1573735372000 0 connected
dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379@16379 slave bbe35416650b74b326bc6657d2ff18cc6edc14ce 0 1573735375000 3 connected
461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379@16379 master - 0 1573735373695 8 connected 5461-10922
9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379@16379 master - 0 1573735373000 7 connected 0-5460
ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379@16379 slave 9519fb412b0199e2764f1d14f951194eb758e967 0 1573735375708 7 connected
0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379@16379 slave 461520796edb41baea82eadf8a99da26a2d77d34 0 1573735374702 8 connected

192.168.10.8:6379> cluster replicate 39532b5439b378bb36b4b281e545b8b6604917af # make node8 a slave of node7; the syntax is cluster replicate <MASTER-ID>
OK

192.168.10.8:6379> cluster nodes
39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379@16379 master - 0 1573735411924 9 connected
bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379@16379 master - 0 1573735409000 3 connected 10923-16383
7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379@16379 myself,slave 39532b5439b378bb36b4b281e545b8b6604917af 0 1573735405000 0 connected
dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379@16379 slave bbe35416650b74b326bc6657d2ff18cc6edc14ce 0 1573735410000 3 connected
461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379@16379 master - 0 1573735407901 8 connected 5461-10922
9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379@16379 master - 0 1573735410000 7 connected 0-5460
ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379@16379 slave 9519fb412b0199e2764f1d14f951194eb758e967 0 1573735410917 7 connected
0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379@16379 slave 461520796edb41baea82eadf8a99da26a2d77d34 0 1573735409911 8 connected

192.168.10.8:6379> info replication
# Replication
role:slave
master_host:192.168.10.7
master_port:6379
master_link_status:up
master_last_io_seconds_ago:10
master_sync_in_progress:0
slave_repl_offset:42
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:927b8e47cb7d76cdf6fcaf69d92ce7eb2fa99229
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:42
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:42

192.168.10.8:6379> exit
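As the `info replication` output above shows, each INFO section is plain `key:value` text with `#` section headers, so scripted health checks only need a tiny parser. A minimal sketch (illustration only, not part of the deployment; the sample string below is an assumed excerpt of the output above):

```python
def parse_info(text: str) -> dict:
    """Parse redis INFO output: 'key:value' lines, '#' section headers are skipped."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and section headers like '# Replication'
        key, _, value = line.partition(':')
        result[key] = value
    return result

# Excerpt mirroring the output above (assumed sample, not a live query).
sample = """# Replication
role:slave
master_host:192.168.10.7
master_link_status:up"""

info = parse_info(sample)
assert info["role"] == "slave"
assert info["master_host"] == "192.168.10.7"
```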

[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 1 keys | 5462 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 1 keys | 5461 slots | 1 slaves.
192.168.10.7:6379 (39532b54...) -> 0 keys | 0 slots | 1 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.

Allocating slots

node7 has now joined the cluster as a master, but it holds no slots, so next we redistribute the slots. Note: before using the cluster's reshard command, clear the data on all nodes and reallocate afterwards; therefore back up the redis data in advance.

[root@node1 ~]# redis-trib.rb reshard 192.168.10.1:6379

When prompted for how many slots to move, enter 4096

When asked for the receiving node ID, enter node7's ID

When asked which source nodes to take slots from, enter: all # alternatively, enter specific master IDs one by one and then type done to start the slot allocation
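The 4096 entered at the reshard prompt is simply the even share of the fixed 16384 hash slots once the cluster has 4 masters (plain arithmetic, shown only to make the figure explicit):

```python
# Every Redis Cluster has exactly 16384 hash slots in total.
TOTAL_SLOTS = 16384

# With node7 added there are 4 masters, so an even rebalance
# assigns 16384 // 4 slots to each -- hence 4096 at the prompt.
masters = 4
per_master = TOTAL_SLOTS // masters
assert per_master == 4096
```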

[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 0 keys | 3817 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 0 keys | 4067 slots | 1 slaves.
192.168.10.7:6379 (39532b54...) -> 0 keys | 4432 slots | 1 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 4068 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
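Resharding moves slots between masters, and every key deterministically belongs to one of the 16384 slots via HASH_SLOT = CRC16(key) mod 16384 (CRC-16/XMODEM, with an optional {hash tag} that pins related keys to one slot). A minimal Python sketch of that mapping, for illustration only (it is not part of the deployment steps):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0x0000), the checksum
    Redis Cluster uses for key-to-slot mapping."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """HASH_SLOT = CRC16(key) mod 16384; if the key contains a
    non-empty {hash tag}, only the tag is hashed."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag always land in the same slot,
# so they can be used together in multi-key operations.
assert keyslot("{user1000}.following") == keyslot("{user1000}.followers")
assert 0 <= keyslot("name") < 16384
```

Whichever master currently owns the computed slot serves the key, which is why a client hitting the wrong node gets a MOVED redirection.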

Take a designated server (for example node7) offline from the Redis cluster

First move all of node7's slots to node3 (the receiving node must be a master)
[root@node1 ~]# redis-trib.rb reshard 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:7106-10922 (3817 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:12317-16383 (4067 slots) master
   1 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots:0-1392,5461-7105,10923-12316 (4432 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:1393-5460 (4068 slots) master
   1 additional replica(s)
S: 7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379
   slots: (0 slots) slave
   replicates 39532b5439b378bb36b4b281e545b8b6604917af
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

How many slots do you want to move (from 1 to 16384)? 4432 # enter the number of slots node7 holds
What is the receiving node ID? bbe35416650b74b326bc6657d2ff18cc6edc14ce # enter node3's ID here
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:39532b5439b378bb36b4b281e545b8b6604917af # enter node7's ID here
Source node #2:done


[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 0 keys | 3817 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 0 keys | 8499 slots | 2 slaves.
192.168.10.7:6379 (39532b54...) -> 0 keys | 0 slots | 0 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 4068 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
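A quick sanity check on the slot accounting (plain arithmetic, illustration only): node3 absorbed all 4432 of node7's slots on top of its previous 4067, and every one of the 16384 slots remains assigned:

```python
# Slot counts per master before the drain, as reported by redis-trib.rb info.
node6, node3_old, node7, node5 = 3817, 4067, 4432, 4068

# node3 receives everything node7 held...
node3_new = node3_old + node7
assert node3_new == 8499  # matches the 8499 slots reported for node3

# ...and total coverage is unchanged: node7 now holds 0 slots,
# yet all 16384 slots are still assigned across the cluster.
assert node6 + node3_new + 0 + node5 == 16384
```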


Delete the node7 node using the cluster management tool
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:7106-10922 (3817 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:0-1392,5461-7105,10923-16383 (8499 slots) master
   2 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:1393-5460 (4068 slots) master
   1 additional replica(s)
S: 7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


[root@node1 ~]# redis-trib.rb del-node 192.168.10.1:6379 39532b5439b378bb36b4b281e545b8b6604917af # followed by node7's ID
>>> Removing node 39532b5439b378bb36b4b281e545b8b6604917af from cluster 192.168.10.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.


[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 0 keys | 3817 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 0 keys | 8499 slots | 2 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 4068 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.


Delete node8, which was node7's slave (after the reshard it replicates node3)
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:7106-10922 (3817 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:0-1392,5461-7105,10923-16383 (8499 slots) master
   2 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:1393-5460 (4068 slots) master
   1 additional replica(s)
S: 7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


[root@node1 ~]# redis-trib.rb del-node 192.168.10.1:6379 7e462b96b548ae2dc1512c71765c58493362671f # followed by node8's ID
>>> Removing node 7e462b96b548ae2dc1512c71765c58493362671f from cluster 192.168.10.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.


[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:7106-10922 (3817 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:0-1392,5461-7105,10923-16383 (8499 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:1393-5460 (4068 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

内容:
Redis 编译安装
Redis Cluster部署
Redis 集群扩容
Redis 指定机器下线


```环境:主机名  IPnode1192.168.10.1node2192.168.10.2node3192.168.10.3node4192.168.10.4node5192.168.10.5node6192.168.10.6```


**编译安装 redis-4.0.14.tar.gz ,确认时间同步(node1~node6)**
```[root@node1 ~]# tar xf redis-4.0.14.tar.gz[root@node1 ~]# cd redis-4.0.14/[root@node1 redis-4.0.14]# make PREFIX=/apps/redis install

[root@node1 redis-4.0.14]# mkdir -p /apps/redis/{etc,log,data,run}[root@node1 redis-4.0.14]# cp redis.conf /apps/redis/etc/[root@node1 redis-4.0.14]# sed -i 's@logfile ""@logfile "/apps/redis/log/redis.log"@' /apps/redis/etc/redis.conf```
**创建 redis 用户(node1~node6)**
```[root@node1 ~]# useradd -r redis -s /sbin/nologin```
**编写服务管理配置文件(node1~node6)**
```[root@node1 ~]# vim /usr/lib/systemd/system/redis.service[Unit]Description=Redis persistent key-value databaseAfter=network.targetAfter=network-online.targetWants=network-online.target[Service]ExecStart=/apps/redis/bin/redis-server /apps/redis/etc/redis.conf --supervised systemdExecReload=/bin/kill -s HUP $MAINPID ExecStop=/bin/kill -s QUIT $MAINPIDType=notifyUser=redisGroup=redisRuntimeDirectory=redisRuntimeDirectoryMode=0755[Install]WantedBy=multi-user.target```
**启动服务(node1~node6)**
```[root@node1 ~]# chown -R redis.redis /apps/redis[root@node1 ~]# systemctl daemon-reload[root@node1 ~]# systemctl start redis[root@node1 ~]# ss -ntlState       Recv-Q Send-Q Local Address:Port               Peer Address:Port              LISTEN      0      100    127.0.0.1:25                         *:*                 LISTEN      0      128    127.0.0.1:6379                       *:*                 LISTEN      0      128            *:111                        *:*                 LISTEN      0      128            *:22                         *:*                 LISTEN      0      100        [::1]:25                      [::]:*                 LISTEN      0      128         [::]:111                     [::]:*                 LISTEN      0      128         [::]:22                      [::]:*                 ```
**查看日志并解决启动警告(node1~node6)**
```[root@node1 ~]# cat /apps/redis/log/redis.log 41347:C 13 Nov 17:08:52.446 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo41347:C 13 Nov 17:08:52.447 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=41347, just started41347:C 13 Nov 17:08:52.447 # Configuration loaded41347:C 13 Nov 17:08:52.447 * supervised by systemd, will signal readiness41347:M 13 Nov 17:08:52.455 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.41347:M 13 Nov 17:08:52.455 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.41347:M 13 Nov 17:08:52.455 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.                _._                                                             _.-``__ ''-._                                                   _.-``    `.  `_.  ''-._           Redis 4.0.14 (00000000/0) 64 bit  .-`` .-```.  ```\/    _.,_ ''-._                                    (    '      ,       .-`  | `,    )     Running in standalone mode |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379 |    `-._   `._    /     _.-'    |     PID: 41347  `-._    `-._  `-./  _.-'    _.-'                                    |`-._`-._    `-.__.-'    _.-'_.-'|                                   |    `-._`-._        _.-'_.-'    |           http://redis.io          `-._    `-._`-.__.-'_.-'    _.-'                                    |`-._`-._    `-.__.-'    _.-'_.-'|                                   |    `-._`-._        _.-'_.-'    |                                    `-._    `-._`-.__.-'_.-'    _.-'                                         `-._    `-.__.-'    _.-'                                                 `-._        _.-'                                                         `-.__.-'                                               
41347:M 13 Nov 17:08:52.462 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.41347:M 13 Nov 17:08:52.463 # Server initialized41347:M 13 Nov 17:08:52.463 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.41347:M 13 Nov 17:08:52.463 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.41347:M 13 Nov 17:08:52.464 * Ready to accept connections```
```第一个警告提示: redis 设置了 TCP backlog 的值为 511, 但是 linux 内核中当前值为 128,达不到 redis 要求 
第二个警告提示: Linux 中内存申请方式需要重 0 修改为 1 ,确保 redis 可以申请更多的物理内容
[root@node1 ~]# vim /etc/sysctl.conf [root@node1 ~]# sysctl -pnet.core.somaxconn = 512vm.overcommit_memory = 1```
```第三个警告提示: 关闭 Linux 的 transparent hugepage 功能,根据警告提示操作即可
[root@node1 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled[root@node1 ~]# echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local[root@node1 ~]# chmod +x /etc/rc.local```
**处理完警告后,清空 redis 日志并重启 redis,警告信息消失(node1~node6)**
```[root@node1 ~]# rm -rf /apps/redis/log/redis.log[root@node1 ~]# systemctl restart redis[root@node1 ~]# cat /apps/redis/log/redis.log41545:C 13 Nov 19:02:23.537 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo41545:C 13 Nov 19:02:23.537 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=41545, just started41545:C 13 Nov 19:02:23.537 # Configuration loaded41545:C 13 Nov 19:02:23.537 * supervised by systemd, will signal readiness41545:M 13 Nov 19:02:23.541 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.41545:M 13 Nov 19:02:23.541 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.41545:M 13 Nov 19:02:23.541 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.                _._                                                             _.-``__ ''-._                                                   _.-``    `.  `_.  ''-._           Redis 4.0.14 (00000000/0) 64 bit  .-`` .-```.  ```\/    _.,_ ''-._                                    (    '      ,       .-`  | `,    )     Running in standalone mode |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379 |    `-._   `._    /     _.-'    |     PID: 41545  `-._    `-._  `-./  _.-'    _.-'                                    |`-._`-._    `-.__.-'    _.-'_.-'|                                   |    `-._`-._        _.-'_.-'    |           http://redis.io          `-._    `-._`-.__.-'_.-'    _.-'                                    |`-._`-._    `-.__.-'    _.-'_.-'|                                   |    `-._`-._        _.-'_.-'    |                                    `-._    `-._`-.__.-'_.-'    _.-'                                         `-._    `-.__.-'    _.-'                                                 `-._        _.-'                                                         `-.__.-'                                               
41545:M 13 Nov 19:02:23.550 # Server initialized41545:M 13 Nov 19:02:23.551 * Ready to accept connections
```
**配置 redis 的 PATH 环境变量,使用 redis-cli 进行连接测试(node1~node6)**
```[root@node1 ~]# vim /etc/profile.d/redis.sh#!/bin/bashexport PATH=/apps/redis/bin:$PATH
[root@node1 ~]# source /etc/profile.d/redis.sh
```
**编辑 redis 配置文件,过滤所有注释选项后,如下(node1~node6)**
```[root@node1 ~]# vim /apps/redis/etc/redis.conf[root@node1 ~]# cat /apps/redis/etc/redis.conf | grep -Ev "^#|^$"bind 192.168.10.1  # 此项设置为每台服务器本机的 ip 地址,其余都相同protected-mode yesport 6379tcp-backlog 511timeout 0tcp-keepalive 300daemonize yessupervised nopidfile /apps/redis/run/redis_6379.pidloglevel noticelogfile "/apps/redis/log/redis.log"databases 16always-show-logo yessave 900 1save 300 10save 60 10000stop-writes-on-bgsave-error yesrdbcompression yesrdbchecksum yesdbfilename dump.rdbdir /apps/redis/data/masterauth 1slave-serve-stale-data yesslave-read-only yesrepl-diskless-sync norepl-diskless-sync-delay 5repl-disable-tcp-nodelay noslave-priority 100requirepass 1lazyfree-lazy-eviction nolazyfree-lazy-expire nolazyfree-lazy-server-del noslave-lazy-flush noappendonly noappendfilename "appendonly.aof"appendfsync everysecno-appendfsync-on-rewrite noauto-aof-rewrite-percentage 100auto-aof-rewrite-min-size 64mbaof-load-truncated yesaof-use-rdb-preamble nolua-time-limit 5000cluster-enabled yescluster-config-file nodes-6379.confslowlog-log-slower-than 10000slowlog-max-len 128latency-monitor-threshold 0notify-keyspace-events ""hash-max-ziplist-entries 512hash-max-ziplist-value 64list-max-ziplist-size -2list-compress-depth 0set-max-intset-entries 512zset-max-ziplist-entries 128zset-max-ziplist-value 64hll-sparse-max-bytes 3000activerehashing yesclient-output-buffer-limit normal 0 0 0client-output-buffer-limit slave 256mb 64mb 60client-output-buffer-limit pubsub 32mb 8mb 60hz 10aof-rewrite-incremental-fsync yes
```
**重启 redis 服务,并查看端口(node1~node6)**
```[root@node1 ~]# systemctl restart redis[root@node1 ~]# ss -ntlState      Recv-Q Send-Q   Local Address:Port                  Peer Address:Port              LISTEN     0      100          127.0.0.1:25                               *:*     LISTEN     0      511       192.168.10.1:16379                            *:*     LISTEN     0      511       192.168.10.1:6379                             *:*     LISTEN     0      128                  *:111                              *:*     LISTEN     0      128                  *:22                               *:*     LISTEN     0      100              [::1]:25                            [::]:*     LISTEN     0      128               [::]:111                           [::]:*     LISTEN     0      128               [::]:22                            [::]:*                   
```
***以上是所有服务器的基本配置,接下来配置 Redis Cluster 部分***
------






**在 node1 服务器上编译安装 redis 集群管理工具**
```[root@node1 ~]# yum -y install gcc make zlib-devel readline-devel gdbm-devel
[root@node1 ~]# tar xf ruby-2.5.5.tar.gz [root@node1 ~]# cd ruby-2.5.5/[root@node1 ruby-2.5.5]# ./configure[root@node1 ruby-2.5.5]# make -j 2[root@node1 ruby-2.5.5]# make install
[root@node1 ruby-2.5.5]# gem install redisERROR:  While executing gem ... (Gem::Exception)    Unable to require openssl, install OpenSSL and rebuild Ruby (preferred) or use non-HTTPS sources
解决办法:[root@node1 ruby-2.5.5]# yum -y install openssl-devel[root@node1 ruby-2.5.5]# cd ext/openssl/[root@node1 openssl]# pwd/root/ruby-2.5.5/ext/openssl[root@node1 openssl]# ruby ./extconf.rb[root@node1 openssl]# makecompiling openssl_missing.cmake: *** No rule to make target `/include/ruby.h', needed by `ossl.o'.  Stop. 

在Makefile顶部中的增加top_srcdir = ../..[root@node1 openssl]# vim Makefile [root@node1 openssl]# head -1 Makefile top_srcdir = ../..

再次执行make[root@node1 openssl]# make[root@node1 openssl]# make install

[root@node1 openssl]# cd /root/ruby-2.5.5/[root@node1 ruby-2.5.5]# gem install redisFetching: redis-4.1.3.gem (100%)Successfully installed redis-4.1.3Parsing documentation for redis-4.1.3Installing ri documentation for redis-4.1.3Done installing documentation for redis after 1 seconds1 gem installed

修改配置文件,将密码修改为 redis 的密码[root@node1 ruby-2.5.5]# vim /usr/local/lib/ruby/gems/2.5.0/gems/redis-4.1.3/lib/redis/client.rb[root@node1 ruby-2.5.5]# cat /usr/local/lib/ruby/gems/2.5.0/gems/redis-4.1.3/lib/redis/client.rb | grep ":password "      :password => 1,
```
**使用 redis 集群管理工具启动 redis 集群, create 代表创建集群, replicas 代表每个redis 主节点有几个从节点,最后添加所有的 redis 服务 地址, 提示输入时,输入 yes 即可**
```[root@node1 ~]# cp ./redis-4.0.14/src/redis-trib.rb /usr/sbin/[root@node1 ~]# redis-trib.rb Usage: redis-trib <command> <options> <arguments ...>
  create          host1:port1 ... hostN:portN                  --replicas <arg>  check           host:port  info            host:port  fix             host:port                  --timeout <arg>  reshard         host:port                  --from <arg>                  --to <arg>                  --slots <arg>                  --yes                  --timeout <arg>                  --pipeline <arg>  rebalance       host:port                  --weight <arg>                  --auto-weights                  --use-empty-masters                  --timeout <arg>                  --simulate                  --pipeline <arg>                  --threshold <arg>  add-node        new_host:new_port existing_host:existing_port                  --slave                  --master-id <arg>  del-node        host:port node_id  set-timeout     host:port milliseconds  call            host:port command arg arg .. arg  import          host:port                  --from <arg>                  --copy                  --replace  help            (show this help)


[root@node1 ~]# redis-trib.rb create --replicas 1 192.168.10.1:6379 192.168.10.2:6379 192.168.10.3:6379 192.168.10.4:6379 192.168.10.5:6379 192.168.10.6:6379>>> Creating cluster>>> Performing hash slots allocation on 6 nodes...Using 3 masters:192.168.10.1:6379192.168.10.2:6379192.168.10.3:6379Adding replica 192.168.10.5:6379 to 192.168.10.1:6379Adding replica 192.168.10.6:6379 to 192.168.10.2:6379Adding replica 192.168.10.4:6379 to 192.168.10.3:6379M: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379   slots:0-5460 (5461 slots) masterM: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379   slots:5461-10922 (5462 slots) masterM: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379   slots:10923-16383 (5461 slots) masterS: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ceS: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379   replicates ea5b2c985511241879f17d84d462888a24d20590S: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379   replicates 0a45c94458e8a5d751275e252c40c280ff78527eCan I set the above configuration? 
(type 'yes' to accept): yes>>> Nodes configuration updated>>> Assign a different config epoch to each node>>> Sending CLUSTER MEET messages to join the clusterWaiting for the cluster to join...>>> Performing Cluster Check (using node 192.168.10.1:6379)M: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379   slots:0-5460 (5461 slots) master   1 additional replica(s)M: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379   slots:5461-10922 (5462 slots) master   1 additional replica(s)S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379   slots: (0 slots) slave   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ceM: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379   slots:10923-16383 (5461 slots) master   1 additional replica(s)S: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379   slots: (0 slots) slave   replicates 0a45c94458e8a5d751275e252c40c280ff78527eS: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379   slots: (0 slots) slave   replicates ea5b2c985511241879f17d84d462888a24d20590[OK] All nodes agree about slots configuration.>>> Check for open slots...>>> Check slots coverage...[OK] All 16384 slots covered.

通过上面的信息可以知道    主   从node1-10.1node5-10.5node2-10.2node6-10.6node3-10.3node4-10.4

登录 reids 查看集群状态[root@node1 ~]# redis-cli -h 192.168.10.1192.168.10.1:6379> cluster infocluster_state:okcluster_slots_assigned:16384cluster_slots_ok:16384cluster_slots_pfail:0cluster_slots_fail:0cluster_known_nodes:6cluster_size:3cluster_current_epoch:6cluster_my_epoch:1cluster_stats_messages_ping_sent:887cluster_stats_messages_pong_sent:932cluster_stats_messages_sent:1819cluster_stats_messages_ping_received:927cluster_stats_messages_pong_received:887cluster_stats_messages_meet_received:5cluster_stats_messages_received:1819
```
**测试 redis 集群中主从节点是否具备自动切换功能**
```node5 是 node1 的 slave,停止 node1 中的 redis 服务,查看 node6 是否可以提升为主节点[root@node1 ~]# systemctl stop redis[root@node1 ~]# ss -ntlState      Recv-Q Send-Q  Local Address:Port                 Peer Address:Port              LISTEN     0      100         127.0.0.1:25                              *:*       LISTEN     0      128                 *:111                             *:*       LISTEN     0      128                 *:22                              *:*       LISTEN     0      100             [::1]:25                           [::]:*       LISTEN     0      128              [::]:111                          [::]:*       LISTEN     0      128              [::]:22                           [::]:*       

[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379[ERR] Sorry, can't connect to node 192.168.10.1:6379

[root@node1 ~]# redis-trib.rb check 192.168.10.2:6379>>> Performing Cluster Check (using node 192.168.10.2:6379)M: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379   slots:5461-10922 (5462 slots) master   1 additional replica(s)S: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379   slots: (0 slots) slave   replicates 0a45c94458e8a5d751275e252c40c280ff78527eM: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379   slots:0-5460 (5461 slots) master   0 additional replica(s)S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379   slots: (0 slots) slave   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ceM: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379   slots:10923-16383 (5461 slots) master   1 additional replica(s)[OK] All nodes agree about slots configuration.>>> Check for open slots...>>> Check slots coverage...[OK] All 16384 slots covered.

[root@node1 ~]# redis-cli -h 192.168.10.5192.168.10.5:6379> auth 1OK192.168.10.5:6379> info replication# Replicationrole:masterconnected_slaves:0master_replid:10965c763fab20cf1f99bfdd15cd0a8d66b9fd83master_replid2:5fc2e3dc70d7e127cf67f6b1da6b6ce65b592ca3master_repl_offset:1694second_repl_offset:1695repl_backlog_active:1repl_backlog_size:1048576repl_backlog_first_byte_offset:1repl_backlog_histlen:1694192.168.10.5:6379> exit

[root@node1 ~]# systemctl start redis[root@node1 ~]# redis-trib.rb check 192.168.10.2:6379>>> Performing Cluster Check (using node 192.168.10.2:6379)M: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379   slots:5461-10922 (5462 slots) master   1 additional replica(s)S: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379   slots: (0 slots) slave   replicates 0a45c94458e8a5d751275e252c40c280ff78527eM: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379   slots:0-5460 (5461 slots) master   1 additional replica(s)S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379   slots: (0 slots) slave   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ceS: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379   slots: (0 slots) slave   replicates 9519fb412b0199e2764f1d14f951194eb758e967M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379   slots:10923-16383 (5461 slots) master   1 additional replica(s)[OK] All nodes agree about slots configuration.>>> Check for open slots...>>> Check slots coverage...[OK] All 16384 slots covered.

[root@node1 ~]# redis-cli -h 192.168.10.1192.168.10.1:6379> auth 1OK192.168.10.1:6379> info replication# Replicationrole:slavemaster_host:192.168.10.5master_port:6379master_link_status:upmaster_last_io_seconds_ago:2master_sync_in_progress:0slave_repl_offset:1792slave_priority:100slave_read_only:1connected_slaves:0master_replid:10965c763fab20cf1f99bfdd15cd0a8d66b9fd83master_replid2:0000000000000000000000000000000000000000master_repl_offset:1792second_repl_offset:-1repl_backlog_active:1repl_backlog_size:1048576repl_backlog_first_byte_offset:1695repl_backlog_histlen:98192.168.10.1:6379> exit
上面我们通过手动关闭主节点 node1 的 redis 服务,发现 node5 自动提升为主节点,使用集群管理工具的 check 命令查看当前集群状态,之后我们有重新启动 node1 的 redis 服务,发现 node1 成为 node5 的从几点。验证 cluster 集群的自动切换成功
```
**测试在 node2 的 redis 中进行数据写入操作**
```
[root@node1 ~]# redis-cli -h 192.168.10.2
192.168.10.2:6379> auth 1
OK
192.168.10.2:6379> set name yinx1n
OK
192.168.10.2:6379> get name
"yinx1n"
192.168.10.2:6379> set addr beijing
(error) MOVED 12790 192.168.10.3:6379
192.168.10.2:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.10.6,port=6379,state=online,offset=2802,lag=1
master_replid:5d118a1fa4379ac4c4b49ade8dfc4361ac08038e
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:2802
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:2802
192.168.10.2:6379> exit
```

In the session above, the key `name` hashed to a slot owned by node2, so it could be written there; the key `addr` hashed to a slot owned by node3, so writing it on node2 failed with a MOVED redirect.

```
[root@node1 ~]# redis-cli -h 192.168.10.3
192.168.10.3:6379> auth 1
OK
192.168.10.3:6379> set addr beijing
OK
192.168.10.3:6379> get addr
"beijing"
192.168.10.3:6379> exit
```

```
[root@node1 ~]# redis-cli -h 192.168.10.6
192.168.10.6:6379> auth 1
OK
192.168.10.6:6379> get name
(error) MOVED 12790 192.168.10.2:6379
192.168.10.6:6379> exit
```

Data written on node2 cannot be read from node6 (node2's slave); the read is redirected with a MOVED error. Next, manually stop the redis service on node2 and query node6 again:

```
[root@node2 ~]# systemctl stop redis

[root@node1 ~]# redis-cli -h 192.168.10.6
192.168.10.6:6379> auth 1
OK
192.168.10.6:6379> info replication
# Replication
role:master
connected_slaves:0
master_replid:45f8d7daf292fb3cb4cdd2aa6cb96449e0c45b63
master_replid2:5d118a1fa4379ac4c4b49ade8dfc4361ac08038e
master_repl_offset:3208
second_repl_offset:3209
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:3208
192.168.10.6:6379> get name
"yinx1n"
192.168.10.6:6379> exit
```

This verifies that the cluster's slave nodes do not serve client reads and act only as data backups: node6 could return `name` only after being promoted to master. Now restart node2:

```
[root@node2 ~]# systemctl start redis
```

Check the cluster state after the restart:

```
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```

Note that a node whose redis service is stopped and later restarted rejoins the cluster as a slave.
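The MOVED errors above come from Redis Cluster's key-to-slot mapping: each key is hashed with CRC16 (the XModem variant: polynomial 0x1021, initial value 0) and taken modulo 16384, and only the master that owns the resulting slot accepts the command. A minimal Python sketch of that mapping, including the `{hash tag}` rule where only the text between the first `{` and the following `}` is hashed (the helper names are mine):

```python
def crc16(data):
    """CRC16/XMODEM (poly 0x1021, init 0x0000), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key):
    """Map a key to one of the 16384 cluster slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(hash_slot("addr"))  # 12790 -- matches the MOVED error above
# Keys sharing a hash tag land in the same slot, so multi-key operations
# on them stay on one node:
assert hash_slot("{user:1}.name") == hash_slot("{user:1}.addr")
```

`hash_slot("addr")` reproduces the slot 12790 reported in the MOVED error; slots 10923-16383 belong to node3, which is why the write had to go there.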






**Redis Cluster Expansion**
When running a redis cluster, you sometimes need to add redis nodes to improve capacity and performance. Prepare two new servers, configured the same way as the previous six:
node7 - 192.168.10.7
node8 - 192.168.10.8
Start the redis service on node7 and node8:

```
node7:
[root@node7 ~]# systemctl start redis
[root@node7 ~]# ss -ntl
State      Recv-Q Send-Q Local Address:Port     Peer Address:Port
LISTEN     0      100        127.0.0.1:25                  *:*
LISTEN     0      511     192.168.10.7:16379               *:*
LISTEN     0      511     192.168.10.7:6379                *:*
LISTEN     0      128                *:111                 *:*
LISTEN     0      128                *:22                  *:*
LISTEN     0      100            [::1]:25               [::]:*
LISTEN     0      128             [::]:111              [::]:*
LISTEN     0      128             [::]:22               [::]:*

node8:
[root@node8 ~]# systemctl start redis
[root@node8 ~]# ss -ntl
State      Recv-Q Send-Q Local Address:Port     Peer Address:Port
LISTEN     0      100        127.0.0.1:25                  *:*
LISTEN     0      511     192.168.10.8:16379               *:*
LISTEN     0      511     192.168.10.8:6379                *:*
LISTEN     0      128                *:111                 *:*
LISTEN     0      128                *:22                  *:*
LISTEN     0      100            [::1]:25               [::]:*
LISTEN     0      128             [::]:111              [::]:*
LISTEN     0      128             [::]:22               [::]:*
```

Check the current cluster state:

```
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```

Add node7 with the add-node command, giving node7's address followed by the address of any existing master in the cluster:

```
[root@node1 ~]# redis-trib.rb add-node 192.168.10.7:6379 192.168.10.3:6379
>>> Adding node 192.168.10.7:6379 to cluster 192.168.10.3:6379
>>> Performing Cluster Check (using node 192.168.10.3:6379)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
[OK] All nodes agree about slots configuration.
```

Check again; node7 has been added as a master (with no slots yet):

```
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 1 keys | 5462 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 1 keys | 5461 slots | 1 slaves.
192.168.10.7:6379 (39532b54...) -> 0 keys | 0 slots | 0 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
```

Add node8 to the cluster:

```
[root@node1 ~]# redis-trib.rb add-node 192.168.10.8:6379 192.168.10.4:6379
>>> Adding node 192.168.10.8:6379 to cluster 192.168.10.4:6379
>>> Performing Cluster Check (using node 192.168.10.4:6379)
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.10.8:6379 to make it join the cluster.
[OK] New node added correctly.
```

Check the cluster state; node8 is also a master:

```
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379
   slots: (0 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```

Log in to node8's redis and make node7 its master:

```
[root@node1 ~]# redis-cli -h 192.168.10.8
192.168.10.8:6379> auth 1
OK
192.168.10.8:6379> info replication
# Replication
role:master
connected_slaves:0
master_replid:09db3148943fdf8cacb8de0a73ad021833c99393
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
192.168.10.8:6379> cluster nodes
39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379@16379 master - 0 1573735372000 9 connected
bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379@16379 master - 0 1573735374000 3 connected 10923-16383
7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379@16379 myself,master - 0 1573735372000 0 connected
dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379@16379 slave bbe35416650b74b326bc6657d2ff18cc6edc14ce 0 1573735375000 3 connected
461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379@16379 master - 0 1573735373695 8 connected 5461-10922
9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379@16379 master - 0 1573735373000 7 connected 0-5460
ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379@16379 slave 9519fb412b0199e2764f1d14f951194eb758e967 0 1573735375708 7 connected
0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379@16379 slave 461520796edb41baea82eadf8a99da26a2d77d34 0 1573735374702 8 connected
192.168.10.8:6379> cluster replicate 39532b5439b378bb36b4b281e545b8b6604917af   # make this node a slave: cluster replicate MASTERID
OK
192.168.10.8:6379> cluster nodes
39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379@16379 master - 0 1573735411924 9 connected
bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379@16379 master - 0 1573735409000 3 connected 10923-16383
7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379@16379 myself,slave 39532b5439b378bb36b4b281e545b8b6604917af 0 1573735405000 0 connected
dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379@16379 slave bbe35416650b74b326bc6657d2ff18cc6edc14ce 0 1573735410000 3 connected
461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379@16379 master - 0 1573735407901 8 connected 5461-10922
9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379@16379 master - 0 1573735410000 7 connected 0-5460
ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379@16379 slave 9519fb412b0199e2764f1d14f951194eb758e967 0 1573735410917 7 connected
0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379@16379 slave 461520796edb41baea82eadf8a99da26a2d77d34 0 1573735409911 8 connected
192.168.10.8:6379> info replication
# Replication
role:slave
master_host:192.168.10.7
master_port:6379
master_link_status:up
master_last_io_seconds_ago:10
master_sync_in_progress:0
slave_repl_offset:42
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:927b8e47cb7d76cdf6fcaf69d92ce7eb2fa99229
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:42
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:42
192.168.10.8:6379> exit

[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 1 keys | 5462 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 1 keys | 5461 slots | 1 slaves.
192.168.10.7:6379 (39532b54...) -> 0 keys | 0 slots | 1 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
```
**Assigning Slots**
node7 has now joined the cluster as a master, but it owns no slots, so it serves no keys. Next, redistribute the slots. Before running the cluster's reshard command here, the data on all nodes was cleared and then reallocated, so back up your redis data in advance.
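The value 4096 entered at the reshard prompt is simply an even four-way split of the 16384 slots, just as the original 5462/5461/5461 layout was an even three-way split. A quick sketch of the arithmetic (the helper name `even_split` is mine):

```python
TOTAL_SLOTS = 16384

def even_split(n_masters):
    """Slots per master for an even split; earlier masters absorb the remainder."""
    base, rem = divmod(TOTAL_SLOTS, n_masters)
    return [base + (1 if i < rem else 0) for i in range(n_masters)]

print(even_split(3))  # [5462, 5461, 5461] -- the original three-master layout
print(even_split(4))  # [4096, 4096, 4096, 4096] -- why 4096 slots go to node7
```

As the `redis-trib.rb info` output below shows, the actual run ended up somewhat uneven (3817/4067/4432/4068), but the total still sums to 16384.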
```
[root@node1 ~]# redis-trib.rb reshard 192.168.10.1:6379
```

At the prompts: enter 4096 as the number of slots to move, enter node7's ID as the receiving node, and enter `all` to take slots from all existing masters (alternatively, enter specific source master IDs and finish with `done`).

```
[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 0 keys | 3817 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 0 keys | 4067 slots | 1 slaves.
192.168.10.7:6379 (39532b54...) -> 0 keys | 4432 slots | 1 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 4068 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
```
**Taking a Specific Server (e.g. node7) Offline**
First move all of node7's slots to node3 (the receiving node must be a master):

```
[root@node1 ~]# redis-trib.rb reshard 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:7106-10922 (3817 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:12317-16383 (4067 slots) master
   1 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots:0-1392,5461-7105,10923-12316 (4432 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:1393-5460 (4068 slots) master
   1 additional replica(s)
S: 7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379
   slots: (0 slots) slave
   replicates 39532b5439b378bb36b4b281e545b8b6604917af
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4432                 # node7's slot count
What is the receiving node ID? bbe35416650b74b326bc6657d2ff18cc6edc14ce    # node3's ID
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:39532b5439b378bb36b4b281e545b8b6604917af                    # node7's ID
Source node #2:done

[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 0 keys | 3817 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 0 keys | 8499 slots | 2 slaves.
192.168.10.7:6379 (39532b54...) -> 0 keys | 0 slots | 0 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 4068 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
```

Remove node7 with the cluster management command:

```
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:7106-10922 (3817 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:0-1392,5461-7105,10923-16383 (8499 slots) master
   2 additional replica(s)
M: 39532b5439b378bb36b4b281e545b8b6604917af 192.168.10.7:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:1393-5460 (4068 slots) master
   1 additional replica(s)
S: 7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

[root@node1 ~]# redis-trib.rb del-node 192.168.10.1:6379 39532b5439b378bb36b4b281e545b8b6604917af   # followed by node7's ID
>>> Removing node 39532b5439b378bb36b4b281e545b8b6604917af from cluster 192.168.10.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

[root@node1 ~]# redis-trib.rb info 192.168.10.1:6379
192.168.10.6:6379 (46152079...) -> 0 keys | 3817 slots | 1 slaves.
192.168.10.3:6379 (bbe35416...) -> 0 keys | 8499 slots | 2 slaves.
192.168.10.5:6379 (9519fb41...) -> 0 keys | 4068 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
```

Delete node7's former slave, node8:

```
[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:7106-10922 (3817 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:0-1392,5461-7105,10923-16383 (8499 slots) master
   2 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:1393-5460 (4068 slots) master
   1 additional replica(s)
S: 7e462b96b548ae2dc1512c71765c58493362671f 192.168.10.8:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

[root@node1 ~]# redis-trib.rb del-node 192.168.10.1:6379 7e462b96b548ae2dc1512c71765c58493362671f   # followed by node8's ID
>>> Removing node 7e462b96b548ae2dc1512c71765c58493362671f from cluster 192.168.10.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

[root@node1 ~]# redis-trib.rb check 192.168.10.1:6379
>>> Performing Cluster Check (using node 192.168.10.1:6379)
S: ea5b2c985511241879f17d84d462888a24d20590 192.168.10.1:6379
   slots: (0 slots) slave
   replicates 9519fb412b0199e2764f1d14f951194eb758e967
M: 461520796edb41baea82eadf8a99da26a2d77d34 192.168.10.6:6379
   slots:7106-10922 (3817 slots) master
   1 additional replica(s)
M: bbe35416650b74b326bc6657d2ff18cc6edc14ce 192.168.10.3:6379
   slots:0-1392,5461-7105,10923-16383 (8499 slots) master
   1 additional replica(s)
S: 0a45c94458e8a5d751275e252c40c280ff78527e 192.168.10.2:6379
   slots: (0 slots) slave
   replicates 461520796edb41baea82eadf8a99da26a2d77d34
S: dd88beffe2da38f1f6f56664e2b00139d67061ef 192.168.10.4:6379
   slots: (0 slots) slave
   replicates bbe35416650b74b326bc6657d2ff18cc6edc14ce
M: 9519fb412b0199e2764f1d14f951194eb758e967 192.168.10.5:6379
   slots:1393-5460 (4068 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
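After a reshard it is worth confirming for yourself what `[OK] All 16384 slots covered.` asserts: the masters' slot ranges cover every slot exactly once. A small Python sketch (helper names are mine) that parses range strings like the `0-1392,5461-7105,10923-16383` shown above and checks coverage:

```python
def parse_ranges(spec):
    """Expand a slot-range string such as '0-1392,5461-7105' into a set of slots."""
    slots = set()
    for part in spec.split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            slots.update(range(int(lo), int(hi) + 1))
        else:
            slots.add(int(part))  # a single slot with no dash
    return slots

def full_coverage(specs):
    """True when the ranges cover every slot 0..16383 with no overlaps."""
    seen = set()
    for spec in specs:
        slots = parse_ranges(spec)
        if seen & slots:  # overlapping ranges would mean conflicting owners
            return False
        seen |= slots
    return seen == set(range(16384))

# The final three-master layout from the check output above:
print(full_coverage(["7106-10922", "0-1392,5461-7105,10923-16383", "1393-5460"]))  # True
```

The range arithmetic also explains the slot counts redis-trib reports: `0-1392,5461-7105,10923-16383` expands to 1393 + 1645 + 5461 = 8499 slots, exactly the figure shown for node3.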

Source: https://www.cnblogs.com/yinx1n/p/11865637.html
