Docker (9): Building a Redis Cluster

Published 2023-06-08 23:58:24 · Author: 谁知道水烫不烫

1. Create the network

docker network create redisNet --subnet 172.38.0.0/16

2. Create the Redis configuration files

Each node needs its own redis.conf with cluster mode enabled; it will be mounted into the container in the next step.
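A minimal sketch that generates one config per node. The exact directives are assumptions based on a typical Redis Cluster setup; for illustration the script writes under ./mydata in the current directory — point BASE at /mydata/redis to match the volume mounts used below.

```shell
# Sketch: generate a cluster-ready redis.conf for each of the six nodes.
# BASE is relative here for illustration; use /mydata/redis for the real mounts.
BASE=./mydata/redis
for i in 1 2 3 4 5 6; do
  mkdir -p "${BASE}/node-${i}/conf"
  cat > "${BASE}/node-${i}/conf/redis.conf" <<EOF
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${i}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
```

The cluster-announce-* directives make each node advertise its fixed IP on the redisNet network rather than an address auto-detected inside the container.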

3. Start the Redis containers

docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
    -v /mydata/redis/node-1/data:/data \
    -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redisNet --ip 172.38.0.11 redis redis-server /etc/redis/redis.conf

docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
    -v /mydata/redis/node-2/data:/data \
    -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redisNet --ip 172.38.0.12 redis redis-server /etc/redis/redis.conf

docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
    -v /mydata/redis/node-3/data:/data \
    -v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redisNet --ip 172.38.0.13 redis redis-server /etc/redis/redis.conf

docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
    -v /mydata/redis/node-4/data:/data \
    -v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redisNet --ip 172.38.0.14 redis redis-server /etc/redis/redis.conf

docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
    -v /mydata/redis/node-5/data:/data \
    -v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redisNet --ip 172.38.0.15 redis redis-server /etc/redis/redis.conf

docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
    -v /mydata/redis/node-6/data:/data \
    -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redisNet --ip 172.38.0.16 redis redis-server /etc/redis/redis.conf
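The six nearly identical commands can also be generated with a loop. In this sketch the hypothetical helper run_node only prints each command so it can be reviewed first; pipe the output to `sh` (or drop the echo) to actually start the containers.

```shell
# Sketch: emit the docker run command for node $1 instead of typing it six times.
run_node() {
  i=$1
  echo "docker run -p 637${i}:6379 -p 1637${i}:16379 --name redis-${i}" \
       "-v /mydata/redis/node-${i}/data:/data" \
       "-v /mydata/redis/node-${i}/conf/redis.conf:/etc/redis/redis.conf" \
       "-d --net redisNet --ip 172.38.0.1${i}" \
       "redis redis-server /etc/redis/redis.conf"
}
for i in 1 2 3 4 5 6; do run_node "$i"; done
```

Review the printed commands, then run `for i in 1 2 3 4 5 6; do run_node "$i"; done | sh` to launch all six nodes.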

4. Create the cluster

[root@VM-8-4-centos ~]# docker exec -it redis-1 /bin/sh
# redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 406d1956c1c5cac9f52f2fb6c52a34d3b0fc123f 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 2c316a4dfb725880d7506d6d8c0a15b849e019f4 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 411ce5ed7740e266432c2fbe29a6542e9afcadbb 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 4bce3ac2dcaea05704fde9f4104223f9c13c6917 172.38.0.14:6379
   replicates 411ce5ed7740e266432c2fbe29a6542e9afcadbb
S: 59475bc2e6d6f88288e10ec21cd707d2b463e058 172.38.0.15:6379
   replicates 406d1956c1c5cac9f52f2fb6c52a34d3b0fc123f
S: fdd70c865a894767f8ea4d546ac3f97ebd72d2fd 172.38.0.16:6379
   replicates 2c316a4dfb725880d7506d6d8c0a15b849e019f4
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join


>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 406d1956c1c5cac9f52f2fb6c52a34d3b0fc123f 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 411ce5ed7740e266432c2fbe29a6542e9afcadbb 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 2c316a4dfb725880d7506d6d8c0a15b849e019f4 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fdd70c865a894767f8ea4d546ac3f97ebd72d2fd 172.38.0.16:6379
   slots: (0 slots) slave
   replicates 2c316a4dfb725880d7506d6d8c0a15b849e019f4
S: 4bce3ac2dcaea05704fde9f4104223f9c13c6917 172.38.0.14:6379
   slots: (0 slots) slave
   replicates 411ce5ed7740e266432c2fbe29a6542e9afcadbb
S: 59475bc2e6d6f88288e10ec21cd707d2b463e058 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 406d1956c1c5cac9f52f2fb6c52a34d3b0fc123f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
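The slot ranges in the output come from Redis Cluster's key hashing: HASH_SLOT = CRC16(key) mod 16384, where CRC16 is the CCITT (XModem) variant. A minimal bash sketch of that mapping, with hypothetical helper names crc16 and slot:

```shell
# CRC16-CCITT (XModem): poly 0x1021, initial value 0 — the variant Redis Cluster uses.
crc16() {
  local s=$1 crc=0 i j c
  for ((i = 0; i < ${#s}; i++)); do
    printf -v c '%d' "'${s:i:1}"          # ASCII code of the current character
    crc=$(( (crc ^ (c << 8)) & 0xFFFF ))
    for ((j = 0; j < 8; j++)); do
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo "$crc"
}
# A key's hash slot is the CRC16 value modulo 16384.
slot() { echo $(( $(crc16 "$1") % 16384 )); }
```

For example, `slot foo` prints 12182 — the same slot the Redis Cluster tutorial reports for `CLUSTER KEYSLOT foo` — which falls in the 10923-16383 range served by redis-3 above.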


(This post is just a personal study note; if you spot any mistakes, corrections are welcome.)