Redis Cluster Migration

2024-04-17 09:44:59

Client Redis Configuration

The client's Redis server address list is configured as follows; it is usually set to all nodes of the cluster.

"redis_cluster":{
        "serv_list":[
            {"ip":"172.21.52.28",   "port":6379  }, { "ip":"172.21.52.28",  "port":6380 },  { "ip":"172.21.52.28",   "port":6381 }
        ]

}

The client tries the nodes in the configured list one by one. As soon as it can connect to any node and fetch the cluster's slot information, it caches the cluster topology and completes initialization.

The configuration therefore works as long as it contains at least one reachable, serving node of the cluster.

If a master/slave failover or a slot migration happens at runtime, the old node replies with a MOVED redirection, and the gateway re-fetches and re-caches the slot information.
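For illustration, a MOVED redirection as seen from redis-cli (without -c) might look like this; the key foo hashes to slot 12182, which belongs to 172.21.52.28:6383 in the cluster shown below:

127.0.0.1:6379> get foo
(error) MOVED 12182 172.21.52.28:6383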

Cluster Node Replacement Steps

Example: replace 172.21.52.28:6384 with the new node 127.0.0.1:6386

Start the New Node

The new Redis server's configuration must be identical to the existing cluster's (it is recommended to copy an existing configuration file and then adjust per-node settings such as the port).

Start command:

 /usr/local/bin/redis-server /usr/local/redis-cluster/conf/redis-6386.conf
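A minimal sketch of the per-node entries that would typically differ in redis-6386.conf (the exact contents are an assumption; the node-file and dump paths follow the ones referenced later in this article):

port 6386
cluster-enabled yes
cluster-config-file nodes-6386.conf
dir /data/redis-6386
dbfilename dump.rdb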

Ensure the Node Being Replaced Is a Slave

Check node status with cluster nodes.
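A typical invocation (the password and the target node are assumptions; any reachable cluster node works):

redis-cli -a xxx -h 172.21.52.28 -p 6379 cluster nodes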
11a027f61116a5578638add26e23f5654090f38b 172.21.52.28:6384@16384 slave 7946da0fdc152c3e56b68f602283a302f5b815f3 0 1699705127341 11 connected
ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 172.21.52.28:6383@16383 master - 0 1699705126000 8 connected 10923-16383
34625d3289582a79802775d8a21e6bd86924ff36 172.21.52.28:6380@16380 slave 5350cb588918bfd535301fde95dc31467813fb12 0 1699705125000 9 connected
594bfd6f99d16c148229ccb20977a7b7812b2a01 172.21.52.28:6381@16381 slave ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 0 1699705126337 8 connected
7946da0fdc152c3e56b68f602283a302f5b815f3 172.21.52.28:6379@16379 myself,master - 0 1699705128000 11 connected 0-5460
5350cb588918bfd535301fde95dc31467813fb12 172.21.52.28:6382@16382 master - 0 1699705128345 9 connected 5461-10922

The node to be replaced, 172.21.52.28:6384, is a slave whose master id is 7946da0fdc152c3e56b68f602283a302f5b815f3, so it meets the replacement condition.

If the node is a master, you can run cluster failover on one of its slaves to demote it first.

This step reduces the chance of extra data synchronization in the subsequent operations.
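A hedged sketch of that failover, run on the slave that should take over (the slave's address here is hypothetical):

redis-cli -a xxx -h 172.21.52.28 -p 6385 cluster failover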

Add the New Node

The command to add a new slave node to the cluster is as follows:

redis-cli -a xxx --cluster add-node --cluster-slave --cluster-master-id 7946da0fdc152c3e56b68f602283a302f5b815f3 127.0.0.1:6386 127.0.0.1:6379

xxx is the password

7946da0fdc152c3e56b68f602283a302f5b815f3 is the id of the designated master

127.0.0.1:6386 is the new slave node

127.0.0.1:6379 is any existing cluster node

The output of the add operation is as follows:

>>> Adding node 127.0.0.1:6386 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 7946da0fdc152c3e56b68f602283a302f5b815f3 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 11a027f61116a5578638add26e23f5654090f38b 172.21.52.28:6384
   slots: (0 slots) slave
   replicates 7946da0fdc152c3e56b68f602283a302f5b815f3
M: ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 172.21.52.28:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 34625d3289582a79802775d8a21e6bd86924ff36 172.21.52.28:6380
   slots: (0 slots) slave
   replicates 5350cb588918bfd535301fde95dc31467813fb12
S: 594bfd6f99d16c148229ccb20977a7b7812b2a01 172.21.52.28:6381
   slots: (0 slots) slave
   replicates ca6ded950ba7a7ad2751749fb6e37f83c27ab81f
M: 5350cb588918bfd535301fde95dc31467813fb12 172.21.52.28:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6386 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:6379.
[OK] New node added correctly.

Check node status again with cluster nodes:

11a027f61116a5578638add26e23f5654090f38b 172.21.52.28:6384@16384 slave 7946da0fdc152c3e56b68f602283a302f5b815f3 0 1699706047072 11 connected
ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 172.21.52.28:6383@16383 master - 0 1699706047000 8 connected 10923-16383
34625d3289582a79802775d8a21e6bd86924ff36 172.21.52.28:6380@16380 slave 5350cb588918bfd535301fde95dc31467813fb12 0 1699706046000 9 connected
594bfd6f99d16c148229ccb20977a7b7812b2a01 172.21.52.28:6381@16381 slave ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 0 1699706046000 8 connected
7946da0fdc152c3e56b68f602283a302f5b815f3 127.0.0.1:6379@16379 myself,master - 0 1699706046000 11 connected 0-5460
5350cb588918bfd535301fde95dc31467813fb12 172.21.52.28:6382@16382 master - 0 1699706049080 9 connected 5461-10922
b5b2e845f11146a0d4ca489610f0d4480597b433 127.0.0.1:6386@16386 slave 7946da0fdc152c3e56b68f602283a302f5b815f3 0 1699706048076 11 connected

At this point the newly added node and the node to be replaced are both slaves of the same master.

Repeat this step to add all of the new nodes.

Note: if the add fails with the message "is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0",

delete conf/nodes-6386.conf and /data/redis-6386/dump.rdb, then restart the node and add it again.
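A hedged sketch of that recovery (the conf directory is assumed to match the start command above; shutdown nosave is one way to stop the node):

redis-cli -a xxx -p 6386 shutdown nosave
rm /usr/local/redis-cluster/conf/nodes-6386.conf
rm /data/redis-6386/dump.rdb
/usr/local/bin/redis-server /usr/local/redis-cluster/conf/redis-6386.conf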

 

While adding nodes, watch the master's CPU, memory, and network usage for anomalies.

Check the new node's replication status with info Replication to make sure it is working correctly.

172.21.52.28:6386> info Replication

# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
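master_link_status:up means the replication link is healthy. During the initial sync it can also help to watch the sync-progress fields; a hedged one-liner, assuming the same password:

redis-cli -a xxx -p 6386 info replication | grep -E 'master_link_status|master_sync_in_progress'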

Update the Client Configuration

At this point, both the newly added node and the node pending removal can serve traffic normally.

Roll out a configuration update that replaces the old nodes with the new ones.
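For illustration, a hypothetical serv_list after the migration (127.0.0.1:6386 is the new node from the example above; the other two addresses are assumptions):

"redis_cluster": {
    "serv_list": [
        { "ip": "127.0.0.1", "port": 6386 },
        { "ip": "127.0.0.1", "port": 6387 },
        { "ip": "127.0.0.1", "port": 6388 }
    ]
}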

Remove the Old Node from the Cluster

Before removal, make sure the node is a slave:

11a027f61116a5578638add26e23f5654090f38b 172.21.52.28:6384@16384 slave 7946da0fdc152c3e56b68f602283a302f5b815f3 0 1699705127341 11 connected

The removal command is as follows:

redis-cli -a xxx --cluster del-node 127.0.0.1:6379 11a027f61116a5578638add26e23f5654090f38b

xxx is the password

127.0.0.1:6379 is any existing cluster node

11a027f61116a5578638add26e23f5654090f38b is the id of the node to be removed

The output of the removal is as follows:

>>> Removing node 11a027f61116a5578638add26e23f5654090f38b from cluster 127.0.0.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

Repeat this step to remove all of the replaced nodes.
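To confirm a node is really gone, one simple check (a sketch, assuming the same password) is to look for its id in the cluster view; it should print nothing:

redis-cli -a xxx -p 6379 cluster nodes | grep 11a027f61116a5578638add26e23f5654090f38b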

 
