Redis Cluster Migration

2024-04-17 09:44:59

Client Redis Configuration

The client's list of Redis server addresses is configured as follows; it is usually set to all nodes of the cluster.

"redis_cluster":{
        "serv_list":[
            {"ip":"172.21.52.28",   "port":6379  }, { "ip":"172.21.52.28",  "port":6380 },  { "ip":"172.21.52.28",   "port":6381 }
        ]

}

The client tries the nodes in the configured list one by one. As soon as it can connect to any node and fetch the cluster slot map, it caches the cluster information and completes initialization.

The configuration therefore works as long as it contains at least one serviceable node of the cluster.

If a master/replica switch or a slot migration happens at runtime, the old node replies with a MOVED redirect, and the gateway re-fetches and re-caches the slot map.
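
For example, a plain (non-cluster) redis-cli connection surfaces the redirect to the caller, while cluster mode (-c) follows it transparently; the key and slot below are illustrative:

# Illustrative key/slot: a plain connection returns the raw redirect
redis-cli -h 172.21.52.28 -p 6379 -a xxx GET user:1001
(error) MOVED 12182 172.21.52.28:6383
# In cluster mode (-c) redis-cli follows the MOVED redirect automatically
redis-cli -c -h 172.21.52.28 -p 6379 -a xxx GET user:1001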

Cluster Node Replacement Procedure

Example: replace 172.21.52.28:6384 with the new node 127.0.0.1:6386.

Start the New Node

The new Redis service must use the same configuration as the existing cluster nodes (copy an existing node's configuration file, then adjust node-specific settings such as the port).

Start command:

 /usr/local/bin/redis-server /usr/local/redis-cluster/conf/redis-6386.conf
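
A minimal sketch of preparing that configuration, assuming the per-node files live under /usr/local/redis-cluster/conf/ as in the start command above; afterwards verify that port, pidfile, logfile, dir, and cluster-config-file are all node-specific:

# Copy an existing node's config and rewrite the port-derived settings for 6386
cp /usr/local/redis-cluster/conf/redis-6379.conf /usr/local/redis-cluster/conf/redis-6386.conf
sed -i 's/6379/6386/g' /usr/local/redis-cluster/conf/redis-6386.conf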

Ensure the Node Being Replaced Is a Replica

Check node status with CLUSTER NODES:
11a027f61116a5578638add26e23f5654090f38b 172.21.52.28:6384@16384 slave 7946da0fdc152c3e56b68f602283a302f5b815f3 0 1699705127341 11 connected
ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 172.21.52.28:6383@16383 master - 0 1699705126000 8 connected 10923-16383
34625d3289582a79802775d8a21e6bd86924ff36 172.21.52.28:6380@16380 slave 5350cb588918bfd535301fde95dc31467813fb12 0 1699705125000 9 connected
594bfd6f99d16c148229ccb20977a7b7812b2a01 172.21.52.28:6381@16381 slave ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 0 1699705126337 8 connected
7946da0fdc152c3e56b68f602283a302f5b815f3 172.21.52.28:6379@16379 myself,master - 0 1699705128000 11 connected 0-5460
5350cb588918bfd535301fde95dc31467813fb12 172.21.52.28:6382@16382 master - 0 1699705128345 9 connected 5461-10922

The node to be replaced, 172.21.52.28:6384, is a replica whose master ID is 7946da0fdc152c3e56b68f602283a302f5b815f3, so it meets the replacement condition.

If the node is currently a master, run CLUSTER FAILOVER on one of its replicas to switch roles first.

Demoting it first reduces the chance of triggering extra data synchronization in the later steps.
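
A minimal sketch of that demotion, with placeholder address values to fill in (run the command on the replica that should take over, not on the master):

redis-cli -a xxx -h <replica-ip> -p <replica-port> cluster failover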

Add the New Node

The command to add a new replica node to the cluster is:

redis-cli -a xxx --cluster add-node --cluster-slave --cluster-master-id 7946da0fdc152c3e56b68f602283a302f5b815f3 127.0.0.1:6386 127.0.0.1:6379

xxx is the password.

7946da0fdc152c3e56b68f602283a302f5b815f3 is the ID of the designated master.

127.0.0.1:6386 is the new replica being added.

127.0.0.1:6379 is any existing cluster node.

The add-node log looks like this:

>>> Adding node 127.0.0.1:6386 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 7946da0fdc152c3e56b68f602283a302f5b815f3 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 11a027f61116a5578638add26e23f5654090f38b 172.21.52.28:6384
   slots: (0 slots) slave
   replicates 7946da0fdc152c3e56b68f602283a302f5b815f3
M: ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 172.21.52.28:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 34625d3289582a79802775d8a21e6bd86924ff36 172.21.52.28:6380
   slots: (0 slots) slave
   replicates 5350cb588918bfd535301fde95dc31467813fb12
S: 594bfd6f99d16c148229ccb20977a7b7812b2a01 172.21.52.28:6381
   slots: (0 slots) slave
   replicates ca6ded950ba7a7ad2751749fb6e37f83c27ab81f
M: 5350cb588918bfd535301fde95dc31467813fb12 172.21.52.28:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6386 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:6379.
[OK] New node added correctly.

Check node status again with CLUSTER NODES:

11a027f61116a5578638add26e23f5654090f38b 172.21.52.28:6384@16384 slave 7946da0fdc152c3e56b68f602283a302f5b815f3 0 1699706047072 11 connected
ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 172.21.52.28:6383@16383 master - 0 1699706047000 8 connected 10923-16383
34625d3289582a79802775d8a21e6bd86924ff36 172.21.52.28:6380@16380 slave 5350cb588918bfd535301fde95dc31467813fb12 0 1699706046000 9 connected
594bfd6f99d16c148229ccb20977a7b7812b2a01 172.21.52.28:6381@16381 slave ca6ded950ba7a7ad2751749fb6e37f83c27ab81f 0 1699706046000 8 connected
7946da0fdc152c3e56b68f602283a302f5b815f3 127.0.0.1:6379@16379 myself,master - 0 1699706046000 11 connected 0-5460
5350cb588918bfd535301fde95dc31467813fb12 172.21.52.28:6382@16382 master - 0 1699706049080 9 connected 5461-10922
b5b2e845f11146a0d4ca489610f0d4480597b433 127.0.0.1:6386@16386 slave 7946da0fdc152c3e56b68f602283a302f5b815f3 0 1699706048076 11 connected

At this point the new node and the node to be replaced are both replicas of the same master.

Repeat this step to add all the new nodes.

Note: if adding the node fails with the message "is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0", delete conf/nodes-6386.conf and /data/redis-6386/dump.rdb, then restart the node and try the add again; a sketch of this sequence follows.
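
A sketch of that reset sequence, using the paths from this example:

# Stop the node without persisting, wipe its cluster state and data, restart
redis-cli -a xxx -p 6386 shutdown nosave
rm conf/nodes-6386.conf /data/redis-6386/dump.rdb
/usr/local/bin/redis-server /usr/local/redis-cluster/conf/redis-6386.conf
# then re-run the add-node command above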

 

While adding nodes, watch the masters' CPU, memory, network, and other metrics for anomalies.

Check the new node's data synchronization status with the INFO replication command to make sure it is working properly; sample output and a small polling sketch follow.

172.21.52.28:6386> info Replication

# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
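
A small polling sketch that blocks until the replica link comes up, reusing the password and port from this example:

# Poll INFO replication until the link to the master is established
until redis-cli -a xxx -p 6386 info replication | grep -q 'master_link_status:up'; do
    sleep 1
done
echo 'replica 6386 link is up'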

Update the Client Configuration

At this point both the new node and the node slated for removal can serve traffic normally.

Push a configuration update that replaces the old nodes with the new ones in the client's serv_list.
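
For this example, the updated serv_list might look as follows, assuming the 172.21.52.28 entries are swapped for their replacements as the migration proceeds (as noted above, the list only needs to contain reachable cluster nodes):

"redis_cluster": {
    "serv_list": [
        {"ip": "127.0.0.1", "port": 6386},
        {"ip": "172.21.52.28", "port": 6380},
        {"ip": "172.21.52.28", "port": 6381}
    ]
}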

Remove the Old Node from the Cluster

Before removal, make sure the node is a replica:

11a027f61116a5578638add26e23f5654090f38b 172.21.52.28:6384@16384 slave 7946da0fdc152c3e56b68f602283a302f5b815f3 0 1699705127341 11 connected

The removal command is:

redis-cli -a xxx --cluster del-node 127.0.0.1:6379 11a027f61116a5578638add26e23f5654090f38b

xxx is the password.

127.0.0.1:6379 is any existing cluster node.

11a027f61116a5578638add26e23f5654090f38b is the ID of the node to delete.

The removal log looks like this:

>>> Removing node 11a027f61116a5578638add26e23f5654090f38b from cluster 127.0.0.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
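
A quick verification sketch: after removal, the deleted node's ID should no longer appear in the cluster view of any remaining node:

# grep finds nothing once the node is forgotten, so the fallback echo fires
redis-cli -a xxx -h 127.0.0.1 -p 6379 cluster nodes | grep 11a027f61116a5578638add26e23f5654090f38b || echo 'node removed'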

Repeat this step to remove all the replaced nodes.

 
