Created
September 29, 2024 04:30
Test of Redis Sentinel + HAProxy + DNS failover with a simple Python application
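The benchmark script itself is not included in this gist. A minimal sketch of what redis_benchmark1.py could look like, reconstructed from the log format below (the endpoint host/port, password, batch size, and timing are assumptions, not taken from the gist):

```python
import time

def run_batch(client, start_key, batch_size=10):
    """Try to SET batch_size keys; return (success, failure) counts."""
    success = failure = 0
    for i in range(start_key, start_key + batch_size):
        try:
            client.set(f"key:{i}", "value")
            success += 1
        except Exception as exc:  # the real script likely catches redis.ConnectionError
            print(f"Connection error while setting key:{i}: {exc}")
            failure += 1
    return success, failure

def main():  # requires a live HAProxy/Redis endpoint, so not invoked here
    import redis  # pip install redis
    # Hypothetical endpoint: the HAProxy front end seen later in this log.
    client = redis.Redis(host="multifo-redis-7800.kienlt.local", port=7800,
                         password="123123", socket_timeout=5)
    key = 0
    while True:
        ok, bad = run_batch(client, key)
        key += 10
        print(f"[{time.strftime('%H:%M:%S')}] Success: {ok}, Failure: {bad}")
        time.sleep(5)
```

Writing through HAProxy (rather than directly to a node) is what lets the script keep the same endpoint across failovers.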
kienlt@ubuntu22-desktop:/data/test$ python3 redis_benchmark1.py
[15:17:36] Success: 10, Failure: 0
[15:17:41] Success: 10, Failure: 0
[15:17:46] Success: 10, Failure: 0
[15:17:51] Success: 10, Failure: 0
[15:17:56] Success: 10, Failure: 0
[15:18:01] Success: 10, Failure: 0
[15:18:07] Success: 10, Failure: 0
[15:18:12] Success: 10, Failure: 0
[15:18:17] Success: 10, Failure: 0
[15:18:22] Success: 10, Failure: 0
===> Shut down services redis-server, redis-sentinel, and haproxy on the master node! (10.0.0.10)
Connection error while setting key:100: Connection closed by server.
Connection error while setting key:101: Connection closed by server.
[15:18:31] Success: 0, Failure: 10
===> Failover
[15:18:36] Success: 10, Failure: 0
[15:18:41] Success: 10, Failure: 0
[15:18:46] Success: 10, Failure: 0
[15:18:51] Success: 10, Failure: 0
[15:18:56] Success: 10, Failure: 0
[15:19:01] Success: 10, Failure: 0
[15:19:06] Success: 10, Failure: 0
[15:19:11] Success: 10, Failure: 0
[15:19:16] Success: 10, Failure: 0
==> Check the status of the slave node that was just promoted to master
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:0
==> Start the services on the node that was just shut down (10.0.0.10). The insert process keeps running.
==> Check the status of both nodes again!
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:slave
master_host:10.1.0.10
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:1
==> OK, let's shut down the current master node 10.1.0.10
root@kienlt-redis-sentinel-2:/data/redis-config# systemctl stop redis-sentinel-8800 redis-server-6800 haproxy
==> Check the status of the other node
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:0
==> Application log
[15:24:53] Success: 10, Failure: 0
[15:24:58] Success: 10, Failure: 0
[15:25:03] Success: 10, Failure: 0
[15:25:08] Success: 10, Failure: 0
[15:25:13] Success: 10, Failure: 0
[15:25:18] Success: 10, Failure: 0
[15:25:24] Success: 10, Failure: 0
[15:25:29] Success: 10, Failure: 0
[15:25:34] Success: 10, Failure: 0
[15:25:39] Success: 10, Failure: 0
[15:25:44] Success: 10, Failure: 0
[15:25:49] Success: 10, Failure: 0
==> This is when the master node 10.1.0.10 was stopped
Connection error while setting key:970: Connection closed by server.
Connection error while setting key:971: Connection closed by server.
Connection error while setting key:972: Connection closed by server.
Connection error while setting key:973: Connection closed by server.
Connection error while setting key:974: Connection closed by server.
Connection error while setting key:975: Connection closed by server.
Connection error while setting key:976: Connection closed by server.
Connection error while setting key:977: Connection closed by server.
Connection error while setting key:978: Connection closed by server.
Connection error while setting key:979: Connection closed by server.
[15:26:01] Success: 0, Failure: 10
[15:26:06] Success: 10, Failure: 0
[15:26:11] Success: 10, Failure: 0
[15:26:16] Success: 10, Failure: 0
[15:26:21] Success: 10, Failure: 0
[15:26:26] Success: 10, Failure: 0
[15:26:31] Success: 10, Failure: 0
[15:26:36] Success: 10, Failure: 0
[15:26:41] Success: 10, Failure: 0
[15:26:47] Success: 10, Failure: 0
[15:26:52] Success: 10, Failure: 0
===> Start node 10.1.0.10 again: systemctl start redis-sentinel-8800 redis-server-6800 haproxy
==> Check the status again!
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:1
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:slave
master_host:10.0.0.10
==> Seems good? Nah, stop the master again to check another case!
root@kienlt-redis-sentinel-1:/data/redis-config# systemctl stop redis-sentinel-8800.service redis-server-6800.service haproxy.service
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:0
==> The application log is the same as above, no need to paste it again.
==> Start the node that was just shut down, and quickly run info replication
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:0
==> But after 2 seconds
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:slave
master_host:10.1.0.10
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:1
==> Failover looks good! OK, next I want node 10.0.0.10 to always be the master whenever it is online
==> Manual failover process!
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 8800 -a 123123 sentinel failover 8800-sentinel-data-crm
OK
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:1
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:slave
master_host:10.0.0.10
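Every status check in this log pipes `info replication` through `head`. The same check can be scripted; below is a sketch of a parser for raw INFO text like the output shown above (note that with redis-py, `client.info("replication")` already returns a dict, so this helper is only needed when you have the plain text):

```python
def parse_replication(info_text):
    """Parse the '# Replication' section of Redis INFO output into a dict."""
    fields = {}
    for line in info_text.splitlines():
        line = line.strip()
        # Skip blank lines and '# Section' headers; fields are 'key:value'.
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            fields[key] = value
    return fields
```

For example, feeding it the node-2 output above yields `{"role": "slave", "master_host": "10.0.0.10"}`.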
### Other test cases
1. Stop redis-server, redis-sentinel, and haproxy
- Stop the slave node: the application still writes successfully (of course xD).
- Start the slave node: same as above.
2. Stop Sentinel on Node3 (10.2.0.10), then stop redis-server on the master
- Check quorum before stopping:
root@kienlt-redis-sentinel-3:~# redis-cli -p 8800 -a 123123 sentinel ckquorum 8800-sentinel-data-crm
OK 3 usable Sentinels. Quorum and failover authorization can be reached
root@kienlt-redis-sentinel-3:~# redis-cli -p 8800 -a 123123 sentinel slaves 8800-sentinel-data-crm
1) 1) "name"
2) "10.1.0.10:6800"
3) "ip"
4) "10.1.0.10"
5) "port"
6) "6800"
7) "runid"
8) "ec9cd2ae4a78d8da12977c99b3b5df8e69bd3f7e"
9) "flags"
10) "slave"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "52"
19) "last-ping-reply"
20) "52"
21) "down-after-milliseconds"
22) "5000"
23) "info-refresh"
24) "2079"
25) "role-reported"
26) "slave"
27) "role-reported-time"
28) "326053"
29) "master-link-down-time"
30) "0"
31) "master-link-status"
32) "ok"
33) "master-host"
34) "10.0.0.10"
35) "master-port"
36) "6800"
37) "slave-priority"
38) "100"
39) "slave-repl-offset"
40) "667706"
41) "replica-announced"
42) "1"
==> Let's begin stopping the processes
root@kienlt-redis-sentinel-3:~# systemctl stop redis-sentinel-8800
root@kienlt-redis-sentinel-1:/data/redis-config# systemctl stop redis-server-6800 # This is the master node
==> Application log
[15:41:45] Success: 10, Failure: 0
[15:41:50] Success: 10, Failure: 0
[15:41:55] Success: 10, Failure: 0
[15:42:00] Success: 10, Failure: 0
[15:42:05] Success: 10, Failure: 0
==> At stop time
Connection error while setting key:2860: Connection closed by server.
Connection error while setting key:2861: Connection closed by server.
Connection error while setting key:2862: Connection closed by server.
Connection error while setting key:2863: Connection closed by server.
Connection error while setting key:2864: Connection closed by server.
Connection error while setting key:2865: Connection closed by server.
Connection error while setting key:2866: Connection closed by server.
Connection error while setting key:2867: Connection closed by server.
Connection error while setting key:2868: Connection closed by server.
Connection error while setting key:2869: Connection closed by server.
[15:42:20] Success: 0, Failure: 10
[15:42:25] Success: 10, Failure: 0
[15:42:30] Success: 10, Failure: 0
[15:42:35] Success: 10, Failure: 0
[15:42:40] Success: 10, Failure: 0
[15:42:45] Success: 10, Failure: 0
[15:42:50] Success: 10, Failure: 0
==> Check the status of the node that was just promoted
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:0
Result: it has no impact at all, but we should not let this happen in a production environment!
After that, start all the processes we just stopped. Everything is still fine!
3. Last try: stop the current write process, run flushdb on the master, then stop the master Redis but not HAProxy
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 flushdb
OK
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 dbsize
(integer) 0
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:slave
master_host:10.1.0.10
root@kienlt-redis-sentinel-2:/data/redis-config# systemctl stop redis-server-6800 redis-sentinel-8800
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
==> Then start the write application for testing
kienlt@ubuntu22-desktop:/data/test$ python3 redis_benchmark1.py
[15:47:37] Success: 10, Failure: 0
[15:47:42] Success: 10, Failure: 0
[15:47:47] Success: 10, Failure: 0
[15:47:52] Success: 10, Failure: 0
[15:47:57] Success: 10, Failure: 0
==> Now start the Redis instance on the stopped node
root@kienlt-redis-sentinel-2:/data/redis-config# systemctl start redis-server-6800 redis-sentinel-8800
==> Check status
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:slave
master_host:10.0.0.10
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:1
4. Rare case! Stop 2 of the 3 nodes completely.
==> Stop the current master's redis + sentinel + haproxy on node 10.0.0.10, and sentinel + haproxy on node 10.2.0.10 (which has Sentinel only installed)
Guess what?
The slave node was not able to promote itself because 2 Sentinel instances were down.
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:slave
master_host:10.0.0.10
==> So we need to resolve it manually: slaveof no one is the solution!
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 slaveof no one
OK
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:0
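This is the expected behavior: Sentinel needs a majority of all Sentinels to authorize a failover, so with 2 of 3 down the surviving replica stays a replica until an operator issues SLAVEOF NO ONE, as above. A small helper sketching when that manual step is needed (the majority arithmetic is standard Sentinel behavior; the function itself is not from the gist):

```python
def needs_manual_promotion(role, usable_sentinels, total_sentinels):
    """True when this node is still a replica but Sentinel lost its majority.

    Automatic failover requires a majority of all Sentinels; with only
    1 of 3 alive, the replica stays a replica until an operator runs
    SLAVEOF NO ONE (in redis-py: client.slaveof() with no arguments).
    """
    majority = total_sentinels // 2 + 1
    return role == "slave" and usable_sentinels < majority
```

In this test case `needs_manual_promotion("slave", 1, 3)` is True, while with 2 of 3 Sentinels alive it would be False because quorum and majority can still be reached.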
==> Here is the application log
[15:51:47] Success: 10, Failure: 0
Connection error while setting key:500: Connection closed by server.
Connection error while setting key:501: Connection closed by server.
Connection error while setting key:502: Connection closed by server.
[15:51:58] Success: 0, Failure: 10
[15:52:04] Success: 0, Failure: 10
[15:52:10] Success: 0, Failure: 10
Connection error while setting key:674: Connection closed by server.
Connection error while setting key:675: Connection closed by server.
Connection error while setting key:676: Connection closed by server.
Connection error while setting key:677: Error while reading from multifo-redis-7800.kienlt.local:7800 : (104, 'Connection reset by peer')
Connection error while setting key:678: Connection closed by server.
Connection error while setting key:679: Connection closed by server.
[15:53:34] Success: 0, Failure: 10
Connection error while setting key:680: Connection closed by server.
Connection error while setting key:681: Connection closed by server.
Connection error while setting key:682: Connection closed by server.
Connection error while setting key:683: Connection closed by server.
Connection error while setting key:684: Connection closed by server.
[15:53:40] Success: 0, Failure: 10
[15:53:45] Success: 0, Failure: 10
Connection error while setting key:707: Connection closed by server.
Connection error while setting key:708: Connection closed by server.
Connection error while setting key:709: Connection closed by server.
[15:53:51] Success: 0, Failure: 10
==> The slaveof no one command was entered
[15:53:56] Success: 10, Failure: 0
[15:54:01] Success: 10, Failure: 0
[15:54:06] Success: 10, Failure: 0
[15:54:11] Success: 10, Failure: 0
[15:54:16] Success: 10, Failure: 0
[15:54:22] Success: 10, Failure: 0
[15:54:27] Success: 10, Failure: 0
[15:54:32] Success: 10, Failure: 0
[15:54:37] Success: 10, Failure: 0
[15:54:42] Success: 10, Failure: 0
[15:54:47] Success: 10, Failure: 0
[15:54:52] Success: 10, Failure: 0
Let's start the stopped instances again!
root@kienlt-redis-sentinel-3:~# systemctl start redis-sentinel-8800 haproxy.service
root@kienlt-redis-sentinel-1:/data/redis-config# systemctl start redis-server-6800.service redis-sentinel-8800.service haproxy.service
==> Check status
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:1
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:slave
master_host:10.0.0.10
==> Hmm? The master switched!
==> Application log
[15:55:53] Success: 10, Failure: 0
[15:55:58] Success: 10, Failure: 0
[15:56:03] Success: 10, Failure: 0
[15:56:08] Success: 10, Failure: 0
[15:56:14] Success: 10, Failure: 0
[15:56:19] Success: 10, Failure: 0
[15:56:24] Success: 10, Failure: 0
[15:56:29] Success: 10, Failure: 0
[15:56:34] Success: 10, Failure: 0
[15:56:39] Success: 10, Failure: 0
[15:56:44] Success: 10, Failure: 0
[15:56:49] Success: 10, Failure: 0
Error while setting key:1060: You can't write against a read only replica.
Error while setting key:1061: You can't write against a read only replica.
Error while setting key:1062: You can't write against a read only replica.
Error while setting key:1063: You can't write against a read only replica.
[15:56:55] Success: 0, Failure: 10
Error while setting key:1070: You can't write against a read only replica.
Error while setting key:1071: You can't write against a read only replica.
Error while setting key:1072: You can't write against a read only replica.
[15:57:00] Success: 0, Failure: 10
Error while setting key:1080: You can't write against a read only replica.
[15:57:05] Success: 0, Failure: 10
[15:57:10] Success: 10, Failure: 0
[15:57:15] Success: 10, Failure: 0
[15:57:20] Success: 10, Failure: 0
[15:57:25] Success: 10, Failure: 0
[15:57:30] Success: 10, Failure: 0
==> Oh, it auto-recovered
==> BTW, this case happens extremely rarely, but who knows?
5. xD here is the final failover test, I think
==> Check status
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:1
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:slave
master_host:10.0.0.10
==> This time I don't run the application to insert keys into Redis
==> Failover starts
root@kienlt-redis-sentinel-1:/data/redis-config# systemctl stop redis-server-6800
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:0
==> All good, as expected. Let's start it again! This time I ran info replication very fast, about one command per second.
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:0
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:master
connected_slaves:0
root@kienlt-redis-sentinel-1:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n3
# Replication
role:slave
master_host:10.1.0.10
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:0
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:0
root@kienlt-redis-sentinel-2:/data/redis-config# redis-cli -p 6800 -a 123123 info replication|head -n 3
# Replication
role:master
connected_slaves:1
# Updated test case, 07 Aug 2024
==> Shut down the whole server of the node that is the current master
==> In the Redis client connection, set socketTimeout to 30 seconds.
==> The client takes 20 seconds to fail over
kienlt@ubuntu22-desktop:/data/test$ python3 redis_benchmark1.py
[11:08:18] Success: 10, Failure: 0
[11:08:23] Success: 10, Failure: 0
[11:08:29] Success: 10, Failure: 0
[11:08:34] Success: 10, Failure: 0
==> Shut down the whole server
Connection error while setting key:40: Connection closed by server.
Connection error while setting key:41: Connection closed by server.
[11:08:52] Success: 8, Failure: 2
[11:08:57] Success: 10, Failure: 0
[11:09:02] Success: 10, Failure: 0
[11:09:07] Success: 10, Failure: 0
[11:09:12] Success: 10, Failure: 0
[11:09:17] Success: 10, Failure: 0
[11:09:22] Success: 10, Failure: 0
[11:09:27] Success: 10, Failure: 0
[11:09:33] Success: 10, Failure: 0
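In the run above, a single batch at 11:08:52 lost 2 writes while the client waited out the dead TCP connection and the failover completed. A client-side retry wrapper can absorb a blip like that entirely; a sketch (the attempt count and delay are illustrative choices, not values from the gist):

```python
import time

def set_with_retry(client, key, value, attempts=3, delay=1.0):
    """Retry SET so one failover blip does not surface as an application failure.

    socket_timeout on the client bounds how long a dead server can stall each
    attempt; a later retry then lands on the new master once failover completes.
    """
    for n in range(attempts):
        try:
            return client.set(key, value)
        except Exception:
            if n == attempts - 1:
                raise  # exhausted: let the caller count it as a real failure
            time.sleep(delay)
```

The tradeoff is latency: with a 30-second socketTimeout, three attempts can block a writer for well over a minute in the worst case, so the timeout and attempt count should be tuned together.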