This page has been sitting in my drafts for far too long, so I'm taking advantage of some free time to finally put it online.

We are going to see how to set up a highly available MySQL cluster using master/master replication and Multi-Master Replication Manager for MySQL (MMM).

The installation is performed on Ubuntu 10.04.2 LTS (Lucid) and requires at least 3 machines:
Machine | IP address | Role |
---|---|---|
frtls-testmysql-1 | 10.41.51.163 | MySQL master node 1 |
frtls-testmysql-2 | 10.41.51.145 | MySQL master node 2 |
frtls-testmysql-mon | 10.41.51.184 | MMM monitor |
On each of the two MySQL nodes, install the server:

```shell
apt-get install mysql-server
```
MySQL configuration (`[mysqld]` section of my.cnf) on node 1 (frtls-testmysql-1):

```
[mysqld]
#bind-address            = 10.41.51.163
bind-address             = 0.0.0.0
server-id                = 1
auto-increment-increment = 2
auto-increment-offset    = 1
master-host              = 10.41.51.145
master-user              = replicationUser
master-password          = replicationPass
master-port              = 3306
# replicate-ignore-db: databases that must NOT be replicated when acting as slave
replicate-ignore-db      = mysql
replicate-ignore-db      = information_schema
# binlog-ignore-db: databases for which binary logging must NOT be active
binlog-ignore-db         = mysql
binlog-ignore-db         = information_schema
log_bin                  = /var/log/mysql/mysql-bin.log
```
And on node 2 (frtls-testmysql-2):

```
[mysqld]
#bind-address            = 10.41.51.145
bind-address             = 0.0.0.0
server-id                = 2
auto-increment-increment = 2
auto-increment-offset    = 2
master-host              = 10.41.51.163
master-user              = replicationUser
master-password          = replicationPass
master-port              = 3306
# replicate-ignore-db: databases that must NOT be replicated when acting as slave
replicate-ignore-db      = mysql
replicate-ignore-db      = information_schema
# binlog-ignore-db: databases for which binary logging must NOT be active
binlog-ignore-db         = mysql
binlog-ignore-db         = information_schema
log_bin                  = /var/log/mysql/mysql-bin.log
```
With `auto-increment-increment = 2` and distinct offsets, the two servers generate disjoint AUTO_INCREMENT values, which avoids primary-key collisions between the masters:

```
Server 1: 1, 3, 5, 7, 9, 11, ...
Server 2: 2, 4, 6, 8, 10, 12, ...
```
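As a quick illustration (not part of the setup itself), each node's sequence follows `id = offset + n * increment`, which a few lines of shell can reproduce:

```shell
# Illustration only: compute the AUTO_INCREMENT values a node generates
# with auto-increment-increment = 2 and a given auto-increment-offset.
ids() {                          # ids <offset> <count>
  offset=$1; count=$2; n=0; out=""
  while [ "$n" -lt "$count" ]; do
    out="$out $((offset + n * 2))"
    n=$((n + 1))
  done
  echo "${out# }"                # trim the leading space
}
echo "Server 1: $(ids 1 6) ..."  # Server 1: 1 3 5 7 9 11 ...
echo "Server 2: $(ids 2 6) ..."  # Server 2: 2 4 6 8 10 12 ...
```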
On node 1 (10.41.51.163), create the replication account that node 2 will connect with:

```
mysql> grant replication slave, replication client on *.* to 'replicationUser'@'10.41.51.145' identified by 'replicationPass';
mysql> flush privileges;
```

On node 2 (10.41.51.145), do the same for node 1:

```
mysql> grant replication slave, replication client on *.* to 'replicationUser'@'10.41.51.163' identified by 'replicationPass';
mysql> flush privileges;
```
Then restart MySQL on both nodes:

```shell
/etc/init.d/mysql restart
```
Check the replication status on node 1 (frtls-testmysql-1):

```
mysql> show master status\G
*************************** 1. row ***************************
            File: mysql-bin.000001
        Position: 348
    Binlog_Do_DB: kikoo
Binlog_Ignore_DB:
1 row in set (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.41.51.145
                  Master_User: replicationUser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 106
               Relay_Log_File: frtls-testmysql-1-relay-bin.000002
                Relay_Log_Pos: 251
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: kikoo
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 106
              Relay_Log_Space: 418
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)
```
And on node 2 (frtls-testmysql-2):

```
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.41.51.163
                  Master_User: replicationUser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 348
               Relay_Log_File: frtls-testmysql-2-relay-bin.000002
                Relay_Log_Pos: 493
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: kikoo
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 348
              Relay_Log_Space: 660
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      106 | kikoo        |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
```
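When reading these outputs, the two fields that matter are `Slave_IO_Running` and `Slave_SQL_Running`: both must say `Yes`. As a small sketch (not part of the original setup), a shell helper can grep them out of a `SHOW SLAVE STATUS\G` dump:

```shell
# Sketch: check the two replication threads in a SHOW SLAVE STATUS\G dump.
# Feed it the output of: mysql -e 'show slave status\G'
check_slave() {
  status="$(cat)"
  io="$(printf '%s\n' "$status"  | awk '/Slave_IO_Running:/  {print $2}')"
  sql="$(printf '%s\n' "$status" | awk '/Slave_SQL_Running:/ {print $2}')"
  if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    echo "replication OK"
  else
    echo "replication BROKEN (IO=$io SQL=$sql)"
  fi
}
```

Typical use on a node: `mysql -e 'show slave status\G' | check_slave`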
We now have a working MySQL cluster in master/master replication.
Should you later need to adjust the replication settings, do it with CHANGE MASTER TO (the user must match the replication account created above):

```
mysql> stop slave;
mysql> CHANGE MASTER TO MASTER_USER='replicationUser';
mysql> start slave;
```
MMM requires an agent on each MySQL node, plus a monitor running on a third machine. The monitor detects availability and/or replication failures on the nodes and moves the service between them using floating IPs.

The .deb packages are available here.
On each MySQL node, install the dependencies, then the common and agent packages:

```shell
apt-get install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl iproute libnet-arp-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl
dpkg -i mysql-mmm-common_2.2.1-1_all.deb mysql-mmm-agent_2.2.1-1_all.deb
```
On the monitor machine (frtls-testmysql-mon), install the dependencies:

```shell
apt-get install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl
```
Create the MMM monitor and agent accounts on the MySQL nodes:

```
grant replication client on *.* to 'mmm_monitorUser'@'10.41.51.184' identified by 'mmm_monitorPass';
grant super, replication client, process on *.* to 'mmm_agentUser'@'10.41.51.163' identified by 'mmm_agentPass';
grant super, replication client, process on *.* to 'mmm_agentUser'@'10.41.51.145' identified by 'mmm_agentPass';
flush privileges;
```
Monitor configuration (mmm_mon.conf):

```
include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mmm_mond.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmm_mond.status
    ping_ips        10.41.51.254, 10.41.51.145, 10.41.51.163
</monitor>

<host default>
    monitor_user        mmm_monitorUser
    monitor_password    mmm_monitorPass
</host>

debug 0
```
Shared configuration (mmm_common.conf):

```
active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        replicationUser
    replication_password    replicationPass
    agent_user              mmm_agentUser
    agent_password          mmm_agentPass
</host>

<host db1>
    ip      10.41.51.163
    mode    master
    peer    db2
</host>

<host db2>
    ip      10.41.51.145
    mode    master
    peer    db1
</host>

<role writer>
    hosts   db1, db2
    ips     10.41.51.240
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2
    ips     10.41.51.241
    mode    balanced
</role>
```
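One detail worth noting (an assumption based on the standard mysql-mmm 2.2 packaging, not shown above): each node's agent also reads an mmm_agent.conf that names the local host entry, along the lines of:

```
# mmm_agent.conf on frtls-testmysql-1 (use "this db2" on the second node)
include mmm_common.conf
this db1
```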
where 10.41.51.240 and 10.41.51.241 are the floating IPs for the "writer" and "reader" roles respectively. If the application supports it, the reader IP can be used to send read queries to the node not currently loaded by the writer role.
Enable and start the agent on each MySQL node:

```shell
sed -i "s/ENABLED=0/ENABLED=1/" /etc/default/mysql-mmm-agent
/etc/init.d/mysql-mmm-agent start
```

Then enable and start the monitor on the monitor machine:

```shell
sed -i "s/ENABLED=0/ENABLED=1/" /etc/default/mysql-mmm-monitor
/etc/init.d/mysql-mmm-monitor start
```
`mmm_control show` should return something similar to:

```
db1(10.41.51.163) master/AWAITING_RECOVERY. Roles:
db2(10.41.51.145) master/AWAITING_RECOVERY. Roles:
```
Bring db1 online:

```
mmm_control set_online db1
mmm_control show
  db1(10.41.51.163) master/ONLINE. Roles: reader(10.41.51.241), writer(10.41.51.240)
  db2(10.41.51.145) master/AWAITING_RECOVERY. Roles:
```
Then db2:

```
mmm_control set_online db2
mmm_control show
  db1(10.41.51.163) master/ONLINE. Roles: writer(10.41.51.240)
  db2(10.41.51.145) master/ONLINE. Roles: reader(10.41.51.241)
```
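To script health checks on top of this, one can parse the `mmm_control show` output. A minimal sketch, assuming the output format shown above:

```shell
# Sketch: succeed only if every node listed by `mmm_control show` is ONLINE.
all_online() {
  awk '/master\// && $0 !~ /ONLINE/ { bad = 1 } END { exit bad }'
}
# Usage: mmm_control show | all_online && echo "cluster healthy"
```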
And there we have it, an operational mysql-mmm cluster: all that's left is to create databases on it.