MySQL Replication: Master-slave
Master-slave data replication allows us to easily maintain multiple copies of MySQL data by having them copied automatically from a master to a slave database. We can back up the data, analyze it without touching the main database, or scale out our database.
Here is how MySQL replication works:
- The master server records all data changes to its binary log (binary log events) and sends them to the slave, using a thread called the Binlog dump thread, once the slave connects to the master.
- The slave copies the binary log events sent by the master's Binlog dump thread to its relay logs, using a thread called the Slave I/O thread.
- The slave applies these changes from the relay logs to its own data by replaying (executing) all events, using a thread called the Slave SQL thread.
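The three steps above can be illustrated with a toy sketch using plain files (no real MySQL involved; the file paths and "events" are made up purely for the illustration):

```shell
# A toy illustration of the three replication steps using plain files:
# the "binlog" records events, the "I/O thread" copies them to a
# "relay log", and the "SQL thread" replays them.
binlog=/tmp/master-binlog.log
relay=/tmp/slave-relay.log

echo "INSERT INTO t VALUES (1)" >  "$binlog"   # master writes a binlog event
echo "INSERT INTO t VALUES (2)" >> "$binlog"   # another event

cp "$binlog" "$relay"                          # Slave I/O thread: binlog -> relay log

applied=$(grep -c . "$relay")                  # Slave SQL thread: replay each event
echo "applied $applied events"
```

In real MySQL the copy and replay are continuous background threads, but the data flow is the same: binary log, then relay log, then the slave's own tables.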
We'll set up a very simple MySQL replication: a master will send information to a single slave. So, we will need two IP addresses:
- master: 172.31.23.198
- slave: 172.31.23.197
Install mysql:
$ sudo apt-get update
$ sudo apt-get install mysql-server mysql-client
On the master server, open the MySQL configuration file (/etc/mysql/my.cnf) and set the bind-address to the master's IP address:
bind-address = 172.31.23.198
The next configuration change is the server-id, located in the [mysqld] section. We can choose any number here, but it must be unique and cannot match any other server-id in our replication group. We can simply uncomment the line:
server-id = 1
Now, let's move on to the log_bin line, where the real details of the replication are kept: the slave copies all of the changes registered in this log. For this step we simply need to uncomment the line that refers to log_bin:
log_bin = /var/log/mysql/mysql-bin.log
As a final step, we need to designate the database that will be replicated on the slave server. We can include more than one database.
binlog_do_db = mydb1
Save the configuration file, and restart MySQL:
$ sudo service mysql restart
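Before moving on, we can sanity-check that all four master-side settings made it into the file. A small sketch (it writes a sample my.cnf under /tmp rather than touching the real /etc/mysql/my.cnf):

```shell
# Write a minimal sample of the master's my.cnf (illustration only),
# then confirm the four replication-related settings are present.
cat > /tmp/my.cnf.sample <<'EOF'
[mysqld]
bind-address = 172.31.23.198
server-id    = 1
log_bin      = /var/log/mysql/mysql-bin.log
binlog_do_db = mydb1
EOF
grep -E '^(bind-address|server-id|log_bin|binlog_do_db)' /tmp/my.cnf.sample
```

Against the real server, the same grep can simply be pointed at /etc/mysql/my.cnf.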
Open up the MySQL shell:
$ mysql -u root -p
We need to grant replication privileges to the slave. The following line names our replication user and sets its password. The command should be in the following format:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave_user_name'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)
To tell the server to reload the grant tables, perform a flush-privileges operation by issuing the following command:
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
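A side note: the GRANT ... IDENTIFIED BY form above is for older MySQL versions. On MySQL 8.0 and later, GRANT can no longer create a user, so the user must be created first. The equivalent on a newer version would be:

```sql
-- Equivalent on newer MySQL versions (8.0+), where GRANT no longer creates users:
CREATE USER 'slave_user_name'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'slave_user_name'@'%';
FLUSH PRIVILEGES;
```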
Create a new database called mydb1:
mysql> CREATE DATABASE mydb1;
Switch to "mydb1":
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb1              |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)

mysql> use mydb1
Database changed
Lock the database to prevent any new changes:
mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.06 sec)
Then type in "SHOW MASTER STATUS;"
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      107 | mydb1        |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
This is the position from which the slave database will start replicating. Note that if we make any new changes in the same MySQL session, the database will automatically unlock. Therefore, we should open a new tab/window and continue with the next steps there.
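If we later want to capture these coordinates in a script, a sketch like the following works; here a sample tab-separated line (the format produced by `mysql -s -e 'SHOW MASTER STATUS'`) stands in for a live query:

```shell
# Extract the File and Position columns from SHOW MASTER STATUS output.
# The sample line below mimics the tab-separated output of
# `mysql -s -e 'SHOW MASTER STATUS'` (no live server is queried here).
sample="$(printf 'mysql-bin.000002\t107\tmydb1\t')"
MASTER_LOG_FILE=$(printf '%s\n' "$sample" | awk '{print $1}')
MASTER_LOG_POS=$(printf '%s\n' "$sample" | awk '{print $2}')
echo "file=$MASTER_LOG_FILE pos=$MASTER_LOG_POS"
```

The two variables are exactly the values we'll feed to CHANGE MASTER TO on the slave later.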
With the database still locked, export it using mysqldump in the new window (make sure to type this command in the bash shell, not in MySQL):
ubuntu@ip-172-31-23-198:~$ mysqldump -u root -p --opt mydb1 > mydb1.sql
Now, returning to our original sql shell, unlock the databases to make them writable again. Finish up by exiting the shell.
mysql> UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)
mysql> quit;
Bye
ubuntu@ip-172-31-23-198:~$
We will now begin to configure the slave database. Log into our slave server, open up the MySQL shell and create the new database that we will be replicating from the master (then exit):
$ mysql -u root -p
...
mysql> CREATE DATABASE mydb1;
Query OK, 1 row affected (0.00 sec)
mysql> exit;
Bye
ubuntu@ip-172-31-23-197:~$
Import the database that we previously exported from the master database:
ubuntu@ip-172-31-23-197:~$ mysql -u root -p mydb1 < mydb1.sql
Now we need to edit the slave's configuration file (/etc/mysql/my.cnf) in the same way as we did the master's. There are a few settings to change. The first is the server-id which, as mentioned before, needs to be unique. Since it is set to 1 by default, we change it to 2:
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = mydb1
Note that we need to add the relay-log line since it is not given by default. Once we have made all the changes, save and exit out of the slave configuration file.
Then, restart MySQL on the slave:
ubuntu@ip-172-31-23-197:~$ sudo service mysql restart
The next step is to enable replication from within the MySQL shell. Open up the MySQL shell and type in the following details, replacing the values to match our information:
mysql> CHANGE MASTER TO MASTER_HOST='172.31.23.198', MASTER_USER='slave_user_name', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=107;
Query OK, 0 rows affected (0.09 sec)
This command does the following:
- It designates the current server as the slave of our master server.
- It provides the server the correct login credentials.
- It lets the slave server know where to start replicating from; the master log file and log position come from the numbers we wrote down previously.
Now that we have configured the master and slave servers, let's activate the slave:
mysql> START SLAVE;
We can see the details of the slave replication by typing in the following command. The \G rearranges the text to make it more readable.
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: 172.31.23.198
                  Master_User: slave_user_name
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 107
               Relay_Log_File: mysql-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Connecting
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 107
              Relay_Log_Space: 107
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 0
1 row in set (0.00 sec)
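Once the slave has connected, both Slave_IO_Running and Slave_SQL_Running should read Yes. A small sketch for checking them from a script; the sample text below stands in for real SHOW SLAVE STATUS output, since this doesn't query a live server:

```shell
# Check the two replication thread states from SHOW SLAVE STATUS output.
# Sample lines stand in for real output from the slave.
status="Slave_IO_Running: Yes
Slave_SQL_Running: Yes"
io=$(echo "$status" | awk -F': ' '/Slave_IO_Running/ {print $2}')
sql=$(echo "$status" | awk -F': ' '/Slave_SQL_Running/ {print $2}')
if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
  msg="replication healthy"
else
  msg="replication broken: IO=$io SQL=$sql"
fi
echo "$msg"
```

Against a live slave, the status text could be captured with something like `mysql -u root -p -e 'SHOW SLAVE STATUS\G'` and fed through the same awk filters.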