MySQL: Deploy InnoDB ClusterSet
What is InnoDB ClusterSet?
The MySQL documentation describes it as follows: "MySQL InnoDB ClusterSet provides disaster tolerance for InnoDB Cluster deployments by linking a primary InnoDB Cluster with one or more replicas of itself in alternate locations, such as different datacenters. InnoDB ClusterSet automatically manages replication from the primary cluster to the replica clusters using a dedicated ClusterSet replication channel. If the primary cluster becomes unavailable due to the loss of the data center or the loss of network connectivity to it, you can make a replica cluster active instead to restore the availability of the service."
Prepare instances
We'll create an InnoDB ClusterSet named ClusterSet_33_34, which means we will have 2 clusters (Cluster33 and Cluster34), each consisting of 3 hosts. The first 3 instances go into Cluster33, the other 3 into Cluster34.
So let's create 2x3 instances.
As I'm a lazy guy, I won't set up 6 servers. Instead, I'll use MySQL Shell to create sandbox instances. For production, I would of course use physical or virtual servers:
- Cluster33
  - 3301
  - 3302
  - 3303
- Cluster34
  - 3401
  - 3402
  - 3403
Create instances
MySQL sandbox instances are created like this:
MySQL JS > dba.deploySandboxInstance(3301)
A new MySQL sandbox instance will be created on this host in
C:\Users\martin.wendler\MySQL\mysql-sandboxes\3301
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Please enter a MySQL root password for the new instance: ************
Deploying new MySQL instance...
Instance localhost:3301 successfully deployed and started.
Use shell.connect('root@localhost:3301') to connect to the instance.
Repeat that for all the other instances.
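If you don't want to type that six times, you can loop over the ports in the Shell's JS mode. A minimal sketch; the password option avoids the interactive prompt, and the value shown is just a placeholder:
MySQL JS > var ports = [3301, 3302, 3303, 3401, 3402, 3403];
MySQL JS > ports.forEach(function (port) { dba.deploySandboxInstance(port, {password: 'MyRootPw!'}); });  // 'MyRootPw!' is a placeholder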
Configure instances
Now we have to prepare the instances:
MySQL JS > dba.configureInstance('root@localhost:3301')
Configuring local MySQL instance listening at port 3301 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3301
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3301' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
Do the same for all the other instances.
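This step can be scripted the same way. A minimal sketch; note that you'll still be prompted for each instance's root password unless you put it into the connection URI:
MySQL JS > var ports = [3301, 3302, 3303, 3401, 3402, 3403];
MySQL JS > ports.forEach(function (port) { dba.configureInstance('root@localhost:' + port); });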
Create InnoDB Cluster
First, connect to the first node of the first cluster (in my case: 3301):
MySQL JS > \c root@localhost:3301
Creating a session to 'root@localhost:3301'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 12
Server version: 8.2.0 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
Then create the first cluster:
MySQL localhost:3301 ssl JS > dba.createCluster('Cluster33')
A new InnoDB Cluster will be created on instance '127.0.0.1:3301'.
Validating instance configuration at localhost:3301...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3301
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3301'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
Creating InnoDB Cluster 'Cluster33' on '127.0.0.1:3301'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
Then, add the two other nodes (3302 and 3303; I'll show only 3302):
MySQL localhost:3301 ssl JS > dba.getCluster().addInstance('root@localhost:3302')
NOTE: The target instance '127.0.0.1:3302' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3302' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Validating instance configuration at localhost:3302...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3302
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3302'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3302 is being cloned from 127.0.0.1:3301
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
NOTE: 127.0.0.1:3302 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3302 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.70 MB transferred in about 1 second (~74.70 MB/s)
State recovery already finished for '127.0.0.1:3302'
The instance '127.0.0.1:3302' was successfully added to the cluster.
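When scripting this, the interactive recovery prompt can be skipped by passing the recoveryMethod option explicitly. A sketch for both secondaries:
MySQL localhost:3301 ssl JS > var cluster = dba.getCluster();
MySQL localhost:3301 ssl JS > cluster.addInstance('root@localhost:3302', {recoveryMethod: 'clone'});
MySQL localhost:3301 ssl JS > cluster.addInstance('root@localhost:3303', {recoveryMethod: 'clone'});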
After creation, the cluster looks like this:
MySQL localhost:3301 ssl JS > dba.getCluster().status()
{
    "clusterName": "Cluster33",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3301",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3301": {
                "address": "127.0.0.1:3301",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.2.0"
            },
            "127.0.0.1:3302": {
                "address": "127.0.0.1:3302",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.2.0"
            },
            "127.0.0.1:3303": {
                "address": "127.0.0.1:3303",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.2.0"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3301"
}
Create InnoDB ClusterSet
Using the existing Cluster33, we can now create a new ClusterSet:
MySQL localhost:3301 ssl JS > dba.getCluster().createClusterSet('ClusterSet_33_34')
A new ClusterSet will be created based on the Cluster 'Cluster33'.
* Validating Cluster 'Cluster33' for ClusterSet compliance.
* Creating InnoDB ClusterSet 'ClusterSet_33_34' on 'Cluster33'...
* Updating metadata...
ClusterSet successfully created. Use ClusterSet.createReplicaCluster() to add Replica Clusters to it.
<ClusterSet:ClusterSet_33_34>
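createClusterSet() returns the ClusterSet object, so instead of chaining dba.getCluster().getClusterSet() every time, you can keep a handle in a variable; when connected to any member, dba.getClusterSet() should fetch it as well:
MySQL localhost:3301 ssl JS > var cs = dba.getClusterSet();
MySQL localhost:3301 ssl JS > cs.status();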
Check the status
We can see that the ClusterSet has been created, but so far it only consists of one cluster:
MySQL localhost:3301 ssl JS > dba.getCluster().getClusterSet().status()
{
    "clusters": {
        "Cluster33": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3301"
        }
    },
    "domainName": "ClusterSet_33_34",
    "globalPrimaryInstance": "127.0.0.1:3301",
    "primaryCluster": "Cluster33",
    "status": "HEALTHY",
    "statusText": "All Clusters available."
}
So let's create Cluster34, which will be the standby cluster:
MySQL localhost:3301 ssl JS > dba.getCluster().getClusterSet().createReplicaCluster('root@localhost:3401','Cluster34')
Setting up replica 'Cluster34' of cluster 'Cluster33' at instance '127.0.0.1:3401'.
A new InnoDB Cluster will be created on instance '127.0.0.1:3401'.
Validating instance configuration at localhost:3401...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3401
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3401'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3401' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether replication can completely recover its state.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3401' with a physical snapshot from an existing clusterset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
WARNING: It should be safe to rely on replication to incrementally recover the state of the new Replica Cluster if you are sure all updates ever executed in the ClusterSet were done with GTIDs enabled, there are no purged transactions and the instance used to create the new Replica Cluster contains the same GTID set as the ClusterSet or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
* Checking connectivity and SSL configuration to PRIMARY Cluster...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3401 is being cloned from 127.0.0.1:3301
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
* Clone process has finished: 74.70 MB transferred in about 1 second (~74.70 MB/s)
Creating InnoDB Cluster 'Cluster34' on '127.0.0.1:3401'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
Cluster "memberAuthType" is set to 'PASSWORD' (inherited from the ClusterSet).
* Configuring ClusterSet managed replication channel...
** Changing replication source of 127.0.0.1:3401 to 127.0.0.1:3301
* Waiting for instance '127.0.0.1:3401' to synchronize with PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
* Updating topology
* Waiting for the Cluster to synchronize with the PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
Replica Cluster 'Cluster34' successfully created on ClusterSet 'ClusterSet_33_34'.
<Cluster:Cluster34>
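As with addInstance(), the recovery prompt can be avoided in scripts by passing the recoveryMethod option. A minimal sketch of the non-interactive form:
MySQL localhost:3301 ssl JS > dba.getClusterSet().createReplicaCluster('root@localhost:3401', 'Cluster34', {recoveryMethod: 'clone'});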
And as Cluster34 currently consists of only one node, we'll add the other two.
Log on to node 3401:
MySQL localhost:3301 ssl JS > \c root@localhost:3401
Creating a session to 'root@localhost:3401'
Fetching schema names for auto-completion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 121
Server version: 8.2.0 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
And add the other two nodes (I'll show only 3402; 3403 would be much the same):
MySQL localhost:3401 ssl JS > dba.getCluster().addInstance('root@localhost:3402')
NOTE: A GTID set check of the MySQL instance at '127.0.0.1:3402' determined that it is missing transactions that were purged from all cluster members.
NOTE: The target instance '127.0.0.1:3402' has not been pre-provisioned (GTID set is empty). The Shell is unable to determine whether the instance has pre-existing data that would be overwritten with clone based recovery.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3402' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
Please select a recovery method [C]lone/[A]bort (default Clone):
Validating instance configuration at localhost:3402...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3402
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3402'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
* Waiting for the Cluster to synchronize with the PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
* Configuring ClusterSet managed replication channel...
** Changing replication source of 127.0.0.1:3402 to 127.0.0.1:3301
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3402 is being cloned from 127.0.0.1:3401
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
NOTE: 127.0.0.1:3402 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3402 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 75.92 MB transferred in about 1 second (~75.92 MB/s)
State recovery already finished for '127.0.0.1:3402'
The instance '127.0.0.1:3402' was successfully added to the cluster.
Let's check the status of Cluster34:
MySQL localhost:3401 ssl JS > dba.getCluster().status()
{
    "clusterName": "Cluster34",
    "clusterRole": "REPLICA",
    "clusterSetReplicationStatus": "OK",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3401",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3401": {
                "address": "127.0.0.1:3401",
                "memberRole": "PRIMARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLagFromImmediateSource": "",
                "replicationLagFromOriginalSource": "",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.2.0"
            },
            "127.0.0.1:3402": {
                "address": "127.0.0.1:3402",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLagFromImmediateSource": "",
                "replicationLagFromOriginalSource": "",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.2.0"
            },
            "127.0.0.1:3403": {
                "address": "127.0.0.1:3403",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLagFromImmediateSource": "",
                "replicationLagFromOriginalSource": "",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.2.0"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "domainName": "ClusterSet_33_34",
    "groupInformationSourceMember": "127.0.0.1:3401",
    "metadataServer": "127.0.0.1:3301"
}
And now, let's check the status of the whole ClusterSet:
MySQL localhost:3401 ssl JS > dba.getCluster().getClusterSet().status()
{
    "clusters": {
        "Cluster33": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3301"
        },
        "Cluster34": {
            "clusterRole": "REPLICA",
            "clusterSetReplicationStatus": "OK",
            "globalStatus": "OK"
        }
    },
    "domainName": "ClusterSet_33_34",
    "globalPrimaryInstance": "127.0.0.1:3301",
    "primaryCluster": "Cluster33",
    "status": "HEALTHY",
    "statusText": "All Clusters available."
}
If you want more information, you can add extended: 1 to the command:
MySQL localhost:3401 ssl JS > dba.getCluster().getClusterSet().status({extended: 1})
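As far as I know, extended accepts the verbosity levels 0 (the default) through 3; higher levels add details such as the ClusterSet replication channels and the individual Group Replication members. For example:
MySQL localhost:3401 ssl JS > dba.getClusterSet().status({extended: 2})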
Access the databases
The classic way
We can connect to the database using the MySQL Shell client, MySQL Workbench, or any other database client, using <username>@localhost:3301. This allows read/write as well as read-only operations, but it doesn't use the advantages of a cluster, like scaling out read operations.
The cluster way
To access the cluster with these advantages, we use MySQL Router. By default, it provides port 6446 for read/write operations and port 6447 for read-only operations. Additionally, port 6450 is provided for read/write splitting: on this port, MySQL Router separates read/write from read-only statements and distributes them across the cluster.
MySQL Router is bootstrapped like this:
mysqlrouter --bootstrap root@localhost:3301
Please enter MySQL password for root:
# Bootstrapping system MySQL Router 9.0.1 (MySQL Community - GPL) instance...
- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Creating configuration mysqlrouter.conf
# MySQL Router configured for the ClusterSet 'ClusterSet_33_34'
After this, MySQL Router can be started with the generated configuration:
> net start mysqlrouter
or
> mysqlrouter.exe -c mysqlrouter.conf
ClusterSet 'ClusterSet_33_34' can be reached by connecting to:
## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
- Read/Write Split Connections: localhost:6450
## MySQL X protocol
- Read/Write Connections: localhost:6448
- Read/Only Connections: localhost:6449
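To verify the routing, connect through one of the Router ports and ask the backend which server you actually landed on. On the read/write port this should report 3301, as long as 3301 is the primary instance of the primary cluster:
MySQL JS > \c root@localhost:6446
MySQL localhost:6446 ssl JS > session.runSql('SELECT @@port').fetchOne()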
Switchover
There are two things that can be switched over:
- the primary instance
- the primary cluster
Switch over the primary instance
This switches the primary instance to another node:
MySQL localhost:3401 ssl JS > dba.getCluster().setPrimaryInstance('root@localhost:3402')
Setting instance 'localhost:3402' as the primary instance of cluster 'Cluster34'...
Instance '127.0.0.1:3403' remains SECONDARY.
Instance '127.0.0.1:3402' was switched from SECONDARY to PRIMARY.
Instance '127.0.0.1:3401' was switched from PRIMARY to SECONDARY.
The instance 'localhost:3402' was successfully elected as primary.
The primary cluster will not be switched over.
Switch over the primary cluster
This promotes the formerly secondary cluster to primary:
MySQL localhost:3401 ssl JS > dba.getCluster().getClusterSet().setPrimaryCluster('Cluster34')
Switching the primary cluster of the clusterset to 'Cluster34'
* Verifying clusterset status
** Checking cluster Cluster33
Cluster 'Cluster33' is available
** Checking cluster Cluster34
Cluster 'Cluster34' is available
** Waiting for the promoted cluster to apply pending received transactions...
* Reconciling 5 internally generated GTIDs
* Refreshing replication account of demoted cluster
* Synchronizing transaction backlog at 127.0.0.1:3402
** Transactions replicated ############################################################ 100%
* Updating metadata
* Updating topology
** Changing replication source of 127.0.0.1:3302 to 127.0.0.1:3402
** Changing replication source of 127.0.0.1:3303 to 127.0.0.1:3402
** Changing replication source of 127.0.0.1:3301 to 127.0.0.1:3402
* Acquiring locks in ClusterSet instances
** Pre-synchronizing SECONDARIES
** Acquiring global lock at PRIMARY
** Acquiring global lock at SECONDARIES
* Synchronizing remaining transactions at promoted primary
** Transactions replicated ############################################################ 100%
* Updating replica clusters
Cluster 'Cluster34' was promoted to PRIMARY of the clusterset. The PRIMARY instance is '127.0.0.1:3402'
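After the switchover, it's worth confirming the new roles; Cluster34 should now show clusterRole PRIMARY and Cluster33 REPLICA:
MySQL localhost:3401 ssl JS > dba.getClusterSet().status()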
Conclusion
MySQL InnoDB ClusterSet is a great solution for running InnoDB Clusters across different locations. It scales read-only traffic across the complete ClusterSet. But we should keep in mind that read/write operations only happen on the primary instance of the primary cluster. So if we change or write a very large number of rows from the read-only location, throughput depends on the network speed between the two locations.