MySQL: create an InnoDB cluster
What is MySQL InnoDB Cluster?
MySQL InnoDB Cluster is an easy way to increase the availability of a MySQL database and to scale out read-only operations.
For the simplest configuration, we need three hosts with MySQL Server installed and running.
In my case, I'll use MySQL sandboxes in MySQL Shell, as this is the easiest way to create a cluster. For a real cluster, you should use real hosts; I'd recommend an enterprise Linux distribution for that.
The smallest cluster needs 3 hosts and can tolerate the outage of one of them. The number of hosts should be odd (3, 5, etc.) so that a majority can always be formed; a 5-node cluster, for example, tolerates two failures.
Create an InnoDB Cluster
Start MySQL Shell and switch to JavaScript mode:
# mysqlsh
MySQL Shell 9.1.0
Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.
MySQL SQL > \js
Switching to JavaScript mode...
MySQL JS >
Prepare sandboxes
Use dba.deploySandboxInstance(<port>) to create MySQL sandboxes. In my case, I use ports 3301, 3302 and 3303.
MySQL JS > dba.deploySandboxInstance(3301)
A new MySQL sandbox instance will be created on this host in
/home/martin/mysql-sandboxes/3301
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Please enter a MySQL root password for the new instance: ************
Deploying new MySQL instance...
Instance localhost:3301 successfully deployed and started.
Use shell.connect('root@localhost:3301') to connect to the instance.
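If you want all three sandboxes in one go, you can deploy them in a short loop in MySQL Shell's JavaScript mode (a sketch; each call still prompts for a root password):

```javascript
// Deploy sandbox instances on ports 3301-3303.
// Each dba.deploySandboxInstance() call prompts for a root password.
for (const port of [3301, 3302, 3303]) {
    dba.deploySandboxInstance(port);
}
```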
Prepare instances
As a standard MySQL instance isn't prepared for InnoDB Cluster yet, we'll have to configure it first. It's quite easy:
MySQL JS > dba.configureInstance('root@localhost:3301')
Please provide the password for 'root@localhost:3301': ************
Save password for 'root@localhost:3301'? [Y]es/[N]o/Ne[v]er (default No): Y
Configuring local MySQL instance listening at port 3301 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3301
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3301' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
Don't forget to do that for all 3 instances.
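The same loop approach works for the configuration step (a sketch in JS mode; each instance prompts for its password unless it has been saved):

```javascript
// Configure all three sandbox instances for InnoDB Cluster use.
for (const port of [3301, 3302, 3303]) {
    dba.configureInstance('root@localhost:' + port);
}
```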
Create Cluster
First we'll have to connect to one of the instances. In my case, I'll start with the first one:
MySQL JS > \c root@localhost:3301
Creating a session to 'root@localhost:3301'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 12
Server version: 9.1.0 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
Then we create the cluster:
MySQL localhost:3301 ssl JS > dba.createCluster('SandboxCluster');
A new InnoDB Cluster will be created on instance '127.0.0.1:3301'.
Validating instance configuration at localhost:3301...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3301
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3301'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
Creating InnoDB Cluster 'SandboxCluster' on '127.0.0.1:3301'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
<Cluster:SandboxCluster>
We can display the status of the newly created cluster:
MySQL localhost:3301 ssl JS > dba.getCluster().status()
{
"clusterName": "SandboxCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "127.0.0.1:3301",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"127.0.0.1:3301": {
"address": "127.0.0.1:3301",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "9.1.0"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "127.0.0.1:3301"
}
Add instances to the cluster
Now, we can add the other 2 instances to the cluster (I'll show only the command for instance 3302):
MySQL localhost:3301 ssl JS > dba.getCluster().addInstance('root@localhost:3302')
NOTE: The target instance '127.0.0.1:3302' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3302' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Validating instance configuration at localhost:3302...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3302
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3302'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3302 is being cloned from 127.0.0.1:3301
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
NOTE: 127.0.0.1:3302 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3302 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.72 MB transferred in about 1 second (~74.72 MB/s)
State recovery already finished for '127.0.0.1:3302'
The instance '127.0.0.1:3302' was successfully added to the cluster.
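Adding instance 3303 works exactly the same way. With saved passwords and an explicit recoveryMethod option, both secondaries can even be added without interactive prompts (a sketch in JS mode):

```javascript
// Add the remaining instances using clone recovery,
// skipping the interactive recovery-method prompt.
var cluster = dba.getCluster();
for (const port of [3302, 3303]) {
    cluster.addInstance('root@localhost:' + port, {recoveryMethod: 'clone'});
}
```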
Check the status for the cluster
As seen before, the status can be checked like this:
MySQL localhost:3301 ssl JS > dba.getCluster().status()
{
"clusterName": "SandboxCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "127.0.0.1:3301",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:3301": {
"address": "127.0.0.1:3301",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "9.1.0"
},
"127.0.0.1:3302": {
"address": "127.0.0.1:3302",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "9.1.0"
},
"127.0.0.1:3303": {
"address": "127.0.0.1:3303",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "9.1.0"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "127.0.0.1:3301"
}
Access
The classic way
We can connect to the database using the MySQL Shell client, MySQL Workbench, or any other database client, using <username>@localhost:3301. This allows read/write as well as read-only operations, but it doesn't use the advantages of a cluster, like scaling out read operations.
The cluster way
To access the cluster with these advantages, we use MySQL Router. By default, it provides port 6446 for read/write operations and port 6447 for read-only operations. Additionally, port 6450 is provided for read/write splitting: using this port, MySQL Router separates read/write from read-only operations and distributes them over the cluster.
MySQL Router is bootstrapped like this:
mysqlrouter --bootstrap root@localhost:3301
Please enter MySQL password for root:
# Bootstrapping system MySQL Router 9.0.1 (MySQL Community - GPL) instance...
- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Creating configuration mysqlrouter.conf
# MySQL Router configured for the InnoDB Cluster 'SandboxCluster'
After this, MySQL Router can be started with the generated configuration:
> net start mysqlrouter
or
> mysqlrouter.exe -c mysqlrouter.conf
InnoDB Cluster 'SandboxCluster' can be reached by connecting to:
## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
- Read/Write Split Connections: localhost:6450
## MySQL X protocol
- Read/Write Connections: localhost:6448
- Read/Only Connections: localhost:6449
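From MySQL Shell, connecting through the Router looks the same as connecting to a single instance, just with the Router's port instead (assuming the Router runs on the same host):

```javascript
// Read/write session via the Router's classic-protocol R/W port
shell.connect('root@localhost:6446');

// Read-only session via the R/O port, balanced over the secondaries
shell.connect('root@localhost:6447');
```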
Switchover
If the old primary fails, InnoDB Cluster will automatically promote another instance to be the new primary:
MySQL localhost:3302 ssl JS > dba.getCluster().status()
{
"clusterName": "SandboxCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "127.0.0.1:3302",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:3301": {
"address": "127.0.0.1:3301",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "9.1.0"
},
"127.0.0.1:3302": {
"address": "127.0.0.1:3302",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "9.1.0"
},
"127.0.0.1:3303": {
"address": "127.0.0.1:3303",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "9.1.0"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "127.0.0.1:3302"
}
As you can see, 3302 is now the primary instance.
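Besides the automatic failover shown above, you can also switch the primary on purpose, e.g. before maintenance on the current primary. The AdminAPI provides setPrimaryInstance() for that (JS mode):

```javascript
// Planned switchover: make 3303 the new primary
dba.getCluster().setPrimaryInstance('root@localhost:3303');
```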