Deploy a MongoDB Replica Set on Docker Swarm
Introduction
A three-member replica set is the standard deployment for production systems. It provides enough redundancy to survive most network partitions and other system failures, and it has sufficient capacity for many distributed read operations.
Prerequisites
- Four servers (one manager, three workers); if you do not have any, Vultr, UpCloud, and DigitalOcean all offer small free credits you can use to create them.
- Four hostnames for your servers, for example, manager.example.com, worker0.example.com, worker1.example.com, worker2.example.com.
- Valid SSL/TLS certificates for your hostnames.
- Basic knowledge of SSH.
- Basic understanding of MongoDB and Docker.
Deploy a New Replica Set
For better performance, use the XFS filesystem on all your servers. In this article, I will use Ubuntu 18.04. For SSL, I will use letsencrypt.org, and I assume that your domain's DNS is managed at cloudflare.com.
Install Docker on all servers
ssh into all your servers and run the following commands:
# Update server
apt-get update && apt-get upgrade -y
# Install Docker
curl https://raw.githubusercontent.com/harrytang/linux-cmds/master/install-docker-ubuntu.sh | sh
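The install script comes from an external repository, so it is worth confirming that Docker is installed and running before moving on:
# Verify the Docker installation
docker --version
# Make sure the Docker daemon is running and starts on boot
systemctl enable --now docker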
Configure the manager and workers
ssh into the manager server and run the command below (replace YOUR_SERVER_IP with your real IP):
docker swarm init --advertise-addr YOUR_SERVER_IP
Then, ssh into the three worker servers and run this command (again, replace TOKEN and YOUR_MANAGER_IP with your real values):
docker swarm join --token TOKEN YOUR_MANAGER_IP:2377
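If you did not note the token from the docker swarm init output, you can print the full worker join command again on the manager at any time:
# Run on the manager to print the join command (including the token) for workers
docker swarm join-token worker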
We want each MongoDB replica to run on a separate Docker node; the simplest way to enforce this is with Docker node labels. First, list all nodes by running this command on the manager server:
docker node ls
You will see something like this:
ID                          HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
ahen750s5130miryjyn2fv7b2 * manager    Ready    Drain          Leader           19.03.2
j7ahrccg7pmq5xtw7hrj4a781   worker0    Ready    Active                          19.03.2
kh9uktekpsuu9btj1pjuv3hkm   worker1    Ready    Active                          19.03.2
vhw3wy9js9wkxqyedylnjkhc2   worker2    Ready    Active                          19.03.2
Now you can see your node IDs; let's apply the labels:
docker node update --label-add mongo.rs=0 j7a
docker node update --label-add mongo.rs=1 kh9
docker node update --label-add mongo.rs=2 vhw
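Note that a unique prefix of each node ID is enough. You can confirm that the labels were applied with docker node inspect:
# Print the labels of each worker node
docker node inspect -f '{{ json .Spec.Labels }}' worker0
docker node inspect -f '{{ json .Spec.Labels }}' worker1
docker node inspect -f '{{ json .Spec.Labels }}' worker2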
Configure MongoDB on the workers
You need three hostnames with valid certificates, for example worker0.example.com, worker1.example.com, and worker2.example.com. With the help of acme.sh and letsencrypt.org, you can easily obtain them. A full description of issuing the certificates is beyond the scope of this post; I assume you already have valid certificates for all your hostnames.
We also need to create some directories to store the data and configuration files; just run the command below to get things done:
curl https://raw.githubusercontent.com/powerkernel/docker-mongodb-rs/master/mongodb-rs-init.sh | sh
You will see a directory in /data/mongodb. Since we use Keyfile Access Control for our replica set, we need to copy the /data/mongodb/keyfile from worker0 to worker1 and worker2, overwriting the keyfiles there.
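A minimal sketch of that copy with scp, assuming you have root SSH access between the workers; run it from worker0. Keep the permissions strict, since mongod refuses keyfiles that other users can read, and make sure the file is owned by the user mongod runs as:
# Run on worker0: overwrite the keyfile on the other two workers
scp /data/mongodb/keyfile root@worker1.example.com:/data/mongodb/keyfile
scp /data/mongodb/keyfile root@worker2.example.com:/data/mongodb/keyfile
# On every worker: the keyfile must be readable only by its owner
chmod 400 /data/mongodb/keyfile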
We also need to tell acme.sh to reload MongoDB every time the certificates are renewed, so run the command below on every worker:
acme.sh --install-cert -d hostname.example.com \
--key-file /root/certs/hostname/key.pem \
--cert-file /root/certs/hostname/cert.pem \
--fullchain-file /root/certs/hostname/fullchain.pem \
--ca-file /root/certs/hostname/ca.pem \
--reloadcmd "cat /root/certs/hostname/key.pem /root/certs/hostname/fullchain.pem > /root/certs/hostname/mongodb.pem && cat /root/certs/hostname/key.pem /root/certs/hostname/fullchain.pem > /root/certs/all/hostname.pem && /root/scripts/reload-hostname-cert.sh"
Note: replace hostname.example.com with your hostname, for example worker0.example.com.
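The reload-hostname-cert.sh script referenced in the reloadcmd should make mongod pick up the renewed mongodb.pem. If you need to write your own, a minimal sketch is to restart the local MongoDB container; the name filter here is an assumption, so adjust it to match your stack:
# Hypothetical reload script: restart the local MongoDB container
# so mongod loads the renewed certificate
docker ps -q --filter "name=mongodb" | xargs -r docker restart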
Deploy MongoDB on the manager node
Now log in to the manager node, download this docker-compose.yml file, and run the command below:
docker stack deploy -c docker-compose.yml mongodb
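For reference, the stack file pins one MongoDB service to each node via the mongo.rs labels we set earlier. Below is a hypothetical single-service excerpt; the image tag, paths, and mongod flags are assumptions based on the directories created above, so prefer the linked file. The other two members look the same, with mongo.rs == 1 and mongo.rs == 2 and their own certificate paths:
# Sketch of one of the three services in docker-compose.yml
cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  mongo-rs0:
    image: mongo:4.2
    command: mongod --replSet mongo-rs --keyFile /data/keyfile --tlsMode requireTLS --tlsCertificateKeyFile /certs/mongodb.pem --bind_ip_all
    ports:
      - target: 27017
        published: 27017
        mode: host
    volumes:
      - /data/mongodb/db:/data/db
      - /data/mongodb/keyfile:/data/keyfile:ro
      - /root/certs/worker0:/certs:ro
    deploy:
      placement:
        constraints:
          - node.labels.mongo.rs == 0
EOF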
Initiate the replica set
ssh into one of the worker servers and open a shell in the MongoDB container (use docker ps to get the CONTAINER_ID):
docker exec -it CONTAINER_ID /bin/bash
Then run the commands below:
mongo --tls --tlsAllowInvalidCertificates
rs.initiate({
  _id: "mongo-rs",
  members: [
    { _id: 0, host: "worker0.example.com:27017" },
    { _id: 1, host: "worker1.example.com:27017" },
    { _id: 2, host: "worker2.example.com:27017" }
  ]
})
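Give the set a few seconds to elect a primary. You can then confirm the member states from the worker's host shell:
# Check member states (replace CONTAINER_ID); expect one PRIMARY and two SECONDARY
docker exec -it CONTAINER_ID mongo --tls --tlsAllowInvalidCertificates --quiet --eval "rs.status().members.forEach(function (m) { print(m.name + ' ' + m.stateStr) })"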
Next, add a root user for MongoDB:
admin = db.getSiblingDB("admin")
admin.createUser(
  {
    user: "root",
    pwd: "YOUR_MONGO_ROOT_PASS",
    roles: [ { role: "root", db: "admin" } ]
  }
)
Connect to your MongoDB replica set:
mongodb://root:YOUR_MONGO_ROOT_PASS@worker0.example.com:27017,worker1.example.com:27017,worker2.example.com:27017/admin?replicaSet=mongo-rs&tls=true
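For example, you can test the URI from any machine that has the mongo shell installed and resolves your worker hostnames (quote it, since it contains & characters):
# Test the connection string (replace the password)
mongo "mongodb://root:YOUR_MONGO_ROOT_PASS@worker0.example.com:27017,worker1.example.com:27017,worker2.example.com:27017/admin?replicaSet=mongo-rs&tls=true"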
Or you can set up the DNS Seedlist Connection Format:
mongodb+srv://mongo.example.com
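The +srv scheme implies tls=true and discovers the members through DNS, so you need SRV records for the hosts and a TXT record for the options. A sketch of the records to create at cloudflare.com, using this article's hostnames, plus a quick verification:
# Records the driver looks up:
#   _mongodb._tcp.mongo.example.com.  SRV  0 0 27017 worker0.example.com.
#   _mongodb._tcp.mongo.example.com.  SRV  0 0 27017 worker1.example.com.
#   _mongodb._tcp.mongo.example.com.  SRV  0 0 27017 worker2.example.com.
#   mongo.example.com.                TXT  "replicaSet=mongo-rs"
# Verify that they resolve:
dig SRV _mongodb._tcp.mongo.example.com +short
dig TXT mongo.example.com +short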
Tip: If you want to add more users, use the following commands:
use admin
db.createUser(
  {
    user: "YOUR_USER_NAME",
    pwd: "YOUR_LONG_LONG_PW",
    roles: [
      { role: "readWrite", db: "YOUR_DB_NAME" }
    ],
    mechanisms: [ "SCRAM-SHA-1" ]
  }
)
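Since the user was created in the admin database, pass authSource=admin when connecting. A quick check that the new account works:
# Authenticate as the new user and print the connection status
mongo "mongodb://YOUR_USER_NAME:YOUR_LONG_LONG_PW@worker0.example.com:27017,worker1.example.com:27017,worker2.example.com:27017/YOUR_DB_NAME?replicaSet=mongo-rs&tls=true&authSource=admin" --eval 'db.runCommand({ connectionStatus: 1 })'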
Set up high availability
We now have just one manager node. You can join additional manager nodes to the cluster so that if one manager fails, another can automatically take its place without impacting the cluster.
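For example, on the current manager:
# Print the join command for additional manager nodes
docker swarm join-token manager
# Or promote an existing worker to a manager
docker node promote worker0
# Keep an odd number of managers (3, 5, ...) so the cluster can keep quorum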
Conclusion
Congratulations, you now have a production-ready MongoDB replica set!