Running MongoDB in OpenShift/Kubernetes on HPE 3PAR
This tutorial will show you how easy it is to run MongoDB on Kubernetes/Red Hat OpenShift. In addition to setting up MongoDB on Kubernetes, we will show you how to use the HPE 3PAR Volume Plug-in for Docker to present persistent volumes to the MongoDB nodes.
There are many ways to run MongoDB in Kubernetes: as a single instance or as multiple instances via a StatefulSet. StatefulSets are the recommended way to deploy databases because they maintain stable identities across Pod scheduling, including stable, persistent storage.
There are plenty of examples of deploying MongoDB in Kubernetes on the internet, so most of this should look familiar. What is different here is that we will use the HPE 3PAR Volume Plug-in for Docker to create a Storage Class, with the related 3PAR parameters, that provisions the data volumes for MongoDB.
We are assuming that you already have a Kubernetes or Red Hat OpenShift cluster deployed and the HPE 3PAR Volume Plug-in for Docker installed. If you need help with this go to the following pages to learn more:
Learn Kubernetes Basics
https://kubernetes.io/docs/tutorials/kubernetes-basics/
or
Learn Red Hat OpenShift
https://www.openshift.com/learn/get-started/
HPE 3PAR Volume Plug-in for Docker
https://github.com/hpe-storage/python-hpedockerplugin
Note: We will be using Red Hat OpenShift in this demo. You can use OpenShift oc commands and kubectl commands interchangeably.
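For example, either of the following lists the pods in the current project/namespace:

```
# OpenShift CLI
oc get pods

# Equivalent kubectl command
kubectl get pods
```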
Getting Started
Now that we have that out of the way, let's get started by setting up the Storage Class we will be using for our MongoDB instances. The Storage Class will use the HPE 3PAR dynamic provisioner to create persistent volumes in Kubernetes that help our databases operate at peak performance.
The following 3PAR volume parameters will be used in this example:
- Fully provisioned volumes
- All-flash 3PAR Storage array backend
- Peer Persistence Replication enabled
You can customize them as needed; a full list of the Storage Class parameters supported by the 3PAR dynamic provisioner can be found here.
https://github.com/hpe-storage/python-hpedockerplugin/blob/master/docs/usage.md
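As an illustration of how these parameters can be customized, a thin-provisioned variant without replication might look like the following sketch. The name sc-mongo-thin is hypothetical; check the documentation linked above for the parameters supported by your plug-in version.

```
# Hypothetical example: thin-provisioned volumes, no replication group.
# Parameter names come from the HPE 3PAR Volume Plug-in documentation linked above.
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-mongo-thin
provisioner: hpe.com/hpe
parameters:
  provisioning: 'thin'
  cpg: 'SSD_r6'
```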
Here is what our Storage Class looks like:
```
# cat sc-mongo.yml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-mongo
provisioner: hpe.com/hpe
parameters:
  provisioning: 'full'
  cpg: 'SSD_r6'
  backend: '3PAR_all_flash'
  replicationGroup: 'mongodb-app'
```
We will import the Storage Class so we can use it later to create the persistent volumes that we will need when we deploy MongoDB.
# oc create -f sc-mongo.yml
Let's double-check that everything looks okay in OpenShift:
```
# oc describe sc sc-mongo
Name:                  sc-mongo
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           hpe.com/hpe
Parameters:            backend=3PAR_all_flash,cpg=SSD_r6,provisioning=full,replicationGroup=mongodb-app
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
```
Now we are ready to deploy MongoDB. Here is the example StatefulSet we will use. We only need to modify the Storage Class reference to point to the sc-mongo Storage Class we created earlier.
You can find the original MongoDB StatefulSet example in Google’s CodeLabs.
There are two sections in this deployment. The first is the headless Service needed for the networking component of MongoDB in Kubernetes. The second is the spec for three MongoDB instances using the StatefulSet object. We will update the volumeClaimTemplates section to specify the sc-mongo Storage Class.
It should look like this. It's a little long, but fairly straightforward.
```
# cat mongodb.yml
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3   # by default is 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: sc-mongo
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
```
Once we have specified the sc-mongo Storage Class, we can create the MongoDB instances by running:
# oc create -f mongodb.yml
We can monitor the creation of the MongoDB instances:
```
# oc get pvc
NAME                               STATUS   VOLUME                                          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongo-persistent-storage-mongo-0   Bound    sc-mongo-a667ce58-76a7-11e9-b787-0050569bb07c   50Gi       RWO            sc-mongo       1m
mongo-persistent-storage-mongo-1   Bound    sc-mongo-b47bc6b6-76a7-11e9-b787-0050569bb07c   50Gi       RWO            sc-mongo       1m
mongo-persistent-storage-mongo-2   Bound    sc-mongo-bee869cb-76a7-11e9-b787-0050569bb07c   50Gi       RWO            sc-mongo       1m

# oc get pods
NAME                       READY   STATUS              RESTARTS   AGE
docker-registry-1-z4llz    1/1     Running             7          32d
mongo-0                    2/2     Running             0          1m
mongo-1                    2/2     Running             0          1m
mongo-2                    0/2     ContainerCreating   0          10s
pod-nginx                  1/1     Running             0          1d
pod-nginx-replicated       1/1     Running             0          1d
registry-console-1-j5528   1/1     Running             7          32d
router-1-fh4sd             1/1     Running             8          32d
```
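If you want to confirm that the mongo-k8s-sidecar has joined the instances into a replica set, a quick check from the first pod should work once all pods are running (a minimal sketch; the member list will vary in your environment):

```
# Print each replica set member and its state from inside the mongod container
# (-c mongo targets the mongod container rather than the sidecar).
oc exec mongo-0 -c mongo -- mongo --eval "rs.status().members.forEach(function(m) { print(m.name, m.stateStr); })"
```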
You will also see the corresponding volumes being created on the 3PAR.
```
virt-3par-8440 cli% showvv
                                                                   -Rsvd(MiB)- -(MiB)--
Id Name                       Prov Compr Dedup Type CopyOf BsId Rd -Detailed_State-  Snp    Usr    VSize
 2 .shared.SSD_r6_0           dds  NA    No    base ---       2 RW normal              0  37376 67108864
 3 virtware-primary-vol1      tdvv No    Yes   base ---       3 RW normal            512  62208  4194304
 1 .srdata                    full NA    NA    base ---       1 RW normal              0  92160    92160
 0 admin                      full NA    NA    base ---       0 RW normal              0  10240    10240
68 dcv-4DG6wcCpQIinncdAqeCioQ cpvv NA    NA    base ---      68 RW normal            512  51200    51200
69 dcv-AWwuviI3S42dnyIMHD864w cpvv NA    NA    base ---      69 RW normal            512  51200    51200
67 dcv-xKDaUH.6RFKC2fwrluj0Mw cpvv NA    NA    base ---      67 RW normal            512  51200    51200
--------------------------------------------------------------------------------------------------------
 7 total                                                                            2048 355584  7109836
```
You can also use showrcopy to check the status of the volumes being replicated.
```
virt-3par-8440 cli% showrcopy

Remote Copy System Information
Status: Started, Normal

Target Information

Name           ID Type Status Options Policy
virt-3par-7200  1 IP   ready  -       mirror_config

Link Information

Target         Node  Address      Status Options
virt-3par-7200 0:3:1 172.17.20.12 Up     -
virt-3par-7200 1:3:1 172.17.20.13 Up     -
receive        0:3:1 receive      Up     -
receive        1:3:1 receive      Up     -

Group Information

Name        Target         Status  Role      Mode Options
mongodb-app virt-3par-7200 Started Secondary Sync
  LocalVV                    ID RemoteVV                   ID SyncStatus LastSyncTime
  dcv-xKDaUH.6RFKC2fwrluj0Mw 67 dcv-xKDaUH.6RFKC2fwrluj0Mw 28 Synced     NA
  dcv-4DG6wcCpQIinncdAqeCioQ 68 dcv-4DG6wcCpQIinncdAqeCioQ 29 Synced     NA
  dcv-AWwuviI3S42dnyIMHD864w 69 dcv-AWwuviI3S42dnyIMHD864w 30 Synced     NA
```
You can connect to the MongoDB instances using the following connection string URI. Note that it uses the headless Service name and the port defined in the Service spec.
"mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?"
The use of the database is outside the scope of this post.
Scaling our new MongoDB environment
Now, say it has been a few months since you deployed the original three MongoDB instances and you need a couple more for your application. Rather than creating a new deployment, you can easily scale a StatefulSet in much the same way as a Kubernetes ReplicaSet.
If you want five MongoDB nodes instead of three, just run the scale command:
# kubectl scale --replicas=5 statefulset mongo
This will deploy two additional nodes, provision the storage along with them, and add them to the existing mongo replica set.
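A quick way to confirm the scale-out (mongo-3 and mongo-4 follow the StatefulSet ordinal naming):

```
# The StatefulSet creates pods mongo-3 and mongo-4, and the volumeClaimTemplates
# section provisions a new 50Gi volume from the sc-mongo Storage Class for each.
oc get pods -l role=mongo
oc get pvc
```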
Finally, include the two new nodes (mongo-3.mongo and mongo-4.mongo) in your connection string URI and you are good to go.
Using Snapshots in our MongoDB environment
The development team has decided they need to test some upgrades to an application using a valid dataset from one of the MongoDB instances. Rather than giving them access to the production environment, we can create a new Storage Class that uses the HPE 3PAR Volume Plug-in dynamic provisioner to take a snapshot of a production MongoDB volume.
The dev team needs a copy of the mongo-2 instance and its corresponding volume.
In order to take a snapshot, get the name of the mongo-2 persistent volume:
```
# oc get pvc
NAME                               STATUS   VOLUME                                          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongo-persistent-storage-mongo-2   Bound    sc-mongo-bee869cb-76a7-11e9-b787-0050569bb07c   50Gi       RWO            sc-mongo       1d
```
Now that we have the persistent volume name, we can create a new Storage Class that will create the snapshot:
```
# vi sc-mongo-2-snap.yml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-mongo-2-snap
provisioner: hpe.com/hpe
parameters:
  virtualCopyOf: "sc-mongo-bee869cb-76a7-11e9-b787-0050569bb07c"
```
Next, import the Storage Class into Kubernetes/OpenShift.
# oc create -f sc-mongo-2-snap.yml
To use the snapshot, simply create a new StatefulSet spec, similar to the original deployment above, and change the Storage Class to the new snapshot Storage Class, sc-mongo-2-snap. Set the number of replicas (nodes) needed for the dev team's testing, then import the StatefulSet into Kubernetes/OpenShift. After a couple of minutes, you should see the new nodes created, each using a snapshot of the mongo-2 persistent volume. The dev team can now use them for their application testing.
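As a rough sketch, the dev StatefulSet could mirror the original spec with only the name, replica count, and Storage Class changed. The name mongo-dev and replicas: 1 below are illustrative assumptions:

```
# Illustrative excerpt only; the omitted sections mirror the original mongodb.yml.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo-dev
spec:
  serviceName: "mongo"
  replicas: 1
  ...
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: sc-mongo-2-snap
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
```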
Clean-up
Now that we have had all of this fun, we need to clean up our environment from time to time so it doesn't start to look like our teenage kid's bedroom. To clean up the deployed resources, we need to delete the StatefulSet, the headless Service, and the provisioned volumes.
To delete the StatefulSet and corresponding pods:
# oc delete statefulset mongo
To delete the Service:
# oc delete svc mongo
To delete the Volumes:
# oc delete pvc -l role=mongo
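If you also want to remove the Storage Classes created in this post, something like the following should do it (assuming no other workloads reference them):

```
# Delete the Storage Classes once no PVCs depend on them.
oc delete sc sc-mongo sc-mongo-2-snap
```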
Success!
Congratulations! You have deployed several MongoDB database instances and seen how you can use the HPE 3PAR Volume Plug-in for Docker to provision persistent volumes backed by HPE 3PAR storage for your containerized application workloads.
Stay tuned as we continue to release blogs on various topics and workloads using the HPE 3PAR Volume Plug-in for Docker. Also, if there are topics you don't see but would like us to cover, don't hesitate to reach out.