HPE Nimble Storage Tech Blog

Oracle Integrated Management of Snapshots and Clones Walk-thru

rfenton4

Today's post is the latest in the NimbleOS 4 - A Detailed Exploration series and is written by Jan Ulrich Maue. In this post we explore the deeper Oracle integration for the use of snapshots and clones, enabling rapid and efficient backup/restore and cloning. This can allow businesses to dramatically improve time to value by quickly and efficiently giving every developer a full, current clone of a production environment. The feature snaps and clones databases, not just the underlying storage volumes: it manages the clone, rename and recover process so that the cloned database is fully usable.

In the later versions of NimbleOS 3, and with the advent of NimbleOS 4, Nimble launched the Nimble Linux Toolkit (NLT), which not only provides host utilities that greatly simplify administration of the Linux storage stack but also delivers much deeper integration with Linux-based applications. The first release, NLT 2.0, was made generally available in November! In addition to the well-known Nimble Connection Manager (NCM), the toolkit now also includes a Nimble Docker Volume Plugin and the Nimble Oracle Application Data Manager. This blog post focuses on the latter, the Oracle integration. The feature allows database administrators to easily create storage-based snapshots of their Oracle database. The snapshots generated this way can then be cloned and mounted on the same server or on a second "remote" server, where the cloned database is automatically recovered and opened. All of this is possible with a single command and no deep knowledge of the Nimble array, and of course it uses capacity-efficient snapshots and zero-copy cloning for rapid cloning of databases without compromising the performance of the source database.

In order to use this feature there are a number of prerequisites:

  • NimbleOS 3.5 or higher
  • Oracle 11.2.x.x (11gR2) - Single Instance using ASM, currently no RAC Support
  • RHEL 6.5 and above
  • RHEL 7.1 and above
  • Oracle Linux 6.5 and above*
  • CentOS 6.5 and above*

* Note: Oracle Linux and CentOS are supported but not QA verified in this release
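A quick, informal way to confirm the host OS and Oracle versions before installing (standard commands, nothing NLT-specific):

[root@ora-prod ~]# cat /etc/redhat-release

[oracle@ora-prod ~]$ sqlplus -V

The NimbleOS version is visible in the array's management interface.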

Step 1: Installation and configuration

When installing the Nimble Linux Toolkit, you can choose which components you wish to install. In the example below we have decided to install only NCM and the Oracle Application Data Manager, somewhat bulkily named NORADATAMGR. We have chosen not to install the Docker plugin in this example. I have also installed Oracle 12c, since 11g no longer runs on CentOS 7. On the first Nimble array I create six volumes, including one OCR (Oracle Cluster Registry) volume for the two CentOS VMs and four volumes for a "DATA" disk group in ASM, where the database will be hosted. For the four ASM volumes I create a volume collection "oracle" and specify the second virtual array as a replication partner. This is important for later, because NORADATAMGR can also manage replication of the snapshots.

[Screenshot: OracleCloning_Volumes.jpeg - the volumes and volume collection created on the Nimble array]

For the Oracle user to be able to use the Nimble Oracle App Manager on the Linux servers, the rights and permissions must be adapted after installation. The process is well described in the Nimble Linux Integration Guide:

[Screenshot: NORA-rights.png - permissions setup for the oracle user, per the Nimble Linux Integration Guide]

To use the Oracle App Manager, the NLT's Oracle Service must first be "enabled" and then started:

[root@ora-prod oracle]# nltadm --enable oracle

Successfully enabled oracle plugin

[root@ora-prod oracle]# nltadm --start oracle

Done

Then check the status:

[root@ora-prod oracle]# nltadm --status

Service Name               Service Status

------------------------------+--------------

Connection-Manager         RUNNING 
Oracle-App-Data-Manager    RUNNING

Afterwards, the Oracle App Manager must be registered with the Nimble group; --verify checks that everything is working. The management IP of the Nimble group leader is used as the IP address.

[root@ora-prod oracle]# nltadm --group --add --ip-address 192.168.43.100 --username admin --password xxxx

Successfully added Nimble Group information for 192.168.43.100.

[root@ora-prod oracle]# nltadm --group --verify --ip-address 192.168.43.100

Successfully verified management connection to Nimble Group 192.168.43.100.

As a final step in the preparation, the Oracle App Manager must be told which servers may initiate the snapshot and cloning processes for a particular Oracle instance. Snapshots can only be generated by the local server on which the instance is running. Cloning can be initiated by the local server and also by a second server (the "remote" server). For this reason I enter both CentOS VMs; the configuration is done as the oracle user with the noradatamgr command and the --edit and --allow-hosts options:

[oracle@ora-prod oracle]# noradatamgr --edit --instance ORCL --allow-hosts ora-prod,ora-clone

Allowed hosts set to: ora-prod,ora-clone

Success: Storage properties for database instance ORCL updated.

If there were no volume collection for the ASM disks on the Nimble array at this point, it would now be created automatically. The volume mapping and the allowed hosts can be displayed with the --describe option:

[oracle@ora-prod oracle]# noradatamgr --instance ORCL --describe

Diskgroup: DATA

    Disk: /dev/oracleasm/disks/NIMBLEASM1

        Device: dm-3

        Volume: ora-asm1

        Serial number: a48de9fac373d3286c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/NIMBLEASM4

        Device: dm-4

        Volume: ora-asm4

        Serial number: e3c26f0c9aa46f266c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/NIMBLEASM3

        Device: dm-1

        Volume: ora-asm3

        Serial number: 12e67015f9980a0f6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/NIMBLEASM2

        Device: dm-0

        Volume: ora-asm2

        Serial number: 75e25e2d1b9086886c9ce900a5a40084

        Size: 20GB

Allowed hosts: ora-prod,ora-clone

That's it! The Nimble Oracle App Manager does not need a repository or anything similar. All the data required for the Oracle cloning process (such as the Oracle pfile and redo logs) is stored as metadata directly in the snapshot on the Nimble array. In this way, a snapshot can even be mounted by a second Linux system (the "remote" system), and the database is automatically recovered and started. I describe this in the following sections.

Step 2: Create snapshots

The Nimble Oracle App Manager can create two types of snapshots for Oracle instances on the server on which it is installed: crash-consistent and application-consistent snapshots. In the first case, an IO-consistent snapshot is created across all Nimble volumes on which the Oracle database resides. The database is therefore in the same state as if the server had experienced an unplanned outage, so when it is opened later Oracle must perform a crash recovery. In the second case, the database is first put into hot backup mode, the snapshot is taken, and the database is then taken out of hot backup mode again; when it is opened later, only a normal media recovery is needed.
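For context, hot backup mode is the same mechanism a DBA would otherwise trigger by hand in SQL*Plus; the App Manager automates the equivalent of the following around the snapshot (an illustrative sketch, not the tool's internal commands):

SQL> alter database begin backup;

Database altered.

(the storage snapshot of the Oracle volumes is taken at this point)

SQL> alter database end backup;

Database altered.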

As already described, only one command is required for this. The --hot-backup option puts the database into hot backup mode. With the --replicate option, I can also specify that the snapshot should be copied to a second array using Nimble replication. A prerequisite is that a replication partner is configured for the volume collection, as described above.

[oracle@ora-prod oracle]# noradatamgr --snapshot --snapname snap1-hotbackup --instance ORCL --hot-backup --replicate

Putting instance ORCL in hot backup mode...

Success: Snapshot backup snap1-hotbackup completed.

Taking instance ORCL out of hot backup mode...


[oracle@ora-prod oracle]# noradatamgr --snapshot --snapname snap1-crash --instance ORCL

Success: Snapshot backup snap1-crash completed.


With --list-snapshots, and additionally with --verbose, you get a detailed description of the snapshots for the corresponding instance:

[oracle@ora-prod oracle]# noradatamgr --list-snapshots --instance ORCL --verbose

Snapshot Name: snap1-crash taken at 16-11-28 12:07:39

    Instance: ORCL

    Snapshot can be used for cloning ORCL: Yes

    Hot backup mode enabled: No

    Replication status: N/A

    Database version: 12.1.0.2.0

    Host OS Distribution: CentOS

    Host OS Version: 7.2

Snapshot Name: snap1-hotbackup taken at 16-11-28 12:05:17

    Instance: ORCL

    Snapshot can be used for cloning ORCL: Yes

    Hot backup mode enabled: Yes

    Replication status: complete

    Database version: 12.1.0.2.0

    Host OS Distribution: CentOS

    Host OS Version: 7.2


This command can also be executed on the second "remote" server, on which the production instance is not running:

[oracle@ora-clone oracle]# noradatamgr --list-snapshots --instance ORCL

-----------------------------------+--------------+--------+-------------------

Snap Name                      Taken at        Instance     Usable for cloning

-----------------------------------+--------------+--------+-------------------

snap1-crash                    16-11-28 12:07  ORCL         Yes               

snap1-hotbackup                16-11-28 12:05  ORCL         Yes           



Step 3: Create and mount clones and start the database

After creating two snapshots, we might want to use them for a test and development system. This is quite simple, as the entire process of creating the clones from the snapshots, mapping them to the server, rescanning the devices, and performing the database activities (mounting, recovering, opening and starting the database) is completely automated with a single command! As output I get a description of the cloned database. Optionally I could even change individual Oracle parameters in the PFILE during the database start, or assign another Oracle SID. That is beyond the scope of this blog, but I wanted to highlight that there is an option to customise the cloned database.

[oracle@ora-prod oracle]# noradatamgr --clone --instance ORCL --clone-name clonedDB --snapname snap1-crash

Cloning diskgroups ... completed.

Mounting diskgroups ... completed.

Building instance clonedDB ... completed.

Diskgroup: CLONEDDBDATADG

    Disk: /dev/oracleasm/disks/CLONEDDB0001

        Device: dm-9

        Volume: clonedDB-DATA1

        Serial number: 2030631af2be85fc6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/CLONEDDB0002

        Device: dm-3

        Volume: clonedDB-DATA4

        Serial number: 7e03a8ce03e7c9296c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/CLONEDDB

        Device: dm-5

        Volume: clonedDB-DATA3

        Serial number: 4fee6d9baef2f29a6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/CLONEDDB0003

        Device: dm-8

        Volume: clonedDB-DATA2

        Serial number: f1c42b1793cab81a6c9ce900a5a40084

        Size: 20GB

Allowed hosts: ora-prod
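As an optional sanity check (not part of the NORADATAMGR workflow), you can confirm from the ASM side that the cloned disk group is mounted alongside the original, for example by listing the mounted disk groups with the standard asmcmd utility in the Grid/ASM environment:

[oracle@ora-prod oracle]$ asmcmd lsdg

Both DATA and the new CLONEDDBDATADG should appear in the list of mounted disk groups.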


You can also omit the --snapname option, in which case the Oracle App Manager first creates a snapshot and uses it as the basis for the clone. However, this only works on the production database server; on the remote server it is not possible, since no snapshot can be generated from there.

[oracle@ora-prod oracle]# noradatamgr --clone --instance ORCL --clone-name clonedDB

Initiating snapshot backup BaseFor-clonedDB-45512c96-0337-497b-962b-d2867fbe6827 for instance ORCL...

Success: Snapshot backup BaseFor-clonedDB-45512c96-0337-497b-962b-d2867fbe6827 completed.

Cloning diskgroups ... completed.

Mounting diskgroups ... completed.

Building instance clonedDB ... completed.

[....]



To prove that both instances are running, I connect with SQL*Plus:


[oracle@ora-prod oracle]# echo $ORACLE_SID

ORCL

[oracle@ora-prod oracle]# export ORACLE_SID=clonedDB

[oracle@ora-prod oracle]# sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Nov 28 14:23:13 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics

and Real Application Testing options

SQL> select dbid, name, created

  2  from v$database;

      DBID NAME      CREATED

---------- --------- ---------

1456485204 CLONEDDB  28-NOV-16

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics

and Real Application Testing options


[oracle@ora-prod oracle]# export ORACLE_SID=ORCL

[oracle@ora-prod oracle]# sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Nov 28 14:25:14 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics

and Real Application Testing options

SQL> select dbid, name, created

  2  from v$database;

      DBID NAME      CREATED

---------- --------- ---------

1456485204 ORCL      23-NOV-16

Cloning on the remote server

As already described, the cloning process can also be started on the remote server. However, I have to specify an existing snapshot, since no new snapshot can be created there. In the near future it will even be possible to perform the cloning on the "remote" Nimble array, that is, on the replication partner of the primary array. In this way, a test environment can be started that is physically separated from the primary side, or a simple DR solution can be established.

[oracle@ora-clone oracle]# noradatamgr --clone --instance ORCL --clone-name remoteDB --snapname snap1-hotbackup

Cloning diskgroups ... completed.

Mounting diskgroups ... completed.

Building instance remoteDB ... completed.

Diskgroup: REMOTEDBDATADG

    Disk: /dev/oracleasm/disks/REMOTEDB0001

        Device: dm-9

        Volume: remoteDB-DATA1

        Serial number: 3009abe001bc8c0f6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/REMOTEDB0002

        Device: dm-3

        Volume: remoteDB-DATA4

        Serial number: 5fcbec2bc8ef23fa6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/REMOTEDB

        Device: dm-5

        Volume: remoteDB-DATA3

        Serial number: 8e32ed641de4c68f6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/REMOTEDB0003

        Device: dm-8

        Volume: remoteDB-DATA2

        Serial number: 01a2aaa7acc7e9796c9ce900a5a40084

        Size: 20GB

Allowed hosts: ora-clone


Step 4: Delete the clones and snapshots

Deleting the clones and the ASM disk groups is also very easy. The cloned databases are first stopped with SQL*Plus (a quick example follows), and then I can "clean up" with the --destroy command on each server.
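Stopping a cloned instance is just a normal SQL*Plus shutdown, shown here for completeness with the SID of the clone created above:

[oracle@ora-prod oracle]$ export ORACLE_SID=clonedDB

[oracle@ora-prod oracle]$ sqlplus / as sysdba

SQL> shutdown immediate;

SQL> exit

With the instances stopped, the cloned disk groups can be destroyed: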

[oracle@ora-prod oracle]# noradatamgr --destroy --diskgroup CLONEDDBDATADG

Success: Diskgroup CLONEDDBDATADG deleted.


[oracle@ora-clone oracle]# noradatamgr --destroy --diskgroup REMOTEDBDATADG

Success: Diskgroup REMOTEDBDATADG deleted.

Individual snapshots can be deleted from the primary server using the --delete-snapshot option. The --delete-replica option is also available for deleting the replicas on the remote array.

[oracle@ora-prod oracle]# noradatamgr --delete-snapshot --instance ORCL --snapname snap1-crash

Success: Snapshot snap1-crash deleted.

Conclusion

The Nimble Oracle Application Data Manager is very easy to implement and configure. It gives database administrators the ability to create storage-based snapshots without additional knowledge of the connected Nimble array, and to provision them as a separate environment for a test or development system, virtually at the touch of a single command. Everything happens in the background: creating the clones, mapping them to the target server, mounting the clones and even integrating them into ASM disk groups is completely automatic. Even the necessary Oracle steps, such as mounting the DB, copying the necessary logs, recovery and opening the DB, are fully automated. Of course, the snapshots are hugely efficient: they take almost no time to create, store only the compressed, incremental block changes, and cause no performance degradation to the running production instance, allowing DBAs to provide rapid data recovery options.

One of the best bits is that there are no additional licence costs! The new Nimble Linux Toolkit (NLT) is available to Nimble Storage customers free of charge, and of course the snapshot, restore, cloning and replication capability within Nimble controllers has always been integral!

It is suitable for customers who run Oracle on physical Linux servers directly connected to a Nimble array. In virtualised environments, the VMs must have direct access to the Nimble volumes, e.g. through in-guest (host-initiated) iSCSI connections.
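For illustration, an in-guest iSCSI connection is typically established with the standard open-iscsi tools; the discovery address below is just a placeholder for your array's iSCSI discovery IP:

[root@ora-prod ~]# iscsiadm -m discovery -t sendtargets -p <discovery-ip>:3260

[root@ora-prod ~]# iscsiadm -m node --login

Once the volumes are visible inside the guest, NCM handles connection management and multipathing for them.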
