Grounded in the Cloud

Automated Backup to HP Cloud Object Storage, Code Included

10-30-2013 01:13 PM - edited 10-26-2015 03:59 PM

A fundamental requirement in modern computing systems is persistent data and the ability to recover that data after unforeseen critical errors. I've used and written backup systems far more times than I'd like to admit. With the advent of cloud storage systems, moving data offsite has become the simplest methodology for data recovery.

 

Back when I got started in this industry, a tape backup would take hours. Then you'd have to drive it to your safety deposit box and store it. When you needed to recover data, it was a drive and then hours to restore. The next iteration was removable hard drives. These were quicker to back up to and restore from; however, the offsite portion was still onerous. That's why I developed SyncScript - caching a local copy of the backup makes sense when most of the restore requests are for things deleted in the last 24 - 48 hours.

 

However, with the advent of cloud storage and higher-speed internet pipes, you can keep offsite backups and get them back reasonably quickly. Since I work with OpenStack now, that's the hammer that makes sense to use. Here is what I did to get a dead simple backup from an HP Cloud instance to HP's Object Storage.

 

First, I installed the Python tools. I followed the directions below:

 

For CentOS 6.3:

 

yum install python-pip
pip install python-keystoneclient
pip install python-swiftclient

For Ubuntu 13.04:

 

aptitude install python-pip
pip install python-keystoneclient
pip install python-swiftclient
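
To confirm the clients installed correctly, a quick version check of each is enough (just a sanity check; the exact output varies by release):

 

swift --version
keystone --version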

Then I edited the .bashrc for the user performing the backup using nano ~/.bashrc to include this:

 

## Enable openstack client stuff
export OS_TENANT_NAME=MY-PROJECT-NAME
export OS_USERNAME=MY-USER-NAME
export OS_PASSWORD='MY-PASSWORD'
export OS_AUTH_URL=https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/
export OS_REGION_NAME=region-b.geo-1

This will back up to US-East. If you wanted to back up to US-West, you'd want a .bashrc that included this:

 

## Enable openstack client stuff
export OS_TENANT_NAME=MY-PROJECT-NAME
export OS_USERNAME=MY-USER-NAME
export OS_PASSWORD='MY-PASSWORD'
export OS_AUTH_URL=https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
export OS_REGION_NAME=region-a.geo-1

After I finished updating the .bashrc, I went ahead and ran source ~/.bashrc to load the changes. I was then able to access Swift. I tested that like this:

 

ubuntu@test-server:~$ swift list
Server_Backups
ubuntu@test-server:~$

I had already created the Server_Backups container via the web interface, following these directions. Once I had my server set up to access Object Storage, I was able to use the following script and cron to automate a complete backup of everything that was important on my server.
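
If you'd rather skip the web interface, the container can also be created from the command line with the same client (a quick sketch using the container name from above):

 

swift post Server_Backups
swift list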

 

#!/usr/bin/env bash
 
#####
# You will need to install the swift python client to get this to work;
# see the directions above.
#
#####
 
#####
# Config below
 
# The name of the Object storage container you are targeting
Swift_Container="Server_Backups"
 
# The name must be unique across all servers using the same swift container.
Server_Unique_Name="Server"
 
# The full path to the working directory.
Backups_Home_Directory="/home/ubuntu/backups"
 
#What directory do you want logs dumped into
Logs_Directory="/home/ubuntu/backups/logs"
 
 
#start up
 
# first off, we need to check for the logs directory
 
if [ ! -d "${Logs_Directory}" ]; then
    echo "woops, my logs directory is missing. I'll create a new one"
    mkdir "${Logs_Directory}"
fi
 
now=$(date "+%Y-%m-%d-%H-%M-%S")
logs="${Logs_Directory}/${now}.txt"
touch "$logs"
echo "starting up at ${now}">>$logs
echo "Swift Container = ${Swift_Container}">>$logs
echo "Server's Unique Name = ${Server_Unique_Name}">>$logs
echo "Backup Home Directory = ${Backups_Home_Directory}">>$logs
echo "Logging Directory = ${Logs_Directory}">>$logs
#
####
 
# Env Checks
 
echo "checking for my directories">>$logs
 
if [ ! -d "${Backups_Home_Directory}" ]; then
    echo "woops, something is wrong. My expected home directory is missing. Perhaps a typo in the config?">>$logs
    echo "exiting!">>$logs
    exit 1
fi
 
if [ ! -d "${Backups_Home_Directory}/scratch" ]; then
    echo "woops, my scratch directory is missing. I'll create a new one">>$logs
    mkdir "${Backups_Home_Directory}/scratch"
fi
 
if [ ! -d "${Backups_Home_Directory}/backup_storage" ]; then
    echo "woops, my storage directory is missing. I'll create a new one">>$logs
    mkdir "${Backups_Home_Directory}/backup_storage"
fi
 
#start process, create a unique filename
target="${Server_Unique_Name}_${now}.tgz"
 
echo "starting backup">>$logs
echo "moving backup assets over">>$logs
 
#clean up after last run, just in case something went south.
rm -rf "${Backups_Home_Directory}/scratch/"*
 
# Anything you put into the scratch directory will be backed up, so you can use things like
# a MySQL dump or what have you, and just put the resultant files in ${Backups_Home_Directory}/scratch
#### PUT THE COMMANDS TO COPY THE STUFF YOU WANT TO BACKUP BELOW
 
cp -R ~/.znc "${Backups_Home_Directory}/scratch/"
cp -R ~/git "${Backups_Home_Directory}/scratch/"
cp -R ~/www "${Backups_Home_Directory}/scratch/"
cp ~/push_site.sh "${Backups_Home_Directory}/scratch/"
cp ~/site_updater "${Backups_Home_Directory}/scratch/"
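 
# For example, a database dump can land directly in scratch before the archive
# is created (hypothetical credentials and database name, shown for illustration):
# mysqldump -u backup_user -p'MY-PASSWORD' my_database > "${Backups_Home_Directory}/scratch/my_database.sql"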
 
#### PUT THE COMMANDS TO COPY THE STUFF YOU WANT TO BACKUP ABOVE
 
#Create the archive
cd "${Backups_Home_Directory}" || exit 1
echo "creating archive file named ${target}">>$logs
tar -czf "$target" scratch/*
 
#Upload to Object Store. You'll want to ensure that the container is set correctly
 
echo "uploading to object storage">>$logs
swift upload "$Swift_Container" "$target">>$logs
 
#Clean up
echo "Moving archive to storage">>$logs
mv "$target" "${Backups_Home_Directory}/backup_storage"
 
#clean up the scratch directory
echo "cleaning up">>$logs
rm -rf "${Backups_Home_Directory}/scratch/"*
 
finish=$(date "+%Y-%m-%d-%H-%M-%S")
echo "backup completed at ${finish}">>$logs
 
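To run this automatically, I added a crontab entry. One wrinkle: cron does not read ~/.bashrc, so the OS_* variables above won't be set in the job's environment. Here is a minimal sketch, assuming the script is saved as /home/ubuntu/swift-backup.sh and the exports are kept in a separate file (a hypothetical ~/.swift_credentials) that the job sources first:

 

# Edit the crontab for the user performing the backup
crontab -e

# Run the backup every night at 3:00 AM, loading the Swift credentials first
0 3 * * * . /home/ubuntu/.swift_credentials && /home/ubuntu/swift-backup.sh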

 

(Up-to-date source code lives on GitHub.)
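
Restores are just as simple; here's a quick sketch of pulling an archive back down (the archive name below is only an example of the pattern the script generates):

 

# List what's in the container, then pull down a specific archive
swift list Server_Backups
swift download Server_Backups Server_2013-10-30-01-13-00.tgz

# Unpack it wherever you need the files
tar -xzf Server_2013-10-30-01-13-00.tgz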
