SOLVED
jokken
Occasional Advisor

using an MSA 2040 as a back-end for cinder volumes in an openstack environment

hi all,

has anyone used the MSA 2040 as a storage back-end for cinder volumes in openstack?

If so, I would really appreciate some advice and direction!

The value-add of using cinder volumes is that the MSA can be used as shared storage between nodes without a shared filesystem. Pretty much what this gentleman states:

"I was confused at the way cinder handled volumes. I though If I had a san connected via fiber channel to multiple compute nodes, that a shared filesystem needed to be used to avoid conflicts. What I didn't realize is that cinder creates a volume (lun on the san), and then mounts that to the compute node via the fiber channel san connections. Each volume that gets created is mapped to one instance only. If you migrate an instance, cinder just unmounts that volume from one instance (vm), and mounts it to the new host that holds that vm. So, GFS isn't needed. You just use the appropriate driver supported by the san (iscsi, Dell driver, etc) and dont put a filesystem on it. Cinder takes care of the volume creation and mapping to the instance."
https://community.rackspace.com/products/f/private-cloud-forum/5707/gfs2-with-openstack
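For illustration, that one-volume-per-instance mapping can be driven from the standard openstack CLI; a minimal sketch (the volume and server names here are hypothetical):

    # create a raw 10GB volume -- no filesystem on it, as the quote says
    openstack volume create --size 10 testvol
    # attach it to exactly one instance; cinder handles the LUN mapping
    openstack server add volume testinstance testvol
    # detach before attaching it anywhere else
    openstack server remove volume testinstance testvol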

I'm thinking one of these two guides is what I need to follow:

1. https://docs.openstack.org/newton/config-reference/block-storage/drivers/hp-msa-driver.html

2. https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html

I will need to add this alongside the current cinder back-end that is already handling other volumes in the environment, so I will have to implement it as a multi-backend cinder configuration.

The first guide really seems to be the right thing, but do I have to give openstack management access to the MSA? Can I use LVM and the LVM driver described in the second guide?
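For context, a minimal multi-backend cinder.conf sketch combining the LVM driver from guide 2 with the MSA FC driver from guide 1 might look like the following; the backend names, volume group, and placeholder credentials are assumptions, not taken from either guide:

    [DEFAULT]
    enabled_backends = lvm-backend,msa-backend

    [lvm-backend]
    # LVM driver from guide 2; assumes a local volume group named cinder-volumes
    # (iSCSI target options omitted for brevity)
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    volume_backend_name = LVM

    [msa-backend]
    # MSA FC driver from guide 1; it does need management access to the array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = <MSA management IP>
    san_login = <username>
    san_password = <password>
    volume_backend_name = MSA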

Any direction would be great! Thanks!

1 REPLY
jokken
Occasional Advisor
Solution

Re: using an MSA 2040 as a back-end for cinder volumes in an openstack environment

I was successful in setting this up, mostly using the information in the first link:

 

1. https://docs.openstack.org/newton/config-reference/block-storage/drivers/hp-msa-driver.html

but the info in the second link was useful too:

2. https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html

Also useful is this link on booting instances from an image onto a volume (the image needs special metadata, explained below):
https://docs.openstack.org/newton/user-guide/cli-nova-launch-instance-from-volume.html
(see the whole "Create volume from image and boot instance" section)

==

Changes for this need to be made on the controller nodes only, not on the compute nodes.

Settings in /etc/cinder/cinder.conf on all controllers:

    add ",MSA" to this line:
        enabled_backends = RBD-backend, MSA

    add this section to the end of the file, after the [RBD-backend] section:
        [MSA]
        hpmsa_backend_name = A
        volume_backend_name = MSA
        volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
        san_ip = 192.168.28.xxx
        san_login = <username>
        san_password = <password>

Restart the cinder-volume service on all 3 controllers:

                         systemctl restart cinder-volume.service
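One way to sanity-check that the new backend came up after the restart (cinder reports each enabled backend as host@backend in the service list):

    # the MSA entry should show up as <controller>@MSA with state "up"
    openstack volume service list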

Commands to run on one controller node:
                         openstack volume type create MSA
                         openstack volume type set --property volume_backend_name=MSA MSA
                         systemctl restart cinder-volume.service
                         openstack volume type list --long
                         nova boot --flavor medium.1 --key-name testkey --nic net-id=867801c4-c8d4-417e-a7c4-a67d87e69242 --security-groups all_in_eg --availability-zone nova:node-<withFCcard> --user-data script_file.txt --block-device source=image,id=3f3718a4-ffb8-4c6c-8ed4-8c318a530g45,dest=volume,size=43,shutdown=preserve,bootindex=0 testmsa5
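To verify that a new volume actually landed on the MSA backend rather than the default one, an admin can inspect the volume's host attribute (the volume ID is a placeholder):

    # the host field should end in @MSA for volumes on the new backend
    openstack volume show <volume-id> -c "os-vol-host-attr:host"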

The image used in the above command should have the metadata tag "cinder_img_volume_type=MSA" added to it. This can be done in Horizon. Without it, the volume will be created in the default cinder backend, in my case RBD.
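The same tag can also be set from the CLI instead of Horizon (the image ID is a placeholder):

    # tag the image so volumes created from it default to the MSA backend
    openstack image set --property cinder_img_volume_type=MSA <image-id>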

 

Fibre Channel cards need to be in all the controllers as well as all the compute nodes that will be using this second cinder backend volume type. The controller creates the volume on the MSA via ethernet (http/REST, I believe). Then the controller uses the Fibre Channel card to upload the image onto the volume. Then the controller maps the volume to the compute node where the instance will be created and run. You can see these volumes in the MSA v3 web UI under Volumes and mapping. On the compute node running the new instance with an MSA-backed volume, you will then see the volume mapped in:

 ls -alR /dev/disk/by-path/
lrwxrwxrwx 1 root root 9 Apr 6 15:54 pci-0000:09:00.0-fc-0x207000c0ee46ce3f-lun-1 -> ../../sde
lrwxrwxrwx 1 root root 10 Apr 6 15:54 pci-0000:09:00.0-fc-0x207000c0ee46ce3f-lun-1-part1 -> ../../sde1

 

Note: creating a 40GB volume results in an instance with a 38GB vHD, so there is some space overhead here. When I used an image snapshot of a 40GB vHD I needed to create a 43GB volume to use it ("volume,size=43,"). When I used an image snapshot of a 100GB vHD I needed to create a 110GB volume to use it ("volume,size=110,").
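The extra headroom presumably covers image and filesystem overhead; one way to pick a safe size up front is to check the image's virtual size before choosing the volume size (the file name is a placeholder, assuming you have a local copy of the snapshot):

    # virtual size (in bytes) is the minimum the volume must hold; add a margin on top
    qemu-img info --output=json snapshot.qcow2 | grep virtual-size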

Also note we are using shutdown=preserve here (which is most common for volumes), so always make sure you don't end up with a bunch of stale ERROR'd volumes lying around.
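A quick way to spot (and, after confirming they are truly orphaned, clean up) such volumes from the CLI (the volume ID is a placeholder):

    # list only volumes stuck in error state
    openstack volume list --status error
    # delete a stale one once you are sure nothing references it
    openstack volume delete <volume-id>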