C3000,Lefthand and V-Sphere

SOLVED
DThomson_1
Advisor

C3000,Lefthand and V-Sphere

Hi

I'm totally new to the LeftHand VSA technology and I have a few questions, if anyone could help.

I've read a few things here on the forum and I'm confused.

This is what I was hoping to implement:

C3000 Blade system
2 x BL460c Server blades
1 x SB40 Storage blade
LeftHand P4000
VMware vSphere - HA/DRS/VMotion

I want to build 2 server blades into a VM Cluster using one of the SB40 storage blades as shared storage for the Virtual servers.

My concern: Will HA/DRS and VMotion work if one of the servers fails?

12 REPLIES
DThomson_1
Advisor

Re: C3000,Lefthand and V-Sphere

Sorry, that's the LeftHand software, not the device.
Uwe Zessin
Honored Contributor
Solution

Re: C3000,Lefthand and V-Sphere

You cannot have automated failovers with only two VSAs/storage modules. You need to configure a 'virtual manager' for manual failovers, or install a third VM on a third computer that runs a 'Failover Manager' (FOM).

There are two versions of the FOM that can run on ESX or VMware Server.
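The majority rule behind this can be illustrated with a toy sketch (Python is used purely for illustration; the function name is made up and is not part of any LeftHand API):

```python
def has_quorum(running_managers, total_managers):
    # A management group keeps quorum only while a strict
    # majority of its managers is still running.
    return running_managers > total_managers / 2

# Two managers (one per VSA): losing one leaves 1 of 2 -> no majority,
# so no automated failover.
print(has_quorum(1, 2))   # False

# Adding a Failover Manager as a third manager: 2 of 3 survive
# a single failure, so failover can proceed automatically.
print(has_quorum(2, 3))   # True
```

The 'virtual manager' mentioned above is the manual workaround for exactly this case: it is started by hand on the surviving side to restore the majority.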
DThomson_1
Advisor

Re: C3000,Lefthand and V-Sphere

Thank you Uwe.
Steven Clementi
Honored Contributor

Re: C3000,Lefthand and V-Sphere

Isn't the SB40 only directly accessible to one blade at a time though? (The adjacent blade)

Just trying to understand...

You are going to set up 1 server, attach it to the SB40, install ESX on that server, implement a LHN VSA on the one node... so that you can present the storage... via iSCSI... to both of the ESX servers?


Do you see yourself getting additional blades and SB40s in the future, so that you can expand the LHN storage with additional "nodes"?

Steven
Steven Clementi
HP Master ASE, Storage and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5)
RHCE
NPP3 (Nutanix Platform Professional)
Uwe Zessin
Honored Contributor

Re: C3000,Lefthand and V-Sphere

Good point. To cover the failure of either server blade it is necessary to set both up with:
- local storage
- a VSA
because VSAs do not work with shared storage. Storage redundancy is then created by doing 2-way replication (think of it as a mirror with 2 members) between the VSAs.

Maybe it is not obvious, but: if you have only two ESX servers and one of them fails, you cannot do VMotion or DRS (which uses VMotion). HA, however, should be able to restart (= reboot) the failed VMs on the surviving server as long as enough resources are left.
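The "enough resources" condition boils down to simple arithmetic. A toy sketch (hypothetical numbers and function name, not VMware's actual admission control logic):

```python
def ha_can_restart(failed_vm_memory_gb, survivor_free_gb):
    # HA can restart the failed VMs on the surviving host only if
    # their combined memory fits into the survivor's spare capacity.
    return sum(failed_vm_memory_gb) <= survivor_free_gb

# Host A dies carrying VMs that need 8 + 6 + 6 = 20 GB in total.
print(ha_can_restart([8, 6, 6], 24))   # True  - survivor has 24 GB free
print(ha_can_restart([8, 6, 6], 16))   # False - survivor has only 16 GB free
```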
DThomson_1
Advisor

Re: C3000,Lefthand and V-Sphere

Hi Steven

As I said, I'm new to this LeftHand tech.

The storage is only accessible from the adjacent blade, but with LeftHand you can share the storage across more blades in the enclosure. This is what I understand about the tech: good for shared storage in an ESX environment, but you can't have HA and DRS without another server running the FOM.

I'm now looking into an MSA to use as external storage, as the project has expanded quite a bit since yesterday.
DThomson_1
Advisor

Re: C3000,Lefthand and V-Sphere

Morning Uwe

I can't seem to find any reference to VMotion and DRS not working between 2 ESX servers.
Uwe Zessin
Honored Contributor

Re: C3000,Lefthand and V-Sphere

You asked:
> Will HA/DRS and VMotion work if one of the servers fails?

My response is that VMotion/DRS will not work after one out of two servers has failed. You no longer have a destination server that you can migrate a VM to.
DThomson_1
Advisor

Re: C3000,Lefthand and V-Sphere

Sorry, yes, if one of them fails.

If there are enough resources on the second server, the VMs should restart there?

I'm searching the web and I cannot seem to find a requirement of 3 servers minimum for a working ESX cluster that uses HA, DRS and VMotion.

I automatically assumed that 2 would suffice if there were enough resources.
Uwe Zessin
Honored Contributor

Re: C3000,Lefthand and V-Sphere

Yes, that is what I meant. No offense meant, but many people seem not to realize that VMware ESX does not create resources 'out of thin air'; they expect to put 100% load on their servers and still have HA work.

Some time ago I had a discussion with a customer who did this:
- a server with 16 GB memory
- a number of VMs which took a good portion of the memory

Then he created another VM, gave it a 12 GB(!) memory reservation, and started this VM.
Ooops...
DThomson_1
Advisor

Re: C3000,Lefthand and V-Sphere

You almost gave me a heart attack, Uwe, hahahahah. I have ESX clusters running already in the environment, but none with only two hosts; I just assumed it would work with two. Good lesson learnt in assuming :)

Thank you for your input so far, it's very much appreciated.
Steven Clementi
Honored Contributor

Re: C3000,Lefthand and V-Sphere

I think the moral of the story is simply that you need to take into consideration the fact that in order to "fail over" to one or the other server, you need to have the appropriate resources available.

In a 2-node cluster you need to double up, if not more, on your resources in order to survive a single node failure.

For clusters with more nodes, the question you need to ask yourself is how many nodes you want to be able to lose and still survive...
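That sizing rule can be put into one line of arithmetic (a hypothetical sketch; the function name is invented):

```python
def max_usable_fraction(n_hosts, tolerated_failures):
    # To survive k host failures, the cluster can only be loaded
    # up to (n - k) / n of its total capacity.
    return (n_hosts - tolerated_failures) / n_hosts

print(max_usable_fraction(2, 1))   # 0.5  -> the "double up" rule for 2 nodes
print(max_usable_fraction(4, 1))   # 0.75 -> larger clusters lose less headroom
```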

To answer the original question...

"My concern: Will HA/DRS and VMotion work if one of the servers fail ?"

It depends on which server fails. If the node with the SB40 attached fails... you are simply SoL or DitW (Dead in the Water). The best option is getting a second SB40 (which may have been mentioned, but I refuse to look back right now).

In a properly configured 2-node cluster.. yes, sure.. DRS/HA/vMotion will/should work just fine.


Steven

Steven Clementi
HP Master ASE, Storage and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5)
RHCE
NPP3 (Nutanix Platform Professional)