
9.5 Upgrade Experiences?

Paul Hutchings
Super Advisor

9.5 Upgrade Experiences?

I'm running two clusters: one 2-node P4500 cluster and one 4-node P4300 cluster.

 

Both are multi-site with a single VIP, so basically stretch clusters (primarily due to the lack of routable iSCSI in vSphere MPIO).

 

I've not yet upgraded to 9.5 as I want to give it a little while for the dust to settle and any bugs or issues to emerge, plus I didn't see any killer "must have now" features over 9.0.

 

I know there's a thread that mentions 9.5 features and a few people have mentioned that they've upgraded, but I thought a dedicated upgrade thread might be worthwhile.

 

So far I've only upgraded my CMC.  First impressions: it does the job, but that SAN summary screen is a little strange IMO.  Perhaps it makes sense if you have a lot of management groups, but to me it just looks cluttered and a little "busy".

15 REPLIES
Kyle Thering
Occasional Visitor

Re: 9.5 Upgrade Experiences?

Just wondering if you ever ended up upgrading your clusters to 9.5. I have the exact same setup as you and was just wondering if you ran into any issues.

 

Thanks

Kyle

oikjn
Honored Contributor

Re: 9.5 Upgrade Experiences?

I upgraded without an issue.

 

Pure VSA setup.

 

Two management groups, set up for remote snapshots.  The remote site is 2 VSAs + FOM; the primary site is also a multi-site cluster with 4 VSAs + FOM in Network RAID 10.

 

The upgrade went without issue and with no loss of service from the attached Hyper-V cluster.  I haven't noticed any significant differences going from 9.0 to 9.5 other than the CMC program.

 

Only "bug" I think I found is that I keep getting messages from one VSA in the remote group that the other's latency is like 2428047324234ms and then get a quick reply that its latency is OK.  I haven't seen any latency issues, but I get flooded with this email  (~50/day) usually under periods of higher load.  I deleted the problem VSA and recreated it, but that didn't change the problem ("problem" being the emails... I haven't noticed any latency issues with the VSA itself). I also moved it to another host and that didn't help.

WaynehGRC
Occasional Visitor

Re: 9.5 Upgrade Experiences?

I think we have a performance issue since upgrading to 9.5. The same IOMeter test was run before and after the upgrade; the results below work out to roughly an 80% drop in both IOPS and throughput:

9.0:  IOps 1986.12  (read 398.43 / write 1587.69)   MBps 7.76  (read 1.56 / write 6.20)

9.5:  IOps 377.42   (read 76.79 / write 300.63)     MBps 1.47  (read 0.30 / write 1.17)
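For anyone wanting to sanity-check their own before/after numbers without IOMeter, the figures above work out to roughly 4 KB I/Os at about an 80/20 write/read mix, which can be approximated from a Linux VM on the cluster with fio. This is only a rough sketch; the file path, size, queue depth and the random/sequential mix are assumptions, not the actual IOMeter access spec used above:

# ~4 KB I/O, ~80% writes, 60-second run against a file on a volume backed by the cluster
# (path, size, queue depth and random vs. sequential are assumptions - adjust to match your own access spec)
fio --name=p4000-check --filename=/mnt/p4000-vol/fio.test --size=4g \
    --rw=randrw --rwmixwrite=80 --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=16 --runtime=60 --time_based --group_reporting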

JohnMurrayUK
Advisor

Re: 9.5 Upgrade Experiences?

Ouch!  Any other changes, Wayne: switches, bond type, volume protection method?

Any idea what the CMC reports your Queue Depth Total as when the IOMeter test is running?

 

I've managed to get two test P4500 G2 shelves on my desk today (bit loud). One with SAN/iQ 9.0 and one with SAN/iQ 9.5. I'll let you know what differences I observe.

 

--------------------------------------------------------------------------------
If my post was useful, click on my KUDOS! "White Star"
Patrick Terlisten
Honored Contributor

Re: 9.5 Upgrade Experiences?

Hello,

 

I did an upgrade on a two-node P4500 G2 cluster with FOM. Attached was a two-node vSphere 4.1 cluster, and the FOM was running as a VM inside the VMware cluster. During the upgrade the storage services went down: the FOM was upgraded first, then the second P4000 node, and after the upgrade of that storage node services went down. It seems to me that the FOM wasn't back online when the upgrade of the storage node started.

 

Has anyone noticed the same?

 

Regards,

Patrick

Best regards,
Patrick
mozturk
Occasional Visitor

Re: 9.5 Upgrade Experiences?

We have four P4300 G2 SAS nodes in one cluster. I upgraded from 9.0 to 9.5, but performance after the upgrade has decreased dramatically. How can I solve this issue, or how can I downgrade to the previous SAN/iQ version?

 

Please help.

5y53ng
Regular Advisor

Re: 9.5 Upgrade Experiences?

I have a six-node cluster of P4000 VSAs, and I have seen a performance decrease on writes with 9.5 using IOmeter.

Paul Hutchings
Super Advisor

Re: 9.5 Upgrade Experiences?

Have any of you logged a case and had HP look into the issue?

 

I'm still on 9.0 as I thought I'd give 9.5 plenty of time before jumping in.

5y53ng
Regular Advisor

Re: 9.5 Upgrade Experiences?

Hi Paul,

 

I'm actually going to give them a call today and see what their take is. Specifically, I only saw a decrease in performance on sequential 64KB writes in IOmeter. I'm using ESX 4.1.

Steve Burkett
Valued Contributor

Re: 9.5 Upgrade Experiences?

Anyone else got any tales of woe with 9.5?  We have to upgrade our 3 x P4500 G1s this weekend from 8.1 and I'm not sure whether to aim for 9.5 or an earlier version.

Emilo
Trusted Contributor

Re: 9.5 Upgrade Experiences?

One of the changes they made in 9.5 had to do with the way they implemented flow control.

Previously, on 9.0, flow control was implemented as receive only, and it was recommended that you also enable flow control on the switch. With 9.5, flow control is implemented as receive and transmit, which can adversely impact performance if your network is not set up correctly. Without getting too in depth, the way to tell whether this is affecting you is to check your switch counters and see if you are seeing a lot of pause frames, dropped packets, errors, etc. If you are, there are several things you can do to get back to a pre-9.5 configuration. You can turn off flow control on the switch and turn it on on the SAN as transmit only. You will need to clear your switch counters and then monitor.

 

This will take a little time and some configuration changes on your SAN, but as long as you are using two-way replication (Network RAID 10) it shouldn't impact production. You will need to break the bonds and set flow control on each NIC (you will have to assign the second NIC a temporary address); once you've set flow control you can recreate the bonds.

 

Switches: most switches only support flow control receive, though newer models are starting to support transmit as well.

 

Bottom line, it's going to take some monitoring and clearing of counters to find out whether this is what's impacting your performance. So check your switch counters.
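As a rough illustration of the switch-side checks described above, on a Cisco Catalyst (the 3750G mentioned later in this thread, for example) the commands look something like the following. The interface name is a placeholder and the exact syntax varies by platform and IOS version, so treat this as a sketch rather than a procedure:

! negotiated flow control state plus pause frames received/sent on the SAN-facing port
show flowcontrol interface gigabitethernet1/0/1
! drops and errors on the same port
show interfaces gigabitethernet1/0/1 counters errors
! reset the counters, then re-check after a period of load
clear counters gigabitethernet1/0/1
! to turn off receive flow control on the switch port, as suggested above
configure terminal
 interface gigabitethernet1/0/1
  flowcontrol receive off
 end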

 

Steve Burkett
Valued Contributor

Re: 9.5 Upgrade Experiences?

Well, we took the plunge and went for 9.5 on our three P4500 G1 (SAN/iQ 8.1) nodes. Things went through without a hitch. I'd previously got them up to date with firmware as per the SPOCK documentation, and had updated our DSM installs to the 9.5 version.

 

The first thing it did was upload the software to the devices and then install a small patch on each in turn before continuing with the main SAN/iQ 9.5 install. This took each node offline for about 5 minutes while it installed (very quick, less than a minute) and rebooted (not so quick). As each device came back online it proceeded with the next one.

 

After it had done all three it banged through a few quick patches on each in turn and that was it. Total time was about one hour (it would probably be quicker with a 1 Gbps link from your CMC to your nodes; I've only got a 100 Mbps link, so the initial software upload took a good 20 minutes).

 

The only problem I did have was upgrading the VSS Provider on our SQL Server as part of the post-upgrade tasks: the installer seemed to hang trying to stop some services and the server became unresponsive, which required a reset of the VM. Once it came back up it allowed the uninstall of the 8.1 VSS Provider and the install of the new 9.5 version.

 

Did notice that the 9.5 nodes show 'Auto - Receive On, Transmit On' under the TCP Status tab of TCP/IP Network; I'm guessing they've auto-negotiated with our 3750Gs to be able to do both. Under 8.1 it didn't indicate whether Transmit Flow Control was enabled or not, just that Flow Control was enabled. Not currently seeing any errors on our Cisco ports.

 

One concern we had was how much space needed to be available to complete the upgrade successfully. We were showing about 677GB free at the start and it looks like the upgrade used hardly anything, so this fear was unfounded.

 

While part of the upgrade was installing it threw up the infamous 'There was a problem getting the alarm list from .... ' warning, but this cleared within a few seconds and the upgrade continued with no problem.

 

So all in all, thumbs up from me; performance is fine. I now need to work out how to enable VAAI support on our datastores, as only some are currently showing as 'Supported' and the others are showing as 'Unknown'. Not sure why that is as yet.
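For anyone else chasing the 'Unknown' VAAI status, it can be checked from the host side as well as from the vSphere Client (Configuration > Storage, Hardware Acceleration column). The esxcli commands below exist on ESXi 5.x and later; the namespaces are laid out differently on ESX(i) 4.1, so take the exact syntax as an assumption for older builds:

# block devices with their overall VAAI / hardware acceleration status
esxcli storage core device list
# per-primitive VAAI status (ATS, Clone, Zero, Delete) for each device
esxcli storage core device vaai status get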

 

After we'd upgraded to 9.5, our new 9.5 P4500 G2 could be added into the management group, no worries.

Russell Couch
Occasional Visitor

Re: 9.5 Upgrade Experiences?

Hi Emilo

I'm having some similar issues with 9.5 after our upgrade from 8.5. I currently have a case open with HP but they are denying any changes to flow control between 8.5 and 9.5. I'm tempted to go through with your suggestions but I'd rather get the go-ahead from HP before doing so. Have you got any reference at all to the changes? Was this information direct from HP?

Thanks
Russ..
ryan_1212
Advisor

Re: 9.5 Upgrade Experiences?

If you download the file hist.ethtool.log from each NSM on Version 9.0, you will notice this:

 

Fri Nov 16 16:40:17 GMT 2012


Pause parameters for eth0:
Autonegotiate:    on
RX:        on
TX:        on

Pause parameters for eth1:
Autonegotiate:    on
RX:        on
TX:        on


This tells me that in 9.0, it was turned on for send and receive.
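For comparison, this is how the same pause settings would be read or changed with ethtool on a generic Linux host. The NSM and VSA appliances don't normally expose a shell, so this is only to show what the log above corresponds to; the interface name is a placeholder:

# show current pause (flow control) parameters - the same output captured in hist.ethtool.log
ethtool -a eth0
# example: receive-only flow control with pause autonegotiation turned off
ethtool -A eth0 autoneg off rx on tx off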

Jonna85
Occasional Contributor

Re: 9.5 Upgrade Experiences?

We had an issue when upgrading a management group with a mixture of SAN/iQ 8.5 through 9.5 nodes. One cluster got stuck in a VIP lock during the upgrade, and it required a senior HP engineer to resolve. Not a nice situation, and at one point HP were suggesting that we would need to rebuild the cluster.

The cause was determined to be a manager running on an 8.5 node, which caused the upgrade to fail.