MSA2042 connect a P2000 to it
07-05-2017 02:44 AM
Hi all
I need some help here.
We use VMware and we have 30 VMs:
1 x SBS 2008 (DC, file server; Exchange has been stopped since we moved to Office 365, and WSUS is off)
1 x SQL Server
1 x Application Server
1 x VoIP Server
1 x Firewall
25 x Win7 client workstations (simple: 3GB RAM, 2 vCPU, 64GB disks, thin provisioned)
We currently have an MSA P2000 G3 FC/iSCSI LFF that has on it:
8 x 600GB 15K SAS (RAID 10) for VMs
3 x 3TB 7.2K SAS (RAID 5) for data (no DBs or anything, just file serving)
3 HP servers (2 x DL80 G9, 1 x DL380 G6) are connected to it with 8Gb FC cards and all is working fine. All 3 servers are basic, with no special RAID controllers in them; they just have a system disk.
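As a quick sanity check on those array sizes, here is a rough back-of-the-envelope sketch (my own helper, not an HPE tool; it ignores formatting and metadata overhead):

```python
def usable_gb(n_disks, disk_gb, raid):
    """Rough usable capacity for common RAID levels (vendor GB)."""
    if raid == "raid10":
        return n_disks * disk_gb // 2   # half the disks hold mirror copies
    if raid == "raid5":
        return (n_disks - 1) * disk_gb  # one disk's worth of parity
    if raid == "raid6":
        return (n_disks - 2) * disk_gb  # two disks' worth of parity
    if raid == "raid1":
        return disk_gb                  # all disks mirror a single disk
    raise ValueError(raid)

print(usable_gb(8, 600, "raid10"))   # VM array: 2400 GB
print(usable_gb(3, 3000, "raid5"))   # data array: 6000 GB
```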
What we want to do is make the whole system a bit faster in terms of IOPS. After extensive reading we found that the best value for money for our needs is SSD caching, and to achieve this we have 2 scenarios, since all-flash is still too expensive.
In both scenarios we don't care about the current data: when this happens we will do a backup and recreate all the RAIDs etc., and all the vCenter config as well. Basically it will be a clean install on all servers and a restore of the VMs.
- Upgrade to an MSA 2042 SFF that includes 2 x 400GB SSDs, and connect the P2000 to that MSA as a disk enclosure, so it will see the disks on it and control them. Then we will set up the SSDs to do caching and the overall speed will be improved. All the hosts will connect to the 2042 and disconnect from the P2000. Of course, we will also gain all the virtualization benefits that the new MSA generation has.
- Upgrade the servers. Each server will get the latest RAID controller from HP (all 3 will get a P840/4GB) and 1 x 400GB SSD with HP SmartCache enabled. The 600GB disks from the MSA will go to the 3 servers (we will buy 1 additional 600GB so each server gets 3 x 600GB in RAID 5).
The 3TB disks will stay in the MSA, and maybe we will add more 3TB disks to it if we need more file space.
So, the question is if anyone can help... between the 2 scenarios, in our minds the 1st solution is better, because we like the SAN side of things and the fact that, in a host failure (a problem in scenario 2), we will be able to run the failed host's VMs on the other 2. Also, we can add more hosts to it as we grow.
Of course, if the MSA fails then all 3 servers will go down, but since 2010 when we got the P2000 we have never had any kind of trouble; it has been rock solid, except that over all these years we replaced 3 disks that failed. Also, we do daily backups of everything (offsite too), so in a total loss we can do a full restore in under 6 hours.
Another thing that puzzles me in the technical document for the MSA 2040/2042 is that it says it can have the P2000 as an enclosure (which is what we want to do in the 1st scenario), but that it cannot use SSD drives, not even as simple disks. Specifically, it says: "When using the P2000 G3 Storage Enclosure with MSA 2040 controllers, you will not be able to use SSD drives or have some of the performance benefits of the MSA 2040 Storage Enclosure."
What does that really mean? Does it mean that if I put an SSD in the P2000 it cannot be used as cache (which would be logical), or does it mean that in general, no matter where the SSD is installed (2040/2042 or P2000), it cannot be used as cache? If the latter is true, then our whole plan in scenario 1 is pointless.
Some expert guidance or opinion is needed here :)
Thx in advance
Chris
Athens - Greece
Solved!
07-05-2017 06:05 AM
Re: MSA2042 connect a P2000 to it
The P2000 itself does not support SSDs at all.
The P2000 is available with FC, SAS and iSCSI connectors, even a mix of FC and iSCSI. There is also a P2000 disk chassis with I/O modules only (no controllers), used as an extension.
You need such an extension chassis if you want to increase capacity; you cannot "daisy chain" two P2000 disk arrays (or newer models).
If you use SAS connections, only a few specific HBAs are supported, and they cannot use their own cache.
All the details here:
https://www.hpe.com/h20195/v2/GetPDF.aspx/c04168365.pdf
Hope this helps!
Regards
Torsten.
__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.
__________________________________________________
No support by private messages. Please ask the forum!
If you feel this was helpful please click the KUDOS! thumb below!

07-05-2017 08:40 AM
Re: MSA2042 connect a P2000 to it
Thank you for the detail. If I understand your request, you are trying to decide between:
1) Upgrading the P2000 to an MSA 2042 (or replacing the P2000 with an MSA 2042)
2) Beefing up the local storage in the servers and using them without the MSA
Either option is fine but, as you pointed out, there are disaster recovery benefits to using the MSA as shared storage, with the ability to vMotion VMs between hosts and use the VMware HA feature.
So, for the sake of this discussion based on the information you've provided, let's assume you are looking to do the following:
1) Move to a current MSA model with caching and/or tiering. You don't care about any of your existing data, you simply want to be able to use your existing 11 x LFF SAS drives and rebuild a new MSA system with virtualized storage, tiering, and caching.
In this case, I would:
a) Purchase an MSA 2052 with the desired support and SFPs (do you need 1Gb RJ45 SFPs?)
https://www.hpe.com/h20195/v2/GetDocument.aspx?docname=a00008277enw
b) Configure the MSA as desired
c) Move your drives over
d) The MSA 2052 would ordinarily recognize your data but will not support it in this case, because you are using linear volumes. So, since you don't care about the data anyway (as you mentioned), blow away and reconfigure your disk groups and volumes, and restore your data.
e) Since you have such a limited number of disks, you'll need to carefully plan your pool assignments. If you wanted to do tiering rather than caching, then all your disks would need to be in the same pool to make effective use of the limited drives you have. In this case, one controller will essentially be serving up all the data and the other would be a failover. If you had a lot of disks, you could spread it out a bit more. This likely won't be a performance issue with your configuration. If you truly want to do caching, you have some more flexibility, because you could assign the NL disks and 1 SSD to one pool and the 15K disks and the other SSD to another pool.
f) Consider purchasing at least one more 3TB drive and using it as a spare. You are asking for trouble by using RAID5 on NL disks with no spare. RAID6 is recommended on NL disks due to their lower MTBF and longer rebuild times.
At the end of the day, you could also look at purchasing controllers (and maybe advanced data service licensing) and upgrading your P2000 shelf. However, chances are you might be out of warranty on your P2000 and just buying a brand new empty shelf might be the cleanest and most cost effective way to deal with this. There are lots of variables here and way more questions and answers that should be shared than can be efficiently communicated over a message board. I would suggest working with a qualified HPE partner (VAR) in your area who has Solution Architects on staff who can take some time and really dig into your requirements so that you get the most cost effective solution and don't have any surprises.
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
07-06-2017 12:13 AM
Re: MSA2042 connect a P2000 to it
Thanks for the replies guys.
Dear Mike, I will go with solution 1, I think, since, as I explained, I feel better with the MSA, so I plan to do this.
Also, with the MSA I gain 4 x 10Gb iSCSI ports for future hosts if I need them, but I want to ask something here as well. On the P2000, since I have hosts connected via FC, I am unable to share the same LUNs through its iSCSI ports. Can the 2042 do that? Connect hosts to the FC ports and simultaneously share the same LUNs through the iSCSI ports?
I arrived at the below after consideration and help from you guys.
1. I buy an MSA 2042 SFF with 4 x 8Gb FC transceivers; the model is, I believe, the Q0F72A, which has the transceivers included (for the 2052 I don't have a price in Greece yet, but I have read the specs and I don't believe I need the extra IOPS it gives, which is basically the main difference from the 2042... also I don't know if it can connect a P2000 as a disk enclosure).
2. I connect the current P2000 to the MSA 2042 without moving any of the disks. The 2042 will be the master and all my hosts will connect to it.
3. I buy 1 more 3TB disk, but instead of declaring it a spare I will just do RAID 10 across the 4 x 3TB disks, since the 6TB I will get is totally fine as file server space, and with this I gain more speed and can maybe survive the loss of 2 disks (as long as they are not in the same mirrored pair). With this addition the P2000 will be totally full, of course.
4. I format everything using the new virtual pools system.
What I don't know is whether tiering is better than SSD caching. From what I see, with only 2 x 400GB SSDs, maybe the best option for my environment, with the 30 VMs I described above, is to go with SSD caching.
For the future, I think if I need more space I will buy a D2700 and put 6TB disks in it, and if I need speed I will just buy SSDs and put them in the 2042.
Is everything fine with the above?
Thx
Chris
07-06-2017 09:01 AM
Re: MSA2042 connect a P2000 to it
Chris,
The MSA 2050 series will eventually replace the 2040 series, so they should be roughly the same cost. The 2050 series is not considered a higher-level model (though it does have some improved specs) but rather the next lifecycle generation.
I'm not sure when they will be available in Greece. However, if all you can get is the 2040 series and you need to act quickly, then that is fine. You'll have many happy years on the 2040 series.
The MSA 2040/2050 series has four available ports per controller. A host should always be hooked to at least one port on each controller for failover capability. This gives you a maximum of four connected hosts (in a direct connect scenario). Obviously, many more hosts when using Ethernet or FC switches.
You can make two ports on the controller be iSCSI and two be FC (mixed) if you wish. Of course, you can also do all four ports as the same protocol. You can only present a volume to ports belonging to the same protocol type. So, a single volume is presented to iSCSI or FC hosts but not both.
You cannot directly attach a P2000 controller shelf to an MSA 2040 series controller shelf. You would first need to replace the controllers with disk I/O modules. If you have a LFF shelf then the disk I/O module is part number AP844B, and you'll need two, along with two SAS cables. If your P2000 base shelf is a SFF shelf then you cannot replace the controllers with disk I/O modules and you'd need to buy a new drive enclosure shelf instead.
Finally, regarding tiering vs. caching: the performance benefit depends on your specific workloads. Tiering affects reads and writes; caching affects reads. So, heavy read workloads do well with caching. If you have tons of archival data then tiering can make sense, because all your stale data can eventually make its way down to the NL disks, and you grow capacity by adding NL. No data is ever pinned to caching disks, so there is no need to protect it; hence a minimum of 1 SSD per pool can be used. Data is pinned with tiering, so you do need to protect it; therefore a minimum of 2 SSDs per pool (in RAID1). In your case, if you are going to use tiering, just place all your disks in the same pool, since you don't have that many.
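The SSD minimums described above can be condensed into a tiny sketch (the helper is hypothetical, just encoding the rule as stated):

```python
# Read cache is not pinned, so 1 SSD per pool suffices;
# a tiering performance tier pins data, so it needs 2 SSDs (RAID1) per pool.
def min_ssds(mode, n_pools):
    per_pool = 1 if mode == "cache" else 2   # "tier" -> RAID1 pair
    return per_pool * n_pools

print(min_ssds("cache", 2))  # read cache in both pools: 2 SSDs
print(min_ssds("tier", 1))   # one pooled performance tier: 2 SSDs
print(min_ssds("tier", 2))   # tiering in both pools: 4 SSDs
```

So with only 2 SSDs in the box, tiering forces everything into a single pool, while caching could cover two pools.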
Best of luck!
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
07-06-2017 10:54 PM
Re: MSA2042 connect a P2000 to it
Dear Mike
Thx for everything; your posts cleared everything up for me.
I will ask for a price on the 2052 SFF model (Q1J03A), even though I can't find a concrete answer on whether I can connect a P2000 G3 to it in the same way, but I suppose it will work.
I have the P2000 LFF model, so I will buy 2 x AP844B disk I/O modules for it and I will be fine; as I see, they are very cheap.
As for tiering, from what I have read it's better for my environment since it's mixed use, but I have 1 last question and I promise I will not bother you again :-)
This has to do with the Final disk configuration
I will have 8 x 600GB SAS in RAID 10 and 4 x 3TB MDL, also in RAID 10, so after I create the RAIDs (both as RAID 10) I then enable tiering and in the settings declare:
The 2 SSDs in RAID1 as the Performance tier (owner A or B)
The 8 SAS in RAID10 as the Standard tier (owner: controller A)
The 4 MDL in RAID10 as the Archive tier (owner: controller B)
Then I create a disk group of everything and automatic tiering will do its magic, correct?
Is there any point, from a performance point of view, in dividing the groups between the 2 controllers, or is the above example my best-case scenario? Meaning, for example, the 8 SAS as 2 RAID 10 disk groups, each with 4 disks and each assigned to a different controller?
Really thx for everything.
07-07-2017 07:17 AM
Solution
Chris,
No problem :)
Sounds like you are coming up with a good plan. Tiering only works within the same pool. So, in your case of limited disks, you'd want to create a RAID1 SSD virtual disk group assigned to pool A (and controller A), a RAID10 SAS virtual disk group assigned to pool A (and controller A), and a RAID10 MDL virtual disk group assigned to pool A (and controller A).
Spreading your disk groups across the two pools (and thus controllers) is a general best practice for getting the best possible performance. However, you'd need a minimum of two more SSDs to create a RAID1 performance tier for pool B. Even then, you would only have 4 SAS and 2 MDL disks ever handling a request in a given pool. Of course, I can't be 100% sure, but I think your configuration would be best served by dumping everything into a single pool. Controller A will be the only one working; however, controller B will be available in a failure (as long as each host is connected to each controller). With your disk config, I think you have plenty of controller headroom, so you would not really see the benefits of spreading out over both pools/controllers.
A couple final notes:
1) Direct Connect
I know you said you are doing FC direct connect now and considering iSCSI direct connect.
I'm finding mixed info, at this time, as to whether it is officially supported. HPE SPOCK indicates iSCSI direct connect is not supported, and web searches indicate that it is VMware themselves who don't support iSCSI direct connect. I can't find anything definitive, but I am leaning towards it not being supported at this time (even though it may technically work).
2) Converting your shelf/moving drives
Consider the following PDF: https://www.hpe.com/h20195/v2/Getdocument.aspx?docname=a00015116enw
Note the table on page 4. The MSA 2050 discontinues support for a lot of older hardware. Without digging into this more extensively, I am afraid that you may run into issues with your older drives or enclosure on an MSA 2050. At best, you might be able to move your drives into a new MSA 2050 enclosure, but you'd need to compare against the table.
Now, see the same kind of document for upgrading to the MSA 2040 series at https://www.hpe.com/h20195/v2/GetPDF.aspx/4AA4-6830ENW.pdf
I think you'll see that you'll have a much smoother experience trying to bring older hardware over to an MSA 2040 series array. So, sorry for the initial recommendation to try the MSA 2050. If this were net new, then the MSA 2050 would be the best bet. For upgrading, the MSA 2040 series will be the better bet for your use case.
Best wishes.
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
07-09-2017 09:49 PM
Re: MSA2042 connect a P2000 to it
Dear Mike
Thank you.
After reading the upgrade path PDF that you linked, I see that the P2000 is not supported, but I also see that all 6G SAS disks and 6G MDL disks are not supported either. This is pretty bad on HP's side; I only have 11 x 6G disks, but I suppose many people have a ton of 6G disks.
Of course, a solution is to buy a D2700 or a 2050 enclosure (HP also dropped support for the 1040/2040 enclosure), but I am not really sure if the 2050/2052 will be happy to see them in there... there needs to be a clarification from HP's side on this, and the product is too new for there to be much info online.
I think I am finished... if the 2052 with a D2700 enclosure can take my old 6G MSA disks I will proceed like that; if not, I will go with the 2042, even though I always want to go with the latest product, but this is a risk I don't want to take.
Thank you for everything
07-10-2017 07:13 AM
Re: MSA2042 connect a P2000 to it
Chris, the 6G SAS drives will be your limiting factor as they are not supported in the MSA205x in any enclosure. As long as your drives are SAS, and not SATA, you should be good to go on the upgrade to MSA 204x.
Good luck on your project.
Mike
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
07-10-2017 10:48 PM
Re: MSA2042 connect a P2000 to it
Well, I don't give up on the 2052 yet...
I will ask for an official answer from HP because I'm not convinced yet.
The D2700 enclosure that is supported by the 2050/2052 accepts only 6G disks (I suppose you can put 12G disks in there and the enclosure will not have a problem, but it's useless), so how does the 2052 see those disks? Imagine also someone who has a P2000 array with 7 enclosures full of 6G SAS disks and wants to upgrade the controller system to a 2050/2052; does HP then kick them out? I don't believe that, and HP has never done that.
In my mind the 2050/2052 supports 6G disks in an enclosure just fine; I just want an official HP answer on this.
If I get an answer I will reply here.
Thx
Chris
07-11-2017 08:38 AM
Re: MSA2042 connect a P2000 to it
Chris, the SFF D2700 shelf itself supports 6G and 12G drives. The drives that are currently sold and qualified for the MSA 20xx, in the D2700, are all 12G drives. As per that document, 6G drives are not supported in the MSA 2050, regardless of what enclosure they are installed in. If you'd like to private message me your e-mail address, I can take this offline and see if I can connect you with your local HPE account team who may be able to help you with this further.
Thanks,
Mike
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
07-11-2017 11:46 PM
Re: MSA2042 connect a P2000 to it
Thx Mike
It's OK, I proceeded with the 2042. Specifically, I ordered these parts:
1 x Q0F72A HPE MSA 2042 SAN Dual Controller with Mainstream Endurance Solid State Drive SFF Storage (I ordered this because it bundles the 4 x 8Gb FC ports, which is what my current needs are at the moment, since my 4 servers have the 81Q single-port cards)
2 x AP844B HP StorageWorks P2000 drive enclosure I/O module
2 x 407337-B21 HP external mini-SAS to mini-SAS 1m cable
I'm waiting on a final price and availability, and I'm good to go.
Thank you again for everything.
07-17-2017 05:05 AM
Re: MSA2042 connect a P2000 to it
Dear Mike
I finished ordering the 2042, so I'm fine on that side... but I have one last question.
HP Greece is trying to find the AP844B, but is there a difference between AP844B and AP844A? I ask because I found, from another source, a full MSA P2000 G3 enclosure, but it has 2 x AP844A.
In my understanding they are both the same, and the A model is the embedded one that HP sells with an enclosure.
thx
Chris
07-17-2017 07:14 AM
Re: MSA2042 connect a P2000 to it
Chris,
The presence of an A, B, etc. on the end of a part number usually indicates different lifecycle generations of the same product. The A and B units *should* be functionally equivalent but had different introduction and discontinuance dates. Please note that the I/O module for this enclosure is no longer available for sale as a new part from HPE, so I would expect you would need to source the modules, and/or a shelf with modules, via a used/refurbished route.
Thanks
Mike
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
07-26-2017 07:24 AM
Re: MSA2042 connect a P2000 to it
Hi again
I have all the parts in my hands, and I did some preliminary checks before the big move, which I plan to do on Friday evening (hopefully finishing on Saturday at some point). The MSA 2042 correctly saw the P2000 enclosure that I bought, and everything is fine.
I believe I have read the whole internet over the last weeks, and all the best-practices PDFs etc., but there are still some things I need help with, though I know it's hard for anyone to give firm directives on this.
When I finish the hardware upgrade, I will proceed with the following procedure.
Create the disk groups:
2 x 400GB SSD in RAID1 (400GB) for Performance (I plan to buy 3 more soon, to implement 1 as read cache and add a 2nd RAID1 SSD pair for performance tiering)
8 x 600GB 15K SAS in RAID 10 (2400GB)
2 x 3TB 7.2K in RAID 1 (3000GB) (I have 3 disks and I ordered 1 more to implement RAID10 here too, but I will not have the extra disk until next week, so I will use the 2 disks now and add the other 2 next week, so I count plus 3TB in my final usable space)
Now with the above I will end up with 5800GB (8800GB with the addition of the extra 3TB), so I have some questions and need some guidance, since I don't have time to run tests with the array; whatever I do will go immediately into the production site.
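If it helps, the planned usable space works out as follows (a rough check in vendor GB, ignoring formatting overhead):

```python
# Checking the planned usable space above (GB, vendor units):
ssd_perf  = 2 * 400 // 2    # 2 x 400GB SSD in RAID1   ->  400
sas_std   = 8 * 600 // 2    # 8 x 600GB SAS in RAID10  -> 2400
mdl_now   = 2 * 3000 // 2   # 2 x 3TB MDL in RAID1     -> 3000
total_now = ssd_perf + sas_std + mdl_now
print(total_now)            # 5800
# once the 4th 3TB disk arrives and the MDLs become a 4-disk RAID10:
total_later = total_now + 3000
print(total_later)          # 8800
```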
So here are my questions.
1. Do I take literally what HP says about the power of 2? I refer to my 8 SAS disks; it's tricky to decide what config to do with them, and I can do these scenarios. I must also consider the optimal use of the 2 controllers.
A. One 8 x 600 disk group in RAID 10 on controller A, which gives me 8x read speed and 4x write speed. With this, even though it's faster, I don't use the 2nd controller at all.
B. One 4 x 600 disk group in RAID 10 on controller A and another 4 x 600 disk group in RAID 10, again on controller A (to do auto tiering with the 2 SSDs that I have now). This gives me 2 disk groups, each with 4x read speed and 2x write speed, which is slower than scenario A.
C. One 4 x 600 in RAID 10 on controller A and one 4 x 600 in RAID 10 on controller B. The 2nd RAID here will have no SSDs, so no performance gain until I buy 2 or more SSDs.
Accordingly, the 4 x 3TB disks will follow each scenario's split.
Which is better? With scenario A, everything together is better for speed, but I will not use the 2nd controller at all until I buy more SSDs for performance tiering. But according to HP, if I want to add extra disks I need to add disks with equal performance characteristics, and this means with scenario A I must add 8 x 600 disks, while with scenario B I can add 4 x 600, which is easier.
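The read/write multipliers in scenarios A-C follow from the usual RAID10 rule of thumb (a rough model only; real throughput depends on controller, cache, and workload):

```python
# Rough throughput multipliers for an n-disk RAID10 (striped mirrored pairs):
def raid10_multipliers(n_disks):
    reads = n_disks        # reads can be served from every spindle
    writes = n_disks // 2  # each write hits both halves of a mirror pair
    return reads, writes

print(raid10_multipliers(8))  # scenario A: (8, 4)
print(raid10_multipliers(4))  # scenarios B and C, per group: (4, 2)
```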
2. Where do I connect my 4 hosts? Keep in mind I have the HP 81Q cards, which are single-connection cards, and this goes together with the disk considerations above.
Whatever disk scenario I choose from the above, I don't have real speed measurements for the penalty when a host is connected to a controller that does not serve its volumes. HP says something, but it's very general.
A. All hosts on the 1st controller.
B. 2 hosts on each controller.
3. What is the best volume sizing for my hosts? Until now my 33 VMs connected to the MSA P2000 use 2 volumes of 2TB (I have the 8 x 600 in RAID 5 now), but all my VMs use only 1.8TB, so I will create 2.5TB of space for them to allow for future needs, and I can do this as:
A. 3 x 850GB volumes (one volume with the sub-LUN tiering setting set to Performance and the other 2 with no affinity)
B. 2 x 1TB (one volume with the sub-LUN tiering setting set to Performance and the other one with no affinity)
C. 1 x 3TB (with sub-LUN tiering set to Performance)
I set the one volume's sub-LUN tiering to Performance simply because I will put the most crucial VMs there (SQL, DC etc.)
4. Where do I enable thin provisioning? I have mixed feelings on this.
A. On the MSA and on all the VMs
B. On the MSA and thick on the VMs
C. Not on the MSA, but on the VMs only
D. Not on the MSA, and thick on the VMs
That was all :-) and I think, if no one has better guidance, I will proceed with these answers to my questions, considering I will buy 4 more SSDs (2 for tiering in pool B and the other 2, one in each pool, as read cache):
1C
2B
3A
4B
Thx in Advance
Chris
08-03-2017 03:03 AM
Re: MSA2042 connect a P2000 to it
I'm just writing to inform you that everything went fine.
After some tests (in the little time I had), I ended up with this configuration:
Pool A
2 x 400GB SSD RAID1 as Performance
8 x 600GB SAS 15K RAID10 as Standard
The above gave me 2.8TB of space, where I made 2 volumes:
1TB as EsxiVMSystem01, where I place the most crucial VMs. This volume has sub-LUN tiering set to Performance.
1.5TB as EsxiVMSystem02, where I place the rest of the VMs. This volume has sub-LUN tiering set to default.
Pool B
4 x 3TB 7.2K MDL RAID10
The above gave me 6TB of usable space, where I made 2 volumes:
2TB as EsxiVMData01, where I place the data (no SQL or anything, just SMB files that users work on).
3.8TB as EsxiVMBackup01, which holds the backup of everything at work.
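For reference, a quick tally of the headroom left in each pool under this layout (my own sketch, vendor GB, ignoring pool overhead):

```python
pools = {
    "PoolA": {"capacity_gb": 400 + 2400,   # SSD RAID1 + SAS RAID10
              "volumes_gb": {"EsxiVMSystem01": 1000, "EsxiVMSystem02": 1500}},
    "PoolB": {"capacity_gb": 6000,         # 4 x 3TB MDL RAID10
              "volumes_gb": {"EsxiVMData01": 2000, "EsxiVMBackup01": 3800}},
}
for name, p in pools.items():
    free = p["capacity_gb"] - sum(p["volumes_gb"].values())
    print(name, free)   # headroom left in each pool
```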
My 4 hosts are all connected to the 1st controller.
Generally, the difference in speed is huge. I did tests without the SSDs, and the same disks are 20-30% faster than on the P2000; with the SSDs on, in my final setup, the speed difference is huge. As I watch the stats in the MSA, I see everything going to the SSDs, and the stats show 97% SSD utilization.
Kudos to HP for this... I believe it's a great solution for everyone who doesn't want to spend a ton of money on all-flash arrays, like us here.
I plan to buy 2 more SSDs for sure, to expand the Pool A performance tier, and to add 2 to Pool B (even though speed is not needed there... maybe 1 disk as read cache only).
Thank you all for your help.
08-03-2017 01:47 PM
Re: MSA2042 connect a P2000 to it
Hi Chris,
Congrats on your upgrade; glad that things seem to be working out to your liking. I'm curious, though. By putting your nearline disks in a separate pool, they are not part of the tiering strategy on Pool A. This means that cold data can never migrate all the way down to the nearline disks. Certainly OK if that is your intention.
You also said that all your hosts are connected to just the first controller (assuming controller A). This would typically mean that your hosts wouldn't have access to volumes on Pool B (if using ALUA), since Pool B should be owned by controller B and you say your hosts don't have access to controller B. So, if this is truly the case, then you are probably sending IO to the controller B volumes via controller A. This is considered a non-optimized path and will increase the backend IO between the controllers and potentially decrease performance.
All in all, it sounds like you have this working in some way that you are happy with; these are just a couple of notes I thought of.
Take care,
Mike
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
08-04-2017 04:28 AM
Re: MSA2042 connect a P2000 to it
Thx Mike, your help was invaluable, believe me.
Now
Congrats on your upgrade and glad that things seem to be working out to your liking. I'm curious though. By putting your nearline disks in a separate pool, they are not part of the tiering strategy on Pool A. This means that cold data can never migrate all the way down to the nearline disks. Certainly OK if that is your intention.
Yes i did that after big thought because i have the HP 81Q cards at this momment and with this in mind my 4 hosts need to be fast in PoolA where the VM system disks relies and also the SSD disks for performance.
Also, I didn't want to increase the usable space of Pool A, because then the 2 SSDs that give me 400GB would not be enough, according to HP, to get good performance from tiering. You need SSDs covering at least 10% of usable space (I know it's not the holy grail, but I respected it). So if I had put the 4 MDLs in Pool A, the usable space would suddenly have gone from 2.8TB to 8.8TB, all in Pool A, with under 5% SSD for performance tiering. I also considered splitting the 4 MDLs, with 2 in Pool A and 2 in Pool B, but that way I would lose disk speed, since the more disks you group together, the better throughput you get.
Also, the MDL disks contain only SMB data (plus backup space that is used at night), which in my case is accessed by Windows terminals (virtual or physical) that don't need extra speed. From tests as we speak, I get 100MB/sec or more from the MDLs, which is totally acceptable (with the P2000 G3 I was getting under 60MB/sec). If I buy 2-3 more SSDs to give Pool A more performance-tier capacity, I will reformat everything and redo all the work, but I really don't think it's needed.
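The 10% SSD guideline mentioned above can be sanity-checked with a quick back-of-the-envelope calculation (a sketch only, using the approximate capacities from this thread; real MSA usable capacity will differ slightly):

```python
# Rough check of the "SSD should be >= 10% of pool usable space" guideline,
# using the capacities discussed above (decimal units, overhead ignored).
ssd_usable_gb = 400          # 2 x 400 GB SSD in RAID 1

pool_a_only_gb = 2800        # Pool A without the MDL disks (2.8 TB usable)
pool_a_plus_mdl_gb = 8800    # Pool A if the 4 MDLs were added (8.8 TB usable)

for label, usable in [("Pool A alone", pool_a_only_gb),
                      ("Pool A + MDLs", pool_a_plus_mdl_gb)]:
    pct = 100 * ssd_usable_gb / usable
    verdict = "OK" if pct >= 10 else "below the 10% guideline"
    print(f"{label}: SSD is {pct:.1f}% of usable space ({verdict})")
```

This reproduces the reasoning above: roughly 14% SSD with Pool A alone, but under 5% if the MDLs were folded in.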
I include a screenshot of the speeds I get from HD Tune.
The left image is from a system disk of a server that lives on Pool A, and the second is from the SMB server that uses the MDLs in Pool B (I didn't have any other way to measure speed... if there is one, I am willing to run tests for sure).
You also said that all your hosts are connected to just the first controller (assuming controller A). This would typically mean that your hosts wouldn't have access to volumes on Pool B (if using ALUA), since Pool B should be owned by controller B and you say your hosts don't have access to controller B. So, if this is truly the case, you are probably sending IO to the Pool B volumes via controller A. This is considered a non-optimized path; it will increase the back-end IO between the controllers and can decrease performance.
Yes, this is the case. All my hosts are on the 1st controller, and to access the 2nd pool, controller A gets what it needs from controller B. I know this is not optimal, but it works, and the speed penalty is not big in my scenario. For a moment I thought of creating a linear group from the MDLs and assigning it to controller A, but then I decided to use the 2nd controller even like this, because we must not forget that the 2nd controller has extremely fast cache, and this way it gets used (I hope I am not wrong).
Generally, I plan to buy more disks, and I am thinking:
A. 2 x 400GB SSDs, so I can use all 4 and end up with 800GB of performance-tier space, where I have a RAID question :-). When I buy the 2 SSDs, do I just create another RAID 1 in Pool A, or do I delete the first SSD group I have now and create a new RAID 10 (or maybe RAID 5) from the 4 SSDs together? Is there any performance gain in doing this?
B. 8 x 600GB SAS to add to Pool A. The question here is whether I really need to buy 15K disks, since my first 8 x 600GB disks are 15K. I know HP says it's supported, but it's better not to mix disks of different speeds in the same tier. Maybe I can buy 8 x 600GB 10K disks and assign them as the Archive tier in Pool A.
Thanks for your notes and your help, really... I will reconsider some things for sure.
Kind Regards
Chris
08-04-2017 05:46 AM
Re: MSA2042 connect a P2000 to it
Hi Chris,
I'd be curious to see those Pool B test results with hosts having direct access to controller B. It would be interesting to see what the true penalty is of not using an optimized path. Ideally, you should consider getting additional FC cards in each server, or replacing them with 2-port cards. In your case, having only a single 1-port card in your servers will cause a complete outage if controller A goes down. It also prevents you from ever doing an online firmware upgrade on the MSA (which is possible when hosts have access to both controllers). Anyhow, that is a side point.
Your question 1:
If you are going to buy 2 more SSDs for Pool A, then you can simply add them as another RAID1 group and the system will take care of everything for you. That is the beauty of pools and wide striping. If you wanted to reconfigure to RAID 5, it would be destructive (you'd have to move data off the array and back on). Using RAID 5 will give you better space efficiency (you'll lose one drive's worth of space instead of 50% as in RAID1). This can be highly desirable given the cost of SSDs. With any parity-based RAID (5, 6), you will suffer a performance penalty on writes compared to RAID1, but probably not enough for you to notice in your environment. Most customers will use RAID 5, 6, or some variation thereof on SSD in order to get better efficiency, but there is certainly nothing wrong with RAID1/10. In your case, if you're going to stick with mirrored drives, then simply add another RAID1 disk group to the pool and let the array take care of rebalancing. If you wanted to set the four drives up as RAID10, it would be destructive (like RAID5) and you wouldn't gain anything in the end.
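The space-efficiency trade-off described above can be illustrated with a simple calculation (a sketch only; real usable capacity is slightly lower after formatting and metadata overhead):

```python
# Usable capacity of 4 x 400 GB SSDs under the layouts discussed
# (simplified; mirroring costs 50%, parity RAID costs one drive's worth).
drive_gb, n = 400, 4

two_raid1_gb = (n // 2) * drive_gb   # two RAID 1 pairs: half the raw space
raid10_gb    = (n // 2) * drive_gb   # RAID 10: same 50% as mirrored pairs
raid5_gb     = (n - 1) * drive_gb    # RAID 5: lose one drive to parity

print(f"2 x RAID1: {two_raid1_gb} GB usable")
print(f"RAID10   : {raid10_gb} GB usable")
print(f"RAID5    : {raid5_gb} GB usable")
```

So the 4-drive RAID 5 yields 1200GB usable versus 800GB for any mirrored layout, which is the efficiency gain Mike is referring to.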
Your question 2:
The MSA treats SSDs as the "Performance" tier, 10/15k drives as the Standard tier, and 7.2k drives as the Archive tier. There is no way around that. So, if you add 10k drives to Pool A, the MSA will treat them as an addition to your 15k Standard tier. You can certainly add 10k drives to your 15k pool, but it isn't "recommended". Whenever you add drives of a different size or speed to your pool (or wildly change the RAID characteristics of the vdisks within it), you introduce the potential for inconsistent performance in the pool. Will it be noticeable in your case? Hard to tell, but you should be aware of the potential and do your best to keep to best practices.
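The fixed drive-type-to-tier mapping described above can be written down as a simple lookup table (names here are illustrative labels, not MSA configuration values; the array assigns tiers automatically by drive type):

```python
# Drive-type -> tier mapping as described in the reply above.
# The key point: 10k and 15k drives land in the SAME Standard tier,
# so 10k drives cannot be forced into the Archive tier.
TIER_BY_DRIVE = {
    "SSD":      "Performance",
    "15K SAS":  "Standard",
    "10K SAS":  "Standard",
    "7.2K MDL": "Archive",
}

print(TIER_BY_DRIVE["10K SAS"])   # lands in Standard, alongside the 15k disks
```

This is why the idea in question B (assigning 10K disks as an Archive tier) wouldn't work as hoped.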
Take care,
Mike
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
08-04-2017 10:50 AM
Re: MSA2042 connect a P2000 to it
Thank you again.
As for your thoughts, I completely agree in general.
I'd be curious to see those Pool B test results with hosts having direct access to controller B. It would be interesting to see what the true penalty is of not using an optimized path. Ideally, you should consider getting additional FC cards in each server, or replacing them with 2-port cards. In your case, having only a single 1-port card in your servers will cause a complete outage if controller A goes down. It also prevents you from ever doing an online firmware upgrade on the MSA (which is possible when hosts have access to both controllers). Anyhow, that is a side point.
I want to do this for sure, even though when I first created the volumes and ran some tests, I didn't see any speed difference on the MDLs regardless of which controller a host was connected to. At the start I had 2 hosts on controller A and the other 2 on controller B. Either way, now that I am finished I will test that for sure, but not in the next 2 weeks, since today I go on vacation :-) and I hope the system stays up and healthy.
As for the SSDs in my 1st question, I had read somewhere, and also saw on a site, that you gain with RAID 10 at 4 disks or more. Maybe that site is wrong, but if you enter 4 disks as RAID 10 there, it tells you that you get 4x read speed and 2x write speed, which is logical.
In comparison, if you make 2 RAID 1 groups, each one gives you 2x read and 1x write, which seems the same as the first scenario. What I don't know is how the MSA behaves: is data written to both identical volumes simultaneously, or does the MSA decide where to write? My gut says it writes to one and then the other, like round robin or something; correct me if I am wrong, because this isn't documented anywhere (or I haven't found it). The same question applies to reads. So if I am not wrong, and the site I mentioned is also not wrong, then creating a RAID 10 from 4 disks would be faster, because on a write the MSA writes to all 4 disks, and on a read all 4 disks serve data (or something like that).
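The back-of-envelope multipliers being compared here can be written out explicitly (a sketch of the theoretical best case only; it ignores controller caching and the wide striping the MSA actually does, which is addressed in the next reply):

```python
# Theoretical best-case throughput multipliers for 4 disks,
# relative to a single disk (idealized, no controller effects).
raid10_4disk = {"read": 4, "write": 2}          # stripe across 2 mirrored pairs
two_raid1    = {"read": 2 + 2, "write": 1 + 1}  # two independent RAID1 groups

# In aggregate the two layouts come out the same on paper; the open
# question above is how the MSA spreads IO across the two RAID1 groups.
print(raid10_4disk == two_raid1)  # prints: True
```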
For my 2nd question, your reply is what I was expecting, so maybe I will wait to first get dual-port cards for all the hosts; then the 8 x 600GB disks will go to Pool B with 2 or 3 SSDs, and I will be fine.
And a question again :-) why doesn't the MSA let you delete a disk group when the pool on that controller contains another disk group with enough free space to absorb the deletion? I mean, it would have been perfect to be able to delete it and have the MSA copy everything to the other disk group first, no matter how long that takes... It's an idea, maybe a stupid one, but it would have been cool if it could do that...
Thanks again.
Kind Regards
Chris
P.S. Sorry for my English in general.
08-04-2017 11:09 AM
Re: MSA2042 connect a P2000 to it
Chris,
RAID 10 is mirroring plus striping. With thin provisioning and pools on the MSA, multiple RAID1 vdisks are striped (wide striped) by the array controller, which virtually chunks up the volume rather than doing it purely at the RAID level. So performance should be similar whether you have a RAID10 vdisk or multiple RAID1 vdisks backing the same pool. I suppose you'd really have to run an exact test to tell, but I don't think it's worth the effort.
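The wide-striping idea can be sketched as follows (an illustration of the concept only, not the MSA's actual chunk placement algorithm or chunk sizes):

```python
# Illustrative sketch: with wide striping, the controller distributes a
# volume's chunks across all disk groups in the pool, so two RAID1 groups
# end up striped much like one RAID10 set would be at the RAID level.
from collections import Counter

disk_groups = ["RAID1-A", "RAID1-B"]   # two mirrored pairs backing one pool
num_chunks = 8                         # hypothetical volume chunks to place

# Round-robin placement across the groups (simplified model).
placement = {chunk: disk_groups[chunk % len(disk_groups)]
             for chunk in range(num_chunks)}

print(placement)
print(Counter(placement.values()))     # chunks spread evenly across groups
```

Because sequential chunks alternate between the two mirrored pairs, reads and writes engage all four spindles either way, which is why the two layouts perform similarly in practice.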
Regarding your other question, you should be able to remove a vdisk so long as the pool has enough space to survive the removal. This will kick off a VDRAIN process and will take time.
See page 148 of
http://h20564.www2.hpe.com/hpsc/doc/public/display?docLocale=en_US&docId=emr_na-c04957376
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
08-04-2017 04:04 PM
Re: MSA2042 connect a P2000 to it
Oh, this is new to me...
So at first I made a mistake formatting my 8 x 600GB all together as one RAID 10... Now, to add a new set of disks there, I need to add 8 more. Oh well, in a few weeks I will redo everything, I suppose, at least for test purposes... And as for the removal of a disk group, this is great news, really.
Thank you
09-29-2017 06:13 AM
Re: MSA2042 connect a P2000 to it
Hi, I'm here again.
I have a question, because I plan to buy 1 more server to connect to my MSA 2040.
At the moment I have 4 servers connected to the MSA with single connections, using 8Gb transceivers in the MSA, but I wonder if I can now buy 4 x 10Gb (iSCSI) transceivers for the MSA and connect the new server with a dual-port 10Gb card. I know I can do that, but my question is whether I will be able to access the volumes that are shared to the 8Gb hosts.
The reason I ask is that, back when I tried the same thing on my old P2000 G3 FC/iSCSI, a volume that was shared to the FC ports couldn't be shared to the iSCSI ports at the same time.
Can I do that on the MSA 2040?
Kind Regards
Chris.
09-29-2017 06:23 AM
Re: MSA2042 connect a P2000 to it
Hi Chris,
You cannot present the same volume to both iSCSI and FC ports; it's one or the other.
Also, be careful with your iSCSI (Ethernet) hookups. Directly connecting a VMware host to the MSA is not supported the way it is with FC; you'd need an Ethernet switch.
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
10-02-2017 11:51 PM
Re: MSA2042 connect a P2000 to it
Thank you... I thought the same, even though I don't really get the reason for it.
I am aware of the Ethernet switch prerequisite, but I have read on the internet that many people successfully connected the MSA directly.
Either way, I proceeded to buy the same 4-pack of 8Gb transceivers as the ones I have now, and an HP 82Q card for the new server.
Thanks for your help.