EVA 4100 - VMware vSphere 4 - Max size vdisk LUN without degrading performance
03-01-2010 09:08 AM
Hello, I know the accepted best practice for configuring EVA vdisks is to create several LUNs of roughly 300 GB each and present them to VMware. But no document states the maximum size beyond which performance degrades. First case: I create 3 vdisks of about 300 GB each and present them to VMware.
Second case: I create a single 900 GB vdisk and present it to VMware. Do I get degraded performance in the second case because VMware must manage a 900 GB volume? Thanks, regards, Mario
03-01-2010 09:16 AM
Re: EVA 4100 - VMware vSphere 4 - Max size vdisk LUN without degrading performance
Hi Mario,
Without knowing the details (number of physical servers, number of virtual machines, etc.) it's probably difficult to say...
Cheers,
Rob
03-01-2010 09:55 AM
Solution
There is no simple rule of "large VMFS = bad performance".
A single VMFS partition can be up to 2 TB minus 512 bytes (= 4,294,967,295 blocks of 512 bytes). And then you can 'link' up to 32 VMFS partitions (or extents) into one large datastore... (well, I think you need to tune the VMkernel a bit, but most people don't do that anyway ;-).
However, a VMFS (and all its extents) uses a single I/O queue. So if you put many VMs on it that require lots of I/O, it is possible to run into a problem.
Another problem can be SCSI reservations: VMFS metadata changes are serialized using SCSI reservations. Many events can cause that: VM power-on/off, snapshot creation/deletion, file creation/deletion/expansion (think about growth of delta files or thin provisioning).
So is large(r) VMFS = bad? Not necessarily. The other extreme would be to create a single VMFS per VM, but how do you size that VMFS if you want to use snapshots from time to time?
A good argument FOR large(r) VMFS is that it consolidates free space.
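The partition-size arithmetic above can be sanity-checked in a few lines (a minimal sketch, assuming binary units where 1 TB = 2**40 bytes):

```python
# VMFS addresses at most 2**32 - 1 blocks of 512 bytes per partition,
# which is where the "2 TB minus 512 bytes" figure comes from.
BLOCK = 512
MAX_BLOCKS = 2**32 - 1                 # 4,294,967,295 blocks
max_partition = MAX_BLOCKS * BLOCK     # bytes in one VMFS partition/extent

assert max_partition == 2 * 2**40 - 512   # exactly 2 TB minus one block

# Chaining up to 32 extents into one datastore gives just under 64 TB
max_datastore = 32 * max_partition
print(max_datastore / 2**40)           # ~64.0 TB
```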
03-05-2010 07:03 AM
Re: EVA 4100 - VMware vSphere 4 - Max size vdisk LUN without degrading performance
Notes from when I went to VMware HQ in the UK and one of their senior engineers gave us a great techie talk - this was when ESX 3.5 was around though, so it might be different for vSphere.
number of VMs per LUN
6 - 8 is the "sweet spot" for optimal performance
also, past this point you begin to encounter SCSI reservation errors/locks
absolute best performance - 1 VM per LUN
personally I use 1 VM per LUN for mission-critical VMs and 6 VMs per LUN for second-tier systems
disk size
500 GB is optimal - past this you see degradation
remember to use pvscsi nowadays for Windows and Linux - you'll see 12% more throughput and 18% less CPU use - I posted a load of articles on pvscsi at my blog:
http://raj2796.wordpress.com/2010/02/18/when-to-use-vmware-pvscsi-and-when-to-use-lsi-logic/
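Those rules of thumb (at most 6 VMs per shared LUN, LUNs capped around 500 GB, a dedicated LUN per mission-critical VM) can be sketched as a simple first-fit placement. This is a hypothetical illustration only; all VM names and sizes below are made up:

```python
# Hypothetical sketch of the sizing rules above: at most 6 VMs per LUN,
# LUNs capped at 500 GB, and one dedicated LUN per mission-critical VM.
MAX_VMS_PER_LUN = 6
MAX_LUN_GB = 500

def place_vms(vms):
    """vms: list of (name, size_gb, critical) tuples -> list of LUN dicts."""
    luns = []
    for name, size_gb, critical in vms:
        if critical:
            # Tier-1 VM: gets its own LUN, nothing else is placed on it.
            luns.append({"vms": [name], "used_gb": size_gb, "dedicated": True})
            continue
        for lun in luns:  # first-fit onto an existing shared LUN
            if (not lun["dedicated"]
                    and len(lun["vms"]) < MAX_VMS_PER_LUN
                    and lun["used_gb"] + size_gb <= MAX_LUN_GB):
                lun["vms"].append(name)
                lun["used_gb"] += size_gb
                break
        else:  # no shared LUN had room: start a new one
            luns.append({"vms": [name], "used_gb": size_gb, "dedicated": False})
    return luns

# Example: one critical database VM plus seven 60 GB web VMs (invented inputs)
layout = place_vms([("db01", 200, True)] + [(f"web{i}", 60, False) for i in range(7)])
```

With these invented inputs the critical VM lands on its own LUN and the seven web VMs split 6 + 1 across two shared LUNs, so three LUNs in total.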
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP