Modified: 01-20-2014 03:00 PM
Hotfix XS61E034 - XenServer 6.1.0
This hotfix resolves the following issues:
- LVM-based SRs (for example, LVMoHBA, LVMoiSCSI, and LVM) incorrectly allow customers to create VHD files up to 2048 GB in size, and file-based SRs (for example, NFS and EXT) allow customers to resize VHD files up to 2048 GB in size. However, writing more than 2043 GB of data to a virtual disk image (VDI) can lead to VDI corruption. Installing this hotfix ensures that VMs with VDIs containing more than 2043 GB of data receive I/O errors, thereby eliminating the chance of silent VDI corruption.
- When deleting snapshots, the VHD coalesce utility may fail (in cases where the VDI was resized after the snapshot was taken), and customers will be unable to reclaim disk space.
- When multipathing is enabled, using iSCSI SRs may cause slow storage performance. This is because the system selects an incorrect I/O scheduler for the device mapper device.
- Infrastructure issues in multipath environments may prevent VMs from accessing storage.
- In some cases, multipathing is shown as "Not active" in XenCenter even when there are active paths present in the corresponding SR.
- If a storage array reports IPv6 addresses for iSCSI, one of the following errors may be displayed: "The SR failed to complete the operation" or "ValueError: too many values to unpack or received signal: SIGSEGV".
- Storage garbage collection will not occur if any of the hosts in the pool are disabled.
- In IntelliCache-enabled environments, when using a newly created master VDI, the cache files associated with the existing master VDIs will not be deleted until the corresponding master VDI is deleted. This can lead to an accumulation of unused cache files, which can consume a large amount of disk space on the local SR. This hotfix provides an automated way to delete unused cache files after they exceed a specified maximum age.
To delete the unused cache files on the XenServer host, run the following command:
/opt/xensource/sm/cleanup.py -u <SR_UUID> -c <max_hours>
Where <SR_UUID> refers to the UUID of the local SR and <max_hours> refers to the maximum time, in hours, after which any unused cache files are deleted.
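As an illustration, the cleanup invocation from the step above might look like the following. The SR UUID and the 24-hour threshold are placeholder values; substitute your own:

```shell
#!/bin/sh
# Placeholder values -- substitute the UUID of your local SR and the desired age limit.
SR_UUID="39baf126-a223-4b25-8d35-7ab329eb4b94"   # hypothetical SR UUID
MAX_HOURS=24                                      # remove cache files unused for 24+ hours

# The cleanup invocation as it would be run on the XenServer host:
CLEANUP_CMD="/opt/xensource/sm/cleanup.py -u $SR_UUID -c $MAX_HOURS"
echo "$CLEANUP_CMD"
```

Running this with a lower <max_hours> value reclaims space sooner, at the cost of re-populating the cache for master VDIs that are still occasionally used.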
This hotfix also includes the following previously released hotfixes:
Customers should use either XenCenter or the XenServer Command Line Interface (CLI) to install this update. After the update has been installed, the server must be restarted for it to take effect. As with any software update, please back up your data before applying this hotfix. Citrix recommends updating all hosts within a pool sequentially. Upgrading of hosts should be scheduled to minimize the amount of time the pool runs in a "mixed state" where some hosts have been upgraded and some have not. Running a mixed pool of updated and non-updated hosts for general operation is not supported.
NOTE: The attachment to this article is a zip file. It contains both the hotfix update package, and the source code for any modified open source components. The source code is not necessary for hotfix installation: it is provided to fulfil licensing obligations.
Installing the update using XenCenter
- Download the update to a known location on a computer that has XenCenter installed.
- In XenCenter, on the Tools menu, select Install New Update. This displays the Install Update wizard.
- Click Next to start the Wizard.
- Click Add to upload a new update.
- Browse to the location where you downloaded the hotfix, select it, and then click Open.
- From the list of updates select XS61E034.xsupdate and then click Next.
- Select the hosts you wish to apply the hotfix to, and then click Next.
- Follow the recommendations to resolve any upgrade prechecks and then click Next.
- Choose how to perform the post-update tasks. In the Post update options section, select whether to carry them out automatically or manually, and then click Install update.
- When the installation process is complete, click Finish to exit the wizard.
- Customers who chose to perform the post-update tasks manually must ensure that they do so after installing the hotfix.
- If customers chose to perform the post-update tasks automatically, the XenCenter-controlled upgrade process reboots each host sequentially, starting with the Pool Master. Where possible, VMs are migrated to other running hosts to avoid VM downtime. While the Pool Master is rebooting, XenCenter is unable to monitor the pool.
Installing the update using the xe CLI
- Download the update file to a known location.
- Extract the xsupdate file from the zip.
- Upload the xsupdate file to the Pool Master by entering the following command (where <hostname> is the Pool Master's IP address or DNS name):
xe patch-upload -s <hostname> -u root -pw <password> file-name=<path_to_update_file>\XS61E034.xsupdate
XenServer assigns the update file a UUID, which this command prints. Note the UUID.
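When scripting the upload, the printed UUID can be captured directly. The sketch below only builds and prints the command; the hostname, password, and file name are placeholder values, and on a live deployment the command substitution shown in the comment would invoke xe itself:

```shell
#!/bin/sh
# Placeholder connection details -- substitute your Pool Master and credentials.
HOSTNAME="pool-master.example.com"
PASSWORD="secret"
UPDATE_FILE="XS61E034.xsupdate"

# On a live system you would capture the printed UUID for the next step:
#   PATCH_UUID=$(xe patch-upload -s "$HOSTNAME" -u root -pw "$PASSWORD" file-name="$UPDATE_FILE")
UPLOAD_CMD="xe patch-upload -s $HOSTNAME -u root -pw $PASSWORD file-name=$UPDATE_FILE"
echo "$UPLOAD_CMD"
```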
- Apply the hotfix to all hosts in the pool, specifying the UUID of the hotfix:
xe -s <hostname> -u root -pw <password> patch-pool-apply uuid=c5b98886-370b-45c6-8bf9-8715e901724b
- Verify that the update was applied by using the patch-list command.
xe patch-list -s <hostname> -u root -pw <password> name-label=XS61E034
If the update has been successful, the hosts field will contain the UUIDs of the hosts to which this patch was successfully applied. This should be a complete list of all hosts in the pool.
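That "complete list" check can also be done programmatically by comparing the patch's hosts field against the pool's host list. The sketch below uses placeholder UUID strings; the commented-out xe invocations show where the real values would come from (this assumes xe's params= and --minimal output conventions):

```shell
#!/bin/sh
# compare_lists: succeed when two comma-separated UUID lists match as sets.
compare_lists() {
    a=$(printf '%s' "$1" | tr ',' '\n' | sort)
    b=$(printf '%s' "$2" | tr ',' '\n' | sort)
    [ "$a" = "$b" ]
}

# On a live pool these would come from the CLI, for example:
#   APPLIED=$(xe patch-list name-label=XS61E034 params=hosts --minimal)
#   ALL=$(xe host-list params=uuid --minimal)
APPLIED="host-uuid-2,host-uuid-1"   # placeholder values for illustration
ALL="host-uuid-1,host-uuid-2"

if compare_lists "$APPLIED" "$ALL"; then
    RESULT="Patch applied to all hosts in the pool."
else
    RESULT="WARNING: some hosts are missing the patch."
fi
echo "$RESULT"
```

Sorting both lists before comparing makes the check independent of the order in which xe happens to print the UUIDs.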
- To verify in XenCenter that the update has been applied correctly, select the Pool, and then click the General tab. This displays the Pool properties. In the Updates section, ensure that the update is listed as Applied.
- The hotfix is applied to all hosts in the pool, but it does not take effect until each host has been rebooted. For each host, migrate the VMs that you wish to keep running, and shut down the remaining VMs before rebooting the host.
Files
Hotfix Filename: XS61E034.xsupdate
Hotfix File md5sum: 2fedbccbb73b35c17c4d7e0139d23c3c
Hotfix Source Filename: XS61E034-src-pkgs.tar.bz2
Hotfix Source File md5sum: 752a5d4700cff0e8f7361fef1aec4c26
Hotfix Zip Filename: XS61E034.zip
Hotfix Zip File md5sum: 98c82a258c8f9c049b9f90208c502bf0
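The per-host reboot sequence described in the installation steps can be sketched as a script. The version below is a dry run that only prints the commands it would issue; the host UUIDs are placeholders (on a real pool they would come from xe host-list, ordered so the Pool Master is first), and it assumes host-evacuate is used to migrate the running VMs off each host before it is rebooted:

```shell
#!/bin/sh
# Dry-run sketch: print the per-host commands rather than executing them.
# HOSTS is a placeholder list; on a real pool it would come from
# `xe host-list params=uuid --minimal`, ordered so the Pool Master is first.
HOSTS="master-uuid member1-uuid member2-uuid"

PLAN=""
for H in $HOSTS; do
    # Disable the host, migrate its running VMs elsewhere, then reboot it
    # so the hotfix takes effect on that host.
    PLAN="${PLAN}xe host-disable uuid=$H
xe host-evacuate uuid=$H
xe host-reboot uuid=$H
"
done
printf '%s' "$PLAN"
```

Rebooting hosts one at a time, Pool Master first, matches the sequential update guidance above and minimizes the time the pool spends in a mixed state.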