01-25-2006 05:18 AM
Increasing disk capacity in a RAID 1+0 and RAID 5 setup
Hi,
I have around ten HP ML370 and DL585 servers, all configured with either RAID 1+0 or RAID 5+ADG. Some of the arrays use 72GB disks, which I have to upgrade to 146GB. Suppose I remove the mirror disk, replace it with a 146GB disk, wait for the sync to finish, then replace the primary disk with a 146GB disk and wait for that sync as well. Is there a way to make the RAID array utilise the full 146GB? Please also note that the systems run RHEL AS3 with the disks under LVM control and ext3 filesystems.
Any help will be appreciated.
2 REPLIES
01-25-2006 06:02 AM
Re: Increasing disk capacity in a RAID 1+0 and RAID 5 setup
Shalom,
If these are two-disk systems, pulling out a drive will likely halt them.
If they are hardware RAID 5, your procedure can work, one disk at a time.
If your RAID is software RAID, pulling a disk will likely halt the system. If there is spare capacity in the servers, you can pull off a rolling upgrade by adding disks, extending the volume groups, and moving the data to the new disks.
This is, however, a wee bit risky.
Your best bet is to use Ghost, http://www.mondorescue.org, or http://www.acronis.com to image the systems. Then you should be able to bring them down, replace the disks, and restore the systems.
If there are databases on the disks, the database or other transactional applications must be shut down at backup time to get a clean backup.
Acronis is the closest thing I've seen on Linux to Ignite, which could handle this task easily if these were HP-UX servers.
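The Mondo Rescue approach above can be sketched roughly as follows (a hedged example; the destination path and media size are assumptions, and flags vary by version, so check `man mondoarchive` before relying on it):

```shell
# Back up the whole system to bootable ISO images (run as root).
# -O  perform a backup;  -i  write ISO images
# -d  destination directory (assumed to live on storage that
#     survives the disk swap, e.g. an NFS mount or second array)
# -s  size of each ISO image
mondoarchive -Oi -d /mnt/backup -s 4480m

# After replacing the disks, boot from the first ISO and restore
# interactively, or choose "nuke" mode for an automatic restore.
```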
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
01-26-2006 02:50 AM
Re: Increasing disk capacity in a RAID 1+0 and RAID 5 setup
I will try to be a bit clearer this time:
The RAID on these disks is hardware RAID, and all the disks are connected to either Smart Array 5300 (internal) or Smart Array 6400 (external) controllers. I have two physical arrays configured as:
Array A --> 1 72 GB disk --> 1 72 GB disk (RAID 1+0)
Array B --> 2 72 GB disks --> 2 72 GB disks (RAID 1+0)
Array A has one logical drive and Array B has two logical drives. Additionally, LVM is configured on the Red Hat Linux systems.
My objective is to replace these 72GB disks with 146GB disks. I have the following theoretical plan, which I am not sure will work.
Step 1. Replace the mirror pair of both arrays to 146GB disks so the layout is now as follows:
Array A --> 1 72 GB disk --> 1 146 GB disk (RAID 1+0)
Array B --> 2 72 GB disks --> 2 146 GB disks (RAID 1+0)
Wait for the sync to happen.
Step 2. Replace the primary disk with 146GB disks so the layout now is:
Array A --> 1 146 GB disk --> 1 146 GB disk (RAID 1+0)
Array B --> 2 146 GB disks --> 2 146 GB disks (RAID 1+0)
Wait for the sync to happen.
After these two steps I am still utilising only the first 72GB of each disk, since the array was created at that size.
How can I extend the array to utilise the full 146GB disks? Will I see the new space as unutilised space? If yes then should I proceed as follows:
Step 3. Create new logical drives in each array.
Step 4. Reboot the machine (is it required?)
Step 5. Create a new physical volume. How? Will I see a new /dev/cciss/cxdx kind of drive?
Step 6. Extend the Volume group(s) to the new physical volume. I hope it is just the standard vgextend command.
Step 7. Extend the Logical volume(s) on the Volume Groups. Should be the standard lvextend command?
Step 8. Extend the filesystem on the logical volumes. I have ext3 filesystems. How do I extend them? Can they be extended on the fly, or must I unmount the filesystem first?
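Steps 5 through 8 could be sketched as below, assuming the new logical drive appears as the next /dev/cciss/cXdY device; the names vg00, lvdata, /data, and the size are placeholders, not taken from the thread:

```shell
# Step 5: initialise the new logical drive as an LVM physical volume
pvcreate /dev/cciss/c0d1

# Step 6: add the new PV to the existing volume group
vgextend vg00 /dev/cciss/c0d1

# Step 7: grow the logical volume into the new space
lvextend -L +70G /dev/vg00/lvdata

# Step 8: on RHEL AS3-era e2fsprogs, resize2fs works offline only,
# so unmount the filesystem first.
umount /data
e2fsck -f /dev/vg00/lvdata   # resize2fs requires a clean fsck first
resize2fs /dev/vg00/lvdata   # grow ext3 to fill the enlarged LV
mount /data
```

Whether an online ext3 resize tool is available depends on the exact e2fsprogs/kernel versions installed, so verify on the development system first.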
I want to double-check these steps because they are all production systems and I do not have a spare system to try the sequence on first. Of course I will take a backup and run through it on the development system first, but the app developers would still love to have as little downtime as possible.
Cheers.
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2025 Hewlett Packard Enterprise Development LP