
DL380 G9: Enable HDD uid on CentOS using hpssacli

 
Ronny_
Occasional Advisor

Re: DL380 G9: Enable HDD uid on CentOS using hpssacli

So it would make sense to have an LV for each disk?

I am running a Ceph cluster on these machines, so there is no need for RAID or anything similar. But if I can use the UID when each disk is set up as an LV, and would no longer have this renumbering issue, then that could be a reasonable configuration change to avoid these problems.

Does an LV (RAID 0) slow such a system down compared to HBA mode?
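
For reference, a minimal sketch of what a per-disk LV setup and UID toggling could look like with hpssacli. The controller slot and drive address below are assumptions; check your own layout first:

    # Show the current controller and drive layout (slot 0 is an assumption)
    hpssacli ctrl slot=0 show config

    # Create a single-drive RAID 0 logical drive for one physical disk
    # (drive address 1I:1:1 is an example; substitute your own)
    hpssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0

    # Toggle the UID LED on the resulting logical drive
    hpssacli ctrl slot=0 ld 1 modify led=on
    hpssacli ctrl slot=0 ld 1 modify led=off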

TTr
Honored Contributor

Re: DL380 G9: Enable HDD uid on CentOS using hpssacli

Do you use any disk redundancy at the OS level? The H240 is a low-end controller and does not have caching acceleration. I would try RAID 5 with 4 and 8 disks, do some testing, and compare those results with what you get now. Is your Ceph workload mostly reads?

The only other way to identify the disks is to record the WWN for each slot before you deploy the disks and then match the WWNs within the OS.
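
Something along these lines could do that matching from the OS side; the device name and controller slot are examples, and the WWID field name may vary by hpssacli version:

    # WWNs as the kernel sees them
    lsblk -o NAME,WWN,SERIAL

    # Or query a single device through udev (device name is an example)
    udevadm info --query=property --name=/dev/sda | grep -E 'ID_WWN|ID_SERIAL'

    # WWNs per slot from the controller side (slot 0 is an assumption)
    hpssacli ctrl slot=0 pd all show detail | grep -E 'physicaldrive|WWID'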

I would think the RAID 0 LVs would add another layer of I/O overhead compared with HBA-mode disks.

Jimmy Vance
HPE Pro

Re: DL380 G9: Enable HDD uid on CentOS using hpssacli

I'm not aware of any test results showing what the difference is (if any) between HBA mode and individual RAID 0 arrays. The H240 has no cache, so there is no advantage there. If you were using a 'P' series controller, the controller cache would come into play (in RAID mode; cache is disabled in HBA mode).
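
For anyone who wants to benchmark both modes, switching is a one-liner in hpssacli (slot 0 is an assumption, and the change only takes effect after a reboot):

    # Check whether HBA mode is currently on (slot 0 is an assumption)
    hpssacli ctrl slot=0 show | grep -i hba

    # Switch modes; a reboot is required for the change to take effect
    hpssacli ctrl slot=0 modify hbamode=on
    hpssacli ctrl slot=0 modify hbamode=off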

No support by private messages. Please ask the forum! 
Jimmy Vance
HPE Pro

Re: DL380 G9: Enable HDD uid on CentOS using hpssacli


@TTr wrote:

Do you use any disk redundancy at the OS level?


From a previous post, this is a Ceph configuration. Anything but RAID 0 is frowned upon in a Ceph configuration.

No support by private messages. Please ask the forum! 
Ronny_
Occasional Advisor

Re: DL380 G9: Enable HDD uid on CentOS using hpssacli

We use disk redundancy only for the OS disks (two disks at the back of the server).

The disks we are using with Ceph are just handed to Ceph "as is": no RAID, no LV. That's the configuration recommended by Ceph, because you should run one OSD process per disk.

I am just investigating how Ceph identifies disks. I do not know whether it uses the /dev/sdX name or a WWN/UUID.
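
A quick way to check from the OS side; this is only a sketch, the device name and OSD id are examples, and ceph-disk applies to the older pre-ceph-volume tooling:

    # Persistent identifiers udev assigns to a disk (sda is an example)
    ls -l /dev/disk/by-id/ | grep sda

    # What Ceph itself records for an OSD (OSD id 0 is an example)
    ceph osd metadata 0 | grep -E '"devices"|backend'

    # Older releases: list disks and their OSD assignment
    ceph-disk list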