Operating System - HP-UX

How many more LUNs can be added to the system without exhausting the device files

 
SOLVED
sujit kumar singh
Honored Contributor

How many more LUNs can be added to the system without exhausting the device files

Hi,

My server is an rx8620 running HP-UX 11.23, configured as a single partition with all 4 cell boards and 24 FC cards. I am using all the disks in LVM as part of the system VGs, and I am well within the limits of max_vgs (<255) and max_pv per VG (<255).
Below is the installed patch level:
sdwhpdb1:PRD:/dev/rdsk # swlist | grep -i QPK
QPKAPPS B.11.23.0806.072 Applications Patches for HP-UX 11i v2, June 2008
QPKBASE B.11.23.0806.072 Base Quality Pack Bundle for HP-UX 11i v2, June 2008

For each disk added from the EMC Symmetrix disk array I get 10 device files, which are the active paths presented from the Symmetrix side.

My questions are:

1) How many more Symmetrix LUNs can be added to this system before device files can no longer be created for use in adding disks to the VGs?
2) Per EMC LUN I am getting 10 device files.
3) I have already reached an ext_bus instance count of 251.
4) The limit on 11.23 is 8192 active paths, and the number of all visible paths is limited to 16894.

Other information on the current disks and device files (sdwhpdb1):
sdwhpdb1:PRD:/root # strings /etc/lvmtab| grep dsk | wc -l
455
sdwhpdb1:PRD:/dev/rdsk # ll | wc -l
5657
sdwhpdb1:PRD:/dev/rdsk # ioscan -kfnC disk | grep -i dsk | wc -l
5297
sdwhpdb1:PRD:/dev/rdsk # powermt display dev=all | grep -i alive | wc -l
5888
sdwhpdb1:PRD:/dev/rdsk # powermt display dev=all | grep -i active | wc -l
5370
sdwhpdb1:PRD:/dev/rdsk # ioscan -kfnC ext_bus | grep -i ext_bus | wc -l
123
sdwhpdb1:PRD:/dev/rdsk # ioscan -kfnC ext_bus | grep -i ext_bus| sort -k 2,2
ext_bus 0 0/0/0/2/0 c8xx CLAIMED INTERFACE SCSI C1010 Ultra Wide Single-Ended
ext_bus 1 0/0/0/2/1 c8xx CLAIMED INTERFACE SCSI C1010 Ultra Wide Single-Ended
ext_bus 2 0/0/0/3/0 c8xx CLAIMED INTERFACE SCSI C1010 Ultra Wide Single-Ended
ext_bus 3 0/0/0/3/1 c8xx CLAIMED INTERFACE SCSI C1010 Ultra160 Wide LVD
ext_bus 4 1/0/0/2/0 c8xx CLAIMED INTERFACE SCSI C1010 Ultra Wide Single-Ended
ext_bus 5 1/0/0/2/1 c8xx CLAIMED INTERFACE SCSI C1010 Ultra Wide Single-Ended
ext_bus 6 1/0/0/3/0 c8xx CLAIMED INTERFACE SCSI C1010 Ultra Wide Single-Ended
ext_bus 7 1/0/0/3/1 c8xx CLAIMED INTERFACE SCSI C1010 Ultra160 Wide LVD
ext_bus 58 0/0/8/1/0.99.30.19.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 59 0/0/8/1/0.99.30.255.1 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 62 0/0/8/1/0.99.138.19.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 63 0/0/8/1/0.99.138.255.1 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 80 1/0/2/1/1.100.44.19.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 81 1/0/2/1/1.100.44.255.1 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 82 1/0/2/1/1.100.45.19.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 83 1/0/2/1/1.100.45.255.1 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 96 1/0/8/1/0.41.3.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 97 1/0/8/1/0.41.3.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 102 1/0/8/1/1.31.3.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 103 1/0/8/1/1.31.3.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 108 1/0/10/1/0.41.3.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 109 1/0/10/1/0.41.3.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 114 1/0/10/1/1.31.3.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 115 1/0/10/1/1.31.3.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 136 0/0/8/1/0.99.30.19.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 137 0/0/8/1/0.99.138.19.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 152 1/0/8/1/0.41.114.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 153 1/0/8/1/0.41.114.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 154 1/0/8/1/1.31.114.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 155 1/0/8/1/1.31.114.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 156 1/0/10/1/0.41.114.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 157 1/0/10/1/0.41.114.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 158 1/0/10/1/1.31.114.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 159 1/0/10/1/1.31.114.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 160 0/0/4/1/0.41.3.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 161 0/0/6/1/1.31.96.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 162 0/0/4/1/1.31.3.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 163 0/0/6/1/1.31.97.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 164 0/0/6/1/0.41.96.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 165 1/0/6/1/0.41.9.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 166 0/0/6/1/0.41.97.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 167 1/0/6/1/1.31.9.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 168 0/0/6/1/1.31.97.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 169 0/0/6/1/0.41.96.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 170 1/0/6/1/1.31.9.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 171 1/0/6/1/0.41.9.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 172 1/0/6/1/0.41.104.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 173 1/0/6/1/0.41.104.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 174 1/0/6/1/1.31.104.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 175 1/0/6/1/1.31.104.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 176 0/0/10/1/1.31.224.255.8 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 177 0/0/10/1/1.31.225.255.8 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 178 1/0/1/1/0.41.224.255.8 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 179 1/0/1/1/0.41.225.255.8 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 180 0/0/8/1/1.41.96.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 181 0/0/8/1/1.41.97.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 182 1/0/2/1/0.31.96.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 183 1/0/2/1/0.31.97.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 184 0/0/6/1/1.31.97.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 185 0/0/6/1/0.41.96.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 186 1/0/6/1/1.31.9.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 187 1/0/6/1/0.41.9.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 188 1/0/1/1/0.41.225.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 189 0/0/8/1/1.41.97.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 190 0/0/8/1/1.41.97.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 191 1/0/2/1/0.31.96.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 192 1/0/2/1/0.31.96.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 193 0/0/6/1/0.41.96.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 194 0/0/6/1/1.31.97.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 195 0/0/8/1/1.41.97.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 196 1/0/2/1/0.31.96.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 197 1/0/6/1/1.31.9.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 198 1/0/6/1/0.41.9.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 199 0/0/4/1/1.31.3.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 200 0/0/4/1/0.41.3.0.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 201 1/0/4/1/0.31.80.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 202 1/0/4/1/0.31.80.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 203 1/0/4/1/0.31.80.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 204 1/0/4/1/0.31.80.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 205 1/0/4/1/1.41.80.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 206 1/0/4/1/1.41.80.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 207 1/0/4/1/1.41.80.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 208 1/0/4/1/1.41.80.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 209 0/0/4/1/0.41.229.255.8 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 210 0/0/4/1/1.31.229.255.8 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 211 0/0/4/1/0.41.23.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 212 0/0/4/1/1.31.20.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 213 0/0/4/1/0.41.23.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 214 0/0/4/1/0.41.23.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 215 0/0/4/1/0.41.23.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 216 0/0/4/1/1.31.20.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 217 0/0/4/1/1.31.20.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 218 0/0/4/1/1.31.20.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 219 0/0/2/1/0.41.88.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 220 0/0/2/1/1.31.88.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 221 0/0/2/1/1.31.88.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 222 0/0/2/1/1.31.88.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 223 0/0/2/1/1.31.88.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 224 0/0/2/1/0.41.88.0.1 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 225 0/0/2/1/0.41.88.0.2 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 226 0/0/2/1/0.41.88.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 227 0/0/10/1/1.31.108.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 228 0/0/1/1/0.41.108.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 229 0/0/1/1/0.41.109.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 231 0/0/1/1/0.41.110.255.0 fcd_vbus CLAIMED INTERFACE FCP Device Interface
ext_bus 234 0/0/4/1/1.31.229.128.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 235 0/0/4/1/0.41.229.128.0 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 236 1/0/6/1/1.31.104.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 237 1/0/6/1/0.41.104.0.3 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 238 0/0/2/1/1.31.88.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 239 0/0/2/1/0.41.88.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 240 0/0/4/1/0.41.23.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 241 0/0/4/1/1.31.20.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 242 0/0/6/1/0.41.96.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 243 0/0/6/1/1.31.97.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 244 0/0/8/1/1.41.97.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 245 1/0/2/1/0.31.96.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 246 1/0/4/1/0.31.80.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 247 1/0/4/1/1.41.80.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 248 1/0/6/1/0.41.9.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 249 1/0/6/1/0.41.104.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 250 1/0/6/1/1.31.9.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
ext_bus 251 1/0/6/1/1.31.104.0.4 fcd_vbus CLAIMED INTERFACE FCP Array Interface
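
Based on these numbers, a rough back-of-the-envelope estimate (just a sketch, assuming the 8192 active-path limit and that each new LUN keeps consuming ~10 active paths):

ACTIVE=5370                         # from: powermt display dev=all | grep -i active | wc -l
echo $(( (8192 - ACTIVE) / 10 ))    # ~282 more LUNs before the active-path limit

Though with ext_bus already at instance 251 out of 255, that looks like the wall we will hit first.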

regards
sujit
sujit kumar singh
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

Hi, one correction:

the model is 9000/800 rp8420 and the OS is 11.23.
regards
sujit
Stephan.
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

> 3) I have already reached the ext_bus count of 251.

I'm not sure about the other limits, but this one will cause you a lot of trouble.

The limit for ext_bus devices is 255; if you go over it there will be device files, but they will not respond as expected. In other words, your ioconfig gets corrupted.

Fixing this requires HP support, or at least a tool called ioconfig_dump, which is available from WTEC; perhaps the support guys will hand it over if you ask for it.
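
A quick way to see how close you already are (a sketch using only standard commands; the awk filter assumes the usual ioscan -kfnC layout where the instance number is in the second column):

ioscan -kfnC ext_bus | awk '$1 == "ext_bus" { print $2 }' | sort -n | tail -1   # highest instance in use
ioscan -kfnC ext_bus | grep -c ext_bus                                          # instances allocated

The two numbers can differ a lot, since instance numbers are not reused automatically: a box with only 123 buses can still be sitting at instance 251, exactly as shown above.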
Hein van den Heuvel
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

I cannot speak with authority, but it seems to me that you have correctly enumerated all the current restrictions.

Some questions...

1) Did you take one of those disks as an example and EXPLAIN every single path?

For example, I can think of 4 * 2, where 4 is 1 (and only 1) card per cell, and 2 is the number of controller-side active links.

2) Do you have multiple paths from the same controller-presented unit to a single cell?

3) Are you using zoning to restrict the visibility of certain HBAs to certain units?

4) Could the system be using zoning more aggressively, to get down to, say, 2 (HBA) x 2 (controller) connections per unit?

5) How many units are we talking about?

10) That's a good few FC cards to play with and keep straight and working!
One must assume you are anticipating a serious I/O load?
Any indication of the total MB/second or IOPS this is configured to support? ("the max"?)

6) LVM mirroring?

7) Is the storage being carved up into units that are relatively small, to be recombined into volume groups?
(since you mention max_pv per VG)
Maybe 10 x 50 GB to make a 500 GB LVM pool?

8) Why not make fewer, larger units?
Maybe 4 x 125 GB?

9) Why not have a single large unit per VG?
There are several so-so good reasons, but which one is believed to apply?

Hope these questions help some,
Regards,
Hein.
Hein van den Heuvel
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files



Sujit, any useful questions in there?

Anyone else with thoughts on this?

Hein.
Viktor Balogh
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

Hi Sujit,

> with all 4 cell boards and 24 FC cards

11i v2 doesn't support native multipathing, so you only need 2 FC paths to every LUN (in order to avoid a SPOF).

If you follow this approach, you won't exceed the ext_bus limit of 255. We also have some servers where the zoning had to be adjusted to make every LUN visible through only 2 FC cards.
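
To see how far you currently are from that 2-paths-per-LUN target, something like this gives the average (a sketch; it assumes the powermt output format shown later in this thread, where the per-device "state=alive" header line also matches a grep for alive and so has to be subtracted):

LUNS=$(powermt display dev=all | grep -c "Logical device ID")   # one header line per LUN
ALIVE=$(powermt display dev=all | grep -ci alive)               # path rows plus the per-LUN header
echo "approx. alive paths per LUN: $(( (ALIVE - LUNS) / LUNS ))"

Taking the 5888/5370 counts posted above at face value, that works out to roughly 10 paths per LUN, which matches what Sujit reports.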

****
Unix operates with beer.
Steven E. Protter
Exalted Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

Shalom,

The actual answer here depends on the storage.

Some difficult-to-work-with storage systems will top out at 8 LUNs.

EMC and HP storage will normally top out at 255, but the number can be higher.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
sujit kumar singh
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

Hi,

Hein >>>>>>

Answers to a few of the questions, as far as I can give them:

1) Did you take one of those disks as an example and EXPLAIN every single path?

Here is what one LUN looks like in powermt display. For each EMC LUN being assigned we are getting 12 active paths, as follows (PowerPath is taking care of the multipathing):
Logical device ID=135F
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
193 0/0/6/1/0.41.96.0.3.14.7 c193t14d7 FA 3aA active alive 0 0
194 0/0/6/1/1.31.97.0.3.14.7 c194t14d7 FA 14bA active alive 0 0
195 0/0/8/1/1.41.97.0.3.14.7 c195t14d7 FA 14aA active alive 0 0
196 1/0/2/1/0.31.96.0.3.14.7 c196t14d7 FA 3bA active alive 0 0
197 1/0/6/1/1.31.9.0.3.14.7 c197t14d7 FA 14cB active alive 0 0
198 1/0/6/1/0.41.9.0.3.14.7 c198t14d7 FA 3cA active alive 0 0
203 1/0/4/1/0.31.80.0.3.14.7 c203t14d7 FA 3bB active alive 0 0
207 1/0/4/1/1.41.80.0.3.14.7 c207t14d7 FA 14aB active alive 0 0
215 0/0/4/1/0.41.23.0.3.14.7 c215t14d7 FA 3aB active alive 0 0
218 0/0/4/1/1.31.20.0.3.14.7 c218t14d7 FA 14bB active alive 0 0
223 0/0/2/1/1.31.88.0.3.14.7 c223t14d7 FA 14dA active alive 0 0
226 0/0/2/1/0.41.88.0.3.14.7 c226t14d7 FA 3dB active alive 0 0
236 1/0/6/1/1.31.104.0.3.14.7 c236t14d7 FA 3cB active alive 0 0
237 1/0/6/1/0.41.104.0.3.14.7 c237t14d7 FA 14cA active alive 0 0


For example, I can think of 4 * 2, where 4 is 1 (and only 1) card per cell, and 2 is the number of controller-side active links.

2) Do you have multiple paths from the same controller-presented unit to a single cell?



3) Are you using zoning to restrict the visibility of certain HBAs to certain units?
Yes, zoning is in place.

4) Could the system be using zoning more aggressively, to get down to, say, 2 (HBA) x 2 (controller) connections per unit?
>> That needs to be checked with the storage support team for limitations.

5) How many units are we talking about?

>> We are planning to add 30 LUNs immediately, each 69 GB in size, from the EMC array.

10) That's a good few FC cards to play with and keep straight and working!
One must assume you are anticipating a serious I/O load?
Any indication of the total MB/second or IOPS this is configured to support? ("the max"?)

6) LVM mirroring?

These disks are for a big data VG and do not have LVM mirroring in place.

7) Is the storage being carved up into units that are relatively small, to be recombined into volume groups?
(since you mention max_pv per VG)
Maybe 10 x 50 GB to make a 500 GB LVM pool?

We shall work on this if we find that we are falling short.

8) Why not make fewer, larger units?
Maybe 4 x 125 GB?
We shall work on this if we find that we are falling short, but the possibility is low.

9) Why not have a single large unit per VG?
There are several so-so good reasons, but which one is believed to apply?

Stephan >>
The limit for ext_bus devices is 255; if you go over it there will be device files, but they will not respond as expected, in other words your ioconfig gets corrupted. --- Is there any documentation for this?


Fixing this requires HP support, or at least a tool called ioconfig_dump, which is available from WTEC; perhaps the support guys will hand it over if you ask for it.
>> What does this tool do exactly?


SEP >>

> EMC and HP storage will normally top out at 255, but the number can be higher.

Can you please elaborate on this?



Regards
sujit
Viktor Balogh
Honored Contributor
Solution

Re: How many more LUNs can be added to the system without exhausting the device files

see 'man 1m ioinit':

"For ext_bus class of devices, specified instance numbers should not exceed 255."
****
Unix operates with beer.
Viktor Balogh
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

For legacy device special files this limit is 256 controllers (instance numbers 0-255).

See page 5, on agile addressing:

http://www.docs.hp.com/en/MassStorageStack/The_Next_Generation_Mass_Storage_Stack.pdf

"The size of the I/O configuration was limited by the format of the DSF minor number and naming
convention. With reserved bits for the card instance, target, and LUN, only 256 controllers, 16 targets per
controller, and 8 LUNs per target were allowed. Interface drivers that supported the larger addressing
model of SCSI-3 devices had to create virtual controllers, virtual targets, and virtual LUNs."
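
In other words, the legacy cXtYdZ name directly encodes those reserved bits, which makes the arithmetic easy to sanity-check (a sketch; c193t14d7 is taken from the powermt listing earlier in this thread):

# c193t14d7 -> controller 193 (of 256), target 14 (of 16), LUN 7 (of 8)
echo $(( 256 * 16 * 8 ))    # 32768 addressable legacy device files in total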
****
Unix operates with beer.
TTr
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

> 2) Per EMC LUN I am getting 10 device files.

Why? Do these translate to 10 absolutely distinct paths? Probably not. In most cases you can have 4 distinct paths per LUN. Anything above that, even if the device files are distinct, is not truly distinct, meaning that when there is a failure (cable, port, HBA, etc.) half of the paths will fail together.

You need to clean up your zoning and LUN presentation if you want to save some device files so that you can add more LUNs.
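
To spot the LUNs that are presented with more paths than they need before reworking the zoning, a per-device count along these lines should do (a sketch; it assumes the powermt output format shown earlier in the thread):

powermt display dev=all | awk '
  /Logical device ID/ { id = $NF }     # remember the current device, e.g. ID=135F
  /active/ && /alive/ { n[id]++ }      # count its live path rows
  END { for (d in n) print n[d], d }' | sort -rn | head   # most over-presented LUNs first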
Stephan.
Honored Contributor

Re: How many more LUNs can be added to the system without exhausting the device files

> Fixing this requires HP support, or at least a tool called ioconfig_dump, which is available from WTEC; perhaps the support guys will hand it over if you ask for it.
>> What does this tool do exactly?

ioconfig_dump generates an ASCII file from your ioconfig, which allows you to remove unused/unwanted entries.
After the changes you can generate a new binary ioconfig from that ASCII file, or you can use the automatic cleanup mode.

Exchanging the ioconfig requires a reboot.
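
For completeness: if only a handful of instances need renumbering rather than a full cleanup, 'man 1m ioinit' also describes reassigning instance numbers from an input file. From memory the input lines are "hw_path class instance" (please verify against the man page first; the hw_path and target instance below are purely hypothetical examples):

echo "1/0/6/1/1.31.104.0.4 ext_bus 9" > /tmp/newinst   # move this bus to an unused instance number
ioinit -f /tmp/newinst -r                              # -r reboots so the new numbering takes effect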