03-08-2013 10:46 PM
DEVICE IDENTIFICATION
How do I identify device utilization?
$ sar -d 1 1
12:37:14 PM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
12:37:15 PM cciss/c0d0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:37:15 PM cciss/c0d1 38.00 52.00 32.00 39.96 2.93 7.34 2.42 96.50
12:37:15 PM cciss/c1d0 8.00 0.00 240.00 30.00 0.11 14.38 4.50 3.60
12:37:15 PM cciss/c1d0p1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:37:15 PM cciss/c1d0p2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:37:15 PM cciss/c1d0p3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:37:15 PM cciss/c1d0p4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/cciss/c1d0p1 20315812 9843680 9423492 22% /
/dev/cciss/c1d0p2 30470144 5563044 23334340 52% /opt
/dev/cciss/c1d0p3 48755500 2110412 44128508 55% /var
/dev/cciss/c1d0p4 101086 25293 70574 27% /boot
tmpfs 12277428 0 12277428 0% /dev/shm
$
From the above output, what are c0d0, c0d1 and c1d0? Are these the root file system or RAID disks? Please explain.
03-09-2013 01:08 AM
Re: DEVICE IDENTIFICATION
Apparently your system has two SmartArray RAID controllers.
/dev/cciss/cXdYpZ = Controller X, (logical) Disk Y, Partition Z. The numbers X, Y and Z all start from zero.
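That naming scheme can be pulled apart with plain shell parameter expansion; a small sketch (the device path below is just an example, not taken from your system):

```shell
# Decode a cciss device path into controller / logical disk / partition.
# A whole-disk device like c1d0 has no "p" suffix, so the partition
# field is left empty in that case.
dev=/dev/cciss/c1d0p2               # example path (assumption)
name=${dev##*/}                     # c1d0p2
ctrl=${name#c};  ctrl=${ctrl%%d*}   # controller number (X)
disk=${name#*d}; disk=${disk%%p*}   # logical disk number (Y)
case $name in
  *p*) part=${name##*p} ;;          # partition number (Z)
  *)   part="" ;;                   # whole-disk device, no partition
esac
echo "controller=$ctrl disk=$disk partition=$part"
```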
If the "pZ" part is not listed, then it means the whole-disk device for the appropriate logical disk, i.e. c1d0 = the first logical disk on the second SmartArray controller.
Note that all of these refer to logical disks. In other words, each "logical disk" is a RAID set defined to a particular SmartArray RAID controller. If you want to know about physical disks, you'll need a SmartArray configuration tool like hpacucli.
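If hpacucli is installed on the box, it can list the mapping from logical drives to physical disks directly; a sketch (the slot number is an assumption, check the output of the first command for the real one):

```shell
# Full view: every controller with its arrays, logical and physical drives.
hpacucli ctrl all show config

# Physical drives behind one controller; slot=0 is an assumed example.
hpacucli ctrl slot=0 pd all show
```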
Your filesystems seem to be all on partitions of the first logical disk of the second controller (c1d0). Based on the single set of sar output, it looks like the second controller can perform quite a bit better than the first one: the c0d1 disk on the c0 controller is running at 96.50% utilization while reading 52 sectors/s and writing 32 (a total of 84 sectors/s). On the other hand, the c1 controller has written 240 sectors in the same interval, and is not even breaking a sweat at 3.60% utilization.
Based on only the information you posted, it is not possible to know what is using the c0 controller and its disks, and why the whole-disk device c1d0 shows significant utilization although the per-partition statistics are all zeroes.
Are you running a database that could be using logical disks in raw mode? Or is there some kind of a disk-based backup or disk-cloning operation running? Any of these might be accessing the disks through the whole-disk device, causing the per-partition statistics to remain at zeroes.
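One way to check for that from the numbers alone, assuming you capture a `sar -d` snapshot to a file (the name `sar.out` is an assumption): sum the per-partition tps and compare it with the whole-disk figure. A large gap points at something doing I/O through the whole-disk device.

```shell
# Compare whole-disk vs summed per-partition transfer rates for c1d0.
# Field 4 of "sar -d" output is tps; "sar.out" is an assumed capture file.
awk '
  $3 == "cciss/c1d0"   { whole = $4 }     # whole-disk device line
  $3 ~ /^cciss\/c1d0p/ { parts += $4 }    # per-partition lines
  END { printf "whole=%.2f partitions=%.2f\n", whole, parts }
' sar.out
```

To see which process actually holds the whole-disk device open, `lsof /dev/cciss/c1d0` (if lsof is installed) is another quick check.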