HPE EVA Storage
06-12-2010 12:50 AM
What is the reason a 500 GB hard disk went bad?
At our site we replace many hard disks, and we need to find out the reason for that.
3 REPLIES
06-12-2010 12:58 AM
Re: What is the reason a 500 GB hard disk went bad?
Hi sherif12,
>> What is the reason a 500 GB hard disk went bad?
You need to define what "bad" means here.
Do you mean bad sectors on the disk, the disk being slow, or something else?
Regards,
Murali
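To pin down what "bad" means in practice, the drive's SMART counters are a good starting point: they distinguish media defects from a drive that is merely slow. Below is a minimal sketch, not EVA-specific; the two attribute names are standard SMART attributes, but the classification thresholds are illustrative assumptions, not vendor guidance.

```python
# Minimal sketch: classify a drive's health from two standard SMART
# attributes. Thresholds are illustrative assumptions -- tune them for
# your environment and confirm against your vendor's documentation.

def classify_drive(smart: dict) -> str:
    """Return a rough health label from raw SMART attribute values."""
    reallocated = smart.get("Reallocated_Sector_Ct", 0)
    pending = smart.get("Current_Pending_Sector", 0)
    if pending > 0:
        # Sectors waiting to be remapped: reads are failing right now.
        return "failing (unreadable sectors pending)"
    if reallocated > 0:
        # Media defects already remapped; watch for growth over time.
        return "degraded (remapped sectors present)"
    return "healthy (no media defects reported)"

print(classify_drive({"Reallocated_Sector_Ct": 12,
                      "Current_Pending_Sector": 0}))
```

On most systems these raw values can be read with a tool such as smartmontools' `smartctl -A`.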
Let There Be Rock - AC/DC
06-12-2010 11:34 PM
Re: What is the reason a 500 GB hard disk went bad?
This is a very difficult question to answer without a lot more information. It's kind of like asking why your car won't start without providing any diagnostic information, the vehicle's maintenance history, or even what kind of car you have.
Can you describe the environment where the drives are used? At one company I worked for, I noticed a fairly strong correlation between hot machine-room sections and failed disks. In some cases rows of servers were set up so that the exhaust air from one row fed into the intake of the next. Combine that with densely racked servers and occasional air-conditioner failures, and some spots could get quite hot even though the average room temperature was fairly reasonable. Thermal imaging cameras made this quite apparent. Changing the orientation and racking density of the server racks and adjusting the layout of the floor-tile vents helped fix this. Avoiding buildup of dust and lint around fans also helps them work more efficiently, and checking for failed fans might be worthwhile.
The endless pressure to reduce component costs can force manufacturers into design tradeoffs that hurt the reliability of an entire production run of drives, or of certain batches. It would be helpful to keep statistics on whether the failures you see are tied to particular vendors, drive models, or drive date codes, and whether there is any pattern in the symptoms when the drives fail (audible noise, bad data, no response, etc.).
Depending on how much you are willing to invest in identifying the root cause, you might hire a failure-analysis company to look at your particular failed drives. That can quickly get expensive, but if your problem is severe enough it might be an option. The drive vendor might also be interested in working with you on this if there is a clear pattern of failure.
Vibration might be another source of problems. Although modern drives are much more resistant to it than they used to be, something like a laptop that gets banged around a lot may well have a higher failure rate than a server in a machine room.
Since you posted this in a SAN forum, I'm assuming you are asking about a SAN, but it would be helpful to confirm that.
Unstable power (improper input voltage, brownouts, lightning storms, cars crashing into power poles, ...) could also increase failure rates. Depending on your environment, some form of power conditioning might help, but it could be costly.
Other sites have discovered that contamination in machine rooms increases failure rates for electronics in general. Some studies have pointed to minute metallic debris from machine-room floor tiles blowing around until it lands in some key spot on a circuit board, interfering with proper electrical signals. Drives vary in how much their electronics are exposed to this.
I have also heard reports of a general increase in electronics failure rates as the industry has migrated to lead-free solder.
We could make a better guess if you provided more information about your environment (server, laptop, internal or external drive, room/drive temperature, any patterns in drive vendor/model/age/failure mode, etc.). Without more information I'd be tempted to focus first on any heat issues, but the things I mentioned above, and many more, could be causing the failures.
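The suggestion above to keep failure statistics can start as something as simple as tallying a replacement log. A minimal sketch follows; the field names and sample records are illustrative assumptions, not real data from this site.

```python
# Minimal sketch: tally drive replacements by vendor/model and by failure
# symptom to spot patterns. Records and field names are illustrative.
from collections import Counter

replacements = [
    {"vendor": "VendorA", "model": "X500", "date_code": "2009-W40", "symptom": "no response"},
    {"vendor": "VendorA", "model": "X500", "date_code": "2009-W40", "symptom": "audible noise"},
    {"vendor": "VendorB", "model": "Y500", "date_code": "2010-W02", "symptom": "bad data"},
]

by_model = Counter((r["vendor"], r["model"]) for r in replacements)
by_symptom = Counter(r["symptom"] for r in replacements)

# A cluster on one vendor/model/date code suggests a bad batch rather
# than an environmental cause; failures spread evenly across models
# point back at heat, power, vibration, or contamination.
for (vendor, model), n in by_model.most_common():
    print(f"{vendor} {model}: {n} replacement(s)")
```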
06-13-2010 05:52 AM
Re: What is the reason a 500 GB hard disk went bad?
Another thing often missed is overuse of FATA disk drives.
They are not meant to be used 24/7 but at roughly 30% of that duty cycle (about 13 hours a day, 5 days a week), and not for database use; they are more suited to archiving purposes. There are whitepapers out about this.
Over how long a period did you have to replace how many disks?
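As a rough sanity check on the duty-cycle figures quoted above (the exact FATA rating should be confirmed in the vendor whitepapers; this just compares the two numbers):

```python
# Compare a 24/7 week to the quoted "13/5" duty cycle.
full_week = 24 * 7   # 168 hours of continuous operation per week
fata_week = 13 * 5   # 65 hours at 13 hours/day, 5 days/week

ratio = fata_week / full_week
print(f"{fata_week} of {full_week} hours = {ratio:.0%}")
```

So "13/5" works out to just under 40% of continuous operation, in the same ballpark as the ~30% figure quoted.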
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP