Old 05-10-2012, 07:03 AM
Mick
 
Default Are those "green" drives any good?

On Thursday 10 May 2012 00:58:47 Dale wrote:
> Mark Knecht wrote:
> > On Wed, May 9, 2012 at 3:24 PM, Dale <rdalek1967@gmail.com> wrote:
> >> Alan McKinnon wrote:
> > <SNIP>
> >
> >>> My thoughts these days is that nobody really makes a bad drive anymore.
> >>> Like cars[1], they're all good and do what it says on the box. Same
> >>> with bikes[2].
> >>>
> >>> A manufacturer may have some bad luck and a product range is less than
> >>> perfect, but even that is quite rare and most stuff ups can be fixed
> >>> with new firmware. So it's all good.
> >>
> >> That's my thoughts too. It doesn't matter what brand you go with, they
> >> all have some sort of failure at some point. They are not built to last
> >> forever and there is always the random failure, even when a week old.
> >> It's usually the loss of important data and not having a backup that
> >> makes it sooooo bad. I'm not real picky on brand as long as it is a
> >> company I have heard of.
> >
> > One thing to keep in mind is statistics. For a single drive by itself
> > it hardly matters anymore what you buy. You cannot predict the
> > failure. However if you buy multiple identical drives at the same time
> > then most likely you will either get all good drives or (possibly) a
> > bunch of drives that suffer from similar defects and all start failing
> > at the same point in their life cycle. For RAID arrays it's
> > measurably best to buy drives that come from different manufacturing
> > lots, better from different factories, and maybe even from different
> > companies. Then, if a drive fails, assuming the failure is really the
> > fault of the drive and not some local issue like power sources or ESD
> > events, etc., it's less likely other drives in the box will fail at
> > the same time.
> >
> > Cheers,
> > Mark
>
> You make a good point too. I had a headlight to go out on my car once
> long ago. I, not thinking, replaced them both since the new ones were
> brighter. Guess what, when one of the bulbs blew out, the other was out
> VERY soon after. Now, I replace them but NOT at the same time. Keep in
> mind, just like a hard drive, when one headlight is on, so is the other
> one. When we turn our computers on, all the drives spin up together so
> they are basically all getting the same wear and tear effect.

Unless you're driving something from the 60s, before halogen bulbs came out:
you didn't by any chance touch them with your greasy fingers, did you?
That's a promoter of early failure (uneven thermal stress caused by impurities
on the glass).

It's better to handle them with a clean tissue or the foam wrapper they are
packed in and take care not to touch them with your fingers at all. Should you
inadvertently do so, you'll need to clean them with meths or a similar
degreaser.
--
Regards,
Mick
 
Old 05-10-2012, 11:55 AM
napalm@squareownz.org
 
Default Are those "green" drives any good?

On Wed, May 09, 2012 at 06:58:47PM -0500, Dale wrote:
> Mark Knecht wrote:
> > On Wed, May 9, 2012 at 3:24 PM, Dale <rdalek1967@gmail.com> wrote:
> >> Alan McKinnon wrote:
> > <SNIP>
> >>> My thoughts these days is that nobody really makes a bad drive anymore.
> >>> Like cars[1], they're all good and do what it says on the box. Same
> >>> with bikes[2].
> >>>
> >>> A manufacturer may have some bad luck and a product range is less than
> >>> perfect, but even that is quite rare and most stuff ups can be fixed
> >>> with new firmware. So it's all good.
> >>
> >>
> >> That's my thoughts too. It doesn't matter what brand you go with, they
> >> all have some sort of failure at some point. They are not built to last
> >> forever and there is always the random failure, even when a week old.
> >> It's usually the loss of important data and not having a backup that
> >> makes it sooooo bad. I'm not real picky on brand as long as it is a
> >> company I have heard of.
> >>
> >
> > One thing to keep in mind is statistics. For a single drive by itself
> > it hardly matters anymore what you buy. You cannot predict the
> > failure. However if you buy multiple identical drives at the same time
> > then most likely you will either get all good drives or (possibly) a
> > bunch of drives that suffer from similar defects and all start failing
> > at the same point in their life cycle. For RAID arrays it's
> > measurably best to buy drives that come from different manufacturing
> > lots, better from different factories, and maybe even from different
> > companies. Then, if a drive fails, assuming the failure is really the
> > fault of the drive and not some local issue like power sources or ESD
> > events, etc., it's less likely other drives in the box will fail at
> > the same time.
> >
> > Cheers,
> > Mark
> >
> >
>
>
>
> You make a good point too. I had a headlight to go out on my car once
> long ago. I, not thinking, replaced them both since the new ones were
> brighter. Guess what, when one of the bulbs blew out, the other was out
> VERY soon after. Now, I replace them but NOT at the same time. Keep in
> mind, just like a hard drive, when one headlight is on, so is the other
> one. When we turn our computers on, all the drives spin up together so
> they are basically all getting the same wear and tear effect.
>
> I don't use RAID, except to kill bugs, but that is good advice. People
> who do use RAID would be wise to use it.
>
> Dale
>
> :-) :-)
>

Hum hum!
I know that Windows spins down idle disks by default (it annoys me, so I
disable it), but does Linux stop or spin down the disks when they're inactive?
I'm assuming there's an option somewhere - maybe just `umount`!
 
Old 05-10-2012, 12:38 PM
Dale
 
Default Are those "green" drives any good?

napalm@squareownz.org wrote:
> On Wed, May 09, 2012 at 06:58:47PM -0500, Dale wrote:
>> Mark Knecht wrote:
>>> On Wed, May 9, 2012 at 3:24 PM, Dale <rdalek1967@gmail.com> wrote:
>>>> Alan McKinnon wrote:
>>> <SNIP>
>>>>> My thoughts these days is that nobody really makes a bad drive anymore.
>>>>> Like cars[1], they're all good and do what it says on the box. Same
>>>>> with bikes[2].
>>>>>
>>>>> A manufacturer may have some bad luck and a product range is less than
>>>>> perfect, but even that is quite rare and most stuff ups can be fixed
>>>>> with new firmware. So it's all good.
>>>>
>>>>
>>>> That's my thoughts too. It doesn't matter what brand you go with, they
>>>> all have some sort of failure at some point. They are not built to last
>>>> forever and there is always the random failure, even when a week old.
>>>> It's usually the loss of important data and not having a backup that
>>>> makes it sooooo bad. I'm not real picky on brand as long as it is a
>>>> company I have heard of.
>>>>
>>>
>>> One thing to keep in mind is statistics. For a single drive by itself
>>> it hardly matters anymore what you buy. You cannot predict the
>>> failure. However if you buy multiple identical drives at the same time
>>> then most likely you will either get all good drives or (possibly) a
>>> bunch of drives that suffer from similar defects and all start failing
>>> at the same point in their life cycle. For RAID arrays it's
>>> measurably best to buy drives that come from different manufacturing
>>> lots, better from different factories, and maybe even from different
>>> companies. Then, if a drive fails, assuming the failure is really the
>>> fault of the drive and not some local issue like power sources or ESD
>>> events, etc., it's less likely other drives in the box will fail at
>>> the same time.
>>>
>>> Cheers,
>>> Mark
>>>
>>>
>>
>>
>>
>> You make a good point too. I had a headlight to go out on my car once
>> long ago. I, not thinking, replaced them both since the new ones were
>> brighter. Guess what, when one of the bulbs blew out, the other was out
>> VERY soon after. Now, I replace them but NOT at the same time. Keep in
>> mind, just like a hard drive, when one headlight is on, so is the other
>> one. When we turn our computers on, all the drives spin up together so
>> they are basically all getting the same wear and tear effect.
>>
>> I don't use RAID, except to kill bugs, but that is good advice. People
>> who do use RAID would be wise to use it.
>>
>> Dale
>>
>> :-) :-)
>>
>
> Hum hum!
> I know that Windows spins down idle disks by default (it annoys me, so I
> disable it), but does Linux stop or spin down the disks when they're inactive?
> I'm assuming there's an option somewhere - maybe just `umount`!
>


The default is to keep them all running and not spin them down. I have
never had a Linux OS spin down a drive unless I told it to. You can do
this, though. The command and option are:

hdparm -S <timeout> /dev/sdX

X is the drive letter and <timeout> is the standby timeout value. There is
also the -s option but it is not recommended.

There are also the -y and -Y options. Before using ANY of these, read
the man page. Each one has its uses and you need to know for sure which
one does what you want.
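
For example, a minimal sketch (the device name /dev/sdb and the 10-minute
timeout are placeholders, so adjust them for your own drive):

# spin the drive down after 10 minutes of idle
# (-S values from 1 to 240 are multiples of 5 seconds, so 120 = 600 seconds)
hdparm -S 120 /dev/sdb

# report the drive's current power state without waking it up
hdparm -C /dev/sdb

# force an immediate spin-down right now
hdparm -y /dev/sdb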

Dale

:-) :-)

--
I am only responsible for what I said ... Not for what you understood or
how you interpreted my words!

Miss the compile output? Hint:
EMERGE_DEFAULT_OPTS="--quiet-build=n"
 
Old 05-10-2012, 12:53 PM
Todd Goodman
 
Default Are those "green" drives any good?

* Dale <rdalek1967@gmail.com> [120509 19:54]:
[..]
> Way back in the stone age, there was a guy that released a curve for
> electronics life. The failure rate is high at the beginning, especially
> for the first few minutes, then falls to about nothing, then after
> several years it goes back up again. At the beginning of the curve, the
> thought was it could be a bad solder job, bad components or some other
> problem. At the other end was just when age kicked in. Sweat spot is
> in the middle.

C. Gordon Bell has that curve in his book "Computer Engineering."

Available online at:

http://research.microsoft.com/en-us/um/people/gbell/Computer_Engineering/index.html

for HTML and:

http://research.microsoft.com/en-us/um/people/gbell/CGB%20Files/Computer%20Engineering%207809%20c.pdf

for the PDF.

Todd
 
Old 05-10-2012, 01:27 PM
napalm@squareownz.org
 
Default Are those "green" drives any good?

On Thu, May 10, 2012 at 07:38:34AM -0500, Dale wrote:
>
> The default is to keep them all running and not spin them down. I have
> never had a Linux OS spin down a drive unless I told it to. You can do
> this, though. The command and option are:
>
> hdparm -S <timeout> /dev/sdX
>
> X is the drive letter and <timeout> is the standby timeout value. There is
> also the -s option but it is not recommended.
>
> There are also the -y and -Y options. Before using ANY of these, read
> the man page. Each one has its uses and you need to know for sure which
> one does what you want.
>
> Dale
>

Awesome, thanks very much. If I need to power down one of my drives I
shall use hdparm!

Does the kernel keep even unmounted drives spinning by default?

Thank you, Dale!
 
Old 05-10-2012, 04:20 PM
Norman Invasion
 
Default Are those "green" drives any good?

On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
> Hi,
>
> As some know, I'm planning to buy me a LARGE hard drive to put all my
> videos on, eventually. The prices are coming down now. I keep seeing
> these "green" drives that are made by just about every company nowadays.
> When comparing them to a non "green" drive, do they hold up as good?
> Are they as dependable as a plain drive? I guess they are more
> efficient and I get that but do they break quicker, more often or no
> difference?
>
> I have noticed that they tend to spin slower and are cheaper. That much
> I have figured out. Other than that, I can't see any other difference.
> Data speeds seem to be about the same.
>

They have an ugly tendency to nod off at 6 second intervals.
This runs up "193 Load_Cycle_Count" unacceptably: as many
as a few hundred thousand in a year & a million cycles is
getting close to the lifetime limit on most hard drives. I end
up running some iteration of
# hdparm -B 255 /dev/sda
every boot.
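
One way to avoid retyping that (a sketch, assuming a Gentoo/OpenRC box and
that the green drive really is /dev/sda) is an executable local.d script,
which the 'local' service runs at the end of boot:

#!/bin/sh
# /etc/local.d/disable-apm.start  (hypothetical file name; chmod +x it)
# Disable APM on the green drive so the heads stop parking every few seconds.
hdparm -B 255 /dev/sda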
 
Old 05-10-2012, 06:01 PM
Mark Knecht
 
Default Are those "green" drives any good?

On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
<invasivenorman@gmail.com> wrote:
> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
>> Hi,
>>
>> As some know, I'm planning to buy me a LARGE hard drive to put all my
>> videos on, eventually. The prices are coming down now. I keep seeing
>> these "green" drives that are made by just about every company nowadays.
>> When comparing them to a non "green" drive, do they hold up as good?
>> Are they as dependable as a plain drive? I guess they are more
>> efficient and I get that but do they break quicker, more often or no
>> difference?
>>
>> I have noticed that they tend to spin slower and are cheaper. That much
>> I have figured out. Other than that, I can't see any other difference.
>> Data speeds seem to be about the same.
>>
>
> They have an ugly tendency to nod off at 6 second intervals.
> This runs up "193 Load_Cycle_Count" unacceptably: as many
> as a few hundred thousand in a year & a million cycles is
> getting close to the lifetime limit on most hard drives. I end
> up running some iteration of
> # hdparm -B 255 /dev/sda
> every boot.
>

Very true about the 193 count. Here's a drive in a system that was
built in Jan., 2010 so it's a bit over 2 years old at this point. It's
on 24/7 and not rebooted except for more major updates, etc. My tests
say the drive spins down and starts back up every 2 minutes and has
been doing so for about 28 months. IIRC the 193 spec on this drive was
something like 300000 max with the drive currently clocking in at
700488. I don't see any evidence that it's going to fail but I am
trying to make sure it's backed up often. Being that it's gone >2x at
this point I will swap the drive out in the early summer no matter
what. This week I'll be visiting where the machine is so I'm going to
put a backup drive in the box to get ready.

- Mark


gandalf ~ # smartctl -a /dev/sda
smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.2.12-gentoo] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Caviar Green (Adv. Format)
Device Model: WDC WD10EARS-00Y5B1
Serial Number: WD-WCAV55464493
LU WWN Device Id: 5 0014ee 2ae6b5ffe
Firmware Version: 80.00A80
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Thu May 10 10:53:59 2012 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (19800) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 228) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x3031) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
  3 Spin_Up_Time            0x0027 131   128   021    Pre-fail Always  -           6441
  4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           65
  5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
  7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
  9 Power_On_Hours          0x0032 074   074   000    Old_age  Always  -           19316
 10 Spin_Retry_Count        0x0032 100   253   000    Old_age  Always  -           0
 11 Calibration_Retry_Count 0x0032 100   253   000    Old_age  Always  -           0
 12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           63
192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           14
193 Load_Cycle_Count        0x0032 001   001   000    Old_age  Always  -           700488
194 Temperature_Celsius     0x0022 120   113   000    Old_age  Always  -           27
196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description     Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1 Short offline Completed without error 00% 11655 -
# 2 Extended offline Completed without error 00% 8797 -
# 3 Short offline Completed without error 00% 8794 -
# 4 Extended offline Completed without error 00% 1009 -
# 5 Extended offline Completed without error 00% 388 -
# 6 Short offline Completed without error 00% 376 -

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

gandalf ~ #
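
If you only want to keep an eye on that one attribute rather than wading
through the full report, something like this is enough (smartctl -A prints
just the vendor attribute table shown above):

smartctl -A /dev/sda | grep -i load_cycle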
 
Old 05-10-2012, 06:13 PM
Norman Invasion
 
Default Are those "green" drives any good?

On 10 May 2012 14:01, Mark Knecht <markknecht@gmail.com> wrote:
> On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
> <invasivenorman@gmail.com> wrote:
>> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
>>> Hi,
>>>
>>> As some know, I'm planning to buy me a LARGE hard drive to put all my
>>> videos on, eventually. The prices are coming down now. I keep seeing
>>> these "green" drives that are made by just about every company nowadays.
>>> When comparing them to a non "green" drive, do they hold up as good?
>>> Are they as dependable as a plain drive? I guess they are more
>>> efficient and I get that but do they break quicker, more often or no
>>> difference?
>>>
>>> I have noticed that they tend to spin slower and are cheaper. That much
>>> I have figured out. Other than that, I can't see any other difference.
>>> Data speeds seem to be about the same.
>>>
>>
>> They have an ugly tendency to nod off at 6 second intervals.
>> This runs up "193 Load_Cycle_Count" unacceptably: as many
>> as a few hundred thousand in a year & a million cycles is
>> getting close to the lifetime limit on most hard drives. I end
>> up running some iteration of
>> # hdparm -B 255 /dev/sda
>> every boot.
>>
>
> Very true about the 193 count. Here's a drive in a system that was
> built in Jan., 2010 so it's a bit over 2 years old at this point. It's
> on 24/7 and not rebooted except for more major updates, etc. My tests
> say the drive spins down and starts back up every 2 minutes and has
> been doing so for about 28 months. IIRC the 193 spec on this drive was
> something like 300000 max with the drive currently clocking in at
> 700488. I don't see any evidence that it's going to fail but I am
> trying to make sure it's backed up often. Being that it's gone >2x at
> this point I will swap the drive out in the early summer no matter
> what. This week I'll be visiting where the machine is so I'm going to
> put a backup drive in the box to get ready.
>

Yes, I just learned about this problem in 2009 or so, &
checked on my FreeBSD laptop, which turned out to be
at >400000. It only made it another month or so before
having unrecoverable errors.

Now, I can't conclusively demonstrate that the 193
Load_Cycle_Count was somehow causative, but I
gots my suspicions. Many of 'em highly suspectable.
 
Old 05-10-2012, 06:51 PM
Mark Knecht
 
Default Are those "green" drives any good?

On Thu, May 10, 2012 at 11:13 AM, Norman Invasion
<invasivenorman@gmail.com> wrote:
> On 10 May 2012 14:01, Mark Knecht <markknecht@gmail.com> wrote:
>> On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
>> <invasivenorman@gmail.com> wrote:
>>> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
>>>> Hi,
>>>>
>>>> As some know, I'm planning to buy me a LARGE hard drive to put all my
>>>> videos on, eventually. The prices are coming down now. I keep seeing
>>>> these "green" drives that are made by just about every company nowadays.
>>>> When comparing them to a non "green" drive, do they hold up as good?
>>>> Are they as dependable as a plain drive? I guess they are more
>>>> efficient and I get that but do they break quicker, more often or no
>>>> difference?
>>>>
>>>> I have noticed that they tend to spin slower and are cheaper. That much
>>>> I have figured out. Other than that, I can't see any other difference.
>>>> Data speeds seem to be about the same.
>>>>
>>>
>>> They have an ugly tendency to nod off at 6 second intervals.
>>> This runs up "193 Load_Cycle_Count" unacceptably: as many
>>> as a few hundred thousand in a year & a million cycles is
>>> getting close to the lifetime limit on most hard drives. I end
>>> up running some iteration of
>>> # hdparm -B 255 /dev/sda
>>> every boot.
>>>
>>
>> Very true about the 193 count. Here's a drive in a system that was
>> built in Jan., 2010 so it's a bit over 2 years old at this point. It's
>> on 24/7 and not rebooted except for more major updates, etc. My tests
>> say the drive spins down and starts back up every 2 minutes and has
>> been doing so for about 28 months. IIRC the 193 spec on this drive was
>> something like 300000 max with the drive currently clocking in at
>> 700488. I don't see any evidence that it's going to fail but I am
>> trying to make sure it's backed up often. Being that it's gone >2x at
>> this point I will swap the drive out in the early summer no matter
>> what. This week I'll be visiting where the machine is so I'm going to
>> put a backup drive in the box to get ready.
>>
>
> Yes, I just learned about this problem in 2009 or so, &
> checked on my FreeBSD laptop, which turned out to be
> at >400000. It only made it another month or so before
> having unrecoverable errors.
>
> Now, I can't conclusively demonstrate that the 193
> Load_Cycle_Count was somehow causative, but I
> gots my suspicions. Many of 'em highly suspectable.
>

It's the 'Wear Out Failure' region of the Bathtub Curve posted in the
last few days. That said, some Toyotas go 100K miles, and others go
500K miles. Same car, same spec, same production line, different owners,
different roads, different climates, etc.

It's not possible to know absolutely when any drive will fail. I
suspect that the 300K spec is just that, a spec. They'd replace the
drive if it failed at 299,999 and wouldn't replace it at 300,001. That
said, they don't want to spec things too tightly, and I doubt many
people make a purchasing decision on a spec like this, so the vast
majority of drives most likely last far beyond 300K.

At 2 minutes per count on that specific WD Green drive, if a home
machine is turned on for, say, 5 hours a day (6PM to 11PM), then a 300K
count equates to roughly 2,000 days, around five and a half years. To me
that seems pretty generous for a low-cost home machine. However, for a
24/7 production server it's a pretty fast replacement schedule.
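
As a quick back-of-the-envelope check in a shell, using those same assumed
numbers (integer arithmetic, so it rounds down):

# one load cycle every 2 minutes, 5 hours of use per day, 300,000-cycle spec
echo $(( 300000 / (5 * 60 / 2) ))   # days to reach 300,000 cycles: 2000
echo $(( 2000 / 365 ))              # whole years: 5, i.e. roughly 5.5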

Here's data for one of the 500GB WD RAID Edition drives in my compute server
here. It's powered down almost every night but doesn't suffer from the
same firmware issue. The machine was built in April, 2010, so it's a
bit over 2 years old. Note that it's been powered on less than half the
number of hours but only has a 193 count of 907 vs. > 700000!

Cheers,
Mark


c2stable ~ # smartctl -a /dev/sda
smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.2.12-gentoo] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family: Western Digital RE3 Serial ATA
Device Model: WDC WD5002ABYS-02B1B0
Serial Number: WD-WCASYA846988
LU WWN Device Id: 5 0014ee 2042c3477
Firmware Version: 02.03B03
User Capacity: 500,107,862,016 bytes [500 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Thu May 10 11:45:45 2012 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 9480) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 112) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x303f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
  3 Spin_Up_Time            0x0027 239   235   021    Pre-fail Always  -           1050
  4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           935
  5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
  7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
  9 Power_On_Hours          0x0032 091   091   000    Old_age  Always  -           7281
 10 Spin_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
 11 Calibration_Retry_Count 0x0032 100   100   000    Old_age  Always  -           0
 12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           933
192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           27
193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           907
194 Temperature_Celsius     0x0022 106   086   000    Old_age  Always  -           41
196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0
 
Old 05-10-2012, 07:24 PM
David Haller
 
Default Are those "green" drives any good?

Hello,

On Thu, 10 May 2012, Mark Knecht wrote:
>On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
><invasivenorman@gmail.com> wrote:
>> They have an ugly tendency to nod off at 6 second intervals.
>> This runs up "193 Load_Cycle_Count" unacceptably: as many
>> as a few hundred thousand in a year & a million cycles is
>> getting close to the lifetime limit on most hard drives. *I end
>> up running some iteration of
>> # hdparm -B 255 /dev/sda
>
>Very true about the 193 count.

There was some bug, IIRC.
http://jeanbruenn.info/2011/01/23/wd-green-discs-and-the-problem-in-linux-load-cycle-count/

and search for 'linux Load_Cycle_Count' using your favorite search site.

HTH,
-dnh

--
Well, merry frelling christmas! -- Aeryn Sun, Farscape - 4x13 - Terra Firma
 
