Linux Archive > Debian > Debian User

Old 12-18-2010, 08:32 PM
Klistvud
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

Attention: long post ahead!
I don't use line wrapping because it breaks long URLs. If that makes
you or your e-mail client cringe, you may as well read this at
http://bufferoverflow.tiddlywiki.com instead (same text, nicer
formatting).


First of all, let me thank all of you who responded. As promised, I am
giving feedback to the list so that future purchasers of Western
Digital WD EARS/EADS models and similar "Advanced Format" hard drives
may benefit.


The first thing to notice is that the Load_Cycle_Count of the drive
heads increases every 8 seconds by default. As seen on the Internet,
this may pose a problem in the long run, since these drives are
"guaranteed" to sustain only a limited number of such head-parking
cycles; the figure given varies from 300,000 to 1,000,000, depending on
where you look. The first thing I did was, therefore, to launch a shell
script that wrote something to the drive every second. Not content with
this dirty workaround, I proceeded to download the WD proprietary
utility wdidle3.exe; the first link obtained by googling for
"wdidle3.exe" did the trick:
http://support.wdc.com/product/download.asp?groupid=609&sid=113
I then downloaded a FreeDOS bootable floppy image and copied it to a
floppy disk using dd. Once the bootable floppy was thus created, I
copied wdidle3.exe onto it.
I rebooted the computer, changed the BIOS boot order to floppy first,
saved and exited; the floppy booted and I ran wdidle3.exe. The utility
offers three command-line switches: one for viewing the current status
of the head-parking parameter, one for changing it, and one for
disabling it. No drive is specified, so if you change or disable the
parameter, you are doing this to ALL and ANY WD drives in your system.
I chose to disable head parking, and since I also have an older 160GB
WD IDE disk in the box, the utility disabled head parking for BOTH
drives.
Except that ... there be problems. Unlike with the old 160 GB drive,
the setting didn't take on the new 2 TB drive. Instead, the frequency
of the load cycles increased 16-fold, to a whopping 7200 cycles per
hour! This quickly increased my Load_Cycle_Count (checked by issuing
smartctl --all /dev/sda) by several thousand ticks overnight.
Interestingly enough, the drive loaded and unloaded its heads at the
amazing rate of twice per second even while sustained copying was
underway (copying a 10 GB directory subtree from one drive to another).
I didn't notice the increased cycle count until the next morning,
however. When I did, I rebooted the machine with the freedos floppy
again and set the interval from "disabled" to "every 300 seconds",
which appears to be the maximum interval allowed. It would seem that,
for the time being at least, this made the Load_Cycle_Count stay put at
22413. Whew!
So, setting this bugger straight is probably the first thing you'll
want to do after getting one of these WD drives.
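
If you want to keep an eye on that counter yourself, the raw value can
be pulled out of the smartctl output. A minimal sketch (the awk pattern
assumes smartmontools' usual attribute-table layout, with the raw value
in the last column; it's demonstrated on a canned line here, but on a
real system you'd feed it `smartctl --all /dev/sda`):

```shell
# Extract the raw Load_Cycle_Count from `smartctl --all` output.
# Assumes the usual smartmontools attribute table, raw value last.
lcc_value() {
  awk '/Load_Cycle_Count/ {print $NF}'
}

# On a live system (as root): smartctl --all /dev/sda | lcc_value
# Demonstrated on a canned attribute line:
echo "193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 22413" | lcc_value
```

Sampling it before and after a long copy tells you how fast the counter
is actually growing.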


Now, the second issue: the hardware/logical sector alignment.
Since it affects real-world transfer speeds, let's first check out
the theoretical speeds of this drive in this particular environment --
a 3 GHz Pentium IV motherboard with a humble integrated SATA controller
(I think it's an early SATA-I generation).


Before partitioning and formatting:

obelix# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   1726 MB in 2.00 seconds = 713.98 - 862.86 MB/sec (several iterations performed)
 Timing buffered disk reads: 336 MB in 3.01 seconds = 100.01 - 111.72 MB/sec (several iterations performed)


After partitioning the drive, aligned on modulo 8 sector boundaries:

obelix# hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 1264 MB in 2.00 seconds = 631.97 MB/sec
Timing buffered disk reads: 252 MB in 3.08 seconds = 81.80 MB/sec

Hmm, while we're at it, why don't we also check the antiquated 160 GB
drive on the obsolete IDE interface?


obelix# hdparm -tT /dev/hda

/dev/hda:
Timing cached reads: 1348 MB in 2.00 seconds = 674.14 MB/sec
Timing buffered disk reads: 206 MB in 3.02 seconds = 68.26 MB/sec

Well, so much for the alleged superiority of serial ATA over IDE...

Anyway. I should note up front that, Squeeze still not having reached
stable, all of the following was performed on a stock Lenny i386 system
(the reason being I have no Squeeze system yet). Many of the following
points may therefore become obsolete in a matter of weeks when Squeeze,
with a newer kernel and updated partitioning tools, reaches stable.
The first thing is, fdisk in Lenny doesn't support GPT partitioning, so
I had to use parted. I first used its GNOME variant, GParted, and must
say that it can't align the partitions. Even if you align the first
sector by hand (in parted, since GParted can't do it) and de-select the
"Round to cylinders" option in GParted, as recommended in
http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html
(which was my main guide and reference in this adventure), GParted will
end your partition on an aligned sector -- which means that, by
default, the next partition will start on a non-aligned sector again.
Be that as it may, I then proceeded to use the new partitions created
by GParted, doing some cursory "benchmarks". The typical copy speed
reached in mc was about 20 MB/s, while rsync reported speeds of up to
51 MB/s; rsync reached that maximum on unaligned partitions when
copying from hda (WD1600AAJB) to sda (WD20EARS).


Then I tried to re-align my partitions by manually calculating the
starting sectors of all the partitions so as to have them divisible by
8. This could only be done in parted, not in GParted. On the other
hand, parted couldn't create ext3 filesystems, so manually created
partitions had to be subsequently formatted in GParted. In short, a
combination of both tools had to be used to successfully create AND
format the partitions. Here's my final result as seen in parted (fdisk
doesn't understand GPT):


(parted) print
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sda: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start        End          Size         File system  Name     Flags
 1      128s         8194055s     8193928s     linux-swap
 2      8194056s     49154055s    40960000s    ext3         primary
 3      49154056s    90114055s    40960000s    ext3         primary
 4      90114056s    1998569479s  1908455424s  ext3         primary
 5      1998569480s  3907024064s  1908454585s  ext3
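
Incidentally, tables like the one above are easy to sanity-check: the
Size column should always equal End - Start + 1. A throwaway helper
(sector numbers taken straight from the table; `check_size` is just an
illustrative name):

```shell
# Verify that a partition's size in sectors equals end - start + 1.
# Arguments: start sector, end sector, expected size (512-byte sectors).
check_size() {
  [ $(( $2 - $1 + 1 )) -eq "$3" ] && echo ok || echo mismatch
}

check_size 128      8194055   8193928   # partition 1
check_size 8194056  49154055  40960000  # partition 2
```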


I was just curious if aligned partitions would yield any noticeable
speed improvement (especially in the file write department, since file
reads, according to the above IBM article, should not be that heavily
hit by misalignment). The "benchmarks" I performed, consisting of
copying random files from the other drive to the WD20EARS using mc and
rsync, generally yielded something between 15 and 35 MB/s, sometimes
falling under 10 MB/s and at times going as high as 56 MB/s; the latter
figure, however, was usually reached in the initial moments of a large
file rsync (an Ubuntu CD ISO file) and would decrease after several
seconds to about 40 MB/s, so it may very well be due to the 64MB cache
on these drives. Just for the heck of it, I decided to re-align the
partitions modulo-64, thus:


Partition Table: gpt

Number  Start        End          Size         File system  Name
 1      128s         8194047s     8193920s     linux-swap   linux-swap
 2      8194048s     49154047s    40960000s    ext3         ext3
 3      49154048s    90114047s    40960000s    ext3         ext3
 4      90114048s    1998569472s  1908455425s  ext3         ext3
 5      1998569473s  3907024064s  1908454592s  ext3

Rsyncing the good old Ubuntu ISO file yielded transfer rates of around
60 MB/s, with the exception of the last partition, which was written to
at under 50 MB/s. That made me wonder. I checked the mount options in
fstab, double-checked that the CPU governor was set to max performance,
all to no avail. Then I fired up parted again and noticed that the 5th
partition was actually one sector off. I corrected my error thus:


Partition Table: gpt

Number  Start        End          Size         File system  Name
 1      128s         8194047s     8193920s     linux-swap   linux-swap
 2      8194048s     49154047s    40960000s    ext3         ext3
 3      49154048s    90114047s    40960000s    ext3         ext3
 4      90114048s    1998569471s  1908455424s  ext3         ext3
 5      1998569472s  3907024064s  1908454593s  ext3         ext3

As expected, the rsync results for the last partition became consistent
with the other partitions (i.e. around 60 MB/s).
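
Checking alignment by eye is exactly how that off-by-one sector slipped
in; the divisibility test is trivial to script instead. A sketch
(alignment given in 512-byte sectors, so 8 corresponds to a 4096-byte
physical sector; `is_aligned` is an illustrative name):

```shell
# Report whether a starting sector sits on the given alignment boundary
# (default: 8 x 512-byte sectors = one 4096-byte physical sector).
is_aligned() {
  start=$1; align=${2:-8}
  [ $(( start % align )) -eq 0 ] && echo aligned || echo unaligned
}

is_aligned 1998569472   # the corrected start of partition 5
is_aligned 1998569473   # the original, one-sector-off start
```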


Conclusions:
By default, these WD drives are not Linux-ready. They do work out of
the box, but are not configured optimally speed-wise. Given that
we're talking about "green" (marketing mumbo-jumbo for "slow") drives,
this additional performance hit is noticeable and quite undesirable. By
aligning the partitions on 8-sector boundaries, the transfer speeds are
improved by almost 20%; aligning them on 64-sector boundaries doesn't
yield further noticeable improvements though. Or, more precisely: the
tests I performed were too coarse to substantiate potential small
differences, because as differences become smaller, other factors, such
as the CPU governor used, fstab parameters, or actual load on the CPU
at a given moment may prevail, completely masking such small
differences. The CPU governor seems to be the most crucial of those
secondary factors (see below). So, there are indications that using
64-sector alignment "may" give a slightly better performance over
8-sector alignment, but they are nothing more than indications, really.
Proper benchmarks would be required to ascertain that.


Curiosa:
All testing was done with a ~700-MB ISO file; copying many smaller
files may (and will) incur additional performance hits.
Dropping the CPU governor to powersave reduced file writes to under 20
MB/s and less, which means to about a third of the maximum speed
achievable.
Mount options for the partitions, and the performance of the source
disk are also major factors in these tests. In my case, the source from
which the files were copied was an oldish 160 GB WD IDE drive (model
WD1600AAJB).
The only downtime needed was about 10 minutes -- the time it took to
actually install the drive into the chassis; had WD provided a tool for
modifying the drive's S.M.A.R.T. Load_Cycle_Count parameter online, no
further reboots would have been needed, i.e. once the hard drive was
installed, it could have been taken into production use without so much
as a single reboot. Due to my own mistake, however, a superfluous
reboot was needed. Namely, while messing with parted and GParted and modifying
partition sizes, at one point I forgot to unmount the partitions before
deleting the partition table in parted. After that, I kept getting the
warning that a reboot would be required for the kernel to re-read the
partition tables, preventing me from creating the last two filesystems
and wrapping it up. Neither umount nor swapoff would help. Instead of
digging for the offending process and killing/restarting it, I
preferred to reboot the system, since it wasn't in use at the moment
anyway.
Beside the physical installation of the drive in the chassis, which was
done during off hours, virtually everything else was done remotely via
ssh, without interrupting the work of the currently logged-in user. To
enable graphical tools such as GParted to be used, ssh was run with the
-XC option, and then GParted was launched remotely by issuing "gksu
gparted". The flexibility of GNU/Linux is simply mind-boggling.
I have no kind words for WD. Their drives as provided are severely
underoptimized for GNU/Linux. On the drive label and on their site they
state that no further configuration is required for using the drive in
Linux; which is quite simply untrue. In addition, the head parking
feature is heavily flawed, and is only accessible via a proprietary DOS
tool, and only by taking the entire system offline. I am quite
disappointed in WD, but am thoroughly confident that the GNU/Linux
community will make up for WD's shortcomings, as always. We'll see
what hdparm and smartmontools in Squeeze will bring along. The Lenny
versions are too old to be of much use with this disk (for example, the
hdparm -B command doesn't work).
The foregoing user experience is nothing more than that -- a user
experience; copying a handful of files is not to be considered a "test"
or "benchmark" in any meaningful sense whatsoever, so take it with a
huge lump of salt!

Happy computing!

--
Cheerio,

Klistvud
http://bufferoverflow.tiddlyspot.com
Certifiable Loonix User #481801
Please reply to the list, not to me.



--
To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Archive: http://lists.debian.org/1292707930.17806.0@compax
 
Old 12-19-2010, 03:31 AM
Stan Hoeppner
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

Klistvud put forth on 12/18/2010 3:32 PM:

> Before partitioning and formatting:
>
> obelix# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: 1726 MB in 2.00 seconds = 713.98 - 862.86
> MB/sec (several iterations performed)
> Timing buffered disk reads: 336 MB in 3.01 seconds = 100.01 - 111.72
> MB/sec (several iterations performed)
>
> After partitioning the drive, aligned on modulo 8 sector boundaries:
>
> obelix:# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: 1264 MB in 2.00 seconds = 631.97 MB/sec
> Timing buffered disk reads: 252 MB in 3.08 seconds = 81.80 MB/sec

> expected, the rsync results for the last partition became consistent
> with the other partitions (i.e. around 60 MB/s).

> All testing was done with a ~700-MB ISO file

What is the result of?

dd if=/dev/zero of=/some/filesystem/test count=100000 bs=8192

That will write an 819 MB file of all zeros, and will give you a much
better idea of the raw streaming write performance vs copying files from
the old 160GB drive to the new one. I would think the result should be
a bit higher than 60MB/s.
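
For reference, the file size there is just count times bs, which dd
reports in decimal megabytes:

```shell
# dd writes count * bs bytes: 100000 * 8192 = 819200000 bytes,
# which dd reports as 819 MB (decimal megabytes).
echo $(( 100000 * 8192 ))
echo "$(( 100000 * 8192 / 1000000 )) MB"
```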

Also, make sure you're using the deadline elevator instead of CFQ as it
yields better performance, especially on SATA systems that don't support
NCQ:

$ echo deadline > /sys/block/sda/queue/scheduler

You may want to add this to your boot scripts to make it permanent. I
roll this option as the default in my custom kernels.
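
To confirm which elevator is currently active, note that the scheduler
file lists every available elevator with the active one in brackets, so
the bracketed name can be pulled out with sed. A sketch (`active_sched`
is an illustrative name; demonstrated on a canned line, since the real
file lives at /sys/block/sda/queue/scheduler):

```shell
# Extract the bracketed (active) elevator from a sysfs scheduler line,
# e.g. "noop anticipatory deadline [cfq]" -> "cfq".
active_sched() {
  sed -n 's/.*\[\(.*\)\].*/\1/p'
}

# On a live system: active_sched < /sys/block/sda/queue/scheduler
echo "noop anticipatory deadline [cfq]" | active_sched
```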

--
Stan


 
Old 12-19-2010, 08:10 AM
Klistvud
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

On 19. 12. 2010 05:31:37, Stan Hoeppner wrote:


> What is the result of?
>
> dd if=/dev/zero of=/some/filesystem/test count=100000 bs=8192
>
> That will write an 819 MB file of all zeros, and will give you a much
> better idea of the raw streaming write performance vs copying files from
> the old 160GB drive to the new one. I would think the result should be
> a bit higher than 60MB/s.
>
> Also, make sure you're using the deadline elevator instead of CFQ as it
> yields better performance, especially on SATA systems that don't support
> NCQ:
>
> $ echo deadline > /sys/block/sda/queue/scheduler
>
> You may want to add this to your boot scripts to make it permanent. I
> roll this option as the default in my custom kernels.



Thanks for the suggestion, Stan. Using dd I get a much higher figure,
namely around 83 MB/s. Changing the elevator doesn't make a difference
on my system though.


--
Cheerio,

Klistvud
http://bufferoverflow.tiddlyspot.com
Certifiable Loonix User #481801
Please reply to the list, not to me.



 
Old 12-19-2010, 09:25 AM
Stan Hoeppner
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

Klistvud put forth on 12/19/2010 3:10 AM:
> On 19. 12. 2010 05:31:37, Stan Hoeppner wrote:
>>
>> What is the result of?
>>
>> dd if=/dev/zero of=/some/filesystem/test count=100000 bs=8192
>>
>> That will write an 810MB file of all zeros, and will give you a much
>> better idea of the raw streaming write performance vs copying files from
>> the old 160GB drive to the new one. I would think the result should be
>> a bit higher than 60MB/s.
>>
>> Also, make sure you're using the deadline elevator instead of CFQ as it
>> yields better performance, especially on SATA systems that don't support
>> NCQ:
>>
>> $ echo deadline > /sys/block/sda/queue/scheduler
>>
>> You may want to add this to your boot scripts to make it permanent. I
>> roll this option as the default in my custom kernels.
>>
>
> Thanks for the suggestion, Stan. Using dd I get a much higher figure,
> namely around 83 MB/s. Changing the elevator doesn't make a difference
> on my system though.

83 MB/s isn't too bad for that drive. IIRC the WD20EARS is a 5400 RPM
drive with variable spindle speed to reduce power consumption. My 7200
RPM WD Blue 500GB WD5000AAKS single platter drive hits about the same dd
sequential write speed, using lower bit density but higher spindle speed:

$ dd if=/dev/zero of=./test count=100000 bs=8192
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 9.8092 s, 83.5 MB/s

The deadline elevator may not help much with streaming reads/writes. It
does help a bit with random read/writes, especially under multi-user or
multi-threading random seek disk workloads.

--
Stan


 
Old 12-19-2010, 08:52 PM
Celejar
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

On Sat, 18 Dec 2010 22:32:10 +0100
Klistvud <quotations@aliceadsl.fr> wrote:

> Attention: long post ahead!
> I don't use line wrapping because it breaks long URLs. If that makes
> you or your e-mail client cringe, you may as well read this at
> http://bufferoverflow.tiddlywiki.com instead (same text, nicer
> formatting).

*Something's* doing line-wrapping for you -- your message contains
plenty of newlines (hex 0A).

And I'm not sure what you mean by line-wrapping breaking long URLs. A
proper line-wrapper understands URLs, and won't break them (although my
beloved Sylph admittedly uses a broken line-wrapper:
http://www.sraoss.jp/pipermail/sylpheed/2010-September/004166.html).

Of course, some badly broken (e.g., Microsoft) MUAs will break
URLs while displaying them for the recipient ...

Celejar
--
foffl.sourceforge.net - Feeds OFFLine, an offline RSS/Atom aggregator
mailmin.sourceforge.net - remote access via secure (OpenPGP) email
ssuds.sourceforge.net - A Simple Sudoku Solver and Generator


 
Old 12-19-2010, 09:42 PM
Eduard Bloch
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

#include <hallo.h>
* Klistvud [Sat, Dec 18 2010, 10:32:10PM]:

> First of all, let me thank all of you who responded. As promised, I
> am giving feedback to the list so that future purchasers of Western
> Digital WD EARS/EADS models and similar "Advanced Format" hard
> drives may benefit.

Err, what? EADS don't use AF, TTBOMK.

> The first thing of notice is that the Load_Cycle_Count of the drive
> heads increases every 8 seconds by default. As seen on the Internet,

That only applies to the EARS. And it's wrong: unmount everything on
that disk and wait a while, and the loading/unloading should stop.

What really happens is that the disk parks after 8 seconds when it's
IDLE. Which is OK when you either read or write stuff all the time, or
don't do anything at all. It is not OK if you use them as system disks,
where a few bytes are written every couple of seconds and certain
popular Linux filesystems like to flush (that is, write out to disk)
that data every 10-15 seconds (just a bit more than 8 seconds), causing
the LCC to grow quite quickly over time.

So DO NOT use an EARS drive as a SYSTEM DISK.

> this may pose a problem in the long run, since these drives are
> "guaranteed" to sustain a limited number of such head parking
> cycles. The number given varies from 300.000 to 1.000.000, depending
> on where you look. The first thing I did was, therefore, launch a

There is no reason to put the word guaranteed in double quotes or
refer to weird sites. Just have a look at the official data sheet and
the common definition of MTBF, please.

> the WD proprietary utility wdidle3.exe, and the first link obtained
> by googling for "wdidle3.exe" did the trick:
> http://support.wdc.com/product/download.asp?groupid=609&sid=113
...
> thousand ticks overnight. Interestingly enough, the drive loaded and
> unloaded its heads at the amazing rate of twice per second even
...
> "disabled" to "every 300 seconds", which appears to be the maximum
> interval allowed. It would seem that, for the time being at least,
> this made the Load_Cycle_Count stay put at 22413. Whew!

Err, what? You play with a dangerous toy that was not designed for your
drive, and you wonder why it's all messed up now?

> Now, the second issue: the hardware/logical sector alignment.
> Since it affects real-world transfer speeds, let's first check
> out the theoretical speeds of this drive in this particular
> environment -- a 3GHz Pentium-IV motherboard with a humble
> integrated SATA controller (I think it's an early SATA-I
> generation).

My company had a lot of them. The SATA controllers were crap
performance-wise. I remember a colleague who got a shiny new OCZ SSD
drive which was supposed to deliver well over 200 MB/s but never got
beyond 70 MB/s on his system. The solution was a 15 EUR PCIe controller
card, which suddenly made it work as expected.

> obelix# hdparm -tT /dev/sda

Err, what does this have to do with the pros and cons of logical sector sizes?
Counter-example, WD20EARS on an AMD-78xx mainboard:

/dev/sdc:
Timing cached reads: 8042 MB in 2.00 seconds = 4023.46 MB/sec
Timing buffered disk reads: 360 MB in 3.02 seconds = 119.38 MB/sec

<ignored the rest of the posting, ENOTIME to read all of the voodoo>

Eduard.

--
Well, that's a garbage collector for you -- it even collects garbage
from the sky.
(Heise troll forum on Java in flight control systems)


 
Old 01-09-2011, 10:37 AM
Lisi
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

On Sunday 19 December 2010 22:42:17 Eduard Bloch wrote:
> <ignored the rest of the posting, ENOTIME to read all of the voodoo>

At least he wrote in comprehensible English.

Lisi


 
Old 01-09-2011, 10:58 AM
Dotan Cohen
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

On Sat, Dec 18, 2010 at 23:32, Klistvud <quotations@aliceadsl.fr> wrote:
> Attention: long post ahead!
> I don't use line wrapping because it breaks long URLs. If that makes you or
> your e-mail client cringe, you may as well read this at
> http://bufferoverflow.tiddlywiki.com instead (same text, nicer formatting).
>
> First of all, let me thank all of you who responded. As promised, I am
> giving feedback to the list so that future purchasers of Western Digital WD
> EARS/EADS models and similar "Advanced Format" hard drives may benefit.
>
> The first thing of notice is that the Load_Cycle_Count of the drive heads
> increases every 8 seconds by default. As seen on the Internet, this may pose
> a problem in the long run, since these drives are "guaranteed" to sustain a
> limited number of such head parking cycles. The number given varies from
> 300.000 to 1.000.000, depending on where you look. The first thing I did
> was, therefore, launch a shell script that wrote something to the drive
> every second. Not being content with this dirty workaround, I proceeded to
> download the WD proprietary utility wdidle3.exe, and the first link obtained
> by googling for "wdidle3.exe" did the trick:
> http://support.wdc.com/product/download.asp?groupid=609&sid=113
> I then proceeded to download a freedos bootable floppy image and copied it
> to a floppy disk using dd. Once the bootable floppy was thus created, I
> copied wdidle3.exe thereto.
> Reboot computer, change BIOS boot order to floppy first, save&exit, the
> floppy boots and I run wdidle3.exe. The utility offers three command-line
> switches, for viewing the current status of the Load_Cycle_Count parameter,
> for changing it, and for disabling it. No drive is specified, so if you
> change/disable the parameter, you are doing this to ALL and ANY WD drives in
> your system. I chose to disable head parking, and since I also have an older
> 160GB WD IDE disk in the box, the utility disabled head parking cycles for
> BOTH drives.
> Except that ... there be problems. As opposed to the old 160 GB drive, the
> setting didn't work for the new 2 TB drive. Instead, the frequency of the
> load cycles increased 16-fold, to a whopping 7200 cycles per hour! This
> quickly increased my Load_Cycle_Count parameter (checked by issuing smartctl
> --all /dev/sda) by several thousand ticks overnight. Interestingly enough,
> the drive loaded and unloaded its heads at the amazing rate of twice per
> second even while sustained copying was underway (copying a 10 GB directory
> subtree from one drive to another). I didn't notice the increased cycle
> count until the next morning, however. When I did, I rebooted the machine
> with the freedos floppy again and set the interval from "disabled" to "every
> 300 seconds", which appears to be the maximum interval allowed. It would
> seem that, for the time being at least, this made the Load_Cycle_Count stay
> put at 22413. Whew!
> So, setting this bugger straight is probably the first thing you'll want to
> do after getting one of these WD drives.
>
> Now, the second issue: the hardware/logical sector alignment.
> Since it will affects real-world transfer speeds, let's first check out the
> theoretical speeds of this drive in this particular environment -- a 3GHz
> Pentium-IV motherboard with a humble integrated SATA controller (I think
> it's an early SATA-I generation).
>
> Before partitioning and formatting:
>
> obelix# hdparm -tT /dev/sda
>
> /dev/sda:
> *Timing cached reads: * 1726 MB in *2.00 seconds = 713.98 - 862.86 MB/sec
> (several iterations performed)
> *Timing buffered disk reads: *336 MB in *3.01 seconds = 100.01 - 111.72
> MB/sec (several iterations performed)
>
> After partitioning the drive, aligned on modulo 8 sector boundaries:
>
> obelix:# hdparm -tT /dev/sda
>
> /dev/sda:
> *Timing cached reads: * 1264 MB in *2.00 seconds = 631.97 MB/sec
> *Timing buffered disk reads: *252 MB in *3.08 seconds = *81.80 MB/sec
>
> Hmm, while we're at it, why don't we also check the antiquated 160 GB drive
> on the obsolete IDE interface?
>
> obelix# hdparm -tT /dev/hda
>
> /dev/hda:
> *Timing cached reads: * 1348 MB in *2.00 seconds = 674.14 MB/sec
> *Timing buffered disk reads: *206 MB in *3.02 seconds = *68.26 MB/sec
>
> Well, so much for the alleged superiority of serial ATA over IDE...
>
> Anyway. I have to prepend here that, Squeeze still not having reached
> stable, all of the following was performed on a stock Lenny i386 system (the
> reason being I have no Squeeze system yet). So, many of the following points
> may become obsolete in a matter of weeks when Squeeze, with a newer kernel
> and updated partitioning tools, reaches stable.
> The first thing is, fdisk in Lenny doesn't support GPT partitioning, so I
> had to use parted. I first used its Gnome variant, GParted, and must say
> that it cant't align the partitions. Even if you align the first sector by
> hand (in parted, since GParted can't do it) and de-select the "Round to
> cylinders" option in GParted as recommended in
> http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html
> (which was my main guide and reference in this adventure), GParted will end
> your partition on an aligned sector -- which means that, by default, the
> next partition will start on a non-aligned sector again. Be as it may, I
> then proceeded to use the new partitions created by GParted, doing some
> cursory "benchmarks". The typical copy speed reached in mc was about 20
> MB/s, while rsync reported speeds of up to 51MB/S. Rsync reached a maximum
> 51MB/s on unaligned partitions, when copying from hda (WD1600AAJB) to sda
> (WD20EARS).
>
> Then I tried to re-align my partitions by manually calculating the starting
> sectors of all the partitions so as to have them divisible by 8. This could
> only be done in parted, not in GParted. On the other hand, parted couldn't
> create ext3 filesystems, so manually created partitions had to be
> subsequently formatted in GParted. In short, a combination of both tools had
> to be used to successfully create AND format the partitions. Here's my final
> result as seen in parted (fdisk doesn't understand GPT):
>
> (parted) print
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sda: 3907029168s
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
>
> Number *Start * * * *End * * * * *Size * * * * File system *Name * * Flags
> *1 * * *128s * * * * 8194055s * * 8193928s * * linux-swap * * * * * * * * 2
> * * *8194056s * * 49154055s * *40960000s * *ext3 * * * * primary * * * 3
> *49154056s * *90114055s * *40960000s * *ext3 * * * * primary * * * 4
> *90114056s * *1998569479s *1908455424s *ext3 * * * * primary * * * 5
> *1998569480s *3907024064s *1908454585s *ext3
>
> I was just curious if aligned partitions would yield any noticeable speed
> improvement (especially in the file write department, since file reads,
> according to the above IBM article, should not be that heavily hit by
> misalignment). The "benchmarks" I performed, consisting in copying random
> files from the other drive to the WD20EARS using mc and rsync, generally
> yielded something between 15 and 35 MB/s, sometimes falling under 10 MB/s
> and at times going as high as 56 MB/s; the latter figure, however, was
> usually reached in the initial moments of a large file rsync (an Ubuntu CD
> ISO file) and would decrease after several seconds to about 40 MB/s, so it
> may very well be due to the 64MB cache on these drives. Just for the heck of
> it, I decided to re-align the partitions modulo-64, thus:
>
> Partition Table: gpt
>
> Number *Start * * * *End * * * * *Size * * * * File system *Name
> *Flags
> *1 * * *128s * * * * 8194047s * * 8193920s * * linux-swap * linux-swap
> 2 * * *8194048s * * 49154047s * *40960000s * *ext3 * * * * ext3
> 3 * * *49154048s * *90114047s * *40960000s * *ext3 * * * * ext3
> 4 * * *90114048s * *1998569472s *1908455425s *ext3 * * * * ext3
> 5 * * *1998569473s *3907024064s *1908454592s *ext3
> Rsyncing the good old Ubuntu ISO file yielded transfer rates of around 60
> MB/s, with the exception of the last partition, which was written to at
> under 50 MB/s. That made me wonder. I checked the mount options in fstab
> and double-checked that the CPU governor was set to maximum performance,
> all to no avail. Then I fired up parted again and noticed that the 5th
> partition was actually one sector off. I corrected my error thus:
>
> Partition Table: gpt
>
> Number  Start        End          Size         File system  Name        Flags
>  1      128s         8194047s     8193920s     linux-swap   linux-swap
>  2      8194048s     49154047s    40960000s    ext3         ext3
>  3      49154048s    90114047s    40960000s    ext3         ext3
>  4      90114048s    1998569471s  1908455424s  ext3         ext3
>  5      1998569472s  3907024064s  1908454593s  ext3         ext3
>
> As expected, the rsync results for the last partition became consistent
> with the other partitions (i.e. around 60 MB/s).
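The one-sector bookkeeping error above is easy to make by hand. A minimal helper for catching it, assuming the 512-byte logical sectors reported by parted above (the sector values in the example are taken from the tables):

```shell
# Check whether a start sector falls on a given alignment boundary
# (8 sectors = 4 KiB, 64 sectors = 32 KiB on a 512-byte-sector drive).
is_aligned() {
  [ $(( $1 % $2 )) -eq 0 ]
}

# Partition 5's start before the fix, and after:
is_aligned 1998569473 64 && echo "1998569473: aligned" || echo "1998569473: off by $(( 1998569473 % 64 )) sector(s)"
is_aligned 1998569472 64 && echo "1998569472: aligned" || echo "1998569472: misaligned"
```

Running this prints "off by 1 sector(s)" for the pre-fix start, which is exactly the error parted made visible.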
>
> Conclusions:
> By default, these WD drives are not Linux-ready. They do work out of the
> box, but they are not configured optimally for speed. Given that we're
> talking about "green" (marketing mumbo-jumbo for "slow") drives, this
> additional performance hit is noticeable and quite undesirable. Aligning
> the partitions on 8-sector boundaries improved transfer speeds by almost
> 20%; aligning them on 64-sector boundaries yielded no further noticeable
> improvement, though. More precisely: the tests I performed were too coarse
> to substantiate potential small differences, because as the differences
> become smaller, other factors -- such as the CPU governor used, fstab
> parameters, or the actual CPU load at a given moment -- may prevail and
> completely mask them. The CPU governor seems to be the most influential of
> those secondary factors (see below). So there are indications that
> 64-sector alignment "may" give slightly better performance than 8-sector
> alignment, but they are nothing more than indications, really. Proper
> benchmarks would be required to ascertain that.
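To spot the misalignment described above without firing up parted, the kernel's sysfs view of the partition table can be scanned directly. A sketch, not a polished tool: the /sys/block/&lt;disk&gt;/&lt;part&gt;/start layout is standard sysfs, but the device name sda in the example invocation is an assumption about your system.

```shell
# Flag any partition whose start sector is not a multiple of 8, i.e. not
# aligned to the 4096-byte physical sectors these "Advanced Format" drives
# use internally (sysfs reports starts in 512-byte logical sectors).
check_alignment() {
  local f start name
  for f in "$1"/*/start; do
    [ -r "$f" ] || continue
    start=$(cat "$f")
    name=$(basename "$(dirname "$f")")
    if [ $(( start % 8 )) -eq 0 ]; then
      echo "$name: start=$start (4 KiB aligned)"
    else
      echo "$name: start=$start (MISALIGNED)"
    fi
  done
}

# Typical invocation (device name is an assumption):
# check_alignment /sys/block/sda
```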
>
> Curiosa:
> All testing was done with a ~700 MB ISO file; copying many smaller files
> may (and will) incur additional performance hits.
> Dropping the CPU governor to powersave reduced file writes to under 20
> MB/s, i.e. to about a third of the maximum achievable speed.
> Mount options for the partitions and the performance of the source disk
> are also major factors in these tests. In my case, the source from which
> the files were copied was an oldish 160 GB WD IDE drive (model WD1600AAJB).
> The only downtime needed was about 10 minutes -- the time it took to
> actually install the drive into the chassis; had WD provided a tool for
> modifying the drive's S.M.A.R.T. Load_Cycle_Count behavior online, no
> further reboots would have been needed, i.e. once the hard drive was
> installed, it could have been taken into production use without so much as
> a single reboot. Due to my own mistake, however, one superfluous reboot
> was needed: while juggling parted and gparted and modifying partition
> sizes, at one point I forgot to unmount the partitions before deleting the
> partition table in parted. After that, I kept getting a warning that a
> reboot would be required for the kernel to re-read the partition tables,
> which prevented me from creating the last two filesystems and wrapping
> things up. Neither umount nor swapoff helped. Instead of hunting down the
> offending process and killing or restarting it, I preferred to reboot the
> system, since it wasn't in use at the moment anyway.
> Besides the physical installation of the drive in the chassis, which was
> done during off hours, virtually everything else was done remotely via
> ssh, without interrupting the work of the currently logged-in user. To
> enable graphical tools such as GParted, ssh was run with the -XC options,
> and GParted was then launched remotely by issuing "gksu gparted". The
> flexibility of GNU/Linux is simply mind-boggling.
> I have no kind words for WD. Their drives, as shipped, are severely
> underoptimized for GNU/Linux. On the drive label and on their site they
> state that no further configuration is required to use the drive with
> Linux, which is quite simply untrue. In addition, the head parking feature
> is heavily flawed, accessible only via a proprietary DOS tool, and only by
> taking the entire system offline. I am quite disappointed in WD, but am
> thoroughly confident that the GNU/Linux community will compensate for WD's
> shortcomings, as always. We'll see what hdparm and smartmontools in
> Squeeze bring along. The Lenny versions are too old to be of much use with
> this disk (for example, the hdparm -B command doesn't work).
> The foregoing user experience is nothing more than that -- a user
> experience; copying a handful of files is not to be considered a "test" or
> "benchmark" in any meaningful sense whatsoever, so take it with a huge lump
> of salt!
> Happy computing!
>
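The head-parking complaint above can at least be monitored online with smartmontools. A hedged sketch: pull the raw Load_Cycle_Count value out of "smartctl -A" output. The attribute name and the raw-value-in-last-column layout match typical smartmontools output, but your drive or firmware may report it differently.

```shell
# Extract the raw Load_Cycle_Count value from smartctl attribute output
# (the raw value is the last field of the attribute line).
lcc_from_smart() {
  awk '/Load_Cycle_Count/ { print $NF }'
}

# Typical use (needs root; the device name /dev/sda is an assumption):
# smartctl -A /dev/sda | lcc_from_smart
# Run it twice a few minutes apart; a rapidly growing count means the
# 8-second idle timer is still parking the heads.
```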

Thanks, Klistvud. I just purchased a WD10EARS (1 TB drive) and I
noticed that my writes are _slow_. I think that it may be a KDE issue;
there is even an open KDE bug that copy/paste is very slow. But even
copying via cp I feel that it's not moving; I need to benchmark the
drive. Your post gives me some other things to check and configure.
Thank you!


--
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


--
To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Archive: http://lists.debian.org/AANLkTikvu53e50dHmkTrn=yk-W03pA0x14NQ-QVJzVGS@mail.gmail.com
 
Old 01-09-2011, 01:02 PM
Stan Hoeppner
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

Dotan Cohen put forth on 1/9/2011 5:58 AM:

> Thanks, Klistvud. I just purchased a WD10EARS (1 TB drive) and I
> noticed that my writes are _slow_. I think that it may be a KDE issue;
> there is even an open KDE bug that copy/paste is very slow. But even
> copying via cp I feel that it's not moving; I need to benchmark the
> drive. Your post gives me some other things to check and configure.
> Thank you!

Given the inherent performance problems Linux currently has with the
512/4096 byte sector hybrid drives, called "Advanced Format" by Western
Digital, my recommendation to Linux users is to stay away from these
drives at all costs, regardless of how attractive the price/GB ratio is.

Specifically regarding the WDxxEARS drives, WD has a drive of the same
capacity but with native 512 byte sectors in either or both of the Blue
and Black product lines. The only advantage of the Green (EARS) line is
a 3TB drive model not present in the Blue/Black lines.

Additionally, the Blue and Black drives have full 7.2k spindles and will
thus yield far superior performance to the Green (EARS) drives for the
same size drive.

If one is so power-consumption conscious as to be suckered into a Green
(EARS) drive, one needs to realize that the CPU dissipates about 10 times
the wattage/heat of a hard drive. Thus, concentrate your power-saving
efforts elsewhere than the disk drive. Buy a non-"green" drive and save
yourself these sector alignment/performance headaches.

Dotan, in your case, you should have purchased a WD10EALS instead of the
WD10EARS:
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701277.pdf

This Blue series 1TB drive has vastly superior performance and little
additional power consumption compared to its WD10EARS cousin.

http://www.newegg.com/Product/Product.aspx?Item=N82E16822136534&cm_re=wd10eals-_-22-136-534-_-Product

http://www.newegg.com/Product/Product.aspx?Item=N82E16822136490&cm_re=WD10EARS-_-22-136-490-_-Product

The Blue drive costs $5 USD more at Newegg. In every respect it is a
vastly superior drive for Linux users compared to the WD10EARS Green
drive -- no sector alignment headaches, and 50%+ better streaming and
random IOPS performance.

--
Stan


--
To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Archive: http://lists.debian.org/4D29BFDD.4040209@hardwarefreak.com
 
Old 01-09-2011, 02:08 PM
Klistvud
 
Default Is squeeze compatible with WD20EARS and other 2TB drives?

On 09. 01. 2011 at 12:58:22, Dotan Cohen wrote:


Thanks, Klistvud. I just purchased a WD10EARS (1 TB drive) and I
noticed that my writes are _slow_. I think that it may be a KDE issue;
there is even an open KDE bug that copy/paste is very slow. But even
copying via cp I feel that it's not moving; I need to benchmark the
drive. Your post gives me some other things to check and configure.
Thank you!



Glad to be of help. Please do read Stan Hoeppner's suggestion in this
thread on using the dd command as a more reliable benchmark!
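For reference, that dd benchmark can be sketched like this; the test file path and sizes are my own assumptions, and conv=fdatasync is used so the reported rate includes flushing to the platters rather than just to the page cache:

```shell
# Write a 64 MiB file and let dd report the sustained write rate.
# For a cache-free read test you would additionally need to drop caches
# or read with iflag=direct.
TESTFILE=${TESTFILE:-/tmp/ddbench.$$}
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"
```

Larger counts (e.g. a few GiB) give steadier numbers on a real drive, since a small file can fit entirely in the drive's 64 MB cache.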


--
Cheerio,

Klistvud
http://bufferoverflow.tiddlyspot.com
Certifiable Loonix User #481801
Please reply to the list, not to me.



--
To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Archive: http://lists.debian.org/1294585698.3874.5@compax
 
