Go Back   Linux Archive > Gentoo > Gentoo User

 
 
 
Old 02-07-2010, 05:38 PM
Mark Knecht
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Sun, Feb 7, 2010 at 9:30 AM, Alexander <b3nder@yandex.ru> wrote:
> On Sunday 07 February 2010 19:27:46 Mark Knecht wrote:
>
>>    Every time there is an apparent delay I just see the hard drive
>> light turned on solid. That said as far as I know if I wait for things
>> to complete the data is there but I haven't tested it extensively.
>>
>>    Is this a bad drive or am I somehow using it incorrectly?
>>
>
> Is there any related info in dmesg?
>
>

No, nothing in dmesg at all.

Here are two tests this morning. The first is to the 1T drive, the
second is to a 120GB drive I'm currently using as a system drive until
I work this out:

gandalf TestMount # time tar xjf /mnt/TestMount/portage-latest.tar.bz2
-C /mnt/TestMount/usr

real 8m13.077s
user 0m8.184s
sys 0m2.561s
gandalf TestMount #


mark@gandalf ~ $ time tar xjf /mnt/TestMount/portage-latest.tar.bz2 -C
/home/mark/Test_usr/

real 0m39.213s
user 0m8.243s
sys 0m2.135s
mark@gandalf ~ $

8 minutes vs 39 seconds!

The amount of data written appears to be the same:

gandalf ~ # du -shc /mnt/TestMount/usr/
583M /mnt/TestMount/usr/
583M total
gandalf ~ #


mark@gandalf ~ $ du -shc /home/mark/Test_usr/
583M /home/mark/Test_usr/
583M total
mark@gandalf ~ $


I did some reading at the WD site and it seems this drive does use the
4K sector size. The way it's done: addressing on the cable is still by
512-byte 'user sectors', but they are packed into 4K physical sectors
and internal hardware does the mapping.

I suspect the performance issue is figuring out how to get the file
system to keep things on 4K boundaries. I assume that's what the 4K
block size is for when building the file system but I need to go find
out more about that. I did not select it specifically. Maybe I need
to.
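As a rough sketch of that mapping (my own illustration, not anything WD publishes): each 4K physical sector holds 8 of the 512-byte logical sectors, so the physical sector for a given logical sector is just an integer division:

```shell
# Illustration only: 8 x 512B logical sectors per 4K physical sector.
# A logical address maps to physical sector floor(logical / 8); a
# logical start that isn't a multiple of 8 sits mid-physical-sector.
logical_to_physical() {
    echo $(( $1 / 8 ))
}
logical_to_physical 63   # 7  (63 is not a multiple of 8 - mid-sector)
logical_to_physical 64   # 8  (start of a physical sector)
```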

Thanks,
Mark
 
Old 02-07-2010, 06:26 PM
Mark Knecht
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Sun, Feb 7, 2010 at 10:19 AM, Volker Armin Hemmann
<volkerarmin@googlemail.com> wrote:
> On Sunday 07 February 2010, Alexander wrote:
>> On Sunday 07 February 2010 19:27:46 Mark Knecht wrote:
>> >    Every time there is an apparent delay I just see the hard drive
>> >
>> > light turned on solid. That said as far as I know if I wait for things
>> > to complete the data is there but I haven't tested it extensively.
>> >
>> >    Is this a bad drive or am I somehow using it incorrectly?
>>
>> Is there any related info in dmesg?
>
> or maybe there is too much cached and seeking is not the drive's strong point
> ...

It's an interesting question. There is new physical seeking technology
in this line of drives, intended to reduce power and noise,
but it seems unlikely to me that WD would purposely make a drive that's
10-20x slower than previous generations. Could be, though...

Are there any user space Linux tools that can test that?

The other thing I checked: when the block size is not specified,
mke2fs uses the default values from /etc/mke2fs.conf, and my file says
blocksize = 4096. So it would seem that if all partitions use 4K
blocks, then at least the partitions would be properly aligned.
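For what it's worth, one way to double-check what mkfs actually used is tune2fs, which prints the block size of an existing filesystem. The parsing below runs against a sample output line so it stands alone; /dev/sdb1 is just an example device:

```shell
# Sketch: pull the block size out of a tune2fs-style output line.
# On a real system: tune2fs -l /dev/sdb1 | grep 'Block size'
sample='Block size:               4096'
blocksize=$(echo "$sample" | awk -F: '{gsub(/ /,"",$2); print $2}')
echo "$blocksize"   # 4096
```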

My question about that would be: when I write a 1-byte file to this
drive, do I use all 4K of the block it's written in? That's wasteful, but
faster, right? I want files to be block-aligned so that the drive
isn't doing lots of translation to get at the right data. It seems that's
been the problem with these drives in the Windows world, so WD had to
release updated software to get the Windows disk formatters to do
things right - or so I think.

Thanks Volker.

Cheers,
Mark
 
Old 02-07-2010, 07:31 PM
Mark Knecht
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Sun, Feb 7, 2010 at 11:39 AM, Willie Wong <wwong@math.princeton.edu> wrote:
> On Sun, Feb 07, 2010 at 08:27:46AM -0800, Mark Knecht wrote:
>> <QUOTE>
>> 4KB physical sectors: KNOW WHAT YOU'RE DOING!
>>
>> Pros: Quiet, cool-running, big cache
>>
>> Cons: The 4KB physical sectors are a problem waiting to happen. If you
>> misalign your partitions, disk performance can suffer. I ran
>> benchmarks in Linux using a number of filesystems, and I found that
>> with most filesystems, read performance and write performance with
>> large files didn't suffer with misaligned partitions, but writes of
>> many small files (unpacking a Linux kernel archive) could take several
>> times as long with misaligned partitions as with aligned partitions.
>> WD's advice about who needs to be concerned is overly simplistic,
>> IMHO, and it's flat-out wrong for Linux, although it's probably
>> accurate for 90% of buyers (those who run Windows or Mac OS and use
>> their standard partitioning tools). If you're not part of that 90%,
>> though, and if you don't fully understand this new technology and how
>> to handle it, buy a drive with conventional 512-byte sectors!
>> </QUOTE>
>>
>>    Now, I don't mind getting a bit dirty learning to use this
>> correctly, but I'm wondering what that means in a practical sense.
>> Reading the mke2fs man page, the word 'sector' doesn't come up. It's my
>> understanding that Linux 'blocks' are groups of sectors. True? If the
>> disk must use 4K sectors then what - the smallest block has to be 4K
>> and I'm using 1 sector per block? It seems that ext3 doesn't support
>> anything larger than 4K?
>
> The problem is not when you are making the filesystem with mke2fs, but
> when you partitioned the disk using fdisk. I'm sure I am making some
> small mistakes in the explanation below, but it goes something like
> this:
>
> a) The harddrive with 4K sectors allows the head to efficiently
> read/write 4K sized blocks at a time.
> b) However, to be compatible in hardware, the harddrive allows 512B
> sized blocks to be addressed. In reality, this means that you can
> individually address the 8 512B-sized chunks of the 4K sized blocks,
> but each will count as a separate operation. To illustrate: say the
> hardware has some sector X of size 4K. It has 8 addressable slots
> inside, X1 ... X8, each of size 512B. If your OS clusters reads/writes on
> the 512B level, it will send 8 commands to read the info in those 8
> blocks separately. If your OS clusters in 4K, it will send one
> command. So in the stupid analysis I give here, it will take 8 times
> as long for the 512B addressing to read the same data, since it will
> take 8 passes, and each time inefficiently reading only 1/8 of the
> data required. Now in reality, drives are smarter than that: if all 8
> of those are sent in sequence, sometimes the drives will cluster them
> together in one read.
> c) A problem occurs, however, when your OS deals with 4K clusters but
> when you make the partition, the partition is offset! Imagine the
> physical read sectors of your disk looking like
>
> AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD
>
> but when you make your partitions, somehow you partitioned it
>
> ....YYYYYYYYZZZZZZZZWWWWWWWW....
>
> This is possible because the drive allows addressing by 512B chunks.
> So for some reason one of your partitions starts halfway inside a
> physical sector. What is the problem with this? Now suppose your OS
> sends data to be written to the ZZZZZZZZ block. If it were completely
> aligned, the drive will just move the head to the block and
> overwrite it with this information. But since half of the block is
> over the BBBB physical sector, and half over CCCC, what the disk now
> needs to do is to
>
> pass 1) read BBBBBBBB
> pass 2) modify the second half of BBBB to match the first half of ZZZZ
> pass 3) write BBBBBBBB
> pass 4) read CCCCCCCC
> pass 5) modify the first half of CCCC to match the second half of ZZZZ
> pass 6) write CCCCCCCC
>
> Or what is known as a read-modify-write operation. Thus the disk
> becomes a lot less efficient.
>
> ----------
>
> Now, I don't know if this is the actual problem causing your
> performance problems. But this may be it. When you use fdisk, it
> defaults to aligning the partition to cylinder boundaries, and uses the
> default (from ancient times) value of 63 x (512B sized) sectors per
> track. Since 63 is not evenly divisible by 8, you see that quite
> likely some of your partitions are not aligned to the physical sector
> boundaries.
>
> If you use cfdisk, you can try to change the geometry with the command
> g. Or you can use the command u to change the units used in the
> partitioning to either sectors or megabytes, and make sure your
> partition sizes are a multiple of 8 in the former, or an integer in
> the latter.
>
> Again, take what I wrote with a grain of salt: this information came
> from the research I did a little while back after reading the slashdot
> article on this 4K switch. So being my own understanding, it may not
> completely be correct.
>
> HTH,
>
> W
> --
> Willie W. Wong                                    wwong@math.princeton.edu
> Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire
>         et vice versa  ~~~  I. Newton
>
>

Willie,
Thanks. Your description above is pretty much consistent (I think)
with the information I found at the WD site explaining how the data is
physically packed on the drive. Since I have the OS set up
on a different drive, I was able to blow away all the partitions, so I
just created one large 1TB partition - but I think that doesn't deal with
the exact problem you outline.

I'll have to study how to change the geometry. I do see that cfdisk
is reporting 255/63/121601. Am I to choose a size that is __smaller__
than 63 but a multiple of 8? I.e. - 56? And if I do that, does the
partitioning of the drive just ignore those last 7 sectors and reduce
capacity to 56/63, i.e. by about 11%?
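If I understand the arithmetic right (my own sketch, not something fdisk computes for you), the usual fix is to round a proposed start sector *up* to the next multiple of 8 rather than shrink the track:

```shell
# Sketch: round a start sector up to the next 8-sector (4K) boundary.
align_up() {
    echo $(( ( ($1 + 7) / 8 ) * 8 ))
}
align_up 63   # 64
align_up 64   # 64 (already aligned)
```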

Or is it legal to push the number of sectors up to 64? I would have
thought that the sector count would be driven by really low level
formatting and I shouldn't be messing with that.

Assuming I do what you are suggesting, with 7 (8-sector) blocks
per track, do I then need to choose the starting position of each
partition to be aligned to the start of a new 8-sector block?

It's very strange that the disk industry chose anything that's not
2^X but I guess they did.

As per your and Volker's suggestions I'm going to study the proper
way to align partitions before I do anything more. I did find a small
program called 'fio' that does some interesting drive testing,
including seek-time testing. I need to study how to really use it,
though. It can set up multiple threads to simulate loads that are more
real-world.

Thanks to you both for the responses.

Cheers,
Mark
 
Old 02-07-2010, 08:42 PM
Mark Knecht
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Sun, Feb 7, 2010 at 11:39 AM, Willie Wong <wwong@math.princeton.edu> wrote:
> On Sun, Feb 07, 2010 at 08:27:46AM -0800, Mark Knecht wrote:
>> <QUOTE>
>> 4KB physical sectors: KNOW WHAT YOU'RE DOING!
>>
>> Pros: Quiet, cool-running, big cache
>>
>> Cons: The 4KB physical sectors are a problem waiting to happen. If you
>> misalign your partitions, disk performance can suffer. I ran
>> benchmarks in Linux using a number of filesystems, and I found that
>> with most filesystems, read performance and write performance with
>> large files didn't suffer with misaligned partitions, but writes of
>> many small files (unpacking a Linux kernel archive) could take several
>> times as long with misaligned partitions as with aligned partitions.
>> WD's advice about who needs to be concerned is overly simplistic,
>> IMHO, and it's flat-out wrong for Linux, although it's probably
>> accurate for 90% of buyers (those who run Windows or Mac OS and use
>> their standard partitioning tools). If you're not part of that 90%,
>> though, and if you don't fully understand this new technology and how
>> to handle it, buy a drive with conventional 512-byte sectors!
>> </QUOTE>
>>
>>    Now, I don't mind getting a bit dirty learning to use this
>> correctly, but I'm wondering what that means in a practical sense.
>> Reading the mke2fs man page, the word 'sector' doesn't come up. It's my
>> understanding that Linux 'blocks' are groups of sectors. True? If the
>> disk must use 4K sectors then what - the smallest block has to be 4K
>> and I'm using 1 sector per block? It seems that ext3 doesn't support
>> anything larger than 4K?
>
> The problem is not when you are making the filesystem with mke2fs, but
> when you partitioned the disk using fdisk. I'm sure I am making some
> small mistakes in the explanation below, but it goes something like
> this:
>
> a) The harddrive with 4K sectors allows the head to efficiently
> read/write 4K sized blocks at a time.
> b) However, to be compatible in hardware, the harddrive allows 512B
> sized blocks to be addressed. In reality, this means that you can
> individually address the 8 512B-sized chunks of the 4K sized blocks,
> but each will count as a separate operation. To illustrate: say the
> hardware has some sector X of size 4K. It has 8 addressable slots
> inside, X1 ... X8, each of size 512B. If your OS clusters reads/writes on
> the 512B level, it will send 8 commands to read the info in those 8
> blocks separately. If your OS clusters in 4K, it will send one
> command. So in the stupid analysis I give here, it will take 8 times
> as long for the 512B addressing to read the same data, since it will
> take 8 passes, and each time inefficiently reading only 1/8 of the
> data required. Now in reality, drives are smarter than that: if all 8
> of those are sent in sequence, sometimes the drives will cluster them
> together in one read.
> c) A problem occurs, however, when your OS deals with 4K clusters but
> when you make the partition, the partition is offset! Imagine the
> physical read sectors of your disk looking like
>
> AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD
>
> but when you make your partitions, somehow you partitioned it
>
> ....YYYYYYYYZZZZZZZZWWWWWWWW....
>
> This is possible because the drive allows addressing by 512B chunks.
> So for some reason one of your partitions starts halfway inside a
> physical sector. What is the problem with this? Now suppose your OS
> sends data to be written to the ZZZZZZZZ block. If it were completely
> aligned, the drive will just move the head to the block and
> overwrite it with this information. But since half of the block is
> over the BBBB physical sector, and half over CCCC, what the disk now
> needs to do is to
>
> pass 1) read BBBBBBBB
> pass 2) modify the second half of BBBB to match the first half of ZZZZ
> pass 3) write BBBBBBBB
> pass 4) read CCCCCCCC
> pass 5) modify the first half of CCCC to match the second half of ZZZZ
> pass 6) write CCCCCCCC
>
> Or what is known as a read-modify-write operation. Thus the disk
> becomes a lot less efficient.
>
> ----------
>
> Now, I don't know if this is the actual problem causing your
> performance problems. But this may be it. When you use fdisk, it
> defaults to aligning the partition to cylinder boundaries, and uses the
> default (from ancient times) value of 63 x (512B sized) sectors per
> track. Since 63 is not evenly divisible by 8, you see that quite
> likely some of your partitions are not aligned to the physical sector
> boundaries.
>
> If you use cfdisk, you can try to change the geometry with the command
> g. Or you can use the command u to change the units used in the
> partitioning to either sectors or megabytes, and make sure your
> partition sizes are a multiple of 8 in the former, or an integer in
> the latter.
>
> Again, take what I wrote with a grain of salt: this information came
> from the research I did a little while back after reading the slashdot
> article on this 4K switch. So being my own understanding, it may not
> completely be correct.
>
> HTH,
>
> W
> --
> Willie W. Wong                                    wwong@math.princeton.edu
> Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire
>         et vice versa  ~~~  I. Newton
>

Hi Willie,
OK - it turns out that if I start fdisk using the -u option it shows me
sector numbers. Looking at the original partition created with
default values, the starting sector was 63 - probably about the
worst value it could be. As a test I blew away that partition and
created a new one starting at 64 instead, and the untar results are
vastly improved - down to roughly 20 seconds from 8-10 minutes. That's
roughly twice as fast as the old 120GB SATA2 drive I was using to test
the system out while I debugged this issue.
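A quick check I could have run earlier (a sketch using sample start sectors; on a live box you'd feed it the Start column from `fdisk -lu`):

```shell
# Sketch: a partition start is 4K-aligned iff it is divisible by 8.
check_align() {
    if [ $(( $1 % 8 )) -eq 0 ]; then echo aligned; else echo misaligned; fi
}
check_align 63   # misaligned (fdisk's old default first sector)
check_align 64   # aligned
```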

There's still some variability but there's probably other things
running on the box - screen savers and stuff - that account for some
of that.

I'm still a little fuzzy about what happens to the extra sectors at
the end of a track. Are they used and I pay for a little bit of
overhead reading data off of them or are they ignored and I lose
capacity? I think it must be the former as my partition isn't all that
much less than 1TB.

Again, many thanks to you and Volker for pointing this issue out.

Cheers,
Mark

gandalf TestMount # fdisk -u /dev/sdb

The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x67929f10

Device Boot Start End Blocks Id System
/dev/sdb1 64 1953525167 976762552 83 Linux

Command (m for help): q

gandalf TestMount # df -H
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 110G 8.6G 96G 9% /
udev 11M 177k 11M 2% /dev
shm 2.0G 0 2.0G 0% /dev/shm
/dev/sdb1 985G 210M 935G 1% /mnt/TestMount
gandalf TestMount #



gandalf TestMount # mkdir usr
gandalf TestMount # time tar xjf /portage-latest.tar.bz2 -C /mnt/TestMount/usr

real 0m23.275s
user 0m8.614s
sys 0m2.644s
gandalf TestMount # time rm -rf /mnt/TestMount/usr/

real 0m3.720s
user 0m0.118s
sys 0m1.822s
gandalf TestMount # mkdir usr
gandalf TestMount # time tar xjf /portage-latest.tar.bz2 -C /mnt/TestMount/usr

real 0m13.828s
user 0m8.911s
sys 0m2.653s
gandalf TestMount # time rm -rf /mnt/TestMount/usr/

real 0m19.718s
user 0m0.128s
sys 0m2.025s
gandalf TestMount # mkdir usr
gandalf TestMount # time tar xjf /portage-latest.tar.bz2 -C /mnt/TestMount/usr

real 0m25.777s
user 0m8.579s
sys 0m2.660s
gandalf TestMount # time rm -rf /mnt/TestMount/usr/

real 0m2.564s
user 0m0.112s
sys 0m1.805s
gandalf TestMount #
 
Old 02-07-2010, 08:59 PM
Kyle Bader
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

>>> 4KB physical sectors: KNOW WHAT YOU'RE DOING!

Good article by Theodore T'so, might be helpful:

http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/

--

Kyle
 
Old 02-08-2010, 04:10 PM
Mark Knecht
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Sun, Feb 7, 2010 at 6:08 PM, Willie Wong <wwong@math.princeton.edu> wrote:
> On Sun, Feb 07, 2010 at 01:42:18PM -0800, Mark Knecht wrote:
>>    OK - it turns out that if I start fdisk using the -u option it shows me
>> sector numbers. Looking at the original partition created with
>> default values, the starting sector was 63 - probably about the
>> worst value it could be. As a test I blew away that partition and
>> created a new one starting at 64 instead and the untar results are
>> vastly improved - down to roughly 20 seconds from 8-10 minutes. That's
>> roughly twice as fast as the old 120GB SATA2 drive I was using to test
>> the system out while I debugged this issue.
>
> That's good to hear.
>
>>    I'm still a little fuzzy about what happens to the extra sectors at
>> the end of a track. Are they used and I pay for a little bit of
>> overhead reading data off of them or are they ignored and I lose
>> capacity? I think it must be the former as my partition isn't all that
>> much less than 1TB.
>
> As far as I know, you shouldn't worry about it. The
> head/track/cylinder addressing is a relic of an older day. Almost all
> modern drives should be accessed via LBA. If interested, take a look
> at the wikipedia entry on Cylinder-Head-Sector and Logical Block
> Addressing.
>
> Basically, you are not losing anything.
>
> Cheers,
>
> W
> --
> Willie W. Wong                                    wwong@math.princeton.edu
> Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire
>         et vice versa  ~~~  I. Newton
>
>

Hi,
Yeah, a little more study and thinking confirms this. The sectors
are 4K; WD put them on there.

Just because there might be extra physical space at the end of a
track doesn't mean I can ever use it.

The sectors are physically 4K, with ECC, but accessible by CHS and
by LBA in 512B chunks - and WD has taken all of that into account
already. The trick for speed at the OS/driver level is to make sure we
are always grabbing a 4K logical block from a single 4K physical
sector on the drive. If we do, it's fast. If we ask for a 4K block
that isn't in a single 4K physical sector, it becomes very slow, as
the drive's hardware/firmware has to do multiple reads and piece the
data together for us. (VERY slow...) By mapping partitions to sector
numbers divisible by 8 we get the fast case. (8 * 512B = 4K)
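To put a number on the penalty (again my own illustration): count the physical sectors a single 4K (8-logical-sector) request touches, depending on where it starts:

```shell
# Sketch: physical 4K sectors touched by an 8-logical-sector request
# starting at the given logical sector.
phys_touched() {
    first=$(( $1 / 8 ))
    last=$(( ($1 + 7) / 8 ))
    echo $(( last - first + 1 ))
}
phys_touched 64   # 1 - aligned, a single physical read/write
phys_touched 63   # 2 - misaligned, forces a read-modify-write
```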

The extra space at the end of a track/cylinder is 'lost', but it was
lost before we bought the drive, because the sectors are 4K - so there is
nothing 'lost' by the choices we make in fdisk. I must remember to use
fdisk -u to see the sector numbers when making partitions, and
remember to do some test writes to the partition to ensure it's right
and the speed is good before doing any real work.

This has been helpful for me. I'm glad Valmor is getting better
results also.

I wish I had checked the title before I sent the original email; it
was supposed to be:

1-Terabyte drives - 4K sector sizes? -> bad performance so far

Maybe sticking that here will help others when they Google for this later.

Cheers,
Mark
 
Old 02-08-2010, 07:34 PM
Paul Hartman
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Mon, Feb 8, 2010 at 12:52 PM, Valmor de Almeida <val.gentoo@gmail.com> wrote:
> Mark Knecht wrote:
> [snip]
>>
>> This has been helpful for me. I'm glad Valmor is getting better
>> results also.
> [snip]
>
> These 4k-sector drives can be problematic when upgrading older
> computers. For instance, my laptop BIOS would not boot from the Toshiba
> drive I mentioned earlier. However, when used as an external USB drive, I
> could boot Gentoo. Since I have been using this drive as backup storage
> I did not investigate the reason for the lower speed. I am happy to get
> a factor-of-8 speedup now that you've done the research.
>
> Thanks for your postings.

Thanks for the info everyone, but do you understand the agony I am now
suffering at the fact that all the disks in my system (including all parts
of my RAID5) start on sector 63 and I don't have sufficient
free space (or free time) to repartition them? I am really curious
whether there are any gains to be made on my own system...

Next time I partition I will definitely pay attention to this, and
feel foolish that I didn't pay attention before. Thanks.
 
Old 02-08-2010, 11:37 PM
Mark Knecht
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Mon, Feb 8, 2010 at 4:05 PM, Frank Steinmetzger <Warp_7@gmx.de> wrote:
> On Sunday, 7 February 2010, Mark Knecht wrote:
>
>> Hi Willie,
>>    OK - it turns out that if I start fdisk using the -u option it shows me
>> sector numbers. Looking at the original partition created with
>> default values, the starting sector was 63
>
> Same here.
>
>> - probably about the worst value it could be.
>
> Hm.... what about those first 62 sectors?
> I bought this 500GB drive for my laptop recently and did a fresh partitioning
> scheme on it, and then rsynced the filesystems of the old, smaller drive onto
> it. The first two partitions are ntfs, but I believe they also use cluster
> sizes of 4k by default. So technically I could repartition everything and
> then restore the contents from my backup drive.
>
> And indeed my system becomes very sluggish when I do some HDD shuffling.
>
>> As a test I blew away that partition and
>> created a new one starting at 64 instead and the untar results are
>> vastly improved - down to roughly 20 seconds from 8-10 minutes. That's
>> roughly twice as fast as the old 120GB SATA2 drive I was using to test
>> the system out while I debugged this issue.
>
> Though the result justifies your decision, I would have thought one has to
> start at 65, unless the disk starts counting its sectors at 0.
> --
> Gruß | Greetings | Qapla'
> Programmers don’t die, they GOSUB without RETURN.
>

Good question. I don't know where it starts counting but 63 seems to
be the first one you can use on any blank drive I've looked at so far.

There's a few small downsides I've run into with all of this so far:

1) Since we don't use sector 63, it seems that fdisk will still tell
you that you can use 63 until you use up all your primary partitions.
It used to be easier to add partitions when fdisk gave you
the next sector you could use after the one you just added. Now I'm
finding that I need to write things down and figure it out more
carefully outside of fdisk.

2) When I do something like +60G, fdisk chooses the final sector, but
it seems that it doesn't end 1 sector before something divisible by 8,
so again, once the new partition is in I need to do more calculations
to find where the next one will go. It's probably better to decide what
you want for an end and make sure that the next sector is divisible by
8.

3) When I put in an extended partition I started it at
something divisible by 8. When I went to add a logical partition
inside of that, I found that some odd number of sectors was
dedicated to the extended partition itself, and I had to waste a few
more sectors getting the logical partitions onto multiples of 8.

4) Everything I've done so far leaves me with messages about partition
1 not ending on a cylinder boundary. Googling that one says don't
worry about it. I don't know...

So, it works - the new partitions are fast but it's a bit of work
getting them in place.
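For point 2 above, the bookkeeping can be scripted rather than done by hand. This is a sketch under my own conventions - `end_for` is a made-up helper, and 2048 is the number of 512B sectors per MiB:

```shell
# Sketch: pick an end sector for a partition of roughly `mib` MiB
# starting at `start`, so the NEXT partition begins on an 8-sector boundary.
end_for() {
    start=$1; mib=$2
    sectors=$(( mib * 2048 ))                      # 512B sectors per MiB
    next=$(( ( (start + sectors + 7) / 8 ) * 8 ))  # aligned next start
    echo $(( next - 1 ))                           # end = next start - 1
}
end_for 64 61440   # end sector for a ~60GiB partition starting at 64
```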

- Mark
 
Old 02-09-2010, 03:31 PM
Mark Knecht
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Mon, Feb 8, 2010 at 4:37 PM, Mark Knecht <markknecht@gmail.com> wrote:
<SNIP>
>
> There's a few small downsides I've run into with all of this so far:
>
> 1) Since we don't use sector 63 it seems that fdisk will still tell
> you that you can use 63 until you use up all your primary partitions.
> It used to be easier to put additional partitions on when it gave you
> the next sector you could use after the one you just added. Now I'm
> finding that I need to write things down and figure it out more
> carefully outside of fdisk.
>

Replying mostly to myself: WRT the value 63 continuing to show up
after making the first partition start at 64 - since on desktop
machines the first partition is generally /boot, and it's
written and read so seldom, in the future when faced with this problem
I will likely start /boot at 63 and just ensure that all the other
partitions - /, /var, /home, etc. - start on boundaries divisible by 8.

It will make using fdisk slightly more pleasant.

- Mark
 
Old 02-09-2010, 04:33 PM
Paul Hartman
 
Default 1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Mon, Feb 8, 2010 at 6:27 PM, Neil Bothwick <neil@digimed.co.uk> wrote:
> On Mon, 8 Feb 2010 14:34:01 -0600, Paul Hartman wrote:
>
>> Thanks for the info everyone, but do you understand the agony I am now
>> suffering at the fact that all the disks in my system (including all parts
>> of my RAID5) start on sector 63 and I don't have sufficient
>> free space (or free time) to repartition them?
>
> With the RAID, you could fail one disk, repartition, re-add it, rinse and
> repeat. But that doesn't take care of the time issue.

I will admit that if a drive fails I will have to google for the
instructions to proceed from there. When I first set it up, I read the
info, but since I never had to use it I've completely forgotten the
specifics. And in hindsight I should have labeled the disks so I know
more easily which one failed (when one fails). Next time, I'll do it
right.

>> I am really curious
>> if there are any gains to be made on my own system...
>
> Me too, so post back after you've done it ;-)

I have dmcrypt on top of the (software) RAID5, so speed is not so
much of an issue in this case, but reducing physical wear & tear on
the disks would always be a good thing. Maybe someday if I am brave I
will try it... but probably not until I made a full backup, just in
case.
 
