Old 02-09-2010, 02:27 PM
"J. Roeleveld"
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Tuesday 09 February 2010 16:11:14 Stroller wrote:
> On 9 Feb 2010, at 13:57, J. Roeleveld wrote:
> > ...
> > With Raid (NOT striping) you can remove one disk, leaving the Raid-array
> > in a reduced state. Then repartition the disk you removed and re-add the
> > disk to the array.
>
> Exactly. Except the partitions extend, in the same positions, across
> all the disks.
>
> You cannot remove one disk from the array and repartition it, because
> the partition is across the array, not the disk. The single disk,
> removed from a RAID 5 (specified by Paul Hartman) array does not
> contain any partitions, just one stripe of them.
>
> I apologise if I'm misunderstanding something here, or if your RAID
> works differently to mine.
>
> Stroller.
>

Stroller, my understanding is that you use hardware RAID adapters?
If that is the case, then the method mentioned won't work for you, and if your
RAID adapters already align everything properly, you shouldn't notice any
problems with these drives.
It would, however, be interesting to know how hardware RAID adapters handle
these 4KB sector sizes.

I believe Paul Hartman is, like me, using Linux software RAID (mdadm + kernel
drivers).

In that case, you can do either of the following.
Put the whole disk into the RAID, e.g.:
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]
Or create one or more partitions on each disk and use those, e.g.:
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]1
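
To tie this back to the remove-and-repartition cycle mentioned earlier, a
rough sketch of one iteration (device names are only examples; wait for the
resync to complete before touching the next disk):
--
# mdadm /dev/md0 --fail /dev/sdf1 --remove /dev/sdf1
(now repartition /dev/sdf with the new, aligned layout)
# mdadm /dev/md0 --add /dev/sdf1
# cat /proc/mdstat
--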

As far as I know, the partitioning method is required for the Linux RAID
autodetection to work.
For that, I created a single full-disk partition on my drives:
--
# fdisk -l -u /dev/sda

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xda7d8d6d

Device Boot      Start         End      Blocks  Id System
/dev/sda1           64  2930277167  1465138552  fd Linux raid autodetect
--
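
If you want to script that, something like the following should do it with
the old sfdisk sector-units syntax (just a sketch - double-check the device
name first, as it overwrites the partition table). Sector 64 * 512 bytes =
32768 bytes, a multiple of 4096, so the partition is 4K-aligned:
--
# echo '64,,fd' | sfdisk -uS /dev/sda
--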

After reading this, I redid the array with the partition starting at sector
64. Paul was unfortunate enough to have already filled his disks before this
thread appeared.

The downside is that you lose one sector, but the advantage is much improved
performance (or, more precisely, not incurring the performance penalty of
misaligned partitions).

--
Joost Roeleveld
 
Old 02-09-2010, 02:43 PM
Neil Bothwick
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Tue, 9 Feb 2010 15:11:14 +0000, Stroller wrote:

> You cannot remove one disk from the array and repartition it, because
> the partition is across the array, not the disk. The single disk,
> removed from a RAID 5 (specified by Paul Hartman) array does not
> contain any partitions, just one stripe of them.

A 3 disk RAID 5 array can handle one disk failing. Although information
is striped across all three disks, any two are enough to retrieve it.

If this were not the case, it would be called AID 5.


--
Neil Bothwick

Always remember to pillage before you burn.
 
Old 02-09-2010, 04:09 PM
Frank Steinmetzger
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Tuesday, 9 February 2010, Frank Steinmetzger wrote:

> > 4) Everything I've done so far leave me with messages about partition
> > 1 not ending on a cylinder boundary. Googling on that one says don't
> > worry about it. I don't know...

Well, since only the start of a partition determines its alignment with the
hardware sectors, I think it's really not that important. Worst case: mkfs
truncates the last few sectors to make the filesystem a multiple of its
cluster size.

> Anyway, mine's like this, just to throw it into the pot with the others
> (the # comments are added by me to show their respective uses):
>
> eisen # fdisk -l -u /dev/sda
>
> Disk /dev/sda: 500.1 GB, 500107862016 bytes
> 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Disk identifier: 0x80178017
>
> Device Boot      Start        End      Blocks  Id System
> /dev/sda1   *         63   25157789   12578863+   7 HPFS/NTFS  # Windows
> /dev/sda2       25157790   88084394   31463302+   7 HPFS/NTFS  # Games
> /dev/sda3       88084395  127941659   19928632+  83 Linux      # /
> /dev/sda4      127941660  976768064  424413202+   5 Extended
> /dev/sda5      127941723  288816569   80437423+  83 Linux      # /home
> /dev/sda6      288816633  780341309  245762338+  83 Linux      # music
> /dev/sda7      813113973  976703804   81794916   83 Linux      # X-Plane
> /dev/sda8   *  976703868  976768064      32098+  83 Linux      # /boot
> /dev/sda9      780341373  813113909   16386268+   7 HPFS/NTFS  # Win7 test

I have started amending my partitioning scheme, starting at the rear. Since my
backup drive has exactly the same scheme, I'm working on that first and will
then restore my local drive from it, so that I spend as little time in a
LiveCD environment as possible.

I have reset sdb7 to use boundaries divisible by 64:

     range                begin%64  size%64
Old  813113973-976703804  0.8281    0.125
New  813113984-976703935  0         0
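
In case it's useful to anyone, this is roughly how I check all partitions at
once (a quick sketch against the fdisk -l -u column layout shown above; the
boot-flag "*" shifts the columns, hence the extra test):
--
fdisk -l -u /dev/sdb | awk '/^\/dev\// {
    start = $2; end = $3
    if ($2 == "*") { start = $3; end = $4 }
    size = end - start + 1
    printf "%-11s begin%%64=%2d size%%64=%2d\n", $1, start % 64, size % 64
}'
--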

And guess what: the speed of truecrypt at creating a new container doubled.
With the old scheme it started at 13.5 MB/s; now it started at 26-odd. I'm
blaming that cap on the USB connection to the drive, though it's gradually
climbing: after 2/3 of the partition, it's at 27.7.

So sdb7 now ends at sector 976703935. Interestingly, I couldn't use the
immediately following sector for sdb8:

start for sdb8   response by fdisk
976703936        sector already allocated
976703944        Value out of range. First sector... (default 976703999):

The first start fdisk offered me was exactly 64 sectors beyond the end sector
of sdb7 (976703999), which would leave a gap of those mysterious 62 "empty"
sectors in between. So I used 976704000, which is divisible by 64 again,
though it's not that relevant for a partition of 31 MB.
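
(For anyone following along, the shell can do the modulo check directly:)
$ echo $((976704000 % 64)) $((976703999 % 64))
0 63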

As soon as truecrypt is finished, I'm going to solidify my findings by
performing this on another partition, and I'll also see what happens if I
start at a start sector of k*64+1. Just out of curiosity. :-)
--
Gruß | Greetings | Qapla'
Crayons can take you more places than starships. (Guinan)
 
Old 02-09-2010, 04:17 PM
Stroller
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On 9 Feb 2010, at 15:43, Neil Bothwick wrote:
> On Tue, 9 Feb 2010 15:11:14 +0000, Stroller wrote:
>
> > You cannot remove one disk from the array and repartition it, because
> > the partition is across the array, not the disk. The single disk,
> > removed from a RAID 5 (specified by Paul Hartman) array does not
> > contain any partitions, just one stripe of them.
>
> A 3 disk RAID 5 array can handle one disk failing. Although information
> is striped across all three disks, any two are enough to retrieve it.
>
> If this were not the case, it would be called AID 5.


Of course you can REMOVE this disk.

However, in hardware RAID you cannot do anything USEFUL to the single disk.

In hardware RAID it is the controller card which manages the arrays and
consolidates them for the o/s. You attach three drives to a hardware RAID
controller, set up a RAID 5 array, and then the controller exports the array
to the operating system as a block device (e.g. /dev/sda). You then run fdisk
on this virtual disk and create the partitions. You cannot connect just a
partition to a hardware RAID controller.

Thus in hardware RAID there are no partitions on each single disk, only (as I
said before) stripes of the partitions. You cannot usefully repartition a
single hard drive from a hardware RAID set - anything you do to that single
drive will be wiped out when you re-add it to the array and the current state
of the virtual disk is propagated onto it.


I hope this explanation makes sense.

I was not aware that Linux software RAID behaved differently. See
Joost's explanation of 9 February 2010 15:27:32 GMT. I asked if you
were referring to LVM because I set that up several years ago, and it
also allows you to add partitions as PVs. I can see how it would be
useful to add just a partition to a RAID array, and it's great that
you can do this in software RAID.


So this:

On 9 Feb 2010, at 00:27, Neil Bothwick wrote:
> With the RAID, you could fail one disk, repartition, re-add it, rinse and
> repeat. But that doesn't take care of the time issue

only applies in the specific case where Paul Hartman is using Linux
software RAID, not to RAID in general.


Stroller.
 
Old 02-09-2010, 04:38 PM
Stroller
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On 9 Feb 2010, at 15:27, J. Roeleveld wrote:
> On Tuesday 09 February 2010 16:11:14 Stroller wrote:
> > On 9 Feb 2010, at 13:57, J. Roeleveld wrote:
> > > ...
> > > With Raid (NOT striping) you can remove one disk, leaving the
> > > Raid-array in a reduced state. Then repartition the disk you removed
> > > and re-add the disk to the array.
> >
> > Exactly. Except the partitions extend, in the same positions, across
> > all the disks.
> >
> > You cannot remove one disk from the array and repartition it, because
> > the partition is across the array, not the disk. The single disk,
> > removed from a RAID 5 (specified by Paul Hartman) array does not
> > contain any partitions, just one stripe of them.
> >
> > I apologise if I'm misunderstanding something here, or if your RAID
> > works differently to mine.
>
> Stroller, my understanding is that you use hardware RAID adapters?

Yes.

> If that is the case, then the method mentioned won't work for you ...
>
> I believe Paul Hartman is, like me, using Linux software RAID (mdadm +
> kernel drivers).
>
> In that case, you can do either of the following.
> Put the whole disk into the RAID, e.g.:
> mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]
> Or create one or more partitions on each disk and use those, e.g.:
> mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]1

Thank you for identifying the source of this misunderstanding.

> and if your RAID adapters already align everything properly, then
> you shouldn't notice any problems with these drives.
> It would, however, be interesting to know how hardware RAID adapters
> handle these 4KB sector sizes.


I think my adaptor at least, being older, may very well be prone to
this problem. I discussed this in my post of 8 February 2010 19:57:46
GMT - certainly I have a RAID array aligned beginning at sector 63,
and it is at least a little slow. I will test just as soon as I can
afford 3 x 1TB drives.


I think the RAID adaptor would have to be quite "clever" to avoid this
problem. It may be a feature added in newer controllers, but that would be a
special attempt to compensate. I think in the general case the RAID
controller should just consolidate 3 x physical block devices (or more) into
1 x virtual block device, and should not do anything more complicated than
this. I am sure that a misalignment will propagate downwards through the
levels of obfuscation.


IMO this is an fdisk "bug". A feature should be added so that it tries to
align optimally in most circumstances. RAID controllers should not try to do
anything clever to accommodate potential misalignment unless it is really
cheap to do so.


Stroller.
 
Old 02-09-2010, 05:03 PM
Neil Walker
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

Hey guys,

There seems to be a lot of confusion over this RAID thing.

Hardware RAID does not use partitions. The entire drive is used (or,
actually, the amount defined in setting up the array) and all I/O is
handled by the BIOS on the RAID controller. The array appears as a
single drive to the OS and can then be partitioned and formatted like
any other drive.

Software RAID can be created within existing MSDOS-style partitions -
indeed must be if the array is to be bootable.

The OP seems to be doing the latter so the comments about removing a
drive and re-formatting are perfectly valid.

In order not to confuse the matter further, I deliberately left out the
pseudo-hardware controllers on many modern motherboards.


Be lucky,

Neil
http://www.neiljw.com/
 
Old 02-09-2010, 06:29 PM
"J. Roeleveld"
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Tuesday 09 February 2010 19:25:00 Mark Knecht wrote:
> On Tue, Feb 9, 2010 at 9:38 AM, Stroller <stroller@stellar.eclipse.co.uk>
> wrote: <SNIP>
>
> > IMO this is a fdisk "bug". A feature should be added so that it tries to
> > align optimally in most circumstances. RAID controllers should not be
> > trying to do anything clever to accommodate potential misalignment unless
> > it is really cheap to do so.
> >
> > Stroller.
>
> We think alike. I personally wouldn't call it a bug because drives
> with 4K physical sectors are very new, but adding a feature to align
> things better is dead on the right thing to do. It's silly to expect
> every Linux user installing binary distros to have to learn this stuff
> to get good performance.
>
> - Mark
>

I actually agree, although I think the 'best' solution (until someone comes
up with an even better one, that is) would be for the drive to actually be
able to inform the OS (via S.M.A.R.T.?) that it has 4KB sectors.
If fdisk-like programs and RAID cards (OK, with new firmware) then used this
to arrive at sensible settings, that would work.
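
As it happens, recent kernels already expose what the drive reports (paths
from my own system, so treat this as a sketch; and I gather some early 4KB
drives simply claim 512 bytes anyway):
--
$ cat /sys/block/sda/queue/logical_block_size
512
$ cat /sys/block/sda/queue/physical_block_size
512
# hdparm -I /dev/sda | grep -i 'sector size'
--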

If these RAID cards then also passed on the correct settings for the RAID
array for optimal performance (stripe size a multiple of the sector size?)
using the same method, then everyone would end up with better performance.

Now, if anyone has any idea how to get this implemented by the hardware
vendors, I'm quite certain the different tools can be modified to take this
information into account.

And Mark, it's not just people installing binary distros; I think it's
generally people who don't fully understand how hard drives work at the
physical level. I consider myself lucky to have worked with older computers,
where this information was actually necessary just to get the BIOS to
recognize the hard drive.

--
Joost
 
Old 02-09-2010, 06:37 PM
"J. Roeleveld"
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Tuesday 09 February 2010 19:03:39 Neil Walker wrote:
> Hey guys,
>
> There seems to be a lot of confusion over this RAID thing.
>
> Hardware RAID does not use partitions. The entire drive is used (or,
> actually, the amount defined in setting up the array) and all I/O is
> handled by the BIOS on the RAID controller. The array appears as a
> single drive to the OS and can then be partitioned and formatted like
> any other drive.
>
> Software RAID can be created within existing MSDOS-style partitions -
> indeed must be if the array is to be bootable.
>
> The OP seems to be doing the latter so the comments about removing a
> drive and re-formatting are perfectly valid.
>
> In order not to confuse the matter further, I deliberately left out the
> pseudo-hardware controllers on many modern motherboards.

Don't get me started on those.
The reasons I use Linux Software RAID are:
1) I can't afford hardware RAID adapters
2) It's generally faster than hardware fakeraid

--
Joost
 
Old 02-09-2010, 07:30 PM
Neil Bothwick
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Tue, 9 Feb 2010 17:17:48 +0000, Stroller wrote:

> only applies in the specific case where Paul Hartman is using Linux
> software RAID, not to RAID in general.

That's true, although in the Linux world I expect that software RAID users
far outnumber hardware RAID users. Unlike the pseudo-RAID that Windows
usually offers, Linux software RAID is proper RAID, with performance
comparable to all but the most expensive hardware setups.

With hardware RAID, removing and re-adding a disk wouldn't work for this,
just as it wouldn't for software RAID using whole disks. However, using whole
disks with RAID 5 is unlikely unless you have another disk too; otherwise you
wouldn't be able to load the kernel.


--
Neil Bothwick

Top Oxymorons Number 16: Peace force
 
Old 02-09-2010, 08:13 PM
Frank Steinmetzger
 
1-Terabyte drives - 4K sector sizes? -> bar performance so far

On Tuesday, 9 February 2010, Frank Steinmetzger wrote:

> I have reset sdb7 to use boundaries divisible by 64:
>
>      range                begin%64  size%64
> Old  813113973-976703804  0.8281    0.125
> New  813113984-976703935  0         0
>
> And guess what: the speed of truecrypt at creating a new container
> doubled. With the old scheme it started at 13.5 MB/s; now it started at
> 26-odd. I'm blaming that cap on the USB connection to the drive, though
> it's gradually climbing: after 2/3 of the partition, it's at 27.7.

I fear I'll have to correct that a little. The 13.5 figure seems to be
incorrect; in another try it was also shown at the beginning, but then
quickly rose to >20. Also, a buddy just told me that this 4K stuff applies
only to the most recent drives, no more than 5 months old or so.

When I use parted on the drives, it says (for both the old external and my
2-month-old internal):
Sector size (logical/physical): 512B/512B
So no speedup for me then. :-/
--
Gruß | Greetings | Qapla'
Keyboard not connected, press F1 to continue.
 
