Old 09-16-2012, 09:48 AM
Martin Steigerwald
 
Storage server

On Saturday, 15 September 2012, Bob Proulx wrote:
> Martin Steigerwald wrote:
> > On Friday, 7 September 2012, Bob Proulx wrote:
> > > Unfortunately I have some recent FUD concerning xfs. I have had
> > > some recent small idle xfs filesystems trigger kernel watchdog
> > > timer ...
> > > due to these lockups. Squeeze. Everything current. But when idle
> > > it would periodically lock up and the only messages in the syslog
> > > and on
> >
> > Squeeze and everything current?
> > No way. At least when using the 2.6.32 default squeeze kernel. It's really
> > old. Did you try with the latest 3.2 squeeze-backports kernel?
>
> But in the future when Debian Jessie is being released I am going
> to be reading then on the mailing list about how old and bad Linux 3.2
> is and how it should not be used because it is too old. How can it be
> really good now when it is going to be really bad in the future when
> supposedly we know more then than we do now? :-)

Your statement reads to me like a complaint about the very nature of
software development. Developers and testers improve software and sometimes
accidentally introduce regressions. That's the nature of the process, it
seems to me.

Yes, by now 2.6.32 is old. It wasn't exactly fresh when Debian Squeeze was
released, and by now it is really old. Regarding XFS, kernel 3.2 contains a
big load of improvements, such as delayed logging for much better metadata
performance, plus further performance work and bug fixes. Some bug fixes may
have been backported via the stable maintainers, but not the improvements
that might play an important role for a storage server setup.

> For my needs Debian Stable is a really very good fit. Much better
> than Testing or Unstable or Backports.

So by all means, use it!

Actually, I didn't even recommend upgrading to Sid. If you read my post
carefully, you can easily see that. I specifically recommended just
upgrading to a squeeze-backports kernel.
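
For reference, pulling that kernel in is roughly this (a sketch; the exact
metapackage name, linux-image-amd64 here, and the architecture flavour are
assumptions to check against the backports archive):

  # /etc/apt/sources.list.d/backports.list
  deb http://backports.debian.org/debian-backports squeeze-backports main

  # then, as root
  apt-get update
  apt-get -t squeeze-backports install linux-image-amd64

After a reboot the box runs the 3.2 backports kernel while the rest of the
system stays plain Squeeze.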

But still, if you do not use XFS, or you use XFS and do not see any issues,
you may well decide to stick with 2.6.32. Your choice.

> Meanwhile I am running Sid on my main desktop machine. I upgrade it
> daily. I report bugs as I find them. I am doing so specifically so I
> can test and find and report bugs. I am very familiar with living on
> Unstable. Good for developers. Not good for production systems.

Then tell that to my production laptop here. It obviously hasn't heard that
Debian Sid is unfit for production use.

My virtual server still runs Squeeze, but I am considering upgrading it to
Wheezy, partly because by the time I upgrade customer systems I want to have
seen Wheezy working nicely for a while.

Sure, this is not the way for everyone. Sure, when using Sid / Wheezy the
occasional bug can happen, and I recommend using apt-listbugs and
apt-listchanges on those systems.
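
Both are a single install away and hook themselves into apt afterwards:

  apt-get install apt-listbugs apt-listchanges

apt-listbugs then warns about release-critical bugs filed against packages
before they are upgraded, and apt-listchanges shows changelog and NEWS
entries for the new versions.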

But I won't sign an all-inclusive "Sid is unfit for production" statement.
If I know how to look up the bug database and how to downgrade packages,
possibly with the help of snapshot.debian.org, then I might decide to use
Sid or Wheezy on some machines, preferably desktops, and be just fine with
it. On servers I am much more reluctant, unless it is my own virtual server,
but even there I am not running Sid.

For people new to Debian, or people unwilling to deal with the occasional
bug, I recommend stable, possibly with a backports kernel in some cases.

So I think we are basically saying almost the same thing, just with
different wording and emphasis.

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-16-2012, 12:38 PM
Martin Steigerwald
 
Storage server

On Friday, 14 September 2012, Stan Hoeppner wrote:
> On 9/14/2012 7:57 AM, Martin Steigerwald wrote:
> > On Friday, 14 September 2012, Stan Hoeppner wrote:
> >> Thus my advice to you is:
> >>
> >> Do not use LVM. Directly format the RAID10 device using the
> >> mkfs.xfs defaults. mkfs.xfs will read the md configuration and
> >> automatically align the filesystem to the stripe width.
> >
> > Just for completeness:
> >
> > It is possible to manually align XFS via mkfs.xfs / mount options.
> > But then that's an extra step that's unnecessary when creating XFS
> > directly on MD.
>
> And not optimal for XFS beginners. But the main reason for avoiding
> LVM is that LVM creates a "slice and dice" mentality among its users,
> and many become too liberal with the carving knife, ending up with a
> filesystem made of sometimes a dozen LVM slivers. Then XFS
> performance suffers due to the resulting inode/extent/free space
> layout.

Agreed.

I have seen VMs with a separate /usr, a minimal /, and mis-estimated sizing.
There was plenty of space in the VMDK, just in the wrong partition. I fixed
it back then by adding another VMDK file. (So I have found such setups even
with plain partitions.)

Splitting off /var/log or /var is a different matter.

But then we are talking about user and not system data here anyway.

I have always recommended leaving at least 10-15% free, but from a
discussion on the XFS mailing list in which you took part, I learned that,
depending on the use case, large volumes might need even more free space
for good long-term performance.
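
(For completeness, the manual alignment mentioned in the quote above would
look roughly like this; the device names and the geometry, a 4-disk RAID10
with 512k chunks and therefore 2 data-bearing disks per stripe, are made up
for illustration:

  # at mkfs time
  mkfs.xfs -d su=512k,sw=2 /dev/vg0/backup

  # or at mount time, in units of 512-byte sectors
  mount -o sunit=1024,swidth=2048 /dev/vg0/backup /srv/backup

When creating XFS directly on the md device none of this is needed;
mkfs.xfs picks the values up by itself.)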

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-16-2012, 12:43 PM
Martin Steigerwald
 
Storage server

Hi Kelly,

On Saturday, 15 September 2012, Kelly Clowers wrote:
> On Fri, Sep 14, 2012 at 2:51 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> > On 9/14/2012 11:29 AM, Kelly Clowers wrote:
> >> On Thu, Sep 13, 2012 at 4:45 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> >>> On 9/13/2012 5:20 AM, Veljko wrote:
> >>>> On Tue, Sep 11, 2012 at 08:34:51AM -0500, Stan Hoeppner wrote:
> >>>>> One of the big reasons (other than cost) that I mentioned this
> >>>>> card is that Adaptec tends to be more forgiving with non RAID
> >>>>> specific (ERC/TLER) drives, and lists your Seagate 3TB drives as
> >>>>> compatible. LSI and other controllers will not work with these
> >>>>> drives due to lack of RAID specific ERC/TLER.
> >>>>
> >>>> That is really valuable information. I wasn't aware that not
> >>>> all drives work with RAID cards.
> >>>
> >>> Consumer hard drives will not work with most RAID cards. As a
> >>> general rule, RAID cards require enterprise SATA drives or SAS
> >>> drives.
> >>
> >> They don't work with real hardware RAID? How weird! Why is that?
> >
> > Surely you're pulling my leg Kelly, and already know the answer.
> >
> > If not, the answer is the ERC/TLER timeout period. Nearly all
> > hardware RAID controllers expect a drive to respond to a command
> > within 10 seconds or less. If the drive must perform error recovery
> > on a sector or group of sectors it must do so within this time
> > limit. If the drive takes longer than this period the controller
> > will flag it as bad and kick it out of the array. The assumption
> > here is that a drive taking that long to respond has a problem and
> > should be replaced.
> >
> > Most consumer drives have no such timeout limit. They will churn
> > forever attempting to recover an unreadable sector. Thus routine
> > errors on consumer drives often get them kicked instantly when used
> > on real RAID controllers.
>
> Why would I be pulling your leg? I have never had opportunity to work
> with real raid cards. Nor have I ever heard anyone say that before.
> The highest end I have used was I believe a Highpoint card, about
> ~$150 range, which was fakeRAID (and I believe the drives
> attached to that were enterprise drives anyway)
>
> Thanks for the info.

Have a look at the material that was linked from another article posted
earlier in this thread.

Especially:

What makes a hard drive enterprise class? (pantz.org, 2010)
http://www.pantz.org/hardware/disks/what_makes_a_hard_drive_enterprise_class.html


The following StorageMojo pieces are also quite interesting:

Everything You Know About Disks Is Wrong, by Robin Harris, 20 February 2007
http://storagemojo.com/2007/02/20/everything-you-know-about-disks-is-wrong/


Open Letter to Seagate, Hitachi GST, EMC, HP, NetApp, IBM and Sun, by Robin Harris, 22 February 2007
http://storagemojo.com/2007/02/22/open-letter-to-seagate-hitachi-gst-emc-hp-netapp-ibm-and-sun/


Google’s Disk Failure Experience, by Robin Harris, 19 February 2007
http://storagemojo.com/2007/02/19/googles-disk-failure-experience/

So enterprise class drives have this configurable error correction timeout.
That said, if you move away from traditional RAID setups you may still very
well get away with using consumer drives, like Google did.
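
On drives that expose SCT ERC, the timeout can be inspected and set with
smartctl (assuming smartmontools is installed and the drive actually
supports the feature; values are in tenths of a second, and many desktop
drives simply report it as unsupported):

  smartctl -l scterc /dev/sda          # show current read/write ERC timeouts
  smartctl -l scterc,70,70 /dev/sda    # limit both to 7 seconds

The setting is usually volatile and lost on a power cycle, so it has to be
reapplied at boot.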

All of that StorageMojo material is from 2007, though. I don't know how much
has changed in the meantime.

Ciao,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-16-2012, 02:35 PM
Stan Hoeppner
 
Storage server

On 9/16/2012 7:38 AM, Martin Steigerwald wrote:

> I have always recommended leaving at least 10-15% free, but from a
> discussion on the XFS mailing list in which you took part, I learned that,
> depending on the use case, large volumes might need even more free space
> for good long-term performance.

And this is due to the allocation group design of XFS. When the filesystem
is used properly, its performance with parallel workloads simply runs
away from all other filesystems. When using LVM in the manner I've been
discussing, the way the OP of this thread wants to use it, you end up
with the following situation and problem:

1. Create a 1TB LV and format it with XFS.
2. XFS creates 4 allocation groups.
3. XFS spreads directories and files fairly evenly over all AGs.
4. When the XFS gets full, you end up with inodes/files/free space
badly fragmented over the 4 AGs, and performance suffers when reading
these back, writing new files, or modifying existing ones.
5. So you expand the LV by 1TB and then grow the XFS over the new space.
6. This operation simply creates 4 new AGs in the new space.
7. New inode/extent creation in these new AGs is fast, and reading back
is also fast.
8. But, here's the kicker: reading the fragmented files from the first
4 AGs is still dog slow, as is modifying metadata in those AGs.
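
To make steps 5 and 6 concrete, the expand/grow dance is nothing more than
the following two commands (illustrative names, assuming the LV is mounted
at /srv/data):

  lvextend -L +1T /dev/vg0/data    # grow the logical volume by another 1TB
  xfs_growfs /srv/data             # grow XFS into it; new AGs are created in the added space

Each round adds fresh, fast AGs while leaving the old, fragmented ones
exactly as they were.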

Thus, the moral of the story is that adding more space to an XFS via LVM
can't fix performance problems that one has created while reaching the
"tank full" marker on the original XFS. The result is fast access to
the new AGs in the new LVM sliver, but slow access to the original 4 AGs
in the first LVM sliver. So as one follows the LVM rinse/repeat growth
strategy, one ends up with slow access to AGs throughout the entire
filesystem. Thus, this method of "slice and dice" expansion for XFS is insane.

This is why XFS subject matter experts and power users do our best to
educate beginners about the aging behavior of XFS. This is why we
strongly recommend that users create one large XFS of the maximum size
they foresee needing in the long term instead of doing the expand/grow
dance with LVM or doing multiple md/RAID reshape operations.

Depending on the nature of the workload, and with careful, considerate,
judicious use of XFS grow operations, it is possible to grow an XFS without
these performance problems. This should be done long before one hits the
~90% full mark; growing before it hits ~70% is much better. But as a general
rule one should still never grow an XFS more than a couple of times if one
wishes to maintain relatively equal performance amongst all AGs.
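
A quick way to see how many AGs a given XFS has, and how large they are, is
xfs_info against the mount point (/srv/data is just a placeholder):

  xfs_info /srv/data    # check the agcount= and agsize= fields in the output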

--
Stan


 
Old 09-16-2012, 03:44 PM
Martin Steigerwald
 
Storage server

On Sunday, 16 September 2012, Stan Hoeppner wrote:
> On 9/16/2012 7:38 AM, Martin Steigerwald wrote:
> > I have always recommended leaving at least 10-15% free, but from a
> > discussion on the XFS mailing list in which you took part, I learned
> > that, depending on the use case, large volumes might need even more
> > free space for good long-term performance.
>
> [Stan's full explanation of XFS allocation group fragmentation and
> LVM-based growth quoted verbatim; snipped here, see his post above.]

Thanks for your elaborate explanation.

I have taken note of this for my Linux performance analysis & tuning trainings.
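
A handy companion for that kind of analysis is the free space histogram
from xfs_db, which shows how chopped up the remaining free space of an aged
filesystem is (read-only, best run against an unmounted or quiesced device;
/dev/vg0/data is a placeholder):

  xfs_db -r -c "freesp -s" /dev/vg0/data

Many tiny free extents and few large ones is exactly the aging effect
described above.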

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-17-2012, 10:31 AM
Veljko
 
Storage server

On Fri, Sep 14, 2012 at 10:48:54AM +0200, Denis Witt wrote:
> I'm currently testing obnam on our external Backup-Server together with
> 6 clients. It's very easy to set up. Restoring could be nicer if you need
> an older version of some file, but it's rather fast, and it is possible
> to restore single files only; you might have to look at several
> versions to find the right one, but this shouldn't take too much time.

I just installed obnam and find its tutorial very sparse. I'm planning
to run obnam from the server only and to pull backups from several clients.

I guess I need to upload a public key to the client machines. Does obnam
have to be installed on both machines, or does it use rsync like rsnapshot?

Regards,
Veljko


 
Old 09-17-2012, 10:31 AM
Veljko
 
Storage server

On Thu, Sep 13, 2012 at 06:24:45PM -0500, Stan Hoeppner wrote:
> Due to its allocation group design, continually growing an XFS
> filesystem in such small increments, with this metadata heavy backup
> workload, will yield very poor performance. Additionally, putting an
> XFS filesystem atop an LV is not recommended, as it cannot properly align
> journal writeout to the underlying RAID stripe width. While this is more
> critical with parity arrays, it also affects non-parity striped arrays.
>
> Thus my advice to you is:
>
> Do not use LVM. Directly format the RAID10 device using the mkfs.xfs
> defaults. mkfs.xfs will read the md configuration and automatically
> align the filesystem to the stripe width.
>
> When the filesystem reaches 85% capacity, add 4 more drives and create
> another RAID10 array. At that point we'll teach you how to create a
> linear device of the two arrays and grow XFS across the 2nd array.

I did what you advised and formatted the RAID10 array using the mkfs.xfs defaults.
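
Which, for the record, is just a single command against the md device
(assuming the array is /dev/md0 here):

  mkfs.xfs /dev/md0    # defaults; stripe unit/width are read from the md layout

mkfs.xfs prints the chosen sunit/swidth values in its summary, so the
alignment can be double-checked there.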

Thanks for your help, Stan.

Regards,
Veljko


 
