12-09-2009, 12:05 PM
Timo Schoeler

XFS and LVM2 (possibly in the scenario of snapshots)

Hi list,

over the last few days there has been a discussion about the stability
of XFS; though I have used XFS heavily myself and haven't run into
issues yet, I'd like to ask something *before* we build our next
generation data storage backend...

Les Mikesell wrote in [0] about issues with the combination of XFS and
LVM -- however, that was discussed in the context of 32-bit kernels.

What I specifically need is to run XFS (or something similar; I am *not*
forced to use XFS, but it has been my preference for some years now, and
I haven't had any issues with it yet) on top of LVM so that I can create
snapshots. We're talking about several file systems of about 4 TiB each.
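
To make it concrete, the workflow I am after is roughly the following
(volume group, volume and mount point names invented for illustration):

  xfs_freeze -f /srv/data                         # quiesce the file system
  lvcreate -s -L 50G -n data-snap /dev/vg0/data   # take the snapshot
  xfs_freeze -u /srv/data                         # resume normal operation
  mount -o ro,nouuid /dev/vg0/data-snap /mnt/snap # XFS needs nouuid here,
                                                  # the snapshot keeps the
                                                  # same UUID as the origin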

Elsewhere [1] I read that there are issues with that combination.

Can anyone shed some light on this? It would be much appreciated.

Regards,

Timo

[0] -- http://lists.centos.org/pipermail/centos/2009-December/086850.html

[1] -- http://www.paragon-cs.com/wordpress/?p=67
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
 
12-09-2009, 02:29 PM
Ross Walker

XFS and LVM2 (possibly in the scenario of snapshots)

On Dec 9, 2009, at 8:05 AM, Timo Schoeler
<timo.schoeler@riscworks.net> wrote:

> [...]
>
> What I specifically need is to run XFS (or something similar; I am *not*
> forced to use XFS, but it has been my preference for some years now, and
> I haven't had any issues with it yet) on top of LVM so that I can create
> snapshots. We're talking about several file systems of about 4 TiB each.
>
> Elsewhere [1] I read that there are issues with that combination.
>
> Can anyone shed some light on this? It would be much appreciated.

There is no problem if it is done on x86_64 with its 8k kernel stacks,
but on i386 with its 4k kernel stacks you could run into a stack
overflow when running it on top of stacked block devices (md raid,
lvm, drbd, etc.).

Also, since the current LVM on CentOS doesn't support barriers (the
next release will, I believe), journalling isn't safe on LVM unless you
are using a storage controller with a BBU write-back cache.

I have heard, though, that the current implementation of barriers isn't
very performant and doesn't take controllers with BBU cache into
consideration, so most people will end up mounting with nobarrier,
which just means they are in the same boat as they are now. Better make
sure your machine is bulletproof, as a power outage or a kernel panic
can spell disaster for XFS (or any other file system, really).
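
If you want to check what the kernel actually decided, something along
these lines (the mount point is just an example):

  mount -o remount,nobarrier /data   # only sane behind BBU write-back cache
  dmesg | grep -i barrier            # XFS logs a "Disabling barriers"
                                     # message when the device underneath
                                     # (e.g. LVM) rejects them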

It is better to invest in a good hardware RAID controller until the
whole barriers stuff is ironed out. It should really perform better
than it does.

-Ross

 
12-09-2009, 02:39 PM
Timo Schoeler

XFS and LVM2 (possibly in the scenario of snapshots)

thus Ross Walker spake:
> On Dec 9, 2009, at 8:05 AM, Timo Schoeler
> <timo.schoeler@riscworks.net> wrote:
>
>> [...]
>
> There is no problem if it is done on x86_64 with its 8k kernel stacks,
> but on i386 with its 4k kernel stacks you could run into a stack
> overflow when running it on top of stacked block devices (md raid,
> lvm, drbd, etc.).
>
> Also, since the current LVM on CentOS doesn't support barriers (the
> next release will, I believe), journalling isn't safe on LVM unless you
> are using a storage controller with a BBU write-back cache.
>
> I have heard, though, that the current implementation of barriers isn't
> very performant and doesn't take controllers with BBU cache into
> consideration, so most people will end up mounting with nobarrier,
> which just means they are in the same boat as they are now. Better make
> sure your machine is bulletproof, as a power outage or a kernel panic
> can spell disaster for XFS (or any other file system, really).
>
> It is better to invest in a good hardware RAID controller until the
> whole barriers stuff is ironed out. It should really perform better
> than it does.

Thanks for your detailed explanation, that really clears things up;
however, I was intending to build a software RAID10, as we have had
rather bad experiences with hardware RAID controllers in the past (all
kinds of phenomena).

Would barriers still be a problem then?
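
For reference, the kind of setup I mean (disk names invented):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1
  mkfs.xfs /dev/md0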

Timo

 
12-09-2009, 10:52 PM
Ross Walker

XFS and LVM2 (possibly in the scenario of snapshots)

On Dec 9, 2009, at 10:39 AM, Timo Schoeler
<timo.schoeler@riscworks.net> wrote:

> thus Ross Walker spake:
>> [...]
>
> Thanks for your detailed explanation, that really clears things up;
> however, I was intending to build a software RAID10, as we have had
> rather bad experiences with hardware RAID controllers in the past (all
> kinds of phenomena).
>
> Would barriers still be a problem then?

So long as LVM isn't involved it will use barriers, but I can tell you
you will be less than impressed by the performance.

Go for a good hardware RAID solution with BBU write-back cache; look to
spend $350-$700 and get one that supports both SAS and SATA. I like the
LSI MegaRAID cards with 512MB of battery-backed cache.

Some cards allow you to run in JBOD mode with battery-backed write-back
cache enabled, so if you really want software RAID you can run it and
still have fast, safe performance (though you spread the cache a little
thin across that many logical units).

-Ross

 
12-10-2009, 05:54 AM
Timo Schoeler

XFS and LVM2 (possibly in the scenario of snapshots)

thus Ross Walker spake:
> On Dec 9, 2009, at 10:39 AM, Timo Schoeler
> <timo.schoeler@riscworks.net> wrote:
>
>> [...]
>>
>> Would barriers still be a problem then?
>
> So long as LVM isn't involved it will use barriers, but I can tell you
> you will be less than impressed by the performance.
>
> Go for a good hardware RAID solution with BBU write-back cache; look to
> spend $350-$700 and get one that supports both SAS and SATA. I like the
> LSI MegaRAID cards with 512MB of battery-backed cache.
>
> Some cards allow you to run in JBOD mode with battery-backed write-back
> cache enabled, so if you really want software RAID you can run it and
> still have fast, safe performance (though you spread the cache a little
> thin across that many logical units).

Thanks for your email, Ross. So, reading all the stuff here, I'm really
concerned about moving all our data to such a system. The reason we're
moving is mainly, but not only, the longish fsck that UFS (FreeBSD)
needs after a crash. XFS seemed to fit perfectly, as I never had issues
with fsck there. However, this discussion seems to be changing my mind.
So, what would be an alternative (if possible without hardware RAID
controllers, as already mentioned)? ext3 is not; there we have long
fsck runs, too. Even ext4 seems not too good in this area...

Timo
 
12-10-2009, 06:35 AM
Christopher Chan

XFS and LVM2 (possibly in the scenario of snapshots)

> Thanks for your email, Ross. So, reading all the stuff here, I'm really
> concerned about moving all our data to such a system. The reason we're
> moving is mainly, but not only, the longish fsck that UFS (FreeBSD)
> needs after a crash. XFS seemed to fit perfectly, as I never had issues
> with fsck there. However, this discussion seems to be changing my mind.
> So, what would be an alternative (if possible without hardware RAID
> controllers, as already mentioned)? ext3 is not; there we have long
> fsck runs, too. Even ext4 seems not too good in this area...

I thought 3ware would have been good. Their cards have been praised for
quite some time...have things changed? What about Adaptec?
 
12-10-2009, 06:40 AM
Timo Schoeler

XFS and LVM2 (possibly in the scenario of snapshots)

thus Christopher Chan spake:
>> [...]
>
> I thought 3ware would have been good. Their cards have been praised for
> quite some time...have things changed? What about Adaptec?

Well, for me the recommended LSI is okay, as it's my favorite vendor,
too. I abandoned Adaptec quite a while ago, and my opinion was confirmed
when the OpenBSD vs. Adaptec discussion came up. However, the question
of the hardware RAID vendor is entirely independent of the file system
discussion.

I re-read the XFS FAQ on these issues; it seems to me that we have to
set up two machines in the lab, one purely software RAID driven and one
with a hardware RAID controller configured as JBOD, and then benchmark
and stress-test both setups.
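
As a first crude data point I would probably just time synchronous
writes on both boxes, something like (path invented):

  dd if=/dev/zero of=/mnt/test/probe bs=4k count=10000 oflag=dsync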

Timo
 
12-10-2009, 06:52 AM
Christopher Chan

XFS and LVM2 (possibly in the scenario of snapshots)

Timo Schoeler wrote:
> thus Christopher Chan spake:
>> [...]
>
> Well, for me the recommended LSI is okay, as it's my favorite vendor,
> too. I abandoned Adaptec quite a while ago, and my opinion was
> confirmed when the OpenBSD vs. Adaptec discussion came up. However, the
> question of the hardware RAID vendor is entirely independent of the
> file system discussion.

Oh yes, it is. If you use hardware RAID, you do not need barriers and
can afford to turn them off for better performance, or to use LVM for
that matter.

>
> I re-read the XFS FAQ on these issues; it seems to me that we have to
> set up two machines in the lab, one purely software RAID driven and one
> with a hardware RAID controller configured as JBOD, and then benchmark
> and stress-test both setups.

JBOD? You plan to use software RAID with that? Why?!
 
12-10-2009, 08:28 AM
Timo Schoeler

XFS and LVM2 (possibly in the scenario of snapshots)

[off list]

>> [...]
>
> Oh yes, it is. If you use hardware RAID, you do not need barriers and
> can afford to turn them off for better performance, or to use LVM for
> that matter.

Hi, this is off list: could you please explain the LVM vs. barrier
thing to me?

AFAIU, one should turn off the write caches on the disks (in any case)
and -- if there is a BBU-backed RAID controller -- use its cache but
turn off barriers. Where does LVM come into play here? Thanks in
advance!

>> I re-read the XFS FAQ on these issues; it seems to me that we have to
>> set up two machines in the lab, one purely software RAID driven and
>> one with a hardware RAID controller configured as JBOD, and then
>> benchmark and stress-test both setups.
>
> JBOD? You plan to use software RAID with that? Why?!

Mainly due to better manageability and monitoring. Honestly, the
proprietary tools are not the best.

Timo
 
12-10-2009, 09:19 AM
Chan Chung Hang Christopher

XFS and LVM2 (possibly in the scenario of snapshots)

Timo Schoeler wrote:
> [off list]
>
>> [...]
>
> Hi, this is off list: could you please explain the LVM vs. barrier
> thing to me?
>
> AFAIU, one should turn off the write caches on the disks (in any case)
> and -- if there is a BBU-backed RAID controller -- use its cache but
> turn off barriers. Where does LVM come into play here? Thanks in
> advance!

No, barriers are specifically there to allow you to turn on the write
caches on the disks without losing data. Before barriers,
fsync/fdatasync lied: they would return before the data had hit the
platters. With barriers, fsync/fdatasync return only after the data has
hit the platters.

However, the dm layer does not support barriers, so if you care about
your data with LVM and have no BBU cache to rely on, you need to turn
the write caches off.

If you use a hardware RAID card with BBU cache, you can use LVM without
worrying, and if you are not using LVM, you can (should, in the case of
XFS) turn off barriers.
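
Turning a disk's write cache off is just (device name is an example):

  hdparm -W0 /dev/sda   # disable the drive's write cache
  hdparm -W /dev/sda    # query the current setting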
>>> I re-read the XFS FAQ on these issues; it seems to me that we have
>>> to set up two machines in the lab, one purely software RAID driven
>>> and one with a hardware RAID controller configured as JBOD, and then
>>> benchmark and stress-test both setups.
>>
>> JBOD? You plan to use software RAID with that? Why?!
>
> Mainly due to better manageability and monitoring. Honestly, the
> proprietary tools are not the best.

3dm2 for 3ware was pretty decent, whether via http or cli...
 
