01-05-2011, 06:37 PM
Kaiting Chen

TU Application - Thomas Hatch

On Wed, Jan 5, 2011 at 2:11 PM, Thomas S Hatch <thatch45@gmail.com> wrote:

> Hijack away, as long as I get the vote.
>
> So the big difference with MooseFS is that it will run on commodity
> hardware and can be set up by a monkey. You don't need dedicated HPC
> hardware, just a bunch of computers with hard drives. This means it
> scales and works well for both small and very large deployments.
> I set it up to test it on just a couple of virtual machines and it ran
> like a dream, and it also runs like a dream on my company's 165 TB setup
> supporting over 12 million files.
>
> There are a lot of other differences, but all in all, MooseFS is much MUCH
> more KISS than Lustre, effectively delivers the same product, is very fast
> for a distributed file system, and is a snap to set up!
> Grab a couple of machines and try it out!
>

Oh God, it's FUSE. One more question then: how does it compare with
GlusterFS, which is also easy to set up, runs on FUSE, and can use commodity
hardware? Thanks, --Kaiting.

--
Kiwis and Limes: http://kaitocracy.blogspot.com/
 
01-05-2011, 06:40 PM
Xyne

TU Application - Thomas Hatch

Thomas S Hatch wrote:

> Hello, ArchLinux TUs
>
> Xyne has agreed to sponsor me as a TU. I am very excited at the potential
> opportunity to become more directly involved with the development of
> ArchLinux.

/snip

I have indeed agreed to sponsor Thomas, so here I am sponsoring.
Let the discussion period begin!
 
01-05-2011, 06:41 PM
Thomas S Hatch

TU Application - Thomas Hatch

Thanks Xyne

On Wed, Jan 5, 2011 at 12:40 PM, Xyne <xyne@archlinux.ca> wrote:

> I have indeed agreed to sponsor Thomas, so here I am sponsoring.
> Let the discussion period begin!
 
01-05-2011, 06:50 PM
Thomas S Hatch

TU Application - Thomas Hatch

On Wed, Jan 5, 2011 at 12:37 PM, Kaiting Chen <kaitocracy@gmail.com> wrote:

> Oh God, it's FUSE. One more question then: how does it compare with
> GlusterFS, which is also easy to set up, runs on FUSE, and can use
> commodity hardware? Thanks, --Kaiting.

The difference is that Gluster is a nightmare!

The problem with Gluster is that the replication is tiered and that there is
no metadata server. The client is effectively the master, which means that
if you connect to Gluster with a misconfigured client you can cause
large-scale data corruption.

Next, since the replication of data is tiered you don't have true
replication: only the Gluster server you connect to to save the data has the
correct data, and if that server goes down the replicas are stale and you
have data corruption.

The Gluster devs actually had to recall Gluster 3.1 because the data
corruption was rampant.

The difference between Gluster and MooseFS is that MooseFS works!

MooseFS also has a cool web frontend.

We were using Gluster and the business cost became catastrophic; picking up
the pieces was a nightmare.

MooseFS saves data to its replication nodes in parallel! MooseFS maintains a
master metalogger, so client connections are agnostic. MooseFS also
replicates its metadata, so you can restore if something happens to the
master.
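
(To make the contrast concrete, here is a rough conceptual sketch in Python
of chained/tiered replication versus parallel fan-out replication. This is
NOT MooseFS or Gluster code; the node names and in-memory "disks" are made
up purely for illustration.)

# Conceptual sketch only -- not MooseFS or Gluster code. It contrasts a
# chained ("tiered") write, where each replica is written one after another,
# with a parallel fan-out write, where every replica is written at once.
from concurrent.futures import ThreadPoolExecutor

NODES = {"chunk1": {}, "chunk2": {}, "chunk3": {}}  # hypothetical chunk servers

def store(node, name, data):
    """Pretend to persist one chunk on one node."""
    NODES[node][name] = data

def chained_write(name, data, chain):
    # Tiered style: replicas are written down the chain in order.
    # If the chain is interrupted, the later replicas are left stale.
    for node in chain:
        store(node, name, data)

def parallel_write(name, data, replicas):
    # Fan-out style: all replicas are written concurrently, so there is no
    # single "first" server that alone holds the newest copy.
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        list(pool.map(lambda n: store(n, name, data), replicas))

if __name__ == "__main__":
    chained_write("chunk-0001", b"hello", ["chunk1", "chunk2", "chunk3"])
    parallel_write("chunk-0002", b"world", ["chunk1", "chunk2", "chunk3"])
    print({node: sorted(files) for node, files in NODES.items()})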

I take it you don't like FUSE? EVERYBODY is doing it.

I am looking forward to Ceph, which does not require FUSE, but I don't think
it is going to be production-ready for at least a year, and MooseFS easily
competes with Ceph IMHO.

If there are GlusterFS devs in the room, please disregard the previous rant.


-Tom
 
01-05-2011, 07:35 PM
Kaiting Chen

TU Application - Thomas Hatch

On Wed, Jan 5, 2011 at 2:50 PM, Thomas S Hatch <thatch45@gmail.com> wrote:

> The difference is that Gluster is a nightmare!
>
> /snip

Thanks for the very thorough answer. And yes, I hate the idea of a filesystem
in userspace. Everyone knows that filesystems should be in kernel space!
Mostly it's the fact that, in my opinion, bypassing the kernel's caching
mechanism is entirely impractical for a high-performance FS. Feel free to
correct me if I'm wrong.
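
(As a rough illustration of what the kernel's page cache buys you, here is a
small, hypothetical Python sketch that times a read of the same file before
and after it is resident in the cache; os.posix_fadvise is used to drop the
file's pages first. The file path and size are made up, exact numbers will
vary by machine, the eviction is only advisory, and this is not a FUSE
benchmark.)

# Rough page-cache illustration (Linux, Python 3.3+). Not a FUSE benchmark;
# it only shows that re-reads served from the kernel page cache are much
# faster than reads that have to go back to the device.
import os
import time

PATH = "/tmp/pagecache_demo.bin"  # hypothetical scratch file
SIZE = 256 * 1024 * 1024          # 256 MiB

def timed_read():
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return time.time() - start

# Create the test file and make sure it actually hits the disk.
with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))
    f.flush()
    os.fsync(f.fileno())

# Ask the kernel to drop this file's pages from the cache (advisory only).
fd = os.open(PATH, os.O_RDONLY)
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
os.close(fd)

print("uncached read: %.3fs" % timed_read())  # has to hit the device
print("cached read:   %.3fs" % timed_read())  # served from the page cache
os.remove(PATH)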

Anyway, your application looks really good. Good luck! --Kaiting.

--
Kiwis and Limes: http://kaitocracy.blogspot.com/
 
01-05-2011, 07:44 PM
Thomas S Hatch

TU Application - Thomas Hatch

On Wed, Jan 5, 2011 at 1:35 PM, Kaiting Chen <kaitocracy@gmail.com> wrote:

> Thanks for the very thorough answer. And yes, I hate the idea of a
> filesystem in userspace.
>
> /snip

Thanks Kaiting!

I was looking back at my post and worrying that it was too much of a rant.
I spent a month full-time testing and trying distributed filesystems, and my
conclusion was that MooseFS is the best one out there.

I agree that FUSE is something that should be used with caution, and that a
filesystem does belong in kernel space. But in the case of a distributed
filesystem I thought that the benefits of FUSE in allowing for higher
flexibility made it a permissible option. All in all, I think that FUSE for
network filesystems can be a huge advantage; on the other hand, I am much
more cautious about local filesystems that operate behind FUSE.

So to sum it up, I feel good about MooseFS and sshfs, but am cautious when
looking at things like zfs-fuse.

With that said, I don't believe in disparaging someone's project on the
grounds of what it is; the fact that people are creating new things and
sharing them is a wonderful thing!

On the other hand, I will speak my mind when someone's project corrupts my
data.
 
01-05-2011, 07:50 PM
Isaac Dupree

TU Application - Thomas Hatch

On 01/05/11 13:55, Thomas S Hatch wrote:

> If you have any questions about MooseFS feel free to ask me, it has been
> an amazing application for my company!


While we're asking, any thoughts on Tahoe-LAFS? It is distributed,
fault-tolerant, AND has quite well-thought-out encryption.
http://tahoe-lafs.org/ Googling suggests to me that it doesn't have its own
FUSE frontend, but it is sometimes combined with sshfs (that's possible
since Tahoe-LAFS provides an SFTP interface, among other interfaces).

-Isaac
 
01-05-2011, 07:54 PM
Thomas S Hatch

TU Application - Thomas Hatch

On Wed, Jan 5, 2011 at 1:50 PM, Isaac Dupree
<ml@isaac.cedarswampstudios.org> wrote:

> While we're asking, any thoughts on Tahoe-LAFS?
>
> /snip

Ah yes, Tahoe. I didn't spend as much time with this one, but it looks
promising! In my tests MooseFS was faster and the failure handling was a bit
better; I would have to really dig into my notes to remember exactly what it
was that turned me off of it.

I remember it being fairly nice though! I will have to play with it some
more!
 
01-05-2011, 08:21 PM
Thomas S Hatch

TU Application - Thomas Hatch

On Wed, Jan 5, 2011 at 1:54 PM, Thomas S Hatch <thatch45@gmail.com> wrote:

> /snip

Oh, it is lower on my list, but I wanted to make SELinux more powerful in
Arch too. I am one of the VERY few who not only knows how to handle SELinux
but also likes to use it.
 
01-05-2011, 08:33 PM
Martin Peres

TU Application - Thomas Hatch

On 05/01/2011 22:21, Thomas S Hatch wrote:

> Oh, it is lower on my list, but I wanted to make SELinux more powerful in
> Arch too. I am one of the VERY few who not only knows how to handle
> SELinux but also likes to use it.

You WHAT? You like to use it? You must be a masochist, then.

I've been working around and on it for two years now, and I wouldn't use it
for any desktop (even though that's what I'm doing at work).

Are you using the targeted mode or the strict one? (I'm always using the
strict mode.)
 
