10-03-2011, 10:33 PM
Eric Sandeen

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>> testing something more real-world (20T ... 500T?) might still be interesting.
>
> Here's my test script:
>
> qemu-img create -f qcow2 test1.img 500T &&
> guestfish -a test1.img \
>     memsize 4096 : run : \
>     part-disk /dev/vda gpt : mkfs ext4 /dev/vda1
>
> The guestfish "mkfs" command translates directly to "mke2fs -t ext4"
> in this case.
>
> 500T: fails with the same error:
>
> /dev/vda1: Cannot create filesystem with requested number of inodes while setting up superblock
>
> By a process of bisection I found that I get the same error for
> all sizes >= 255T.
>
> For 254T, I get:
>
> /dev/vda1: Memory allocation failed while setting up superblock
>
> I wasn't able to give the VM enough memory to make this succeed. I've
> only got 8G on this laptop. Should I need large amounts of memory to
> create these filesystems?
>
> At 100T it doesn't run out of memory, but the man behind the curtain
> starts to show. The underlying qcow2 file grows to several gigs and I
> had to kill it. I need to play with the lazy init features of ext4.
>
> Rich.
>

Bleah. Care to use xfs?

Anyway, interesting; when I tried the larger sizes I got many other problems,
but never the "requested number of inodes" error.

I just created a large sparse file on xfs, and pointed mke2fs at that.
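
(Roughly, that approach is the sketch below; the path, size, and the extra
options are illustrative rather than the exact commands used here.)

  # create a large sparse file on an existing xfs mount and mkfs it
  # (-F because the target is a regular file, not a block device)
  truncate -s 100T /mnt/xfs/ext4-test.img
  mke2fs -F -t ext4 /mnt/xfs/ext4-test.img

  # the "requested number of inodes" failure at ~255T and up is presumably the
  # default inode count overflowing ext4's 32-bit inode counter; a larger
  # bytes-per-inode ratio (-i) or an explicit inode count (-N) is one way
  # around that, e.g.:
  #   mke2fs -F -t ext4 -i 262144 <image>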

But I'm using bleeding-edge git, ~= the latest WIP snapshot (which I haven't
put into rawhide yet, because it doesn't actually build for me w/o a couple
patches I'd like upstream to ACK first).

-Eric
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-03-2011, 10:53 PM
Farkas Levente

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On 10/04/2011 12:33 AM, Eric Sandeen wrote:
> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>> I wasn't able to give the VM enough memory to make this succeed. I've
>> only got 8G on this laptop. Should I need large amounts of memory to
>> create these filesystems?
>>
>> At 100T it doesn't run out of memory, but the man behind the curtain
>> starts to show. The underlying qcow2 file grows to several gigs and I
>> had to kill it. I need to play with the lazy init features of ext4.
>>
>> Rich.
>>
>
> Bleah. Care to use xfs?

Why do we have to use xfs? Really? Does nobody really use large filesystems
on Linux? Does nobody really use RHEL? Why doesn't e2fsprogs get more
upstream support? With 2-3TB disks, the 16TB filesystem limit is really
funny... or not so funny :-(

--
Levente "Si vis pacem para bellum!"
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-03-2011, 11:03 PM
Eric Sandeen

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On 10/3/11 5:53 PM, Farkas Levente wrote:
> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>>> I wasn't able to give the VM enough memory to make this succeed. I've
>>> only got 8G on this laptop. Should I need large amounts of memory to
>>> create these filesystems?
>>>
>>> At 100T it doesn't run out of memory, but the man behind the curtain
>>> starts to show. The underlying qcow2 file grows to several gigs and I
>>> had to kill it. I need to play with the lazy init features of ext4.
>>>
>>> Rich.
>>>
>>
>> Bleah. Care to use xfs?
>
> Why do we have to use xfs? Really? Does nobody really use large filesystems
> on Linux? Does nobody really use RHEL? Why doesn't e2fsprogs get more
> upstream support? With 2-3TB disks, the 16TB filesystem limit is really
> funny... or not so funny :-(

XFS has been proven at this scale on Linux for a very long time, is all.

But, that comment was mostly tongue in cheek.

Large filesystem support for ext4 has languished upstream for a very
long time, and few in the community seemed terribly interested to test it,
either.

It's all fairly late in the game for ext4, but it's finally gaining some
momentum, I hope. At least, the > 16T code is in the main git branch
now, and the next release will pretty well have to have the restriction
lifted. As Richard found, there are sure to be a few rough edges.

Luckily nobody is really talking about deploying ext4 (or XFS for that matter)
at 1024 petabytes.

Testing in the 50T range is probably reasonable now, though pushing at the
boundaries (or maybe stopping well shy of them) is worth doing too.

-Eric
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-04-2011, 06:36 AM
"Richard W.M. Jones"

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On Mon, Oct 03, 2011 at 05:33:47PM -0500, Eric Sandeen wrote:
> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
> > At 100T it doesn't run out of memory, but the man behind the curtain
> > starts to show. The underlying qcow2 file grows to several gigs and I
> > had to kill it. I need to play with the lazy init features of ext4.

Actually this one isn't too bad once I let it run to the finish. The
qcow2 file ends up just 7.8G. I'll try mounting it etc later.
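
(For reference, the lazy-init behaviour is exposed as mke2fs extended
options; a minimal sketch, with the device and image names purely
illustrative:)

  # defer inode-table zeroing to first mount and skip zeroing the journal,
  # so mkfs itself dirties far fewer blocks of the backing image
  mke2fs -t ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/vda1

  # check how much the backing qcow2 file actually grew
  qemu-img info test1.img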

> Bleah. Care to use xfs?

Just playing with ext4 at the limits :-)

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-04-2011, 06:57 AM
"Richard W.M. Jones"

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

100T seems to work for light use.

I can create the filesystem, mount it, write files and directories and
read them back, and fsck doesn't report any problems.

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        99T  129M   94T   1% /sysroot

Linux (none) 3.1.0-0.rc6.git0.3.fc16.x86_64 #1 SMP Fri Sep 16 12:26:22 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

e2fsprogs-1.42-0.3.WIP.0925.fc17.x86_64

qcow2 is very usable as a method for testing at this size.
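
(For anyone reproducing this, the smoke test above amounts to roughly the
following, shown here against a raw sparse file rather than the
qcow2/guestfish setup; names and paths are illustrative:)

  truncate -s 100T big.img
  mke2fs -F -t ext4 big.img

  # loop-mount it, write and read back a file, then force a full fsck
  mkdir -p /mnt/bigtest
  mount -o loop big.img /mnt/bigtest
  echo hello > /mnt/bigtest/testfile
  cat /mnt/bigtest/testfile
  umount /mnt/bigtest
  e2fsck -f big.img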

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-04-2011, 07:09 AM
Farkas Levente

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On 10/04/2011 01:03 AM, Eric Sandeen wrote:
> On 10/3/11 5:53 PM, Farkas Levente wrote:
>> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
>>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>>>> I wasn't able to give the VM enough memory to make this succeed. I've
>>>> only got 8G on this laptop. Should I need large amounts of memory to
>>>> create these filesystems?
>>>>
>>>> At 100T it doesn't run out of memory, but the man behind the curtain
>>>> starts to show. The underlying qcow2 file grows to several gigs and I
>>>> had to kill it. I need to play with the lazy init features of ext4.
>>>>
>>>> Rich.
>>>>
>>>
>>> Bleah. Care to use xfs?
>>
>> Why do we have to use xfs? Really? Does nobody really use large filesystems
>> on Linux? Does nobody really use RHEL? Why doesn't e2fsprogs get more
>> upstream support? With 2-3TB disks, the 16TB filesystem limit is really
>> funny... or not so funny :-(
>
> XFS has been proven at this scale on Linux for a very long time, is all.

Then why does RH NOT support it on 32-bit? There are still systems that
have to run on 32-bit :-(


--
Levente "Si vis pacem para bellum!"
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-04-2011, 07:12 AM
Farkas Levente

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On 10/04/2011 01:03 AM, Eric Sandeen wrote:
> Large filesystem support for ext4 has languished upstream for a very
> long time, and few in the community seemed terribly interested to test it,
> either.

Why? That's what I simply do not understand!?...

--
Levente "Si vis pacem para bellum!"
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-04-2011, 10:34 AM
Josh Boyer

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On Tue, Oct 4, 2011 at 3:09 AM, Farkas Levente <lfarkas@lfarkas.org> wrote:
> On 10/04/2011 01:03 AM, Eric Sandeen wrote:
>> On 10/3/11 5:53 PM, Farkas Levente wrote:
>>> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
>>>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>>>>> I wasn't able to give the VM enough memory to make this succeed. I've
>>>>> only got 8G on this laptop. Should I need large amounts of memory to
>>>>> create these filesystems?
>>>>>
>>>>> At 100T it doesn't run out of memory, but the man behind the curtain
>>>>> starts to show. The underlying qcow2 file grows to several gigs and I
>>>>> had to kill it. I need to play with the lazy init features of ext4.
>>>>>
>>>>> Rich.
>>>>>
>>>>
>>>> Bleah. Care to use xfs?
>>>
>>> Why do we have to use xfs? Really? Does nobody really use large filesystems
>>> on Linux? Does nobody really use RHEL? Why doesn't e2fsprogs get more
>>> upstream support? With 2-3TB disks, the 16TB filesystem limit is really
>>> funny... or not so funny :-(
>>
>> XFS has been proven at this scale on Linux for a very long time, is all.
>
> Then why does RH NOT support it on 32-bit? There are still systems that
> have to run on 32-bit :-(

Then you've come to the right list! We build 32-bit kernels in Fedora, and
they include XFS.

josh
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-04-2011, 10:59 AM
Ric Wheeler

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On 10/04/2011 03:12 AM, Farkas Levente wrote:
> On 10/04/2011 01:03 AM, Eric Sandeen wrote:
>> Large filesystem support for ext4 has languished upstream for a very
>> long time, and few in the community seemed terribly interested to test it,
>> either.
> Why? That's what I simply do not understand!?...
>

Very few users in the Fedora/developer world test anything larger than a few
TB. When I give talks I routinely poll the audience about maximum file system
size, and I almost never see a large number of people testing over 16 TB
(what ext4/ext3 have historically supported). Most big file system users are
in the national labs, bio sciences, etc.

There are also other ways to handle big data these days that pool together
lots of little file systems (Gluster, Ceph, Lustre, HDFS, etc.).

It just takes time and testing to gain confidence; we will get to stability
on ext4 at larger sizes soon enough.

Ric

--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
10-04-2011, 03:30 PM
Eric Sandeen

Heads up: e2fsprogs-1.42-WIP-0702 pushed to rawhide

On 10/4/11 2:09 AM, Farkas Levente wrote:
> On 10/04/2011 01:03 AM, Eric Sandeen wrote:
>> On 10/3/11 5:53 PM, Farkas Levente wrote:
>>> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
>>>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>>>>> I wasn't able to give the VM enough memory to make this succeed. I've
>>>>> only got 8G on this laptop. Should I need large amounts of memory to
>>>>> create these filesystems?
>>>>>
>>>>> At 100T it doesn't run out of memory, but the man behind the curtain
>>>>> starts to show. The underlying qcow2 file grows to several gigs and I
>>>>> had to kill it. I need to play with the lazy init features of ext4.
>>>>>
>>>>> Rich.
>>>>>
>>>>
>>>> Bleah. Care to use xfs?
>>>
>>> Why do we have to use xfs? Really? Does nobody really use large filesystems
>>> on Linux? Does nobody really use RHEL? Why doesn't e2fsprogs get more
>>> upstream support? With 2-3TB disks, the 16TB filesystem limit is really
>>> funny... or not so funny :-(
>>
>> XFS has been proven at this scale on Linux for a very long time, is all.
>
> Then why does RH NOT support it on 32-bit? There are still systems that
> have to run on 32-bit :-(

32-bit machines have a 32-bit index into the page cache; on x86, that limits
us to 16T for XFS, as well. So 32-bit is really not that interesting for
large filesystem use.
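
(The arithmetic behind that figure, assuming the usual 4 KiB x86 page size:)

  2^32 pages * 4096 bytes/page = 2^44 bytes = 16 TiB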

If you need really scalable filesystems, I'd suggest a 64-bit machine.

-Eric
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
