Linux Archive > Redhat > EXT3 Users

 
 
 
Old 08-30-2013, 01:48 AM
"Richards, Paul Franklin"
 
Strange fsck.ext3 behavior - infinite loop

Greetings! I need your help, fellow penguins!



Strange behavior with fsck.ext3: how to remove a long orphaned inode list?



After copying data from an old RAID to a new one with rsync, the dump command would not complete because of filesystem errors on the new RAID. So I ran fsck.ext3 with the -y option, and it just ran in an infinite loop, restarting itself and trying to correct the same inodes over and over again; many of the errors were orphaned inodes. I then made a tar backup to tape and reformatted the array with mkfs.ext3. After restoring the tar backup, I got the same errors when I ran fsck.ext3 -f, and fsck.ext3 -y again looped forever trying to correct them. So I formatted it once more and ran fsck immediately afterwards, and it is still detecting corrupted orphaned inode lists. This is on what should be a pristine filesystem with basically no files. Is this a hardware problem with the RAID, a bug somewhere, or just normal behavior?

[root@myhost /]# mkfs.ext3 /dev/sda1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
268435456 inodes, 536868352 blocks
26843417 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
16384 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@myhost /]# fsck
fsck         fsck.cramfs  fsck.ext2    fsck.ext3    fsck.msdos   fsck.vfat
[root@myhost /]# man fsck.ext3
[root@myhost /]# fsck.ext3 -C /root/completion /dev/sda1
e2fsck 1.35 (28-Feb-2004)
/dev/sda1: clean, 11/268435456 files, 8450084/536868352 blocks
[root@myhost /]# fsck.ext3 -f -C /root/completion /dev/sda1
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found.  Fix<y>? yes

Inode 26732609 was part of the orphaned inode list.  FIXED.
Inode 26732609 has imagic flag set.  Clear<y>? yes

Inode 26732610 is in use, but has dtime set.  Fix<y>? yes

Inode 26732611 is in use, but has dtime set.  Fix<y>? yes

Inode 26732611 has imagic flag set.  Clear<y>? yes

Inode 26732612 is in use, but has dtime set.  Fix<y>? yes

Inode 26732613 is in use, but has dtime set.  Fix<y>? yes

Inode 26732613 has imagic flag set.  Clear<y>?

/dev/sda1: e2fsck canceled.

/dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****

[root@myhost /]# fsck.ext3 -f /dev/sda1
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Inode 26732609 is in use, but has dtime set.  Fix<y>?
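One way I thought of to separate the filesystem tools from the RAID is to repeat the same format-then-force-check cycle on a small file-backed image sitting on a known-good disk. A minimal sketch (it assumes e2fsprogs is installed; the image path and size are arbitrary):

```shell
# Repeat the "mkfs, then immediately fsck -f" cycle on a scratch image
# instead of the RAID. A clean result here, with the same tools, while
# /dev/sda1 keeps failing would point at the storage rather than e2fsck.
IMG=/tmp/testfs.img
dd if=/dev/zero of="$IMG" bs=1M count=64 2>/dev/null   # 64 MB scratch image
mke2fs -F -q -j "$IMG"    # -j creates the ext3 journal; -F: not a block device
e2fsck -f -n "$IMG"       # -f forces a full check; -n opens read-only
echo "exit status: $?"
```

If the image checks clean every time, that strengthens the case against the new RAID rather than the e2fsprogs version.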






_______________________________________________
Ext3-users mailing list
Ext3-users@redhat.com
https://www.redhat.com/mailman/listinfo/ext3-users
 
Old 08-30-2013, 06:07 PM
Andreas Dilger
 
Strange fsck.ext3 behavior - infinite loop

On 2013-08-29, at 7:48 PM, Richards, Paul Franklin wrote:
> Strange behavior with fsck.ext3: how to remove a long orphaned inode list?
>
> After copying data over from one old RAID to another new RAID with rsync, the dump command would not complete because of filesystem errors on the new RAID. So I ran fsck.ext3 with the -y option and it would just run in an infinite loop restarting itself and then trying to correct the same inodes over and over again. Some of the errors were lots of orphaned inodes. So I ran a tar tape backup and reformatted it with mkfs.ext3. After restoring the tar backup, I got the same errors when I ran fsck.ext3 -f. Again fsck.ext3 -y would run in an infinite loop trying to correct the problem. So I formatted it again and ran fsck immediately afterwards and it's still detecting corrupted orphaned inode lists. This is on what should be a pristine filesystem with basically no files. Is this a hardware problem with the RAID or a bug somewhere or just normal behavior?

Definitely a bug somewhere.

> [root@myhost /]# mkfs.ext3 /dev/sda1
> mke2fs 1.35 (28-Feb-2004)

First thing I would suggest is to update to a newer version of e2fsprogs, since this one is 9+ years old and that is a lot of
water under the bridge.
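Checking what is currently installed takes a moment (a sketch; the distro's package-query command would tell you more, so only the portable part is shown):

```shell
# Print the installed e2fsprogs version; 1.35 in the log above dates
# from Feb 2004, and many orphan-list fixes have landed since then.
if command -v e2fsck >/dev/null 2>&1; then
    e2fsck -V 2>&1 | head -n 1    # -V prints version information and exits
else
    echo "e2fsck not found"
fi
```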

Cheers, Andreas







 
Old 08-30-2013, 06:23 PM
"Theodore Ts'o"
 
Strange fsck.ext3 behavior - infinite loop

On Fri, Aug 30, 2013 at 12:07:22PM -0600, Andreas Dilger wrote:
>
> > [root@myhost /]# mkfs.ext3 /dev/sda1
> > mke2fs 1.35 (28-Feb-2004)
>
> First thing I would suggest is to update to a newer version of e2fsprogs, since this one is 9+ years old and that is a lot of
> water under the bridge.

That's definitely good advice, but even with e2fsprogs 1.35, if e2fsck
-f is finding errors when run immediately after running mke2fs, it
would make me suspect the storage device.

Are you sure the RAID controller (is this a hw raid, or software raid)
is working correctly?
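One way to test that, independent of any filesystem, is a raw write/read-back comparison. A sketch, deliberately pointed at a scratch file here; substituting the real device (e.g. TARGET=/dev/sda1) is destructive and should only be done while its contents are disposable:

```shell
# Write a random pattern, read it back, and compare. A mismatch means
# the device is returning different data than was written to it.
TARGET=/tmp/scratch.img                                  # replace with the device under test
dd if=/dev/urandom of=/tmp/pattern.bin bs=1M count=16 2>/dev/null
dd if=/tmp/pattern.bin of="$TARGET" bs=1M 2>/dev/null    # write the pattern
dd if="$TARGET" of=/tmp/readback.bin bs=1M count=16 2>/dev/null   # read it back
if cmp -s /tmp/pattern.bin /tmp/readback.bin; then
    echo "read-back matches"
else
    echo "MISMATCH: the device is corrupting data"
fi
```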

- Ted

 
Old 09-07-2013, 02:46 AM
"Richards, Paul Franklin"
 
Strange fsck.ext3 behavior - infinite loop

It appears that the RAID has hardware problems as three of the disks are being detected as "unhealthy".

Thank you all for your help!
 
