Linux Archive - EXT3 Users
file open -> disk full -> save -> file 0 byte
(http://www.linux-archive.org/ext3-users/436805-file-open-disk-full-save-file-0-byte.html)

Eric Sandeen 10-07-2010 01:41 PM

file open -> disk full -> save -> file 0 byte
 
Ralf Gross wrote:
> Hi,
>
> a user had a file open when the disk ran full. He then saved the file
> and now its size is 0 bytes (ext3). I don't know much more about this,
> but he asked me if there is any chance to get the data in this file
> back?

I'm not sure how that happens; writes to the file should have hit ENOSPC,
and ext3 doesn't even have delalloc to worry about here.

Did the application check the write return value?

(Or maybe they were mmap writes; since ext3 has no page_mkwrite hook, the
data would just get lost, unfortunately...)
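
To illustrate: a minimal shell sketch (the tmpfs mount point and file names
are made up for the example) of how a full filesystem only shows up as a
failed exit status that the saving program has to check:

# Make a deliberately tiny filesystem (needs root) and a file too big for it.
mkdir -p /tmp/fulltest
mount -t tmpfs -o size=1m tmpfs /tmp/fulltest
dd if=/dev/zero of=/tmp/bigfile bs=1M count=4 2>/dev/null

# cp fails part-way with "No space left on device"; the only machine-readable
# warning is its non-zero exit status, which the caller must check.
if ! cp /tmp/bigfile /tmp/fulltest/bigfile.tmp; then
    echo "save failed (disk full?) - original left untouched" >&2
    rm -f /tmp/fulltest/bigfile.tmp
fi

umount /tmp/fulltest
rm -f /tmp/bigfile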

-Eric

> Ralf


Ralf Gross 10-07-2010 01:52 PM

file open -> disk full -> save -> file 0 byte
 
Ralf Gross wrote:
> Hi,
>
> a user had a file open when the disk ran full. He then saved the file
> and now its size is 0 bytes (ext3). I don't know much more about this,
> but he asked me if there is any chance to get the data in this file
> back?

ext3grep /dev/sda6 --restore-file path/to/file

restored only the 0-byte version, but I found something with ext3grep. The user
remembered that the string "static void Associate_cluster" is part of the file.


~ # ext3grep /dev/sda6 --search "static void Associate_cluster"
Running ext3grep version 0.10.1
Number of groups: 53
Minimum / maximum journal block: 932 / 34660
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from 1286405586 = Thu Oct 7 00:53:06 2010
Number of descriptors in journal: 24920; min / max sequence numbers: 63706 / 72291
Blocks containing "static void Associate_cluster": 325515 (allocated) 904535 915577 1428545


I can get some further output with 'ext3grep /dev/sda6 --block 325515'

~ # ext3grep /dev/sda6 --block 325515
Running ext3grep version 0.10.1
No --ls used; implying --print.

Number of groups: 53
Minimum / maximum journal block: 932 / 34660
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from 1286405586 = Thu Oct 7 00:53:06 2010
Number of descriptors in journal: 24920; min / max sequence numbers: 63706 / 72291
Hex dump of block 325515:
0000 | 61 6e 65 4f 66 66 73 65 74 3b 0a 20 20 20 20 73 | aneOffset;. s
0010 | 70 75 72 5f 70 6f 6c 79 5f 6d 65 73 73 2e 63 30 | pur_poly_mess.c0
[....]
0fd0 | 5f 48 6f 73 74 49 66 5f 74 20 2a 68 6f 73 74 49 | _HostIf_t *hostI
0fe0 | 66 2c 20 64 6f 75 62 6c 65 20 2a 56 61 6c 75 65 | f, double *Value
0ff0 | 4c 69 73 74 2c 20 69 6e 74 20 2a 56 61 6c 75 65 | List, int *Value



~ # ext3grep /dev/sda6 --search-inode 325515
Running ext3grep version 0.10.1
Number of groups: 53
Minimum / maximum journal block: 932 / 34660
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from 1286405586 = Thu Oct 7 00:53:06 2010
Number of descriptors in journal: 24920; min / max sequence numbers: 63706 / 72291
Inodes refering to block 325515: 145601


~ # ext3grep /dev/sda6 --inode 145601
Running ext3grep version 0.10.1
No --ls used; implying --print.

Number of groups: 53
Minimum / maximum journal block: 932 / 34660
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from 1286405586 = Thu Oct 7 00:53:06 2010
Number of descriptors in journal: 24920; min / max sequence numbers: 63706 / 72291

Hex dump of inode 145601:
0000 | ed 81 e8 03 2b ae 02 00 61 69 9b 4c ee c2 ad 4c | ....+...ai.L...L
0010 | 0e a8 7e 49 00 00 00 00 e8 03 01 00 60 01 00 00 | ..~I........`...
0020 | 00 00 00 00 00 00 00 00 77 f7 04 00 78 f7 04 00 | ........w...x...
0030 | 79 f7 04 00 7a f7 04 00 7b f7 04 00 7c f7 04 00 | y...z...{...|...
0040 | 7d f7 04 00 7e f7 04 00 7f f7 04 00 80 f7 04 00 | }...~...........
0050 | 81 f7 04 00 82 f7 04 00 83 f7 04 00 00 00 00 00 | ................
0060 | 00 00 00 00 f2 97 92 a7 00 00 00 00 00 00 00 00 | ................
0070 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................

Inode is Allocated
Group: 9
Generation Id: 2811402226
uid / gid: 1000 / 1000
mode: rrwxr-xr-x
size: 175659
num of links: 1
sectors: 352 (--> 1 indirect block).

Inode Times:
Accessed: 1285253473 = Thu Sep 23 16:51:13 2010
File Modified: 1286456046 = Thu Oct 7 14:54:06 2010
Inode Modified: 1233037326 = Tue Jan 27 07:22:06 2009
Deletion time: 0

Direct Blocks: 325495 325496 325497 325498 325499 325500 325501 325502 325503 325504 325505 325506
Indirect Block: 325507
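
A quick sanity check on those numbers, assuming a 4 KiB block size:

# 175659 bytes need ceil(175659 / 4096) = 43 data blocks; together with the
# one indirect block that is 44 * 4096 bytes = 352 sectors of 512 bytes,
# matching the "sectors: 352" line above.
echo $(( (175659 + 4095) / 4096 ))   # 43
echo $(( 44 * 4096 / 512 ))          # 352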



So I know that there is something left of the file, but I don't know how to get
it back.


Ralf


Bodo Thiesen 10-08-2010 01:10 PM

file open -> disk full -> save -> file 0 byte
 
* Ralf Gross <Ralf-Lists@ralfgross.de> wrote:

> ~ # ext3grep /dev/sda6 --inode 145601
> size: 175659
> sectors: 352 (--> 1 indirect block).
> Direct Blocks: 325495 325496 325497 325498 325499 325500 325501 325502 325503 325504 325505 325506
> Indirect Block: 325507
>
> So I know that there is something left of the file, but I don't know how to get
> it back.

*** WARNING *** The following code snippet is meant to explain what you
could do. Please don't stop using your brain. ;)

*** BEGIN SNIPPET ***

#! /bin/sh

DEV=/dev/sda6
BS=4096
# This may be 2048 or 1024 - whatever cluster size your ext2
# file system uses

# Recover the first 12 clusters (the direct clusters)
dd if=$DEV bs=$BS of=/ramfs/restored.data skip=325495 count=12

# Get the indirect cluster
dd if=$DEV bs=$BS of=/ramfs/restored.ind skip=325507 count=1

# And dump its content decimally ...
hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind
# you should get an output like
# 325508 325509 325510 325511
# 325512 [...]
# Check, that the numbers are one bigger than the previous ones.

# Recover the following parts of the file (assuming that the first
# number is 325508 and that there are 5 contiguous numbers).
# The seek=12 comes from the 12 direct blocks already written above.
dd if=$DEV bs=$BS of=/ramfs/restored.data skip=325508 seek=12 count=5

# If there is a jump in the numbers printed by hexdump, continue with
# the next cluster chain (17 = 12 + 5 - it's just the number of clusters
# already written to the file):
dd if=$DEV bs=$BS of=/ramfs/restored.data \
   skip=$whatever_number_comes_now seek=17 count=$length_of_chain

# Repeat the last step until you are done.

*** END SNIPPET ***
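
One way to check the result afterwards, assuming the 4096-byte block size
really applies and that truncate(1) is available:

# The recovered file is a whole number of 4 KiB blocks; trim it to the size
# recorded in the inode, then make sure the known string is really in there.
truncate -s 175659 /ramfs/restored.data
grep -c "static void Associate_cluster" /ramfs/restored.data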

After you are done, check the file and then copy it over to the file
system so your user can continue working on it. And tell that user that
he should stop using the application he was using altogether. Overwriting
a file in place with updated content has not been state of the art for at
least two decades: the old content should be saved to a backup file first,
or the old file should simply be renamed, and every piece of software I
use does it one way or the other. That way your user wouldn't have had
this problem in the first place (just take the backup file and throw away
the last 20 minutes of work - recovery takes longer anyway ...).
Alternatively: think about a proper daily (or even hourly) backup plan.
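
A minimal sketch of one such pattern (write to a temporary file, then
rename it into place; file names are only illustrative):

# Write the new content to a separate file first; only replace the old file
# once the write has fully succeeded.
target=document.txt
tmp="$target.new.$$"
if cp work-in-progress.txt "$tmp"; then
    mv -f "$tmp" "$target"      # rename is atomic within one filesystem
else
    rm -f "$tmp"                # e.g. disk full: the old file is untouched
    echo "save failed - $target left as it was" >&2
fi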

Regards, Bodo


Stephen Samuel 10-08-2010 08:34 PM

file open -> disk full -> save -> file 0 byte
 
A slightly easier way of going through the indirect block...

recovered=12
for i in `hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind` ; do
    if [[ "$i" -ne 0 ]] ; then
        # Note: the output file must be restored.data, not restored.ind.
        dd if=$DEV bs=$BS of=/ramfs/restored.data skip=$i seek=$((recovered++)) count=1
    fi
done

However, if the inode in question still exists, then I'd be inclined to
suggest that you mount the filesystem (read-only, preferably) and then
hunt for the inode... let the filesystem do the heavy lifting for you.

find /mount/recovered -inum 145601 -print

or, better yet:

cp ` find /mount/recovered -inum 145601 -print` recovered-file
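
Equivalent, and safer if the path contains spaces:

find /mount/recovered -inum 145601 -exec cp {} recovered-file \;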



--
Stephen Samuel   http://www.bcgreen.com      Software, like love,
778-861-7641                                 grows when you give it away



Bodo Thiesen 10-08-2010 10:28 PM

file open -> disk full -> save -> file 0 byte
 
* Stephen Samuel <samuel@bcgreen.com> wrote:

> A slightly easier way of going through the indirect block...
> recovered=12
> for i in `hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind` ; do
>     if [[ "$i" -ne 0 ]] ; then
>         dd if=$DEV bs=$BS of=/ramfs/restored.data skip=$i seek=$((recovered++)) count=1
>     fi
> done

;)

> However, if the inode in question still exists,

No it doesn't. Ralf used a tool called ext3grep, which greps through the
journal to find old versions of the data in question.

> then I'd be inclined to suggest that you mount the filesystem
> (read-only, preferably),

To my knowledge, it is still impossible to mount an ext2 file system that
has the needs_recovery flag set read-only with the ext3 driver, and
because that flag is wrongly marked "incompatible", it's even impossible
to mount it with the ext2 driver. Please NEVER AGAIN suggest to anyone
that they mount -o ro an ext2 filesystem that has a journal when they are
having trouble with that file system.

Regards, Bodo


Ralf Gross 10-18-2010 09:22 AM

file open -> disk full -> save -> file 0 byte
 
Bodo Thiesen wrote:
> * Stephen Samuel <samuel@bcgreen.com> wrote:
>
> > A slightly easier way of going through the indirect block...
> > recovered=12
> > for i in `hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind` ; do
> >     if [[ "$i" -ne 0 ]] ; then
> >         dd if=$DEV bs=$BS of=/ramfs/restored.data skip=$i seek=$((recovered++)) count=1
> >     fi
> > done
>
> ;)
>
> > However, if the inode in question still exists,
>
> No it doesn't. Ralf used a tool called ext3grep, which greps through the
> journal to find old versions of the data in question.
>
> > then I'd be inclined to suggest that you mount the filesystem
> > (read-only, preferably),
>
> To my knowledge, it is still impossible to mount an ext2 file system that
> has the needs_recovery flag set read-only with the ext3 driver, and
> because that flag is wrongly marked "incompatible", it's even impossible
> to mount it with the ext2 driver. Please NEVER AGAIN suggest to anyone
> that they mount -o ro an ext2 filesystem that has a journal when they are
> having trouble with that file system.

Thank you both for your suggestions. The disk with the filesystem is no
longer within reach, so I can't try that. But now I know what to do next
time :)

Ralf


Andreas Dilger 10-19-2010 05:28 AM

file open -> disk full -> save -> file 0 byte
 
On 2010-10-18, at 03:22, Ralf Gross wrote:
> Bodo Thiesen wrote:
>> To my knowledge, it is still impossible to mount an ext2 file system that
>> has the needs_recovery flag set read-only with the ext3 driver, and
>> because that flag is wrongly marked "incompatible", it's even impossible
>> to mount it with the ext2 driver.

Note that the needs_recovery flag was INTENTIONALLY made incompatible, not "wrongly" so. That is because with metadata being written into the journal, there is no guarantee that the filesystem is even consistent when mounted without journal replay. Metadata blocks can be reallocated as data blocks and overwritten by data, based only on changes committed to the journal, and this could result in errors.
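
For what it's worth, whether the flag is currently set on an unmounted
device can be checked with dumpe2fs (device name as in the thread):

# "needs_recovery" appears in the feature list while the journal is dirty.
dumpe2fs -h /dev/sda6 | grep -i 'filesystem features\|filesystem state'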

Cheers, Andreas



Bodo Thiesen 10-20-2010 02:01 AM

file open -> disk full -> save -> file 0 byte
 
* Andreas Dilger <adilger.kernel@dilger.ca> wrote:

>> Bodo Thiesen wrote:
>>> To my knowledge, it is still impossible to mount an ext2 file system that
>>> has the needs_recovery flag set read-only with the ext3 driver, and
>>> because that flag is wrongly marked "incompatible", it's even impossible
>>> to mount it with the ext2 driver.
> Note that the needs_recovery flag was INTENTIONALLY made incompatible, not
> "wrongly" so. That is because with metadata being written into the
> journal, there is no guarantee that the filesystem is even consistent when
> mounted without journal replay. Metadata blocks can be reallocated as
> data blocks and overwritten by data, based only on changes committed to
> the journal, and this could result in errors.

Right ... except ... what is the difference between an ext2 filesystem
without a journal that was not cleanly unmounted and one with a journal
that was not cleanly unmounted (other than the fact that the latter can be
made consistent by replaying the journal in a few seconds)? In particular:
why would it make a difference when mounting it -o ro -t ext2?

Making errors intentionally is not really an excuse for making them.

Regards, Bodo


