Linux Archive (http://www.linux-archive.org/), debian-user:
Recovery from hard drive failure
(http://www.linux-archive.org/debian-user/451164-recovery-hard-drive-failure.html)

Peter Tenenbaum 11-11-2010 03:16 PM

Recovery from hard drive failure
 
Hi everyone -- a few days ago the hard drive in my home Debian system started making unhappy noises, and now it refuses to boot. I discussed the situation with knowledgeable people and they diagnosed that the hard drive had indeed failed and needs replacement.


I have a recent backup of the hard drive which I made using dump, and I have a new hard drive on order. My recovery plan is as follows:

1. Burn a new netinst CD from a recent build (I am running Squeeze, btw)
2. Replace the hard drive
3. Use the netinst CD to set up the filesystem on the new hard drive
4. Recover the backup using restore.

Here's my question: should I allow the netinst CD to install Debian on the new hard drive, given that I plan to use restore to restore everything and would thus overwrite any new installation? I realize that I can probably tune the action of the restore command so that it only restores what I need from the backup and doesn't touch a new OS install; but I think that deciding what needs to be restored and what does not would be complex, time-consuming, and error-prone, so I would rather just restore the whole thing.
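
[For reference, the restore half of a dump/restore recovery typically looks something like the sketch below. The filesystem device, mount point, and dump file path are made-up names for illustration, not taken from the thread.]

    # Minimal sketch of restoring a full (level-0) dump onto a freshly made
    # filesystem. /dev/sda3 and /mnt/backup/root.dump are hypothetical.
    mkfs.ext3 /dev/sda3                   # create the target filesystem
    mount /dev/sda3 /mnt/target           # mount it somewhere convenient
    cd /mnt/target                        # restore -r extracts into the current directory
    restore -rf /mnt/backup/root.dump     # replay the level-0 dump
    rm restoresymtable                    # bookkeeping file; only needed for incrementals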


Any advice you can offer would be welcome.

Thanks in advance,
-PT

Klistvud 11-11-2010 03:47 PM

Recovery from hard drive failure
 
On 11. 11. 2010 at 17:16:17, Peter Tenenbaum wrote:

> I have a recent backup of the hard drive which I made using dump, and I
> have a new hard drive on order. My recovery plan is as follows:
>
> 1. Burn a new netinst CD from a recent build (I am running Squeeze, btw)
> 2. Replace the hard drive
> 3. Use the netinst CD to set up the filesystem on the new hard drive
> 4. Recover the backup using restore.



Wouldn't it be simpler and much faster if you just booted from a live
CD or DVD that supports "dump" and did everything from the live
environment (assuming that is viable)?


--
Cheerio,

Klistvud
http://bufferoverflow.tiddlyspot.com
Certifiable Loonix User #481801
Please reply to the list, not to me.




Peter Tenenbaum 11-11-2010 06:01 PM

Recovery from hard drive failure
 
Klistvud -- excellent, thanks, that is definitely the way to go! I can see that the rescue CD from live.debian.net (actually from cdimage.debian.org/cdimage/squeeze_live_beta1/amd64/iso-hybrid/) contains everything I need, so I'll use that.


Now: in the interim, I've decided to take this opportunity to make my system RAID-1, so that I get an extra level of protection and also so that if I ever have another drive failure I can limp along on the other drive for the few days it will take me to get myself organized to recover. Also, I get to set up a RAID-1 array, which I don't yet know how to do, and learning is fun. As I understand it, the steps I need to take are:


1. Install the new hard drives
2. Boot off the rescue CD
3. Use fdisk and mdadm to set up the 2 drives as a RAID-1 array (see the sketch after this list)
4. Use LVM (or fdisk?) to partition the resulting array (boot, linux, and swap)
5. Recover my backup to the array via the restore command.
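
[As a rough illustration of step 3, building a two-disk RAID-1 array with mdadm generally follows the pattern below. The device names and partition layout are assumptions, and details such as the metadata format may differ on Squeeze; treat this as an outline rather than a recipe.]

    # Hypothetical sketch: mirror the first partitions of two new disks
    # (/dev/sda1 and /dev/sdb1) into /dev/md0; repeat for each partition
    # that should be mirrored (e.g. swap and root).
    fdisk /dev/sda                          # create partitions, type "fd" (Linux raid autodetect)
    sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the partition table to the second disk
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext3 /dev/md0                      # put a filesystem on the mirror
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array so it assembles at boot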

So now, a few new questions:

1. Is the list above generally correct?
2. When I installed Debian back in the summer I let the install script handle the disk partitioning. This time I have to do it manually. What size should I use for the boot and swap partitions?
3. Do I need to manually install and configure grub in order to make the RAID-1 array the boot disk? Again, this was handled for me by the installer script the first time around.
4. What, if anything, do I need to do so that the RAID-1 array is activated at boot time?


Whew! Sorry for the huge stack of questions; any and all help, encouragement, etc. is welcome!

Thanks in advance,
-PT


Mark Allums 11-11-2010 09:43 PM

Recovery from hard drive failure
 
On 11/11/2010 1:01 PM, Peter Tenenbaum wrote:


> 4. Use LVM (or fdisk?) to partition the resulting array (boot, linux,
> and swap)


I personally would partition the /dev/md[0123...] RAID devices, and then definitely use LVM inside the partitions. It gives you an extra level of flexibility to work with during administration.


However, using DOS-style partitions is not necessary if you use LVM, and
in the past, I have been taken to task for recommending it.


In any case, *do* use LVM. You can work magic with it.
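
[To illustrate the layering Mark describes, putting LVM on top of an md array usually looks roughly like the sketch below. The volume group and logical volume names, and the sizes, are made up for the example.]

    # Hypothetical sketch: carve logical volumes out of an existing RAID-1
    # array /dev/md0 with LVM. Names (vg0, root, swap) and sizes are examples.
    pvcreate /dev/md0                       # mark the array as an LVM physical volume
    vgcreate vg0 /dev/md0                   # create a volume group on it
    lvcreate -L 20G -n root vg0             # a 20 GB logical volume for /
    lvcreate -L 2G -n swap vg0              # a 2 GB logical volume for swap
    mkfs.ext3 /dev/vg0/root
    mkswap /dev/vg0/swap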



Peter Tenenbaum 11-12-2010 07:46 PM

Recovery from hard drive failure
 
Hi again --

As I'm studying the situation, my plan for how to do this recovery has evolved a bit. What I'm planning now is the following:

1. Install the new hard drives
2. Boot off the rescue CD
3. Use fdisk to set up one of the drives as the system / boot drive, with 3 DOS-style partitions (boot, swap, and everything else)
4. Install grub in the boot partition
5. Recover my backup to the new system disk via restore
6. Update /etc/fstab to match the configuration I set up in (3) and (4), since I'm not setting up the new hard drives exactly the way the old drive was configured (see the example after this list)
7. Follow the instructions at http://linuxconfig.org/Linux_Software_Raid_1_Setup to incorporate the system disk and the second disk as a RAID-1 array.
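
[To make step 6 concrete, the restored /etc/fstab would need entries along these lines for the three-partition layout in step 3. The device names and filesystem type are assumptions; after step 7 the entries would point at the md devices instead.]

    # Hypothetical /etc/fstab for the interim single-disk layout
    # (sda1 = /boot, sda2 = swap, sda3 = /); adjust devices and fs types as needed.
    /dev/sda3   /       ext3    errors=remount-ro   0   1
    /dev/sda1   /boot   ext3    defaults            0   2
    /dev/sda2   none    swap    sw                  0   0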


If anyone wants to jump in and shout, "No, you fool!" when they see this plan, let me know.

Mark -- I've decided against using LVM because (a) it adds another level of complication to the overall recovery / RAID-ification procedure, which at my low level of expertise I really do not need, and (b) it's not clear to me that LVM offers that much benefit for a relatively simple home system with more hard drive capacity than I really need. Maybe on my next system...


-PT


Bob Proulx 11-12-2010 08:46 PM

Recovery from hard drive failure
 
Peter Tenenbaum wrote:
> 1. Install the new hard drives
> 2. Boot off the rescue CD
> 3. Use fdisk to set up one of the drives as the system / boot drive, with 3
> DOS-style partitions (boot, swap, and everything else)

Because the debian-installer has a nice interface for setting up RAID, I recommend using it to do the heavy lifting. You can always stop once the filesystems are set up and then restore your system from the backup onto them.

Plus, if you want RAID then you probably want swap on RAID too. Otherwise, if a disk fails and it happens to be the disk holding the swap, your system crashes. If swap is on RAID too, the system keeps running. And, conveniently, that keeps both disks identical.

> 4. Install grub in the boot partition

For Lenny remember to install grub on both disks in the RAID. For
Squeeze that has been improved and I think is done automatically. But
for Lenny you will definitely need to manually ensure that grub is on
both drives. Otherwise if your first boot drive fails the second
drive won't have the boot code on it. You can always boot a rescue
disk to recover from that problem if you hit it.

> 5. Recover my backup to the new system disk via restore
> 6. Update /etc/fstab to match the configuration I set up in (3) and (4),
> since I'm not setting up the new hard drives exactly the way that the old
> drive was configured

Also plan to do a dpkg-reconfigure of your old kernel to regenerate
the initrd with the new raid configuration.
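
[A rough sketch of what these two suggestions amount to on the restored system. The kernel package name is only an example; use whatever "dpkg -l 'linux-image-*'" reports as installed.]

    # Minimal sketch, run from the restored system (or a chroot into it).
    grub-install /dev/sda                     # boot code on the first disk
    grub-install /dev/sdb                     # ...and on the second, so either disk can boot
    dpkg-reconfigure linux-image-2.6.32-5-amd64   # example package name; regenerates the
                                                  # initrd so it knows about the md arrays
    # (update-initramfs -u -k all is another way to rebuild the initrd)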

> Mark -- I've decided against using LVM because (a) it adds another level of
> complication to the overall recovery / RAID-ification procedure, which at my
> low level of expertise I really do not need, and (b) it's not clear to me
> that LVM offers that much benefit for a relatively simple home system with
> more hard drive capacity than I really need. Maybe on my next system...

Personally I always set up lvm. It is worth the extra complexity.

Bob

Jochen Schulz 11-12-2010 09:02 PM

Recovery from hard drive failure
 
Peter Tenenbaum:
>
> Mark -- I've decided against using LVM because (a) it adds another level of
> complication to the overall recovery / RAID-ification procedure, which at my
> low level of expertise I really do not need, and (b) it's not clear to me
> that LVM offers that much benefit for a relatively simple home system with
> more hard drive capacity than I really need. Maybe on my next system...

I understand your argument about added complexity, but especially if you
don't know yet how you will use your disk's capacity, LVM is your
friend.

At work today, I prepared a virtual machine for a specific task. When I
realized that I had underestimated one of the filesystem's sizes, I just
needed to follow these steps:

- Add another virtual disk to the system
- Run pvcreate to prepare the disk for LVM
- Run vgextend to add the new space to the appropriate volume group
- Run lvextend to allocate the new space to the volume that was too
small
- Run resize2fs to grow the filesystem

This was a matter of less than ten minutes (including lookup of the
exact syntax of the LVM commands) and it didn't involve any reboots or
even umounts.
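
[In command form, the sequence described above is roughly the following; the new disk, volume group, volume name, and size are invented for the example.]

    # Hypothetical sketch: grow an ext3/ext4 filesystem that lives on LVM
    # after adding a new disk /dev/sdc. vg0/data is a made-up VG/LV pair.
    pvcreate /dev/sdc                       # prepare the new disk for LVM
    vgextend vg0 /dev/sdc                   # add its space to the volume group
    lvextend -L +50G /dev/vg0/data          # give 50 GB more to the undersized volume
    resize2fs /dev/vg0/data                 # grow the filesystem to fill the volume
    # all of this can be done online; no reboot or umount required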

In your case, it gets even easier. Initially, you just need to prepare one partition for LVM and set up logical volumes for your filesystems using the sizes you are sure they will need. If, at some point, you realize you need more space, you just need to run two commands: lvextend and resize2fs (assuming you are using ext[234]).

When you have free space in a volume group, you get another benefit as
well: you can snapshot your filesystems for backup runs or before
actions that you are afraid might damage things.

I consider myself to be a quite proficient Linux user (almost ten years of Debian experience), but I didn't start using LVM until about a year ago. I deeply regret that.

J.
--
When standing at the top of beachy head I find the rocks below very
attractive.
[Agree] [Disagree]
<http://www.slowlydownward.com/NODATA/data_enter2.html>

Peter Tenenbaum 11-14-2010 06:20 PM

Recovery from hard drive failure
 
OK, steps 1-3 went fairly smoothly. Now, however, I'm unable to get grub-install to work. When I do the following:
mount /dev/sda1 /newboot
grub-install '(sd0,0)'

I get the error message:

/usr/sbin/grub-probe: error: cannot find a device for /boot/grub (is /dev mounted?)

I get the same when I do grub-install /dev/sda1. Clearly, I have not done some crucial preparatory step for installing grub from the live rescue CD to the new hard drive's partition. Any suggestions?
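
[For what it's worth, this error usually means grub-probe cannot see the target system's device nodes. The common workaround when installing grub from a live/rescue CD is to bind-mount /dev, /proc, and /sys into the target and chroot before running grub-install. A rough sketch, assuming the restored root is /dev/sda3 and /boot is /dev/sda1; the mount point is arbitrary.]

    # Hypothetical sketch: install grub from a live environment via chroot.
    mount /dev/sda3 /mnt/target
    mount /dev/sda1 /mnt/target/boot
    mount --bind /dev  /mnt/target/dev      # give grub-probe access to device nodes
    mount --bind /proc /mnt/target/proc
    mount --bind /sys  /mnt/target/sys
    chroot /mnt/target grub-install /dev/sda   # install to the disk's MBR, not a partition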

Thanks in advance,
-PT


