Old 02-28-2011, 07:58 AM
Default How do you avoid server/firewall downtime?

On Sun, 27 Feb 2011 23:49:58 -0600
Jason Hsu <jhsu802701@jasonhsu.com> wrote:

> able to refute this. That said, I see that many other people (most
> of whom have more experience than I have) are also having difficulty
> with the upgrade from Lenny to Squeeze. In the ideal world, I can
> always avoid messing up. In the real world, I need to make sure that
> my screw-ups do NOT disrupt the system.

You do it in the same ways that it has always been done.

As you say, your own system is not that valuable, but as you also say,
you can't treat a real business system that way.

First, any computer you manage on behalf of a client *must* be capable
of being restored from a recent backup if the hardware catches fire,
or must be completely expendable, such as a workstation. Not only that,
you must be sure the restore strategy works, and I'm afraid that really
does mean trying it. And that's all the time, not just around an upgrade.
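One concrete way to make "trying it" routine is to script the restore test into the backup job itself, so a backup that cannot be restored fails loudly. A minimal sketch with tar; the paths here are temporary stand-ins, not a real layout:

```shell
# Back up a directory, then immediately restore it elsewhere and
# compare -- the restore test runs every time, not just before upgrades.
set -e
SRC=$(mktemp -d)                     # stands in for /etc, /var/lib, ...
echo "some config" > "$SRC/app.conf"
BACKUP=$(mktemp -u).tar.gz
tar -czf "$BACKUP" -C "$SRC" .
# The step most people skip: actually restore and verify.
RESTORE=$(mktemp -d)
tar -xzf "$BACKUP" -C "$RESTORE"
diff -r "$SRC" "$RESTORE" && echo "restore verified"
```

On a real system the same pattern applies with whatever backup tool is in use: restore to scratch space, compare, and alert on any difference.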

Second, you make sure your own system has everything installed that
your client's does, and you make very sure you know all the issues with
the software before you go near the client's machine. Even then, of
course, it's unlikely you'll have the same hardware, but you should
make a special effort to have a (possibly spare) video card/chipset that
is the same. Nothing slows down a troubleshooting session like the lack
of a display, and that is regrettably common after a major upheaval in a
Linux system (OK, less so now than it was). Well, maybe a failure to
boot (yes, I'm looking at you, grub2).

But all the preparation in the world doesn't guarantee success, so you
do the job during downtime, possibly at the weekend. If you've got
trouble and you're running out of time, you try hard to work out what's
going wrong and then restore from backup. You go away and try to work
out a strategy for getting it right next weekend.

And so on. It's all common sense really. It's the same as Windows
people have been doing for decades, and licensing and 'security' issues
mean they have it much worse than Linux people. It's worse still with
Small Business Server, a heavily customised and limited version of
Windows Server intended to hang on to the business being lost to Linux.
The official, considered advice of SBS MVPs on restoring from backup is
to buy really good hardware in the first place. I'm not joking.


To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Archive: http://lists.debian.org/20110228085847.3133a7a2@jresid.jretrading.com
Old 02-28-2011, 10:09 AM
shawn wilson
Default How do you avoid server/firewall downtime?

On Mon, Feb 28, 2011 at 12:49 AM, Jason Hsu <jhsu802701@jasonhsu.com> wrote:

Joe had good points for the above commentary. i'll add that if you want to make sure you don't need physical access to a machine when there is a software issue, get ipmi when you spec out a server for someone (i prefer proliants, so likewise i like ilo, and it's ~$500 for a remote cd drive with that). however, as long as the network is still up, you'll have access to that box. to make sure you can check out network issues, either get two isps and two cisco boxes and connect the aux ports from one to the other, or get a router with a sim card. then, if you have to actually come in, you'd better run

> Given all this, how do you avoid bringing down your clients' systems?
if you want minimal downtime, look into drbd - you can also google ha or 'high availability'.
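Block-level replication between two boxes, as DRBD does it, is driven by a resource definition. A minimal sketch of one; the hostnames, addresses, and device names are assumptions:

```
# /etc/drbd.d/r0.res -- mirror /dev/sdb1 between two hosts so the
# standby has current data when it takes over.
resource r0 {
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on fw1 { address 192.168.1.1:7789; }
  on fw2 { address 192.168.1.2:7789; }
}
```

Services then use /dev/drbd0 instead of the raw disk, and a cluster manager (or a script) promotes the surviving node to primary on failover.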

> 1. Do you have two or more firewalls/servers running in parallel so that if one goes down, the rest can take over the traffic? My guess is that the larger the company or organization, the more computers you can have running in parallel.

if you want to make sure servers stay up, you have two or more of everything - and i do mean everything. two locations, with both locations being able to be serviced by two power companies and two isps with a good sla. two or more computers at each site for each service you run - each computer has dual power supplies plugged into separate pdus. each pdu is plugged into a separate ups, which is plugged into a separate backup generator, which is plugged into your separate utility. you have multiple multi-port nics with at least two trunks into separate switches going to separate routers for your different isps, with a proper bgp config for each site.
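For the firewalls-in-parallel case specifically, failover between two boxes is commonly done with VRRP, e.g. via keepalived: clients point at one shared address, and whichever box is alive answers for it. A minimal config sketch; the interface name, priorities, and the shared address are assumptions:

```
# keepalived.conf on the primary firewall; the peer runs the same
# block with "state BACKUP" and a lower priority (e.g. 90).
vrrp_instance FW {
    state MASTER
    interface eth0          # assumed LAN-facing interface
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.1/24        # shared gateway IP the clients use
    }
}
```

If the master stops sending advertisements, the backup claims the address within a few seconds, so clients never need reconfiguring.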

but, in order for this colo to really work, you need a big pipe so that your replication doesn't get bogged down.

> 2. Do you have a way to make sure you can quickly restore a server back to the old obsolete-but-still-working setup? Even if there aren't any company files that need to be saved, there's still the need to restore the old setup if necessary. I am taking notes as I proceed to make sure I can restore my setup to a working state. However, reinstalling would take up valuable time. Do you clone the hard drive and save the image file (or whatever it is that stores all of the files and everything else) so that you can quickly restore everything back to the old setup if necessary WITHOUT having to go through the reinstallation process?
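One common answer to this question is a raw disk image with dd: take the image before the upgrade, and restoring it later skips reinstallation entirely. A minimal sketch; the "disk" here is a temporary file standing in for a real /dev/sdX (on real hardware, verify the device with lsblk first -- dd will happily overwrite the wrong disk):

```shell
set -e
# Hypothetical stand-in for the system disk:
DISK=$(mktemp); printf 'mbr+partitions+data' > "$DISK"
IMAGE=$(mktemp -u).img.gz
# Take the image (on a real, possibly-failing disk, add conv=sync,noerror):
dd if="$DISK" bs=64K status=none | gzip -c > "$IMAGE"
# Restoring is the reverse pipe -- no reinstall, no note-following:
RESTORED=$(mktemp)
gunzip -c "$IMAGE" | dd of="$RESTORED" bs=64K status=none
cmp "$DISK" "$RESTORED" && echo "image restores cleanly"
```

The trade-off is that a raw image is all-or-nothing and as large as the disk; file-level backups still matter for anything that changed after the image was taken.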

> 3. Do you have separate computers for each function (firewall/DHCP server, mail server, print server, web server, etc.)? It seems to me that it's easier to maintain things this way, because you only need to restore one function instead of multiple functions per machine. Then again, this means more equipment is needed to have redundancy in all server functions (print, mail, web, firewall, etc.), so maybe this isn't such a great idea.

not really. i set up a gateway/firewall (vyatta or an asa 5505), then i separate all other services depending on security function. ie, if the same people use a mail and print server, i'll probably make that the same computer (or vm). however, if accounting wants to store files on the network, i wouldn't hesitate to give them their own file server and router acls for that.
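The per-department ACL idea might look like this on a Cisco-style router; the subnets, server address, and interface are assumptions for illustration:

```
! Only the accounting subnet (assumed 10.0.20.0/24) may reach the
! accounting file server (assumed 10.0.30.5); other traffic to that
! host is dropped, everything else passes.
access-list 110 permit ip 10.0.20.0 0.0.0.255 host 10.0.30.5
access-list 110 deny   ip any host 10.0.30.5
access-list 110 permit ip any any
interface GigabitEthernet0/1
 ip access-group 110 in
```

This keeps the security boundary in the network rather than in each service, which is the point of grouping machines by security function instead of one box per service.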

> 4. Do your employers/clients give you a spare machine that you can use to practice? I know how to use VirtualBox, but that's not the same thing as a real computer. I know from my old career as an electrical/RF engineer that simulation programs all have underlying assumptions that may be inaccurate and sometimes wildly inaccurate. I know from my recent experience with Debian that certain computers are compatible with a fresh installation of Lenny but not a fresh installation of Squeeze.

if i want to deploy a server, i'll set up a virtual environment and clone a box that does pretty much what i want it to do (if i have one); if not, i'll do a new install. i'll test it out in virtual and do a v2p. done.

there are a few things to note: the clock in any virtual environment isn't as accurate as an actual rtc on a computer - so be wary of using a virtual to track a sine wave, do benchmarking from the guest, or things like that. if you have some proprietary hardware that the machine needs to access, look elsewhere - virtualization isn't for this. this includes direct fibre channel access to a san. if you need direct fc from a machine, you probably don't need to be virtualizing it anyway.

also, remember, virtual environments keep everything in ring 3 (iirc). so, if you have old programs that require direct hardware access, they might fail in a virtual.

