Linux Archive > Debian > Debian ISP

 
 
 
Old 06-13-2008, 11:40 AM
Steve Kemp
 
Managing disperse servers

On Fri Jun 13, 2008 at 12:06:50 +0100, Keith Edmunds wrote:

> How do others approach the problem of security updates? Up until now,
> we've done this manually with some help from 'cssh' for some servers;
> however, that solution doesn't scale as the number of servers increases.

> We're reluctant to have servers automatically install updates. We're
> looking at CfEngine and Puppet, but I'd be interested in hearing of other
> approaches.

I think you need to choose: either you have automatic updates or
you do it manually, though there is a middle ground where you apply
updates automatically to machines A, B, and C, and then, after you
observe no breakage for a period of time, instruct machines D, E, F...
to update themselves too.

I personally use cron-apt to auto-install security updates, at the
(small) risk of suffering breakage if a security update is borked.
So far that hasn't been a problem, but I accept it is only a matter
of time and bad luck until I get a borked upgrade requiring manual
intervention on 200+ machines!
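For anyone wanting to copy that setup, a minimal cron-apt configuration along these lines might look as follows. This is a hedged sketch rather than Steve's actual config: the separate security-only sources list is a common way to keep unattended runs away from non-security updates, and the paths are cron-apt's Debian defaults.

```
# /etc/apt/security.sources.list -- the security archive only
deb http://security.debian.org/ etch/updates main contrib non-free

# /etc/cron-apt/config -- point cron-apt at that list, mail when it acts
OPTIONS="-o quiet=1 -o Dir::Etc::SourceList=/etc/apt/security.sources.list"
MAILON="upgrade"
```

With this in place the nightly cron-apt run only ever sees security updates, which keeps the blast radius of an unattended upgrade as small as it can be.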

Any time you need manual intervention to apply updates, you run
the risk of forgetting a few machines and having issues.

> I'm also interested in hearing of other techniques for managing multiple,
> mostly-similar (but not identical) systems. We're currently managing about
> 40 such servers, so not a huge number, but we're expecting that number to
> grow and we want to put some tools and techniques in place before we drown
> in trying to manually manage them.

CFEngine is what I use at home and at work. I'd choose Puppet for new
installs, but in the Sarge timeframe it wasn't around (or if it was,
I didn't trust it enough!)

Steve
--
Debian GNU/Linux System Administration
http://www.debian-administration.org/


--
To UNSUBSCRIBE, email to debian-isp-REQUEST@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
 
Old 06-13-2008, 11:40 AM
"Wojciech Ziniewicz"
 
Default Managing disperse servers

2008/6/13 Keith Edmunds <kae@midnighthax.com>:

> I'm also interested in hearing of other techniques for managing multiple,
> mostly-similar (but not identical) systems. We're currently managing about
> 40 such servers, so not a huge number, but we're expecting that number to
> grow and we want to put some tools and techniques in place before we drown
> in trying to manually manage them.

Maybe I'm wrong, but your question should really be about "managing
many servers", because in your case geographic dispersion probably has
no influence on management, from the management system's point of view.

For managing a farm of Debian servers I wrote something like a MySQL
console. Maybe it's not pretty, but it works very well for me.
Every node (server) connects to my master server every X minutes with a
question like "are there any commands I should run?". There is a list
of servers in a table with a format like "id, group, servername,
serverip, command_to_run, locked_state, etc.".
When I schedule updates/commands in my MySQL database, every server
does its own job. It works much like Windows GPO, but with many
advantages.
Of course there are issues, like securing the MySQL traffic.

I think you should try this.
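As a rough illustration of the table Wojciech describes, the backing schema could look something like the following. The column names are guesses reconstructed from his description, not his actual schema, and the extra bookkeeping columns are assumptions:

```sql
-- Hypothetical schema for the polling scheme described above.
CREATE TABLE server_commands (
    id             INT AUTO_INCREMENT PRIMARY KEY,
    grp            VARCHAR(64),          -- "group" is a reserved word in MySQL
    servername     VARCHAR(255) NOT NULL,
    serverip       VARCHAR(45)  NOT NULL,
    command_to_run TEXT,
    locked_state   TINYINT(1) DEFAULT 0, -- 1 while a node is executing it
    done_at        DATETIME              -- NULL until the node reports back
);
```

Each node would then poll with a SELECT filtered on its own servername and locked_state, run what it finds, and update done_at when finished.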

regards.


--
Wojciech Ziniewicz
Unix SEX :{look;gawk;find;sed;talk;grep;touch;finger;find;flex;unzip;
head;tail;mount;workbone;fsck;yes;gasp;fsck;more;yes;yes;eject;umount;
makeclean;zip;split;done;exit:xargs!!}


 
Old 06-13-2008, 11:54 AM
Steve Kemp
 
Managing disperse servers

On Fri Jun 13, 2008 at 13:40:38 +0200, Wojciech Ziniewicz wrote:

> Maybe I'm wrong but your question should be "managing many servers"
> because probably in your case geographic dispersion has no influence
> on management from the point of view of management system.

Agreed.

> Every node (server) connects to my masterserver with question like
> "are there any commands that i should run ? " every X minutes. There
> exists a list of servers with the table format like " id,group,
> servername, serverip, command_to_run, locked_state .. etc ".

> Of course there are issues like securing mysql traffic etc.

Sounds like you've just re-invented a puppet/cfengine solution,
but in a less flexible manner...

There was a time where I had a machine which hosted

https://master.my.flat/global.sh.gpg

Each client node would download that on the hour and execute it if it
existed and the signature matched. (There was also the ability to look
for https://master.my.flat/$hostname.sh.gpg.)

These kinds of home-grown solutions tend to be woefully inflexible
and insecure - and I've seen too many of them in my time. Go with
something proven, reliable, and well-known if you can. The pain of
migration might be high, but you'll really be better off.
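A minimal sketch of that signed-script scheme, for the curious. This is hedged guesswork rather than Steve's actual code, and it assumes the master's signing key (and only that key) is in the client's GPG keyring:

```shell
#!/bin/sh
# Poll the master for a signed script and run it only if the GPG
# signature verifies. URLs follow the scheme described above; the
# keyring setup is an assumption, not part of the original post.
set -eu

fetch_and_run() {
    url=$1
    tmp=$(mktemp)
    # A missing file (e.g. no per-host script published) just means
    # "nothing to do", so a failed download is not an error.
    if ! curl -sSf "$url" -o "$tmp" 2>/dev/null; then
        rm -f "$tmp"
        return 0
    fi
    out=$(mktemp)
    # gpg exits non-zero on a bad or unknown signature, in which case
    # the downloaded script is never executed.
    if gpg --batch --yes --quiet --output "$out" --decrypt "$tmp" 2>/dev/null; then
        sh "$out"
    fi
    rm -f "$tmp" "$out"
}

fetch_and_run "https://master.my.flat/global.sh.gpg"
fetch_and_run "https://master.my.flat/$(hostname).sh.gpg"
```

The inflexibility Steve mentions is visible even here: there's no reporting, no dry-run, and no per-host state beyond "did the signature check".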


Steve
--


 
Old 06-13-2008, 12:10 PM
"Brian Schrock"
 
Managing disperse servers

Just my 2 cents...

Along with Puppet, we have IPMI cards in every server, and we run every server on Xen. This gives us maximum flexibility to remotely manage our geographically dispersed servers.


On Fri, Jun 13, 2008 at 7:06 AM, Keith Edmunds <kae@midnighthax.com> wrote:

> I'd be interested in hearing what others do to manage a number of
> geographically-disperse servers. We currently use Nagios/Munin for
> monitoring, which we're happy with, but system management is more
> challenging.
>
> How do others approach the problem of security updates? Up until now,
> we've done this manually with some help from 'cssh' for some servers;
> however, that solution doesn't scale as the number of servers increases.
> We're reluctant to have servers automatically install updates. We're
> looking at CfEngine and Puppet, but I'd be interested in hearing of other
> approaches.
>
> I'm also interested in hearing of other techniques for managing multiple,
> mostly-similar (but not identical) systems. We're currently managing about
> 40 such servers, so not a huge number, but we're expecting that number to
> grow and we want to put some tools and techniques in place before we drown
> in trying to manually manage them.
>
> Thanks,
> Keith


--
Brian J. Schrock
Systems Engineer, IntegraLink
The Cobalt Group, Inc.
4635 Trueman Blvd, Suite 100
Hilliard, OH 43026
bschrock@integralink.com

www.integralink.com
p. 614.324.7800 ext. 3295
 
Old 06-13-2008, 12:55 PM
"John Keimel"
 
Managing disperse servers

On Fri, Jun 13, 2008 at 7:40 AM, Wojciech Ziniewicz
<wojciech.ziniewicz@gmail.com> wrote:
> 2008/6/13 Keith Edmunds <kae@midnighthax.com>:
>
>> I'm also interested in hearing of other techniques for managing multiple,
>> mostly-similar (but not identical) systems. We're currently managing about
>> 40 such servers, so not a huge number, but we're expecting that number to
>> grow and we want to put some tools and techniques in place before we drown
>> in trying to manually manage them.
>
> Maybe I'm wrong but your question should be "managing many servers"
> because probably in your case geographic dispersion has no influence
> on management from the point of view of management system.
>

I'd disagree.

Geographic location makes a difference to me. I will certainly apply a
patch or a new untested package to a server in the physical data center
that I can touch before I apply it to a server in Philadelphia or
Atlanta, where I have no remote hands and no personnel, and where, if
the patch pooches the machine, I have only one thing to fall back on: a
previous image of the machine. At least now I have previous images of
machines to go back to, which is nice.

But that's me, how I run my servers, and my acceptance of that level
of risk. I am rather risk-averse with my remote servers.

But I do agree with many of the assessments that mixing manual and
automatic updates is tricky and riskier; it's so easy to miss
something on one machine. I've chosen to update manually, but I have
scripts that run 'apt-get update' and 'apt-get -s dist-upgrade' and
send me the output if there are updates to be had. That's my automated
kick in the pants.
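A script along those lines might look something like this - a hedged sketch of the idea, not John's actual script; the use of mail(1) and the recipient address are illustrative assumptions:

```shell
#!/bin/sh
# Simulate a dist-upgrade and mail the result only when packages are
# actually pending; -s means nothing is ever installed by this script.
set -eu

report_pending_upgrades() {
    recipient=$1
    apt-get update -qq
    # Lines starting with "Inst" are the packages apt would upgrade
    pending=$(apt-get -s dist-upgrade | grep '^Inst' || true)
    if [ -n "$pending" ]; then
        printf '%s\n' "$pending" \
            | mail -s "pending updates on $(hostname)" "$recipient"
    fi
}

# Typical use from root's crontab (needs a working MTA):
# report_pending_upgrades admin@example.com
```

Because nothing is installed automatically, the worst a bad update can do here is generate a misleading email.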

$.02

HTH

j


 
Old 06-13-2008, 04:35 PM
Héctor González
 
Managing disperse servers

John Keimel wrote:
> On Fri, Jun 13, 2008 at 7:40 AM, Wojciech Ziniewicz
> <wojciech.ziniewicz@gmail.com> wrote:
>
>> 2008/6/13 Keith Edmunds <kae@midnighthax.com>:
>>
>>
>>> I'm also interested in hearing of other techniques for managing multiple,
>>> mostly-similar (but not identical) systems. We're currently managing about
>>> 40 such servers, so not a huge number, but we're expecting that number to
>>> grow and we want to put some tools and techniques in place before we drown
>>> in trying to manually manage them.
>>>
>> Maybe I'm wrong but your question should be "managing many servers"
>> because probably in your case geographic dispersion has no influence
>> on management from the point of view of management system.
>>
>>
>
> I'd disagree.
>
> Geographic location makes a difference to me. I certainly will apply a
> patch or a new untested package to the server that's in the physical
> data center that I can touch before I apply it to a server that's in
> Philadelphia or Atlanta, where I have no remote hands, no personnel
> and if the patch pooches the machine, have only one thing to resort to
> - a previous image of the machine. At least now I have previous images
> of machines to go back to, which is nice.
>
>
Maybe you need a networked KVM, like this one:
http://www.dlink.com/products/?pid=528
It should lower that risk a lot.



--
Hector Gonzalez
cacho@genac.org
http://www.genac.org


 
Old 06-15-2008, 04:20 PM
Thomas Goirand
 
Managing disperse servers

Keith Edmunds wrote:
> I'd be interested in hearing what others do to manage a number of
> geographically-disperse servers. We currently use Nagios/Munin for
> monitoring, which we're happy with, but system management is more
> challenging.
>
> How do others approach the problem of security updates? Up until now,
> we've done this manually with some help from 'cssh' for some servers;
> however, that solution doesn't scale as the number of servers increases.
> We're reluctant to have servers automatically install updates. We're
> looking at CfEngine and Puppet, but I'd be interested in hearing of other
> approaches.
>
> I'm also interested in hearing of other techniques for managing multiple,
> mostly-similar (but not identical) systems. We're currently managing about
> 40 such servers, so not a huge number, but we're expecting that number to
> grow and we want to put some tools and techniques in place before we drown
> in trying to manually manage them.
>
> Thanks,
> Keith

Hi,

What we do is keep a set of ssh keys and write scripts that run the
commands over ssh. This way we have the output and can watch it on the
console. It's a lot faster to just sit at the console and read what
happens, and we can spot any errors as they occur (a failed package
download, etc.).
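That loop-over-ssh approach might be sketched like this (an assumption-laden illustration, not Thomas's actual scripts; key-based authentication to root is assumed):

```shell
#!/bin/sh
# Run the same command on a list of hosts, serially, so the output for
# each host scrolls past on the console as it happens.
set -eu

run_everywhere() {
    cmd=$1; shift
    for host in "$@"; do
        echo "=== $host ==="
        # BatchMode makes ssh fail fast instead of prompting for a
        # password; -n stops ssh from swallowing the loop's stdin.
        ssh -n -o BatchMode=yes "root@$host" "$cmd" \
            || echo "*** $host FAILED"
    done
}

# Example:
# run_everywhere 'apt-get update && apt-get -s dist-upgrade' web1 web2 db1
```

Serial execution is the point here: you trade speed for being able to read each host's output before the next one starts.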

Thomas


 
Old 06-16-2008, 04:06 PM
Micah Anderson
 
Managing disperse servers

* Steve Kemp <skx@debian.org> [2008-06-13 04:40-0400]:
> On Fri Jun 13, 2008 at 12:06:50 +0100, Keith Edmunds wrote:
>
> > How do others approach the problem of security updates? Up until now,
> > we've done this manually with some help from 'cssh' for some servers;
> > however, that solution doesn't scale as the number of servers increases.
>
> > We're reluctant to have servers automatically install updates. We're
> > looking at CfEngine and Puppet, but I'd be interested in hearing of other
> > approaches.

I think puppet is the way to go; however, there will still be
challenges in figuring out the best way to handle these things.

> I think you need to choose; either you have automatic updates or
> you do it manually, though there is a middle-ground where you could
> apply automatically to machines A, B, and C. Then after you observe
> no breakage for a period of time you could instruct machines D, E, F...,
> to update themselves too.
>
> I personally use cron-apt to auto-install security updates, at the
> (small) risk of suffering breakages if there is a borked security update.
> So far that hasn't been a problem, but I accept it is only a matter
> of time & bad luck until I get a borked upgrade requiring manual
> intervention on 200+ machines!

I thought about using cron-apt to auto-install security updates, but I
didn't want to take the risk of suffering breakage. Kernel-related
reboots in particular scare me, and there have been issues with certain
packages that sometimes require specific things to be done after an
update.

I went with a compromise using puppet. Every system has scheduled
apt-get updates run on it, and every system has apticron and
apt-show-versions installed. The result is that I get an email once a
day with an up-to-date list of packages currently pending an upgrade.

If the package is something I am comfortable with upgrading on all the
machines that have it installed, I go ahead and add it to my puppet
manifest as an upgrade_package definition. This is a definition I
created in puppet which will upgrade the package to the specified
version if it is installed, and otherwise won't (you can also specify
'latest' as the version). Puppet runs every 15 minutes or so on all the
systems, so they query this, determine whether they should run the
upgrade, and do it if they need to[0].

For packages that I am less comfortable with blowing out there, I
decide what to do case by case: sometimes I can do the work in puppet
(in the case of the clamav upgrades, I could write the specific steps
needed to manage the upgrade into puppet), and sometimes I do them
manually (I have a few systems where I need to schedule outages for
kernel security upgrades, and fail-overs need to be initiated before I
reboot them).

micah



0. I have the following puppet definition, which allows me to do the
following:

1. in site.pp:

node somesystem {
    include etch_security_upgrades
}

2. in etch_upgrades.pp:

class etch_security_upgrades {
    upgrade_package {
        "perl":
            version => "5.8.8-7etch1";
        "syslog-ng":
            version => latest;
        "perl-modules":
    }
}

3. Then in components/upgrades.pp:

define upgrade_package ($version = "") {
    case $version {
        # an empty string and 'latest' behave the same way: upgrade to
        # whatever version is currently available
        '', 'latest': {
            exec { "aptitude -y install $name":
                onlyif => [ "grep-status -F Status installed -a -P $name -q",
                            "apt-show-versions -u $name | grep -q upgradeable" ],
            }
        }
        default: {
            exec { "aptitude -y install $name=$version":
                onlyif => [ "grep-status -F Status installed -a -P $name -q",
                            "apt-show-versions -u $name | grep -q upgradeable" ],
            }
        }
    }
}
 
Old 06-16-2008, 05:03 PM
Keith Edmunds
 
Managing disperse servers

Thanks for all the responses; they've all been both interesting and useful.

Keith


 
Old 06-16-2008, 05:22 PM
"Mario Spinthiras"
 
Managing disperse servers

Hello All,

Managing Debian-based servers is a topic that I brushed over in the
past, but I had to let it go, since the time and money spent looking
into a solution were starting to grow and show. But that's not
important right now. What I wanted to point out is that it is only
reasonable to take geographical location into consideration when
managing servers. One good reason is the repositories each machine uses
for its updates: you wouldn't use US repositories for a machine in
Japan, would you? My 2p.
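As a concrete (and hypothetical) illustration of the repository point, a machine in Japan would typically carry a sources.list pointing at a local mirror:

```
# /etc/apt/sources.list on a hypothetical Tokyo box -- a nearby mirror
# for packages, with the security archive still fetched centrally
deb http://ftp.jp.debian.org/debian/ etch main contrib non-free
deb http://security.debian.org/ etch/updates main contrib non-free
```

So a management system that pushes identical apt configuration to every node, regardless of location, is leaving bandwidth and latency on the table.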

PS: If you do find something closer to a "turnkey" solution for
managing servers in a centralized manner, please share its
capabilities, since that would be very interesting. I have started a
project called deb-raptor which offers centralized, managed Debian
packaging (distribution in a centralized manner as its main theme). The
code is still quite immature, but rather functional. More details can
be found on my site below. Thanks for your time.
--
Warm Regards,
Mario A. Spinthiras
Blog: http://www.spinthiras.net
Mail: mspinthiras@gmail.com
Skype: smario125


 
