Old 05-13-2012, 03:45 PM
aurfalien
 
True bond howto for Centos 6

Hi all,

Read many posts on the subject.

Using 802.3ad.

A few problems:
Cannot ping some hosts on the network, though they are all up.
Cannot resolve via DNS (the DNS server is one of the hosts I cannot ping); internal and external DNS hosts both fail.
Unplugging the NICs and plugging them back in then prevents pinging the default gateway.

When cold booting it somewhat works, some hosts are pingable while others are not.

When restarting the network service via /etc/init.d/network, nothing is pingable.

Here are my configs:

ifcfg-bond0
DEVICE=bond0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.10
NETMASK=255.255.0.0
NETWORK=10.0.0.0
TYPE=Unknown
IPV6INIT=no

ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

/etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=5 miimon=100
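Note: in the Linux bonding driver, 802.3ad is mode=4; mode=5 is balance-tlb, which does not negotiate LACP with a switch. On CentOS 6 the driver options belong in BONDING_OPTS in the ifcfg file rather than in modprobe.d. A sketch of ifcfg-bond0 in that style, reusing the addresses above:

ifcfg-bond0 (CentOS 6 style, sketch)
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
IPADDR=10.0.0.10
NETMASK=255.255.0.0
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"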

Bonding worked great in CentOS 5.x, not so well for me in CentOS 6.2.

My goal is to get this working under bridging for KVM; I can only imagine the nightmare, seeing as I can't get a simple bond to work!

Any guidance is golden.

- aurf

_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
 
Old 05-13-2012, 04:23 PM
bob
 
True bond howto for Centos 6

On 5/13/2012 11:45 AM, aurfalien wrote:

I spent two months on bonding two NICs inside a box to a bridge in the same box.
There is a bug about it, very prominent in the Fedora bugzillas.
You cannot do it without some modification. Libvirt loses some VMs; no
way to make it work that I know of, except for those suggested changes, which I did not try.

If you look in your libvirt logs you will see XML bond errors... and
thus it's impossible to do inside of the box.
This only applies if the bonded NICs and bridge are all in the same box.

Also, the options should no longer go in bonding.conf, but in the bridge
file itself.

In all my testing, all VMs worked except the one assigned vnet0; that
one always got 'lost'...
However, any attempt by the VM to send a signal outside to the net
would cause it to be found again.

This bug is not fixed in 6 or in the latest Fedora when I last
checked... there are self-made patches in the Fedora bugzilla, though.
 
Old 05-13-2012, 04:30 PM
aurfalien
 
True bond howto for Centos 6

On May 13, 2012, at 12:23 PM, bob wrote:


Hi Bob,

WOW, ok cool.

I will simply do 2 bridges and allocate some of my guests to each, for a sort of manual load balancing.

Good info to know.

Really appreciate the feedback.

- aurf
 
Old 05-13-2012, 04:54 PM
bob
 
True bond howto for Centos 6

On 5/13/2012 12:30 PM, aurfalien wrote:

And don't forget, a lot of the bond modes require a second device like a
switch... I think 0 and 6 were the ones I tried.

I really wanted the bond, but decided to burn some IPs and just use
extra bridges to try to balance.
It was not fun at all finding this bug... lol

I think the reason they do not want to get into it is that almost everyone
uses bonds/bridges outside of a single server, where it is not an
issue... and to rewrite and debug all that for the few of us crazy
enough to do such a single-box bridge+bond, well, we ain't gonna see that,
according to the bugzilla responses I saw... still, one can hope.
 
Old 05-13-2012, 05:04 PM
aurfalien
 
True bond howto for Centos 6

On May 13, 2012, at 12:54 PM, bob wrote:

That's a real bummer.

I don't get why they think this.

I mean, what's so strange about bonding interfaces on a system anyway? Oh well, multiple bridges are fine for me.

Funny, my Blow Leopard (OS X 10.6.8) and Cryin (OS X 10.7) servers trunk fine with my Foundry switch at 802.3ad.

- aurf


 
Old 05-13-2012, 05:16 PM
bob
 
True bond howto for Centos 6

On 5/13/2012 1:04 PM, aurfalien wrote:

From what I gather, it is a problem with libvirt using a bridge that
runs over a bond... on the same machine.
It must be rather involved to fix, and only a few people seem to use that
setup (like you and me).
 
Old 05-13-2012, 06:27 PM
Mikael Fridh
 
True bond howto for Centos 6

On Sun, May 13, 2012 at 5:45 PM, aurfalien <aurfalien@gmail.com> wrote:

Note, I'm speaking about bonding only, not bridging, here:

These days, bonding options are supposed to go in the network-script files,
not modprobe.conf:
# ifcfg-bond0:
DEVICE=bond0
IPADDR=10.0.0.6
NETMASK=255.255.255.0
#NETWORK=
#BROADCAST=
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=active-backup primary=em1 arp_interval=2000 arp_ip_target=10.0.0.1 arp_validate=all num_grat_arp=12 primary_reselect=failure"

Adjust accordingly.

--
Mikael.
 
Old 05-13-2012, 06:43 PM
Digimer
 
True bond howto for Centos 6

On 05/13/2012 11:45 AM, aurfalien wrote:

I run KVM VMs, built and managed using libvirt, through bonded
interfaces all the time.

I don't have a specific tutorial for this, but I cover all the steps to
build a mode=1 (Active/Passive) bond and then routing VMs through it as
part of a larger tutorial.

Here are the specific sections I think will help you:

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Network

This section covers building 3 bonds, of which you only need one. In the
tutorial, you only need to care about the "IFN" bond and bridge (bond2 +
vbr2).

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Provisioning_vm0001-dev

This covers all the steps used in the 'virt-install' call to provision
the VMs, which includes telling them to use the bridge.
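In ifcfg terms, the bond-under-bridge layout the tutorial describes looks roughly like this (a sketch; the device names follow the tutorial, the addresses are made up):

ifcfg-bond2
DEVICE=bond2
BOOTPROTO=none
ONBOOT=yes
BRIDGE=vbr2
BONDING_OPTS="mode=1 miimon=100"

ifcfg-vbr2
DEVICE=vbr2
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.20
NETMASK=255.255.0.0

The bond itself carries no IP address; the bridge holds the host's IP, and the guests' vnet interfaces attach to vbr2.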

Hope that helps.

--
Digimer
Papers and Projects: https://alteeve.com
 
Old 05-13-2012, 06:44 PM
aurfalien
 
True bond howto for Centos 6

On May 13, 2012, at 2:27 PM, Mikael Fridh wrote:



Hi Mikael,

I didn't put them in the .conf, as it's deprecated in CentOS 6.

However, I did move the miimon etc. lines to my network-scripts file, and still no dice.

I didn't try your suggestions, as it looks too much like a patch; not very clean like it used to be in version 5.

So I had basically done what you suggested, but without the arp lines.

I'm in no hurry for this, although I will keep your suggestions in my notes, as I may have an upcoming CentOS 6 server that absolutely needs bonding.

Thanks for the reply.

- aurf
 
Old 05-13-2012, 07:42 PM
Jerry Franz
 
True bond howto for Centos 6

On 05/13/2012 10:16 AM, bob wrote:

I've been running 14 CentOS 5 VMs bridged over active-backup bonded
interfaces (actually, over three sets of bonded interfaces) on a single
Ubuntu 10.04-LTS KVM host for a couple of years now. The only
real issue I have had is that during a host reboot, the 'thundering herd'
trying to autostart simultaneously sometimes doesn't reliably start all
14 VMs, and I have to manually launch the one or two that fail to
launch.

Also, I had to roll my own shutdown script, because for whatever reason
Ubuntu 10.04 thinks shooting running VMs in the head during a shutdown
is a better approach than waiting for them to shut down properly on request.

--
Benjamin Franz
 
