01-05-2009, 08:50 PM
Daniel Lezcano

why the pid namespace is not compiled in the kernel?

Hi,

I hope this is the right mailing list to ask.

I tried the latest kernel version from "intrepid" and it looks like the
namespaces are compiled in except the pid namespace (according to the
config file stored in /boot).
Is there any particular reason?

Thanks.
-- Daniel

PS: I recently subscribed to this mailing list, so I am sorry if this
question has already been asked ...

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
01-13-2009, 10:38 AM
Daniel Lezcano

why the pid namespace is not compiled in the kernel?

Daniel Lezcano wrote:
> Hi,
>
> I hope it is the right mailing list to ask
>
> I tried the latest kernel version from "intrepid" and it looks like the
> namespaces are compiled in except the pid namespace (according the
> config file stored in /boot).
> Is there any particular reason ?
>
> Thanks.
> -- Daniel
>
> ps: I recently subscribed to this mailing list, sorry if this question
> was already asked ...
>
Did I ask on the right mailing list?

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
01-13-2009, 01:54 PM
Tim Gardner

why the pid namespace is not compiled in the kernel?

Daniel Lezcano wrote:
> Daniel Lezcano wrote:
>> Hi,
>>
>> I hope it is the right mailing list to ask
>>
>> I tried the latest kernel version from "intrepid" and it looks like the
>> namespaces are compiled in except the pid namespace (according the
>> config file stored in /boot).
>> Is there any particular reason ?
>>
>> Thanks.
>> -- Daniel
>>
>> ps: I recently subscribed to this mailing list, sorry if this question
>> was already asked ...
>>
> did I ask to the right mailing list ?
>

Though there are a few features included in the config that depend on
EXPERIMENTAL, CONFIG_PID_NS is not deemed sufficiently interesting to
mess with.

--
Tim Gardner tim.gardner@canonical.com

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
01-13-2009, 02:13 PM
Daniel Lezcano

why the pid namespace is not compiled in the kernel?

Tim Gardner wrote:
> Daniel Lezcano wrote:
>
>> Daniel Lezcano wrote:
>>
>>> Hi,
>>>
>>> I hope it is the right mailing list to ask
>>>
>>> I tried the latest kernel version from "intrepid" and it looks like the
>>> namespaces are compiled in except the pid namespace (according the
>>> config file stored in /boot).
>>> Is there any particular reason ?
>>>
>>> Thanks.
>>> -- Daniel
>>>
>>> ps: I recently subscribed to this mailing list, sorry if this question
>>> was already asked ...
>>>
>>>
>> did I ask to the right mailing list ?
>>
>>
>
> Though there are a few features included in the config that depend on
> EXPERIMENTAL, CONFIG_PID_NS is not deemed sufficiently interesting to
> mess with.
>
Ah, I see: like the network namespace, it is experimental, that makes sense.
We will have to wait a little before having a fully featured container in
Ubuntu.

Thanks.
-- Daniel

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
01-13-2009, 02:18 PM
Tim Gardner

why the pid namespace is not compiled in the kernel?

Daniel Lezcano wrote:
> Tim Gardner wrote:
>> Daniel Lezcano wrote:
>>
>>> Daniel Lezcano wrote:
>>>
>>>> Hi,
>>>>
>>>> I hope it is the right mailing list to ask
>>>>
>>>> I tried the latest kernel version from "intrepid" and it looks like
>>>> the namespaces are compiled in except the pid namespace (according
>>>> the config file stored in /boot).
>>>> Is there any particular reason ?
>>>>
>>>> Thanks.
>>>> -- Daniel
>>>>
>>>> ps: I recently subscribed to this mailing list, sorry if this
>>>> question was already asked ...
>>>>
>>> did I ask to the right mailing list ?
>>>
>>>
>>
>> Though there are a few features included in the config that depend on
>> EXPERIMENTAL, CONFIG_PID_NS is not deemed sufficiently interesting to
>> mess with.
>>
> Ah, I see, like the network namespace, it is experimental, that makes
> sense.
> We will have to wait a litlle before having a full featured container in
> Ubuntu
>
> Thanks.
> -- Daniel
>

I'm not totally opposed, but you'll need to convince me with use cases
and some stability analysis.

--
Tim Gardner tim.gardner@canonical.com

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
01-13-2009, 08:52 PM
Daniel Lezcano

why the pid namespace is not compiled in the kernel?

Tim Gardner wrote:
> Daniel Lezcano wrote:
>> Tim Gardner wrote:
>>> Daniel Lezcano wrote:
>>>> Daniel Lezcano wrote:
>>>>> Hi,
>>>>>
>>>>> I hope it is the right mailing list to ask
>>>>>
>>>>> I tried the latest kernel version from "intrepid" and it looks like
>>>>> the namespaces are compiled in except the pid namespace (according
>>>>> the config file stored in /boot).
>>>>> Is there any particular reason ?
>>>>>
>>>>> Thanks.
>>>>> -- Daniel
>>>>>
>>>>> ps: I recently subscribed to this mailing list, sorry if this
>>>>> question was already asked ...
>>>>>
>>>> did I ask to the right mailing list ?
>>>>
>>> Though there are a few features included in the config that depend on
>>> EXPERIMENTAL, CONFIG_PID_NS is not deemed sufficiently interesting to
>>> mess with.
>>>
>> Ah, I see, like the network namespace, it is experimental, that makes
>> sense.
>> We will have to wait a litlle before having a full featured container in
>> Ubuntu
>>
>> Thanks.
>> -- Daniel
>>
> I'm not totally opposed, but you'll need to convince me with use cases
> and some stability analysis.


The namespaces, combined with the control groups, provide the ability to
create a virtual private server.
You can launch an application like sshd or apache with its own private
resources, which allows several instances of the same server to run on
the same host without conflicts. You can also launch several operating
systems (e.g. a Debian) on the same host.
This is different from a virtual machine because the kernel is shared,
and it is up to the kernel to handle the system resources per group of
processes.
The advantages of this approach are scalability and the very low
overhead of the virtualization.
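
For instance, with the lxc tools described in the attached man page, two
independent sshd instances could be started along these lines (a sketch
only; the configuration template and the sshd path are assumptions, not
taken from the original mail):

lxc-execute -n sshd1 -f /usr/local/etc/lxc/lxc-macvlan.conf /usr/sbin/sshd &
lxc-execute -n sshd2 -f /usr/local/etc/lxc/lxc-macvlan.conf /usr/sbin/sshd &

Each instance gets its own network stack from the macvlan configuration,
so both can listen on port 22 without conflicting.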


There are two projects implementing the container feature: libvirt and
liblxc.


The pid namespace has been enabled since Fedora 9 and openSUSE 11, and I
did not run into any problem while using liblxc, so I guess we can
consider it stable.
The network namespace is mutually exclusive with sysfs until 2.6.29. I
spotted two bugs in the network namespace and I am fixing them right now:
one leads to a kernel panic (fixed), and the other one just fails
gracefully, sometimes, to create a network namespace when instantiating
new network namespaces in an infinite loop.


AFAICS, nobody has complained about the namespaces being enabled in these
distros.


The namespace tests are included in the LTP test suite, so IMHO it is
reasonable to say they are stable.
In any case, "experimental" is a scary word and I understand why the
feature would not be enabled for a stable kernel version.
If the features are missing, I can live with a custom kernel until
everything is enabled.


FYI, I attached the lxc.7 man page to this email; I hope it can give some
clues about what we can do with the namespaces and the cgroups.


Thanks.
-- Daniel






lxc

Name

lxc -- linux containers

Quick start

You are in a hurry and you don't want to read this man page. OK, without
warranty, here is the command to launch a shell inside a container with
a predefined configuration template; it may work.

/usr/local/bin/lxc-execute -n foo -f /usr/local/etc/lxc/lxc-macvlan.conf /bin/bash

Overview

Container technology is actively being pushed into the mainline Linux
kernel. It provides resource management through the control groups (aka
process containers) and resource isolation through the namespaces.

The Linux containers project, lxc, aims to use these new functionalities
to provide a userspace container object offering full resource isolation
and resource control for an application or a system.

The first objective of this project is to make life easier for the kernel
developers involved in the containers project, especially to continue
working on the new Checkpoint/Restart features. lxc is small enough to
manage a container easily with simple command lines, and complete enough
to be used for other purposes.

Requirements

lxc relies on a set of functionalities provided by the kernel, which need
to be active. Depending on which functionalities are missing, lxc will
either work with a restricted set of features or simply fail.

The following list gives the kernel features that must be enabled for a
fully featured container (a quick way to check them is sketched right
after the list):

* General
  * Control Group support
    -> namespace cgroup subsystem
    -> cpuset support
    -> Group CPU scheduler
    -> control group freeze subsystem
    -> Basis for grouping tasks (Control Groups)
    -> Simple CPU accounting
    -> Resource counters
      -> Memory resource controllers for Control Groups
  * Namespace support
    -> UTS namespace
    -> IPC namespace
    -> User namespace
    -> Pid namespace
* Network support
  -> Networking options
    -> Network namespace support
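
To check which of these options are enabled on a stock distro kernel, a
grep on the config file shipped in /boot should be enough (a minimal
sketch; the option names below correspond to 2.6.27-2.6.29 era kernels):

grep -E 'CONFIG_(CGROUPS|CGROUP_NS|CPUSETS|GROUP_SCHED|CGROUP_FREEZER|CGROUP_CPUACCT|RESOURCE_COUNTERS|CGROUP_MEM_RES_CTLR|NAMESPACES|UTS_NS|IPC_NS|USER_NS|PID_NS|NET_NS)=' /boot/config-$(uname -r)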


For the moment, the easiest way to have all the features in the kernel is
to use the git tree at:
git://git.kernel.org/pub/scm/linux/kernel/git/daveh/linux-2.6-lxc.git
But a kernel version >= 2.6.27, as shipped with the distros, may work with
lxc; it will have fewer functionalities, but enough to be interesting. The
planned kernel version with which lxc should be fully functional is 2.6.29.

Before using lxc, your system should be configured with file capabilities;
otherwise you will need to run the lxc commands as root. The control group
filesystem can be mounted anywhere, e.g.:
mount -t cgroup cgroup /cgroup
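
To make that mount persistent across reboots, an /etc/fstab entry along
these lines should work (a sketch; /cgroup is an arbitrary mount point):

cgroup /cgroup cgroup defaults 0 0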

Functional specification

A container is an object in which the configuration is persistent. The
application will be launched inside this container and it will use the
configuration which was previously created.

How to run an application in a container?

Before running an application, you should know which resources you want to
isolate. The default configuration is to isolate the pids, the SysV IPC and
the mount points. If you want to run a simple shell inside a container, a
basic configuration is needed, especially if you want to share the rootfs.
If you want to run an application like sshd, you should provide a new
network stack and a new hostname. If you want to avoid conflicts with some
files, e.g. /var/run/httpd.pid, you should remount /var/run with an empty
directory. If you want to avoid conflicts in all cases, you can specify a
rootfs for the container. The rootfs can be a directory tree, previously
bind-mounted from the initial rootfs, so you can still use your distro but
with your own /etc and /home.
Here is an example of directory tree for sshd:


[root@lxc sshd]$ tree -d rootfs

rootfs
|-- bin
|-- dev
|   |-- pts
|   `-- shm
|       `-- network
|-- etc
|   `-- ssh
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
    |-- empty
    |   `-- sshd
    |-- lib
    |   `-- empty
    |       `-- sshd
    `-- run
        `-- sshd


and the mount points file associated with it:

[root@lxc sshd]$ cat fstab

/lib /home/root/sshd/rootfs/lib none ro,bind 0 0
/bin /home/root/sshd/rootfs/bin none ro,bind 0 0
/usr /home/root/sshd/rootfs/usr none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
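
If the host rootfs is shared instead of using a private directory tree,
the /var/run conflict mentioned above could be avoided with an entry along
these lines (a sketch; it relies on the container having its own mount
namespace):

tmpfs /var/run tmpfs defaults 0 0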


How to run a system in a container?

Running a system inside a container is, paradoxically, easier than running
an application. Why? Because you don't have to care about which resources
are to be isolated: everything needs to be isolated, except /dev, which
needs to be remounted in the container rootfs. The other resources are
specified as isolated but without configuration, because the container
will set them up, e.g. the IPv4 address will be set up by the system
container's init scripts. Here is an example of the mount points file:

[root@lxc debian]$ cat fstab

/dev /home/root/debian/rootfs/dev none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0


More information can be added to the container to facilitate the
configuration, for example making the host's resolv.conf file accessible
from the container.

/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0


Container life cycle

When a container is created, it contains the configuration information.
When a process is launched, the container goes through the starting and
running states. When the last process running inside the container exits,
the container is stopped.

In case of failure while the container is being initialized, it passes
through the aborting state.

 ---------
| STOPPED |<-----------------
 ---------                  |
     |                      |
   start                    |
     |                      |
     V                      |
 ----------                 |
| STARTING |--error-        |
 ----------        |        |
     |             |        |
     V             V        |
 ---------    ----------    |
| RUNNING |  | ABORTING |   |
 ---------    ----------    |
     |             |        |
 no process        |        |
     |             |        |
     V             |        |
 ----------        |        |
| STOPPING |<-------        |
 ----------                 |
     |                      |
      ----------------------



Configuration

The container is configured through a configuration file; the format of
the configuration file is described in lxc.conf(5).
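
As an illustration, a minimal configuration could look like the following
(a sketch only; the exact keys are documented in lxc.conf(5) and may
differ between lxc versions, and the paths are assumptions):

lxc.utsname = foo
lxc.network.type = macvlan
lxc.network.link = eth0
lxc.network.flags = up
lxc.mount = /home/root/foo/fstab
lxc.rootfs = /home/root/foo/rootfs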

Creating / Destroying the containers

The container is created via the lxc-create command. It takes a container
name as a parameter and an optional configuration file. The name is used
by the other commands to refer to this container. The lxc-destroy command
will destroy the container object.

lxc-create -n foo
lxc-destroy -n foo


Starting / Stopping a container

When the container has been created, it is ready to run an application or
a system. When the application has to be destroyed, the container can be
stopped; that will kill all the processes of the container.

Running an application inside a container is not exactly the same thing as
running a system. For this reason, there are two commands to run an
application in a container:

lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [/bin/bash]


The lxc-execute command will run the specified command in a container, but
it will mount /proc and auto-create/auto-destroy the container if it does
not exist. It will furthermore create an intermediate process, lxc-init,
which is in charge of launching the specified command; that allows daemons
to be supported in the container. In other words, in the container lxc-init
has pid 1 and the first process of the application has pid 2.

The lxc-start command will run the specified command in the container,
doing nothing else than using the configuration specified by lxc-create.
The pid of the first process is 1. If no command is specified, lxc-start
will run /sbin/init.

To summarize, lxc-execute is for running an application and lxc-start is
for running a system.
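
As a quick illustration of the difference (a sketch; it assumes lxc-execute
passes the trailing arguments through to the command):

lxc-execute -n foo /bin/sh -c 'echo $$'   # typically prints 2: lxc-init holds pid 1

Running the same shell under lxc-start would report pid 1, since no
intermediate lxc-init process is inserted.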

If the application is no longer responding, inaccessible or is not able to
finish by itself, a wild lxc-stop command will kill all the processes in
the container without pity.

lxc-stop -n foo


Freeze / Unfreeze a container

Sometimes, it is useful to stop all the processes belonging to a container,
e.g. for job scheduling. The commands:

lxc-freeze -n foo


will put all the processes in an uninterruptible state and

lxc-unfreeze -n foo


will resume all the tasks.

This feature is enabled if the cgroup freezer is enabled in the kernel.

Getting information about the container

When there are a lot of containers, it is hard to keep track of what has
been created or destroyed, what is running, or which pids are running
inside a specific container. For this reason, the following commands give
this information:

lxc-ls
lxc-ps -n foo
lxc-info -n foo


lxc-ls lists the containers of the system. The command is a script built
on top of ls, so it accepts the options of the ls command, e.g.:

lxc-ls -C1


will display the containers list in one column or:

lxc-ls -l


will display the containers list and their permissions.

lxc-ps will display the pids for a specific container. Like lxc-ls, lxc-ps
is built on top of ps and accepts the same options, e.g.:

lxc-ps -n foo --forest


will display the process hierarchy for the container 'foo'.

lxc-info gives information about a specific container; at present, only
the state of the container is displayed.

Here is an example of how the combination of these commands allows listing
all the containers and retrieving their state.

for i in $(lxc-ls -1); do
    lxc-info -n $i
done


And displaying all the pids of all the containers:

for i in $(lxc-ls -1); do
    lxc-ps -n $i --forest
done


lxc-netstat displays network information for a specific container. This
command is built on top of the netstat command and will accept its options.

The following command will display the socket information for the
container 'foo'.

lxc-netstat -n foo -tano


Monitoring the containers

It is sometimes useful to track the states of a container, for example to
monitor it or just to wait for a specific state in a script.

The lxc-monitor command will monitor one or several containers. The
parameter of this command accepts a regular expression, for example:

lxc-monitor -n "foo|bar"


will monitor the states of containers named 'foo' and 'bar', and:

lxc-monitor -n ".*"


will monitor all the containers.

For a container 'foo' starting, doing some work and exiting, the output
will be in the form:

'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]


The lxc-wait command will wait for a specific state change and then exit.
This is useful in scripts, to synchronize the launch of a container or its
end. The parameter is an ORed combination of different states. The
following example shows how to wait for a container that was started in
the background.

# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!

# this command goes in background
lxc-execute -n foo mydaemon &

# block until the lxc-wait exits
# and lxc-wait exits when the container
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"



Setting the control group for a container

The container is tied to the control groups: when a container is started,
a control group is created and associated with it. The control group
properties can be read and modified while the container is running by
using the lxc-cgroup command.

The lxc-cgroup command is used to set or get a control group subsystem
property associated with a container. The subsystem name is handled by the
user; the command does not do any syntax checking on the subsystem name,
and if the subsystem name does not exist, the command will fail.

lxc-cgroup -n foo cpuset.cpus


will display the content of this subsystem.

lxc-cgroup -n foo cpu.shares 512


will set the subsystem to the specified value.
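
For example, to restrict the container 'foo' to the first two cpus (a
sketch; it assumes the cpuset subsystem is available in the mounted
control group hierarchy):

lxc-cgroup -n foo cpuset.cpus 0,1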

Bugs

lxc is still in development, so the command syntax and the API can change.
Version 1.0.0 will be the frozen version.

See Also

lxc-create(1), lxc-destroy(1), lxc-start(1), lxc-execute(1), lxc-stop(1),
lxc-monitor(1), lxc-wait(1), lxc-cgroup(1), lxc-ls(1), lxc-ps(1),
lxc-info(1), lxc-freeze(1), lxc-unfreeze(1), lxc.conf(5)

Author

Daniel Lezcano <daniel.lezcano@free.fr>

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
01-20-2009, 03:13 PM
Tim Gardner

why the pid namespace is not compiled in the kernel?

Daniel Lezcano wrote:
> Tim Gardner wrote:
>> Daniel Lezcano wrote:
>>
>>> Tim Gardner wrote:
>>>
>>>> Daniel Lezcano wrote:
>>>>
>>>>
>>>>> Daniel Lezcano wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I hope it is the right mailing list to ask
>>>>>>
>>>>>> I tried the latest kernel version from "intrepid" and it looks like
>>>>>> the namespaces are compiled in except the pid namespace (according
>>>>>> the config file stored in /boot).
>>>>>> Is there any particular reason ?
>>>>>>
>>>>>> Thanks.
>>>>>> -- Daniel
>>>>>>
>>>>>> ps: I recently subscribed to this mailing list, sorry if this
>>>>>> question was already asked ...
>>>>>>
>>>>> did I ask to the right mailing list ?
>>>>>
>>>>>
>>>> Though there are a few features included in the config that depend on
>>>> EXPERIMENTAL, CONFIG_PID_NS is not deemed sufficiently interesting to
>>>> mess with.
>>>>
>>> Ah, I see, like the network namespace, it is experimental, that makes
>>> sense.
>>> We will have to wait a litlle before having a full featured container in
>>> Ubuntu
>>>
>>> Thanks.
>>> -- Daniel
>>>
>>>
>>
>> I'm not totally opposed, but you'll need to convince me with use cases
>> and some stability analysis.
>>
>
> The namespaces with the control group provides the ability to create a
> virtual private server.
> You can launch an application like sshd or apache with its own private
> resources, that allows to make several instances of the same server on
> the same host without conflicts. You can launch several operating
> systems (eg. a debian) on the same host.
> This is different from the virtual machine because the kernel is shared
> and it is up to it to handle the system resources per group of processes.
> The advantage of this approach is the scalability and the very low
> overhead of the virtualization.
>
> There are two projects implementing the container feature, the libvirt
> and the liblxc.
>
> The pid namespace is enabled since fedora 9 and opensuse 11, and I
> didn't fall into any problem while using the liblxc, I guess we can
> consider it stable.
> The network namespace is mutually exclusive with sysfs until 2.6.29, I
> spotted 2 bugs in the netwok namespace and I am fixing them right now,
> one is leading to a kernel panic (fixed) and the last one just fails
> gracefully, sometimes, to create a network namespace when trying to
> instantiate a new network namespace in a infinite loop.
>
> AFAICS, nobody complained about the namespaces being enabled in these
> different distros.
>
> The namespaces tests are included in the ltp test suite, so IMHO, it is
> reasonable to say they are stable.
> In any case, "experimental" is a scary word and I understand why the
> feature would not be enabled for a stable kernel version
> If the features are missing I can live with a custom kernel until
> everything is enabled.
>
> FYI, I added the lxc.7 man page to this email, I hope that can give some
> clues of what we can do with the namespaces and the cgroup
>
> Thanks.
> -- Daniel
>
>
>
>
>
>

Enabled.

http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-jaunty.git;a=commit;h=c0399d5596fb3db7f685bd59ab0c93b3612f3ee9
--
Tim Gardner tim.gardner@canonical.com

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
01-20-2009, 06:44 PM
Daniel Lezcano

why the pid namespace is not compiled in the kernel?

Tim Gardner wrote:
> Daniel Lezcano wrote:
>
>> Tim Gardner wrote:
>>
>>> Daniel Lezcano wrote:
>>>
>>>
>>>> Tim Gardner wrote:
>>>>
>>>>
>>>>> Daniel Lezcano wrote:
>>>>>
>>>>>
>>>>>
>>>>>> Daniel Lezcano wrote:
>>>>>>
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I hope it is the right mailing list to ask
>>>>>>>
>>>>>>> I tried the latest kernel version from "intrepid" and it looks like
>>>>>>> the namespaces are compiled in except the pid namespace (according
>>>>>>> the config file stored in /boot).
>>>>>>> Is there any particular reason ?
>>>>>>>
>>>>>>> Thanks.
>>>>>>> -- Daniel
>>>>>>>
>>>>>>> ps: I recently subscribed to this mailing list, sorry if this
>>>>>>> question was already asked ...
>>>>>>>
>>>>>>>
>>>>>> did I ask to the right mailing list ?
>>>>>>
>>>>>>
>>>>>>
>>>>> Though there are a few features included in the config that depend on
>>>>> EXPERIMENTAL, CONFIG_PID_NS is not deemed sufficiently interesting to
>>>>> mess with.
>>>>>
>>>>>
>>>> Ah, I see, like the network namespace, it is experimental, that makes
>>>> sense.
>>>> We will have to wait a litlle before having a full featured container in
>>>> Ubuntu
>>>>
>>>> Thanks.
>>>> -- Daniel
>>>>
>>>>
>>>>
>>> I'm not totally opposed, but you'll need to convince me with use cases
>>> and some stability analysis.
>>>
>>>
>> The namespaces with the control group provides the ability to create a
>> virtual private server.
>> You can launch an application like sshd or apache with its own private
>> resources, that allows to make several instances of the same server on
>> the same host without conflicts. You can launch several operating
>> systems (eg. a debian) on the same host.
>> This is different from the virtual machine because the kernel is shared
>> and it is up to it to handle the system resources per group of processes.
>> The advantage of this approach is the scalability and the very low
>> overhead of the virtualization.
>>
>> There are two projects implementing the container feature, the libvirt
>> and the liblxc.
>>
>> The pid namespace is enabled since fedora 9 and opensuse 11, and I
>> didn't fall into any problem while using the liblxc, I guess we can
>> consider it stable.
>> The network namespace is mutually exclusive with sysfs until 2.6.29, I
>> spotted 2 bugs in the netwok namespace and I am fixing them right now,
>> one is leading to a kernel panic (fixed) and the last one just fails
>> gracefully, sometimes, to create a network namespace when trying to
>> instantiate a new network namespace in a infinite loop.
>>
>> AFAICS, nobody complained about the namespaces being enabled in these
>> different distros.
>>
>> The namespaces tests are included in the ltp test suite, so IMHO, it is
>> reasonable to say they are stable.
>> In any case, "experimental" is a scary word and I understand why the
>> feature would not be enabled for a stable kernel version
>> If the features are missing I can live with a custom kernel until
>> everything is enabled.
>>
>> FYI, I added the lxc.7 man page to this email, I hope that can give some
>> clues of what we can do with the namespaces and the cgroup
>>
>> Thanks.
>> -- Daniel
>>
>>
>>
>>
>>
>>
>>
>
> Enabled.
>
> http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-jaunty.git;a=commit;h=c0399d5596fb3db7f685bd59ab0c93b3612f3ee9
>
Excellent!
Thanks!

--Daniel

--
kernel-team mailing list
kernel-team@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kernel-team
 
