Old 05-19-2010, 05:16 PM
"Allen, Jack"
 
Multipath I/O stats

Hello:
This is a general question about Multipath I/O stats, so I don't
think there is a need to state the exact versions of the kernel and
multipath.

With multipath set up to access a SAN with some number of LUNs
and, for this question, 2 paths set for round robin, how can the I/O stats
be seen/gathered to see the throughput on each path and how balanced the
I/O is?

I know that round robin just sends the I/O requests down each path
in turn, without regard to the requests that may already be queued, and
therefore it is not balanced. But knowing whether one path, for whatever
reason, is slower than the other would help in some performance troubleshooting.

----------
Jack Allen

Old 05-20-2010, 06:28 PM
Yong Huang

Multipath I/O stats

> With multipath set up to access a SAN with some number of LUNs
> and for this question 2 paths set for round robin, how can the
> I/O stats be seen/gathered to see the throughput on each path
> and how balanced the I/O is?

I think we can do this. multipath -l tells you what disks are combined to form a mapper path. Then you can use iostat to check the I/O stats of each disk along with each mapper. It won't be hard to write a shell script to re-print the lines of iostat nicely, grouping the lines of the disks under their respective mapper path.
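
A minimal sketch of such a script, assuming the multipath -l output format shown later in this thread (map name and dm-N on the heading line, "\_ H:C:T:L sdX ..." path lines) and an iostat that lists the dm-N devices; it only prints the cumulative report, so treat it as a starting point:

#!/bin/bash
# Sketch: re-print iostat lines grouped under their multipath map.
# Device and map names are whatever "multipath -l" reports on this host.

# Build "dm-N sdX" pairs from multipath -l
multipath -l | awk '
    / dm-[0-9]+ / { for (i = 1; i <= NF; i++) if ($i ~ /^dm-[0-9]+$/) map = $i }
    /[0-9]+:[0-9]+:[0-9]+:[0-9]+/ {
        for (i = 1; i <= NF; i++) if ($i ~ /^sd[a-z]+$/) print map, $i
    }' > /tmp/map_paths

# One iostat report (cumulative stats since boot)
iostat -d > /tmp/iostat_out

# Print each mapper device followed by the stats of its paths
awk '{ paths[$1] = paths[$1] " " $2 } END { for (d in paths) print d, paths[d] }' /tmp/map_paths |
while read dm paths; do
    echo "== $dm =="
    grep -w "$dm" /tmp/iostat_out
    for p in $paths; do grep -w "$p" /tmp/iostat_out; done
    echo
done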

Yong Huang




Old 05-20-2010, 08:33 PM
"Allen, Jack"

Multipath I/O stats

-----Original Message-----
From: Yong Huang [mailto:yong321@yahoo.com]
Sent: Thursday, May 20, 2010 2:28 PM
To: Allen, Jack
Cc: redhat-list@redhat.com
Subject: Re: Multipath I/O stats

> With multipath set up to access a SAN with some number of LUNs
> and for this question 2 paths set for round robin, how can the
> I/O stats be seen/gathered to see the throughput on each path
> and how balanced the I/O is?

I think we can do this. multipath -l tells you what disks are combined
to form a mapper path. Then you can use iostat to check I/O stats of
each disk along with each mapper. It won't be hard to write a shell
script to re-print the lines of iostat nicely, grouping the lines of the
disks under their respective mapper path.

Yong Huang

===========================
Thanks for the reply.

This is the output of just one of the mpaths that I monitored for a
while.

mpath13 (360060e8005491000000049100000703c) dm-0 HP,OPEN-V
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 2:0:0:2 sdaa 65:160 [active][undef]
\_ 2:0:1:2 sdam 66:96 [active][undef]
\_ 1:0:0:2 sdc 8:32 [active][undef]
\_ 1:0:1:2 sdo 8:224 [active][undef]


Below is the command I used and the results. I know this is a small
sampling and I have eliminated the ones that had 0 I/O to save space
here. But it appears the I/O is not really being done round-robin as I
think it should be. You will notice sdam and sdb are the only ones that
do any I/O. Now maybe this is because of some preferred path and
controller relationship, I don't know. Any help understanding this would
be helpful.

iostat -d -p sdaa -p sdam -p sdc -p sdb -p dm-0 2 20 > /tmp/zzxx

Linux 2.6.18-164.el5PAE (h0009) 05/20/2010

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 1.53 49.43 15.03 60932848 18523880
sdam 1.53 49.35 15.10 60833016 18616608
sdc 1.53 49.41 15.04 60905568 18542936
sdb 1.38 57.21 3.68 70522704 4533144
dm-0 32.23 197.56 60.24 243542080 74259264

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 4.50 72.00 0.00 144 0
sdc 0.00 0.00 0.00 0 0
sdb 4.50 72.00 0.00 144 0
dm-0 9.00 72.00 0.00 144 0

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 10.00 160.00 0.00 320 0
sdc 0.00 0.00 0.00 0 0
sdb 7.50 112.00 8.00 224 16
dm-0 20.00 160.00 0.00 320 0

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 7.00 112.00 0.00 224 0
sdc 0.00 0.00 0.00 0 0
sdb 5.00 80.00 0.00 160 0
dm-0 14.00 112.00 0.00 224 0

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 3.50 56.00 0.00 112 0
sdc 0.00 0.00 0.00 0 0
sdb 3.50 56.00 0.00 112 0
dm-0 7.00 56.00 0.00 112 0

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 0.50 0.00 7.96 0 16
sdc 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
dm-0 1.00 0.00 7.96 0 16

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 3.50 32.00 96.00 64 192
sdc 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
dm-0 16.00 32.00 96.00 64 192

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 0.50 0.00 8.00 0 16
sdc 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
dm-0 1.00 0.00 8.00 0 16

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 1.50 0.00 24.00 0 48
sdc 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
dm-0 3.00 0.00 24.00 0 48

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 5.00 24.00 88.00 48 176
sdc 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
dm-0 14.00 24.00 88.00 48 176

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 0.50 8.00 0.00 16 0
sdc 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
dm-0 1.00 8.00 0.00 16 0

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdaa 0.00 0.00 0.00 0 0
sdam 7.50 0.00 120.00 0 240
sdc 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
dm-0 15.00 0.00 120.00 0 240

-----
Jack Allen

Old 05-21-2010, 04:19 AM
Yong Huang

Multipath I/O stats

> > With multipath set up to access a SAN with some number of LUNs
> > and for this question 2 paths set for round robin, how can the
> > I/O stats be seen/gathered to see the throughput on each path
> > and how balanced the I/O is?
>
> I think we can do this. multipath -l tells you what disks are combined
> to form a mapper path. Then you can use iostat to check I/O stats of
> each disk along with each mapper. It won't be hard to write a shell
> script to re-print the lines of iostat nicely, grouping the lines of the
> disks under their respective mapper path.
>
> Yong Huang
>
> ===========================
> Thanks for the reply.
>
> This is the output of just one of the mpaths that I monitored for a
> while.
>
> mpath13 (360060e8005491000000049100000703c) dm-0 HP,OPEN-V
> [size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
> \_ round-robin 0 [prio=0][active]
> \_ 2:0:0:2 sdaa 65:160 [active][undef]
> \_ 2:0:1:2 sdam 66:96 [active][undef]
> \_ 1:0:0:2 sdc 8:32 [active][undef]
> \_ 1:0:1:2 sdo 8:224 [active][undef]
>
>
> Below is the command I used and the results. I know this is a small
> sampling and I have eliminated the ones that had 0 I/O to save space
> here. But it appears the I/O is not really being done round-robin as I
> think it should be. You will notice sdam and sdb are the only ones that
> do any I/O. Now maybe this is because of some preferred path and
> controller relationship, I don't know. Any help understanding this would
> be helpful.
>
> iostat -d -p sdaa -p sdam -p sdc -p sdb -p dm-0 2 20 > /tmp/zzxx
>
> Linux 2.6.18-164.el5PAE (h0009) 05/20/2010
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sdaa 1.53 49.43 15.03 60932848 18523880
> sdam 1.53 49.35 15.10 60833016 18616608
> sdc 1.53 49.41 15.04 60905568 18542936
> sdb 1.38 57.21 3.68 70522704 4533144
> dm-0 32.23 197.56 60.24 243542080 74259264
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sdaa 0.00 0.00 0.00 0 0
> sdam 4.50 72.00 0.00 144 0
> sdc 0.00 0.00 0.00 0 0
> sdb 4.50 72.00 0.00 144 0
> dm-0 9.00 72.00 0.00 144 0
>
> ...

Jack,

You can look at the first iteration of your iostat output, which is the cumulative stats since bootup (the later iterations are each a 2-second sample). If your iostat had the argument -p sdo instead of -p sdb (it must be a typo compared with the output of your multipath command), you would see all four paths have almost perfectly equal I/O stats, because all your paths are active. The numbers below these cumulative stats indicate your currently selected paths are sdam and (likely) sdo (not shown due to the typo). After rr_min_io I/Os, I think, they'll switch to the other two paths.
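
For reference, the command with the typo corrected (sdo in place of sdb, matching the multipath -l output above) would be:

iostat -d -p sdaa -p sdam -p sdc -p sdo -p dm-0 2 20 > /tmp/zzxx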

Your multipath output seems to have the 4 path lines missing their leading space; they should be indented below the priority group line.

Is it OK if you show me the first part of /etc/multipath.conf, the uncommented lines before the actual multipaths section?

BTW, I used a wrong word in my last message. Instead of "disk", I really should say "device".

Yong Huang




Old 05-21-2010, 08:04 PM
"Allen, Jack"

Multipath I/O stats

-----Original Message-----
From: Yong Huang [mailto:yong321@yahoo.com]
Sent: Friday, May 21, 2010 12:19 AM
To: Allen, Jack
Cc: redhat-list@redhat.com
Subject: RE: Multipath I/O stats

> > With multipath set up to access a SAN with some number of LUNs
> > and for this question 2 paths set for round robin, how can the
> > I/O stats be seen/gathered to see the throughput on each path
> > and how balanced the I/O is?
>
> I think we can do this. multipath -l tells you what disks are combined
> to form a mapper path. Then you can use iostat to check I/O stats of
> each disk along with each mapper. It won't be hard to write a shell
> script to re-print the lines of iostat nicely, grouping the lines of the
> disks under their respective mapper path.
>
> Yong Huang
>
> ===========================
> Thanks for the reply.
>
> This is the output of just one of the mpaths that I monitored for a
> while.
>
> mpath13 (360060e8005491000000049100000703c) dm-0 HP,OPEN-V
> [size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
> \_ round-robin 0 [prio=0][active]
> \_ 2:0:0:2 sdaa 65:160 [active][undef]
> \_ 2:0:1:2 sdam 66:96 [active][undef]
> \_ 1:0:0:2 sdc 8:32 [active][undef]
> \_ 1:0:1:2 sdo 8:224 [active][undef]
>
>
> Below is the command I used and the results. I know this is a small
> sampling and I have eliminated the ones that had 0 I/O to save space
> here. But it appears the I/O is not really being done round-robin as I
> think it should be. You will notice sdam and sdb are the only ones that
> do any I/O. Now maybe this is because of some preferred path and
> controller relationship, I don't know. Any help understanding this would
> be helpful.
>
> iostat -d -p sdaa -p sdam -p sdc -p sdb -p dm-0 2 20 > /tmp/zzxx
>
> Linux 2.6.18-164.el5PAE (h0009) 05/20/2010
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sdaa 1.53 49.43 15.03 60932848 18523880
> sdam 1.53 49.35 15.10 60833016 18616608
> sdc 1.53 49.41 15.04 60905568 18542936
> sdb 1.38 57.21 3.68 70522704 4533144
> dm-0 32.23 197.56 60.24 243542080 74259264
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sdaa 0.00 0.00 0.00 0 0
> sdam 4.50 72.00 0.00 144 0
> sdc 0.00 0.00 0.00 0 0
> sdb 4.50 72.00 0.00 144 0
> dm-0 9.00 72.00 0.00 144 0
>
> ...

Jack,

You can look at the first iteration of your iostat output, which is the
cumulative stats since bootup (the later iterations are each a
2-second sample). If your iostat had the argument -p sdo instead of -p sdb
(it must be a typo compared with the output of your multipath command),
you would see all four paths have almost perfectly equal I/O stats,
because all your paths are active. The numbers below these cumulative stats
indicate your currently selected paths are sdam and (likely) sdo (not
shown due to the typo). After rr_min_io I/Os, I think, they'll switch to
the other two paths.

Your multipath output seems to have the 4 path lines missing their leading
space; they should be indented below the priority group line.

Is it OK if you show me the first part of /etc/multipath.conf, the uncommented
lines before the actual multipaths section?

BTW, I used a wrong word in my last message. Instead of "disk", I really
should say "device".

Yong Huang
==========

Thanks for the follow up.

You are correct, I entered the wrong device name. I monitored again with
the correct device names for an overall longer period of time and it
did rotate through all the devices. But it seemed to take a few
minutes to rotate from one path to the next. So I added rr_min_io 2 in
the defaults section and ran multipathd -k, reconfigure, but it did not have
any effect. I am reading the multipath.conf man page now to see if I can
find out anything.

You are correct, there are 4 paths; in my original question I just used 2
as an example. Then you asked questions and I provided more information.
And the lack of leading space in the output of multipath -l is probably due
to my copying and pasting.

multipath.conf
VVVVVVVVVVVVVV
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^cciss!c[0-9]d[0-9]*"
        devnode "^hd[a-z]"
        devnode "^vg*"
}

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
        polling_interval 10
        path_checker readsector0
        path_selector "round-robin 0"
        path_grouping_policy multibus
        failback 5
        no_path_retry 5
        rr_min_io 2
        bindings_file "/etc/multipath.bindings"
}
^^^^^^^^^^^^^^^^^^

Everything else is commented out. It is using the built-in multipath
rule/configuration:

device {
        vendor (HITACHI|HP)
        product OPEN-.*
        path_checker tur
        failback immediate
        no_path_retry 12
        rr_min_io 1000
}

While I was copying and pasting I noticed the rr_min_io 1000, which is
probably why it is taking a while to rotate through the paths.
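
As a sketch only (not tested here), one common approach to override that built-in entry is to add a device stanza with the same vendor/product to /etc/multipath.conf, since a matching devices entry takes precedence over both the built-in table and the defaults section; the values below simply repeat the built-in entry shown above with rr_min_io changed:

devices {
        device {
                vendor                  "(HITACHI|HP)"
                product                 "OPEN-.*"
                path_checker            tur
                failback                immediate
                no_path_retry           12
                rr_min_io               2
        }
}

After editing, the maps can be reloaded the same way as above, for example with multipathd -k and the reconfigure command.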

-----
Jack Allen


Old 05-22-2010, 05:28 PM
Yong Huang

Multipath I/O stats

While we're on this topic, here are some not very technical thoughts
on load balancing with multipath. When we talk about "load balance",
we always tend to associate it with overall performance improvement
(overall means scalability or throughput of multiple "clients", not
latency of a single "client"). For example, an Oracle cluster
database (called RAC by Oracle) allows more clients to connect to
the database without degraded response time. But here we're dealing
with multipath I/O. It's different in that the work done underneath
is on one single piece of storage hardware, a hard disk (or a
virtual one provided by some storage technology). Because read speed
on the storage itself is always much slower than any of the multiple
paths, which are usually Fibre Channel, having a single path or multiple
paths to access the single slow disk will not provide a performance
improvement. Am I missing anything obvious?

No doubt multipath provides failover capability or failure
resilience. Even with that one advantage, it's worth it.

Yong Huang




Old 05-22-2010, 06:05 PM
"Marti, Robert"

Multipath I/O stats

If you stripe reads across enough spindles, you can saturate a fiber
link. In most cases multipath is good for redundancy, not throughput,
but it can be used for better throughput if you can feed enough data.

Also remember you can multipath with iSCSI, and similar technologies
benefit immensely from multipathing since the media throughput is lower.

Sorry for incoherence - recently woke up after a 23-hour work day.

Sent from my iPhone

On May 22, 2010, at 12:59, "Yong Huang" <yong321@yahoo.com> wrote:

> While we're on this topic, here's some not very technical thought
> on load balancing of multipath. When we talk about "load balance",
> we always tend to associate it with overall performance improvement
> (overall means scalability or throughput of multiple "clients", not
> latency of a single "client"). For example, an Oracle cluster
> database (called RAC by Oracle) allows more clients to connect to
> the database without degraded response time. But here we're dealing
> with multipath I/O. It's different in that the work done underneath
> is on one single piece of storage hardware, a hard disk (or a
> virtual one provided by some storage technology). Because read speed
> on the storage itself is always much slower than any of the multi-
> paths which is usually fiber channel, whether you have a single or
> multiple paths to access the single slow disk will not provide
> performance improvement. Am I missing anything obvious?
>
> No doubt multipath provides failover capability or failure
> resilience. Even with that one advantage, it's worth it.
>
> Yong Huang

Old 05-23-2010, 06:54 PM
Yong Huang

Multipath I/O stats

Robert,

If I'm not mistaken, multipath is about multiple accesses to one
single hard drive (or a virtual one on top of modern storage
technology). That is, having more than one path does not mean
using multiple spindles. It's still one single spindle.

Yong Huang

-----Original message-----

If you stripe reads across enough spindles, you can saturate a fiber
link. In most cases multipath is good for redundancy, not throughput,
but it can be used for better throughput if you can feed enough data.

Also remember you can multipath with iSCSI, and similar technologies
benefit immensely from multipathing since the media throughput is lower.

Sorry for incoherence - recently woke up after a 23-hour work day.


On May 22, 2010, at 12:59, "Yong Huang" <yong321@yahoo.com> wrote:

> While we're on this topic, here's some not very technical thought
> on load balancing of multipath. When we talk about "load balance",
> we always tend to associate it with overall performance improvement
> (overall means scalability or throughput of multiple "clients", not
> latency of a single "client"). For example, an Oracle cluster
> database (called RAC by Oracle) allows more clients to connect to
> the database without degraded response time. But here we're dealing
> with multipath I/O. It's different in that the work done underneath
> is on one single piece of storage hardware, a hard disk (or a
> virtual one provided by some storage technology). Because read speed
> on the storage itself is always much slower than any of the multi-
> paths which is usually fiber channel, whether you have a single or
> multiple paths to access the single slow disk will not provide
> performance improvement. Am I missing anything obvious?
>
> No doubt multipath provides failover capability or failure
> resilience. Even with that one advantage, it's worth it.
>
> Yong Huang




Old 05-23-2010, 07:35 PM
"Marti, Robert"

Multipath I/O stats

Not necessarily. If you advertise one 10-disk LUN from the SAN, the OS
will see it as one disk, and multipath can make multiple paths to the
same "disk". That's 10 spindles in one disk, which, if they're fast or
SSD, will saturate a fiber link.

Sent from my iPhone

On May 23, 2010, at 2:25 PM, "Yong Huang" <yong321@yahoo.com> wrote:

> Robert,
>
> If I'm not mistaken, multipath is about multiple accesses to one
> single hard drive (or a virtual one on top of modern storage
> technology). That is, having more than one path does not mean
> using multiple spindles. It's still one single spindle.
>
> Yong Huang
>
> -----Original message-----
>
> If you stripe reads across enough spindles, you can saturate a fiber
> link. In most cases multipath is good for redundancy, not throughput,
> but it can be used for better throughput if you can feed enough data.
>
> Also remember you can multipath with iSCSI, and similar technologies
> benefit immensely from multipathing since the media throughput is
> lower.
>
> Sorry for incoherence - recently woke up after a 23-hour work day.
>
>
> On May 22, 2010, at 12:59, "Yong Huang" <yong321@yahoo.com> wrote:
>
>> While we're on this topic, here's some not very technical thought
>> on load balancing of multipath. When we talk about "load balance",
>> we always tend to associate it with overall performance improvement
>> (overall means scalability or throughput of multiple "clients", not
>> latency of a single "client"). For example, an Oracle cluster
>> database (called RAC by Oracle) allows more clients to connect to
>> the database without degraded response time. But here we're dealing
>> with multipath I/O. It's different in that the work done underneath
>> is on one single piece of storage hardware, a hard disk (or a
>> virtual one provided by some storage technology). Because read speed
>> on the storage itself is always much slower than any of the multi-
>> paths which is usually fiber channel, whether you have a single or
>> multiple paths to access the single slow disk will not provide
>> performance improvement. Am I missing anything obvious?
>>
>> No doubt multipath provides failover capability or failure
>> resilience. Even with that one advantage, it's worth it.
>>
>> Yong Huang

Old 05-24-2010, 05:57 PM
Yong Huang

Multipath I/O stats

> Not necessarily. If you advertise one 10-disk LUN from the SAN,
> the OS will see it as one disk, and multipath can make multiple
> paths to the same "disk". That's 10 spindles in one disk, which,
> if they're fast or SSD, will saturate a fiber link.

OK. I agree. Now a slightly different issue. Currently multipath load
balance allows only one path to be used at any given moment, chosen
in a round-robin fashion. Unless multiple paths are allowed to read
simultaneously, the bottleneck is on the single path when the "disk"
is faster. This makes "load balance" meaningless.

If the "disk" is slower, even future implementation of multiple paths
simultaneous read doesn't help in the sense of load balance because
the bottleneck is on the "disk".

Yong Huang



