Old 04-15-2011, 01:05 PM
Christopher Chan
 
Default 40TB File System Recommendations

On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
> On 04/14/2011 09:00 PM, Christopher Chan wrote:
>>
>> Wanna try that again with 64MB of cache only and tell us whether there
>> is a difference in performance?
>>
>> There is a reason why 3ware 85xx cards were complete rubbish when used
>> for raid5 and which led to the 95xx/96xx series.
>> _
>
> I don't happen to have any systems I can test with the 1.5TB drives
> without controller cache right now, but I have a system with some old
> 500GB drives (which are about half as fast as the 1.5TB drives in
> individual sustained I/O throughput) attached directly to onboard SATA
> ports in an 8 x RAID6 with *no* controller cache at all. The machine has
> 16GB of RAM and bonnie++ therefore used 32GB of data for the test.
>
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> pbox3        32160M   389  98 76709  22 91071  26  2209  95 264892  26 590.5  11
> Latency             24190us    1244ms    1580ms   60411us   69901us   42586us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> pbox3               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 10910  31 +++++ +++ +++++ +++ 29293  80 +++++ +++ +++++ +++
> Latency               775us     610us     979us     740us     370us     380us
>
> Given that the underlying drives are effectively something like half as
> fast as the drives in the other test, the results are quite comparable.

Woohoo, next we will be seeing md raid6 also giving comparable results
if that is the case. I am not the only person on this list who thinks
cache is king for raid5/6 on hardware raid boards, and the better
performance of hardware raid + BBU cache is one of the two reasons why we
don't do md raid5/6.


>
> Cache doesn't make a lot of difference when you quickly write a lot more
> data than the cache can hold. The limiting factor becomes the slowest
> component - usually the drives themselves. Cache isn't magic performance
> pixie dust. It helps in certain use cases and is nearly irrelevant in
> others.
>

Yeah, you are right - but cache is primarily there to buffer writes for
performance. Why else go through the expense of getting BBU cache? So
what happens when you tweak bonnie a bit?
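
For reference, a run like the one quoted above boils down to something
like this (mount point and user are only examples; -s is roughly twice
RAM so the page cache cannot hide the drives, and -n 16 matches the
"files 16" line in the results):

    # sequential/random throughput plus create/delete tests on the array
    bonnie++ -d /mnt/array -s 32g -n 16 -u nobody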
 
Old 04-15-2011, 01:17 PM
Rudi Ahlers
 
Default 40TB File System Recommendations

On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:


> Yeah, you are right - but cache is primarily there to buffer writes for
> performance. Why else go through the expense of getting BBU cache? So
> what happens when you tweak bonnie a bit?

As a matter of interest, does anyone know how to use an SSD drive for cache purposes with Linux software RAID? ZFS has this feature and it makes a helluva difference to a storage server's performance.



--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com


Office: 087 805 9573
Cell: 082 554 7532

 
Old 04-15-2011, 01:47 PM
Jerry Franz
 
Default 40TB File System Recommendations

On 04/15/2011 06:05 AM, Christopher Chan wrote:
>
> Woohoo, next we will be seeing md raid6 also giving comparable results
> if that is the case. I am not the only person on this list who thinks
> cache is king for raid5/6 on hardware raid boards, and the better
> performance of hardware raid + BBU cache is one of the two reasons why we
> don't do md raid5/6.
>
>

That *is* md RAID6. Sorry I didn't make that clear. I don't use anyone's
hardware RAID6 right now because I haven't found a board so far that is
as fast as using md. I get better performance from even a BBU-backed 95xx
series 3ware board by using it to serve the drives as JBOD and then
using md to do the actual RAID.
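
For what it's worth, the md side of that is nothing exotic; with made-up
device names, eight exported disks come together roughly like this:

    # controller presents each disk as a single JBOD unit; md does the RAID6
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
    cat /proc/mdstat    # watch the initial build/resync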

> Yeah, you are right - but cache is primarily there to buffer writes for
> performance. Why else go through the expense of getting BBU cache? So
> what happens when you tweak bonnie a bit?

For smaller writes. When writes *do* fit in the cache you get a big
bump. As I said: it helps some cases, not all cases. BBU-backed cache
helps if you have lots of small writes, not so much if you are writing
gigabytes of stuff more sequentially.

--
Benjamin Franz
 
Old 04-15-2011, 04:26 PM
Ross Walker
 
Default 40TB File System Recommendations

On Apr 15, 2011, at 9:17 AM, Rudi Ahlers <Rudi@SoftDux.com> wrote:



> As a matter of interest, does anyone know how to use an SSD drive for
> cache purposes with Linux software RAID? ZFS has this feature and it
> makes a helluva difference to a storage server's performance.

Put the file system's log device on it.
-Ross
 
Old 04-15-2011, 04:32 PM
Rudi Ahlers
 
Default 40TB File System Recommendations

On Fri, Apr 15, 2011 at 6:26 PM, Ross Walker <rswwalker@gmail.com> wrote:


>> As a matter of interest, does anyone know how to use an SSD drive for
>> cache purposes with Linux software RAID? ZFS has this feature and it
>> makes a helluva difference to a storage server's performance.
>
> Put the file system's log device on it.
>
> -Ross

Well, ZFS has a separate ZIL for that purpose, and the ZIL adds extra
protection / redundancy to the whole pool. But the Cache / L2ARC drive
caches all commonly read and written data (simply put) onto SSD to
improve overall system performance.

So I was wondering if one could do this with mdraid, or even just with
EXT3 / EXT4?
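
For reference, on ZFS that is roughly the following (pool and device
names are only examples):

    zpool add tank log /dev/sdj      # separate intent log (ZIL / slog)
    zpool add tank cache /dev/sdk    # L2ARC read cache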

--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com


Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532

 
Old 04-15-2011, 08:46 PM
Ross Walker
 
Default 40TB File System Recommendations

On Apr 15, 2011, at 12:32 PM, Rudi Ahlers <Rudi@SoftDux.com> wrote:



> Well, ZFS has a separate ZIL for that purpose, and the ZIL adds extra
> protection / redundancy to the whole pool. But the Cache / L2ARC drive
> caches all commonly read and written data (simply put) onto SSD to
> improve overall system performance.
>
> So I was wondering if one could do this with mdraid, or even just with
> EXT3 / EXT4?

Ext3/4 and XFS allow specifying an external log device which, if it is an SSD, can speed up writes. All these file systems aggressively use the page cache for read/write caching. The only thing you don't get is an L2ARC-type cache, but I have heard of a dm-cache project that might provide that type of cache.
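
Something along these lines, for example (device names are made up, and
the journal/log partitions would normally be small):

    # ext4: create a dedicated journal device on the SSD, then point the fs at it
    mke2fs -O journal_dev /dev/sdj1
    mkfs.ext4 -J device=/dev/sdj1 /dev/md0

    # XFS: external log device, which also has to be named at mount time
    mkfs.xfs -l logdev=/dev/sdj2 /dev/md0
    mount -o logdev=/dev/sdj2 /dev/md0 /mnt/array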
-Ross
 
Old 04-16-2011, 12:07 AM
Christopher Chan
 
Default 40TB File System Recommendations

>
> As a matter of interest, does anyone know how to use an SSD drive for
> cache purposes with Linux software RAID? ZFS has this feature and it
> makes a helluva difference to a storage server's performance.

You cannot. You can, however, use one as the external journal for ext3/4
in full journaling mode for something similar.
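
Roughly like this, assuming a journal device has already been set up on
the SSD as in the previous message (device names are only examples):

    # ext3 with its journal on the SSD, mounted in full data journaling mode
    mkfs.ext3 -J device=/dev/sdj1 /dev/md0
    mount -o data=journal /dev/md0 /mnt/array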
 
Old 04-17-2011, 07:05 AM
Charles Polisher
 
Default 40TB File System Recommendations

On Wed, Apr 13, 2011 at 11:55:08PM -0400, Ross Walker wrote:
> On Apr 13, 2011, at 9:40 PM, Brandon Ooi <brandono@gmail.com> wrote:
>
> > On Wed, Apr 13, 2011 at 6:04 PM, Ross Walker <rswwalker@gmail.com> wrote:
> > >
> > > One was a hardware raid over fibre channel, which silently corrupted
> > > itself. System checked out fine, raid array checked out fine, xfs was
> > > replaced with ext3, and the system ran without issue.
> > >
> > > Second was multiple hardware arrays over linux md raid0, also over fibre
> > > channel. This was not so silent corruption, as in xfs would detect it
> > > and lock the filesystem into read-only before it, pardon the pun, truly
> > > fscked itself. Happened two or three times, before we gave up, split up
> > > the raid, and went ext3. Again, no issues.
> >
> > Every now and then I hear these XFS horror stories. They seem too
> > impossible to believe.
> >
> > Nothing breaks for absolutely no reason and failure to know where
> > the breakage was shows that maybe there wasn't adequately skilled
> > techinicians for the technology deployed.
> >
> > XFS if run in a properly configured environment will run flawlessly.

Here's some deconstruction of your argument:

"... and failure to know where the breakage was shows that maybe there
wasn't adequately skilled technicians for the technology deployed"

This is blaming the victim. One must have the time, skills and
often other resources to do root cause analysis.

"XFS if run in a properly configured environment will run flawlessly."

I think a more narrowly qualified opinion is appropriate: "XFS,
properly configured, running on perfect hardware atop a perfect
kernel, will have fewer serious bugs than it had on Jan 1, 2009."
Here's a summary of XFS bugzilla data from 2009 through today:

                      Bug Status
Severity         NEW  ASSIGNED  REOPENED  Total
blocker            3         .         .      3
critical          10         2         .     12
major             48         2         .     50
normal           118        46         3    167
minor             26         3         .     29
trivial            7         .         .      7
enhancement       39         9         1     49
Total            251        62         4    317

See also the XFS mailing list for a big dose of reality. Flawlessly
is not the label I would use for XFS. /Maybe/ for Ext2.
--
Charles Polisher


 
Old 04-18-2011, 04:27 PM
Ross Walker
 
Default 40TB File System Recommendations

On Apr 17, 2011, at 3:05 AM, Charles Polisher <cpolish@surewest.net> wrote:

> Here's some deconstruction of your argument:
>
> "... and failure to know where the breakage was shows that maybe there
> wasn't adequately skilled technicians for the technology deployed"
>
> This is blaming the victim. One must have the time, skills and
> often other resources to do root cause analysis.
>
> "XFS if run in a properly configured environment will run flawlessly."
>
> I think a more narrowly qualified opinion is appropriate: "XFS,
> properly configured, running on perfect hardware atop a perfect
> kernel, will have fewer serious bugs than it had on Jan 1, 2009."

I already apologized for those comments last week. No need to keep flogging a dead horse here.


> See also the XFS mailing list for a big dose of reality. Flawlessly
> is not the label I would use for XFS. /Maybe/ for Ext2.

Basically it comes down to this: all file systems, like all software, have bugs and edge cases, and thinking that one can find a file system that is bug-free is naive.

Test, test, test.

-Ross

 
