Old 09-08-2012, 04:50 PM
Veljko
 
Storage server

On Fri, Sep 07, 2012 at 09:43:57PM -0400, tdowg1 news wrote:
> >> I'm in the process of making a new backup server, so I'm thinking of the
> >> best way of doing it. I have 4 3TB disks and I'm thinking of putting them
> >> in software RAID10.
> >>
> >> I created 2 500MB partitions for /boot (RAID1) and the rest is RAID10.
> >
> > So far, so good.
> >
> >> LVM will provide me a way to expand storage with extra disks, but if
> >> there is no more room for that kind of expansion, I was thinking of
> >> GlusterFS for scaling-out.
> >
> > Let me suggest a different approach. It sounds like you're
> > planning on a lot of future expansion.
> >
> > Get a high-end SAS RAID card. One with two external SFF8088
> > connectors.
>
> I would 2nd the suggestion of investing in a high-end SAS RAID card. I
> would also avoid any kind of software raid if at all possible, at least for
> partitions that see a lot of I/O. I guess /boot would be ok, but I def
> would not put my root or /home under software raid if I had a discrete
> controller. However, you did say this is a storage/backup server and not a
> "main" machine... so I don't know... just something to think I guess.
> Software raid is free, so if performance becomes an issue you can always
> upgrade
>
> There are a lot of discrete hard disk drive controllers on the market. If
> you go this route, try to be sure that it supports SMART passthrough so
> that you can at least get _some_ kind of status from your drives.
>
> --tdowg1
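
[Aside: with smartmontools, drives behind a hardware RAID controller
usually need an explicit passthrough type before smartctl can see them.
A minimal sketch; the controller types and slot numbers below are
assumptions, not taken from the thread:

  # plain SATA disk on a non-RAID HBA
  smartctl -a -d sat /dev/sda
  # disk in slot 0 behind an LSI/MegaRAID controller
  smartctl -a -d megaraid,0 /dev/sda
  # disk on port 0 behind a 3ware controller
  smartctl -a -d 3ware,0 /dev/twa0
]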

If it were my call, I would go with a high-end RAID card as well. But in
this case I have to work without one. However, I've heard that software
RAID is good for one thing: you can rebuild it on any other machine. If
you use a hardware controller and it dies, you have to buy the same or a
very similar one to be able to save your data. Was I misinformed?
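
[Aside: that matches how Linux md works: the array metadata lives in
superblocks on the member disks, so the set can be reassembled on any
machine that has mdadm. A minimal sketch; /etc/mdadm/mdadm.conf is the
stock Debian location:

  # see what the kernel has already picked up
  cat /proc/mdstat
  # scan all member superblocks and start the arrays found
  mdadm --assemble --scan
  # record the arrays so the initramfs finds them at boot on the new box
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u
]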

Regards,
Veljko


 
Old 09-08-2012, 04:50 PM
Veljko
 
Storage server

On Fri, Sep 07, 2012 at 01:43:47PM -0600, Bob Proulx wrote:
> Veljko wrote:
> > Dan Ritter wrote:
> > > > The OS I would use is Wheezy. I guess it will be stable soon enough and I
> > > > don't want to reinstall everything again in one year, when support for
> > > > old stable is dropped.
> > >
> > > This is Debian. Since 1997 or so, you have had the ability to
> > > upgrade from major version n to version n+1 without
> > > reinstalling. You won't need to reinstall unless you change
> > > architectures (i.e. from x86_32 to x86_64).
> >
> > But isn't a complete reinstall the safest way? A dist-upgrade can go
> > wrong sometimes.
>
> If you follow the release notes there is no reason you shouldn't be
> able to upgrade from one major release to another. I have systems
> that started out as Woody that are currently running Squeeze.
> Upgrades work great. Debian, unlike some other distros, is all about
> being able to successfully upgrade. Upgrades work just fine. I have
> upgraded many systems and will be upgrading many more.
>
> But it is important to follow the release notes for the upgrade for
> each major release because there is special handling needed and it
> will be fully documented.
>
> Sometimes this special handling annoys me because the required manual
> cleanup mostly seems unnecessary to me if the packaging was done
> better. I would call them bugs. But regardless of the small
> packaging issues here and there the overall system upgrades just fine.
>
> Bob

I've actually never done a distro upgrade, but always thought that a clean
reinstall was the safest option. Somehow I thought that something could
always go wrong with big version changes, and that kind of thing in a
production environment is a big no-no. I stand corrected. This is, after
all, Debian, and it is expected to be highly stable.
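
[Aside: mechanically, the upgrade path described above is little more
than pointing sources.list at the new release and running apt; the
release notes remain the authoritative procedure. A rough sketch for a
squeeze-to-wheezy upgrade, mirror URL assumed:

  # /etc/apt/sources.list: change the release name, e.g.
  #   deb http://ftp.debian.org/debian squeeze main
  # becomes
  #   deb http://ftp.debian.org/debian wheezy main
  apt-get update
  apt-get upgrade          # minimal upgrade first, per the release notes
  apt-get dist-upgrade     # then the full distribution upgrade
]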

Regards,
Veljko


 
Old 09-08-2012, 06:06 PM
Martin Steigerwald
 
Storage server

On Friday, 7 September 2012, Veljko wrote:
> > This is Debian. Since 1997 or so, you have had the ability to
> > upgrade from major version n to version n+1 without
> > reinstalling. You won't need to reinstall unless you change
> > architectures (i.e. from x86_32 to x86_64).

> But isn't a complete reinstall the safest way? A dist-upgrade can go
> wrong sometimes.

The Debian Wheezy on my ThinkPad T42 started out as Debian Sarge or
something like that on my ThinkPad T23. Same with my workstation at work;
heck, I even recovered from a bit-error restore from a hardware RAID
controller by reinstalling every package that debsums complained about.

That should give you an idea of the upgradeability of Debian.
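
[Aside: the debsums-based recovery mentioned here is a fairly mechanical
loop: list the files whose checksums no longer match, map each one to
its package, and reinstall those packages. A sketch, assuming the
packages are still available from the configured mirrors:

  # list files that differ from the shipped md5sums
  debsums -c
  # map each changed file to its owning package and reinstall it
  debsums -c 2>/dev/null | xargs -r dpkg -S | cut -d: -f1 | sort -u |
      xargs -r apt-get install --reinstall -y
]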

I only ever installed a new system for the 32 => 64 bit switch. And in the
not too distant future even that might not be needed anymore. (Yes, I know
of the unofficial hacks that may even work without multiarch support, i.e.
the website with the big fat blinking warning not to do this.)

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-08-2012, 06:10 PM
Martin Steigerwald
 
Storage server

On Friday, 7 September 2012, Stan Hoeppner wrote:
> On 9/7/2012 12:42 PM, Dan Ritter wrote:
[…]
> > Now, the next thing: I know it's tempting to make a single
> > filesystem over all these disks. Don't. The fsck times will be
> > horrendous. Make filesystems which are the size you need, plus a
> > little extra. It's rare to actually need a single gigantic fs.
>
> What? Are you talking about crash-recovery boot-time "fsck"? With any
> modern journaled FS log recovery is instantaneous. If you're talking
> about an actual structure check, XFS is pretty quick regardless of
> inode count as the check is done in parallel. I can't speak to EXTx
> as I don't use them. For a multi terabyte backup server, XFS is the
> only way to go anyway. Using XFS also allows infinite growth without
> requiring array reshapes or LVM, while maintaining striped write
> alignment and thus maintaining performance.
>
> There are hundreds of 30TB+ and dozens of 100TB+ XFS filesystems in
> production today, and I know of one over 300TB and one over 500TB,
> attached to NASA's two archival storage servers.
>
> When using correctly architected reliable hardware there's no reason
> one can't use a single 500TB XFS filesystem.

I assume that such correctly architected hardware contains a lot of RAM in
order to be able to xfs_repair the filesystem in case of any filesystem
corruption.

I know the RAM usage of xfs_repair has been lowered, but such a 500 TiB
XFS filesystem can still contain a lot of inodes.

But for an XFS filesystem of up to 10 TiB I wouldn't care too much about
those issues.
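
[Aside: the striped-write alignment and growth-without-reshape points
come down to mkfs.xfs and xfs_growfs options. A rough sketch for a
4-disk md RAID10 with a 512 KiB chunk; device name, chunk size and
mount point are assumptions:

  # align XFS to the RAID geometry: su = md chunk size,
  # sw = number of data-bearing stripes (2 of the 4 disks in RAID10)
  mkfs.xfs -d su=512k,sw=2 /dev/md1
  # structure check without modifying anything (filesystem unmounted)
  xfs_repair -n /dev/md1
  # after the underlying device has grown, grow the mounted filesystem
  xfs_growfs /srv/backup
]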

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-08-2012, 06:13 PM
Martin Steigerwald
 
Storage server

On Friday, 7 September 2012, Bob Proulx wrote:
> Unfortunately I have some recent FUD concerning xfs. I have had some
> small idle xfs filesystems trigger kernel watchdog timer recoveries
> recently. Emphasis on idle. Active filesystems are always
> fine. I used /tmp as a large xfs filesystem but swapped it to be ext4
> due to these lockups. Squeeze. Everything current. But when idle it
> would periodically lock up and the only messages in the syslog and on
> the system console were concerning xfs threads timed out. When the
> kernel froze it always had these messages displayed[1]. It was simply
> using /tmp as a hundred gig or so xfs filesystem. Doing nothing but
> changing /tmp from xfs to ext4 resolved the problem and it hasn't seen
> a kernel lockup since. I saw that problem on three different machines
> but effectively all mine and very similar software configurations.
> And by kernel lockup I mean unresponsive and it took a power cycle to
> free it.
>
> I hesitated to say anything because of lacking real data but it means
> I can't completely recommend xfs today even though I have given it
> strong recommendations in the past. I am thinking that recent kernels
> are not completely clean specifically for idle xfs filesystems.
> Meanwhile active ones seem to be just fine. Would love to have this
> resolved one way or the other so I could go back to recommending xfs
> again without reservations.

Squeeze and everything current?

No way. At least not when using the default squeeze kernel, 2.6.32. It's really old.

Did you try with the latest 3.2 squeeze-backports kernel?
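
[Aside: pulling the 3.2 kernel from squeeze-backports is a small job;
the mirror URL below is the one backports used at the time, and the
exact image package name is an assumption that should be checked with
apt-cache first:

  # add the backports archive (file name is an assumption)
  echo 'deb http://backports.debian.org/debian-backports squeeze-backports main' \
      > /etc/apt/sources.list.d/backports.list
  apt-get update
  apt-cache search linux-image | grep 3.2   # find the exact package name
  apt-get -t squeeze-backports install linux-image-amd64
]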

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-08-2012, 06:23 PM
Martin Steigerwald
 
Storage server

On Saturday, 8 September 2012, Veljko wrote:
> On Fri, Sep 07, 2012 at 01:26:13PM -0500, Stan Hoeppner wrote:
> > On 9/7/2012 11:29 AM, Veljko wrote:
> > > I'm in the process of making a new backup server, so I'm thinking of
> > > the best way of doing it. I have 4 3TB disks and I'm thinking of
> > > putting them in software RAID10.
> >
> > ["what if" stream of consciousness rambling snipped for brevity]
> >
> > > What do you think of this setup? Good sides? Bad sides of this
> > > approach?
> >
> > Applying the brakes...
> >
> > As with many tech geeks with too much enthusiasm for various tools
> > and not enough common sense and seasoning, you've made the mistake
> > of approaching this backwards. Always start here:
> >
> > 1. What are the requirements of the workload?
> > 2. What is my budget and implementation date?
> > 3. How can I accomplish #1 given #2 with the
> > 4. Least complexity and
> > 5. Highest reliability and
> > 6. Easiest recovery if the system fails?
> >
> > You've described a dozen or so overly complex technical means to some
> > end that tend to violate #4 through #6.
> >
> > Slow down, catch your breath, and simply describe #1 and #2. We'll
> > go from there.
> >
> > --
> > Stan
>
> Well, it did sound a little too complex, and that is why I posted to this
> list, hoping to hear some other opinions.
>
> 1. This machine will be used for
> a) backup (backup server for several dedicated (mainly) web servers).
> It will contain incremental backups, so only the first run will take
> a lot of time; rsnapshot will later download only changed/added files
> and will run from cron every day. Files that will be added later are
> around 1-10 MB in size. I expect ~20 GB daily, but that number can
> grow. Some files will be deleted, others will be added.
> Dedicated servers that will be backed up are ~500GB in size.
> b) monitoring (Icinga or Zabbix) of dedicated servers.
> c) file sharing for employees (mainly small text files). I don't
> expect this to be a resource hog.
> d) Since there is enough space (for now), and the machine has four cores
> and 4GB RAM (that can be easily increased), I figured I can use it
> for test virtual machines. I usually work with 300MB virtual machines
> and no intensive load. Just testing some software.
>
> 2. There is no fixed implementation date, but I'm expected to start
> working on it. The sooner the better, but there are no deadlines.
> The equipment I have to work with is a desktop-class machine: Athlon X4,
> 4GB RAM and 4 3TB Seagate ST3000DM001 7200rpm drives. The server will be
> in my office and will perform backups over the internet. I do have an APC
> UPS to power off the machine in case of power loss (apcupsd will take
> care of that).
>
> In the next few months the size of files on the dedicated servers is
> expected to grow, and in case that really happens I'd like to be able to
> expand this system. Hardware RAID controllers are expensive and managers
> always want to go with the least expense possible, so I'm stuck with
> software RAID only.

Are you serious about that?

You are planning to mix backup, production workloads and testing on a
single *desktop class* machine?

If you had a redundant and failsafe virtualization cluster with 2-3 hosts
and a redundant and failsafe storage cluster, then maybe – except for the
backup. But for a single desktop class machine I'd advise against putting
such different workloads on it. Especially in an enterprise scenario.

While you may get away with running test and production VMs on a
virtualization host, I would at least physically (!) separate the backup
so that breaking the machine by testing stuff would not make the backup
inaccessible. And no: RAID is not a backup! So please forget about mixing
a backup with production/testing workloads. Now.

I personally do not see a strong reason against SoftRAID, although a
battery-backed hardware RAID controller can be quite nice for performance,
as you can disable cache flushing / barriers. But then that should be
possible with a battery-backed non-RAID controller, if there is any, as
well.
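
[Aside: the cache-flush / barrier tuning mentioned here is just a mount
option, and it is only safe when the controller really has a battery-
or flash-backed write cache. A sketch of an fstab line, assuming XFS on
/dev/md1 mounted at /srv/backup:

  # /etc/fstab -- nobarrier only with a battery-backed write cache
  /dev/md1  /srv/backup  xfs  noatime,nobarrier  0  2
]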

Thanks Stan for asking the basic questions. The answers made it obvious to
me that in its current form this can't be a sane setup.

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-08-2012, 06:53 PM
Veljko
 
Storage server

On Sat, Sep 08, 2012 at 08:23:36PM +0200, Martin Steigerwald wrote:
> Are you serious about that?
>
> You are planning to mix backup, production workloads and testing on a
> single *desktop class* machine?
>
> If you had a redundant and failsafe virtualization cluster with 2-3 hosts
> and a redundant and failsafe storage cluster, then maybe – except for the
> backup. But for a single desktop class machine I'd advise against putting
> such different workloads on it. Especially in an enterprise scenario.
>
> While you may get away with running test and production VMs on a
> virtualization host, I would at least physically (!) separate the backup
> so that breaking the machine by testing stuff would not make the backup
> inaccessible. And no: RAID is not a backup! So please forget about mixing
> a backup with production/testing workloads. Now.
>
> I personally do not see a strong reason against SoftRAID, although a
> battery-backed hardware RAID controller can be quite nice for performance,
> as you can disable cache flushing / barriers. But then that should be
> possible with a battery-backed non-RAID controller, if there is any, as
> well.
>
> Thanks Stan for asking the basic questions. The answers made it obvious to
> me that in its current form this can't be a sane setup.
>
> --
> Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
> GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7

Yes, I know how that sounds. But testing in my case means installing a
slim Debian, Apache on top of it and running some light web application
for a few hours. Nothing intensive. Just to have a fresh machine with
nothing on it. But if running it here sounds too bad I could just run it
somewhere else. Thanks for your advice, Martin!

On the other hand, monitoring has to be here; there is no place else to put it.

Regards,
Veljko


 
Old 09-08-2012, 07:28 PM
Martin Steigerwald
 
Storage server

On Saturday, 8 September 2012, Veljko wrote:
> On Sat, Sep 08, 2012 at 08:23:36PM +0200, Martin Steigerwald wrote:
> > Are you serious about that?
> >
> > You are planning to mix backup, production workloads and testing on
> > a single *desktop class* machine?
> >
> > If you had a redundant and failsafe virtualization cluster with 2-3
> > hosts and a redundant and failsafe storage cluster, then maybe –
> > except for the backup. But for a single desktop class machine I'd
> > advise against putting such different workloads on it. Especially in
> > an enterprise scenario.
> >
> > While you may get away with running test and production VMs on a
> > virtualization host, I would at least physically (!) separate the
> > backup so that breaking the machine by testing stuff would not make
> > the backup inaccessible. And no: RAID is not a backup! So please
> > forget about mixing a backup with production/testing workloads. Now.
> >
> > I personally do not see a strong reason against SoftRAID, although a
> > battery-backed hardware RAID controller can be quite nice for
> > performance, as you can disable cache flushing / barriers. But then
> > that should be possible with a battery-backed non-RAID controller,
> > if there is any, as well.
> >
> > Thanks Stan for asking the basic questions. The answers made it
> > obvious to me that in its current form this can't be a sane setup.
>
> Yes, I know how that sounds. But testing in my case means installing a
> slim Debian, Apache on top of it and running some light web application
> for a few hours. Nothing intensive. Just to have a fresh machine with
> nothing on it. But if running it here sounds too bad I could just run it
> somewhere else. Thanks for your advice, Martin!
>
> On the other hand, monitoring has to be here; there is no place else to put it.

Consider the consequences:

If the server fails, you possibly wouldn't know why, because the monitoring
information wouldn't be available anymore. So at least let Nagios / Icinga
send out mails, in case these are not stored on the server as well, or let
it relay the information to another Nagios / Icinga instance.

What data do you back up? Where does it come from?

I still think backup should be separate from other stuff. By design.

Well, for more fact-based advice we'd require a lot more information on
your current setup and what you want to achieve.

I recommend having a serious talk about acceptable downtimes and risks for
the backup with the customer, if you serve one, or with your boss, if you
work for one.

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
Old 09-08-2012, 07:51 PM
lee
 
Storage server

Veljko <veljko3@gmail.com> writes:

> On Fri, Sep 07, 2012 at 09:43:57PM -0400, tdowg1 news wrote:
>
> If it were my call, I would go with a high-end RAID card as well. But in
> this case I have to work without one. However, I've heard that software
> RAID is good for one thing: you can rebuild it on any other machine. If
> you use a hardware controller and it dies, you have to buy the same or a
> very similar one to be able to save your data. Was I misinformed?

Some people have argued it's even better to use software raid than a
hardware raid controller because software raid doesn't depend on
particular controller cards that can fail and can be difficult to
replace. Besides that, software raid is a lot cheaper.

So what is better, considering reliability? Performance might be a
different issue.


--
Debian testing amd64


 
Old 09-08-2012, 07:53 PM
Martin Steigerwald
 
Storage server

On Saturday, 8 September 2012, Veljko wrote:
> On Fri, Sep 07, 2012 at 01:26:13PM -0500, Stan Hoeppner wrote:
> > On 9/7/2012 11:29 AM, Veljko wrote:
> > > I'm in the process of making a new backup server, so I'm thinking of
> > > the best way of doing it. I have 4 3TB disks and I'm thinking of
> > > putting them in software RAID10.
> >
> > ["what if" stream of consciousness rambling snipped for brevity]
> >
> > > What do you think of this setup? Good sides? Bad sides of this
> > > approach?
> >
> > Applying the brakes...
> >
> > As with many tech geeks with too much enthusiasm for various tools
> > and not enough common sense and seasoning, you've made the mistake
> > of approaching this backwards. Always start here:
> >
> > 1. What are the requirements of the workload?
> > 2. What is my budget and implementation date?
> > 3. How can I accomplish #1 given #2 with the
> > 4. Least complexity and
> > 5. Highest reliability and
> > 6. Easiest recovery if the system fails?
> >
> > You've described a dozen or so overly complex technical means to some
> > end that tend to violate #4 through #6.
> >
> > Slow down, catch your breath, and simply describe #1 and #2. We'll
> > go from there.
>
> Well, it did sound a little too complex, and that is why I posted to this
> list, hoping to hear some other opinions.
>
> 1. This machine will be used for
> a) backup (backup server for several dedicated (mainly) web servers).
> It will contain incremental backups, so only the first run will take
> a lot of time; rsnapshot will later download only changed/added files
> and will run from cron every day. Files that will be added later are
> around 1-10 MB in size. I expect ~20 GB daily, but that number can
> grow. Some files will be deleted, others will be added.
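
[Aside: the rsnapshot setup described above amounts to a handful of
config lines plus cron entries. A minimal sketch; paths, host names and
retention counts are assumptions, and rsnapshot.conf fields must be
separated by tabs:

  # /etc/rsnapshot.conf (excerpt)
  snapshot_root   /srv/rsnapshot/
  retain          daily   7
  retain          weekly  4
  backup          root@web1.example.com:/        web1/

  # /etc/cron.d/rsnapshot -- larger intervals run shortly before smaller ones
  00 3 * * 0      root    /usr/bin/rsnapshot weekly
  30 3 * * *      root    /usr/bin/rsnapshot daily
]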

For rsnapshot, in my experience you need monitoring, because if it fails
it just complains to its log file, and it even puts only the rsync error
code there, without the actual error message, last I checked.

Let monitoring check whether daily.0 is not older than 24 hours.
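
[Aside: a check of that kind is a few lines of shell returning
Nagios/Icinga exit codes. A sketch, assuming the snapshot root is
/srv/rsnapshot:

  #!/bin/sh
  # alert if the newest rsnapshot snapshot is older than 24 hours
  SNAP=/srv/rsnapshot/daily.0
  MAX_AGE=$((24 * 3600))
  mtime=$(stat -c %Y "$SNAP" 2>/dev/null) || { echo "CRITICAL: $SNAP missing"; exit 2; }
  age=$(( $(date +%s) - mtime ))
  if [ "$age" -gt "$MAX_AGE" ]; then
      echo "CRITICAL: $SNAP is $((age / 3600)) hours old"
      exit 2
  fi
  echo "OK: $SNAP is $((age / 3600)) hours old"
  exit 0
]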

Did you consider putting those webservers into some bigger virtualization
host and then letting them use NFS exports for central storage, provided by
some server(s) that are freed up by this? You might even free up a
dedicated machine for monitoring and another one for the backup.

But well, any advice depends highly on the workload, so this is just
guesswork.

> Dedicated servers that will be backed up are ~500GB in size.

How many of them are there?

> b) monitoring (Icinga or Zabbix) of dedicated servers.

Then who monitors the backup? It ideally should be a different server than
this multi-purpose-do-everything-and-feed-the-dog machine you are talking
about.

> c) file sharing for employees (mainly small text files). I don't
> expect this to be a resource hog.

Another completely different workload.

Where do you intend to back up these files? I obviously wouldn't put that
backup on the same machine as the fileserver.

See how mixing lots of stuff into one machine makes things complicated?

You may save some hardware costs. But IMHO that is easily offset by higher
maintenance costs as well as a higher risk of service outage and the costs
that causes.

> d) Since there is enough space (for now), and the machine has four cores
> and 4GB RAM (that can be easily increased), I figured I can use it
> for test virtual machines. I usually work with 300MB virtual machines
> and no intensive load. Just testing some software.

4 GiB of RAM for a virtualization host that also does backup and
file services? You aren't kidding me, are you? If using KVM, I at least
suggest activating kernel samepage merging.
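
[Aside: kernel samepage merging is switched on through sysfs; a minimal
sketch, with making it persistent across reboots left out:

  # enable KSM so identical guest pages are merged
  echo 1 > /sys/kernel/mm/ksm/run
  # see how many pages are currently being shared
  cat /sys/kernel/mm/ksm/pages_sharing
]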

Fast storage also depends on cache memory, which the machine will lack if
you fill it with virtual machines.

And yes as explained already yet another different workload.

Even this ThinkPad T520 has more RAM, 8 GiB, and I only occasionally fire
up some virtual machines.

> 2. There is no fixed implementation date, but I'm expected to start
> working on it. The sooner the better, but there are no deadlines.
> The equipment I have to work with is a desktop-class machine: Athlon X4,
> 4GB RAM and 4 3TB Seagate ST3000DM001 7200rpm drives. The server will be
> in my office and will perform backups over the internet. I do have an APC
> UPS to power off the machine in case of power loss (apcupsd will take
> care of that).

Server-based loads on a desktop-class machine and possibly desktop-class
hard drives? I didn't look these up, so if they are enterprise drives with
an extended warranty, ignore my statement regarding them.

> In the next few months the size of files on the dedicated servers is
> expected to grow, and in case that really happens I'd like to be able to
> expand this system. Hardware RAID controllers are expensive and managers
> always want to go with the least expense possible, so I'm stuck with
> software RAID only.

Well, extending a RAID needs some thinking ahead. While you can just add
disks to add capacity – not redundancy – to an existing RAID, the risk of a
non-recoverable failure of the RAID increases. How do you intend to grow
the RAID? And to what maximum size?
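
[Aside: the usual software-only growth path here is not to reshape the
existing RAID10 at all, but to build a second array from the new disks
and add it to LVM. A rough sketch; all device, volume group and logical
volume names are assumptions:

  # new array from four additional disks
  mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[efgh]1
  # hand it to LVM, then grow the backup volume and its filesystem
  pvcreate /dev/md2
  vgextend vg0 /dev/md2
  lvextend -L +4T /dev/vg0/backup
  xfs_growfs /srv/backup        # XFS grows online, while mounted
]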

At least you do not intend to use RAID-5 or something like that. See

http://baarf.com

> But, one of the dedicated servers is slowly running out of space, so I
> don't think they will go for the cheapest option there. I'll have to take
> care of that too, but first things first.

So the customer is willing to use dedicated servers for different web sites
and other services, but more than one machine for the workloads you
described above is too much?

Sorry, I do not get this.

Serious and honest consulting here IMHO includes exposing the risks of
such a setup to those managers in an absolutely clear, easy-to-comprehend way.

Are these managers willing to risk losing the backup and facing several
days of downtime of the fileserver, backup and monitoring services in case
of a failure of this desktop-class machine?

If so, and if I were in a position to say no, I would just say "no thanks,
find yourself a different idiot to set up such an insane system". I
understand that you probably do not feel you are in that position…

> And, of course, thanks for your time and valuable advice, Stan. I've
> read some of your previous posts on this list and know you're a storage
> guru.

It wasn't Stan who wrote the mail you replied to here, but yes, I think I
can learn a lot from him regarding storage setups, too.

I would love to learn more about those really big XFS installations and
how they were made. I have never dealt with XFS setups bigger than about
4 TiB.

I read and write on these mailing lists to learn new stuff.

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
