09-10-2012, 10:47 AM
Veljko

Storage server

On Sun, Sep 09, 2012 at 03:42:12AM -0500, Stan Hoeppner wrote:
> Stop here. Never use a production system as a test rig.

Noted.

> You can build a complete brand new AMD dedicated test machine with parts
> from Newegg for $238 USD, sans KB/mouse/monitor, which you already have.
> Boot it up then run it headless, use a KVM switch, etc.
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16813186189
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820148262
> http://www.newegg.com/Product/Product.aspx?Item=N82E16819103888
> http://www.newegg.com/Product/Product.aspx?Item=N82E16822136771
> http://www.newegg.com/Product/Product.aspx?Item=N82E16827106289
> http://www.newegg.com/Product/Product.aspx?Item=N82E16811121118
>
> If ~$250 stretches the wallet of your employer, it's time for a new job.

Not all of us have that kind of luxury to be that picky about our job,
but I get your point.

> Get yourself an Adaptec 8 port PCIe x8 RAID card kit for $250:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816103231
>
> The Seagate ST3000DM001 is certified. It can't do RAID5 so you'll use
> RAID10, giving you 6TB of raw capacity, but much better write
> performance than RAID5. You can add 4 more of these drives, doubling
> capacity to 12TB. Comes with all cables, manuals, etc. Anyone who has
> tried to boot a server after losing the BIOS-configured boot drive of an
> mdraid mirror knows why $250 is far more than worth the money. A
> drive failure with a RAID card doesn't screw up your boot order. It
> just works.

I'm going to try to persuade my boss to buy one. In case he agrees and
I'm not able to find that card here (and I haven't so far), can you suggest
another one? What about something like this:
http://ark.intel.com/products/35340/Intel-RAID-Controller-SASMF8I

If not, how do I find an appropriate one? One with 8 supported devices and
hardware RAID10? What else should I look for?

> > In the next few months it is expected that the size of files on dedicated
> > servers will grow, and in case that really happens I'd like to be able to
> > expand this system.
>
> See above.
>
> > And, of course, thanks for your time and valuable advice, Stan. I've read
> > some of your previous posts on this list and know you're a storage guru.
>
> You're welcome. And thank you. Recommending the above Adaptec card
> is the best advice you'll get. It'll make your life much easier, with
> better performance to boot.
>
> --
> Stan

There is something that is not clear to me. You recommended hardware
RAID as the superior solution. I already knew that was the case, but I
thought that Linux software RAID is also some solution. What would be the
drawbacks of using it? In case one drive fails, is it possible that it
won't boot, or will it simply not boot at all? In case I don't get that
card, should I remove /boot from RAID1?

Regards,
Veljko


 
09-10-2012, 01:04 PM
The Wanderer

Storage server

On 09/09/2012 02:37 AM, Stan Hoeppner wrote:

> On 9/7/2012 3:16 PM, Bob Proulx wrote:
>
>>> What? Are you talking crash recovery boot time "fsck"? With any modern
>>> journaled FS log recovery is instantaneous. If you're talking about an
>>> actual structure check, XFS is pretty quick regardless of inode count as
>>> the check is done in parallel. I can't speak to EXTx as I don't use
>>> them.
>>
>> You should try an experiment and set up a terabyte ext3 and ext4 filesystem
>> and then perform a few crash recovery reboots of the system. It will
>> change your mind. :-)
>
> As I've never used EXT3/4 and thus have no opinion, it'd be a bit difficult
> to change my mind. That said, putting root on a 1TB filesystem is a brain
> dead move, regardless of FS flavor. A Linux server doesn't need more than
> 5GB of space for root. With /var, /home/ and /bigdata on other filesystems,
> crash recovery fsck should be quick.


In my case, / is a 100GB filesystem, and 36GB of it is in use - even with both
/var and /home on separate filesystems.

All but about 3GB of that is under /root (almost all of it in the form of manual
one-off backups), and could technically be stored elsewhere, but it made sense
to put it there since root is the one who's going to be working with it.

Yes, 100GB for / is way more than is probably necessary - but I've run up
against a too-small / in the past (with a 10GB filesystem), even when not
storing more than trivial amounts of data under /root, and I'd rather err on the
side of "too much" than "too little". Since I've got something like 7TB to play
with in total, 100GB didn't seem like too much space to potentially waste, for
the peace of mind of knowing I'd never run out of space on /. (And from the
current use level, it may not have really been wasted.)

--
The Wanderer

Warning: Simply because I argue an issue does not mean I agree with any
side of it.

Every time you let somebody set a limit they start moving it.
- LiveJournal user antonia_tiger


 
09-10-2012, 01:05 PM
Stan Hoeppner

Storage server

On 9/10/2012 5:47 AM, Veljko wrote:

> Not all of us have that kind of luxury to be that picky about our job,
> but I get your point.

Small companies with really tight purse strings may seem fine this week,
then suddenly fold the next, and everyone loses their jobs in the process.

>> Get yourself an Adaptec 8 port PCIe x8 RAID card kit for $250:
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816103231

> I'm gonna try to persuade my boss to buy one and in case he agrees and

It's the least expensive real RAID card w/8 ports on the market, and a
high quality one at that. LSI is best, Adaptec 2nd, then the rest.

> I'm not able to find that card here (and I haven't so far), can I have another one?

That's hard to believe given the worldwide penetration Adaptec has, and
the fact UPS/FedEx ship worldwide. What country are you in?

> What about something like this:
> http://ark.intel.com/products/35340/Intel-RAID-Controller-SASMF8I

This Intel HBA with software-assisted RAID is not a real RAID card. And
it uses the LSI1068 chip, so it probably doesn't support 3TB drives. In
fact it does not; the limit is 2TB:
http://www.intel.com/support/motherboards/server/sb/CS-032920.htm

> If not, how to find appropriate one? One with 8 supported devices,
> hardware RAID10? What else to look for?

There are many cards with the features you need. I simply mentioned the
least expensive one. Surely there is an international distributor in
your region that carries it. If you're in Antarctica and you're
limiting yourself to local suppliers, you're out of luck. Again, if you
tell us where you are it would make assisting you easier.

> There is something that is not clear to me. You recommended hardware
> RAID as superior solution. I already knew that it is the case, but I
> thought that linux software RAID is also some solution.

You mean "same" solution, yet? They are not equal. Far from it.

> What would be
> drawbacks of using it? In case of one drive failure, it is possible that
> it won't boot or it just won't boot?

This depends entirely on the system BIOS, its limitations, and how you
have device boot order configured. For it to work seamlessly you must
manually configure it that way. And you must make sure any/every time
you run lilo or grub that it targets both drives in the mirror pair,
assuming you've installed lilo/grub in the MBR.

Using a hardware RAID controller avoids all the nonsense above. You
simply tell the system BIOS to boot from "SCSI" or "external device",
whatever the manual calls it.

> In case I don't get that card,
> should I remove /boot from RAID1?

Post the output of

~$ cat /proc/mdstat
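For reference, on a box with, say, /boot on a two-disk RAID1, that output
typically looks something like this (array name and sizes are only an
example):

Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      498688 blocks super 1.2 [2/2] [UU]

unused devices: <none>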

I was under the impression you didn't have this system built and running
yet. Apparently you do. Are the 4x 3TB drives the only drives in this
system?

--
Stan


 
09-10-2012, 01:07 PM
Jon Dowland

Storage server

On Sat, Sep 08, 2012 at 06:49:45PM +0200, Veljko wrote:
> a) backup (backup server for several dedicated (mainly) web servers).
> It will contain incremental backups, so only first running will take a
> lot of time, rsnapshot

Best avoid rsnapshot. Use (at least) rdiff-backup instead, which is nearly
a drop-in replacement (but scales better); or consider something like bup
or obnam.
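A minimal run, pulling a remote web root over SSH (host and paths are just
placeholders), looks something like:

~$ rdiff-backup root@web1.example.com::/var/www /srv/backup/web1

The destination always holds the latest tree as plain files, plus reverse
diffs for the older versions, so restoring the current state is trivial.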

> and will run from cron every day. Files that will be added later are
> around 1-10 MB in size. I expect ~20 GB daily, but that number can
> grow. Some files fill be deleted, other will be added.

If you want files to eventually be purged from backups, avoid bup.
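With rdiff-backup, for instance, expiring old increments from cron is a
one-liner (the 30-day window is only an example):

~$ rdiff-backup --remove-older-than 30D /srv/backup/web1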


 
09-10-2012, 01:11 PM
Jon Dowland

Storage server

On Sat, Sep 08, 2012 at 09:51:05PM +0200, lee wrote:
> Some people have argued it's even better to use software raid than a
> hardware raid controller because software raid doesn't depend on
> particular controller cards that can fail and can be difficult to
> replace. Besides that, software raid is a lot cheaper.

You also get transferable skills: you can use the same tooling on different
systems. If you have a heterogeneous environment, you may have to learn a
totally different set of HW RAID tools for various bits and pieces, which
can be a pain.


 
09-10-2012, 01:19 PM
The Wanderer

Storage server

On 09/10/2012 09:05 AM, Stan Hoeppner wrote:

> On 9/10/2012 5:47 AM, Veljko wrote:
>
>> There is something that is not clear to me. You recommended hardware RAID
>> as superior solution. I already knew that it is the case, but I thought
>> that linux software RAID is also some solution.
>
> You mean "same" solution, yet? They are not equal. Far from it.
>
>> What would be drawbacks of using it? In case of one drive failure, it is
>> possible that it won't boot or it just won't boot?
>
> This depends entirely on the system BIOS, its limitations, and how you have
> device boot order configured. For it to work seamlessly you must manually
> configure it that way. And you must make sure any/every time you run lilo or
> grub that it targets both drives in the mirror pair, assuming you've
> installed lilo/grub in the MBR.
>
> Using a hardware RAID controller avoids all the nonsense above. You simply
> tell the system BIOS to boot from "SCSI" or "external device", whatever the
> manual calls it.


But from what I'm told, hardware RAID has the downside that it often relies on
the exact model of RAID card; if the card dies, you'll need an exact duplicate
in order to be able to mount the RAID. It also (at least in the integrated cases
I've seen) works only with the ports provided by the card, not with any/all
ports the system may have.

Hardware RAID is simpler to configure, is easier to maintain, and is faster (or,
at least, places less load on the CPU). My own experience seems to indicate
that, all else being equal, software RAID is less hardware-dependent and more
expandable.

There are advantages and disadvantages to both options, including probably some
I haven't listed. I personally prefer software RAID for almost all cases, simply
due to my own personal evaluation of how much aggravation each of those
advantages and disadvantages provides or avoids, but hardware RAID is certainly
a legitimate choice for those who evaluate them differently.

--
The Wanderer

Warning: Simply because I argue an issue does not mean I agree with any
side of it.

Every time you let somebody set a limit they start moving it.
- LiveJournal user antonia_tiger


 
09-10-2012, 01:55 PM
Stan Hoeppner

Storage server

On 9/10/2012 8:11 AM, Jon Dowland wrote:
> On Sat, Sep 08, 2012 at 09:51:05PM +0200, lee wrote:
>> Some people have argued it's even better to use software raid than a
>> hardware raid controller because software raid doesn't depend on
>> particular controller cards that can fail and can be difficult to
>> replace. Besides that, software raid is a lot cheaper.
>
> You also get transferrable skills: you can use the same tooling on different
> systems. If you have a heterogeneous environment, you may have to learn a
> totally different set of HW RAID tools for various bits and pieces, which
> can be a pain.

mdraid also allows one to use the absolute cheapest, low-ball hardware
on the planet, and a vast swath of mdraid users do exactly that,
assuming mdraid makes it more reliable. Wrong!

See the horror threads, and the data loss they describe, on the linux-raid
mailing list over the last few years for enlightenment.

Linux RAID is great in the right hands when used for appropriate
workloads. Too many people are using it who should not be, and giving
it a bad rap due to no fault of the software.

Hardware RAID has a minimum price of entry, in both currency and knowledge,
and forces one to use quality hardware and best current practices. Which is
why you don't often see horror stories about hardware RAID eating TBs of
filesystems and data. And when it does happen, it's usually because the
vendor or user skimped on hardware somewhere in the stack.

--
Stan


 
09-10-2012, 02:02 PM
Martin Steigerwald

Storage server

On Monday, 10 September 2012, Veljko wrote:
> On Sat, Sep 08, 2012 at 09:28:09PM +0200, Martin Steigerwald wrote:
> > Consider the consequences:
> >
> > If the server fails, you possibly wouldn't know why, because the
> > monitoring information wouldn't be available anymore. So at least
> > let Nagios / Icinga send out mails, in case these are not stored
> > on the server as well, or let it relay the information to another
> > Nagios / Icinga instance.
>
> Ideally, Icinga/Nagios/any server would be on an HA system but that,
> unfortunately, is not an option. But of course, Icinga can't monitor the
> system it's on, so I plan to monitor it from my own machine.

Hmmm, sounds like a workaround… but since it seems your resources are
tightly limited…

> > What data do you back up? Where does it come from?
>
> Like I said, it's several dedicated, mostly web, servers with user-uploaded
> content on one of them (that part is expected to grow). None of them is in
> the same data center.

Okay, so that's fine.

I would still not be comfortable mixing production stuff with a backup
server, but I think you could get away with it.

But then you need a different backup server for the production stuff on the
server and the files from the fileserver service that you plan to run on it,
because…

> > I still think backup should be separate from other stuff. By design.
> > Well, for more fact-based advice we'd require a lot more information
> > on your current setup and what you want to achieve.
> >
> > I recommend having a serious talk about acceptable downtimes and
> > risks for the backup with the customer if you serve one, or your boss
> > if you work for one.
>
> I talked to my boss about it. Since this is a backup server, downtime is
> acceptable to him. Regarding the risk of data loss, isn't that the reason
> to implement a RAID configuration? The "R" stands for redundancy. If a
> hard disk fails, it will be replaced and the RAID will be rebuilt with no
> data loss. If the processor or something else fails, it will be replaced,
> with some expected downtime of course.

… no again: RAID is not a backup.

RAID is about maximizing performance and/or minimizing downtime.

It's not a backup. And that's about it.

If you, or someone else, or an application that goes bonkers, deletes data
on the RAID by accident, it's gone. Immediately.

If you delete data on a filesystem that is backed up elsewhere, it's still
there, provided that you notice the data loss before the backup is
rewritten and old versions of it are rotated away.

See the difference?

Ok, so now you can argue: but if I rsnapshot the production data on this
server onto the same server, I can still access old versions of it even
when the original data is deleted by accident.

Sure. Unless, due to a hardware error like too many disks failing at once,
or a controller error, or a fire or whatnot, the RAID where the backup is
stored is gone as well.

This is why I won't ever consider carrying the backup of this notebook
around with the notebook itself. It just doesn't make sense. Neither for a
notebook, nor for a production server.

That's why I recommend an *offsite* backup for any data that you think is
important for your company. With offsite meaning at least a different
machine and a different set of hard disks.

If that doesn't get through to your boss, I do not know what will.

If you follow this, you need two boxes… But if you need two boxes, why
not just do the following:

1) virtualization host

2) backup host

to have a clear separation and an easier concept. Sure, you could replicate
the production data of the mixed production/data server to somewhere else,
but going down this route it seems to me that you add workaround upon
workaround upon workaround.

I find it way easier if the backup server does backup (and nothing else!)
and the production server does production (and nothing else). And removing
complexity removes possible sources of human error as well.

In case you go the above route, I wouldn't even feel too uncomfortable if
you ran some test VMs on the virtualization host. But that depends on how
critical the production services on it are.

Thanks,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
09-10-2012, 02:16 PM
Stan Hoeppner

Storage server

On 9/10/2012 8:19 AM, The Wanderer wrote:

> But from what I'm told, hardware RAID has the downside that it often
> relies on the exact model of RAID card; if the card dies, you'll need
> an exact duplicate in order to be able to mount the RAID.

You've been misinformed.

And, given your admission that you have no personal experience with
hardware RAID, and almost no knowledge of it, it seems odd you'd jump
into a thread and tell everyone about its apparent limitations.

--
Stan


 
09-10-2012, 02:18 PM
Martin Steigerwald

Storage server

On Monday, 10 September 2012, Jon Dowland wrote:
> On Sat, Sep 08, 2012 at 09:51:05PM +0200, lee wrote:
> > Some people have argued it's even better to use software raid than a
> > hardware raid controller because software raid doesn't depend on
> > particular controller cards that can fail and can be difficult to
> > replace. Besides that, software raid is a lot cheaper.
>
> You also get transferrable skills: you can use the same tooling on
> different systems. If you have a heterogeneous environment, you may
> have to learn a totally different set of HW RAID tools for various
> bits and pieces, which can be a pain.

I think you got a point here.

While the hardware of some nice LSI / Adaptec controllers appears excellent
to me, and the battery-backed cache can help performance a lot if you
configure mount options correctly, the software side, i.e. the
administration tools, is in my eyes pure and utter crap.

I usually installed 3-4 different packages from

http://hwraid.le-vert.net/wiki/DebianPackages

just to find out which tool it is this time. (And that's already from a
developer who provides packages; I won't go into downloading tools from
the manufacturer's website and installing them manually. Been there,
done that.)

And of course each one of these takes different parameters.

And then do Nagios/Icinga monitoring with this: you basically have to
write or install a different check for each different type of hardware
RAID controller.

This is such utter nonsense.

I really do think this strongly calls for some standardization.

I'd love to see a standard protocol for talking to hardware RAID
controllers, and then some open source tool for it. Also for setting up
the RAID (from a live Linux or whatever).

And do not get me started on the hardware RAID controller BIOS setups.
Usability-wise they tend to be so far beyond anything sane that I do not
even want to talk about it.

Because that's IMHO one of the biggest advantages of software RAID: you
have mdadm and are done with it. Sure, it has a flexibility that may lure
beginners into creating dangerous setups. But if you stick to best
practices, I think it's pretty reliable.
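For instance (the array name and the config path are just the Debian
defaults), the whole day-to-day workflow is:

~$ cat /proc/mdstat                     # quick health overview of every array
# mdadm --detail /dev/md0               # member disks, state, rebuild progress
# grep MAILADDR /etc/mdadm/mdadm.conf   # where the bundled monitor mails alerts

And those same commands work on any box running mdraid, whatever disks
and controllers sit underneath.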


Benefits of a standard + open source tool would be plenty:

1) One frontend to the controller, no need to develop and maintain a dozen
different tools. Granted, a good (!) BIOS setup may still be nice to be
able to set something up without booting a Linux live USB stick.

2) Lower learning curve.

3) Uniform monitoring.


Actually it's astonishing! You get pro hardware, but the software-based
admin tool is from the last century.


That's at least what I saw. If there are controllers by now which come with
software support that can be called decent, I'd like to know. I've never
seen an Areca controller; maybe they have better software.


Otherwise I agree with Stan: avoid dmraid. Either hardware RAID *or*
software RAID. Avoid anything in between.

Ciao,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7


 
