06-28-2010, 08:39 PM
John R Pierce

CentOS MD RAID 1 on Openfiler iSCSI

On 06/28/10 12:13 PM, Emmanuel Noobadmin wrote:
> Has anybody tried or knows if it is possible to create a MD RAID1
> device using networked iSCSI devices like those created using
> OpenFiler?
>
> The idea I'm thinking of here is to use two OpenFiler servers with
> physical drives in RAID 1, to create iSCSI virtual devices and run
> CentOS guest VMs off the MD RAID 1 device. Since theoretically, this
> setup would survive both a single physical drive failure as well as a
> machine failure on the storage side with a much shorter failover time
> than say using heartbeat.
>

I considered much the same a couple of years ago; it's certainly doable.
But after playing with it a bit in the lab, I moved on to something more
robust.

The downsides are: A) iSCSI on homebrew systems like openfiler tends to be
less than rock-solid reliable, and B) upon a 'failure', the rebuild
will require remirroring the whole volume, which is going to take
quite a while across two iSCSI targets.
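For anyone trying it anyway, a minimal sketch of the initiator side on CentOS (the portal IPs, IQNs, and /dev/sd* names below are made up; actual device names depend on discovery order, so /dev/disk/by-path is safer in practice). Note mdadm's write-intent bitmap, which limits a resync after a transient dropout to the dirty regions rather than remirroring the whole volume:

```shell
# Discover and log into one target on each storage box (hypothetical addresses)
iscsiadm -m discovery -t sendtargets -p 192.168.0.11
iscsiadm -m discovery -t sendtargets -p 192.168.0.12
iscsiadm -m node -T iqn.2010-06.org.example:filer1.vol0 -p 192.168.0.11 --login
iscsiadm -m node -T iqn.2010-06.org.example:filer2.vol0 -p 192.168.0.12 --login

# Build the RAID1 across the two imported block devices; the internal
# write-intent bitmap keeps post-dropout resyncs to the changed blocks only
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
    /dev/sdb /dev/sdc

# Record the array so it can be assembled on boot
mdadm --detail --scan >> /etc/mdadm.conf
```

This only sketches the mirror itself; it doesn't address the iSCSI timeout behaviour discussed further down the thread.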


_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
 
06-28-2010, 08:53 PM
Emmanuel Noobadmin

CentOS MD RAID 1 on Openfiler iSCSI

On 6/29/10, Karanbir Singh <mail-lists@karan.org> wrote:
> Depends on how you set it up: if you have 2 machines (disk nodes)
> exporting iscsi and 1 machine (data node) doing the import and setting up a
> raid1, you can afford to have one of those two machines down. You *can't*
> afford to have the data-node down. That's where the filesystem lives. You
> can potentially have the same disks from the disk-nodes imported to a
> standby data node using something like drbd over the mdraid setup.
> Alternatively, you can look at using a clustered filesystem and have it
> go X way. But then you may as well use something like gnbd with gfs2
> instead(!).

Looking up gfs2 was what led me to glusterFS actually, and because
glusterFS had all the RAID stuff pointed out up front, I stopped
reading about gfs2. Googling gluster then led to openFiler, which
seemed like a simpler way to achieve the objectives.

> Yes, lots of options and different ways of doing the same thing. So
> start at the top, make a list of all the problems you are trying to
> solve. then split that into 3 segments:
> - Must have
> - Good to have
> - Don't really need
>

Must have
- low cost, clients have a budget which was why mirroring all the
machines is not an option
- data redundancy, application servers can go down, but data must not
be lost/corrupted.
- expandable capacity
- works with VM
- doable by noob admin

Good to have
- able to add/restore capacity without needing to take down the whole setup
- application server redundancy
- webUI for remote management

I've done mostly LVM + mdraid setups so far, hence the openfiler +
remote iSCSI RAID route looks to fit the above and seems the simplest
option (fewer new things to learn/mess up); most of the alternatives
appear to need multiple components working together.
 
06-28-2010, 09:04 PM
Les Mikesell

CentOS MD RAID 1 on Openfiler iSCSI

On 6/28/2010 3:25 PM, Emmanuel Noobadmin wrote:
>
>> I don't see why not. But you don't need openfiler to give you iscsi
>> capability. CentOS-5.1+ has had the ability to export an iscsi target
>> itself with all the tooling built in.
>
> I'm not sure yet, since openFiler seems to provide a few more options:
> if I'm not mistaken, the ability to soft RAID 5/6 across multiple machines
> and do remote block duplication. So theoretically, I'm thinking that with
> openFiler presenting a frontend to the application servers, I could
> increase storage without having to mess with the application server
> setup.


If you are looking at openfiler, you might also want to consider
NexentaStor. Their community edition is free for up to 12TB of storage.
It's an OpenSolaris/ZFS-based system with web management, able to
export cifs/nfs/ftp/sftp/iscsi with support for snapshots,
deduplication, compression, etc. I haven't used it beyond installing it
in a VM and going through some options, but it looks more capable than
anything else I've seen for free.

--
Les Mikesell
lesmikesell@gmail.com
 
06-29-2010, 02:53 AM
Emmanuel Noobadmin

CentOS MD RAID 1 on Openfiler iSCSI

On 6/29/10, Les Mikesell <lesmikesell@gmail.com> wrote:

> If you are looking at openfiler, you might also want to consider
> nexentastor. Their community edition is free for up to 12TB of storage.
> It's an OpenSolaris/ZFS based system with web management, able to
> export cifs/nfs/ftp/sftp/iscsi with support for snapshots,
> deduplication, compression, etc. I haven't used it beyond installing in
> a VM and going through some options, but it looks more capable than
> anything else I've seen for free.

Thanks for the info, it looks quite interesting and seems like a
simpler option, given the claim of an easy setup wizard getting things
done in 15 minutes.

The only problem is their HA is commercial-only and costs more than
the entire hardware budget I've got for this. Crucially, it relies on
a failover/heartbeat kind of arrangement. According to some sources,
the failover delay of a few seconds will cause certain services/apps
to fail/lock up. Not an issue for the immediate need, but it will be a
major no-no for the other project I have in the pipeline.

Which was why I was thinking of MD RAID 1 on the application server
side: no failover delay if one of the data servers fails to respond
in time.
 
06-29-2010, 05:04 AM
Christopher Chan

CentOS MD RAID 1 on Openfiler iSCSI

On Tuesday, June 29, 2010 10:53 AM, Emmanuel Noobadmin wrote:
> On 6/29/10, Les Mikesell<lesmikesell@gmail.com> wrote:
>
>> If you are looking at openfiler, you might also want to consider
>> nexentastor. Their community edition is free for up to 12TB of storage.
>> It's an OpenSolaris/ZFS based system with web management, able to
>> export cifs/nfs/ftp/sftp/iscsi with support for snapshots,
>> deduplication, compression, etc. I haven't used it beyond installing in
>> a VM and going through some options, but it looks more capable than
>> anything else I've seen for free.
>
> Thanks for the info, it looks quite interesting and seems like a
> simpler option given the claim of easy setup wizard doing things in 15
> minutes.
>
> The only problem is their HA is commercial only and costs more than
> the entire hardware budget I've got for this. Crucially, it relies on
> a failover/heartbeat kind of arrangement. According to some sources,
> the failover delay of a few seconds will cause certain services/apps
> to fail/lock up. Not an issue for the immediate need but will be a
> major no no for the other project I have in the pipeline.

So install Nexenta CP2/CP3 then. That's completely free and ZFS has its
own web interface...
 
06-29-2010, 05:06 AM
Christopher Chan

CentOS MD RAID 1 on Openfiler iSCSI

On Tuesday, June 29, 2010 04:53 AM, Emmanuel Noobadmin wrote:
> On 6/29/10, Karanbir Singh<mail-lists@karan.org> wrote:
>> Depends on how you set it up: if you have 2 machines (disk nodes)
>> exporting iscsi and 1 machine (data node) doing the import and setting up a
>> raid1, you can afford to have one of those two machines down. You *can't*
>> afford to have the data-node down. That's where the filesystem lives. You
>> can potentially have the same disks from the disk-nodes imported to a
>> standby data node using something like drbd over the mdraid setup.
>> Alternatively, you can look at using a clustered filesystem and have it
>> go X way. But then you may as well use something like gnbd with gfs2
>> instead(!).
>
> Looking up gfs2 was what led me to glusterFS actually, and because
> glusterFS had all the RAID stuff pointed out up front, I stopped
> reading about gfs2. Googling gluster then led to openFiler, which
> seemed like a simpler way to achieve the objectives.

No ACLs on Gluster... but I suppose you have no need for ACL support...


 
06-29-2010, 05:23 AM
Emmanuel Noobadmin

CentOS MD RAID 1 on Openfiler iSCSI

On 6/29/10, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:
>> The only problem is their HA is commercial only and costs more than
>> the entire hardware budget I've got for this. Crucially, it relies on
>> a failover/heartbeat kind of arrangement. According to some sources,
>> the failover delay of a few seconds will cause certain services/apps
>> to fail/lock up. Not an issue for the immediate need but will be a
>> major no no for the other project I have in the pipeline.
>
> So install Nexenta CP2/CP3 then. That's completely free and ZFS has its
> own web interface...

Sorry, a little brain-dead by now, but how would the Nexenta Core
Platform (I assume this is the CP you are referring to) solve the
failover delay problem, since it would still be relying on HB to do
failover monitoring, right?

Or do you mean to use NCP for the storage units, relying on ZFS to do
the disk management, and export iSCSI interfaces for the application to
use as MD RAID 1 members?
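If the idea is that second option (NCP on the storage boxes, MD RAID 1 on the application server), the export side could be sketched roughly with ZFS + COMSTAR. This is a sketch under assumptions: the pool name `tank`, the volume name and size are invented, the GUID is a placeholder for the one sbdadm prints, and I haven't verified the exact steps on NCP specifically:

```shell
# Carve a zvol out of the pool to serve as one RAID1 member (hypothetical names)
zfs create -V 200G tank/md-member0

# Register the zvol as a SCSI logical unit and expose it to initiators
sbdadm create-lu /dev/zvol/rdsk/tank/md-member0
stmfadm add-view 600144f0deadbeef00000000    # placeholder: GUID from sbdadm output

# Create the iSCSI target the Linux initiator will log into
itadm create-target
```

The Linux side would then discover that target with iscsiadm and hand the resulting block device to mdadm as one mirror half.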
 
06-29-2010, 05:58 AM
Les Mikesell

CentOS MD RAID 1 on Openfiler iSCSI

Christopher Chan wrote:
> On Tuesday, June 29, 2010 10:53 AM, Emmanuel Noobadmin wrote:
>> On 6/29/10, Les Mikesell<lesmikesell@gmail.com> wrote:
>>
>>> If you are looking at openfiler, you might also want to consider
>>> nexentastor. Their community edition is free for up to 12TB of storage.
>>> It's an OpenSolaris/ZFS based system with web management, able to
>>> export cifs/nfs/ftp/sftp/iscsi with support for snapshots,
>>> deduplication, compression, etc. I haven't used it beyond installing in
>>> a VM and going through some options, but it looks more capable than
>>> anything else I've seen for free.
>> Thanks for the info, it looks quite interesting and seems like a
>> simpler option given the claim of easy setup wizard doing things in 15
>> minutes.
>>
>> The only problem is their HA is commercial only and costs more than
>> the entire hardware budget I've got for this. Crucially, it relies on
>> a failover/heartbeat kind of arrangement. According to some sources,
>> the failover delay of a few seconds will cause certain services/apps
>> to fail/lock up. Not an issue for the immediate need but will be a
>> major no no for the other project I have in the pipeline.
>
> So install Nexenta CP2/CP3 then. That's completely free and ZFS has its
> own web interface...

Or run two NexentaStor (free community version) instances not configured
for HA, and do what you planned with MD RAID on their iSCSI targets.

--
Les Mikesell
lesmikesell@gmail.com
 
06-29-2010, 07:08 AM
Christopher Chan

CentOS MD RAID 1 on Openfiler iSCSI

On Tuesday, June 29, 2010 01:23 PM, Emmanuel Noobadmin wrote:
> On 6/29/10, Christopher Chan<christopher.chan@bradbury.edu.hk> wrote:
>>> The only problem is their HA is commercial only and costs more than
>>> the entire hardware budget I've got for this. Crucially, it relies on
>>> a failover/heartbeat kind of arrangement. According to some sources,
>>> the failover delay of a few seconds will cause certain services/apps
>>> to fail/lock up. Not an issue for the immediate need but will be a
>>> major no no for the other project I have in the pipeline.
>>
>> So install Nexenta CP2/CP3 then. That's completely free and ZFS has its
>> own web interface...
>
> Sorry, a little braindead by now but how would the Nexenta Core
> Platform (I assume this is the CP you are referring to), solve the
> failover delay problem since it would still be relying on HB to do
> failover monitoring right?
>
> Or do you mean to use NCP for the storage units, relying on ZFS to do
> the disk management and export iSCSI interfaces to the application to
> use as MD RAID 1 members?

raid1/iscsi if you have a single host accessing the data, or gluster if
you have more than one host accessing the data...

NexentaStor has an HA distributed filesystem? Gotta take a closer look
at that.
 
06-29-2010, 07:13 AM
John R Pierce

CentOS MD RAID 1 on Openfiler iSCSI

On 06/28/10 7:53 PM, Emmanuel Noobadmin wrote:
> The only problem is their HA is commercial only and costs more than
> the entire hardware budget I've got for this. Crucially, it relies on
> a failover/heartbeat kind of arrangement. According to some sources,
> the failover delay of a few seconds will cause certain services/apps
> to fail/lock up. Not an issue for the immediate need but will be a
> major no no for the other project I have in the pipeline.
>
> Which was why I was thinking of MD RAID 1 on the application server
> side: no failover delay if one of the data servers fails to respond
> in time.
>

iSCSI gets REAL sketchy on network failures. It takes at least several
TCP timeouts before it gives up and returns an error. I do hope both
storage servers have ECC so you're not mirroring a soft bit error at an
inopportune time (Solaris ZFS would cope gracefully with this, but likes
iSCSI timeouts even less than dmraid).
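That timeout stall is at least tunable on the Linux initiator side. A sketch of /etc/iscsi/iscsid.conf settings for open-iscsi follows; the parameter names are standard open-iscsi ones, but the values here are illustrative guesses for this scenario, not tested recommendations. The idea is to have I/O fail upward quickly so md can simply kick the dead mirror half instead of stalling:

```
# /etc/iscsi/iscsid.conf (open-iscsi initiator)

# How long to queue I/O waiting for a broken session to recover before
# failing it up to the block layer; the default of 120s would stall md
# for two minutes before it could mark the member faulty.
node.session.timeo.replacement_timeout = 15

# NOP-Out pings so a dead target is detected sooner than TCP would notice
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
```

The trade-off is that an aggressively low replacement_timeout turns every brief network hiccup into a degraded array and a (bitmap-assisted) resync.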


 
