Old 05-29-2008, 01:46 AM
"Mag Gam"
 
Default GFS

Hello:

I am planning to implement GFS at my university as a summer project. I have 10 servers, each with SAN disks attached. I will be reading and writing many files for professors' research projects. Each file can be anywhere from 1 KB to 120 GB (fluid-dynamics research images). The 10 servers will be using NIC bonding (1 Gb network links). So, would GFS be ideal for this? I have been reading a lot about it, and it seems like a perfect solution.
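(For reference, NIC bonding on CentOS 4/5 of that era was set up roughly as in the sketch below; the interface names, bonding mode, and address are illustrative assumptions, not details from this post.)

# /etc/modprobe.conf -- load the bonding driver for bond0
alias bond0 bonding
options bond0 mode=balance-rr miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- one slave (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none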


Any thoughts?

TIA

 
Old 05-29-2008, 09:16 AM
Karanbir Singh
 
Default GFS

Mag Gam wrote:
> I am planning to implement GFS for my university as a summer project. I
> have 10 servers each with SAN disks attached.

GFS works well; gfs2 is at the moment in technology-preview mode only,
but it's still worth looking at.

--
Karanbir Singh : http://www.karan.org/ : 2522219@icq
 
Old 05-29-2008, 09:46 AM
"Mag Gam"
 
Default GFS

So, how do you have your setup?

How many nodes? I need something stable so I will look into GFSv1, but may consider GFSv2 later on.



On Thu, May 29, 2008 at 5:16 AM, Karanbir Singh <mail-lists@karan.org> wrote:
> Mag Gam wrote:
>> I am planning to implement GFS at my university as a summer project. I
>> have 10 servers each with SAN disks attached.
>
> GFS works well; gfs2 is at the moment in technology-preview mode only,
> but it's still worth looking at.
 
Old 05-30-2008, 03:21 AM
Jay Leafey
 
Default GFS

Mag Gam wrote:
> I am planning to implement GFS at my university as a summer project. I
> have 10 servers, each with SAN disks attached. I will be reading and
> writing many files for professors' research projects. Each file can be
> anywhere from 1 KB to 120 GB (fluid-dynamics research images). [...]
>
> Any thoughts?



"Perfect"? No, but usable. We've got a cluster of 4 systems attached
to a fibre-channel-based SAN running CentOS 4 and the Cluster Suite
components with multiple instances of the Oracle database. It actually
works pretty well and fails over nicely in the case of exceptions. It
is moderately complex to set up, but the information needed REALLY IS in
the docs... you just have to REALLY read them!


We haven't tried CentOS 5 and the new cluster components, as Oracle only
supports the version of the database we're running on Red Hat EL4. That
said, the CentOS 5 combination looks a bit more "finished" than the
versions in EL4.


Another alternative that we are examining is using OCFS2 (Oracle Cluster
File System 2) and iSCSI for the shared storage with Heartbeat for
service management. This combination looks to be a bit "lighter" than
the Cluster Suite and GFS, but I'm hoping to confirm or disprove that
impression this summer in my "copious free time".
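For anyone evaluating the same combination: OCFS2 cluster membership lives in a single flat file, /etc/ocfs2/cluster.conf, roughly as sketched below (node names and addresses are made up for illustration); after writing it, "service o2cb configure" brings up the O2CB stack.

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = node2
        cluster = ocfs2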


As usual, your mileage may vary.
--
Jay Leafey - Memphis, TN
jay.leafey@mindless.com
 
Old 05-30-2008, 10:16 AM
Karanbir Singh
 
Default GFS

Jay Leafey wrote:
> Another alternative that we are examining is using OCFS2 (Oracle Cluster
> File System 2) and iSCSI for the shared storage with Heartbeat for
> service management. This combination looks to be a bit "lighter" than
> the Cluster Suite and GFS, but I'm hoping to confirm or disprove that
> impression this summer in my "copious free time".

OCFS isn't really worth spending time on anymore. IIRC, even Oracle no
longer supports an OCFS/OCFS2-based backend store.

You might as well consider GPFS (the cost per machine isn't that high,
and there are reasonable assurances that it would work).

/me is still thrashing out gfs2 though, and Conga! and cluster LVM!


--
Karanbir Singh : http://www.karan.org/ : 2522219@icq
 
Old 10-07-2008, 08:49 PM
Ryan Golhar
 
Default GFS

Has anyone successfully set up GFS? I have a SAN connected to several
computers by fibre, and it appears that GFS is the way to go as opposed
to using an NFS server.


Do I really need to set up all the other aspects of a Red Hat cluster to
get GFS to work? There doesn't seem to be a good HOWTO for this
anywhere, and the Red Hat docs are not as helpful as I would have liked.


 
Old 10-08-2008, 06:23 AM
lingu
 
Default GFS

Hi,

Sorry, I don't have much time, so I'm sending this in a hurry; if you
get stuck anywhere, mail me. Apologies for any misspellings.

For GFS to work you need to install all the cluster-related RPMs and
configure a simple running cluster with the following things set up.

Cluster RPMs and dependencies:

rpm -ivh ccs-1.0.10-0.i686.rpm cluster-cim-0.9.1-8.i386.rpm
cluster-snmp-0.9.1-8.i386.rpm cman-1.0.17-0.i686.rpm
cman-kernel-2.6.9-50.2.i686.rpm cman-kernel-smp-2.6.9-50.2.i686.rpm
cman-kernheaders-2.6.9-50.2.i686.rpm dlm-1.0.3-1.i686.rpm
dlm-kernel-2.6.9-46.16.i686.rpm dlm-kernel-smp-2.6.9-46.16.i686.rpm
dlm-kernheaders-2.6.9-46.16.i686.rpm fence-1.32.45-1.i686.rpm
iddev-2.0.0-4.i686.rpm ipvsadm-1.24-6.i386.rpm luci-0.9.1-8.i386.rpm
magma-1.0.7-1.i686.rpm magma-devel-1.0.7-1.i686.rpm
magma-plugins-1.0.12-0.i386.rpm modcluster-0.9.1-8.i386.rpm
perl-Net-Telnet-3.03-3.noarch.rpm rgmanager-1.9.68-1.i386.rpm
system-config-cluster-1.0.45-1.0.noarch.rpm gulm-1.0.10-0.i686.rpm

GFS RPMs and dependencies:

rpm -ivh cmirror-1.0.1-1.i386.rpm cmirror-kernel-2.6.9-32.0.i686.rpm
cmirror-kernel-smp-2.6.9-32.0.i686.rpm GFS-6.1.14-0.i386.rpm
GFS-kernel-2.6.9-72.2.i686.rpm GFS-kernel-smp-2.6.9-72.2.i686.rpm
GFS-kernheaders-2.6.9-72.2.i686.rpm
lvm2-cluster-2.02.21-7.el4.i386.rpm


Load the GFS kernel module:

modprobe -v gfs

Then run system-config-cluster and configure:

1) Cluster name: apps_cluster

2) Cluster nodes:
   clusternode name="node1"
   clusternode name="node2"

3) Fence devices:
   fencedevice agent="fence_manual" name="test"

4) Failover domains:
   failoverdomain name="apps" ordered="0" restricted="0"
      failoverdomainnode name="node1" priority="1"
      failoverdomainnode name="node2" priority="1"

Then start the cluster, and make sure it is up and running without errors.

GFS then requires LVM. Execute the commands below from node1 or node2,
provided the storage LUNs are presented to both nodes.

Example:
1) pvcreate /dev/sdd1
2) pvdisplay
3) vgcreate testapps /dev/sdd1
4) vgdisplay
5) lvcreate -L 135G -n data testapps
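(One step the list above glosses over, assuming you run clustered LVM: the volume group should be marked clustered so clvmd coordinates LVM metadata across nodes. A hedged sketch, since the original post doesn't show it; with clvmd in use, step 3 becomes the vgcreate below:)

lvmconf --enable-cluster           # sets locking_type = 3 in /etc/lvm/lvm.conf
service clvmd start                # on every node
vgcreate -c y testapps /dev/sdd1   # -c y creates the VG as clustered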

To format the file system with GFS you need the details below.

6) cman_tool status

   Cluster name: apps_cluster

apps_cluster is the name of the cluster (you can get it from the command
above), and data is the logical volume name used in the lvcreate command
above; together they form the lock table name apps_cluster:data.

gfs_mkfs -p lock_dlm -t apps_cluster:data -j 7 /dev/testapps/data


Options:

-p LockProtoName
   The lock protocol to use; lock_dlm for a cluster.

-t LockTableName
   The lock table field appropriate to the lock module you're using.
   It is clustername:fsname. The cluster name must match the one in
   cluster.conf.

-j Number
   Specifies the number of journals to be created by the gfs_mkfs
   command. One journal is required for each node that mounts the file
   system. (More journals than are needed can be specified at creation
   time to allow for future expansion.)



After that u can mount it to ur desired mount point in our case we
created /data-new using mkdir

mount -t gfs /dev/testapps/data /data-new/
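To make the mount persistent across reboots, and to add journals later as the node count grows, something like the following should work (the fstab options shown are plain defaults, an assumption rather than part of the original steps):

# /etc/fstab
/dev/testapps/data   /data-new   gfs   defaults   0 0

# add two more journals later, run against the mounted file system
gfs_jadd -j 2 /data-new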




2008/10/8 Ryan Golhar <golharam@umdnj.edu>:
> Has anyone successfully set up GFS? I have a SAN connected to several
> computers by fibre, and it appears that GFS is the way to go as opposed
> to using an NFS server. [...]

 
Old 10-29-2008, 12:23 PM
Kristoffer Knigga
 
Default GFS

I just took RH436 last week, so here is what I understand from that:

You must have a basic cluster set up to use GFS. The reason is that everything using a shared file system like this must be able to communicate in order to negotiate locking. Without such communication, you'd have the possibility of node1 and node2 trying to write the same block at the same time, causing inconsistent data. Red Hat Cluster Suite manages this communication to ensure everything is copacetic.
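To make that concrete: with the cluster lock manager, GFS will refuse to mount unless the node is a member of a quorate cluster; the only way around it is the no-locking protocol, which is safe on a single node only. A hedged illustration, reusing the device and mount point from lingu's walkthrough earlier in the thread:

# normal clustered mount: requires cman and the DLM to be up
mount -t gfs -o lockproto=lock_dlm /dev/testapps/data /data-new

# single-node testing ONLY; a second node mounting this way will corrupt the fs
mount -t gfs -o lockproto=lock_nolock /dev/testapps/data /data-new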

Shared storage + GFS is not a replacement for NFS.

Kris



 
Old 10-29-2008, 12:26 PM
"Marti, Rob"
 
Default GFS

It's at least a partial replacement - instead of a single box exporting an NFS share for a bunch of boxes to mount (i.e. a single point of failure), you have each box mount it directly.

But yes, you need the clusterware installed and configured for GFS to be usable.

Kristoffer Knigga wrote:
> You must have a basic cluster set up to use GFS. [...]
>
> Shared storage + GFS is not a replacement for NFS.
 
Old 10-29-2008, 12:34 PM
Kristoffer Knigga
 
Default GFS

For something like /home, though, a small NFS cluster would probably be way less of a hassle than a huge GFS cluster, and it would still eliminate the single point of failure.
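For a rough idea of what that looks like under Red Hat Cluster Suite, a failover NFS service is just an rgmanager service stanza in cluster.conf; the sketch below is illustrative only, with a made-up floating address, device, and export path:

<rm>
  <failoverdomains>
    <failoverdomain name="nfsdom" ordered="0" restricted="0">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <service name="nfs-home" domain="nfsdom" autostart="1">
    <ip address="10.0.0.100" monitor_link="1"/>
    <fs name="homefs" device="/dev/vg0/home" mountpoint="/home" fstype="ext3">
      <nfsexport name="exports">
        <nfsclient name="all" target="*" options="rw"/>
      </nfsexport>
    </fs>
  </service>
</rm>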



Marti, Rob wrote:
> It's at least a partial replacement - instead of a single box exporting
> an NFS share for a bunch of boxes to mount (i.e. a single point of
> failure), you have each box mount it directly.
>
> But yes, you need the clusterware installed and configured for GFS to
> be usable.
 
