Linux Archive » CentOS » connecting 2 servers using an FC card via iSCSI

"nate" 03-18-2009 10:46 PM

connecting 2 servers using an FC card via iSCSI
 
Erick Perez wrote:
> Nate, Ross: thanks for the information. I now understand the difference.
>
> Ross: I can't ditch MSSS since it is a government purchase, so I *must*
> use it until something breaks and budget is assigned, and maybe in 2
> years we can buy something else. The previous boss purchased this
> equipment, and I guess an HP EVA, NetApp or some other sort of NAS/SAN
> equipment was better suited for the job... but go figure!
>
> Nate: The whole idea is to use the MSSServer and connect several servers
> to it. It has 5 available slots, so a bunch of cards can be placed there.
>
> I think (after reading your comments) that I can install 2 dual-port
> 10Gb network cards in the MSSS, configure it for jumbo frames (9k), and
> then put 10Gb network cards on the servers that will connect to this
> MSSS and also enable 9k frames. All this, of course, connected to a good
> 10Gb switch with a good backplane. I'm currently using 1Gb, so switching
> to fiber at 1Gb will not provide much gain.
>
> Using IOMeter we saw that we will not incur IOWait due to slow hard
> disks.


Don't set your expectations too high on performance, no matter what
network card or HBA you have in the server. Getting high performance
takes a lot more than just fast hardware; your software needs to take
advantage of it. I've never heard of the MSSServer, so I'm not sure
what it is. My storage array here at work is a 3PAR T400, which at
max capacity can run roughly 26 gigabits per second of Fibre Channel
throughput to servers (and another 26Gbit to the disks). Providing
that level of throughput would likely require at least 640 15,000 RPM
drives (the current max of my array is 640 drives, expected to go to
1,280 within a year).

This T400 is among the fastest storage systems on the planet, in the
top 5 or 6 I believe. My configuration is by no means fully decked
out, but I have plenty of room to grow into. Currently it has
200x750GB SATA-II disks, and with I/O evenly distributed across all
200 drives I can get about 11Gbit of total throughput (about 60% to
disk and 40% to servers), with 24GB of cache and ultra-fast ASICs
for RAID. The limitation in my system is the disks; the controllers
don't break a sweat.
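
As a back-of-the-envelope check on those numbers (a sketch only; the
26Gbit and 11Gbit figures are the ones above, and awk is just being
used as a calculator):

    # implied per-drive throughput, FC case: 26 Gbit/s across 640 drives
    awk 'BEGIN { printf "%.1f MB/s per 15k drive\n", 26 * 1000 / 8 / 640 }'
    # prints: 5.1 MB/s per 15k drive

    # implied per-drive throughput, SATA case: 11 Gbit/s across 200 drives
    awk 'BEGIN { printf "%.1f MB/s per SATA drive\n", 11 * 1000 / 8 / 200 }'
    # prints: 6.9 MB/s per SATA drive

A few MB/s per spindle is a mixed/random-I/O figure, far below any
drive's sequential maximum, which is why spindle count rather than
interface speed tends to be the real limit.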

You can have twice as many drives, each twice as fast as these SATA
disks, and still get poorer performance; it's all about the software
and architecture. I know this because we've been busy migrating off
such a system onto this new one for weeks now.

So basically what I'm trying to say is: just because you put a couple
of 10Gig cards in a storage server, don't expect anywhere near 10Gig
performance for most workloads.

There's a reason companies like BlueArc charge $150k for 10Gig-capable
NFS head units (that's the head unit only, no disks): getting that
level of performance isn't easy.

nate


Ross Walker 03-19-2009 02:34 AM

connecting 2 servers using an FC card via iSCSI
 
On Mar 18, 2009, at 6:56 PM, Erick Perez <eaperezh@gmail.com> wrote:

> Nate, Ross: thanks for the information. I now understand the
> difference.
>
> Ross: I can't ditch MSSS since it is a government purchase, so I
> *must* use it until something breaks and budget is assigned, and
> maybe in 2 years we can buy something else. The previous boss
> purchased this equipment, and I guess an HP EVA, NetApp or some other
> sort of NAS/SAN equipment was better suited for the job... but go
> figure!
>
> Nate: The whole idea is to use the MSSServer and connect several
> servers to it. It has 5 available slots, so a bunch of cards can be
> placed there.
>
> I think (after reading your comments) that I can install 2 dual-port
> 10Gb network cards in the MSSS, configure it for jumbo frames (9k),
> and then put 10Gb network cards on the servers that will connect to
> this MSSS and also enable 9k frames. All this, of course, connected
> to a good 10Gb switch with a good backplane. I'm currently using 1Gb,
> so switching to fiber at 1Gb will not provide much gain.
>
> Using IOMeter we saw that we will not incur IOWait due to slow hard
> disks.
>
> We just can't trash the MSSS... sorry, Ross.

Well, I understand where you're coming from. Even if you can't get rid
of the MSSS, you can leverage its strengths: have it serve CIFS shares
and other Windows services while feeding it the FC storage from a
Linux host via 10GbE iSCSI.
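
For the 9k jumbo-frame setup Erick describes, the CentOS side would
look something like this (a sketch only; eth1 and the addresses are
placeholders, and the switch ports must pass 9000-byte frames too):

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    IPADDR=10.0.0.2
    NETMASK=255.255.255.0
    ONBOOT=yes
    MTU=9000

    # verify the path end to end: 8972 bytes of ICMP payload plus 28
    # bytes of IP/ICMP headers = 9000, sent with don't-fragment set
    ping -M do -s 8972 -c 3 10.0.0.1

If that ping fails while a plain ping works, something in the path
(NIC, switch port, or the far end) isn't actually passing 9k frames.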

While Microsoft's iSCSI initiator is very good, I can't say the same
about their target, which is 25-33% slower than the IET target on
CentOS. That probably has to do with running file-based targets off
of NTFS partitions instead of the raw disks. You could run another
target on it, but I don't think Redmond would support that.
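
For reference, exporting a raw disk with IET on the Linux side is a
short /etc/ietd.conf entry (a sketch; the IQN and /dev/sdb are
placeholders). Type=blockio serves the block device directly, avoiding
the file-on-filesystem layer suspected of slowing the Microsoft
target down:

    # /etc/ietd.conf
    Target iqn.2009-03.com.example:storage.disk1
        # blockio does I/O against the raw device; fileio would go
        # through a file sitting on a filesystem instead
        Lun 0 Path=/dev/sdb,Type=blockio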

-Ross


