Cost is per TB. That would kill me here, where a single user occupies 150TB all by themselves.
----- Original Message -----
| On 11/8/10 6:29 PM, James A. Peltier wrote:
| > I have a solution that is currently centered around commodity
| > storage bricks (Dell R510), flash PCI-E controllers, 1 or 10GbE (on
| > separate Jumbo Frame Data Tier) and Solaris + ZFS.
| > So far it has worked out really well. Each R510 is a box with a fair
| > bit of memory, running OpenIndiana for ZFS/RAIDZ3/Disk Dedup/iSCSI.
| > Each brick is fully populated and in a RAIDZ2 configuration with 1
| > hot spare. Some have SSDs; most have SAS or SATA. I export this
| > storage pool as a single iSCSI target and I attach each of these
| > targets to the SAN pool and provision from there.
| > I have two VMware physical machines which are identically
| > configured. If I need to perform administrative maintenance on the
| > boxes, I can migrate the guests over to the other machine. This works
| > for me, but it took a really long time to develop the solution and
| > for the cost of my time it *might* have been cheaper to just buy
| > some package deal.
| > It was a hell of a lot of fun learning though.
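
For reference, the brick described above (RAIDZ2 pool with one hot
spare, dedup on, exported as a single iSCSI target via COMSTAR on
OpenIndiana) would look roughly like this. Pool name, disk count,
device names, and zvol size are placeholders, not the actual config:

```shell
# Hypothetical 9-disk brick: 8 data/parity disks in RAIDZ2 plus one
# hot spare. Real cXtYd0 names come from `format` on the box itself.
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
                         c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
                  spare  c0t8d0
zfs set dedup=on tank

# Carve out a zvol and export it as one iSCSI LUN via COMSTAR.
zfs create -V 8T tank/lun0
svcadm enable stmf
stmfadm create-lu /dev/zvol/rdsk/tank/lun0
# create-lu prints the LU GUID; make it visible to initiators
stmfadm add-view 600144f0deadbeef        # GUID is illustrative
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target
```

The initiator side then attaches each brick's target and pools the
resulting LUNs, as described above.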
| Did you look at Nexentastor for this? You might need the commercial
| version for a fail-over set, but I think the basic version is free
| up to a fairly large size.
| Les Mikesell
| CentOS mailing list
James A. Peltier
Systems Analyst (FASNet), VIVARIUM Technical Director
Simon Fraser University - Burnaby Campus
Phone : 778-782-6573
Fax : 778-782-3045
E-Mail : firstname.lastname@example.org
Website : http://www.fas.sfu.ca | http://vivarium.cs.sfu.ca
MSN : email@example.com