* Alex Bligh <email@example.com> wrote:
>* --On 4 November 2010 13:46:38 +0100 Bodo Thiesen <firstname.lastname@example.org> wrote:
>> Question: Did you consider using plain LVM for this purpose?
>> By creating a
>> logical volume, no data is initialized, only the meta data is created
>> (what seems to be exactly what you need). Then, each client may access one
>> logical volume r/w. Retrieving the extents list is very easy as well. And
>> because there are no group management data (cluster bitmaps, inode bitmaps
>> and tables) of any kind, you will end up with only one single extent in
>> most cases regardless of the size of the volume you've created.
> Plain LVM or Clustered LVM? Clustered LVM has some severe limitations,
> including needing to restart the entire cluster to add nodes, which
> is not acceptable.
> Plain LVM has two types of issue:
> 1. Without clustered LVM, as far as I can tell there is no locking
> of metadata.
Possibly (I don't know for certain).
> I have no guarantees that access to the disk does not
> go outside the LV's allocation.
In LVM you create one logical volume. In the process of creating that
volume, the metadata gets updated. But just using pre-existing logical
volumes doesn't change the metadata. So, if you do all creation and
removal of logical volumes on the same node, you shouldn't get any
problems here. "lvchange -a[yn] $lv" doesn't even change the metadata;
it's a completely local operation (the local LVM cache gets updated, but
that's all). So, if you provide access to the PV via nbd or something like
that, all nodes can just use their portion of the LV without any
problems. Besides: you wanted to use ext4. I suggested using LVM in the
same way you initially wanted to use ext4. So: on the main node you run
"lvdisplay -v $lv" (or whatever the exact command line is) and
you get a list of extents as the result. Then you transfer that list to the
client, and it can access the disk directly without issuing any LVM commands.
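To illustrate the idea (a hedged sketch, not the exact procedure): the segment list could also be dumped with something like `lvs --noheadings -o devices,seg_start_pe,seg_size_pe $vg/$lv` and the parsed (device, offset, length) tuples shipped to the client. The parser below assumes that output format; the exact field names, columns, and layout are my assumption and should be checked against your LVM version.

```python
# Hedged sketch: parse assumed LVM segment output into
# (physical device, PE offset on the PV, segment length in PEs) tuples
# that a client could use for direct access without running LVM commands.
# Input format is modeled on `lvs --noheadings -o devices,seg_start_pe,seg_size_pe`
# and is an assumption, not verified against a particular LVM release.

def parse_segments(lvs_output: str):
    """Return a list of (device, pv_offset_pe, length_pe) tuples."""
    segments = []
    for line in lvs_output.splitlines():
        line = line.strip()
        if not line:
            continue
        devices, _lv_start_pe, size_pe = line.split()
        # The `devices` column looks like "/dev/sda1(2048)": the PV device
        # plus the physical-extent offset where this segment starts on it.
        dev, _, offset = devices.partition("(")
        pv_offset = int(offset.rstrip(")"))
        segments.append((dev, pv_offset, int(size_pe)))
    return segments

# Example with made-up output for a two-segment LV:
sample = """
  /dev/sda1(0)     0 2560
  /dev/sdb1(1280)  2560 512
"""
for seg in parse_segments(sample):
    print(seg)
```

With that list in hand, the client can translate any LV-relative block address into a PV device and offset itself, which is the "ext4 extent list" workflow applied to LVM metadata instead.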
> For instance, when a CoW snapshot is
> written to and expanded, the metadata must be written to, and there
> is no locking for that.
Right, but that was not part of your use-case. If you needed such things,
you couldn't use ext4 either.
> 2. Snapshots suffer severe limitations. For instance,
> it is not possible to generate arbitrarily deep trees of snapshots
> (i.e. CoW on top of CoW) without an arbitrarily deep tree of loopback
> mounted lvm devices, which does not sound like a good idea.
> I think you can only use lvm like this where you have simple volumes
> mounted, and in essence take no snapshots.
Yes, and I mentioned LVM because that was exactly your use-case.
> To answer the implied question, yes we have a (partial) lvm replacement.
---> Did you consider using plain LVM for this purpose? <---
That was an explicit question.
>>> GFS and OCFS both handle shared writers for the same SAN disk (AFAIK),
>> They are SUPPOSED to do that - in theory
> We have had similar experiences and don't actually need all the features
> (and thus complexity) that a true clustered filing system presents.
OK, so that's not my fault.
Ext3-users mailing list