Have you tried ZFS? The kernel modules are in the Portage tree, and I
am maintaining a FAQ on the status of Gentoo ZFS support on GitHub:
Data stored on ZFS is generally safe unless you go out of your way to
lose it (e.g. by putting the ZIL/SLOG on a tmpfs).
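For anyone who wants to kick the tires, a minimal sketch of getting
started (the sys-fs/zfs atom and the exact steps here are a rough
assumption; see the FAQ for the current instructions):

    emerge sys-fs/zfs    # the ebuild pulls in the SPL/module pieces
    modprobe zfs         # load the module for the running kernel
    zpool status         # a fresh box should report "no pools available"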
On 02/24/12 18:26, Duncan wrote:
> Rich Freeman posted on Fri, 24 Feb 2012 13:47:45 -0500 as excerpted:
>> On Fri, Feb 24, 2012 at 1:43 PM, Alexis Ballier
>> <email@example.com> wrote:
>>> moreover the && won't delete the lib if revdep-rebuild failed, I
>>> think, so it should be even safer to copy/paste
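(For the archives, the pattern under discussion is roughly the
following, with a hypothetical library path:

    revdep-rebuild && rm /usr/lib/libfoo.so.1

The shell runs the rm only if revdep-rebuild exits successfully, so a
failed rebuild leaves the old lib in place.)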
> FWIW this is the preserve-libs feature/bug I ran into in early
> testing, the one that convinced me to turn it off. Running
> revdep-rebuild manually was far safer anyway, since at least then I
> /knew/ the status of various libs; they weren't preserved on the
> first run and then arbitrarily deleted on the second, even when
> deleting them still broke the remaining apps depending on them.
> So if that was reliably fixed, I'd be FAR happier about enabling
> FEATURES=preserve-libs. I'm not sure I actually would, as I like a
> bit more direct knowledge of stale libs on the system than the
> automated handling gives me, but at least I'd not have to worry
> about the so-called "preserved" libs STILL disappearing and leaving
> broken packages, if I DID enable it!
> So definitely ++ on this! =:^)
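(The manual workflow Duncan describes above is roughly this; both are
standard revdep-rebuild invocations:

    revdep-rebuild -p    # pretend: just list what's broken by missing libs
    revdep-rebuild       # actually rebuild the affected packages

)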
>> Am I the only paranoid person who moves them rather than
>> unlinking them? Oh, if only btrfs were stable...
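(I.e. something like the following instead of rm; the path is again a
hypothetical example:

    mv /usr/lib/libfoo.so.1 /usr/lib/libfoo.so.1.old

so the old lib can simply be moved back if the rebuild goes wrong.)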
> FWIW, in the rare event that it breaks revdep-rebuild or the
> rebuild itself, I rely on my long-set FEATURES=buildpkg and
> emerge -K. In the even rarer event that those are broken too,
> there's always manually untarring the missing lib from the binpkg.
> (I've had to do that once, when gcc itself was broken by an
> ill-advised emerge -C that I knew might break things given the
> depclean warning, but that I also knew I could fix with an untar if
> it came to that, which it did.) Failing even that, there's booting
> to backup and using ROOT= to emerge -K back to a working system.
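(A rough sketch of that fallback chain; the gcc version and paths are
hypothetical examples:

    # reinstall from the binary package kept by FEATURES=buildpkg
    emerge -K sys-devel/gcc
    # if emerge itself is unusable, pull the lib straight out of the
    # binpkg; Portage binpkgs are bzip2 tarballs with xpak metadata
    # appended, which GNU tar tolerates with a trailing-garbage warning
    tar xjpf /usr/portage/packages/sys-devel/gcc-4.5.3-r2.tbz2 \
        -C / --wildcards '*libgcc_s.so*'
    # last resort, booted from backup with the broken system mounted
    # at /mnt/gentoo: install into it via ROOT=
    ROOT=/mnt/gentoo emerge -K sys-devel/gcc

)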
> [btrfs status discussion, skip if uninterested.]
> I'm not sure whether that's a reference to btrfs's
> snapshot-and-rollback feature, or a hint that you're running it and
> worried about its stability underneath you...
> If it's the latter, you probably already know this, but if it's the
> former, and for others interested...
> I recently set the btrfs kernel options and merged btrfs-progs,
> then read up on the wiki and joined the btrfs list, with the plan
> being to get familiar with it and perhaps install it.
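(Concretely, that first step amounts to something like:

    # kernel: enable CONFIG_BTRFS_FS=y (or =m to build it as a module)
    emerge sys-fs/btrfs-progs

)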
> From all the reports of it now being an install-time option for
> various distros, and all the steady-improvement reports, I had
> /thought/ that the biggest remaining stability issue was the lack
> of an error-correcting (not just error-detecting) fsck.btrfs, and
> that the restore tool announced late last year, which allows
> pulling data off unmountable btrfs volumes, was a reasonable
> workaround.
> What I found, even allowing for the fact that such lists get the
> bad reports and not the good ones, and thus paint a rather worse
> picture of the situation than most users actually see, is that
> btrfs still has a rather longer way to go than I had thought. It's
> still FAR from stable, even for someone like myself who often runs
> betas and was prepared to keep (and use, if necessary) TESTED
> backups. Maybe by Q4 this year, but also very possibly not until
> next year. I'd definitely NOT recommend that anyone run it now,
> unless you are SPECIFICALLY running it for testing and
> bug-reporting purposes, with "garbage" data (IOW, data that you're
> NOT depending on, at the btrfs level, at all) that you are not only
> PREPARED to lose, but EXPECT to lose, perhaps repeatedly, during
> your testing.
> IOW, there are still known, untraced, unfixed active
> data-corruption bugs remaining. Don't put your data on btrfs at
> this point unless you EXPECT to have it corrupted, and want to
> actively help in tracing and patching the problems!
> Additionally, for anyone interested in the btrfs RAID capabilities:
> striped/raid0 it handles, but its raid1 and raid10 modes are
> misnamed. At present it's strictly two-way-mirror ONLY; there's no
> way to do N-way (N>2) mirroring at all, aside from layering on top
> of, say, mdraid, and of course layering on top of mdraid loses the
> data-integrity guarantees at that level, since btrfs still has just
> the one additional copy to fall back on. This SERIOUSLY limits
> btrfs data-integrity possibilities in a 2+ drive-failure scenario.
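(To make that concrete: btrfs "raid1" means two copies of every
block, no matter how many devices you give it. Device names below are
hypothetical:

    mkfs.btrfs -m raid1 -d raid1 /dev/sda1 /dev/sdb1 /dev/sdc1
    # even across three devices, each block lives on exactly two

)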
> btrfs raid5/6 isn't available yet, but the current roadmap says
> kernel 3.4 or 3.5. Multi-way mirroring is supposed to be built on
> that code, tho the mentions of it I've seen are specifically of
> triple-mirroring, so it's unclear whether arbitrary N-way (N>3)
> mirroring, as in true raid1, will be possible even then. But
> whether triple-way specifically or N-way (N>=3), since it builds on
> the raid5/6 code due in 3.4/3.5, multi-way mirroring thus looks
> like 3.5/3.6 at the earliest.
> So while I had gotten the picture that btrfs was stabilizing and it
> was mostly over-cautiousness keeping that experimental label
> around, that's definitely NOT the case. Nobody should really plan
> on /relying/ on it, even with backups, until at least late this
> year, and very possibly looking at 2013 now.
> So btrfs is still a ways out. =:^(
> Meanwhile, for anyone still interested in it at this point, note
> that the btrfs-progs repository currently listed on the homepage
> wiki is a stale copy on kernel.org, still read-only after the
> kernel.org break-in. The "temporary" but increasingly
> permanent-looking location is:
> Also, regarding the Gentoo btrfs-progs package, see my recently