tl;dr version: Empower, rather than restrict, maintainers. Encourage
them to test by increasing test accuracy, coverage, unique
configurations, and visibility.
I don't think FESCo wishes to destroy the freedom of package
maintainers. I also don't think package maintainers want to release
broken packages. I believe that both have acted and will continue to
act in good faith. That being said, I'm not a package maintainer or a
member of FESCo. I use updates-testing, test packages when I can, and
report them when I can. I make no claim to be good at any of these.
That being said, here's what I propose:
* Retain the current policy of deferring to maintainers (AKA: "Thank
god I don't run KDE")
I, maybe naively, believe that maintainers know their packages best.
If they believe their package should (and will) be tested, then
they'll keep it in testing. If they believe it's stable, then I feel
like that's their prerogative as maintainers. Maintainers know if
their package is mission-critical, and they ought to base their
decisions on that knowledge. Plus, if they push a bad patch, they'll
remember it long after the rest of us forget.
I don't think you can enforce this, though, for the following reasons:
a.) Minimum values, whether time or karma, will either be too short
for some or too long for others
b.) Minimum values will not feasibly scale based on how "risky" a
change is. A new feature may be riskier than a bugfix, but it might
not. A policy will have difficulty accounting for this.
c.) An override mechanism, such as an emergency meeting, a "security"
flag, or approval by some third party, does not necessarily increase
chances of correctness. A member of QA, FESCo, releng, etc. may not be
familiar with a given package or a given patch.
But I don't think we should stop here. I like the idea of changes
being tested before they're shipped. It puts less burden on the
maintainer to ensure his change is correct, and carries less risk,
since his change is exposed to a small sample before being widely
distributed.
Here's what I think we can do to encourage maintainers to test and release:
* Minimize time spent in updates-testing, while not sacrificing quality
Time spent in updates-testing incurs a cost on everyone. Users are
forced to use a worse product, and maintainers get frustrated as the
shipped package stays broken or out of date.
But that's not even the worst thing about eating up time in
updates-testing! The worst thing is this: it doesn't actually decrease
the likelihood of bugs. Only passed tests do. What time gives you, is
users. As time goes on, more users (presumably) use and test your
package. But how many users have tested my package? Are they
reporting, or just ignoring, bugs? Are they even testing the right
things? The only indicators you have are time spent in updates-testing
and karma comments. I believe these are either wrong or inaccurate
Look at it this way: In a perfect scenario, every user of a package
runs every test the moment it hits updates-testing. That means that a
package only needs to stay in updates-testing for however long the
package tests take. Barely any time at all! We don't live in an ideal
world, but I think the formula looks something like this:
Quality of Product = Quality of Testing
Quality of Testing = Test Coverage * Number of Configurations
Configurations are your users, and coverage is depth. Coverage is also
accuracy: If I spend 30 minutes testing every preference even though
you didn't change any preferences, then I just wasted my time on your
package. Make tests easy and specific for the tester, and you'll find
more testers and better test coverage.
* Increase coverage through Bodhi
As a tester, I love Bodhi. It shows me new things to test. If I have a
list of bugs that were fixed in this build, they're all test-cases. If
I have a changelog that shows features that were added, each change is
a test-case. Unfortunately, if I have neither, then I can only really
do a smoke test. Here are a few concrete ideas to encourage better
coverage through Bodhi:
- Allow test cases to be added to Bodhi. These already exist for the
RCs; maintainers could optionally write them too. They don't have to
be formal, though. "Test the printing stuff" is sufficient.
- Integrate changelogs where available. Since these show what changed,
testers can use them to be more efficient and thorough with their
tests, increasing coverage.
- Further integrate Bugzilla and Bodhi. If I'm able to see what bugs
are open for a package through Bodhi, I can check if those bugs were
affected by the build I'm testing. I could also see if a bug is
already open for this build without having to search through Bugzilla,
increasing my efficiency.
- Tweak karma. I feel bad leaving negative karma, and I shouldn't! Bug
reports are healthy. Also, positive karma is rare and subjective. I
think karma should either be plain comments, or a per-user checkbox
for a given test-case.
Test cases don't have to be formal. We want maintainers and testers to
use these features, not hate them. They shouldn't get in the way. But,
if a maintainer wants users to test an area, it should be easy for him
to let users know and focus their feedback towards that area. Empower
maintainers and testers to figure out what works best.
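To make the per-test-case checkbox idea concrete, here is a
hypothetical sketch of the data involved. None of these names exist in
Bodhi; they're purely illustrative:

```python
# Hypothetical sketch of per-user, per-test-case feedback, replacing a
# single +1/-1 karma number. Names are illustrative, not Bodhi's API.
from dataclasses import dataclass, field

@dataclass
class Feedback:
    tester: str
    passed: bool              # a checkbox, not a popularity vote
    comment: str = ""         # bug reports are healthy, not "negative"

@dataclass
class Update:
    package: str
    # informal test-case description -> list of Feedback
    test_cases: dict = field(default_factory=dict)

    def record(self, case, fb):
        self.test_cases.setdefault(case, []).append(fb)

    def results(self, case):
        """Return (passes, total runs) for one test case."""
        runs = self.test_cases.get(case, [])
        return sum(f.passed for f in runs), len(runs)

u = Update("cups-1.4.4-1.fc13")
u.record("Test the printing stuff", Feedback("aaron", True))
u.record("Test the printing stuff", Feedback("jane", False, "duplex broken"))
print(u.results("Test the printing stuff"))  # (1, 2): one pass of two runs
```

A maintainer reading "1 of 2 runs passed, one comment about duplex"
knows far more than a net karma of 0 would tell him.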
* AutoQA, etc. are awesome ideas. Continue them!
Instantly run tests on many different configurations? This is an
instant win for testers and maintainers, while providing no loss to
anyone.
* Increase number of testers by increasing visibility of newer builds
I suggest these while understanding that user privacy and user
experience supersede the need for things to get tested. However, I
think there are some real benefits to be had here without confusing or
overwhelming users:
- Explain what updates-testing is in Add/Remove Software. Right now,
the repository appears with no real explanation. The UI could explain
in more detail what updates-testing is, its risks, and its benefits.
Ditto for anywhere else the repository is exposed.
- Opt-in feature to show testing updates for individual packages. This
would let users run a newer version of a package.
- Allow users to safely downgrade a package that's using
updates-testing. While this doesn't minimize the danger of running an
untested package, it does reduce the cost. It could even be integrated
into the system tray, like "updates" are.
- Allow users to quickly provide feedback on a given tested package,
without leaving the desktop.
If these are feasible and properly implemented, they could increase
the pool of testers for specific packages by reducing the cost of
testing them. I believe that ABRT is an excellent example of this sort
of low-friction feedback.
* Increase testing visibility to maintainers
The user privacy/experience disclaimer applies here, too. However,
like the above, you can get better visibility if you let testers
opt-in to report their data:
- Allow maintainers to see the number of downloads by users who have
opted in to share that data. If not an exact number, then a simple
range.
- Allow maintainers to see uptime for his application from users who
have opted in to share that data.
These suggestions may be infeasible, outdated, or inaccurate. However,
it's not an all-or-nothing deal. Give testers more ways and directions
to test, and you'll have better test coverage. Make it easier for them
to report their findings, and you'll have higher visibility for
maintainers. This increases the quality of testing and makes it
worthwhile for maintainers to test.
-- Aaron Faanes
devel mailing list