03-05-2010, 09:47 PM
Till Maas

Refining the update queues/process

On Fri, Mar 05, 2010 at 01:46:34PM -0800, Adam Williamson wrote:

> Ah. You're looking at it on a kind of micro level; 'how can I tell this
> package has been tested?'

For a package maintainer, it is especially interesting whether their own
update has been tested.

> Maybe it makes it clearer if I explain more clearly that that's not
> exactly how I look at it, nor (I think) how the rest of QA sees it, or
> what the proposal to require -testing is intended to achieve. We're
> thinking more about 'the big picture', and we're specifically thinking
> about - as I said before - the real brown-paper-bag,
> oh-my-god-what-were-they-thinking kinds of regressions, the 'systems
> don't boot any more', 'Firefox doesn't run' kinds of forehead-slappers.
> What we believe is that requiring packages to go to updates-testing for
> some time improves our chances of avoiding that kind of issue.

Afaics, this misunderstanding is a big problem; e.g. my expectations of
updates-testing also differ. Maybe you can add some more information to
the wiki about what QA currently tries to ensure for updates-testing,
what it actually ensures, and what is planned for the future. E.g. I always
noticed that there is not much karma given, but now that I wrote the script
and saw that I provided more feedback within two days than the top tester
for F11 has provided until now, I realize that the visible test coverage
was/is a lot worse than I imagined.

> Obviously, the more testing gets done in updates-testing, the better.
> Hopefully Till's script will help a lot with that, it's already had a
> very positive response. But the initial trigger for the very first

I just did a quick evaluation. There were 384 updates in F12
updates-testing when I last ran fedora-easy-karma, and only 108 (28%)
received any comment with karma != 0 [1]. For F11 it is 34/272 (12.5%) [2].
I am curious to see how these numbers will have changed in a week. I hope
that by then everyone from the QA SIG is using the script to report
feedback, so it will be safe to say that an update was not tested at all
if it did not receive any feedback.
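
For reference, the tally above boils down to something like the following
minimal Python sketch, assuming the update records have already been
fetched (e.g. by a fedora-easy-karma-style script); the 'comments' and
'karma' field names are illustrative assumptions, not necessarily the
exact bodhi schema:

    def karma_coverage(updates):
        """Return (commented, total, percent) for a list of update dicts."""
        total = len(updates)
        commented = sum(
            1 for update in updates
            if any(c.get("karma", 0) != 0 for c in update.get("comments", []))
        )
        percent = (100.0 * commented / total) if total else 0.0
        return commented, total, percent

    # e.g. karma_coverage(f12_testing_updates) -> (108, 384, 28.125)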

Regards
Till

[1] https://admin.fedoraproject.org/updates/metrics/?release=F12
[2] https://admin.fedoraproject.org/updates/metrics/?release=F11
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
03-05-2010, 09:52 PM
Michael Schwendt

Refining the update queues/process

On Fri, 05 Mar 2010 13:46:34 -0800, Adam wrote:

> Ah. You're looking at it on a kind of micro level; 'how can I tell this
> package has been tested?'

Exactly. Because I don't like to act on assumptions.

And "zero feedback" is only an indicator for "doesn't break badly", if
there are N>1 testers with N>1 different h/w and s/w setups who have
installed the update actually and have not rolled back without reporting a
problem. This may apply to certain core packages, but _not_ to all pkgs.

Not everyone runs "yum -y update" daily. Not everyone installs updates
daily. It may be that there are broken dependencies in conjunction
with 3rd party repos only (Audacious 2.2 test update as an example
again - the bodhi ticket warned about such dependency issues, and nobody
complained about them - all I know is that there are users who use
Audacious, just no evidence that the test-updates are tested, too).

It takes days for updates to be distributed to mirrors. A week may be
nothing for that important power-user of app 'A', who would find a problem
as soon as he *would* try out a test-update.

Further, I hear about users who have run into problems with Fedora but
have never reported a single bug. ABRT may help with that, but they
would still need to create a bugzilla account, which is something they
haven't done before and maybe won't do. Only sometimes does a problem annoy
them for so long that they feel forced to look into how to report a bug.

> Maybe it makes it clearer if I explain more clearly that that's not
> exactly how I look at it, nor (I think) how the rest of QA sees it, or
> what the proposal to require -testing is intended to achieve. We're
> thinking more about 'the big picture', and we're specifically thinking
> about - as I said before - the real brown-paper-bag,
> oh-my-god-what-were-they-thinking kinds of regressions, the 'systems
> don't boot any more', 'Firefox doesn't run' kinds of forehead-slappers.
> What we believe is that requiring packages to go to updates-testing for
> some time improves our chances of avoiding that kind of issue.

The key questions are still: Which [special] packages do you want to cover?
CRITPATH only? Or arbitrarily enforced delays for all packages?

For example, it would make sense to keep in updates-testing for an
extended period those packages which have received feedback in bodhi
_before_ and which have a high bug-reporting activity in bugzilla.

> Obviously, the more testing gets done in updates-testing, the better.
> Hopefully Till's script will help a lot with that, it's already had a
> very positive response. But the initial trigger for the very first
> proposal from which all this discussion sprang was wondering what we
> could do to avoid the really-big-duh kind of problem.

I cannot answer that, especially since a package that may work fine
for you and other testers may be a really-big-duh for other users.
This also leads to a not so funny scenario where the big-duh has not
been noticed by any tester during F-N development, but shortly after
release it is found by ordinary users.

When I give +1 karma, I either acknowledge only the fix for a specific bug
that's linked, or I mention the type of usage, e.g. "basic daily usage" or
"didn't try the new features", so as not to give the false impression that I
may have tested everything. In general I hope that feedback about me using
the software is more helpful than zero feedback. However, it may still be
that a certain feature/plugin I don't use is broken badly. That's not a
guess; it has happened before and will happen again, with updates or shortly
after a new Fedora release.
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
03-06-2010, 01:39 AM
Adam Williamson

Refining the update queues/process

On Fri, 2010-03-05 at 23:47 +0100, Till Maas wrote:
> On Fri, Mar 05, 2010 at 01:46:34PM -0800, Adam Williamson wrote:
>
> > Ah. You're looking at it on a kind of micro level; 'how can I tell this
> > package has been tested?'
>
> For a package maintainer, it is especially interesting whether their own
> update has been tested.
>
> > Maybe it makes it clearer if I explain more clearly that that's not
> > exactly how I look at it, nor (I think) how the rest of QA sees it, or
> > what the proposal to require -testing is intended to achieve. We're
> > thinking more about 'the big picture', and we're specifically thinking
> > about - as I said before - the real brown-paper-bag,
> > oh-my-god-what-were-they-thinking kinds of regressions, the 'systems
> > don't boot any more', 'Firefox doesn't run' kinds of forehead-slappers.
> > What we believe is that requiring packages to go to updates-testing for
> > some time improves our chances of avoiding that kind of issue.
>
> Afaics, this misunderstanding is a big problem; e.g. my expectations of
> updates-testing also differ. Maybe you can add some more information to
> the wiki about what QA currently tries to ensure for updates-testing, what
> it actually ensures, and what is planned for the future. E.g. I always noticed

Yeah, that may be a good idea. For the record, we certainly hope the
updates-testing system makes it possible to do far more intensive
testing, and we would love to see real in-depth evaluation of every
package in updates-testing. At present we don't really have enough
people using it to ensure this, but we'll continue to try to encourage
more people to use updates-testing and report their experiences. Your
script could definitely help with that.

> see how these numbers will have changed in a week. I hope that by then
> everyone from the QA SIG is using the script to report feedback, so it
> will be safe to say that an update was not tested at all if it did not
> receive any feedback.

Well, I'm using your script, but still intentionally skipping certain
updates. I don't think it's a good idea to give a +1 on an update that I
haven't really directly tested just because it didn't blow up my system,
though if it *did* blow up my system I'd certainly give it a -1. We
could institute an 'I booted with this installed and nothing exploded'
button, but I'm not sure that would ultimately be valuable...?
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net

--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
03-06-2010, 01:43 AM
Adam Williamson

Refining the update queues/process

On Fri, 2010-03-05 at 23:52 +0100, Michael Schwendt wrote:
> On Fri, 05 Mar 2010 13:46:34 -0800, Adam wrote:
>
> > Ah. You're looking at it on a kind of micro level; 'how can I tell this
> > package has been tested?'
>
> Exactly. Because I don't like to act on assumptions.
>
> And "zero feedback" is only an indicator for "doesn't break badly", if
> there are N>1 testers with N>1 different h/w and s/w setups who have
> installed the update actually and have not rolled back without reporting a
> problem. This may apply to certain core packages, but _not_ to all pkgs.

I did say it was only a medium-strength indicator (in most cases), not
an infallible one, which was kinda intended to cover the above. IOW, I
agree, mostly. The more people we have running updates-testing, the more
likely we are to catch big breakages, of course.

> It takes days for updates to be distributed to mirrors. A week may be
> nothing for that important power-user of app 'A', who would find a problem
> as soon as he *would* try out a test-update.

In my experience, I get testing updates only a few hours after the email
listing them hits the mailing lists.

> Further, I hear about users who have run into problems with Fedora but
> haven't reported a single bug before. ABRT may help with that, but they
> would still need to create a bugzilla account, which is something they
> haven't done before and maybe won't do. Only sometimes a problem annoys
> them for so long that they see themselves forced to look into how to
> report a bug.

I'd hope this wouldn't describe anyone who takes the trouble to manually
activate updates-testing, but of course I could be wrong.

> > Maybe it makes it clearer if I explain more clearly that that's not
> > exactly how I look at it, nor (I think) how the rest of QA sees it, or
> > what the proposal to require -testing is intended to achieve. We're
> > thinking more about 'the big picture', and we're specifically thinking
> > about - as I said before - the real brown-paper-bag,
> > oh-my-god-what-were-they-thinking kinds of regressions, the 'systems
> > don't boot any more', 'Firefox doesn't run' kinds of forehead-slappers.
> > What we believe is that requiring packages to go to updates-testing for
> > some time improves our chances of avoiding that kind of issue.
>
> The key questions are still: Which [special] packages do you want to cover?
> CRITPATH only? Or arbitrarily enforced delays for all packages?

The initial proposal to FESCo would cover all packages. There is a
reason to cover all packages: there _are_ cases where really serious
breakage can be created by a package which isn't in CRITPATH, though you
could argue that's sufficiently unlikely not to warrant holding up
non-critpath packages. It could do with more discussion, I guess.

> For example, it would make sense to keep in updates-testing for an
> extended period those packages which have received feedback in bodhi
> _before_ and which have a high bug-reporting activity in bugzilla.

I'd say it's almost the opposite - you could hold those packages up only
for a little while, because you can be reasonably confident you'll find
out *really fast* if they're badly broken. Obviously, it's a tricky
area.

> > Obviously, the more testing gets done in updates-testing, the better.
> > Hopefully Till's script will help a lot with that, it's already had a
> > very positive response. But the initial trigger for the very first
> > proposal from which all this discussion sprang was wondering what we
> > could do to avoid the really-big-duh kind of problem.
>
> I cannot answer that, especially since a package that may work fine
> for you and other testers may be a really-big-duh for other users.
> This also leads to a not so funny scenario where the big-duh has not
> been noticed by any tester during F-N development, but shortly after
> release it is found by ordinary users.
>
> When I give +1 karma, I either acknowledge only the fix for a specific bug
> that's linked, or I mention the type of usage, e.g. "basic daily usage" or
> "didn't try the new features", so as not to give the false impression that I
> may have tested everything. In general I hope that feedback about me using
> the software is more helpful than zero feedback. However, it may still be
> that a certain feature/plugin I don't use is broken badly. That's not a
> guess; it has happened before and will happen again, with updates or shortly
> after a new Fedora release.

Yeah, this is a definite problem with the Bodhi system: it's not
particularly clear what +1 means or what it should mean, and different
reporters use it differently. It's definitely not something we've nailed
perfectly yet.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net

--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
03-06-2010, 07:43 AM
Till Maas

Refining the update queues/process

On Fri, Mar 05, 2010 at 06:39:02PM -0800, Adam Williamson wrote:
> On Fri, 2010-03-05 at 23:47 +0100, Till Maas wrote:

> > see how these numbers will have changed in a week. I hope that by then
> > everyone from the QA SIG is using the script to report feedback, so it
> > will be safe to say that an update was not tested at all if it did not
> > receive any feedback.
>
> Well, I'm using your script, but still intentionally skipping certain
> updates. I don't think it's a good idea to give a +1 on an update that I
> haven't really directly tested just because it didn't blow up my system,
> though if it *did* blow up my system I'd certainly give it a -1. We
> could institute an 'I booted with this installed and nothing exploded'
> button, but I'm not sure that would ultimately be valuable...?

Currently you could use 0-karma with a comment to explain that you only
installed it. But I would still like to have this button, just to make it
at least possible to find updates where nobody even pushed it.
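
As an illustration of that 0-karma workaround, something along these lines
could be scripted with python-fedora's BodhiClient; this is only a rough
sketch, and the comment() parameters, the FAS account, and the update title
below are assumptions to verify against the installed client:

    # Rough sketch; the FAS username and update title are hypothetical, and
    # the comment() parameters should be checked against the client's docs.
    from fedora.client.bodhi import BodhiClient

    bodhi = BodhiClient(username="yourfasname")
    bodhi.comment(
        "foo-1.2.3-1.fc12",
        "Installed and used briefly; nothing broke, but not explicitly tested.",
        karma=0,
    )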

Regards
Till
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
03-08-2010, 10:21 PM
Bruno Wolff III

Refining the update queues/process

On Fri, Mar 05, 2010 at 23:52:24 +0100,
Michael Schwendt <mschwendt@gmail.com> wrote:
>
> It takes days for updates to be distributed to mirrors. A week may be
> nothing for that important power-user of app 'A', who would find a problem
> as soon as he *would* try out a test-update.

Some mirrors. Others have stuff within hours. Currently most of the kernel.org
mirrors are picking stuff up pretty rapidly. Though things change over time.
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
03-08-2010, 10:30 PM
Mike McGrath

Refining the update queues/process

On Mon, 8 Mar 2010, Bruno Wolff III wrote:

> On Fri, Mar 05, 2010 at 23:52:24 +0100,
> Michael Schwendt <mschwendt@gmail.com> wrote:
> >
> > It takes days for updates to be distributed to mirrors. A week may be
> > nothing for that important power-user of app 'A', who would find a problem
> > as soon as he *would* try out a test-update.
>
> Some mirrors. Others have stuff within hours. Currently most of the kernel.org
> mirrors are picking stuff up pretty rapidly. Though things change over time.
>

Are we seeing mirrors that are more than 2 days out of date in the mirror
list?

-Mike
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
03-08-2010, 11:00 PM
Bruno Wolff III

Refining the update queues/process

On Mon, Mar 08, 2010 at 17:30:14 -0600,
Mike McGrath <mmcgrath@redhat.com> wrote:
> On Mon, 8 Mar 2010, Bruno Wolff III wrote:
>
> > On Fri, Mar 05, 2010 at 23:52:24 +0100,
> > Michael Schwendt <mschwendt@gmail.com> wrote:
> > >
> > > It takes days for updates to be distributed to mirrors. A week may be
> > > nothing for that important power-user of app 'A', who would find a problem
> > > as soon as he *would* try out a test-update.
> >
> > Some mirrors. Others have stuff within hours. Currently most of the kernel.org
> > mirrors are picking stuff up pretty rapidly. Though things change over time.
> >
>
> Are we seeing mirrors that are more than 2 days out of date in the mirror
> list?

I occasionally see mirrors that appear to be that far out of date. The way
it happens is: if the kernel.org mirrors are lagging (maybe some other
distro had an update), I'll use the rawhide mirrors web page, go looking
for other mirrors that appear to be up to date, use them for rsyncing
for a while, and then go back to mirrorsX.kernel.org. I suspect that there
are some that update weekly, based on the lag I see.
If I run across examples in the future, is there information you would like
captured?
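
One concrete piece of information worth capturing would be how far behind
a suspect mirror's repodata is compared with the master; a minimal sketch,
where both URLs are illustrative placeholders (adjust release, arch and
mirror path as needed):

    # Compare the Last-Modified timestamp of repomd.xml on a suspect mirror
    # with the master copy. Both URLs are illustrative placeholders.
    import urllib.request

    MASTER = ("https://dl.fedoraproject.org/pub/fedora/linux/"
              "updates/testing/12/x86_64/repodata/repomd.xml")
    MIRROR = ("http://example-mirror.org/fedora/linux/"
              "updates/testing/12/x86_64/repodata/repomd.xml")

    def last_modified(url):
        """Return the Last-Modified header for url, or None if absent."""
        with urllib.request.urlopen(url) as response:
            return response.headers.get("Last-Modified")

    print("master:", last_modified(MASTER))
    print("mirror:", last_modified(MIRROR))
    # A mirror whose repomd.xml is days behind the master is the kind of
    # example worth reporting, as suggested in the follow-up below.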
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
03-08-2010, 11:13 PM
Mike McGrath

Refining the update queues/process

On Mon, 8 Mar 2010, Bruno Wolff III wrote:

> On Mon, Mar 08, 2010 at 17:30:14 -0600,
> Mike McGrath <mmcgrath@redhat.com> wrote:
> > On Mon, 8 Mar 2010, Bruno Wolff III wrote:
> >
> > > On Fri, Mar 05, 2010 at 23:52:24 +0100,
> > > Michael Schwendt <mschwendt@gmail.com> wrote:
> > > >
> > > > It takes days for updates to be distributed to mirrors. A week may be
> > > > nothing for that important power-user of app 'A', who would find a problem
> > > > as soon as he *would* try out a test-update.
> > >
> > > Some mirrors. Others have stuff within hours. Currently most of the kernel.org
> > > mirrors are picking stuff up pretty rapidly. Though things change over time.
> > >
> >
> > Are we seeing mirrors that are more than 2 days out of date in the mirror
> > list?
>
> I occasionally see mirrors that appear to be that far out of date. The way
> it happens is: if the kernel.org mirrors are lagging (maybe some other
> distro had an update), I'll use the rawhide mirrors web page, go looking
> for other mirrors that appear to be up to date, use them for rsyncing
> for a while, and then go back to mirrorsX.kernel.org. I suspect that there
> are some that update weekly, based on the lag I see.
> If I run across examples in the future, is there information you would like
> captured?
>

Yes please - stop by #fedora-admin and try to get ahold of mdomsch or
myself.

-Mike
--
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
 
