Hi all,
The top ten FAS account holders who completed reviews of "Package review" components on Bugzilla for the year ending December 31st, 2008 were Parag AN(पराग), Jason Tibbitts, Mamoru Tasaka, Manuel Wolfshant, Kevin Fenzi, Jon Ciesla, Brian Pepple, Dan Horák, Patrice Dumas, and Marek Mahut. Below are the numbers of completed package reviews done during 2008.
Parag AN(पराग) - 319 Jason Tibbitts - 264 Mamoru Tasaka - 158 manuel wolfshant - 95 Kevin Fenzi - 47 Jon Ciesla - 45 Brian Pepple - 42 Dan Horák - 40 Patrice Dumas - 36 Marek Mahut - 31 Jeroen van Meeuwen - 28 Orcan 'oget' Ogetbil - 25 Tom "spot" Callaway - 24 Lubomir Rintel - 23 Rex Dieter - 23 Peter Lemenkov - 22 Hans de Goede - 21 Marcela Maslanova - 20 Matthias Clasen - 19 Richard W.M. Jones - 19 Lubomir Kundrak - 18 Rakesh Pandit - 17 Michel Alexandre Salim - 16 Rahul Sundaram - 16 Debarshi Ray - 15 Michael Schwendt - 15 Till Maas - 15 Dennis Gilmore - 14 Ed Hill - 13 Ville Skyttä - 13 S.A. Hartsuiker - 12 Chris Weyl - 11 Ruben Kerkhof - 11 Xavier Bachelot - 11 Nicolas Chauvet (kwizart) - 10 Nicolas Mailhot - 10 Christoph Wickert - 9 Colin Walters - 9 David Nielsen - 9 Jens Petersen - 9 Jesse Keating - 9 Peter Robinson - 9 David Woodhouse - 8 Fabian Affolter - 8 Kevin Kofler - 8 Alex Lancaster - 7 Conrad Meyer - 7 David Lutterkort - 7 Matej Cepl - 7 Nigel Jones - 7 Adam Tkac - 6 Adel Gadllah - 6 Dominik 'Rathann' Mierzejewski - 6 Lillian Angel - 6 Lucian Langa - 6 Matt Domsch - 6 Nuno Santos - 6 Rob Crittenden - 6 Xavier Lamien - 6 Andrew Overholt - 5 Bastien Nocera - 5 Jerry James - 5 Remi Collet - 5 Simon Schampijer - 5 Sven Lankes - 5 Thomas Moschny - 5 Tim Lauridsen - 5 Bill Nottingham - 4 Chris Feist - 4 Daniel Berrange - 4 Denis Leroy - 4 Jan ONDREJ - 4 Jarod Wilson - 4 Johan Cwiklinski - 4 John Mahowald - 4 Jon Stanley - 4 Miroslav Lichvar - 4 Robert Scheck - 4 Terje Røsten - 4 Tyler Owen - 4 Alexander Kahl - 3 Brennan Ashton - 3 Bryan Kearney - 3 Caolan McNamara - 3 Chitlesh GOORAH - 3 David A. Wheeler - 3 Erik van Pienbroek - 3 Hans Ulrich Niedermann - 3 Ian Weller - 3 José Matos - 3 Jussi Lehtola - 3 Levente Farkas - 3 Marco Pesenti Gritti - 3 Michal Marciniszyn - 3 Miroslav Suchy - 3 Owen Taylor - 3 Paul F. 
Johnson - 3 Peter Vrabec - 3 Pierre-Yves Chibon - 3 Rafał Psota - 3 Ralf Corsepius - 3 Ricky Zhou - 3 Robin Norwood - 3 Stepan Kasal - 3 Thomas Fitzsimmons - 3 Tomas Mraz - 3 Toshio Ernie Kuratomi - 3 Benoît Marcelin - 2 Chris Lalancette - 2 Christopher Aillon - 2 Christopher Stone - 2 Dan Horák - 2 Dan Smith - 2 Darryl L. Pierce - 2 Ignacio Vazquez-Abrams - 2 Jeremy Katz - 2 Jonathan Roberts - 2 Josh Boyer - 2 Julian Sikorski - 2 Lorenzo Villani - 2 Michal Nowak - 2 Neil Horman - 2 Nils Philippsen - 2 Orion Poplawski - 2 Paul Howarth - 2 Paulo Roma Cavalcanti - 2 Permaine Cheung - 2 Rahul Bhalerao - 2 Randall Berry - 2 Sergio Pascual - 2 Simon Wesp - 2 Steven Pritchard - 2 Todd Zullinger - 2 Wart - 2 Adam Jackson - 1 Adrian Reber - 1 Alan Dunn - 1 Alexey Torkhov - 1 Allisson Azevedo - 1 Andreas Thienemann - 1 Anthony Green - 1 Axel Thimm - 1 Benjamin Krill - 1 Benjamin Lewis - 1 Bernie Innocenti - 1 Casey Dahlin - 1 Charles R. Anderson - 1 Chris Lumens - 1 Christophe GRENIER - 1 Clint Savage - 1 David Cantrell - 1 David Timms - 1 Deepak Bhole - 1 Deji Akingunola - 1 Felix Kaechele - 1 Gianluca Sforna - 1 Harald Hoyer - 1 Hemant Goyal - 1 Ian Chapman - 1 Jaroslav Reznik - 1 Jef Spaleta - 1 Jochen Schmitt - 1 Jonathan Dieter - 1 Karol Trzcionka - 1 Karsten Hopp - 1 Luke Macken - 1 Luya Tshimbalanga - 1 Mads Villadsen - 1 Marc Wiriadisastra - 1 Martin Sourada - 1 Matt Wringe - 1 Matthias Saou - 1 Michal Schmidt - 1 Michał Bentkowski - 1 Milos Jakubicek - 1 Miloslav Trmac - 1 Patrick Monnerat - 1 Paul Nasrat - 1 Paul Wouters - 1 Pavel Lisý - 1 Pierre-Yves - 1 Robert M. Albrecht - 1 Roland McGrath - 1 Roy Rankin - 1 Sebastian Vahl - 1 Seth Vidal - 1 Shawn Starr - 1 Stefan Posdzich - 1 Stephen Warren - 1 Steve Grubb - 1 Steven M. Parrish - 1 Terje Røsten - 1 Thibault North - 1 Thorsten Leemhuis - 1 Tomas Heinrich - 1 Tomeu Vizoso - 1 Ville Skyttä - 1 Ville-Pekka Vainio - 1 Warren Togami - 1 _pjp_ - 1
Review Requests: 1813
Merge Reviews: 116
Total Reviews Completed: 1978
Thanks to everyone that spent time working on package reviews during 2008.
Later, /B
On Thu, Jan 01, 2009 at 01:26:39PM -0500, Brian Pepple wrote:
Hi all,
The top ten FAS account holders who have completed reviewing "Package review" components on bugzilla for the year ending December 31st, 2008 were Parag AN(पराग), Jason Tibbitts, Mamoru Tasaka, Manuel Wolfshant, Kevin Fenzi, Jon Ciesla, Brian Pepple, Dan Horák, Patrice Dumas, and Marek Mahut. Below is the number of completed package reviews done during 2008.
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
There was a previous attempt at a "Fedora Award," but it was based on completely subjective measures whereas this reward would be (1) less hoopla involved, and (2) based on the completely objective measure of package reviews, which are both sorely needed in the project and obviously well-connected to our mission of advancing free software -- in this case, by getting more of it included in the distribution.
Paul W. Frields wrote:
On Thu, Jan 01, 2009 at 01:26:39PM -0500, Brian Pepple wrote:
Hi all,
The top ten FAS account holders who have completed reviewing "Package review" components on bugzilla for the year ending December 31st, 2008 were Parag AN(पराग), Jason Tibbitts, Mamoru Tasaka, Manuel Wolfshant, Kevin Fenzi, Jon Ciesla, Brian Pepple, Dan Horák, Patrice Dumas, and Marek Mahut. Below is the number of completed package reviews done during 2008.
I'd like to arrange for some sort of reward for the top 10 reviewers.
Good idea! I'm #11 ;-)
We could turn that into an annual event.
There was a previous attempt at a "Fedora Award," but it was based on completely subjective measures whereas this reward would be (1) less hoopla involved, and (2) based on the completely objective measure of package reviews, which are both sorely needed in the project and obviously well-connected to our mission of advancing free software -- in this case, by getting more of it included in the distribution.
Given the overall sentiment last time: how are you going to reward people doing good work in areas that don't have this kind of statistics? I'm afraid the ones that had something to complain about with the Fedora Award will find something to complain about this time.
Why not combine whatever statistics we can pull from wherever, have these people put their ranking on their personal wiki page (if they even want to), and elect who's getting the Fedora Award? We could arrange for the winners in year N not to be eligible in year N+1, too.
Just a thought ;-)
-Jeroen
On Fri, Jan 02, 2009 at 05:32:43PM +0100, Jeroen van Meeuwen wrote:
Paul W. Frields wrote:
On Thu, Jan 01, 2009 at 01:26:39PM -0500, Brian Pepple wrote:
Hi all,
The top ten FAS account holders who have completed reviewing "Package review" components on bugzilla for the year ending December 31st, 2008 were Parag AN(पराग), Jason Tibbitts, Mamoru Tasaka, Manuel Wolfshant, Kevin Fenzi, Jon Ciesla, Brian Pepple, Dan Horák, Patrice Dumas, and Marek Mahut. Below is the number of completed package reviews done during 2008.
I'd like to arrange for some sort of reward for the top 10 reviewers.
Good idea! I'm #11 ;-)
We could just as easily make this 5, or 15. The number isn't that important to me personally, and the community can help decide the cutoff point.
We could turn that into an annual event.
There was a previous attempt at a "Fedora Award," but it was based on completely subjective measures whereas this reward would be (1) less hoopla involved, and (2) based on the completely objective measure of package reviews, which are both sorely needed in the project and obviously well-connected to our mission of advancing free software -- in this case, by getting more of it included in the distribution.
Given the overall sentiment last time: how are you going to reward people doing good work in areas that don't have this kind of statistics? I'm afraid the ones that had something to complain about with the Fedora Award will find something to complain about this time.
There was a lot more to complain about with that award being (1) decided out of public view, (2) not based on objective criteria, and (3) publicly trumpeted as an "award" and not a "reward." I see this as more of a bounty or a thank-you, not an award that promotes specific people as somehow having more value to the project than others.
Any team in Fedora is free to find objective metrics and show achievement based on those metrics. And I'd be open to finding a way to thank people based on their objective achievements.
Why not combine whatever statistics we can pull from wherever, have these people put their ranking on their personal wiki page (if they even want to), and elect who's getting the Fedora Award? We could arrange for the winners in year N not to be eligible in year N+1, too.
Elections devalue the point of the reward -- that it's based on an objective measure and not biased by popularity, group awareness, or any number of other social considerations. Those latter considerations were some of the biggest complaints about having awards bestowed on specific people -- as opposed to a reward for measurable work completed. I think it's worthwhile to recognize people who are measurably contributing to the fulfillment of our mission, in this case by getting more software included in Fedora with a token reward.
Paul W. Frields wrote:
I'd like to arrange for some sort of reward for the top 10 reviewers.
Since giving away a free Fedora DVD probably wouldn't cut it (unless it's nicely labelled and packaged, or something unique), I was thinking: how about access to a paid Red Hat person for a day, to move forward the reviewer's favourite (most annoying, etc.) Fedora bug or RFE?
DaveT.
2009/1/2 Paul W. Frields stickster@gmail.com:
We could just as easily make this 5, or 15. The number isn't that important to me personally, and the community can help decide the cutoff point.
Do a histogram, fit an exponential distribution to it, maybe a Poisson distribution... the distribution of "rare events". Define the number of awards based on the variance, mean, median, or some combination thereof.
This, however, will not solve the quantity-over-quality problem.
-jef"You know what would be fun? A histogram of number of reviews weighted by the number of non-whitespace characters in the specfile or weighted by the total size of the sources associated with the package"spaleta
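For the curious, Jef's fitting idea can be sketched roughly as follows. This is a hypothetical helper, not anything the project uses: the name `award_cutoff`, the `k` parameter, and the invented long-tail data are mine, and a real analysis would fit the distribution properly. The idea: for an exponential distribution the maximum-likelihood rate is 1/mean and the standard deviation equals the mean, so a simple "rare event" cutoff is the empirical mean plus a few standard deviations.

```python
import statistics

def award_cutoff(review_counts, k=1.0):
    # For an exponential distribution the MLE rate is 1/mean and the
    # standard deviation equals the mean, so "mean + k * stdev" over
    # the empirical data is one crude "rare event" threshold.
    mean = statistics.fmean(review_counts)
    stdev = statistics.pstdev(review_counts)
    return mean + k * stdev

# Top-ten counts from Brian's report, plus an invented long tail.
counts = [319, 264, 158, 95, 47, 45, 42, 40, 36, 31] + [5] * 20 + [1] * 50
cutoff = award_cutoff(counts)
winners = [c for c in counts if c >= cutoff]  # counts above the cutoff
```

With `k=1` on this sample data only the four busiest reviewers clear the bar; raising or lowering `k` moves the cutoff without anyone having to hand-pick "top 5 vs. top 15".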
On Fri, Jan 02, 2009 at 05:33:43PM +0100, Michael Schwendt wrote:
On Fri, 2 Jan 2009 11:09:56 -0500, Paul wrote:
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
What are your plans on avoiding "quantity instead of quality" effects?
How does the packaging community currently ensure that packages reviewed are of sufficient quality?
2009/1/2 Paul W. Frields stickster@gmail.com:
On Fri, Jan 02, 2009 at 05:33:43PM +0100, Michael Schwendt wrote:
On Fri, 2 Jan 2009 11:09:56 -0500, Paul wrote:
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
What are your plans on avoiding "quantity instead of quality" effects?
How does the packaging community currently ensure that packages reviewed are of sufficient quality?
-- Paul W. Frields http://paul.frields.org/ gpg fingerprint: 3DA6 A0AC 6D58 FEC4 0233 5906 ACDB C937 BD11 3717 http://redhat.com/ - - - - http://pfrields.fedorapeople.org/ irc.freenode.net: stickster @ #fedora-docs, #fedora-devel, #fredlug
My thoughts on this are that as we progress further down the food chain, the reviews are more likely to be in a quantity-vs-quality battle. In fact, with these stats, when someone all of a sudden jumps up, they might then get a few looks at their reviews. Another thought would be to have the sponsor, every month or two, verify at least one of the bugs of each of their minions. As a packager myself, I would actually appreciate this; it keeps everyone in the loop on little changes by the packaging committee.
--Brennan Ashton
On Fri, Jan 02, 2009 at 11:47:04AM -0500, Paul W. Frields wrote:
On Fri, Jan 02, 2009 at 05:33:43PM +0100, Michael Schwendt wrote:
On Fri, 2 Jan 2009 11:09:56 -0500, Paul wrote:
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
What are your plans on avoiding "quantity instead of quality" effects?
How does the packaging community currently ensure that packages reviewed are of sufficient quality?
I don't know if it was what Michael wanted to say, but there are reviews that are very easy and reviews that are quite hard, counting these the same may be misleading. However taking that into account requires a rating of the submission which is not obvious.
Another issue is that in some cases commentators may do more work in the review than the one approving the review request. This is also not something that is easily measured, though...
In any case, as long as those numbers are not misrepresented as the amount of work done through reviews or the like, but only plainly as review requests accepted, everything is fine.
-- Pat
On Fri, Jan 02, 2009 at 06:31:26PM +0100, Patrice Dumas wrote:
I don't know if it was what Michael wanted to say, but there are reviews that are very easy and reviews that are quite hard, counting these the same may be misleading. However taking that into account requires a rating of the submission which is not obvious.
Another issue is that in some cases commentators may do more work in the review than the one approving the review request. This is also not something that is easily measured, though...
In any case, as long as those numbers are not misrepresented as the amount of work done through reviews or the like, but only plainly as review requests accepted, everything is fine.
Precisely. This is not about rewarding people for the most work done, simply for a number of package reviews. One very detailed package review might be a lot of work for an experienced package reviewer. Similarly, one simple package review might just as easily be a lot of work for an enthusiastic but inexperienced package reviewer. Both may be completed equally successfully.
I don't see a way of equitably treating the amount of work done, and therefore this reward is not based on that measurement. And the fact that amounts of work may differ should not stop us from saying thank you to contributors doing work.
I think concerns of people gaming the system are completely out of proportion on a risk vs. reward basis. And if anyone's involved in Fedora processes purely out of an interest in being materially rewarded, I would say that person's priorities are somewhat askew!
On Fri, 2 Jan 2009 12:49:36 -0500, Paul wrote:
On Fri, Jan 02, 2009 at 06:31:26PM +0100, Patrice Dumas wrote:
I don't know if it was what Michael wanted to say, but there are reviews that are very easy and reviews that are quite hard, counting these the same may be misleading. However taking that into account requires a rating of the submission which is not obvious.
Another issue is that in some cases commentators may do more work in the review than the one approving the review request. This is also not something that is easily measured, though...
In any case, as long as those numbers are not misrepresented as the amount of work done through reviews or the like, but only plainly as review requests accepted, everything is fine.
Precisely. This is not about rewarding people for the most work done, simply for a number of package reviews. One very detailed package review might be a lot of work for an experienced package reviewer. Similarly, one simple package review might just as easily be a lot of work for an enthusiastic but inexperienced package reviewer. Both may be completed equally successfully.
I don't see a way of equitably treating the amount of work done, and therefore this reward is not based on that measurement. And the fact that amounts of work may differ should not stop us from saying thank you to contributors doing work.
Which is what Brian's OP has done. It has listed _all_ the people who've contributed package reviews in 2008, sorted by number of reviews. Why is that not enough? Why do you want to apply a system where the two reviewers from your example above won't see any official "thank you"?
I think concerns of people gaming the system are completely out of proportion on a risk vs. reward basis.
Still: somebody who [unintentionally] hunts down dozens of tiny Perl module packages would be more likely to enter the Top 10 [and possibly unintentionally] than somebody who fights a 20K spec file for a package that requires lots of work.
On Fri, 2 Jan 2009 11:47:04 -0500, Paul wrote:
On Fri, Jan 02, 2009 at 05:33:43PM +0100, Michael Schwendt wrote:
On Fri, 2 Jan 2009 11:09:56 -0500, Paul wrote:
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
What are your plans on avoiding "quantity instead of quality" effects?
How does the packaging community currently ensure that packages reviewed are of sufficient quality?
Are you aware of any efforts to do re-reviews? ;)
I'm not. With the current system, a single reviewer is enough to approve a package. That's all. One could argue that the packager (who submits the review request) is a second reviewer, but often (at least in my experience) this is not the case. And after all, once a package is approved, the packager can modify it and even violate the review guidelines as long as nobody notices. It happens regularly. We've even had duplicate packages in the collection (using different names).
Once packaging bugs are found, hardly anyone looks up old review tickets to (1) find out whether an issue was missed during review and to (2) inform the reviewer about an issue.
Twenty reviews of small packages, which are trivial to review (or even flawless to begin with because the packager is experienced!), or reviews of package rename requests, may be less of an achievement than one review of a big beast with dependencies, which has been waiting in the review queue for many months and needed lots of work.
On Fri, 2009-01-02 at 18:36 +0100, Michael Schwendt wrote:
Twenty reviews of small packages, which are trivial to review (or even flawless to begin with because the packager is experienced!), or reviews of package rename requests, may be less of an achievement than one review of a big beast with dependencies, which has been waiting in the review queue for many months and needed lots of work.
Totally agree with you, but I don't think we have any way currently to differentiate the difficulty between reviews that we could extract into a report.
Later, /B
2009/1/2 Brian Pepple bpepple@fedoraproject.org:
On Fri, 2009-01-02 at 18:36 +0100, Michael Schwendt wrote:
Twenty reviews of small packages, which are trivial to review (or even flawless to begin with because the packager is experienced!), or reviews of package rename requests, may be less of an achievement than one review of a big beast with dependencies, which has been waiting in the review queue for many months and needed lots of work.
Totally agree with you, but I don't think we have any way currently to differentiate the difficulty between reviews that we could extract into a report.
The number of comments the reviewer himself added to the review ticket could give a hint about how complicated the review was.
- Thomas
2009/1/3 Thomas Moschny thomas.moschny@gmail.com:
2009/1/2 Brian Pepple bpepple@fedoraproject.org:
On Fri, 2009-01-02 at 18:36 +0100, Michael Schwendt wrote:
Twenty reviews of small packages, which are trivial to review (or even flawless to begin with because the packager is experienced!), or reviews of package rename requests, may be less of an achievement than one review of a big beast with dependencies, which has been waiting in the review queue for many months and needed lots of work.
Totally agree with you, but I don't think we have any way currently to differentiate the difficulty between reviews that we could extract into a report.
The number of comments the reviewer himself added to the review ticket could give a hint about how complicated the review was.
Sometimes a reviewer does not take note of what he has tested [assuming the reporter has already done a clever job] unless the package fails that test. So the number of comments is not a good hint either.
2009/1/2 Paul W. Frields stickster@gmail.com:
On Thu, Jan 01, 2009 at 01:26:39PM -0500, Brian Pepple wrote:
Hi all,
The top ten FAS account holders who have completed reviewing "Package review" components on bugzilla for the year ending December 31st, 2008 were Parag AN(पराग), Jason Tibbitts, Mamoru Tasaka, Manuel Wolfshant, Kevin Fenzi, Jon Ciesla, Brian Pepple, Dan Horák, Patrice Dumas, and Marek Mahut. Below is the number of completed package reviews done during 2008.
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
There was a previous attempt at a "Fedora Award," but it was based on completely subjective measures whereas this reward would be (1) less hoopla involved, and (2) based on the completely objective measure of package reviews, which are both sorely needed in the project and obviously well-connected to our mission of advancing free software -- in this case, by getting more of it included in the distribution.
I have been working on writing a service that monitors package reviews and triage stats, in an attempt to automate these types of reports with a very customizable TurboGears interface. I have had the back end running on my server for the last few weeks, and it seems to be running just fine. Right now I am working on designing the front end. Please fill me in on the types of reports people would like; I hope to have the web interface up some time in the next month and a half.
--Brennan Ashton
On Fri, 2009-01-02 at 11:09 -0500, Paul W. Frields wrote:
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
I think that would be a great idea.
Looking at the stats the top 5 reviewers did something like 44% of all package reviews. One of the things I'd like to work on improving in 2009 is to increase the number of reviews done by the folks that did 10 or fewer reviews in 2008, since this is where the bulk of our package reviewers lie.
Later, /B
On Fri, Jan 02, 2009 at 12:14:57PM -0500, Brian Pepple wrote:
On Fri, 2009-01-02 at 11:09 -0500, Paul W. Frields wrote:
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
I think that would be a great idea.
Looking at the stats the top 5 reviewers did something like 44% of all package reviews. One of the things I'd like to work on improving in 2009 is to increase the number of reviews done by the folks that did 10 or fewer reviews in 2008, since this is where the bulk of our package reviewers lie.
I don't want to short the "long tail" of reviewers, so perhaps we could arrange for a smaller but still significant reward for people who did more than "N" reviews.
On Fri, 2009-01-02 at 12:32 -0500, Paul W. Frields wrote:
On Fri, Jan 02, 2009 at 12:14:57PM -0500, Brian Pepple wrote:
Looking at the stats the top 5 reviewers did something like 44% of all package reviews. One of the things I'd like to work on improving in 2009 is to increase the number of reviews done by the folks that did 10 or fewer reviews in 2008, since this is where the bulk of our package reviewers lie.
I don't want to short the "long tail" of reviewers, so perhaps we could arrange for a smaller but still significant reward for people who did more than "N" reviews.
Yeah, we briefly discussed in the Package Review SIG perhaps having a random drawing among folks who completed X number of reviews over a period of time (something like 3 months), and sending them some form of Fedora swag.
Later, /B
On Fri, Jan 02, 2009 at 01:02:39PM -0500, Brian Pepple wrote:
On Fri, 2009-01-02 at 12:32 -0500, Paul W. Frields wrote:
On Fri, Jan 02, 2009 at 12:14:57PM -0500, Brian Pepple wrote:
Looking at the stats the top 5 reviewers did something like 44% of all package reviews. One of the things I'd like to work on improving in 2009 is to increase the number of reviews done by the folks that did 10 or fewer reviews in 2008, since this is where the bulk of our package reviewers lie.
I don't want to short the "long tail" of reviewers, so perhaps we could arrange for a smaller but still significant reward for people who did more than "N" reviews.
Yeah, we briefly discussed in the Package Review SIG perhaps having a random drawing among folks who completed X number of reviews over a period of time (something like 3 months), and sending them some form of Fedora swag.
Perhaps this would be a good compromise -- using randomization to eliminate bias that favors quantity of simpler reviews.
Paul W. Frields wrote:
Perhaps this would be a good compromise -- using randomization to eliminate bias that favors quantity of simpler reviews.
But it also eliminates the incentive to continue doing reviews in that year once you hit the magical threshold 'N'.
Kevin Kofler
On 01/04/2009 01:16 AM, Kevin Kofler wrote:
Paul W. Frields wrote:
Perhaps this would be a good compromise -- using randomization to eliminate bias that favors quantity of simpler reviews.
But it also eliminates the incentive to continue doing reviews in that year once you hit the magical threshold 'N'.
Come on, Kevin, if someone does reviews just for the "award"/"reward"/"prize"/whatever-else-we-call-it, [s]he should not be here in the first place. We do it for the community, not for personal pride. You've reached the threshold? Fine. The community thanks you. Now, go on.
Kevin Kofler kevin.kofler@chello.at wrote:
Paul W. Frields wrote:
Perhaps this would be a good compromise -- using randomization to eliminate bias that favors quantity of simpler reviews.
But it also eliminates the incentive to continue doing reviews in that year once you hit the magical threshold 'N'.
Don't use a threshold then, just give each reviewer one ticket in the lottery for each package reviewed (but make sure nobody gets more than one prize). Or only give out tickets as above to anybody who did more than N. Or give out max(0, reviews - N + 1) tickets.
There are many variations possible (give an extra ticket in the lottery to new reviewers, give extra tickets for reviewing a package that person hasn't reviewed before, give extra tickets for packages for which that person is the lone reviewer, ...)
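As a toy illustration of the ticket formula above (the function name, default threshold, and prize count are invented for the example, not anything decided in this thread), a weighted draw where each reviewer holds max(0, reviews - N + 1) tickets and nobody wins twice might look like:

```python
import random

def draw_winners(reviews, n=5, prizes=3, seed=None):
    # Each reviewer holds max(0, count - n + 1) tickets: reaching the
    # threshold n earns the first ticket, and every further review
    # earns one more, so there is no incentive to stop at n.
    tickets = [name
               for name, count in reviews.items()
               for _ in range(max(0, count - n + 1))]
    rng = random.Random(seed)
    winners = []
    while tickets and len(winners) < prizes:
        pick = rng.choice(tickets)
        winners.append(pick)
        # Nobody gets more than one prize: drop all of their tickets.
        tickets = [t for t in tickets if t != pick]
    return winners
```

A reviewer below the threshold holds no tickets at all, while the extra-ticket variations (new reviewers, lone reviewers, etc.) would just change how `tickets` is built.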
On Fri, 02 Jan 2009 12:14:57 -0500, Brian wrote:
One of the things I'd like to work on improving in 2009 is to increase the number of reviews done by the folks that did 10 or fewer reviews in 2008, since this is where the bulk of our package reviewers lie.
Better: increase the number of people who've done 10 or fewer reviews in 2008.
There are first-time reviewers, and people who are afraid of making mistakes in reviews and who therefore don't review anything, to stay on the safe side. Encourage them to ask for a review of their review prior to approval. Reward active contributors by making them a "sponsor", by giving them access to the "provenpackagers" group, and by waiving the mandatory review requests for package renames.
Michael Schwendt wrote:
There are first-time reviewers, and people who are afraid of making mistakes in reviews and who therefore don't review anything, to stay on the safe side.
Perhaps we could ask that people with little packaging activity after having their package accepted perform at least one review per year. This sort of says "I'm keeping up with the guidelines".
Encourage them to ask for a review of their review prior to approval. Reward active contributors with becoming a
I like this bit, even though it's not really related to the reward process. In e.g. Mozilla, changes to the source code require the developer to put patches on a bug and request review; the reviewer makes comments to help improve the patch, and the developer updates it. Eventually the reviewer may sign off; then a super-reviewer needs to be found who oversees a major area of the source. Only once both the r+ and sr+ sign-offs have been given are commits to CVS allowed.
In Fedora packaging, I'm sure there are people who are experts in general areas like Java packaging, the GNOME desktop, multimedia, etc., who could be called on to check over a package that the main reviewer sees as otherwise ready to be accepted.
DaveT.
2009/1/3 Paul W. Frields stickster@gmail.com:
I'd like to arrange for some sort of reward for the top 10 reviewers. We could turn that into an annual event.
Perhaps a silly idea, but how about a lottery? One ticket per package (or unit of work, if you can figure that out) and randomly give X prizes. I think it'll be more accessible for everyone, a bit more fun and a little less competitive.
With so many ideas coming up, I would rather like a program taking a dictionary (key: <Reviewer Name>, value: <number_of_reviews_done>) and the number of awards as input, and giving selected candidates as output based on the different algorithms proposed. I am very bad at probability; in case someone can take up the problem and suggest even pseudocode, it would help.
The report-generating script is already providing such a dictionary.
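A minimal sketch of the dictionary-in, candidates-out program requested above. The function name, the alphabetical tie-break, and the two strategy labels are my own choices for illustration; it implements two of the algorithms proposed in this thread, a straight top-N cut and a one-ticket-per-review lottery with at most one prize per person.

```python
import random

def select_candidates(reviews, n_awards, method="top", seed=None):
    # reviews: {reviewer_name: number_of_reviews_done}
    if method == "top":
        # Highest counts win; ties broken alphabetically.
        ranked = sorted(reviews, key=lambda name: (-reviews[name], name))
        return ranked[:n_awards]
    if method == "lottery":
        # One ticket per completed review, at most one prize each.
        tickets = [name for name, count in reviews.items()
                   for _ in range(count)]
        rng = random.Random(seed)
        winners = []
        while tickets and len(winners) < n_awards:
            pick = rng.choice(tickets)
            winners.append(pick)
            tickets = [t for t in tickets if t != pick]
        return winners
    raise ValueError("unknown method: %s" % method)

# Example with a few (real) FAS nicknames and their 2008 counts:
reviewers = {"tibbs": 264, "tasaka": 158, "wolfy": 95}
top = select_candidates(reviewers, 2)                       # deterministic
lucky = select_candidates(reviewers, 2, "lottery", seed=2008)
```

Other proposed algorithms (thresholds, extra tickets for new reviewers, ...) would slot in as additional `method` branches operating on the same dictionary.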
On Sat, Jan 3, 2009 at 6:05 PM, Eric Springer erikina@gmail.com wrote:
Perhaps a silly idea, but how about a lottery? One ticket per package (or unit of work, if you can figure that out) and randomly give X prizes. I think it'll be more accessible for everyone, a bit more fun and a little less competitive.
I think you're right. Any way we cut it, awards are somewhat arbitrary, since we aren't going to get the perfect metric of what the workload really looks like. I love plotting things, just because I love data, but I don't necessarily like personalizing the data. A lottery encodes the arbitrariness into the "award" process to some extent.
Taking this idea further, we could invest in one of those big air chambers like you see on cheesy game shows and fill it with little scraps of paper, each bearing a reviewer's name; one scrap of paper is a ticket in the lottery. Paul walks into the chamber, the air turns on, and he randomly grabs X flying scraps of paper. All of this on video, of course.
-jef"Is it wrong that I think of the packaging process as akin to the old Double Dare obstacle course, 'the messiest minute on television' ?"spaleta