On Thu, 2009-04-02 at 17:00 +0200, Christoph Höger wrote:
a) we should _always_ have USB images ready. That should lower the testing costs to zero.
Well, we don't provide USB images for releases, on the basis that it's a waste of space and bandwidth since you can simply convert them with the script. I don't see why that reasoning doesn't hold for test days.
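For anyone who hasn't done the conversion, the script in question is livecd-iso-to-disk from the livecd-tools package. A minimal sketch, written as a dry-run wrapper so nothing gets overwritten while you double-check the target (the ISO name and device here are placeholders; confirm the real device with 'fdisk -l' first, and see the script's --help for the full option list):

```shell
# Hypothetical dry-run wrapper: prints the livecd-iso-to-disk command it
# would run instead of executing it, so a mistyped device can't clobber a
# disk. --format and --reset-mbr are the usual flags for a fresh stick.
make_usb_cmd() {
    iso=$1
    device=$2    # e.g. /dev/sdb1 -- verify with 'fdisk -l' before running for real!
    echo "livecd-iso-to-disk --format --reset-mbr $iso $device"
}

# Example (placeholder ISO name and device):
make_usb_cmd Fedora-11-Beta-i686-Live.iso /dev/sdb1
```

When you're satisfied the device is right, run the printed command as root.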
b) we need public attention (aka PR). I already tried to get that into some German online media and Phoronix for the nouveau test day, but with little success (only prolinux.de reacted). I guess no one will make a headline for every test day, so it would be a good idea to draw attention to the test days (and the schedule) in the alpha release notes (and point the media at those). Are there any ambassadors with PR experience here?
Phoronix did run a story a few days later. I did get the nouveau day onto OS News and Linux Today. It was also sent to digg but didn't get voted up very high.
Test Days are already announced on -devel-list, -test-list, fedora forums, my blog, and FWN.
Interestingly, I didn't promote the radeon test day to general-interest media (only the Fedora-specific resources mentioned), and response was pretty similar to the response for the nouveau test day, which did hit some general-purpose media sites.
I don't think it's a good idea to spam general-interest news channels with all our test days; we do at least one a week so it would get irritating, and quite a few aren't really realistically going to get any interest outside the Fedora project. I do plan to do news pumping on specific days which are likely to have significant interest beyond the Fedora community, like the nouveau day.
c) make the test cases as automated as possible. Offer sending the smolt profile (smoltGUI) and run test cases directly from the desktop. We should also add a link (or the document itself) to the desktop explaining how to test what, and why. If possible, generated test results should also be posted automagically to the results page, or, if tests fail, bug reports could be created by Bug Buddy.
These would all be nice, but we *are* running on a one-week cycle here, so it may be hard to get it all done. Assistance welcome. =) It's pretty hard to automate some elements of some test cases, but some could certainly be done with scripts etc.
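As a concrete example of the kind of partial automation that could be scripted quickly: check that the driver under test actually loaded, and print a result line ready to paste into the results page. This is only a sketch under stated assumptions -- the tester name, test case name, and the {{result|...}} row format are made up for illustration; the real results pages use their own table layouts:

```shell
# Sketch: did the module under test (here nouveau, as on the nouveau
# Test Day) actually load? Reads a /proc/modules-style file so it can
# be tested against canned input.
module_loaded() {
    # $1 = module name, $2 = path to a /proc/modules-style file
    grep -q "^$1 " "$2" 2>/dev/null
}

# Render one hypothetical MediaWiki results-table row.
result_row() {
    # $1 = tester, $2 = test case, $3 = pass/fail
    echo "| $1 || $2 || {{result|$3}}"
}

# Demo run against the live system (prints a fail row if the file or
# module is absent):
if module_loaded nouveau /proc/modules; then
    result_row adamw QA:Testcase_nouveau_basic pass
else
    result_row adamw QA:Testcase_nouveau_basic fail
fi
```

Something this small still leaves the human judgment (does the screen actually look right?) to the tester, which is probably the right split for graphics test days.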
It's worth noting the current test case and results system (especially the results system...) are fairly ad-hoc. We are working within the QA department on more advanced tools for managing test cases and results.
This discussion should probably move to fedora-test-list. CCing there.
On Thu, 2009-04-02 at 11:11 -0700, Adam Williamson wrote:
c) make the test cases as automated as possible. Offer sending the smolt profile (smoltGUI) and run test cases directly from the desktop. We should also add a link (or the document itself) to the desktop explaining how to test what, and why. If possible, generated test results should also be posted automagically to the results page, or, if tests fail, bug reports could be created by Bug Buddy.
These would all be nice, but we *are* running on a one-week cycle here, so it may be hard to get it all done. Assistance welcome. =) It's pretty hard to automate some elements of some test cases, but some could certainly be done with scripts etc.
Oh, I forgot to mention another consideration here: I'm trying to write these test cases to be pretty future-proof, so we could use them virtually unmodified for F12, F13, F14...after all, the URLs don't have expire-by dates and the test cases will still be hanging around in 10 years most likely. I suspect scripts are liable to go stale faster than instructions.
On Thu, 2009-04-02 at 11:45 -0700, Adam Williamson wrote:
On Thu, 2009-04-02 at 11:11 -0700, Adam Williamson wrote:
c) make the test cases as automated as possible. Offer sending the smolt profile (smoltGUI) and run test cases directly from the desktop. We should also add a link (or the document itself) to the desktop explaining how to test what, and why. If possible, generated test results should also be posted automagically to the results page, or, if tests fail, bug reports could be created by Bug Buddy.
These would all be nice, but we *are* running on a one-week cycle here, so it may be hard to get it all done. Assistance welcome. =) It's pretty hard to automate some elements of some test cases, but some could certainly be done with scripts etc.
Oh, I forgot to mention another consideration here: I'm trying to write these test cases to be pretty future-proof, so we could use them virtually unmodified for F12, F13, F14...after all, the URLs don't have expire-by dates and the test cases will still be hanging around in 10 years most likely. I suspect scripts are liable to go stale faster than instructions.
My initial concern was the lack of defined test cases. Thankfully that problem is slowly going away (see https://fedoraproject.org/wiki/Category:Test_Cases). I'm always in favor of automation where possible, but I try to make sure we know what we automate before we dive into the shell.
However, with the test case "library" expanding, it isn't a bad idea to begin thinking about where and how we will store these tests as scripts/automation come online. Depending on the test target, automation may already exist, so leveraging any off-the-shelf test automation is always a "good thing." Some of these questions will naturally tie into the beaker project as it matures, but there's no reason we can't discuss/debate these issues now.
Thanks, James