Hi, everyone. So, in the recent debate about the update process it again became clear that we were lacking a good process for providing package-specific test instructions, and particularly specific instructions for testing critical path functions.
I've been working on a process for this, and now have two draft Wiki pages up for review which together describe it:
https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_test_case_creation https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_plan_...
the first isn't particularly specific to this, but it was a prerequisite that I discovered was missing: it's a guide to test case creation in general, explaining the actual practical process of how you create a test case, and the best principles to consider in doing it.
The second is what's really specific to this subject. It describes how to create a set of test cases for a particular package, and a proposed standardized categorization scheme which will allow us to denote test cases as being associated with specific packages, and also denote them as concerning critical path functionality.
Given that mediawiki has a handy API which also allows you to deal with categories, this should make it easy to both manually and programmatically derive a list of test cases for a given package, and a list of *critical path* test cases for a given package. You can do this manually, but I also envision Bodhi and fedora-easy-karma utilizing the API so that when an update is pushed for a package for which test cases have been created under this system, they will link to those test cases; and when an update is pushed for a critical path package, they will be able to display separately (and more prominently, perhaps) the list of test cases relevant to the critical path functionality of the package.
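As a rough sketch of the kind of query involved (action=query / list=categorymembers is the standard MediaWiki API for listing a category's members; the category name below is just an illustration of the proposed naming scheme), a tool could build the request like so:

```shell
#!/bin/bash
# Build a MediaWiki API query for the members of a package test case
# category. The category name is an example of the proposed scheme.
api="https://fedoraproject.org/w/api.php"
category="Category:Package_xorg-x11-drv-ati_test_cases"

url="$api?action=query&list=categorymembers&cmtitle=$category&cmlimit=500&format=json"
echo "$url"

# Actually fetching the member list would then be, e.g.:
#   curl -s "$url"
```

A critical-path-only list would work the same way against a category such as Critical_path_test_cases.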
Comments, suggestions and rotten fruit welcome :) I'm particularly interested in feedback from package maintainers and QA contributors in whether you feel, just after reading these pages, that you'd be confident in going ahead and creating some test cases, or if there's stuff that's scary or badly explained or that you feel like something is missing and you wouldn't know where to start, etc.
The trac ticket on this is probably valuable for background, explaining why some things in the proposal are the way they are:
https://fedorahosted.org/fedora-qa/ticket/154
it also mentions one big current omission: dependencies. For instance, it would be very useful to be able to express 'when yum is updated, we should also run the PackageKit test plan' (because it's possible that a change in yum could be fine 'within itself', and all the yum test cases pass, but could break PackageKit). That's rather complex, though, especially with a Wiki-based system. If anyone has any bright ideas on how to achieve this, do chip in! Thanks.
Thanks Adam for getting the ball rolling on this topic.
On Tue, 2010-12-21 at 17:11 +0000, Adam Williamson wrote:
Hi, everyone. So, in the recent debate about the update process it again became clear that we were lacking a good process for providing package-specific test instructions, and particularly specific instructions for testing critical path functions.
I've been working on a process for this, and now have two draft Wiki pages up for review which together describe it:
https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_test_case_creation https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_plan_...
the first isn't particularly specific to this, but it was a prerequisite that I discovered was missing: it's a guide to test case creation in general, explaining the actual practical process of how you create a test case, and the best principles to consider in doing it.
Nice job here, this is something that's difficult to explain if you've done it a lot, but I think you've captured the key points. If possible, it might be helpful to highlight a few existing examples that stand out for the different characteristics you mention (comprehensive, but able to stand the test of time).
Another thought, any reason that we wouldn't want to keep all wiki tests in the QA: namespace (and with the prefix QA:Testcase_)? The door is left open for other names, I wonder if we want to cut that off ahead of time to keep our sanity by having all tests in the same namespace?
The page also talks about using [[Category:Test_Cases]]. I worry if we are too lax in categorizing new tests we'll end up with a large amount of random tests in the main [[Category:Test_Cases]] making it a maintenance nightmare to cleanup that category. Should we instead direct users to your other page (https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_plan_...) for guidance on categorizing test cases?
The second is what's really specific to this subject. It describes how to create a set of test cases for a particular package, and a proposed standardized categorization scheme which will allow us to denote test cases as being associated with specific packages, and also denote them as concerning critical path functionality.
I think I mentioned this previously: in the section 'Preparation', I appreciate the distinction of 'core' and 'extended', but it resonates with me better in the context of test "priority". I don't see why we can't keep using the terms 'core' and 'extended'; I just want to clarify their purpose. They're intended to add some sense of execution priority to a list of test cases, right? Where critpath comes first, then core, then extended, then other? Also, you describe categorizing/grouping test cases in more detail below, maybe just link to that instead?
In the section 'Simple (required)', would it help to add a link to http://fedoraproject.org/wiki/BugZappers/CorrectComponent#Which_component_is... (or a similar page)? Something to help testers find the right src.rpm name of the component under test. Side note: this might also be a maintenance task we can define, where I, or anyone interested, could manually scrub (or script a search of) Category:Test_Cases for incorrectly named category pages.
Also in 'Simple (required)', we don't tell the author to add their 'Category:Package_${sourcename}_test_cases' to 'Category:Test_Cases'. I think we want all newly created package categories anchored under 'Category:Test_Cases'.
General comment. I know we've got an eye towards integrating this work with bodhi and/or f-e-k. Until that work is complete, I wonder if those notes will introduce confusion/speculation. Should we leave out the bits about possible future tool integration until such support is active?
Given that mediawiki has a handy API which also allows you to deal with categories, this should make it easy to both manually and programmatically derive a list of test cases for a given package, and a list of *critical path* test cases for a given package. You can do this manually, but I also envision Bodhi and fedora-easy-karma utilizing the API so that when an update is pushed for a package for which test cases have been created under this system, they will link to those test cases; and when an update is pushed for a critical path package, they will be able to display separately (and more prominently, perhaps) the list of test cases relevant to the critical path functionality of the package.
I should add too that we've explored the mediawiki remote API, which other scripts/tools already use to extract data from the wiki, and are confident that the desired queries, and data, are available.
Comments, suggestions and rotten fruit welcome :) I'm particularly interested in feedback from package maintainers and QA contributors in whether you feel, just after reading these pages, that you'd be confident in going ahead and creating some test cases, or if there's stuff that's scary or badly explained or that you feel like something is missing and you wouldn't know where to start, etc.
Agreed, would love to hear from others.
The trac ticket on this is probably valuable for background, explaining why some things in the proposal are the way they are:
https://fedorahosted.org/fedora-qa/ticket/154
it also mentions one big current omission: dependencies. For instance, it would be very useful to be able to express 'when yum is updated, we should also run the PackageKit test plan' (because it's possible that a change in yum could be fine 'within itself', and all the yum test cases pass, but could break PackageKit). That's rather complex, though, especially with a Wiki-based system. If anyone has any bright ideas on how to achieve this, do chip in! Thanks.
Certainly seems like a feature worth noting on the TCMS requirements page Hurry has been building (https://fedoraproject.org/wiki/Rhe/tcms_requirements_proposal). I've added that to the Talk page.
Thanks, James
On Tue, Dec 21, 2010 at 06:12:47PM -0500, James Laska wrote:
Something to help testers find the right src.rpm name of the component under test?
Something like that?
#!/bin/bash
me=$(basename $0)

usage () {
    echo "Usage: $me <name>"
    echo "where <name> is either path to a file or an rpm package name"
    exit 1
}

[ -z "$1" ] && usage

arg="$1"
case $arg in
    */*) file="-f" ;;
esac

pkg=$(rpm -q $file --qf '%{sourcerpm}\n' $arg | head -1)
bname=${pkg%-*}
echo ${bname%-*}
echo ${pkg%.src.rpm}
exit
You can pass here as an argument /sbin/dmsetup or device-mapper with the same effect (source package for these is called 'lvm2').
Michal
On Tue, 2010-12-21 at 18:32 -0700, Michal Jaegermann wrote:
On Tue, Dec 21, 2010 at 06:12:47PM -0500, James Laska wrote:
Something to help testers find the right src.rpm name of the component under test?
Something like that?
Exactly, thanks for sharing. I've added some comments below for some common gotchas that always get me with bash scripts. Hope it's helpful.
#!/bin/bash
me=$(basename $0)

usage () {
    echo "Usage: $me <name>"
    echo "where <name> is either path to a file or an rpm package name"
    exit 1
}
[ -z "$1" ] && usage
arg="$1"
case $arg in
    */*) file="-f" ;;
esac
It might be a bit more error-proof, since the file argument could be in the current directory (no '/'), to use:
test -f "$arg" && file="-f" || file=""
pkg=$(rpm -q $file --qf '%{sourcerpm}\n' $arg|head -1)
If conditionally initializing the value of $file, you might want to provide a default value above. Perhaps something like the following, which defaults to the empty '' string if no value is set. (If you use the earlier statement, it will always be initialized.)
${file:-}
bname=${pkg%-*}
echo ${bname%-*}
Nice, I didn't realize that a single '%' would do the *shortest* match. Thanks!
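For anyone following along, the shortest-versus-longest distinction is easy to see with a sample srpm name (the package name here is just an example):

```shell
# '%' strips the shortest matching suffix; '%%' strips the longest.
pkg="lvm2-2.02.84-1.fc15.src.rpm"

echo "${pkg%-*}"        # -> lvm2-2.02.84 (shortest '-*' suffix removed)
echo "${pkg%%-*}"       # -> lvm2 (longest '-*' suffix removed)
echo "${pkg%.src.rpm}"  # -> lvm2-2.02.84-1.fc15
```

That is why the script strips '-*' twice: once for the release, once more for the version.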
echo ${pkg%.src.rpm}
exit
You can pass here as an argument /sbin/dmsetup or device-mapper with the same effect (source package for these is called 'lvm2').
Yeah, exactly. As an extra data point, dmalcolm posted a script that opens up a bugzilla web page to assist in filing a bug against a component. I'm sure that could be adjusted to find the %{sourcerpm} name using the procedure you outlined.
http://lists.fedoraproject.org/pipermail/test/2010-December/096079.html
Thanks, James
On Wed, Dec 22, 2010 at 07:53:22AM -0500, James Laska wrote:
On Tue, 2010-12-21 at 18:32 -0700, Michal Jaegermann wrote:
On Tue, Dec 21, 2010 at 06:12:47PM -0500, James Laska wrote:
Something to help testers find the right src.rpm name of the component under test?
Something like that?
Exactly, thanks for sharing. I've added some comments below for some common gotchas that always get me with bash scripts. Hope it's helpful.
#!/bin/bash
me=$(basename $0)

usage () {
    echo "Usage: $me <name>"
    echo "where <name> is either path to a file or an rpm package name"
    exit 1
}
[ -z "$1" ] && usage
arg="$1"
case $arg in
    */*) file="-f" ;;
esac
It might be a bit more error-proof, since the file argument could be in the current directory (no '/'), to use:
Then you can pass ./<my_arg>. This is more or less what yum wants to see when you ask "whatprovides".
test -f "$arg" && file="-f" || file=""
And then you can have a file in the current directory with the same name as an intended package. Chances are that both names indeed agree, but there is no guarantee. We can possibly retry if there is no fit. Here is a modified script:
#!/bin/bash
me=$(basename $0)

usage () {
    echo "Usage: $me <name>"
    echo "where <name> is either path to a file or an rpm package name"
    exit 1
}
[ -z "$1" ] && usage
arg="$1"
while : ; do
    case $arg in
        */*) file="-f" ; break ;;
        *)
            # Check if we have a package with this name
            rpm -q $arg >/dev/null 2>&1 && break
            # No? Modify arg and try again
            arg="./$arg"
            ;;
    esac
done
# On multiarch we may get multiple packages with the same n-v-r.
pkg=$(rpm -q $file --qf '%{sourcerpm}\n' $arg | head -1)
bname=${pkg%-*}
echo ${bname%-*}
echo ${pkg%.src.rpm}
exit
Another possibility would be to require some flag signalling that we are passing a file name but I think that in practice you may often want to give as an argument something like "$(which cat)". I think that a possible retry will be the most convenient.
If conditionally initializing the value of $file, you might want to provide a default value above.
There are no uninitialized variables in shell. The default value for a variable which has not yet been assigned is "". Yes, one can be explicit, but this is not required.
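A quick demonstration of that point; the only case where being explicit matters is a script running under 'set -u':

```shell
#!/bin/bash
# A variable which was never assigned expands to the empty string.
unset file
echo "[$file]"       # prints []

# Under 'set -u' a bare $file would abort the script, but the
# default-value expansion is still safe:
set -u
echo "[${file:-}]"   # still prints []
```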
As an extra data point, dmalcolm posted a script that opens up a bugzilla web page to assist in filing a bug against a component. I'm sure that could be adjusted to find the %{sourcerpm} name using the procedure you outlined.
http://lists.fedoraproject.org/pipermail/test/2010-December/096079.html
I think so; but I will leave going into Python to somebody else. :-)
Michal
On Tue, 2010-12-21 at 18:12 -0500, James Laska wrote:
the first isn't particularly specific to this, but it was a prerequisite that I discovered was missing: it's a guide to test case creation in general, explaining the actual practical process of how you create a test case, and the best principles to consider in doing it.
Nice job here, this is something that's difficult to explain if you've done it a lot, but I think you've captured the key points. If possible, it might be helpful to highlight a few existing examples that stand out for the different characteristics you mention (comprehensive, but able to stand the test of time).
Thanks. I'll see if I can find some and add them.
Another thought, any reason that we wouldn't want to keep all wiki tests in the QA: namespace (and with the prefix QA:Testcase_)? The door is left open for other names, I wonder if we want to cut that off ahead of time to keep our sanity by having all tests in the same namespace?
I was a bit unsure on that one. I think I thought of some possible scenario where you might want to write a test case in a different name space, but I'm not entirely sure I remember what it was. I can just change it to say test cases should always go in the qa namespace, I guess.
The page also talks about using [[Category:Test_Cases]]. I worry if we are too lax in categorizing new tests we'll end up with a large amount of random tests in the main [[Category:Test_Cases]] making it a maintenance nightmare to cleanup that category. Should we instead direct users to your other page (https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_plan_...) for guidance on categorizing test cases?
This was something I wanted to call out for discussion and forgot - so far we've put all test cases directly into the Test_Cases category, but like you, I'm worried that really won't scale. I did wonder whether others would agree we should stop doing that and instead have them usually go into a more specific category which in turn would be a sub-category of Test_Cases, and only have test cases be members of Test_Cases directly if it really made no sense to have them in a more specific category.
The second is what's really specific to this subject. It describes how to create a set of test cases for a particular package, and a proposed standardized categorization scheme which will allow us to denote test cases as being associated with specific packages, and also denote them as concerning critical path functionality.
I think I mentioned this previously: in the section 'Preparation', I appreciate the distinction of 'core' and 'extended', but it resonates with me better in the context of test "priority". I don't see why we can't keep using the terms 'core' and 'extended'; I just want to clarify their purpose. They're intended to add some sense of execution priority to a list of test cases, right? Where critpath comes first, then core, then extended, then other? Also, you describe categorizing/grouping test cases in more detail below, maybe just link to that instead?
well, the idea is that the two are complementary: if you're going to separate the test cases into 'core' and 'extended' groups, then why not identify which functionality is 'core' and which is 'extended' at the time you're identifying functionality to write test cases for? I'm not quite sure what your proposal is here - could you draft it up in terms of an actual change to the page so I can see it more clearly? thanks!
In the section 'Simple (required)', would it help to add a link to http://fedoraproject.org/wiki/BugZappers/CorrectComponent#Which_component_is... (or a similar page)? Something to help testers find the right src.rpm name of the component under test. Side note: this might also be a maintenance task we can define, where I, or anyone interested, could manually scrub (or script a search of) Category:Test_Cases for incorrectly named category pages.
Also in 'Simple (required)', we don't tell the author to add their 'Category:Package_${sourcename}_test_cases' to 'Category:Test_Cases'. I think we want all newly created package categories anchored under 'Category:Test_Cases'.
yup, indeed, oversight - will add it. thanks!
General comment. I know we've got an eye towards integrating this work with bodhi and/or f-e-k. Until that work is complete, I wonder if those notes will introduce confusion/speculation. Should we leave out the bits about possible future tool integration until such support is active?
possibly. I was meaning those bits to be read simply as a potential illustration of programmatic use of the categories to illustrate why consistent categorization is important, but if you think it's confusing, we could take it out.
On Wed, 2010-12-22 at 17:29 +0000, Adam Williamson wrote:
On Tue, 2010-12-21 at 18:12 -0500, James Laska wrote:
the first isn't particularly specific to this, but it was a prerequisite that I discovered was missing: it's a guide to test case creation in general, explaining the actual practical process of how you create a test case, and the best principles to consider in doing it.
Nice job here, this is something that's difficult to explain if you've done it a lot, but I think you've captured the key points. If possible, it might be helpful to highlight a few existing examples that stand out for the different characteristics you mention (comprehensive, but able to stand the test of time).
Thanks. I'll see if I can find some and add them.
Another thought, any reason that we wouldn't want to keep all wiki tests in the QA: namespace (and with the prefix QA:Testcase_)? The door is left open for other names, I wonder if we want to cut that off ahead of time to keep our sanity by having all tests in the same namespace?
I was a bit unsure on that one. I think I thought of some possible scenario where you might want to write a test case in a different name space, but I'm not entirely sure I remember what it was. I can just change it to say test cases should always go in the qa namespace, I guess.
The page also talks about using [[Category:Test_Cases]]. I worry if we are too lax in categorizing new tests we'll end up with a large amount of random tests in the main [[Category:Test_Cases]] making it a maintenance nightmare to cleanup that category. Should we instead direct users to your other page (https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_plan_...) for guidance on categorizing test cases?
This was something I wanted to call out for discussion and forgot - so far we've put all test cases directly into the Test_Cases category, but like you, I'm worried that really won't scale. I did wonder whether others would agree we should stop doing that and instead have them usually go into a more specific category which in turn would be a sub-category of Test_Cases, and only have test cases be members of Test_Cases directly if it really made no sense to have them in a more specific category.
Agreed ... I think it makes sense to keep Category:Test_Cases as just a container for sub-categories if possible. Mainly for the reasons you note around *trying* to keep content organized.
The second is what's really specific to this subject. It describes how to create a set of test cases for a particular package, and a proposed standardized categorization scheme which will allow us to denote test cases as being associated with specific packages, and also denote them as concerning critical path functionality.
I think I mentioned this previously: in the section 'Preparation', I appreciate the distinction of 'core' and 'extended', but it resonates with me better in the context of test "priority". I don't see why we can't keep using the terms 'core' and 'extended'; I just want to clarify their purpose. They're intended to add some sense of execution priority to a list of test cases, right? Where critpath comes first, then core, then extended, then other? Also, you describe categorizing/grouping test cases in more detail below, maybe just link to that instead?
Was I accurate in my understanding above of your proposed groupings (critpath, core and extended)? Are they intended to convey an execution priority of the tests?
well, the idea is that the two are complementary: if you're going to separate the test cases into 'core' and 'extended' groups, then why not identify which functionality is 'core' and which is 'extended' at the time you're identifying functionality to write test cases for? I'm not quite sure what your proposal is here - could you draft it up in terms of an actual change to the page so I can see it more clearly? thanks!
I articulated several layouts in previous comments in the ticket. See https://fedorahosted.org/fedora-qa/ticket/154#comment:12 and https://fedorahosted.org/fedora-qa/ticket/154#comment:18.
I guess I'm hesitant about introducing new terminology ("core" and "extended") when I'm more familiar with prioritizing test cases using the term "priority". I'm not saying we shouldn't use them, I'm just trying to understand the context. I'm also trying to ensure your project ties in nicely with the work Hurry is doing with regards to scoping out a TCMS (http://fedorahosted.org/fedora-qa/ticket/152). My question (I guess I already re-stated above) was whether you consider the terms "core" and "extended" as a designation of test case priority?
Outside of the terminology, I have some concerns whether this is within the scope of the initial project, or something we want to leave as a phase#2 effort. We definitely need to think about it as non-critpath tests will come in, I just hope we don't spend all our collective energy on defining non-critpath tests and then we are still exposed to a lack of test documentation for the critpath.
In the section 'Simple (required)', would it help to add a link to http://fedoraproject.org/wiki/BugZappers/CorrectComponent#Which_component_is... (or a similar page)? Something to help testers find the right src.rpm name of the component under test. Side note: this might also be a maintenance task we can define, where I, or anyone interested, could manually scrub (or script a search of) Category:Test_Cases for incorrectly named category pages.
Also in 'Simple (required)', we don't tell the author to add their 'Category:Package_${sourcename}_test_cases' to 'Category:Test_Cases'. I think we want all newly created package categories anchored under 'Category:Test_Cases'.
yup, indeed, oversight - will add it. thanks!
General comment. I know we've got an eye towards integrating this work with bodhi and/or f-e-k. Until that work is complete, I wonder if those notes will introduce confusion/speculation. Should we leave out the bits about possible future tool integration until such support is active?
possibly. I was meaning those bits to be read simply as a potential illustration of programmatic use of the categories to illustrate why consistent categorization is important, but if you think it's confusing, we could take it out.
No strong opinions here. I thought I learned somewhere that one should avoid forward-looking statements when documenting process. I could have sworn that was in the Fedora doc guide ... but I could be making it up.
Thanks, James
On Mon, 2011-01-03 at 10:52 -0500, James Laska wrote:
Agreed ... I think it makes sense to keep Category:Test_Cases as just a container for sub-categories if possible. Mainly for the reasons you note around *trying* to keep content organized.
OK. I think I actually went ahead and changed this in the current version, I'll go back and double check.
My question (I guess I already re-stated above) was whether you consider the terms "core" and "extended" as a designation of test case priority?
Yes. The terms themselves aren't hugely important, sure, it's more expressing the concept of priorities, but I kind of conceived it in terms of the importance of the functionality being tested.
Outside of the terminology, I have some concerns whether this is within the scope of the initial project, or something we want to leave as a phase#2 effort. We definitely need to think about it as non-critpath tests will come in, I just hope we don't spend all our collective energy on defining non-critpath tests and then we are still exposed to a lack of test documentation for the critpath.
My thinking here is that one of the typical workflows for creating test cases will be 'let's create a set of test cases for package X'. Say the maintainer of package X decides to contribute some test cases. I suspect it's quite unlikely they'll restrict themselves strictly to critical path functionality in all cases; so we should already have the groundwork for non-critical-path test cases laid out.
possibly. I was meaning those bits to be read simply as a potential illustration of programmatic use of the categories to illustrate why consistent categorization is important, but if you think it's confusing, we could take it out.
No strong opinions here. I thought I learned somewhere that one should avoid forward-looking statements when documenting process. I could have sworn that was in the Fedora doc guide ... but I could be making it up.
I'd agree with that - again the idea was to illustrate a design concept ('this is designed this way in order to enable this kind of programmatic usage') rather than to prescribe a particular form of programmatic usage on the part of particular tools. I tried to re-write it to be less specific in a later draft that's up now, is it better?
On Tue, 2011-01-04 at 17:57 +0000, Adam Williamson wrote:
On Mon, 2011-01-03 at 10:52 -0500, James Laska wrote:
Agreed ... I think it makes sense to keep Category:Test_Cases as just a container for sub-categories if possible. Mainly for the reasons you note around *trying* to keep content organized.
OK. I think I actually went ahead and changed this in the current version, I'll go back and double check.
My question (I guess I already re-stated above) was whether you consider the terms "core" and "extended" as a designation of test case priority?
Yes. The terms themselves aren't hugely important, sure, it's more expressing the concept of priorities, but I kind of conceived it in terms of the importance of the functionality being tested.
Gotcha, thanks.
Outside of the terminology, I have some concerns whether this is within the scope of the initial project, or something we want to leave as a phase#2 effort. We definitely need to think about it as non-critpath tests will come in, I just hope we don't spend all our collective energy on defining non-critpath tests and then we are still exposed to a lack of test documentation for the critpath.
My thinking here is that one of the typical workflows for creating test cases will be 'let's create a set of test cases for package X'. Say the maintainer of package X decides to contribute some test cases. I suspect it's quite unlikely they'll restrict themselves strictly to critical path functionality in all cases; so we should already have the groundwork for non-critical-path test cases laid out.
I see. Yeah, we certainly need to be prepared for tests created outside the initial scope.
possibly. I was meaning those bits to be read simply as a potential illustration of programmatic use of the categories to illustrate why consistent categorization is important, but if you think it's confusing, we could take it out.
No strong opinions here. I thought I learned somewhere that one should avoid forward-looking statements when documenting process. I could have sworn that was in the Fedora doc guide ... but I could be making it up.
I'd agree with that - again the idea was to illustrate a design concept ('this is designed this way in order to enable this kind of programmatic usage') rather than to prescribe a particular form of programmatic usage on the part of particular tools. I tried to re-write it to be less specific in a later draft that's up now, is it better?
That looks good. Your pages are looking really good IMO, thanks for the time+energy you've invested.
Thanks, James
On Tue, 2010-12-21 at 17:11 +0000, Adam Williamson wrote:
Hi, everyone. So, in the recent debate about the update process it again became clear that we were lacking a good process for providing package-specific test instructions, and particularly specific instructions for testing critical path functions.
I've been working on a process for this, and now have two draft Wiki pages up for review which together describe it:
https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_test_case_creation https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_plan_...
I've now converted one package's set of test cases to the proposed new system to serve as an illustration. I used the Radeon driver, xorg-x11-drv-ati . This was a good example as there are a lot of them, and they split neatly into critpath, core and extended to illustrate the optional advanced categorization system.
So, see:
https://fedoraproject.org/wiki/Category:Package_xorg-x11-drv-ati_test_cases
and note that one of the test cases is also in:
https://fedoraproject.org/wiki/Category:Critical_path_test_cases
you can also see at https://fedoraproject.org/wiki/Category:Test_Cases how this tracks back to that overall category - no test cases are in it directly, it's all hierarchical.
thanks! I'm planning to work on a mockup for the f-e-k and bodhi integration this afternoon to show how we envision this all being used to kick ass, I think that'll make it clearer.
On Thu, 2010-12-23 at 14:35 +0000, Adam Williamson wrote:
thanks! I'm planning to work on a mockup for the f-e-k and bodhi integration this afternoon to show how we envision this all being used to kick ass, I think that'll make it clearer.
Bodhi mockup post:
http://www.happyassassin.net/2010/12/23/package-specific-test-case-project-n...
On Thu, 2010-12-23 at 14:35 +0000, Adam Williamson wrote:
On Tue, 2010-12-21 at 17:11 +0000, Adam Williamson wrote:
Hi, everyone. So, in the recent debate about the update process it again became clear that we were lacking a good process for providing package-specific test instructions, and particularly specific instructions for testing critical path functions.
I've been working on a process for this, and now have two draft Wiki pages up for review which together describe it:
https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_test_case_creation https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_plan_...
I've now converted one package's set of test cases to the proposed new system to serve as an illustration. I used the Radeon driver, xorg-x11-drv-ati . This was a good example as there are a lot of them, and they split neatly into critpath, core and extended to illustrate the optional advanced categorization system.
So, see:
https://fedoraproject.org/wiki/Category:Package_xorg-x11-drv-ati_test_cases
and note that one of the test cases is also in:
https://fedoraproject.org/wiki/Category:Critical_path_test_cases
Nice examples, this really helps to visualize the proposed changes.
So if I were acting as f-e-k or bodhi, I would combine the two lists of cases to show the xorg-x11-drv-ati critpath tests...
* QA:Testcase radeon basic
Did I do that correctly?
you can also see at https://fedoraproject.org/wiki/Category:Test_Cases how this tracks back to that overall category - no test cases are in it directly, it's all hierarchical.
[[Category:Test_Cases|X]]
I did a minor tweak so that it would show up under 'X' (for xorg-x11...) rather than 'P' (for Package_xorg-x11...).
thanks! I'm planning to work on a mockup for the f-e-k and bodhi integration this afternoon to show how we envision this all being used to kick ass, I think that'll make it clearer.
Thanks, James
On Mon, 2011-01-03 at 10:58 -0500, James Laska wrote:
So, see:
https://fedoraproject.org/wiki/Category:Package_xorg-x11-drv-ati_test_cases
and note that one of the test cases is also in:
https://fedoraproject.org/wiki/Category:Critical_path_test_cases
Nice examples, this really helps to visualize the proposed changes.
So if I were acting as f-e-k or bodhi, I would combine the two lists of cases to show the xorg-x11-drv-ati critpath tests...
* QA:Testcase radeon basic
Did I do that correctly?
Yup, indeed, that's the only test case for a graphics driver that strictly hits our critpath parameters.
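For what it's worth, the combining step a tool would do here is just a set intersection. A minimal sketch, with the two category listings stubbed as sample data rather than fetched live from the wiki API (only 'QA:Testcase radeon basic' is a real page from the example above; the other titles are made up):

```shell
#!/bin/bash
# Intersect a package's test case category with the critical path
# category to get the package's critpath tests. In a real tool both
# lists would come from the MediaWiki categorymembers API; here they
# are hard-coded sample data so the logic is visible.
pkg_tests="QA:Testcase radeon basic
QA:Testcase radeon rendercheck
QA:Testcase radeon suspend"

critpath_tests="QA:Testcase radeon basic
QA:Testcase intel basic"

# Lines present in both (sorted) lists are the critpath tests.
comm -12 <(echo "$pkg_tests" | sort) <(echo "$critpath_tests" | sort)
# -> QA:Testcase radeon basic
```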
you can also see at https://fedoraproject.org/wiki/Category:Test_Cases how this tracks back to that overall category - no test cases are in it directly, it's all hierarchical.
[[Category:Test_Cases|X]]
I did a minor tweak so that it would show up under 'X' (for xorg-x11...) rather than 'P' (for Package_xorg-x11...).
Thanks! I forgot that.