A late-discovered, and only just potentially fixed, anaconda storage bug[1] has necessitated another week's slip of our schedule. The change is important, but invasive enough to require re-validating our storage tests. We were already late in producing the Release Candidate, and there is not enough time to produce another one and validate it before next Tuesday's release date. Therefore we have decided to slip the release by another week. This gives us time to create a second release candidate, fully validate it, and hand it off to the mirrors with plenty of time to sync up before the new release date of June 9th. As much as we regret slipping, we also wish to avoid easily triggerable bugs in our release, particularly in software that cannot be fixed with a 0-day update.
At this time we will only accept tag requests for critical issues.
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=500808
The anaconda storage has been very buggy since FC11 beta 2.

This really needs to be fixed.
On Thu, May 28, 2009 at 2:36 PM, Jesse Keating jkeating@redhat.com wrote:
A late-discovered, and only just potentially fixed, anaconda storage bug[1] has necessitated another week's slip of our schedule. [...]
--
Jesse Keating
Fedora -- Freedom² is a feature!
identi.ca: http://identi.ca/jkeating
-- fedora-test-list mailing list fedora-test-list@redhat.com To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-test-list
On Thu, 2009-05-28 at 14:40 -0300, Itamar Reis Peixoto wrote:
The anaconda storage has been very buggy since FC11 beta 2.

This really needs to be fixed.
That's... not a very useful statement. It's been buggy ever since it was *completely rewritten* for the Fedora 11 cycle. Both the Anaconda developers and the QA team are perfectly well aware of this; rewriting such a fundamental chunk of code is _always_ going to introduce new bugs, and we knew this before the project was ever initiated. But we still needed to rewrite the code, so there was nothing else to do.
Just saying "it's buggy, fix it!" doesn't help anyone. We know it's buggy. We're working very hard on fixing it. Constructive input would be one of:
1) Reporting specific failures, with full information on them, and being available to provide any further information and testing required.
2) Contributing fixes.
The anaconda storage has been very buggy since FC11 beta 2. [...]

Just saying "it's buggy, fix it!" doesn't help anyone. We know it's buggy. We're working very hard on fixing it. [...]
I agree, up to a point. It seems to me that on a number of occasions over the history of Fedora, anaconda has been the reason for release slips. I reported a number of bugs (or added my details to existing ones) for custom disk installations, and even though they were reported at the beta or even alpha stage, they're still not fixed.

Anaconda definitely needs massive rewrites, but I would expect those to land by the alpha, rather than having fairly substantial fixes still coming in after the preview release, and hence the delays that have come with that. Not to mention reporting fairly major bugs in the beta and still not having them closed, after reading comments like "custom installs to single partitions aren't normal use cases".
Peter
On Thu, 2009-05-28 at 22:20 +0100, Peter Robinson wrote:
Anaconda definitely needs massive re-writes but I would expect that to land by the alpha
The rewrite landed a long time ago.
rather than having pretty decent fixes still coming in after the preview release and hence the delays that have come with that.
The fixes are for issues that have been reported *since the rewrite landed*. The Anaconda team has been working flat out on nothing but fixing those reports, and the rewrite itself landed as early in the cycle as possible.
Not to mention reporting fairly major bugs in the beta and still not having them closed, after reading comments like "custom installs to single partitions aren't normal use cases".
Unfortunately, there's not enough manpower on the anaconda team to fix all the reported issues in time for release, so the most important ones get the focus.
Peter Robinson wrote:
Anaconda definitely needs massive rewrites, but I would expect those to land by the alpha. [...] Not to mention reporting fairly major bugs in the beta and still not having them closed, after reading comments like "custom installs to single partitions aren't normal use cases".
If installing to a single partition isn't a normal use case for a new release, which many people will install in parallel with their working release, then we are not talking to the same power users, administrators, and developers of non-Fedora software.
Swept under the rug, my aunt would have called it.
2009/5/28 Jesse Keating jkeating@redhat.com
A late-discovered, and only just potentially fixed, anaconda storage bug[1] has necessitated another week's slip of our schedule. [...] This gives us time to create a second release candidate, fully validate it, and hand it off to the mirrors with plenty of time to sync up before the new release date of June 9th.
it would be very nice to announce the release candidates when they become available.
-------- Original Message --------
Subject: Re: One (more) week slip of Fedora 11 Release
From: cornel panceac cpanceac@gmail.com
To: For testers of Fedora Core development releases fedora-test-list@redhat.com
Date: 05/28/2009 12:54 PM
it would be very nice to announce the release candidates when they become available.
If you wish to participate in testing Fedora, I suggest you join the fedora-test-list. The honourable James Laska has made such an announcement[1] just today.
[1] https://www.redhat.com/archives/fedora-test-list/2009-May/msg01272.html
2009/5/28 Michael Cronenworth mike@cchtml.com
If you wish to participate in testing Fedora, I suggest you join the fedora-test-list. The honourable James Laska has made such an announcement[1] just today. [...]
to be honest, i've seen the message but i thought it was a testing report, since i've not seen the "announce" part anywhere.
but i can agree to the fact that james laska and you all are honourable :)
On 05/28/2009 02:25 PM, cornel panceac wrote:
it would be very nice to announce the release candidates when they become available. [...]
-- Linux counter #213090
Release schedule:
Jim wrote:
Since this schedule doesn't seem to mention the RC dates, it doesn't seem to shed much light on the RC schedule...
On Sat, 2009-05-30 at 11:12 -0400, Bill Davidsen wrote:
Since this schedule doesn't seem to mention the RC dates, it doesn't seem to shed much light on the RC schedule...
There are detailed tasks linked from the schedule page. Click the link at the bottom of the table "Detailed tasks and durations".
Thanks, James
On Thu, 2009-05-28 at 12:59 -0500, Michael Cronenworth wrote:
it would be very nice to announce the release candidates when they become available.
If you wish to participate in testing Fedora, I suggest you join the fedora-test-list. The honourable James Laska has made such an announcement[1] just today.
Heh, thanks for your kind words, Michael.
[1] https://www.redhat.com/archives/fedora-test-list/2009-May/msg01272.html
There will likely be an update to that wiki page once a new release candidate lands. I'll follow up with an announcement with some pointers.
While any testing is valuable, additional testing with emphasis on the installer storage layer will likely be needed once release engineering hands off the RC2 build.
Thanks, James
James Laska jlaska@redhat.com writes:
While any testing is valuable, additional testing with emphasis on the installer storage layer will likely be needed once release engineering hands off the RC2 build.
Is that anaconda fix in rawhide now?
regards, tom lane
On Thu, 2009-05-28 at 17:23 -0400, Tom Lane wrote:
Is that anaconda fix in rawhide now?
Not yet. It was only built an hour or two ago. It'll appear in tomorrow's rawhide.
http://koji.fedoraproject.org/koji/buildinfo?buildID=103967
-w
On Thu, 2009-05-28 at 20:54 +0300, cornel panceac wrote:
it would be very nice to announce the release candidates when they become available.
We can't broadly announce them, as the RCs are not mirrored, and piling more people into trying to get access to the RCs would just make it that much slower for the key people to get them in the first place. If we went to a mode where we did only one RC a week, mirrored it, and had something like three months to get through the RC phase, that might work, but I doubt our developers would appreciate that.
On Thu, 2009-05-28 at 11:07 -0700, Jesse Keating wrote:
Forgot to mention, each night's rawhide is made up of the frozen content, so by testing rawhide, you're essentially testing the RC.
2009/5/28 Jesse Keating jkeating@redhat.com
We can't broadly announce them, as the RCs are not mirrored, and piling more people into trying to get access to the RCs would just make it that much slower for the key people to get them in the first place. [...]
i was thinking about torrent-only. limiting the audience means less testing.
Forgot to mention, each night's rawhide is made up of the frozen content, so by testing rawhide, you're essentially testing the RC.
unfortunately for me i don't have the time xor resources to test each boot.iso, as i'm sure you understand :)
On Thu, 2009-05-28 at 21:22 +0300, cornel panceac wrote:
unfortunately for me i don't have the time xor resources to test each boot.iso, as i'm sure you understand :)
You don't have the time to test the boot.iso, but you'd have time to test the daily RCs?
2009/5/28 Jesse Keating jkeating@redhat.com
On Thu, 2009-05-28 at 21:22 +0300, cornel panceac wrote:
unfortunately for me i don't have the time xor resources to test each boot.iso, as i'm sure you understand :)
You don't have the time to test the boot.iso, but you'd have time to test the daily RCs?
daily? i thought there would be only one more. no, if there's one rc per day, i will not test them all.
Jesse Keating said the following on 05/28/2009 11:07 AM Pacific Time:
We can't broadly announce them, as the RCs are not mirrored, and piling more people into trying to get access to the RCs would just make it that much slower for the key people to get them in the first place. [...]
I realize this has been the party line for a long time and I've also been told that many Fedora decisions cannot be measured and that we "go with our gut." ;-)
My gut keeps telling me that given all the smart, innovative people we have that we should be able to fix this issue. In most areas of Fedora we place a heavy emphasis on making things accessible to everyone instead of just the "key people", particularly when it comes to content that can be tested.
We've gone down the road before of saying rawhide is equal to the RC, but we've also found that not to be the case, and it hurt us. Additionally, if we go with what is being proposed here:
https://fedoraproject.org/wiki/Fedora_Activity_Day_Fedora_Development_Cycle_... won't we be another step removed from being able to use rawhide this way for Fedora 12?
If our current process isn't working to involve everyone in testing the most important bits (RC), why can't we explore more ways to fix it or have a FAD for it? Ideas like:

1) Changing the structure of our release schedule and test releases so RCs can get broad testing?
2) Changing the amount of last-minute package changes?
3) Spending part of the FAD (scheduled above) brainstorming ways to explore this issue?
4) Not being so concerned about "holding up the developers" -- developers are an extremely important part of our releases, because without them there wouldn't be any content to release :) No, I'm not advocating doing RCs for three months (one week might be a great place to start), but I think other parts of the Fedora community, including our end users, should be considered too.
5) Defining "What is Fedora" and who our target audience is, so we can address #4 in a proper way (I realize the Board owns this and started discussing it in January). If it turns out that our releases are all for developers, then not holding up developers will be a key concern :)
I think it is more accurate to say that making the RCs available to everyone is not a priority, rather than saying we can't.
John
On Thu, 2009-05-28 at 15:23 -0700, John Poelstra wrote:
https://fedoraproject.org/wiki/Fedora_Activity_Day_Fedora_Development_Cycle_... won't we be another step removed from being able to use rawhide this way for Fedora 12?
No, we'd essentially have two rawhides. One that is the next release, one that is the pending release. Both would be composed with installable images nightly.
If our current process isn't working to involve everyone in testing the most important bits (RC), why can't we explore more ways to fix it or have a FAD for it? Ideas like:
1) Changing the structure of our release schedule and test releases so RCs can get broad testing? [...]
I think it is more accurate to say that fixing the issue of making the RCs available to everyone is not a priority versus saying we can't.
Sure, however you want to put it. Right now it's not feasible to try to get 34G worth of content into the hands of the entire world quickly enough for that content to be of any value. Typically by the time we hit the RC stage, we're verifying that the blocker bugs were fixed and that the particular compose didn't break anything. We can typically do that within a few hours of the compose being finished, and if it's broken, we spin a new one in an hour or two.
Now, if you want to go to a mode where the last RC we compose gets mirrored and we sit on it for a week plus, just so that the world feels better about having participated, we can do that. But anything found is going to introduce at least a week's slip of the schedule, in order to fix, verify, produce a new RC, and let it soak for another week. That's just entirely too long a wait cycle for Fedora's six-month development and release cycle.
You're right, it's not a high priority (at least not mine) to try and "fix" this.
On 29.05.2009 00:23, John Poelstra wrote:
[...] I think it is more accurate to say that fixing the issue of making the RCs available to everyone is not a priority versus saying we can't.
Just a short comment: Years ago someone made daily RCs available via an rsync server. Thus it was very easy to get the latest RC: you just rsynced against the installer ISO you already had. As only small parts of the ISO changed from day to day, it was a matter of a few minutes even over a not-that-fast internet connection.

I haven't checked if that still works, but I guess it would. Of course not with the Live spins, as they are compressed and thus everything changes all the time. But maybe we could offer them as uncompressed images somewhere -- you could still test them using USB sticks, DVDs, or virtualization solutions like KVM, Qemu, and VirtualBox.
CU knurd
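Thorsten's delta-transfer point can be illustrated with a small sketch. This is not rsync itself (real rsync also uses rolling checksums so data that merely shifts position still matches, and it negotiates block lists over the wire); it only shows why fetching an updated ISO costs roughly the changed blocks. The file contents and block size below are invented for the demo:

```python
import hashlib

BLOCK_SIZE = 4  # tiny, for illustration; real tools use multi-KB blocks


def block_digests(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Hash each fixed-size block of the file."""
    return [hashlib.md5(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]


def changed_blocks(old: bytes, new: bytes) -> list:
    """Indices of blocks in `new` that differ from `old` and would need re-transfer."""
    old_h = block_digests(old)
    new_h = block_digests(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]


old_iso = b"AAAABBBBCCCCDDDD"  # stand-in for yesterday's ISO
new_iso = b"AAAAXXXXCCCCDDDD"  # one block changed in today's compose
print(changed_blocks(old_iso, new_iso))  # -> [1]
```

With 4 GB images where only a few packages change per day, the same idea means a nightly refresh transfers megabytes rather than the whole ISO, which is why the compressed Live spins (where everything changes) don't benefit.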
On Fri, 2009-05-29 at 10:01 +0200, Thorsten Leemhuis wrote:
Just a short comment: Years ago someone made daily RCs available via an rsync server. [...]
We had that not too long ago. The number of people trying to rsync it made the server completely unavailable, and nobody got the bits. It was a disaster. Too many people are just eager to "get the latest bits", whether they'll do useful testing with them or not, which ruins it for the people who desperately need to verify a bug fix or sanity-test a compose.
On Fri, 2009-05-29 at 09:25 -0700, Jesse Keating wrote:
We had that not too long ago. The number of people trying to rsync it made the server completely unavailable, and nobody got the bits. [...]
This seems like the sort of problem BitTorrent is designed to solve. Is that not a feasible solution because of administrative overhead on the setup side (which might be fixed with some automation) or because a differences-only mechanism is needed?
-B.
----- "Christopher Beland" beland@alum.mit.edu wrote:
This seems like the sort of problem BitTorrent is designed to solve. Is that not a feasible solution because of administrative overhead on the setup side (which might be fixed with some automation) or because a differences-only mechanism is needed?
The problem is that torrents without seeds are worthless.
--
Robert 'Bob' Jensen | Fedora Unity Founder
bob@fedoraunity.org | http://fedoraunity.org/
http://bjensen.fedorapeople.org/
http://blogs.fedoraunity.org/bobjensen
On Fri, 2009-05-29 at 18:47 +0000, Robert 'Bob' Jensen wrote:
The problem is that torrents without seeds are worthless.
And when the contents change daily, or even multiple times a day, getting any kind of seeding is not going to happen.
Jesse Keating wrote:
And when the contents change daily or even multiple times a day getting any kind of seeding is not going to happen.
Sounds like another feature request for a QA tool. Not only do we need something for better management of bodhi/updates-testing packages, but for Preview/RC images. If there was a way to know a torrent for an RC was available I'd provide some seeding. I don't think I would be alone. Such a tool could be extended for release ISOs. Right now I only know about torrents.fedoraproject.org, which may go down during release day...
File it away somewhere instead of ruling out such a feature. If you provide a simple, easy-to-use system, it will work.
On Fri, 2009-05-29 at 20:32 -0500, Michael Cronenworth wrote:
If there was a way to know a torrent for an RC was available I'd provide some seeding. I don't think I would be alone. [...]
But how do we get the isos to you to seed in the first place?
On Fri, May 29, 2009 at 21:28:55 -0700, Jesse Keating jkeating@redhat.com wrote:
But how do we get the isos to you to seed in the first place?
Useful testing could be done by people building their own images. They may not be exactly in sync, but should be close enough to be helpful. Paul Frields wrote up how to create install images at: https://fedoraproject.org/wiki/User:Pfrields/Building_an_ISO_image_for_testi... Testing live images is similar, but you run livecd-creator to build the images. People with local rawhide mirrors should be able to do this pretty conveniently.
Jesse Keating wrote:
But how do we get the isos to you to seed in the first place?
Either what Bruno suggested, or have an automated seeder on a fedoraproject.org system. It would seed only the latest pre-release/RC. You could set a bandwidth limit on it if preferred. Instead of that same fedoraproject.org system grinding to a halt under stress, multiple peers would be sharing the load with it. Essentially replacing the HTTP link with a .torrent link.
Jesse Keating wrote:
But how do we get the isos to you to seed in the first place?
Since that's not required, the question needs no answer. The idea is to get the changed files to the users, then assemble them into the RC at the tester's end. Download the changes, including any to the "make an RC" metadata file, then run the tool of choice to generate the install media.

The nice thing about this is that it's useful day-to-day: if I need to do an install, I can build an up-to-date install ISO (with some unique release number or date in the metadata) and skip the dance of installing a bunch of obsolete packages and then updating them, which beats up the network and servers far more than people who do a lot of installs keeping an rsync copy current. It would also encourage a "standard" full install; dual-layer DVD burners are standard these days, so another reduction in load and bandwidth could be had.
On Sat, 2009-05-30 at 11:26 -0400, Bill Davidsen wrote:
Since that's not required, the question needs no answer. The idea is to get the changed files to the users, then assemble them into the RC at the tester's end. [...]
I'm afraid I'm not following your logic. Torrents aren't free: either the torrent server is going to have to give everybody the bits, or somebody is going to have to download them outside of the torrent and seed them for people.

If you have a real proposal here, I suggest you write it up as a wiki page and bring it to the attention of more than just the test list.
Jesse Keating wrote:
I'm afraid I'm not following your logic. Torrents aren't free, either the torrent server is going to have to give everybody the bits, or somebody is going to have to download them outside of the torrent and seed them for people
Jesse, I'm not following your logic about bittorrent. The torrent server would not "have to give everybody the bits." Is this why you are so against torrents? That's not how they work.
Your automated seeder would be the initial seeder. Once just one tester gets it, they will be able to take the load off of the automated seeder and spread it even faster to more people, and more people, and more people. The automated seeder would not be "overburdened" and would not slow down. Millions of people would be able to download an RC if you used a torrent instead of an HTTP link.
2009/5/31 Michael Cronenworth mike@cchtml.com
Jesse Keating wrote:
I'm afraid I'm not following your logic. Torrents aren't free: either the torrent server is going to have to give everybody the bits, or somebody is going to have to download them outside of the torrent and seed them for people.
Jesse, I'm not following your logic about bittorrent. The torrent server would not "have to give everybody the bits." Is this why you are so against torrents? That's not how they work.
Your automated seeder would be the initial seeder. Once just one tester gets it, they will be able to take the load off of the automated seeder and spread it even faster to more people, and more people, and more people. The automated seeder would not be "overburdened" and would not slow down. Millions of people would be able to download an RC if you used a torrent instead of an HTTP link.
OTOH, it's exactly what torrent.fedoraproject.org is doing. Let's say, for the sake of simplicity, that the maximum number of clients torrent.fedoraproject.org can serve is ten. Once the eleventh client wants the bits, it will no longer get them from the main site but instead from the other leechers. And yes, torrent can get only the differing bytes between RC2 and RC1, if you already have RC1.
-- fedora-test-list mailing list fedora-test-list@redhat.com To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-test-list
cornel panceac wrote:
OTOH, it's exactly what torrent.fedoraproject.org is doing. Let's say, for the sake of simplicity, that the maximum number of clients torrent.fedoraproject.org can serve is ten. Once the eleventh client wants the bits, it will no longer get them from the main site but instead from the other leechers. And yes, torrent can get only the differing bytes between RC2 and RC1, if you already have RC1.
Bittorrent does let you save bandwidth, but it's not as efficient in this regard as either rsync or deltaisos. Rsync (and bittorrent) also have the disadvantage that they let you start from a file which can be very different from the end file, which generates lots of traffic. Deltaisos starting with the Preview only work if you already have the Preview, which is both widely available on other mirrors and very close to the final file. So putting only disos instead of full isos on the RC server ensures that each downloader pulls much less bandwidth from the RC server than at present. Also, if there is more than one RC, additional disos could be provided from RC(n) to RC(n+1). These should be much smaller than those from Preview to RC, which should already be about 1/15 the size of the full ISO.
Andre Robatino wrote:
Bittorrent does let you save bandwidth, but it's not as efficient in this regard as either rsync or deltaisos. Rsync (and bittorrent) also have the disadvantage that they let you start from a file which can be very different from the end file, which generates lots of traffic. Deltaisos starting with the Preview only work if you already have the Preview, which is both widely available on other mirrors and very close to the final file. So putting only disos instead of full isos on the RC server ensures that each downloader pulls much less bandwidth from the RC server than at present. Also, if there is more than one RC, additional disos could be provided from RC(n) to RC(n+1). These should be much smaller than those from Preview to RC, which should already be about 1/15 the size of the full ISO.
Well, Bittorrent could be used to transfer the deltaisos.
Kevin Kofler
Jesse Keating wrote:
On Sat, 2009-05-30 at 11:26 -0400, Bill Davidsen wrote:
Since that's not required, the question needs no answer. The idea is to get the changed files to the users, then assemble them into the RC candidate at the tester's end. Download the changes, including any to the "make an RC" metadata file, then run the tool of choice to generate the install media.
The nice thing about this is that it's useful day-to-day, if I need to do an install I can build an install ISO which is up-to-date (and has some unique release number or date in the metadata), and not do the dance of installing a bunch of obsolete packages and then updating them, which beats the network and servers a lot more than people who do a lot of installs keeping an rsync current. It would also encourage a "standard" full install, dual layer DVD burners are standard these days, another reduction in load and bandwidth could be had.
I'm afraid I'm not following your logic. Torrents aren't free: either the torrent server is going to have to give everybody the bits, or somebody is going to have to download them outside of the torrent and seed them for people.
Everyone seeds for everyone else; the server should not be seeding more than 1-2 clients, who then seed others, and so on, leaving the server to do only bookkeeping.
If you have a real proposal here, I suggest you write it up as a wiki page and bring it to the attention of more than just the test list.
At the moment I'm collecting pieces for a proposal based on using existing parts rather than starting with a blank screen. And it feels as though discussion of a lower-overhead means of propagating rapid changes is on topic here, since people trying new software tend to see a higher rate of updates than people using a more stable version.
I get the impression that jigdo is out of favor for some reason, but it does what I had in mind: allow the end user to create install media as often as needed, just updating the jigdo file and going. If the jigdo control file created a dated ISO image, it could be updated very regularly. And if editing the jigdo file were easier, people could add their own list of non-default packages if they were creating an 8.5GB image.
It appears to be as easy as updating the jigdo file, which may be done already if you do internal daily (or frequent) install media creation for testing.
There, that didn't take a wiki page, all it needs is a comment on the state of the tools needed.
Bill Davidsen wrote:
I get the impression that jigdo is out of favor for some reason
My own personal opinion of jigdo is that I don't have much use for it outside of Fedora, so why should I familiarize myself with yet another tool? Bittorrent gets the job done.
Since I don't keep RPMs on my local machine, isn't jigdo going to keep downloading full RPM sets anyway? I'd have to download from mirrors, which is costing mirror bandwidth.
Jigdo seems to be a worse option over bittorrent.
Once upon a time, Michael Cronenworth mike@cchtml.com said:
I'd have to download from mirrors, which is costing mirror bandwidth.
You've mentioned this a couple of times; why do you think mirrors mirror? If I didn't want people to consume bandwidth downloading rawhide, I wouldn't mirror rawhide.
I personally rsync rawhide to my disk server at home. I can PXE boot test installs, and I keep a fairly up-to-date KVM virtual system around to build LiveCD images and LiveUSB sticks. It is much faster than I'd be able to download a new DVD image daily.
Chris Adams wrote:
I personally rsync rawhide to my disk server at home. I can PXE boot test installs, and I keep a fairly up-to-date KVM virtual system around to build LiveCD images and LiveUSB sticks. It is much faster than I'd be able to download a new DVD image daily.
I suspect that the reason rsync didn't work out for the RCs is that it allows you to start from a file very different from the end file, generating lots of traffic. Last time I used it, I don't remember it giving any estimate of the required download size, either (is there any way to get this?). This could mean lots of people starting from something like the previous Final version, meaning they'd have to download something like half of the full size without knowing it. With disos, you have to start from a fixed point which can be chosen to be very close to the finish and already available on mirrors (like the Preview), and people know in advance exactly how big the diso is.
Chris Adams wrote:
Once upon a time, Michael Cronenworth mike@cchtml.com said:
I'd have to download from mirrors, which is costing mirror bandwidth.
You've mentioned this a couple of times; why do you think mirrors mirror? If I didn't want people to consume bandwidth downloading rawhide, I wouldn't mirror rawhide.
Sorry, I wasn't referring to total bandwidth over time, but bandwidth per second.
Example: Release day is often murder for mirrors. I wait at least a week before upgrading because it would take hours to download packages via preupgrade or yum.
On Mon, 2009-06-01 at 12:15 -0500, Michael Cronenworth wrote:
Example: Release day is often murder for mirrors. I wait at least a week before upgrading because it would take hours to download packages via preupgrade or yum.
Many of our mirrors report a sharp decline in the release day bandwidth recently over previous years/releases, to the point that some are questioning the disk costs to mirror Fedora if they aren't going to be used to their potential.
Jesse Keating wrote:
Many of our mirrors report a sharp decline in the release day bandwidth recently over previous years/releases, to the point that some are questioning the disk costs to mirror Fedora if they aren't going to be used to their potential.
Fedora is seeing its user-base shrink? More mirrors? Faster mirrors? More people are waiting like me? I'll try updating a system to F11 next week and see how well it goes.
On Mon, 2009-06-01 at 12:22 -0500, Michael Cronenworth wrote:
Fedora is seeing its user-base shrink? More mirrors? Faster mirrors? More people are waiting like me? I'll try updating a system to F11 next week and see how well it goes.
We have more mirrors, we have more people using bittorrent, we have fewer people downloading large DVD isos, etc... Our numbers of users don't seem to be declining.
Once upon a time, Michael Cronenworth mike@cchtml.com said:
Chris Adams wrote:
Once upon a time, Michael Cronenworth mike@cchtml.com said:
I'd have to download from mirrors, which is costing mirror bandwidth.
You've mentioned this a couple of times; why do you think mirrors mirror? If I didn't want people to consume bandwidth downloading rawhide, I wouldn't mirror rawhide.
Sorry, I wasn't referring to total bandwidth over time, but bandwidth per second.
Okay, so? Not sure what you are trying to differentiate here. Rawhide changes daily, so I expect anybody using rawhide to use some of my bits per second every day after the sync.
Example: Release day is often murder for mirrors. I wait at least a week before upgrading because it would take hours to download packages via preupgrade or yum.
Yeah, but that's release day, not the weeks before release when you are wanting RCs.
There are also a number of high-bandwidth mirrors these days (such as kernel.org) that greatly reduce the slowdown during release. IIRC even my little mirror only really topped out its bandwidth for a couple of days after the release of F10 (and it wasn't flat-lined; HTTP was still getting through just fine for the most part).
Chris Adams wrote:
Okay, so? Not sure what you are trying to differentiate here. Rawhide changes daily, so I expect anybody using rawhide to use some of my bits per second every day after the sync.
Example: Release day is often murder for mirrors. I wait at least a week before upgrading because it would take hours to download packages via preupgrade or yum.
Yeah, but that's release day, not the weeks before release when you are wanting RCs.
There are also a number of high-bandwidth mirrors these days (such as kernel.org) that greatly reduce the slowdown during release. IIRC even my little mirror only really topped out its bandwidth for a couple of days after the release of F10 (and it wasn't flat-lined; HTTP was still getting through just fine for the most part).
Sounds like mirrors could/should take RCs then? Push them out and announce their availability when they show up (24h?)? Negates the whole bittorrent/jigdo discussion.
Once upon a time, Michael Cronenworth mike@cchtml.com said:
Sounds like mirrors could/should take RCs then? Push them out and announce their availability when they show up (24h?)? Negates the whole bittorrent/jigdo discussion.
Feeding the bits out isn't usually the problem, it is getting the bits synced. There's no way mirrors could get daily (or probably even weekly) builds synced in a useful time. For 11-Preview, the size of just the ISOs is 36.5G; if you exclude source and Live images, it is still 25.3G.
Chris Adams wrote:
Feeding the bits out isn't usually the problem, it is getting the bits synced. There's no way mirrors could get daily (or probably even weekly) builds synced in a useful time. For 11-Preview, the size of just the ISOs is 36.5G; if you exclude source and Live images, it is still 25.3G.
Doh. Brain is on half-power atm.
Sounds like the mirrors should use bittorrent? (then why not just let users use bittorrent) This will be my last random thought today.
Once upon a time, Michael Cronenworth mike@cchtml.com said:
Sounds like the mirrors should use bittorrent? (then why not just let users use bittorrent) This will be my last random thought today.
Bittorrent is not some magic wand. The problem is just the volume of bits that would have to be pushed out in a short period of time to be useful. To generate 25G+ of data and try to distribute it in a timely fashion, Red Hat would have to dedicate large amounts of bandwidth for just that purpose. The bits would have to get from the point the ISOs are generated to the public servers to at least some consumers (be they mirrors or end-users).
Let's say you wanted to get 25G out (at least to the initial point of distribution) in 4 hours. That's an average line rate of about 14 megabits per second to distribute them to _one_ other site. If you have just 10 other sites trying to get them simultaneously, you'll need a full OC-3 or a fractional gigabit ethernet link.
Also, mirrors aren't going to use Bittorrent to fetch bits because AFAIK the automation is not available. I have scripts wrapped around rsync to keep things in sync (and even that still needs some "hand holding" now and then).
Chris Adams wrote:
Once upon a time, Michael Cronenworth mike@cchtml.com said:
Sounds like the mirrors should use bittorrent? (then why not just let users use bittorrent) This will be my last random thought today.
Bittorrent is not some magic wand. The problem is just the volume of bits that would have to be pushed out in a short period of time to be useful. To generate 25G+ of data and try to distribute it in a timely fashion, Red Hat would have to dedicate large amounts of bandwidth for just that purpose. The bits would have to get from the point the ISOs are generated to the public servers to at least some consumers (be they mirrors or end-users).
The initial ISO mirror wouldn't be burdened with 100 mirrors connecting to it. Each of those 100 mirrors would share the load. Yes, it's not a magical solution, but it's better than HTTP/rsync... or am I mistaken? Is bittorrent worse in your eyes?
Let's say you wanted to get 25G out (at least to the initial point of distribution) in 4 hours. That's an average line rate of about 14 megabits per second to distribute them to _one_ other site. If you have just 10 other sites trying to get them simultaneously, you'll need a full OC-3 or a fractional gigabit ethernet link.
Also, mirrors aren't going to use Bittorrent to fetch bits because AFAIK the automation is not available. I have scripts wrapped around rsync to keep things in sync (and even that still needs some "hand holding" now and then).
I realize that. It would be an interesting project to take up if I had the time. "Automated mass data dispersal"
Following up to myself, because I re-read my responses...
I'm sorry to come off as "Mr. Negative" about this; that's not my intent. The more people testing, the better the end result will be. I just think that right now, the best way for end users to test is to rsync rawhide and either do network installs (if you have more than one computer this is very easy to set up) or build images yourself. I haven't built install ISOs myself in a while, so I don't know how hard that is, but building Live images (once you have a tree) is almost trivial (edit one line in one file to point to your local repo and run one command).
During the time leading up to a release, rawhide has little churn, so rsyncing it does not use much bandwidth.
The biggest problem with rsyncing rawhide is that it is the equivalent of the "Everything" directory in a release, not just the "Fedora" bits that land on a DVD, so you have a big hit the first time downloading a bunch of stuff that wasn't on a DVD.
Jesse (if you are still reading): what would it take to have "Everything" and "Fedora" directories in rawhide? Would it make sense to do that to make it easier for the "casual rsyncer" that would start with a DVD ISO (e.g. 11-Preview) to build a tree for testing?
On Mon, 2009-06-01 at 13:02 -0500, Chris Adams wrote:
Jesse (if you are still reading): what would it take to have "Everything" and "Fedora" directories in rawhide? Would it make sense to do that to make it easier for the "casual rsyncer" that would start with a DVD ISO (e.g. 11-Preview) to build a tree for testing?
Pungi caches its downloads, so instead of rsyncing locally to compose, just compose against your local mirror. Next time you compose, pungi will only download the new bits, saving you tons of bandwidth. Easier than trying to create both Everything and Fedora repos all the time.
On Mon, 1 Jun 2009, Chris Adams wrote:
Once upon a time, Michael Cronenworth mike@cchtml.com said:
Sounds like the mirrors should use bittorrent? (then why not just let users use bittorrent) This will be my last random thought today.
Bittorrent is not some magic wand. The problem is just the volume of bits that would have to be pushed out in a short period of time to be useful. To generate 25G+ of data and try to distribute it in a timely fashion, Red Hat would have to dedicate large amounts of bandwidth for just that purpose. The bits would have to get from the point the ISOs are generated to the public servers to at least some consumers (be they mirrors or end-users).
Let's say you wanted to get 25G out (at least to the initial point of distribution) in 4 hours. That's an average line rate of about 14 megabits per second to distribute them to _one_ other site. If you have just 10 other sites trying to get them simultaneously, you'll need a full OC-3 or a fractional gigabit ethernet link.
Also, mirrors aren't going to use Bittorrent to fetch bits because AFAIK the automation is not available. I have scripts wrapped around rsync to keep things in sync (and even that still needs some "hand holding" now and then).
And it is probably worth reading the analysis of mirrors and torrents written by John Hawley
http://linuxsymposium.org/2008/ols-2008-Proceedings-V1.pdf page 173
-sv
Seth Vidal wrote:
And it is probably worth reading the analysis of mirrors and torrents written by John Hawley
http://linuxsymposium.org/2008/ols-2008-Proceedings-V1.pdf page 173
The analysis leaves out DHT and PEX. These features alleviate stress on trackers. Also, UPnP is becoming commonplace. This allows transparent port forwarding -- although Fedora keeps the firewall blocking by default.
It also does not disclose which torrent clients users ran. Depending on the client, it may or may not work well. Bad clients can ruin a torrent; blacklisting bad clients (or even users) can keep torrents healthy.
The analysis also leaves off any numbers of HTTP download counts. Most people want to click on an HTTP link and download with their browser. They'll skip over the bittorrent link, which is hidden anyway.
I don't like the analysis not just because it shows bittorrent in a bad light, but because the author has a poor understanding of bittorrent.
Chris Adams wrote:
Once upon a time, Michael Cronenworth mike@cchtml.com said:
Sounds like mirrors could/should take RCs then? Push them out and announce their availability when they show up (24h?)? Negates the whole bittorrent/jigdo discussion.
Feeding the bits out isn't usually the problem, it is getting the bits synced. There's no way mirrors could get daily (or probably even weekly) builds synced in a useful time. For 11-Preview, the size of just the ISOs is 36.5G; if you exclude source and Live images, it is still 25.3G.
I think this would be a valuable improvement, though: if you could feed changes through some mechanism and then do the image build on the mirror server, you would save everyone a ton of effort and bandwidth, and the CPU time to do the image build is minimal.
Using a bittorrent approach shouldn't actually be too hard if you wanted to do it; I bet most mirror sites could NFS-export storage to save the bytes, so you just need any small system running Fedora and a few scripts to pull the images down and seed them. Note: I said "not too hard" rather than trivial, because error correction is needed. I still think a jigdo-style delta+build solution is best, if overall bandwidth is to be saved.
Once upon a time, Bill Davidsen davidsen@tmr.com said:
I think this would be a valuable improvement, though: if you could feed changes through some mechanism and then do the image build on the mirror server, you would save everyone a ton of effort and bandwidth, and the CPU time to do the image build is minimal.
Building an image isn't free in terms of CPU, RAM, or I/O. Also, IIRC you can only build an image on the same arch as the target (e.g. my i686 server can only build ix86 targets, not x86_64 or PPC).
Chris Adams wrote:
Once upon a time, Bill Davidsen davidsen@tmr.com said:
I think this would be a valuable improvement, though: if you could feed changes through some mechanism and then do the image build on the mirror server, you would save everyone a ton of effort and bandwidth, and the CPU time to do the image build is minimal.
Building an image isn't free in terms of CPU, RAM, or I/O. Also, IIRC you can only build an image on the same arch as the target (e.g. my i686 server can only build ix86 targets, not x86_64 or PPC).
I'll take your word for that, although I have no idea why assembling parts into a whole would take any architecture dependent code. I make ISOs for PPC & Mac using x86 tools (mkisofs), so clearly it's a limitation rather than a requirement to build ISOs.
Once upon a time, Bill Davidsen davidsen@tmr.com said:
Chris Adams wrote:
Building an image isn't free in terms of CPU, RAM, or I/O. Also, IIRC you can only build an image on the same arch as the target (e.g. my i686 server can only build ix86 targets, not x86_64 or PPC).
I'll take your word for that, although I have no idea why assembling parts into a whole would take any architecture dependent code. I make ISOs for PPC & Mac using x86 tools (mkisofs), so clearly it's a limitation rather than a requirement to build ISOs.
Building ISOs is the last step of a many step process. You have to build content to go _on_ the ISOs. Setting up the boot images is arch specific.
On Wed, 2009-06-03 at 10:58 -0500, Chris Adams wrote:
I'll take your word for that, although I have no idea why assembling parts into a whole would take any architecture dependent code. I make ISOs for PPC & Mac using x86 tools (mkisofs), so clearly it's a limitation rather than a requirement to build ISOs.
Building ISOs is the last step of a many step process. You have to build content to go _on_ the ISOs. Setting up the boot images is arch specific.
Bill is talking about a delta like system to re-combine existing isos. Chris is talking about using a compose tool to work from a repo of packages to an iso, doing the buildinstall along the way. These are two different topics and I think the source of confusion between these two posters.
Chris Adams wrote:
Once upon a time, Bill Davidsen davidsen@tmr.com said:
Chris Adams wrote:
Building an image isn't free in terms of CPU, RAM, or I/O. Also, IIRC you can only build an image on the same arch as the target (e.g. my i686 server can only build ix86 targets, not x86_64 or PPC).
I'll take your word for that, although I have no idea why assembling parts into a whole would take any architecture dependent code. I make ISOs for PPC & Mac using x86 tools (mkisofs), so clearly it's a limitation rather than a requirement to build ISOs.
Building ISOs is the last step of a many step process. You have to build content to go _on_ the ISOs. Setting up the boot images is arch specific.
But that's the point: if you change a few packages, say on rawhide or updates, those get pushed to the mirrors, and the mirrors build the ISO from the packages. The RPM build needs to be on the target machine, but after it's pushed it's just ones and zeros. So it takes minimal time to distribute a change, and mostly I/O to create the image, check the CRC on the image to be sure it's as expected, and start serving.
Fedora machines have to do the build anyway, and I think the mirrors will have to take the changes anyway, so assembling the ISO would appear to be the part which you could distribute to sites willing to participate.
Thought: push that back one step and add a package to rawhide with a name like 'daily-jigdo-090821.rpm' which, when installed, would pull all the RPMs to build an ISO into some known place and create an install ISO ready for use. Because I test first on a VM (as I bet lots of people do), a full install using a kickstart is no big deal, and it positively prevents some bugfix I hacked up yesterday from causing an issue today. And if I want to install somewhere on bare iron, I can burn and install without worrying about how much network I need later for updates.
Call this approach the "daily spin" and move all the work to my machine, since people seem to have invented tons of reasons why it would be too much load on a mirror server. Feel free to substitute any other package for jigdo, the goal is to be able to really test the state of the art at some point in time. Dare I hope someone will at least discuss this for fc12?
Michael Cronenworth wrote:
Chris Adams wrote:
Okay, so? Not sure what you are trying to differentiate here. Rawhide changes daily, so I expect anybody using rawhide to use some of my bits per second every day after the sync.
Example: Release day is often murder for mirrors. I wait at least a week before upgrading because it would take hours to download packages via preupgrade or yum.
Yeah, but that's release day, not the weeks before release when you are wanting RCs.
There are also a number of high-bandwidth mirrors these days (such as kernel.org) that greatly reduce the slowdown during release. IIRC even my little mirror only really topped out its bandwidth for a couple of days after the release of F10 (and it wasn't flat-lined; HTTP was still getting through just fine for the most part).
Sounds like mirrors could/should take RCs then? Push them out and announce their availability when they show up (24h?)? Negates the whole bittorrent/jigdo discussion.
Not for most people. I have no desire to pull an RC on a regular basis; I would rather track rawhide. But if I could use the RPMs I have and a little jigdo to get anything I haven't upgraded, then I would be glad to build an RC regularly and test the install on a VM at least. I wouldn't beat up my bandwidth to pull 1-2 DVDs a day (or a week).
On Mon, Jun 01, 2009 at 12:01:52 -0500, Chris Adams cmadams@hiwaay.net wrote:
You've mentioned this a couple of times; why do you think mirrors mirror? If I didn't want people to consume bandwidth downloading rawhide, I wouldn't mirror rawhide.
And thanks for doing that! Hiwaay is one of the best Fedora mirrors for the US. It syncs up a couple of times a day. I switch between it and the US kernel.org mirror. Most of the other US mirrors do not sync up daily.
Bruno Wolff III said the following on 06/04/2009 06:24 AM Pacific Time:
On Mon, Jun 01, 2009 at 12:01:52 -0500, Chris Adams cmadams@hiwaay.net wrote:
You've mentioned this a couple of times; why do you think mirrors mirror? If I didn't want people to consume bandwidth downloading rawhide, I wouldn't mirror rawhide.
And thanks for doing that! Hiwaay is one of the best Fedora mirrors for the US. It syncs up a couple of times a day. I switch between it and the US kernel.org mirror. Most of the other US mirrors do not sync up daily.
Definitely agree!
Would a specific mirror like hiwaay be willing/able to carry RCs for testing purposes?
John
John Poelstra wrote:
Bruno Wolff III said the following on 06/04/2009 06:24 AM Pacific Time:
On Mon, Jun 01, 2009 at 12:01:52 -0500, Chris Adams cmadams@hiwaay.net wrote:
You've mentioned this a couple of times; why do you think mirrors mirror? If I didn't want people to consume bandwidth downloading rawhide, I wouldn't mirror rawhide.
And thanks for doing that! Hiwaay is one of the best Fedora mirrors for the US. It syncs up a couple of times a day. I switch between it and the US kernel.org mirror. Most of the other US mirrors do not sync up daily.
Definitely agree!
Would a specific mirror like hiwaay be willing/able to carry RCs for testing purposes?
John
I don't think this is necessary. I never had any trouble getting a full-speed (for me) 3Mbps download for any of the 6 full DVD images I had to download from the RC server in order to create the relatively tiny deltaisos - and if the RC server distributed only these, its bandwidth could be reduced by a factor of 20-100. The problem seems to be a shortage of people willing to do testing. Deltaisos would make it a lot easier to download the RCs, which is probably a factor, since it's easier and faster to run the one command to apply the deltas (if they're available) than it is to have to spend several hours downloading the full image over, and over, and over. Creating them is equally easy on the server side, so that shouldn't be an issue either.
On the other hand, it would be nice if a few servers could host deltaisos from N Final to (N+1) Final. All that's necessary to use them is a good copy of N Final either as an ISO, or on media. Most of the existing Fedora users probably qualify.
Once upon a time, John Poelstra poelstra@redhat.com said:
Would a specific mirror like hiwaay be willing/able to carry RCs for testing purposes?
For me, it comes down to bandwidth. I'm running this on the side out of our excess bandwidth capacity. We don't have anything like the bandwidth that some of the "big" mirrors (such as kernel.org) have. Even though the number of downloads may be smaller for RCs, if there were fewer mirrors carrying them, I'd probably see a bandwidth increase, and that might be a problem.
Also, in my case I'm running mirror.hiwaay.net on old parts that I've scrounged over the years that are starting to reach their limits, and disk space is getting to be at a premium.
Michael Cronenworth wrote:
Bill Davidsen wrote:
I get the impression that jigdo is out of favor for some reason
My own personal opinion of jigdo is that I don't have much use for it outside of Fedora, so why should I familiarize myself with yet another tool? Bittorrent gets the job done.
Since I don't keep RPMs on my local machine, isn't jigdo going to keep downloading full RPM sets anyway? I'd have to download from mirrors, which is costing mirror bandwidth.
Unless you throw away your previous ISO each time, of course you keep the RPMs, they're part of the image. Pulling them off a DVD is still probably faster than the net, unless you have a killer connection, and in any case it will be nicer to the rest of the world.
Jigdo seems to be a worse option than bittorrent.
I can't imagine any metric by which it would be faster to pull a whole distribution than a few changes. If you choose to throw away everything and start over each time, that's your problem; most people want to save time, and pulling less over the net seems the answer.
Note: a jigdo which used multiple connections to pull RPMs would be great, but what we have is getting the job done.
On Mon, 2009-06-01 at 11:47 -0400, Bill Davidsen wrote:
Everyone seeds for everyone else; the server should not be seeding more than 1-2 clients, who then seed others, and so on, leaving the server to do only bookkeeping.
This is the classic torrent problem, particularly with rapid torrents. If we get the bits to the torrent server (which takes a while in itself, since the link from place of creation to place of torrenting isn't the fastest), then launch the torrent, now what? There is exactly one seeder, the torrent server. So you're going to have lots of clients jumping on that torrent, and they're all going to be fighting for the tiny bit of bandwidth that server has left to get little bits out there. The clients can share those little bits that they're hopefully getting in a random pattern from the server, but inevitably they will all wind up at the same % done, all still waiting on the torrent server. Our torrents only really perform well /after/ a number of people have successfully downloaded and are now seeding the torrent storm. When we're dealing with one or multiple RCs a day, there is no chance for this to scale.
If you have a real proposal here, I suggest you write it up as a wiki page and bring it to the attention of more than just the test list.
At the moment I'm gathering the pieces for a proposal based on using existing parts rather than starting from a blank screen. And discussion of a lower-overhead means of propagating rapid changes feels on topic here, since people trying new software tend to see a higher rate of upgrades than people running a more stable version.
I get the impression that jigdo is out of favor for some reason, but it does what I had in mind: let the end user create install media as often as needed -- just update the jigdo file and go. If the jigdo control file created a dated ISO image, it could be updated very regularly. And if editing the jigdo file were easier, people could add their own list of non-default packages if they were creating an 8.5GB image.
It appears to be as easy as updating the jigdo file, which may be done already if you do internal daily (or frequent) install media creation for testing.
There, that didn't take a wiki page, all it needs is a comment on the state of the tools needed.
Jigdo is out of favor mostly because it's an end user UI nightmare. It's a really terrible program to try and use effectively.
Jesse Keating wrote:
On Mon, 2009-06-01 at 11:47 -0400, Bill Davidsen wrote:
Everyone seeds for everyone else; the server should not be seeding more than 1-2 clients, who then seed others, and so on, leaving the server to do only bookkeeping.
This is the classic torrent problem, particularly with rapid torrents.
Wrong. This is the nature of torrenting. It seems you are dead-set against bittorrent so this will be my last discussion on it.
One person connects at 100kB/sec download. Then another connects; the seeder splits to 50kB/sec each. However, the peers are sharing. If they have 50kB/sec upload speeds (most people have around 512kbit upstream), then guess what: they are sharing 100kB/sec of bandwidth all around! That's much faster than two clients connecting to one HTTP URL at 50kB/sec each. It scales as more people connect and once more seeders are available. There is no hard load on the Fedora server, and lots of users get RCs at reasonable speeds.
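The arithmetic above can be sketched concretely (a rough illustration with made-up round numbers, not measurements from any real swarm):

```shell
# One seeder with 100 kB/s upload serving two peers; each peer also
# has ~50 kB/s of upstream (roughly a 512 kbit/s line) to relay
# pieces to the other peer. Numbers are illustrative only.
seed_up=100                       # seeder upload, kB/s
peers=2
from_seed=$(( seed_up / peers ))  # fresh data each peer gets from the seed
peer_up=50                        # what each peer can relay to the other
total=$(( from_seed + peer_up ))  # fresh pieces + pieces relayed by the peer
echo "per-peer download: ~${total} kB/s (vs ${from_seed} kB/s from the seed alone)"
```

The caveat is that the relay term only exists once the peers hold *different* pieces; while everyone is stuck at the same % done, there is nothing to relay, which is the failure mode described elsewhere in this thread.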
The only problem with bittorrent is that Jesse Keating doesn't like it.
On Mon, 2009-06-01 at 11:27 -0500, Michael Cronenworth wrote:
Wrong. This is the nature of torrenting. It seems you are dead-set against bittorrent so this will be my last discussion on it.
One person connects at 100kB/sec download. Then another connects; the seeder splits to 50kB/sec each. However, the peers are sharing. If they have 50kB/sec upload speeds (most people have around 512kbit upstream), then guess what: they are sharing 100kB/sec of bandwidth all around! That's much faster than two clients connecting to one HTTP URL at 50kB/sec each. It scales as more people connect and once more seeders are available. There is no hard load on the Fedora server, and lots of users get RCs at reasonable speeds.
The only problem with bittorrent is that Jesse Keating doesn't like it.
It's not that I don't like bittorrent; it's just what I've observed from our tracker. Client torrents are horribly slow until enough people have downloaded large enough amounts to effectively seed. The only way we've made torrents useful on release day is to pre-sync the bits to 5 or more people to seed for us; they connect to the tracker immediately and offer up their full download to seed. Without this, the clients trying to torrent get very poor speeds for a very long time.
I know what the theory of Bittorrent is, but I also know the practice of how it's working for the Fedora project.
On Mon, Jun 1, 2009 at 10:27 AM, Michael Cronenworth mike@cchtml.com wrote:
Jesse Keating wrote:
On Mon, 2009-06-01 at 11:47 -0400, Bill Davidsen wrote:
Everyone seeds for everyone else; the server should not be seeding more than 1-2 clients, who then seed others, and so on, leaving the server to do only bookkeeping.
This is the classic torrent problem, particularly with rapid torrents.
Wrong. This is the nature of torrenting. It seems you are dead-set against bittorrent so this will be my last discussion on it.
In the end the variables that need to be looked at are:
A = number of primary seeds
B = size of the data being blocked off
C = time it takes to get the data to the primary seeds
D = time it takes to get a download from those seeds
E = amount of time that the data is useful
There are also limiting factors in the number of networks that allow bittorrent. Many ISPs and backbones shape its bandwidth down or limit the number of peers a person can see. There is also the question of whether the connection is asymmetric or not.
OK, the next part is all about experience, without any actual numbers put to it, from watching previous downloads. The number of primary peers is usually 1-2. [When dealing with a pirated music, movie, or game DVD, the numbers are usually a lot higher.]
The amount of time to get the seeds ready is about 4 hours, and due to other restrictions a lot of people seem to take 8-16 hours to finish a download via bittorrent. If they do not (or cannot) peer, that makes things slower. That basically says that testing once a day is not feasible for accurate data. Maybe once a week is possible, but it all depends on the number of sites willing to be primary peers, and with bandwidth costs going up, my guess is that is a limited number.
Jesse Keating wrote:
Our torrents only really perform well /after/ a number of people have successfully downloaded and are now seeding the torrent storm. When we're dealing with one or multiple RCs a day, there is no chance for this to scale.
Bittorrent is overkill for this. I had no trouble getting a full speed 3Mbps direct download for 2 of the DVD images from the RC server, despite the fact that the lack of deltaisos forced me to download 15-20 times as much as would have been necessary. Between two RCs, the disos would be microscopic - probably less than 1% of the full ISO's size. There's no need for more than one direct download server for something that small. I really don't understand why nobody seems to take the idea seriously. All that's needed is to tell people in advance that RCs will be made available as disos from the Preview, and disos between RCs - NOT as full ISOs. Anyone who is interested can download the Preview from one of the many existing mirrors well in advance. Then, everyone should be able to get the tiny disos by direct download from the RC server - and even after spending 15 minutes or so reconstructing the full ISO, it's still much faster than a full download. If, despite the diso's small size, the RC server is still overloaded somehow, they can be made available via bittorrent.
On Mon, 2009-06-01 at 13:06 -0400, Andre Robatino wrote:
Jesse Keating wrote:
Our torrents only really perform well /after/ a number of people have successfully downloaded and are now seeding the torrent storm. When we're dealing with one or multiple RCs a day, there is no chance for this to scale.
Bittorrent is overkill for this. I had no trouble getting a full speed 3Mbps direct download for 2 of the DVD images from the RC server, despite the fact that the lack of deltaisos forced me to download 15-20 times as much as would have been necessary. Between two RCs, the disos would be microscopic - probably less than 1% of the full ISO's size. There's no need for more than one direct download server for something that small. I really don't understand why nobody seems to take the idea seriously. All that's needed is to tell people in advance that RCs will be made available as disos from the Preview, and disos between RCs - NOT as full ISOs. Anyone who is interested can download the Preview from one of the many existing mirrors well in advance. Then, everyone should be able to get the tiny disos by direct download from the RC server - and even after spending 15 minutes or so reconstructing the full ISO, it's still much faster than a full download. If, despite the diso's small size, the RC server is still overloaded somehow, they can be made available via bittorrent.
Disos add complexity. A lot of complexity, from the generation side to the consumption side. Where the ISOs are generated there isn't easy access to the old ISOs, and rsync is often much easier to manage (as long as the rsync master isn't being bombarded with tonnes of requests) for getting an ongoing RC set updated enough to continue tests.
Jesse Keating wrote:
Disos add complexity. A lot of complexity, from the generation side to the consumption side. Where the ISOs are generated there isn't easy access to the old ISOs, and rsync is often much easier to manage (as long as the rsync master isn't being bombarded with tonnes of requests) for getting an ongoing RC set updated enough to continue tests.
On the consumption side, running the applydeltaiso command is very simple - anyone sophisticated enough to be a good tester should be able to handle it. The man page is about 2/3 of a screen. On the generation side, all that's needed is for someone with access to the RC server to run the makedeltaiso command once for each ISO. The number is small enough that it can be done manually. If I wanted to, I could download every one of them, generate the diso, then upload them to a file hosting service as I did with the 32- and 64-bit DVD images. It's tedious and time-consuming since I have to do the down/uploading, which wouldn't be necessary for someone with direct access to the server.
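For readers unfamiliar with the tools mentioned above: makedeltaiso and applydeltaiso ship in the deltarpm package. A sketch of the workflow, with hypothetical file names (it assumes the relevant ISOs are already on disk), might look like this:

```shell
# Server side: someone with access to both trees generates the delta
# once per ISO.
makedeltaiso F11-Preview-x86_64-DVD.iso F11-RC2-x86_64-DVD.iso \
             F11-Preview_to_RC2-x86_64-DVD.diso

# Tester side: reconstruct the new ISO from the old one plus the
# (much smaller) delta.
applydeltaiso F11-Preview-x86_64-DVD.iso \
              F11-Preview_to_RC2-x86_64-DVD.diso \
              F11-RC2-x86_64-DVD.iso

# Verify the result against the published checksum before testing with it.
sha256sum F11-RC2-x86_64-DVD.iso
```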
On 29.05.2009 18:25, Jesse Keating wrote:
On Fri, 2009-05-29 at 10:01 +0200, Thorsten Leemhuis wrote:
Just a short comment: years ago someone made daily RCs available via an rsync server. Thus it was very easy to get the latest RC -- you just rsynced against the installer ISO you already had. As only small parts of the ISO changed from day to day, it was a task of just a few minutes, even with a not-that-fast internet connection.
We had that not too long ago. The amount of people trying to rsync it made the server completely unavailable and nobody got the bits. It was a disaster.
I meant the "long ago" time, back in the Core days and before you were working for RH. Back then it worked quite well afaicr.
CU knurd
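The rsync workflow described above amounts to a single command, since rsync's rolling-checksum delta transfer re-sends only the blocks that changed between builds. A sketch, with a hypothetical server name and module path:

```shell
# Refresh an already-downloaded RC ISO in place; only changed blocks
# cross the wire. Server and path are made up for illustration.
rsync -avP rsync://rc.example.fedoraproject.org/rcs/F11-RC-x86_64-DVD.iso .
```

The -P flag (--partial --progress) matters here: an interrupted transfer can resume without starting over, which is exactly what you want on a saturated server.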
2009/5/31 Thorsten Leemhuis fedora@leemhuis.info
On 29.05.2009 18:25, Jesse Keating wrote:
On Fri, 2009-05-29 at 10:01 +0200, Thorsten Leemhuis wrote:
Just a short comment: years ago someone made daily RCs available via an rsync server. Thus it was very easy to get the latest RC -- you just rsynced against the installer ISO you already had. As only small parts of the ISO changed from day to day, it was a task of just a few minutes, even with a not-that-fast internet connection.
We had that not too long ago. The amount of people trying to rsync it made the server completely unavailable and nobody got the bits. It was a disaster.
I meant the "long ago" time, back in the Core days and before you were working for RH. Back then it worked quite well afaicr.
Maybe it would help to set priorities: what's more important, saving users' effort or keeping Fedora's servers alive? Or something else?
CU knurd
-- fedora-test-list mailing list fedora-test-list@redhat.com To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-test-list
On Fri, May 29, 2009 at 10:01:19 +0200, Thorsten Leemhuis fedora@leemhuis.info wrote:
I haven't checked if that still works, but I guess it would. Of course not with the Live spins, as they are compressed and thus everything changes all the time. But maybe we could offer them as uncompressed images somewhere -- you could still test them using USB sticks, DVDs, or virtualization solutions like KVM, Qemu, and VirtualBox.
If you don't need to test the exact bits, another option is to maintain a local mirror of rawhide and build live CD/DVD images yourself using livecd-creator. It is pretty easy to do. Rawhide churn is fairly high, so even with rsync you will regularly see more than 100 MB of changes, and sometimes more than 1 GB, on a daily basis. There also aren't a lot of rsync mirrors that are updated daily.
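As a rough sketch of the livecd-creator route mentioned above (the kickstart path is the one shipped by the spin-kickstarts package; you would edit it to point at your local rawhide mirror, so treat these paths as assumptions):

```shell
# Build a live image from local package repos; run as root.
livecd-creator \
    --config=/usr/share/spin-kickstarts/fedora-livecd-desktop.ks \
    --fslabel=Fedora-Rawhide-Test \
    --cache=/var/cache/live
```

Keeping the --cache directory between runs avoids re-downloading unchanged packages, which is what makes daily rebuilds tolerable.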
Jesse Keating said the following on 05/28/2009 10:36 AM Pacific Time:
A late-discovered, and only just potentially fixed, anaconda storage bug[1] has necessitated another week's slip of our schedule. The change is important, but invasive enough to require re-validating our storage tests. We were already late in producing the Release Candidate, and there is not enough time to produce another one and validate it in time for next Tuesday's release date. Therefore we have decided to enact another week-long slip of the release. This gives us time to create a second release candidate, fully validate it, and hand it off to the mirrors in plenty of time to sync up for the new release date of June 9th. As much as we regret slipping, we also wish to avoid easily triggerable bugs in our release, particularly in software that cannot be fixed with a 0-day update.
At this time we would only accept tag requests for critical issues.
Forgive me for asking the obvious.... have we done a review of all the open anaconda bugs for 'rawhide' to make sure there are no other bugs already open for anaconda that could put us back into this same situation?
John
Forgive me for asking the obvious.... have we done a review of all the open anaconda bugs for 'rawhide' to make sure there are no other bugs already open for anaconda that could put us back into this same situation?
We have now:
https://www.redhat.com/archives/anaconda-devel-list/2009-May/msg00352.html
- Chris
On Thu, 2009-05-28 at 10:36 -0700, Jesse Keating wrote:
A late-discovered, and only just potentially fixed, anaconda storage bug[1] has necessitated another week's slip of our schedule. The change is important, but invasive enough to require re-validating our storage tests. We were already late in producing the Release Candidate, and there is not enough time to produce another one and validate it in time for next Tuesday's release date. Therefore we have decided to enact another week-long slip of the release. This gives us time to create a second release candidate, fully validate it, and hand it off to the mirrors in plenty of time to sync up for the new release date of June 9th. As much as we regret slipping, we also wish to avoid easily triggerable bugs in our release, particularly in software that cannot be fixed with a 0-day update.
At this time we would only accept tag requests for critical issues.
Does "critical issues" mean reviewed and approved F11Blocker bugs?
Thanks, James
On Thu, 2009-05-28 at 19:34 +0000, James Laska wrote:
Does "critical issues" mean reviewed and approved F11Blocker bugs?
Yes, that's a valid interpretation. Case in point: a bug was just fixed that would lock up Intel video cards if an Xv window (totem, mplayer, et al.) was made fullscreen. We accepted that bugfix.
On Thu, May 28, 2009 at 01:52:12PM -0700, Jesse Keating wrote:
On Thu, 2009-05-28 at 19:34 +0000, James Laska wrote:
Does "critical issues" mean reviewed and approved F11Blocker bugs?
Yes, that's a valid interpretation. Case in point: a bug was just fixed that would lock up Intel video cards if an Xv window (totem, mplayer, et al.) was made fullscreen. We accepted that bugfix.
The ATI cards aren't in much better shape these days. I think lately the balance for "preferred Fedora video card" is tipped more in the Intel direction, however, since there seem to be fewer issues and more bugfixes coming more quickly for the issues they do have.
2009/5/29 Chuck Anderson cra@wpi.edu:
On Thu, May 28, 2009 at 01:52:12PM -0700, Jesse Keating wrote:
On Thu, 2009-05-28 at 19:34 +0000, James Laska wrote:
Does "critical issues" mean reviewed and approved F11Blocker bugs?
Yes, that's a valid interpretation. Case in point: a bug was just fixed that would lock up Intel video cards if an Xv window (totem, mplayer, et al.) was made fullscreen. We accepted that bugfix.
The ATI cards aren't in much better shape these days. I think lately the balance for "preferred Fedora video card" is tipped more in the Intel direction, however, since there seem to be fewer issues and more bugfixes coming more quickly for the issues they do have.
Very subjective and has nothing to do with the topic. That was just an example.