Hi,

One of the three is causing a problem on booting. Basically, the terminal reports that /boot (or some of the other bits not on the LVM part) can't be found; it never gets as far as firing up LVM.

The kernel and udev came from today's (20th April) rawhide update. The system works fine if I revert to the 2134_fc6 kernel.

TTFN
Paul
On Thu, 2006-04-20 at 16:24 +0100, PFJ wrote:
> Hi,
>
> One of the three is causing a problem on booting. Basically, the terminal reports that /boot (or some of the other bits not on the LVM part) can't be found; it never gets as far as firing up LVM.
>
> The kernel and udev came from today's (20th April) rawhide update. The system works fine if I revert to the 2134_fc6 kernel.
>
> TTFN
> Paul

I'm seeing something similar, but no kernel works for me. The latest crashes and earlier ones have problems. Logical volumes inside a volume group are not being found even though I can vgdisplay -v them.

Where to file the bug?

tjb
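[If the LVs show up in vgdisplay but their device nodes are missing, one generic LVM check worth trying -- not something suggested in this thread, just standard diagnostics -- is whether the volumes are simply inactive:]

```shell
# List what LVM can see and whether each LV is ACTIVE or inactive
lvscan

# Activate all logical volumes in every visible volume group;
# device nodes should then appear under /dev/<vg>/<lv>
vgchange -ay

# Confirm the kernel now has device-mapper tables for them
dmsetup ls
```

(These need root, and on a box with the broken device-mapper/lvm2 the vgchange step is exactly what fails.)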
Hi,

> I'm seeing something similar, but no kernel works for me. The latest crashes and earlier ones have problems. Logical volumes inside a volume group are not being found even though I can vgdisplay -v them.
>
> Where to file the bug?

kernel.

If it's wrong, the RH chaps are usually so lovely and kind that they put it in the right place.

TTFN
Paul
Paul wrote:
> Hi,
>
>> I'm seeing something similar, but no kernel works for me. The latest crashes and earlier ones have problems. Logical volumes inside a volume group are not being found even though I can vgdisplay -v them.
>>
>> Where to file the bug?
>
> kernel.
>
> If it's wrong, the RH chaps are usually so lovely and kind that they put it in the right place.
>
> TTFN
> Paul

The problem is in the new device-mapper or lvm2. I had to revert both to get my LVs back and mounted.
Clyde E. Kunkel clydekunkel7734@cox.net wrote:
> Paul wrote:
>> Hi,
>>
>>> I'm seeing something similar but no kernel works for me. The latest crashes

Same here (kernel-2.6.16-1.2141_FC6.i686); an earlier one works (kernel-2.6.16-1.2136_FC6.i686). BTW, the latest vanilla kernels crash similarly.

>>> and earlier ones have problems.

Which ones?

>>> Logical volumes inside a volume group are not being found even though I can vgdisplay -v them.
>>>
>>> Where to file the bug?
>>
>> kernel. If it's wrong, the RH chaps are usually so lovely and kind that they put it in the right place. TTFN Paul
>
> The problem is in the new device-mapper or lvm2. I had to revert both to get my LVs back and mounted.

The kernel is broken. Everything (except the kernel) is up to rawhide here, and that is what I'm using right now.
Horst von Brand wrote:
> Clyde E. Kunkel clydekunkel7734@cox.net wrote:
>> Paul wrote:
>>> Hi,
>>>
>>>> I'm seeing something similar but no kernel works for me. The latest crashes
>
> Same here (kernel-2.6.16-1.2141_FC6.i686); an earlier one works (kernel-2.6.16-1.2136_FC6.i686). BTW, the latest vanilla kernels crash similarly.
>
>>>> and earlier ones have problems.
>
> Which ones?
>
>>>> Logical volumes inside a volume group are not being found even though I can vgdisplay -v them.
>>>>
>>>> Where to file the bug?
>>>
>>> kernel. If it's wrong, the RH chaps are usually so lovely and kind that they put it in the right place. TTFN Paul
>>
>> The problem is in the new device-mapper or lvm2. I had to revert both to get my LVs back and mounted.
>
> The kernel is broken. Everything (except the kernel) is up to rawhide here, and that is what I'm using right now.

From Dave Jones in another thread -- see especially the sentence in parentheses:

"This is actually a broken lvm2/device-mapper problem. Downgrade those to the versions that shipped with FC5, and reinstall the kernel, and it'll boot just fine.

(The reason the old kernel continued to work is because it still had the old versions of lvm2 in its initrd.)

Dave"
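[Dave's suggested recovery would look roughly like this. The FC5 package file names are placeholders, and the 2141 kernel NVR is taken from elsewhere in the thread; adjust both to whatever you actually have installed:]

```shell
# Roll lvm2 and device-mapper back to the FC5 versions
# (exact FC5 NVRs are placeholders -- fetch them from the FC5 tree)
rpm -Uvh --oldpackage lvm2-<fc5-version>.i386.rpm \
                      device-mapper-<fc5-version>.i386.rpm

# Reinstall the kernel so its initrd is regenerated with the
# downgraded lvm2 tools inside it
rpm -e kernel-2.6.16-1.2141_FC6
rpm -ivh kernel-2.6.16-1.2141_FC6.i686.rpm
```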
Clyde E. Kunkel wrote:
> From Dave Jones in another thread -- see especially the sentence in parentheses:
>
> "This is actually a broken lvm2/device-mapper problem. Downgrade those to the versions that shipped with FC5, and reinstall the kernel, and it'll boot just fine.
>
> (The reason the old kernel continued to work is because it still had the old versions of lvm2 in its initrd.)
>
> Dave"

I did, and it still crashes on boot...

Kevin
Kevin DeKorte wrote:
> Clyde E. Kunkel wrote:
>> From Dave Jones in another thread -- see especially the sentence in parentheses:
>>
>> "This is actually a broken lvm2/device-mapper problem. Downgrade those to the versions that shipped with FC5, and reinstall the kernel, and it'll boot just fine.
>>
>> (The reason the old kernel continued to work is because it still had the old versions of lvm2 in its initrd.)
>>
>> Dave"
>
> I did, and it still crashes on boot...
>
> Kevin

OK. All I know at this point is that when I reverted, my LVs were seen and I could mount them. (I do not have / on an LV.) If your 2147 kernel was installed with the new device-mapper and lvm2, then its initrd was built with the new device-mapper and lvm2; an older kernel whose initrd was built with the previous versions "should work", and I see in another thread that you can boot with an older kernel.

Try rebuilding the initrd for the 2147 kernel with the reverted device-mapper and lvm2 installed. I see that rawhide today has the older versions.
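[The initrd rebuild Clyde describes would be something like the following. The 2147 version string follows the thread; the exact /boot paths are assumptions for a stock FC install:]

```shell
# With the reverted device-mapper and lvm2 installed, regenerate
# the 2147 initrd so it picks up the older lvm2 binaries.
# Keep a backup of the broken image in case anything goes wrong.
mv /boot/initrd-2.6.16-1.2147_FC6.img \
   /boot/initrd-2.6.16-1.2147_FC6.img.bak
mkinitrd /boot/initrd-2.6.16-1.2147_FC6.img 2.6.16-1.2147_FC6
```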
On Thu, 2006-04-20 at 15:16 -0400, Thomas J. Baker wrote:
> I'm seeing something similar but no kernel works for me. The latest crashes and earlier ones have problems. Logical volumes inside a volume group are not being found even though I can vgdisplay -v them.
>
> Where to file the bug?

Personally, I'd say anaconda, unless your system has a good reason for using LVM.

Normal users have no particular need for LVM -- it just makes the boot process gratuitously more fragile, and this is just a symptom of that fragility. I can't imagine why we'd want to default to using LVM on single-disk workstation installs.

Having ext3 as a module is similarly questionable.
On Fri, Apr 21, 2006 at 03:53:44PM +0100, David Woodhouse wrote:
> I can't imagine why we'd want to default to using LVM on single-disk workstation installs.

I want LVM on single-disk workstation installs.
David,

On Fri, 2006-04-21 at 15:53 +0100, David Woodhouse wrote:
> Normal users have no particular need for LVM -- it just makes the boot process gratuitously more fragile, and this is just a symptom of that fragility.

With current hard disk sizes, LVM is really a good thing to have, as you're not forced to allocate every scrap of the disk at installation time but can resize volumes when needed. Your argument of fragility is a strawman; just because you deem LVM the spawn of evil doesn't mean that this breakage is not a bug that should be fixed.

Nils
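[The resizing Nils mentions amounts to growing a volume and its filesystem, which LVM can do while the filesystem stays mounted. The LV name below is the hypothetical FC-default name, not one from this thread:]

```shell
# Grow a logical volume by 10 GB while it remains mounted
lvextend -L +10G /dev/VolGroup00/LogVol00

# Grow the mounted ext3 filesystem to fill the enlarged LV.
# ext2online was the FC5-era tool; later releases fold this
# into resize2fs.
ext2online /dev/VolGroup00/LogVol00
```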
On Mon, 2006-04-24 at 22:13 +0200, Nils Philippsen wrote:
> With current hard disk sizes, LVM is really a good thing to have, as you're not forced to allocate every scrap of the disk at installation time but can resize volumes when needed.

Nobody suggested that users should be 'forced to allocate every scrap of the disk at installation time'. That really _does_ appear to be a straw man. If a user wants to select LVM and allocate only a part of the disk, of course that should be permitted. I was talking about what the _default_ should be.

Of course LVM has its benefits. I'm certainly not suggesting that we should remove support for it from the installer. But what percentage of people actually _use_ the potential that it offers? I certainly don't. What percentage of Fedora users even know that it's there? Most single-disk workstation installs don't seem to benefit from the fact that LVM gets enabled.

> Your argument of fragility is a strawman; just because you deem LVM the spawn of evil doesn't mean that this breakage is not a bug that should be fixed.

(I'm not entirely sure I understand what you mean by a strawman in that context. I'm aware of the concept of a straw man argument, of course, but I don't see how it relates to what I said. What _was_ the straw man you're referring to -- the words I put in your mouth which I then contradicted?)

I certainly don't consider LVM to be 'the spawn of evil' -- I just think it's a poor _default_, because it makes the boot process more fragile and I can't imagine that _many_ people actually get any benefit from it at all (unless, of course, they actually know and care about it and would have enabled it anyway regardless of the default, in which case they don't benefit from its being the default). It's just a simple trade-off of risk vs. benefit, for the case of the user who doesn't know about LVM and _doesn't_ enable it for themselves.

The risks may be relatively small, but they are real. Anything which introduces extra dependencies into the boot process is something we really should be very careful about. I've seen a _lot_ of machines render themselves unbootable in my time, for a lot of reasons. :)

We should make the boot process as robust as possible. Of _course_ there's a real bug which needs to be (and in fact has been) fixed in LVM. These things happen. We _know_ these things happen. And we should make sure our system copes with that as well as possible.
Once upon a time, David Woodhouse dwmw2@infradead.org said:
> Of course LVM has its benefits. I'm certainly not suggesting that we should remove support for it from the installer. But what percentage of people actually _use_ the potential that it offers? I certainly don't.

A big benefit is that it would be easy to write a tool to move a Fedora installation using LVM from one drive to another (for example, if you buy a bigger drive). With LVM, this can be done on-line: hook up the new drive, partition it (/boot and LVM), copy /boot (and install GRUB), and pvmove the rest. Leave it running in the background, and when it is done, shut down and remove the old drive.
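[That on-line migration could be sketched as below. The device names /dev/sda2 and /dev/sdb2 and the volume group name VolGroup00 are assumptions for illustration, not details from this thread:]

```shell
# Old disk: /dev/sda (sda2 = LVM PV).  New, larger disk: /dev/sdb,
# already partitioned with sdb1 for /boot and sdb2 for LVM, and
# /boot copied over with GRUB installed.
pvcreate /dev/sdb2                # initialise the new partition as a PV
vgextend VolGroup00 /dev/sdb2     # add it to the existing volume group
pvmove /dev/sda2 /dev/sdb2        # migrate all extents off the old PV,
                                  # on-line, while filesystems stay mounted
vgreduce VolGroup00 /dev/sda2     # finally drop the old PV from the VG
```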
On Fri, 2006-04-28 at 03:14 +0100, David Woodhouse wrote:
> Of course LVM has its benefits. I'm certainly not suggesting that we should remove support for it from the installer. But what percentage of people actually _use_ the potential that it offers? I certainly don't.

I think this is a pretty bogus argument. People don't use the features it gives them -- namely, the ability to add space on another disk and have it be part of the same filesystem -- mostly because we provide no serious UI for doing so.

> What percentage of Fedora users even know that it's there?

Again, same problem.

> Most single-disk workstation installs don't seem to benefit from the fact that LVM gets enabled.

True -- but those that aren't laptops do actually have the potential to benefit from it, if our tools provided for that user experience.

>> Your argument of fragility is a strawman; just because you deem LVM the spawn of evil doesn't mean that this breakage is not a bug that should be fixed.

[meta removed]

> I certainly don't consider LVM to be 'the spawn of evil' -- I just think it's a poor _default_ because it makes the boot process more fragile, and I can't imagine that _many_ people actually get any benefit from it at all (unless of course they actually know and care about it, and would have enabled it anyway regardless of the default. So they don't benefit from the fact that it's the default.)

I don't think, unless you're tracking rawhide, that it actually makes booting more fragile if you've only got one disk. Ironically, the people whom you somewhat accurately point out don't benefit are also the people least likely to see any additional risk.

I say "somewhat accurately" on purpose, though -- there is a significant advantage to doing the same thing by default for most users. I can see the argument for not doing LVM by default on laptops, but honestly I think we gain *so* much more because of the testing this leads to that it's well worth it.

> The risks may be relatively small, but they are real. Anything which introduces extra dependencies in the boot process is something which we really should be very careful about. I've seen a _lot_ of machines render themselves unbootable in my time, for a lot of reasons. :)

This _is_ mostly a strawman. Quantify the risks, please, instead of just waving your hands.

> We should make the boot process as robust as possible. Of _course_ there's a real bug which needs to be (and in fact has been) fixed in LVM. These things happen. We _know_ these things happen. And we should make sure our system copes with that as well as possible.

Yeah, but you'd only see it if you're tracking rawhide. Woop-de-do, rawhide will break sometimes.
On 4/28/06, Peter Jones pjones@redhat.com wrote:
> Yeah, but you'd only see it if you're tracking rawhide. Woop-de-do, rawhide will break sometimes.

Clearly rawhide needs to break more often and more severely. I take the lack of breakage as evidence that the developers are in an innovation slump.

-jef
On Fri, 2006-04-28 at 10:40 -0400, Jeff Spaleta wrote:
> On 4/28/06, Peter Jones pjones@redhat.com wrote:
>> Yeah, but you'd only see it if you're tracking rawhide. Woop-de-do, rawhide will break sometimes.
>
> Clearly rawhide needs to break more often and more severely. I take the lack of breakage as evidence that the developers are in an innovation slump.

Hehe, you're glad I no longer work for RH... such a challenge would otherwise result in a, ehm, highly risky but interesting kernel build or so ;)
On 4/28/06, Arjan van de Ven arjan@fenrus.demon.nl wrote:
> Hehe, you're glad I no longer work for RH... such a challenge would otherwise result in a, ehm, highly risky but interesting kernel build or so ;)

Fine by me... I'm not running rawhide at the moment.

-jef
Arjan van de Ven arjan@fenrus.demon.nl wrote:
> On Fri, 2006-04-28 at 10:40 -0400, Jeff Spaleta wrote:
>> On 4/28/06, Peter Jones pjones@redhat.com wrote:
>>> Yeah, but you'd only see it if you're tracking rawhide. Woop-de-do, rawhide will break sometimes.
>>
>> Clearly rawhide needs to break more often and more severely. I take the lack of breakage as evidence that the developers are in an innovation slump.
>
> Hehe, you're glad I no longer work for RH... such a challenge would otherwise result in a, ehm, highly risky but interesting kernel build or so ;)

Got one of those right now (2.6.16-1.2171_FC6): it hangs after "Enabling local quotas" (or somewhere near that), on i386 but not on x86_64 (similar packages installed, also up-to-date rawhide).