Today was the day to install f20 on my Lenovo x120e. I had two drives to work with: an old 320 GB HD and a new 240 GB SSD.
I thought I would be smart and hibernate my system with the current drive (f17). I don't know why I did not just power off. Something major went wrong and now that drive will not boot up. So building the new drive became imperative.
Long story short, see Bugzilla reports 1006304, 1033749, and 1046482.
I am up and running with i386 on the 320 GB HD. I was using x86_64 with f17. At least I have an installation, and I can access my email and figure out how to get x86_64 on the SSD.
The problems writing the bootloader MAY be due to USB problems I have observed on this system. ~5 months ago, I was no longer able to write to USB sticks or SD cards (in the SD slot). I have been able to write to USB HD and USB CD and DVD drives. Or at least it seems so.
How can I determine if I really do have USB problems and have to buy new hardware?
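One rough way to check (a sketch only: the device path is a placeholder and must be verified with lsblk before pointing at real hardware) is to write a known pattern to the suspect device and read it back:

```shell
# Rough write/read-back check for a suspect USB stick or SD card.
# DEV is a PLACEHOLDER: point it at the real device (verify with lsblk!),
# or leave the default to dry-run against a plain file.
DEV=${DEV:-/tmp/fake-stick.img}
SIZE=$((4 * 1024 * 1024))   # 4 MiB test pattern

dd if=/dev/urandom of=/tmp/pattern.bin bs=1M count=4 2>/dev/null
dd if=/tmp/pattern.bin of="$DEV" bs=1M conv=fsync 2>/dev/null
# For a real device, eject and re-insert it before the comparison so the
# read comes from the hardware rather than the page cache.
cmp -n "$SIZE" /tmp/pattern.bin "$DEV" && echo "write/read-back OK"
```

If the read-back differs, or dmesg fills with USB I/O errors during the write, the hardware (or the port/controller) is the likely suspect rather than the installer.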
With all the attempts to install, I spent a lot of time in the installer, and I have a couple of small strange behaviours to report:
In setting the location from New York to Detroit, if I did not get Detroit the first time, or even went back to try and select something else, I lost the down arrow to scroll beyond the 'a's. I had to type Detroit into the location bar. This was consistent behaviour; I did this step enough times to be sure.
If I selected my local repo, Updates was no longer an option. Regardless of whether I used the DVD install (i386 and x86_64) or the netinstall (only tried x86_64), this consistently became greyed out. I could not provide the URL for where I have my local updates repo. It did not matter if I added repo=url to the boot line or did it in the GUI; the moment I selected my own http URL, I lost updates.
As far as adding repo= to the boot line goes, i386 and x86_64 work differently! But I suspect you know that: Tab with i386 and 'e' with x86_64.
Sometimes when I selected LVM for the partitioning, the LVM partition name would be 'fedora_19'. I left it like that in this install that finally worked and here is what df has to say for itself:
$ df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/fedora_19-root   29G  4.8G   23G  18% /
devtmpfs                    1.3G     0  1.3G   0% /dev
tmpfs                       1.3G  164K  1.3G   1% /dev/shm
tmpfs                       1.3G 1016K  1.3G   1% /run
tmpfs                       1.3G     0  1.3G   0% /sys/fs/cgroup
tmpfs                       1.3G   44K  1.3G   1% /tmp
/dev/sda1                   477M   96M  352M  22% /boot
/dev/mapper/fedora_19-home  257G   32G  212G  14% /home
Sometimes it used the host name. It depended on which steps I took to get to setting up the LVM partition.
In summary of what is important:
Why have my installs to the SSD failed (writing the bootloader)? Why can I not provide my updates URL?
Thank you.
On Wed, 2013-12-25 at 18:11 -0500, Robert Moskowitz wrote:
In setting the location from New York to Detroit, if I did not get Detroit the first time, or even went back to try and select something else, I lost the down arrow to scroll beyond the 'a's. I had to type Detroit into the location bar. This was consistent behaviour; I did this step enough times to be sure.
That dropdown is known to be a bit wacky; apparently it's a bug in GTK+ the anaconda devs can't do much about. Seems like it's been waiting a long time to get fixed.
If I selected my local repo, Updates was no longer an option. Regardless of whether I used the DVD install (i386 and x86_64) or the netinstall (only tried x86_64), this consistently became greyed out. I could not provide the URL for where I have my local updates repo. It did not matter if I added repo=url to the boot line or did it in the GUI; the moment I selected my own http URL, I lost updates.
With the interactive install, I believe anaconda doesn't expect you to pass multiple repos; it's expecting either a (single) actual yum package repository, or a mirror tree with a .treeinfo file specifying the location of the standard repo set. With a kickstart you can specify multiple separate repos, I think. But I haven't really poked into this behaviour much since newUI.
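As a hedged illustration (the URLs below are placeholders, not real mirrors), a kickstart can name one installation source plus extra repos explicitly:

```
# Hypothetical kickstart fragment: one install source, one extra local repo.
# Both URLs are placeholders for your own mirror layout.
url --url=http://mirror.example.com/fedora/linux/releases/20/Fedora/x86_64/os/
repo --name=local-updates --baseurl=http://mirror.example.com/fedora/linux/updates/20/x86_64/
```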
As far as adding repo= to the boot line goes, i386 and x86_64 work differently! But I suspect you know that: Tab with i386 and 'e' with x86_64.
Um. What? Oh. The boot menu. I think you more likely saw live vs. non-live rather than x86_64 vs. i386. They're built a bit differently. But I can't say I've bothered looking into it that closely either. ctrl-X vs. F10 to actually boot once you've edited it is similar...sometimes one works, sometimes the other, sometimes both. I just file it under 'not sufficiently serious that I have time for it' at present, sadly.
Sometimes when I selected LVM for the partitioning, the LVM partition name would be 'fedora_19'. I left it like that in this install that finally worked and here is what df has to say for itself:
$ df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/fedora_19-root   29G  4.8G   23G  18% /
devtmpfs                    1.3G     0  1.3G   0% /dev
tmpfs                       1.3G  164K  1.3G   1% /dev/shm
tmpfs                       1.3G 1016K  1.3G   1% /run
tmpfs                       1.3G     0  1.3G   0% /sys/fs/cgroup
tmpfs                       1.3G   44K  1.3G   1% /tmp
/dev/sda1                   477M   96M  352M  22% /boot
/dev/mapper/fedora_19-home  257G   32G  212G  14% /home
Sometimes it used the host name. It depended on which steps I took to get to setting up the LVM partition.
It's probably re-using the existing VG rather than blowing it away and creating a new one, the '19' rather suggests that.
In summary of what is important:
Why have my installs to the SSD failed (writing the bootloader)?
It is absolutely impossible to tell without at least program.log. It's kind of annoying that all cases of bootloader install failing on UEFI install keep getting written off as dupes of 1006304, because when they're marked as dupes we don't get logs, and there's just no way to know what's going on. I've posted a couple of comments asking if anaconda and libreport devs can figure that out, but nothing doing yet.
If you have a copy of program.log (and ideally all the other logs...) from one of those failures, we could try and figure it out.
Thanks for responding. Trimmed down to things to reply to...
On 12/27/2013 02:59 AM, Adam Williamson wrote:
On Wed, 2013-12-25 at 18:11 -0500, Robert Moskowitz wrote:
If I selected my local repo, Updates was no longer an option. Regardless of whether I used the DVD install (i386 and x86_64) or the netinstall (only tried x86_64), this consistently became greyed out. I could not provide the URL for where I have my local updates repo. It did not matter if I added repo=url to the boot line or did it in the GUI; the moment I selected my own http URL, I lost updates.
With the interactive install, I believe anaconda doesn't expect you to pass multiple repos; it's expecting either a (single) actual yum package repository, or a mirror tree with a .treeinfo file specifying the location of the standard repo set.
First, is there a way to specify the updates repo on the boot line? There was (I believe) back on F17. Or maybe I am just suffering from a senior moment.
Where are there instructions for making a .treeinfo file and can I specify it in the repo=url boot parameter?
With a kickstart you can specify multiple separate repos, I think. But I haven't really poked into this behaviour much since newUI.
My 'practice' is to first get an install working, then use the anaconda-ks.cfg to build a kickstart file. So here I am, not getting to 1st base. I almost discourage you from poking around. I feel it is a real dumbing down, and very saddening. Particularly that I cannot customize which apps and groups to install or not. Just a large general category.
As far as adding repo= to the boot line goes, i386 and x86_64 work differently! But I suspect you know that: Tab with i386 and 'e' with x86_64.
Um. What? Oh. The boot menu. I think you more likely saw live vs. non-live rather than x86_64 vs. i386. They're built a bit differently. But I can't say I've bothered looking into it that closely either. ctrl-X vs. F10 to actually boot once you've edited it is similar...sometimes one works, sometimes the other, sometimes both. I just file it under 'not sufficiently serious that I have time for it' at present, sadly.
No, I absolutely downloaded the full DVD and netinstall ISOs (I use wget, so I know the exact URL I download). I never touched the live ISOs.
But like you said: 'not sufficiently serious'; once I understood what was happening, I worked with the hand dealt.
Sometimes when I selected LVM for the partitioning, the LVM partition name would be 'fedora_19'. I left it like that in this install that finally worked and here is what df has to say for itself:
$ df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/fedora_19-root   29G  4.8G   23G  18% /
devtmpfs                    1.3G     0  1.3G   0% /dev
tmpfs                       1.3G  164K  1.3G   1% /dev/shm
tmpfs                       1.3G 1016K  1.3G   1% /run
tmpfs                       1.3G     0  1.3G   0% /sys/fs/cgroup
tmpfs                       1.3G   44K  1.3G   1% /tmp
/dev/sda1                   477M   96M  352M  22% /boot
/dev/mapper/fedora_19-home  257G   32G  212G  14% /home
Sometimes it used the host name. It depended on which steps I took to get to setting up the LVM partition.
It's probably re-using the existing VG rather than blowing it away and creating a new one, the '19' rather suggests that.
On a new, empty SSD drive from Crucial, or an old drive that had f17 on it? I never installed f19 here. I think the disk druid developers need to search their code for some label they did not change.
In summary of what is important:
Why have my installs to the SSD failed (writing the bootloader)?
It is absolutely impossible to tell without at least program.log. It's kind of annoying that all cases of bootloader install failing on UEFI install keep getting written off as dupes of 1006304, because when they're marked as dupes we don't get logs, and there's just no way to know what's going on. I've posted a couple of comments asking if anaconda and libreport devs can figure that out, but nothing doing yet.
If you have a copy of program.log (and ideally all the other logs...) from one of those failures, we could try and figure it out.
Instructions on generating (or capturing) the logs? I would be glad to try. Though I am thinking of building a netinstall USB stick and seeing if there are timing problems with the USB DVD drive. The observed symptom is that the drive starts up (it had been off during the post-install process) and a spinning wait object appears on the screen. After some time (a minute or so?), I get the error.
I really want to be running on the SSD before my next bit of travel (IEEE 802 meeting, Jan 19, in LA).
Thanks for your time.
On Fri, 2013-12-27 at 08:42 -0500, Robert Moskowitz wrote:
Thanks for responding. Trimmed down to things to reply to...
On 12/27/2013 02:59 AM, Adam Williamson wrote:
On Wed, 2013-12-25 at 18:11 -0500, Robert Moskowitz wrote:
If I selected my local repo, Updates was no longer an option. Regardless of whether I used the DVD install (i386 and x86_64) or the netinstall (only tried x86_64), this consistently became greyed out. I could not provide the URL for where I have my local updates repo. It did not matter if I added repo=url to the boot line or did it in the GUI; the moment I selected my own http URL, I lost updates.
With the interactive install, I believe anaconda doesn't expect you to pass multiple repos; it's expecting either a (single) actual yum package repository, or a mirror tree with a .treeinfo file specifying the location of the standard repo set.
First, is there a way to specify the updates repo on the boot line? There was (I believe) back on F17. Or maybe I am just suffering from a senior moment.
I don't know, it's not something I do myself and I haven't had occasion to investigate it. The parameters themselves haven't really changed much since F17 (they changed a lot since F15 or F16, whenever we did 'noloader') but anaconda's behaviour wrt repos did change a bit with newUI.
Where are there instructions for making a .treeinfo file and can I specify it in the repo=url boot parameter?
Again, sorry, I don't know, not anything I've needed to know how to do. It's part of making a Fedora mirror tree, releng would know if it's something you can do. You don't specify a treeinfo file exactly, but if the directory you point to with repo= contains one, anaconda will use it, AIUI.
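For reference, pointing the installer at the top of an os/ tree (the directory that contains the .treeinfo) on the boot line looks something like this; the mirror host below is a placeholder:

```
repo=http://mirror.example.com/fedora/linux/releases/20/Fedora/x86_64/os/
```

If that directory carries a .treeinfo, anaconda should pick it up from there, as described above.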
Now that I go and look at an actual treeinfo file, it may not even be the important thing:
https://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/....
I just know that when you point anaconda at an actual, properly-laid-out Fedora mirror tree, it finds both the 'stable' and 'update' repos. I'd have to go poking at the code to remember precisely how.
With a kickstart you can specify multiple separate repos, I think. But I haven't really poked into this behaviour much since newUI.
My 'practice' is to first get an install working, then use the anaconda-ks.cfg to build a kickstart file. So here I am, not getting to 1st base.
Well, aside from the other bugs, the obvious thing would be to do your base install without multiple repos and then add them once you get to working on your kickstart.
I almost discourage you from poking around. I feel it is a real dumbing down, and very saddening. Particularly that I cannot customize which apps and groups to install or not. Just a large general category.
There's generally a reason for dumbing things down, and here the reason is simple: complexity in repos and package selection is a heavy maintenance burden for a small reward, since the majority of people install from the media or the official mirrors and are perfectly okay with customizing the package set post-install or in a kickstart. There really aren't very many cases where it's significantly useful to be able to customize the package set on a per-package basis, interactively, during installation, and to back that fairly fringe feature, the anaconda devs had to maintain an entire packaging GUI inside the installer, which was a lot of work (and a potential source of lots of bugs). As I said, I don't know the full details of how multiple repos work or don't work interactively vs. cmdline vs. kickstart, but any 'simplification' in this area likely has a similar explanation.
Sometimes when I selected LVM for the partitioning, the LVM partition name would be 'fedora_19'. I left it like that in this install that finally worked and here is what df has to say for itself:
$ df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/fedora_19-root   29G  4.8G   23G  18% /
devtmpfs                    1.3G     0  1.3G   0% /dev
tmpfs                       1.3G  164K  1.3G   1% /dev/shm
tmpfs                       1.3G 1016K  1.3G   1% /run
tmpfs                       1.3G     0  1.3G   0% /sys/fs/cgroup
tmpfs                       1.3G   44K  1.3G   1% /tmp
/dev/sda1                   477M   96M  352M  22% /boot
/dev/mapper/fedora_19-home  257G   32G  212G  14% /home
Sometimes it used the host name. It depended on which steps I took to get to setting up the LVM partition.
It's probably re-using the existing VG rather than blowing it away and creating a new one, the '19' rather suggests that.
On a new, empty SSD drive from Crucial, or an old drive that had f17 on it? I never installed f19 here. I think the disk druid developers need to search their code for some label they did not change.
If there is one, it's not particularly obvious...
[adamw@adam anaconda (f20-branch %)]$ grep -R fedora_19 *
[adamw@adam anaconda (f20-branch %)]$ grep -R vgname *
pyanaconda/kickstart.py: vgname = ksdata.onPart.get(self.vgname, self.vgname)
pyanaconda/kickstart.py: vg = devicetree.getDeviceByName(vgname)
pyanaconda/kickstart.py: raise KickstartValueError(formatErrorMsg(self.lineno, msg="No volume group exists with the name "%s". Specify volume groups before logical volumes." % self.vgname))
pyanaconda/kickstart.py: if not self.vgname:
pyanaconda/kickstart.py: dev = devicetree.getDeviceByName(self.vgname)
pyanaconda/kickstart.py: raise KickstartValueError(formatErrorMsg(self.lineno, msg="No preexisting VG with the name "%s" was found." % self.vgname))
pyanaconda/kickstart.py: elif self.vgname in (vg.name for vg in storage.vgs):
pyanaconda/kickstart.py: raise KickstartValueError(formatErrorMsg(self.lineno, msg="The volume group name "%s" is already in use." % self.vgname))
pyanaconda/kickstart.py: name=self.vgname,
pyanaconda/kickstart.py: ksdata.onPart[self.vgname] = request.name
[adamw@adam anaconda (f20-branch %)]$ grep -R request.name *
pyanaconda/kickstart.py: ksdata.onPart[self.vgname] = request.name
probably in there somewhere, but meh. Again, doesn't seem burning-down-the-house serious, but worth filing a bug on, sure.
In summary of what is important:
Why have my installs to the SSD failed (writing the bootloader)?
It is absolutely impossible to tell without at least program.log. It's kind of annoying that all cases of bootloader install failing on UEFI install keep getting written off as dupes of 1006304, because when they're marked as dupes we don't get logs, and there's just no way to know what's going on. I've posted a couple of comments asking if anaconda and libreport devs can figure that out, but nothing doing yet.
If you have a copy of program.log (and ideally all the other logs...) from one of those failures, we could try and figure it out.
Instructions on generating (or capturing) the logs?
They're in /tmp while the installation is running, you can scp or fpaste or copy-to-a-USB-stick them out from there.
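For instance (a sketch only: the log names are the usual anaconda ones, and the scp target is a placeholder), from a shell on another tty you could bundle them up in one go:

```shell
# Bundle the installer logs (they live in /tmp while anaconda runs)
# into one tarball for scp'ing out or copying to a USB stick.
LOGDIR=${LOGDIR:-/tmp}          # where anaconda keeps its logs
BUNDLE=/tmp/install-logs
mkdir -p "$BUNDLE"
for f in anaconda.log program.log storage.log packaging.log syslog; do
    if [ -f "$LOGDIR/$f" ]; then
        cp "$LOGDIR/$f" "$BUNDLE/"
    fi
done
tar czf /tmp/install-logs.tar.gz -C /tmp install-logs
# Then e.g.: scp /tmp/install-logs.tar.gz user@yourhost:   (host is a placeholder)
```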
I would be glad to try. Though I am thinking of building a netinstall USB stick and seeing if there are timing problems with the USB DVD drive. The observed symptom is that the drive starts up (it had been off during the post-install process) and a spinning wait object appears on the screen. After some time (a minute or so?), I get the error.
It's worth trying, but it could well just be an issue with your UEFI firmware in some way. Again, impossible to tell without the logs, unfortunately (all the anaconda error message tells us is that EFI bootloader configuration failed; it has zero indication of why, that's always in the output from efibootmgr in program.log).
Trimmed to latest attempt.
On 12/27/2013 01:54 PM, Adam Williamson wrote:
On Fri, 2013-12-27 at 08:42 -0500, Robert Moskowitz wrote:
Thanks for responding. Trimmed down to things to reply to...
On 12/27/2013 02:59 AM, Adam Williamson wrote:
Sometimes when I selected LVM for the partitioning, the LVM partition name would be 'fedora_19'. I left it like that in this install that finally worked and here is what df has to say for itself:
$ df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/fedora_19-root   29G  4.8G   23G  18% /
devtmpfs                    1.3G     0  1.3G   0% /dev
tmpfs                       1.3G  164K  1.3G   1% /dev/shm
tmpfs                       1.3G 1016K  1.3G   1% /run
tmpfs                       1.3G     0  1.3G   0% /sys/fs/cgroup
tmpfs                       1.3G   44K  1.3G   1% /tmp
/dev/sda1                   477M   96M  352M  22% /boot
/dev/mapper/fedora_19-home  257G   32G  212G  14% /home
Sometimes it used the host name. It depended on which steps I took to get to setting up the LVM partition.
It's probably re-using the existing VG rather than blowing it away and creating a new one, the '19' rather suggests that.
On a new, empty SSD drive from Crucial, or an old drive that had f17 on it? I never installed f19 here. I think the disk druid developers need to search their code for some label they did not change.
If there is one, it's not particularly obvious...
[adamw@adam anaconda (f20-branch %)]$ grep -R fedora_19 *
[adamw@adam anaconda (f20-branch %)]$ grep -R vgname *
pyanaconda/kickstart.py: vgname = ksdata.onPart.get(self.vgname, self.vgname)
pyanaconda/kickstart.py: vg = devicetree.getDeviceByName(vgname)
pyanaconda/kickstart.py: raise KickstartValueError(formatErrorMsg(self.lineno, msg="No volume group exists with the name "%s". Specify volume groups before logical volumes." % self.vgname))
pyanaconda/kickstart.py: if not self.vgname:
pyanaconda/kickstart.py: dev = devicetree.getDeviceByName(self.vgname)
pyanaconda/kickstart.py: raise KickstartValueError(formatErrorMsg(self.lineno, msg="No preexisting VG with the name "%s" was found." % self.vgname))
pyanaconda/kickstart.py: elif self.vgname in (vg.name for vg in storage.vgs):
pyanaconda/kickstart.py: raise KickstartValueError(formatErrorMsg(self.lineno, msg="The volume group name "%s" is already in use." % self.vgname))
pyanaconda/kickstart.py: name=self.vgname,
pyanaconda/kickstart.py: ksdata.onPart[self.vgname] = request.name
[adamw@adam anaconda (f20-branch %)]$ grep -R request.name *
pyanaconda/kickstart.py: ksdata.onPart[self.vgname] = request.name
probably in there somewhere, but meh. Again, doesn't seem burning-down-the-house serious, but worth filing a bug on, sure.
Probably built from a couple of variables, but not important, as you said. Until it shows up again in f21!
But on to the real problem.
In summary of what is important:
Why have my installs to the SSD failed (writing the bootloader)?
It is absolutely impossible to tell without at least program.log. It's kind of annoying that all cases of bootloader install failing on UEFI install keep getting written off as dupes of 1006304, because when they're marked as dupes we don't get logs, and there's just no way to know what's going on. I've posted a couple of comments asking if anaconda and libreport devs can figure that out, but nothing doing yet.
If you have a copy of program.log (and ideally all the other logs...) from one of those failures, we could try and figure it out.
Instructions on generating (or capturing) the logs?
They're in /tmp while the installation is running, you can scp or fpaste or copy-to-a-USB-stick them out from there.
You will find all the contents of /tmp from the latest bug 1006304 at:
http://medon.htt-consult.com/~rgm/logs/
Let me know when you have them and I will take them down.
I would be glad to try. Though I am thinking of building a netinstall USB stick and seeing if there are timing problems with the USB DVD drive. The observed symptom is that the drive starts up (it had been off during the post-install process) and a spinning wait object appears on the screen. After some time (a minute or so?), I get the error.
It's worth trying, but it could well just be an issue with your UEFI firmware in some way. Again, impossible to tell without the logs, unfortunately (all the anaconda error message tells us is that EFI bootloader configuration failed; it has zero indication of why, that's always in the output from efibootmgr in program.log).
On Tue, 2013-12-31 at 12:49 -0500, Robert Moskowitz wrote:
If you have a copy of program.log (and ideally all the other logs...) from one of those failures, we could try and figure it out.
Instructions on generating (or capturing) the logs?
They're in /tmp while the installation is running, you can scp or fpaste or copy-to-a-USB-stick them out from there.
You will find all the contents of /tmp from the latest bug 1006304 at:
http://medon.htt-consult.com/~rgm/logs/
Let me know when you have them and I will take them down.
It would be best to attach them to the bug, or rather file a new bug and attach them to that, so that the files will always be available for anyone looking at the report. Thanks!
On 12/31/2013 01:14 PM, Adam Williamson wrote:
On Tue, 2013-12-31 at 12:49 -0500, Robert Moskowitz wrote:
If you have a copy of program.log (and ideally all the other logs...) from one of those failures, we could try and figure it out.
Instructions on generating (or capturing) the logs?
They're in /tmp while the installation is running, you can scp or fpaste or copy-to-a-USB-stick them out from there.
You will find all the contents of /tmp from the latest bug 1006304 at:
http://medon.htt-consult.com/~rgm/logs/
Let me know when you have them and I will take them down.
It would be best to attach them to the bug, or rather file a new bug and attach them to that, so that the files will always be available for anyone looking at the report. Thanks!
All the files or which files in particular?
anaconda.log        ifcfg.log      sensitive-info.log  syslog
anaconda-tb-pc7tmF  packaging.log  storage.log         X.log
anaconda-yum.conf   program.log    storage.state
On Tue, 2013-12-31 at 14:13 -0500, Robert Moskowitz wrote:
On 12/31/2013 01:14 PM, Adam Williamson wrote:
On Tue, 2013-12-31 at 12:49 -0500, Robert Moskowitz wrote:
If you have a copy of program.log (and ideally all the other logs...) from one of those failures, we could try and figure it out.
Instructions on generating (or capturing) the logs?
They're in /tmp while the installation is running, you can scp or fpaste or copy-to-a-USB-stick them out from there.
You will find all the contents of /tmp from the latest bug 1006304 at:
http://medon.htt-consult.com/~rgm/logs/
Let me know when you have them and I will take them down.
It would be best to attach them to the bug, or rather file a new bug and attach them to that, so that the files will always be available for anyone looking at the report. Thanks!
All the files or which files in particular?
anaconda.log        ifcfg.log      sensitive-info.log  syslog
anaconda-tb-pc7tmF  packaging.log  storage.log         X.log
anaconda-yum.conf   program.log    storage.state
It's usually best to include as many as possible, but in this case, the significant ones are likely the anaconda-tb file, anaconda.log, syslog, storage.log, and program.log. Thanks!
On 12/31/2013 02:50 PM, Adam Williamson wrote:
On Tue, 2013-12-31 at 14:13 -0500, Robert Moskowitz wrote:
On 12/31/2013 01:14 PM, Adam Williamson wrote:
On Tue, 2013-12-31 at 12:49 -0500, Robert Moskowitz wrote:
If you have a copy of program.log (and ideally all the other logs...) from one of those failures, we could try and figure it out.
Instructions on generating (or capturing) the logs?
They're in /tmp while the installation is running, you can scp or fpaste or copy-to-a-USB-stick them out from there.
You will find all the contents of /tmp from the latest bug 1006304 at:
http://medon.htt-consult.com/~rgm/logs/
Let me know when you have them and I will take them down.
It would be best to attach them to the bug, or rather file a new bug and attach them to that, so that the files will always be available for anyone looking at the report. Thanks!
All the files or which files in particular?
anaconda.log        ifcfg.log      sensitive-info.log  syslog
anaconda-tb-pc7tmF  packaging.log  storage.log         X.log
anaconda-yum.conf   program.log    storage.state
It's usually best to include as many as possible, but in this case, the significant ones are likely the anaconda-tb file, anaconda.log, syslog, storage.log, and program.log. Thanks!
You got them. Let me know if anything else is needed to get this working. It takes ~2 hours to do a test.
On Dec 31, 2013, at 1:25 PM, Robert Moskowitz rgm@htt-consult.com wrote:
You got them. Let me know if anything else is needed to get this working. It takes ~2 hours to do a test.
I don't see any kernel configs for EFI that aren't already enabled in the config. And there isn't a debug option for CONFIG_EFI_VARS, so I don't know how we get more info when the kernel fails to make NVRAM changes.
Macs have had a long history of using PRAM/NVRAM with a ritualistic keyboard command on boot used to perform the voodoo incantation of "zapping the PRAM" to clear all entries. Since HFS+ has a volume header that includes a pointer to the bootloader, that is used as a fallback when NVRAM is empty.
If Fedora is the only installed system, we have BOOTX64.efi and fallback.efi which possibly sort out what happens if NVRAM is cleared. I don't know about multiboot with Windows or other Linux distros, whether Microsoft puts a BOOTX64.efi there, and whether Fedora replaces it.
Chris Murphy
On 12/31/2013 04:46 PM, Chris Murphy wrote:
On Dec 31, 2013, at 1:25 PM, Robert Moskowitz rgm@htt-consult.com wrote:
You got them. let me know if anything else is needed to get this working. It takes ~ 2hrs to do a test.
I don't see any kernel configs for EFI that aren't already enabled in the config. And there isn't a debug option for CONFIG_EFI_VARS, so I don't know how we get more info when the kernel fails to make NVRAM changes.
Macs have had a long history of using PRAM/NVRAM with a ritualistic keyboard command on boot used to perform the voodoo incantation of "zapping the PRAM" to clear all entries. Since HFS+ has a volume header that includes a pointer to the bootloader, that is used as a fallback when NVRAM is empty.
If Fedora is the only installed system, we have BOOTX64.efi and fallback.efi which possibly sort out what happens if NVRAM is cleared. I don't know about multiboot with Windows or other Linux distros, whether Microsoft puts a BOOTX64.efi there, and whether Fedora replaces it.
This is a Lenovo x120e dual-core that I have had for a year; reconditioned. Nothing on my drives except Fedora. Had f17 x86_64 previously; jumping a few releases. There MAY be some hardware problems to shake out. I was having problems writing to USB sticks before, but those have gone away since getting f20 i386 working. The speaker is a problem: it starts working at boot, but soon stops. I added the .asoundrc file, but that did not seem to make a difference.