RISC-V -- are we ready for more, and what do we need to do it?
by Matthew Miller
Hi all! I just got back from Open Source Summit. Several of the talks I
found interesting were on RISC-V -- a high-level one about the
organizational structure, and Drew Fustini's more technical talk.
In that talk, he noted that there's a Fedora build *, but it isn't an
official Fedora arch. As I understand it, the major infrastructure
blocker is simply that there isn't server-class hardware (let alone
hardware that can build fast enough not to be a frustrating
bottleneck).
So, one question is: if we used, say, ARM or x86_64 Amazon cloud
instances as builders, could we build fast enough under QEMU emulation
for that to work? We have a nice early advantage, but if we don't keep
moving, we'll lose it.
But beyond that: what else might be a limiting factor? Are there key
bits of the distro which don't build yet? Is there a big enough RISC-V
team to respond to arch-specific build failures? And do we have enough
people to do QA around release time?
* see http://fedora.riscv.rocks/koji/
--
Matthew Miller
<mattdm(a)fedoraproject.org>
Fedora Project Leader
Self Introduction: Andreas Vögele
by Andreas Vögele
Hello Fedora developers,
I'm Andreas from Stuttgart in Germany. I'm a system administrator and
software developer who moved his computers to Fedora about a year ago.
I've written a handful of Perl modules that I package at the Open Build
Service and Copr. I'd like to maintain some of these modules directly in
Fedora. In the past, I maintained ports of other software at
SlackBuilds.org and OpenBSD. Occasionally, I contribute patches to free
software projects. I enjoy programming in C, Perl and recently Kotlin.
Kind regards,
Andreas
memtest86plus v6.00
by Richard W.M. Jones
Earlier discussion:
https://www.mail-archive.com/devel@lists.fedoraproject.org/msg169800.html
The current memtest86+ 5.x requires legacy (non-UEFI) boot, which makes
it increasingly irrelevant on modern hardware. memtest86 forked into a
proprietary product some time ago. However, there is hope: upstream
memtest86+ 6.00 is (a) open source and (b) seems to work, despite the
large warnings on the website:
https://memtest.org/
Note that this new version is derived from pcmemtest (mentioned in the
thread above), which is only indirectly derived from memtest86+ 5.x and
removes some features.
So my question is: are we planning to move to v6.00 in the future?
I did attempt to build a Fedora RPM, but it basically involves
removing large sections of the existing RPM (e.g. the downstream script
we add seems unnecessary now, and the downstream README would need to
be completely rewritten). It's probably only necessary to install
memtest.efi as /boot/memtest.efi; although it won't appear
automatically in the grub menu, it can be accessed with a trivial
two-line command.
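For illustration, that two-line command would presumably be something
like the following at the GRUB prompt (an untested sketch; the exact
path is an assumption and depends on whether /boot is a separate
partition):

    grub> chainloader /memtest.efi
    grub> boot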
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
DNF System Upgrade requirements for an F37 → F38 upgrade
by Alexander Ploumistos
Hello,
TL;DR:
DNF memory usage during upgrades from F37 to F38 on a couple of Fedora
Cloud images (with 2 GB of RAM each) led to oomd kicking in and
killing the upgrade process. It might be worth looking into before the
final (also beta?) release.
The longer story:
I have a couple of hosted VMs, which started their lives as Fedora
Cloud 27 and which I have been upgrading to each new Fedora release
via dnf-plugin-system-upgrade. These VMs run very few things: mainly
some PHP applications on Apache and a good chunk of the PHP stack,
some scheduled network scripts, a private rpm repository, and OpenVPN
and WireGuard nodes; when I am away from my main computer and koji or
copr happen to be busy, I also test things in mock on them. These VMs
are near clones of each other, so usually only one of them is up at
any moment.
I've never had an issue with memory use until now, and when the
systems aren't building packages, they seldom require more than 200 MB
of RAM. What's more, there's the default swap-on-zram, so they should
be more than happy memory-wise with 2 GB of RAM.
I tried to upgrade one VM to F38 to see what problems I might face,
and after the packages had been downloaded, I lost the ssh connection
while DNF was running the transaction test, right after a successful
transaction check. I repeated the whole thing a couple more times,
until I remembered that oomd is there, and sure enough, DNF had been
consuming 1-1.3 GB of RAM until oomd decided to kill it. Well,
actually it killed my entire session and not just DNF. I tried running
the upgrade inside a screen (the GNU one) session and came back to an
empty session. The transaction involved the upgrade of 1620 packages,
with a total size of 1.2 GB. I think the difference between that and
the previous upgrade (F36 to F37) is around 100 MB and 30 or 40
packages. In each of the failed attempts, the memory pressure was 55
to 78% for more than 20 seconds, according to systemd-oomd. The exact
same issues appeared when upgrading the second VM.
Eventually I did the upgrade by doubling the available RAM to 4 GB
(and then setting it back to 2), but perhaps this could be a problem
for others, especially those who pay for cloud-based resources. Is DNF
supposed to be a legitimate target for systemd-oomd? And conversely,
is DNF supposed to use up so much memory?
Conflicting build-ids in nekovm and haxe
by Andy Li
Hi list,
Re. https://bugzilla.redhat.com/show_bug.cgi?id=1896901
Since haxe-4.1.3-4 and nekovm-2.3.0-4, both the nekovm and haxe
packages contain
"/usr/lib/.build-id/b0/aed4ddf2d45372bcc79d5e95d2834f5045c09c".
The nekovm one is a symlink to "/usr/bin/neko"; the haxe one points to
"/usr/bin/haxelib".
Both the neko and haxelib binaries are built with libneko, from a
nearly identical main.c; the only difference is the presence of neko
bytecode embedded as a byte array (neko: the byte array is null;
haxelib: the byte array is the haxelib neko bytecode).
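One heavy-handed workaround I can think of (an assumption on my part,
not something I have tested with these packages) would be to suppress
generation of the .build-id symlinks in one of the two spec files:

    # disable /usr/lib/.build-id/* symlink generation for this package;
    # the binaries keep their build-id notes, only the symlinks go away
    %global _build_id_links none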
I'm not sure whether that (or something else) is the right way to
resolve this. Please advise.
Best regards,
Andy
nodejs broken?
by Iñaki Ucar
Hi,
RStudio (which depends on nodejs-devel) has been FTBFS since the
latest nodejs update. I see this in F37 and F38:
Error: Transaction test error:
file /usr/lib64/libv8.so.10 conflicts between attempted installs of
nodejs-libs-1:18.14.2-5.fc37.x86_64 and
nodejs20-libs-1:19.7.0-13.fc37.x86_64
file /usr/lib64/libv8_libbase.so.10 conflicts between attempted
installs of nodejs-libs-1:18.14.2-5.fc37.x86_64 and
nodejs20-libs-1:19.7.0-13.fc37.x86_64
file /usr/lib64/libv8_libplatform.so.10 conflicts between attempted
installs of nodejs-libs-1:18.14.2-5.fc37.x86_64 and
nodejs20-libs-1:19.7.0-13.fc37.x86_64
Best,
--
Iñaki Úcar
Firecracker microVM manager
by David Michael
Hi,
Firecracker[0] is a minimal virtual machine manager (a la QEMU)
written in Rust that uses KVM to start Linux VMs extremely quickly and
securely. It is used by AWS Lambda and Fargate, among other things, to
make VM startup times comparable to containers. I've built it for
Fedora x86_64 and shared it in a Copr repository[1], which includes
some example commands for starting VMs.
Making it build for Fedora required changes across a few components,
so I'm writing to ask if this is acceptable for inclusion in Fedora.
The Copr specs are all dumped in a Git repository[2] for readability.
Changes include:
- The musl package adds /usr paths for compatibility with the
compiler --sysroot option.
- The rust compiler adds musl target subpackages.
- The kernel must set CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y to be
usable as a guest.
- About two dozen Rust crates must be added to Fedora (but a handful
are just new versions of existing packages).
- Unrelated, but in the Copr repo anyway: The musl package is fixed
to allow multilib installs, and Rust includes both 32- and 64-bit
targets.
I used upstream-preferred settings when adding things, but they may be
in conflict with Fedora guidelines. Here are some concerns:
- Firecracker can be built with Fedora's libc (glibc), but that
configuration is officially unsupported upstream[3], and functionality
would be reduced by not using musl: e.g., seccomp filters are not
used.
- Upstream Rust wants musl targets to be statically linked by
default[4]. That default can be changed by patching (Gentoo does this)
if dynamic linking is still a priority for Rust binaries, but I
haven't tested that (see the sketch after this list for a per-build
alternative).
- Firecracker uses two crates forked from crates.io, but they are
not vendored/bundled nor published to a registry. I'm currently
manually bundling them as if they were vendored to avoid package name
conflicts since nothing else uses them, but I don't know the preferred
way to deal with those.
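As a per-build alternative to patching the compiler, the static-CRT
default for musl targets can be overridden with standard rustc flags
(a sketch; I haven't verified this against Firecracker itself):

    # opt a single build out of the static CRT default for musl targets
    RUSTFLAGS="-C target-feature=-crt-static" \
        cargo build --release --target x86_64-unknown-linux-musl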
So does any of that sound like a showstopper for being included in
Fedora? Is there any other interest in the project from the
community?
Thanks.
David
[0] https://firecracker-microvm.github.io/
[1] https://copr.fedorainfracloud.org/coprs/dm0/Firecracker
[2] https://github.com/dm0-/copr-firecracker
[3] https://github.com/firecracker-microvm/firecracker/blob/v1.3.0/tools/rele...
[4] https://github.com/rust-lang/rust/blob/1.67.1/compiler/rustc_target/src/s...
RFC: No koji builds during mass branching and updates-testing enablement
by Fabio Valentini
Hi all,
As a follow-up from a recent discussion on Matrix/IRC, I'm proposing
the following change to the development cycle / release schedule:
"Koji builds are blocked while mass branching and updates-testing
enablement are in progress."
That's it, that's the entire RFC.
Roughly every six months, I run a check for updates that are present
in the current "stable" release but missing from "branched", and every
six months there's a non-negligible number of builds and/or bodhi
updates that get stuck in a void because they just happened to have
been submitted at the worst possible moment.
In my opinion, the benefits of implementing this change (less releng
time spent on fixing builds that are stuck in an inconsistent state)
would outweigh the downsides (two windows of a few hours each during
the early development cycle where no builds can be launched).
Issues that I see with builds that just "happened to be in the wrong
place at the wrong time" fall broadly into two categories (though I
have seen other, rarer types of problems):
1. Builds launched while the mass branching is in progress get the
fcXX (where XX = old-rawhide / branched) dist-tag, but are only tagged
with fXY (XY = new-rawhide) by koji. This results in them only being
available in the rawhide repos, and not from "branched" at all. Just
resubmitting the build for "branched" doesn't work, because the wrong
dist-tag causes NVR conflicts. Fixing this requires either releng
intervention (useless busywork) or bumping the release and submitting
new builds for *both rawhide and branched* (a waste of resources).
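(A hypothetical example, using the F38 branching: a build submitted
mid-branching ends up as foo-1.0-1.fc38 but is tagged only into f39,
so it appears only in rawhide repos. Resubmitting the same commit for
the f38 branch then fails, because the NVR foo-1.0-1.fc38 already
exists in koji.)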
2. Builds launched just before updates-testing enablement can get
stuck in the "testing" state before there is an actual updates-testing
repo, and are hence not available from *any* repository (for testing?)
during the beta freeze, but will get pushed to stable afterwards. This
results in users who want to test the beta release (or "pre-beta" with
updates-testing enabled) not seeing these updates at all, yet the
updates will be pushed to "stable" immediately after the beta freeze
is lifted (i.e. without *any* amount of testing).
Fabio