Welp no change. I’m guessing the motherboard firmware already contained the latest microcode. Oh well, was worth a try, thank you.
It’s a pain in the butt to swap CPUs one more time, but that may pale in comparison to trying to convince the shop that a core is bad when the faults are intermittent. 🤪
This sounds like my best shot, thank you.
I’ve installed the `amd-ucode` package. It already adds `microcode` to the `HOOKS` array in `/etc/mkinitcpio.conf` and runs `mkinitcpio -P`, but I’ve moved `microcode` before `autodetect` so it bundles code for all CPUs, not just for the current one (to have it ready when I swap), and re-ran `mkinitcpio -P`. Also had to re-run `grub-mkconfig -o /boot/grub/grub.cfg`.
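For reference, the relevant line in my `/etc/mkinitcpio.conf` now looks something like this — it’s the stock Arch hook list with `microcode` moved, so treat it as a sketch rather than gospel:

```
# /etc/mkinitcpio.conf — 'microcode' placed before 'autodetect' so the
# image bundles microcode for all CPUs, not just the detected one
HOOKS=(base udev microcode autodetect modconf kms keyboard keymap consolefont block filesystems fsck)
```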
I’ve seen the message “Early uncompressed CPIO image generation successful” pass by, and `lsinitcpio --early /boot/initramfs-6.12-x86_64.img | grep micro` shows `kernel/x86/microcode/AuthenticAMD.bin`, there’s a `/boot/amd-ucode.img`, and an `initrd` parameter for it in `grub.cfg`. I’ve also confirmed that `/usr/lib/firmware/amd-ucode/README` lists an update for that new CPU (and for the current one, speaking of which).
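The `initrd` line in the generated menu entry looks something like this (paths as on my system, where /boot sits on the root partition):

```
# excerpt from a menu entry in /boot/grub/grub.cfg
initrd  /boot/amd-ucode.img /boot/initramfs-6.12-x86_64.img
```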
Now from what I understand all I have to do is reboot and the early stage will apply the update?
Any idea what it looks like when it applies the microcode? Will it appear in `dmesg` after boot or is it something that happens too early in the boot process?
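For reference, this is what I plan to run after rebooting; I believe recent kernels print the loaded revision, though the exact wording varies between kernel versions (the value below is illustrative):

```
dmesg | grep -i microcode
# something along the lines of:
#   microcode: Current revision: 0x08701021
```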
BIOS is up to date, CPU model explicitly listed as supported, memtest ran fine, not using XMP profiles.
All hardware is the same, I’m trying to upgrade from a Ryzen 3100 so everything should be compatible. Both old and new CPU have a 65W TDP.
I’m on Manjaro, everything is up to date, kernel is 6.12.17.
Memory runs at 2133 MHz, same as for the other CPU. I usually don’t tweak BIOS much if at all from the default settings, just change the boot drive and stuff like “don’t show full logo at startup”.
I’ve added some voltage readings in the post and answered some other posts here.
Everything is up to date as far as I can tell, I did Windows too.
memtest ran fine for a couple of hours, but the CPU stress test hung partway through, while the CPU temp was around 75C.
RAM is indeed at 2133 MHz and the cooling is great, got a tower cooler (Scythe Kotetsu mark II), idle temps are in the low 30’s C, stress temp was 76C.
Motherboard is a Gigabyte B450 Aorus M. It’s fully updated and support for this particular CPU is explicitly listed in a past revision of the mobo firmware.
Manual doesn’t list any specific CPU settings but their website says stepping `A0`, and that’s what the defaults were setting. Also I got “core speed: 400 MHz”, “multiplier: x 4.0 (14-36)”.
even some normal batch cpus might sometimes require a bit more (or less) juice or a system tweak
What does that involve? I wouldn’t know where to begin changing voltages or other parameters. I suspect I shouldn’t just faff about in the BIOS and hope for the best. :/
The problem is that the main container can (and usually does) rely on other layers, and you may need to pull updates for those too. Updating one app can take 5-10 individual pulls.
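As a minimal sketch, assuming a compose-managed stack, one update cycle looks like this, and each `pull` can fetch several layers per image:

```
docker compose pull     # fetch updated images (and all their changed layers)
docker compose up -d    # recreate only the containers whose images changed
```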
And let’s not forget Cortana.
The dev has not made available any means to donate to him directly. He asks that people donate to the maintainers of the block lists instead.
Linux printing is very complex. Before Foomatic came along you got to experience it in all its glory, and setting up a working printing chain was a pain. The Foomatic Wikipedia page has a diagram that will make your head spin.
If you end up with resizing /var as the only solution, please post your partition layout first and ask, don’t rush into it. A screenshot from an app like Disk Manager or GParted should do it, and we’ll explain the steps and the risks.
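If a screenshot is awkward, pasting terminal output works just as well, e.g.:

```
lsblk -f    # partitions, filesystems, labels and mount points
```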
When you’re ready to resize, you MUST use a bootable stick, not resize from inside the running system. Make a stick with something like Ventoy, drop the GParted Live ISO on it, then boot from it and pick GParted Live. Write down the steps beforehand, be careful what you do, and hope there’s no power outage during the resize.
The safest method, if your /home has enough space, is to use it instead of /var for (some) Flatpak installs. You can force any Flatpak install to go to /home by adding `--user` to the command.
If you look at the output of `flatpak list` it will tell you which packages are installed in the user home dir and which in the system one (/var). You can also show the size of each package with `flatpak list --columns=name,application,version,size,installation`.
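The output is something like this (the apps and sizes here are made up for illustration):

```
$ flatpak list --columns=name,application,version,size,installation
Firefox           org.mozilla.firefox    140.0     1.0 GB    system
Signal Desktop    org.signal.Signal      7.19.0    500 MB    user
```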
I don’t think you can move installed apps directly between system/user like Steam can (Flatpak is REALLY overdue for a good package manager) but you can uninstall apps from system, then run `flatpak remove --unused`, then install them again with `--user`.
Please note that apps installed with `--user` are only seen by the user that installed them. Also you’ll have to clean up separately for the system and for each user in the future (`flatpak remove --unused` for system, then `flatpak remove --unused --user` for each user).
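Putting it together, moving one app from system to user looks roughly like this (the application ID is a placeholder, adjust for your apps):

```
APP=org.example.SomeApp                 # placeholder application ID

flatpak uninstall "$APP"                # remove the system-wide copy
flatpak remove --unused                 # drop runtimes nothing else needs
# the flathub remote may need to be added for your user first:
flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install --user flathub "$APP"   # reinstall under ~/.local/share/flatpak
```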
Interesting, I’ll keep it in mind.
Still not sure it would help in all cases. Particularly when 3rd party repos have to override core packages because they need to be patched to support whatever they’re installing. Which is another very bad practice in the Ubuntu/Debian world, granted.
I’m not sure how that would help. First of all, it would still end up blocking proper updates. Secondly, it’s hard to figure out what exactly you’re supposed to pin.
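For what it’s worth, pinning means dropping a file under `/etc/apt/preferences.d/`, something like the sketch below (the origin host is a placeholder), and you’d still have to know which repo and which packages to target:

```
# /etc/apt/preferences.d/99-thirdparty (hypothetical)
# Rank a third-party origin below the core repos (which default to 500)
Package: *
Pin: origin "ppa.example.com"
Pin-Priority: 400
```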
Third party package mechanism is fundamentally broken in Ubuntu (and in Debian).
Third party repos should never be allowed to reuse package names from the core repos. But they are, so their packages masquerade as core packages while carrying different version strings, and at upgrade time the updater doesn’t know what to do with those versions or how to resolve the dependencies.
That leaves you with a broken system where you can’t upgrade, and eventually you can’t do anything except a clean reinstall.
After this happened several times while using Ubuntu I resorted to leaving more and more time between major upgrades, running old versions on extended support or even unsupported.
Eventually I figured that if I’m gonna reinstall from scratch I might as well install a different distro.
I should note I still run Debian on my server, because that’s a basic install with just core packages and everything else runs in Docker.
So if you delegate your package management to a completely different tool, like Flatpak, I guess you can continue to use Ubuntu. But it seems dumb to be required to resort to Flatpak to make Ubuntu usable.
People often think that things like recording your screen or keylogging are the worst but they’re not. These attacks would require you to be targeted by someone looking for something specific.
Meanwhile automated attacks can copy all your files, or encrypt them (ransomware), search for sensitive information, or use your hardware for bad things (crypto mining, spam, DDoS, spreading the malware further), or most likely all of the above.
Automated attacks are much more dangerous and pervasive because they are conducted at massive scale. Bots scan huge swaths of IPs and try all the known exploits and vulnerabilities without getting tired, without caring how tedious it is, without even caring whether they’re trying the right vulnerability against the right kind of OS or app. They just spray everything and see what sticks.
You’re thousands of times more likely to be caught by such malware than to be targeted by someone with the skill and motive to record your screen or your keyboard.
Secondly, if someone like that targets you and has access to your user account, Wayland won’t stop them. They can gain access to your root account, they can install elevated spyware, they can patch Wayland and so on.
What Wayland is doing is the equivalent of asking you to wear a motorcycle helmet 24/7, just in case you slip on some spilled juice, or a flower pot falls on your head, or the bus you’re in crashes. All those things are possible and the helmet would come in handy but are they likely? We don’t do it because it’s not, and it would be a major inconvenience.
You were merely lucky that they didn’t break.
Lucky… over 5 years and with a hundred AUR packages installed at any given time? I should play the lottery.
I’ve noticed you haven’t given me any example of AUR packages that can’t be installed on Manjaro right now, btw.
it wasn’t just a rise in popularity of Arch it was Manjaro’s PAMAC sending too many requests DDoSing the AUR.
You do realize that was never conclusively established, right? (1) Manjaro was already using search caching when that occurred, so they had no way to spam the AUR, (2) there’s more than one distro using pamac, and (3) anybody can use “pamac” as a user agent and there’s no way to tell if it’s coming from an actual Manjaro install.
My money is on someone actually DDoS’ing AUR and using pamac as a convenient scapegoat.
Last but not least, you’re trying to use this to divert from the fact that AUR packages work fine on Manjaro.
Honestly I’ll just send it back at this point. I have kernel panics that point to at least two of the cores being bad, which would explain the sporadic nature of the errors. It would also explain why memtest ran fine, since it only uses the first core by default. Too bad I didn’t think about that when running memtest, because it lets you select cores explicitly.