I’m curious how software can be created and evolve over time. I’m afraid that at some point, we’ll realize there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.
Are there any instances of this happening? Where something is designed with a flaw that doesn’t get realized until much later, necessitating scrapping the whole thing and starting from scratch?
Systemd. Nuke it from fucking orbit.
Everyone hates on it. Here I am, a simple Silverblue user, and it seems fine to me. What is the issue, actually?
The gatekeeping community
there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.
I think this was the main reason for the Wayland project. So many issues with Xorg that it made more sense to start over, instead of trying to fix it in Xorg.
And as I’ve understood it from what I’ve read, Wayland has been a nearly 10-year mess that ended up with a product as bad as, or perhaps worse than, Xorg.
Not trying to rain on either parade, but X is like the Hubble telescope if we had added new upgrades to it every 2 months. It’s way past its end of life, doing things it was never designed for.
Wayland seems… To be missing direction?
I do not want to fight and say you misunderstood. Let’s just say you have been very influenced by one perspective.
Wayland has taken a while to fully flesh out. Part of that has been delay from the original designers not wanting to compromise their vision. Most of it is just the time it takes to replace something mature (X11 is 40 years old). A lot of what feels like Wayland problems actually stems from applications not having migrated yet.
While there are things yet to do, the design of Wayland is proving itself to be fundamentally better. There are already things Wayland can do that X11 likely never will (like HDR). Wayland is significantly more secure.
At this point, Wayland is either good enough or even superior for many people. It does not yet work perfectly for NVIDIA users which has more to do with NVIDIA’s choices than Wayland. Thankfully, it seems the biggest issues have been addressed and will come together around May.
The desktop environments and toolkits used in the most popular distros default to Wayland already and will be Wayland-only soon. Pretty much all the second-tier desktop environments have plans to get to Wayland.
We will exit 2024 with almost all distros using Wayland and the majority of users enjoying Wayland without issue.
X11 is going to be around for a long time but, on Linux, almost nobody will run it directly by 2026.
Wayland is hardly the Hubble.
It’s actually a classic programmer move to want to start over. I’ve read the book “Clean Code” and it talks about this a little bit.
Apparently it would not be the first time that the fresh start turns into the same mess as the old codebase it’s supposed to replace. While starting over can be tempting, refactoring is in my opinion better.
If you refactor a lot, you start thinking the same way about the new code you write. So any new code you write will probably be better and you’ll be cleaning up the old code too. If you know you have to clean up the mess anyways, better do it right the first time …
However, it is not hard to imagine that some programming languages simply get too old and the application has to be rewritten in a new language to ensure continuity. So I think that happens sometimes.
Yeah, this was something I recognized about myself in the first few years out of school. My brain always wanted to say “all of this is a mess, let’s just delete it all and start from scratch” as though that was some kind of bold/smart move.
But I now understand that it’s the mark of a talented engineer to see where we are as point A, where we want to be as point B, and be able to navigate from A to B before some deadline (and maybe you have points/deadlines C, D, E, etc.). The person who has that vision is who you want in charge.
Chesterton’s Fence is the relevant analogy: “you should never destroy a fence until you understand why it’s there in the first place.”
I’d counter that with monolithic legacy apps without any testing, trying to refactor can be a real pain.
I much prefer starting from scratch, while trying to avoid past mistakes and still maintaining the old app until the new app is ready. Then management starts managing, and the new app becomes the old app. Rinse and repeat.
The difference between the idiot and the expert is that the expert knows why the fences are there and can do the rewrite without having to relearn lessons. But if you’re supporting a package you didn’t originally write, a rewrite is much harder.
We haven’t rewritten the firewall code lately, right? *checks* Oh, it looks like we have. Now it’s nftables.
I learned ipfirewall, then ipchains, then iptables came along, and I was like, oh hell no, not again. At that point I found software to set up the firewall for me.
Damn, you’re old. iptables came out in 1998. That’s the one I learned (and I still don’t fully understand it).
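For flavor, here is the same “allow inbound SSH” rule across two of those generations (illustrative fragments, not complete rulesets):

```
# iptables, one rule appended from the command line:
#   iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# The nftables equivalent, as it could appear in /etc/nftables.conf:
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        tcp dport 22 accept
    }
}
```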
Linux does this all the time.
ALSA -> Pulse -> Pipewire
Xorg -> Wayland
GNOME 2 -> GNOME 3
Every window manager, compositor, and DE
GIMP 2 -> GIMP 3
SysV init -> SystemD
OpenSSL -> BoringSSL
Twenty different kinds of package manager
Many shifts in popular software
Not really software, but personally I think the FHS could do with replacing. It feels like it’s got a lot of historical baggage tacked on that it could really do with shedding.
Fault handling system?
Filesystem Hierarchy Standard
/bin, /dev, /home and all that stuff

What’s wrong with it?
$PATH shouldn’t even be a thing, as today disk space is cheap so there is no need to scatter binaries all over the place. Historically, /usr was created so that you could mount a new disk there and have more binaries installed on your system when the disk with /bin was full. And there is so much other stuff like that which doesn’t make sense anymore (/var/tmp comes to mind, /opt, /home which was supposed to be /usr but the name was already taken, etc.).

$PATH shouldn’t even be a thing, as today disk space is cheap so there is no need to scatter binaries all over the place.
$PATH is very useful for wrapper scripts. Without it there wouldn’t be an easy way to fix the mess Steam makes in my homedir by dumping a bunch of useless dotfiles into it. The trick is simply to have a script with the same name as the steam binary in a location that comes first in $PATH, so it will always be called before steam can start and murder my home again.
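A minimal sketch of that trick, assuming the real binary lives at /usr/bin/steam (the wrapper location and the throwaway HOME directory below are my own illustrative choices):

```shell
# Put a wrapper named "steam" in a directory that comes first in $PATH.
mkdir -p "$HOME/.local/bin"
cat > "$HOME/.local/bin/steam" <<'EOF'
#!/bin/sh
# Point HOME at a throwaway directory so Steam's dotfiles land there
# instead of in the real home, then hand off to the real binary.
export HOME="$HOME/.local/share/steam-home"
mkdir -p "$HOME"
exec /usr/bin/steam "$@"
EOF
chmod +x "$HOME/.local/bin/steam"
```

As long as `$HOME/.local/bin` precedes `/usr/bin` in $PATH, typing `steam` runs the wrapper first.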
About /var/tmp, I just have it symlinked to /tmp. Technically /var/tmp still has a reason to exist, as that location is used for temporary files that you don’t want to lose on power loss, but I actually went over the list of possible issues and IIRC it was mostly some cache files of vim.
EDIT: Also today several distros symlink /bin and /sbin to /usr/bin.
You missed my point. The reason $PATH exists in the first place is because binaries were too large to fit on a single disk, so they were scattered around multiple partitions (/bin, /sbin, /usr/bin, etc.). Now all your binaries can easily fit on a single partition (weirdly enough, /usr/bin was chosen as the “best candidate” for it), but we still have all the other locations, symlinked there. It just makes no sense.

As for the override mechanism you mention, there are much better tools nowadays to do that (overlayfs for example).
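One hedged sketch of what such an overlayfs override could look like (the override directories and the choice to layer over /usr/bin are purely hypothetical, and mounting requires root): files placed in the upper directory shadow the originals, everything else shows through unchanged, and $PATH never enters the picture.

```
# Hypothetical /etc/fstab entry layering a local override directory
# on top of /usr/bin with overlayfs:
overlay  /usr/bin  overlay  lowerdir=/usr/bin,upperdir=/usr/local/overrides,workdir=/usr/local/overrides-work  0 0
```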
This is what plan9 does for example. There is no need for $PATH because all binaries are in /bin anyway. And to override a binary, you simply “mount” it over the existing one in place.

but we still have all the other locations, symlinked there. It just makes no sense.

Because a lot of shit would break if that wasn’t the case, starting with every shell script that has the typical #!/bin/sh or #!/bin/bash shebang.
This is what plan9 does for example. There is no need for $PATH because all binaries are in /bin anyways. And to override a binary, you simply “mount” it over the existing one in place.
Does that need elevated privileges? Because with PATH what you do is export this environment variable with the order you want, like this:
export PATH="$HOME/.local/bin:$PATH"
(And this location is part of the xdg base dir spec, btw.) This means that my home bin directory will always be first in PATH, and for the steam example it means that I don’t have to worry about having to add or change the script every time the system updates, etc.
Also, what do you mean by mounting a binary over? I cannot replace the steam binary in this example. What I’m doing is using a wrapper script that launches steam from a different location instead (and also passes some flags that make steam launch silently).
How would virtual environment software, like conda, work without $PATH?
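They work precisely because activation is, at its core, just a PATH prepend. A toy sketch (the /tmp/fakeenv path is made up) showing why the environment’s binary wins:

```shell
# Create a fake environment with its own "python" that just announces itself.
mkdir -p /tmp/fakeenv/bin
printf '#!/bin/sh\necho "env python"\n' > /tmp/fakeenv/bin/python
chmod +x /tmp/fakeenv/bin/python

# "Activating" the environment is essentially this one line: the env's bin
# directory now precedes everything else, so its python is found first.
PATH="/tmp/fakeenv/bin:$PATH"
command -v python   # resolves to /tmp/fakeenv/bin/python
```

Deactivating is just restoring the old PATH, which is why such environments are cheap to enter and leave.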
GUI toolkits like Qt and Gtk. I can’t tell you how to do it better, but something is definitely wrong with the standard class hierarchy framework model these things adhere to. Someday someone will figure out a better way to write GUIs (or maybe that already exists and I’m unaware) and that new approach will take over eventually, and all the GUI toolkits will have to be scrapped or rewritten completely.
Desktop apps nowadays are mostly written in HTML with Electron anyway.
Which - in my considered opinion - makes them so much worse.
Is it because writing native UI on all current systems I’m aware of is still worse than in the times of NeXTStep with Interface Builder, Objective C, and their class libraries?
And/or is it because it allows (perceived) lower-cost “web developers” to be tasked with “native” client UI?
Are you aware of macOS? Because it is still built with the same UI tools that you mention.
There is some Rust code that needs to be rewritten in C.
Agree, call me unreasonable or whatever but I just don’t like Rust nor the community behind it. Stop trying to reinvent the wheel! Rust makes everything complicated.
On the other hand… Zig 😘
Happens all the time on Linux. The current instance would be the shift from X11 to Wayland.
The first thing I noticed was when the audio system switched from OSS to ALSA.
And then from ALSA to PulseAudio haha
They’re at different layers of the audio stack though so not really replacing.
And then ALSA to all those barely functional audio daemons to PulseAudio, and then again to PipeWire. That one sure took a few tries to get right.
Are there any things in Linux that need to be started over from scratch?
Yes, Linux itself! (i.e. the kernel). It would’ve been awesome if Linux were a microkernel; there are so many advantages to it, like security, modularity, and resilience.
The year of Hurd, maybe?
Hurd-ng is on its way: https://www.gnu.org/software/hurd/hurd/ng.html
Some form of stable, modernized bluetooth stack would be nice. Every other bluetooth update breaks at least one of my devices.