Linux is ready, but the professional software devs aren't. That's literally the only thing stopping me from fully switching.
I've been using Linux as my main OS (NixOS btw) for everything for years now. The only thing that doesn't work is anti-cheat…
I don't think Windows, Mac, Android, iOS, or whatever is "ready yet" either. Operating systems are always in development. There are things I can do on my Linux machine that I can't do on my Windows machine, and vice versa.
Linux was ready for me 15 years ago.
I’m at the point where printers, bad WiFi, local file sharing/casting, crash recovery, GPU compute, even some driver issues, stuff like that just works in Linux (CachyOS specifically), but doesn’t in Windows.
Windows is getting progressively worse.
I still dual boot a very-stripped Windows for games, HDR stuff, and anything that requires a weird driver (like phone tethering), but man, Microsoft just keeps removing or hiding things I use to make Windows sorta functional.
2 days ago. And it took about 4 different distros to get one that would even load on my laptop with discrete video cards…
Have you tried Bazzite? I use it on my laptop with a dGPU and it works fine out of the box.
That is the one I finally got to work.
Hell yeah… I've been using it for maybe 6 months now and I really like it. OSTree took a little getting used to, but I've grown to really like the "immutable" concept. Very stable.
And as someone who loves gaming, Bazzite has so much shit pre-installed and pre-configured (or very easily installed/configured) to make that experience better.
Cannot recommend it enough.
My excuse for not switching to Linux for a long time was that it couldn't play games. Now that Proton is a pretty developed thing, that's no longer an excuse. I actually tried out Linux Mint for a friend to see how easy it was to use, and I just kept using it because it did everything I wanted it to. As a power user I had to modify it quite a lot, but my friend basically just wants to load into the OS, launch a browser, or play games from Steam, and that's about it, so for him it's pretty easy and straightforward.
I actually ended up installing kubuntu on his computer and modified it to look exactly like Windows 7, which is what he’s upgrading from. It’s kind of scary how close it got.
I dual booted with Windows purely for gaming and Linux for everything else for a long time.
After upgrading to Windows 11 I switched the default boot option to Linux and moved all my games there.
Now Windows is used exclusively for printing with that pesky Canon printer of ours.
Tobii haven’t released Linux drivers for their eye-tracker, but that’s the only gaming-related problem I’ve had this time around.
Tried it again a few months ago when HDR support first dropped in KDE. It didn’t work at all. Everything was desaturated and dim. Literally the opposite of what HDR is supposed to do.
I’m giving it another year before I try Linux again. Hopefully the bugs are sorted by then.
Sadly, I'm still not sure if it is ready. I installed Mint on a couple of systems this year and am really disappointed at how much tinkering and troubleshooting I have had to do. For example, I had to order a specific WiFi card because almost nobody makes Linux-compatible USB WiFi adapters; my brand-new computer couldn't connect to the internet despite me already having an expensive WiFi dongle.
The linux community will do anything besides improve the usability of their technology in their quest to get people to use their inferior technology.
Post fewer memes; make an OS that is stable, has a navigable UI, and runs the things people want to run.
Question: Would I still struggle to get games working on a desktop using Linux, as I have in the past (always some driver issue for some crucial bit of hardware; either the GPU can't do 3D or the NIC doesn't function, etc.), or would they work as well as on a Steam Deck, which doesn't have to account for a variety of hardware differences? Almost every single person I have seen lately saying gaming on Linux is awesome now is using a literal device designed for it. But what about my hardware? Is getting wrappers for Nvidia drivers still a fucking PITA with a 50/50 chance of actually working correctly?
I love Linux for just basic computing needs or running servers. But I’ve always had a bad time when trying to play games.
Last year with Ubuntu.
Installed it on an old laptop. Booted once then never again.
Installed windows. Worked like a charm.
This is Ubuntu, the OS that makes all the decisions for you, like Windows.
Most people’s measure of whether it’s ready is “How soon until I have to type into a console to get something done”.
If it’s within the first three months - then it’s not ready.
Most people’s measure of whether it’s ready is “How soon until I have to type into a console to get something done”.
[citation needed]
I agree that that’s one possible way someone could decide that Linux isn’t ready, but I don’t think it’s a particularly good one, and definitely not one I’d agree with.
Would you agree that if you need to use the Registry Editor, Windows isn’t ready for mass adoption?
Regular users would never have to put anything in the registry.
That is only for power users.
By that definition Windows 11 isn’t ready for people too. You’ll need the command line at installation to circumvent the mandatory MS account requirement.
No, you need the command line for that. Most people will just create an MS account and continue.
You need a command line to install it on unsupported hardware.
Where are all the people that grew up with MS-DOS and had to edit their autoexec.bat files to install a TSR? Why is it such a big deal now but somehow everybody was okay with it 30 years ago? It won’t kill people to learn a bit about how their computer works.
It’s like owning a car but not even knowing where the windshield wiper fluid goes. And that’s becoming a thing too, sadly. Might as well lock the hood and only let the dealer in, that seems to be what people want nowadays.
There's a guy at work who proudly reminisces about how we had to fuck around with autoexec.bat and config.sys back in the day to get things to work…
But refuses to use Linux because CLI…
Where are all the people that grew up with MS-DOS
People for the most part haven't had to deal with the command line since Windows 95 was released, and that was 30 years ago. Which means anyone old enough to have regularly used DOS is at least in their 40s now.
Right, and cars got pushbutton ignition, backup cameras, lane sensors, and front end collision warnings. That doesn’t mean people should stop learning how to change a tire. I blame schools for not keeping kids technologically literate in a world where computers run our entire lives.
Sounds to me like you’d be shocked at the number of people who couldn’t manage to change a tire on their own if their life depended on it.
You can blame whoever you want, but frankly, most people just don’t care to learn anything unless they’re already interested in it. There are tons of people who take their car to a mechanic for the most mundane shit. They’ll call in the Geek Squad when they have a PC problem. They have no interest in learning how their stuff works because there’s always someone else they can pay to handle that.
Somewhere a Gen-Z or Gen Alpha is reading this on a tablet and has no idea why anyone owns a computer - they’re thinking “Computers are dumb, because they’re just large outdated clunky looking tablets”
…and somewhere in the past there was probably an old man, angry at the fact that modern keyboards will never match the elegance and typing skills found by using a type writer. Lamenting that people have lost their mechanical understanding of things because they’ve never had to replace a ribbon when it’s lost its ability to pick up or put down ink as it should.
You're standing between these two arguments (thinking you're correct)… when the arguments (and ones like them) stretch all the way back in time to the first technologies, and all the way forward into the future, to the last of them, or as far as the mind can see.
Behind the pretty UIs, computers and tablets are still computers, with CPUs running machine code residing in memory. Nothing has fundamentally changed since the 60s. Somebody has to continue to understand how it all works behind the scenes to move us forward, or we’ll have the movie “Idiocracy” coming true, and we’ll all stagnate as a species while an AI tries its best to manage us and keep us alive.
In your analogy, it would be as if we’re all still using mechanical typewriters, but have created an automaton with a pretty face to talk to which pushes the keys and changes the ribbon behind a curtain. The typewriter is still there.
I’ve definitely had to do that with Windows so is it not ready?
I’m obviously going to be downvoted for this, but the second you ask me to use the terminal is the second the OS is not ready.
Last week I reinstalled Windows after trying Linux Mint. I have a 54" ultrawide monitor and I wanted windows to snap into 3 sections.
I spent a few hours in the terminal trying to install something, after trying everything in Flatpak. Windows 11 splits screens out of the box. It can even tile, and you can use hotkeys to snap left and right.
In order for normies like me to switch, you have to make the OS as easy to use as Windows. Don't make us use the terminal like I'm on DOS.
I've used Linux for 25 years now, and I remember how back then, whenever people needed help with Windows, the answer was always "go to the registry editor and add the key djrgegfbwkgisgktkwbthagnsfidjgnwhtjrtv in position god-knows-where" to fix some stupid Windows shit. That, apparently, made Windows user-ready.
On Linux I'd have to edit an English-language file and add an English word, and that meant it wasn't user-ready.
Yeah, Linux was ready long ago
So my experience has been mixed. I should note that I have always run some Linux systems (my Pi-hole, as an example), but about 2 months ago I did try to switch my Windows media server over to Linux Mint.
(Long story short, I am still running the windows server)
I really, really, really liked Linux Mint, I should say at the outset. I wanted to install the same -arr stack I use, and self-host a few web apps that provide convenience in my home. To be very fair to Linux Mint, I've been a Windows user for 30+ years and I never knew how to auto-start Python scripts in Windows either.
But, to be critical, I spent hours and hours fighting permission settings in every -arr app, Plex, Docker, any kind of virtual desktop software (none of which would run prior to logging in which made running headless impossible), getting scripts to auto-run at startup, compatibility with my mouse/keyboard and lack of a real VPN client from my provider without basically coding the damn thing myself.
After about a month and a half of trying to get it working, I popped over to my windows install to get the docker command that had somehow worked on that OS but not Linux and everything was just working. I am sorry I love Linux but I wanted to get back to actually coding things I wanted to code, not my fucking operating system.
I'll go back to Linux because Windows is untenable, but I'm going to have to set aside real project time to buckle down and figure out the remaining "quirks".
If you do try again, try LMDE (Linux Mint Debian Edition); you should have fewer issues. Ubuntu has weird permission issues that I've run into before.
There’s actually a good UI for managing permissions I eventually found in Mint, I think the main issues I’m having with it now are the lack of it running headless and unreliability with running my native scripts. I’ll try the Debian version though, that sounds intriguing. When y’all talk about distro hopping, how much re-setup are we talking?
The "arr" stack is very Windowsy. It's built in C# and has some baked-in assumptions that make running it in a container a bit of a pain. But I've been running it for years on Linux. My Linux server boxes are all headless, and I've never needed a GUI for anything. I don't use Plex, though, so maybe that's the difference?
I don't know why you were trying to run virtual desktop software, or what that has to do with running the "arr" stack. But, of course, a virtual desktop is a GUI thing, so if you want a virtual desktop you'll need some kind of GUI connection. Also, your talk about "getting scripts to auto-run at startup" makes me suspect you were approaching the problem in an unusual way, because that's not how you run services on Linux, and hasn't been for decades.
If you ever want to try again, I recently migrated my personal kludged-together “arr” stack to the “Home Operations” method of running things. They run a bunch of apps in a local at-home kubernetes cluster using essentially “declarative operations” based on Flux. Basically, you have a git repo, you check in a set of files there describing which parts of the “arr” stack you want to run, and your system picks up those git changes and runs the apps. The documentation is terrible, but the people are friendly and happy to help.
Currently I have the parts of the “arr” stack I want, plus a few other apps, running on an old Mac Mini from 2014.
Oh, and for a VPN on Linux, I recommend gluetun. It's one app that supports just about every major commercial VPN provider, and it provides features like firewalling non-VPN traffic and reconnecting if something goes wrong.
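As a rough sketch of how gluetun is typically used: it runs as its own container, and other containers are routed through its network namespace. Something like this docker-compose fragment — the provider name, credentials, and the qbittorrent service are placeholders; check gluetun's docs for your provider's exact environment variables:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN          # gluetun needs this to manage the tunnel/firewall
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # placeholder; set to your provider
      - VPN_TYPE=wireguard
      # provider-specific credentials/keys go here (see gluetun's wiki)

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"    # all traffic goes through the VPN container
```

The nice part of this pattern is that if gluetun's tunnel drops, the attached container loses connectivity entirely instead of leaking traffic outside the VPN.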
that’s not how you run services in Linux, and hasn’t been for decades
Thanks for your response. I’m open to the idea that Linux is a different computing paradigm, my frustration is on needing to learn that on the fly and how much of a distraction it was, even on a tertiary machine… that said, how should I be thinking about this?
Ok, back in the “init” days the approach was to have a bunch of scripts in /etc/rc.d/ that the system would run when you started up. The scripts were numbered so they’d execute in a particular order. So, if you required a network for your program, you’d number your script so that it was started after the network script was done. These scripts were often somewhat modular so you could pass them arguments and stuff. You also had corresponding scripts that executed in a certain order when the system was shutting down.
Starting in about 2015 that changed to the systemd approach, where instead of scripts you had configuration files ("units") that described what services you wanted running, what they depended on, etc. This mostly eliminated the need for complex startup/shutdown scripts because systemd took care of that part. So, often, instead of a script to start a program you just needed an executable and some config files.
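To make that concrete — and this ties back to the "auto-start Python scripts" problem above — a minimal systemd unit is just a small config file. This is a sketch; the paths, names, and user are all illustrative:

```ini
# /etc/systemd/system/myscript.service  (illustrative name and path)
[Unit]
Description=Example: auto-start a Python script at boot
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/python3 /opt/myscript/main.py
Restart=on-failure
User=media

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now myscript.service` starts it immediately and at every boot, with no login session required — which is exactly what makes headless setups work.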
So, for a while, a Linux server running a web server might have had a systemd service describing how to run Apache or nginx. But right around the same time that systemd was being adopted, containers were becoming the new hotness. I would guess that most people running web servers are now doing so in containers. I guess you know something about that, since you were talking about Docker. You can run containers with systemd, but most people use some form of container orchestration like Docker Swarm or Kubernetes.
I've personally never used Docker Swarm or Docker Compose, so I can't really talk about how they do things. Instead, I've used Kubernetes, even for running services on a single underpowered machine. I've even used it on Raspberry Pi machines, though you have to be careful with how it uses the "disk" when you do that. I didn't do it for convenience; it was more to learn Kubernetes and to avoid using Docker things.
Kubernetes is a bit overkill for a home setup, but the idea there is that you have dozens or hundreds of servers and you have thousands of microservices running in containers. You don’t want to have to manually manage each machine that might run the service. Instead you tell the kubernetes system details like how many copies to run and it figures out where to run them, and will restart them if they fail, etc.
So, the way I do things is: systemd runs Kubernetes, and Kubernetes runs containerized versions of the "arr" apps. I think you could do the same thing with Docker, where systemd runs Docker (Compose? Swarm?) and that runs your containers.
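For the Docker route, a single "arr" app in a compose file might look roughly like this. The linuxserver.io image name is real, but the paths, port, and UID/GID values are illustrative — and note that PUID/PGID are how those images map file ownership, which is usually the fix for the permission fights described earlier:

```yaml
# docker-compose.yml sketch for one "arr" app (paths and IDs are illustrative)
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    environment:
      - PUID=1000        # run as your user's UID so config/media files stay writable
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - /srv/media:/media
    ports:
      - "8989:8989"      # Sonarr's default web UI port
    restart: unless-stopped
```

With `restart: unless-stopped` and the Docker daemon enabled in systemd, the container comes back after reboots without any login or startup script.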
And then there’s the Flux / GitOps / Declarative Ops setup where you have a git repository that describes the state of the system you want, including how kubernetes is supposed to run, and you have a system that observes that git repo and gets things running as described in the configuration stored in the repo.
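A minimal sketch of that Flux setup, assuming a hypothetical repo with app manifests under `./apps` (the repo URL and names are placeholders):

```yaml
# Flux sketch: point the cluster at a git repo, then reconcile a path from it
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: home-ops
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/example/home-ops   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: home-ops
  path: ./apps      # Flux applies whatever manifests live here
  prune: true       # deleting a file from the repo removes the app
```

The point is that the git repo, not any one machine, is the source of truth: commit a change and the cluster converges to it on its own.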
How deep you want to go in that setup is up to you. It's just that gluing things together with scripts isn't really best practice, especially in the age of containers.