

There are a lot of mods here, and I don’t think most of us ever volunteered, we were just early to the party. And considering the community is called “Fuck AI” you’re probably fine posting mildly offensive joke comments.
This article seems to be written from the perspective that AI is bad because it doesn’t work. I don’t know what kind of photographer this happened to - maybe he’s just a hobbyist - so I’m trying not to judge him too harshly.
But as someone who went to school for photography (technically cinematography but we still did all the photography classes the school offered), it bothers me that the art of even doing your own edits in photoshop is being replaced by lazy use of AI. Why give up control of your art, and turn it over to a guessing machine?
Anyway, I’m probably just having an old-man-yells-at-cloud moment, but real artists who use “professional” software like this don’t want the AI in there either. It’s not a tool I’d ever want to use.
While I personally agree with your sentiment, and much prefer Arch to Debian for my own systems, there is one way Debian can be more stable. When projects release software with bugs, I usually have to deal with those on Arch, even if someone else has already submitted bug reports upstream and fixes are being worked on. There are often periods of a couple of weeks where something is broken - usually nothing bigger than a minor annoyance I can work around. Admittedly, I could just stop doing updates when everything seems to be working, to stay in a more stable state, but Debian is also more broadly and thoroughly tested. The downside is that when upstream bugs do slip through into Debian, they tend to stay there longer than they do on Arch. That said, most of those bugs wouldn’t get fixed as fast upstream if not for rolling-distro users testing things and finding bugs before buggy releases reach non-rolling “stable” distros.
I’ve been using Syncthing-Fork (on F-Droid) for the extra features it has. I wonder if that developer will be able to continue.
I remember ordering some samples from them when they were a newer company, and how cool it was when they added metal as a material option. Sad to see them go. It seems like much more a case of another company ruined by going public than a failure of their business model. I guess the silver lining is that they simply went under, rather than morphing into the worst possible version of themselves while trying to squeeze every penny in pursuit of infinite growth (or maybe they tried that for a while and it failed too - I’ll admit I haven’t been paying attention to the scene for the last several years).
Not sure exactly how well this would work for your use case of forwarding all traffic, but I use autossh and SSH reverse tunneling to forward a few local ports/services from my local machine to my VPS, where I can then proxy those ports in nginx or Apache on the VPS. It might take a bit of extra configuration to go this route, but it’s been reliable for me for years. WireGuard is probably the “newer, right way” to do what I’m doing, but personally I find SSH tunnels a bit simpler to wrap my head around and manage.
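For anyone curious, the basic shape of that setup looks something like this (hostnames, ports, and the nginx snippet are made-up placeholders - adjust for your own services):

```shell
# Keep a reverse tunnel alive from the local machine to the VPS.
# -M 0 disables autossh's own monitor port in favor of ssh's keepalives,
# -N means "don't run a remote command, just forward ports".
autossh -M 0 -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R 127.0.0.1:8080:localhost:80 \
  user@my-vps.example.com

# Then on the VPS, nginx (or Apache) can proxy a public vhost
# to the tunneled port, e.g. in a server block:
#   location / {
#       proxy_pass http://127.0.0.1:8080;
#   }
```

Running the autossh command from a systemd user service (or similar) makes it restart on boot and survive network drops.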
Technically WireGuard would have a touch less latency, but most of the latency comes from the round trip between you and your VPS, so the difference in protocols is comparatively negligible.
I figured you were being genuine, but there are usually a few people who point at Microsoft’s “embracing” of Linux as the first step in the “embrace, extend, extinguish” trope, and see any involvement by Microsoft as nefarious - when the reality is just that Microsoft’s Azure cloud services are a much larger share of their annual revenue than Windows, and Linux is a major part of their cloud offerings.
If you browse the LKML (Linux Kernel Mailing List) for 5 minutes, you’ll probably see a bunch of microsoft.com email addresses, and it’s been that way for years. I understand why it bothers some people, but Linus (and a couple of others) approve everything that actually gets merged, whether it’s from a Microsoft employee, a Red Hat employee, or anyone else. Even if Microsoft wanted to pay employees to submit patches that would hurt the kernel, the chance they’d actually be approved is so low it wouldn’t be worth their time.
Maybe I’ll give it another go soon to see if things have improved for what I need since I last tried. I do have a couple of aging servers that will probably need to be upgraded soon anyway, and I’m sure the Python scripts I’ve used in the past to help automate server migration will need updating too, since it’s been a while since I last used them.
I think my skepticism, and my desire to have Docker get out of my way, has more to do with already knowing the underlying mechanics: I was used to managing services before Docker was a thing, and then Docker came along and said “just learn Docker instead.” Which would be fine if it didn’t mean not only a shift away from what I already know, but a separation from it, with extra networking and Docker configuration to fuss with. If I weren’t already used to managing servers pre-Docker, then yeah, I totally get it.
That’s a big reason I actively avoid Docker on my servers: I don’t like running a dozen instances of my database software. And considering how much work it would take to go through and configure each container to use an external database, to me it’s just as easy to configure each piece of software yourself and know what’s going on under the hood, rather than relying on a bunch of defaults chosen by whoever made the image.
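To be fair, the per-container work is usually something like this - though the image name and environment variable names here are hypothetical, and every image expects different ones, which is exactly the digging-through-docs part I’m complaining about:

```shell
# Hypothetical example: pointing a containerized app at a shared
# Postgres instance running on the host, instead of a bundled DB container.
# --add-host maps host.docker.internal to the host's gateway IP
# (supported in Docker 20.10+), so the container can reach the host.
docker run -d --name someapp \
  --add-host=host.docker.internal:host-gateway \
  -e DB_HOST=host.docker.internal \
  -e DB_PORT=5432 \
  -e DB_USER=someapp \
  -e DB_PASSWORD=changeme \
  someapp/image:latest
```

Multiply that by a dozen services, each with its own variable names and migration quirks, and the “just use the bundled database” default starts to look like the path of least resistance.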
I hope a good amount of my issues with Docker have been solved since I last seriously tried to use it (which was back when they were literally giving away free tee shirts to get people to try it). But the times I’ve peeked at it since, it seems that Docker gets in the way more often than it solves problems.
I don’t mean to yuck other people’s yum though, so if you like Docker and it works for you, don’t let me stop you from enjoying it. I just can’t justify the overhead for myself (both at the system-resource level and at the personal-time level of inserting an additional layer of configuration between me and my software).
I’m sure you’ll do a great job as mod. Thanks for starting this community.
There is still a desktop overview that allows dragging windows between virtual desktops (Meta+G). Unfortunately, when they removed the old overview, they forgot to fully integrate the new one, so it can’t be activated by screen edges (which is how I used to access the old desktop overview).
I think that’s actually what discord should be used for. It’s one of the better platforms for voice/video/text chat. It’s mostly just when people use discord for what should be a public forum or wiki that it becomes a problem.
And sure, it’s not a great place for open source developers to do all their communication, because being able to reference past discussions matters, and that’s lost if a project lead closes the server. But it’s probably fine for coding sprints and the occasional meeting, as long as someone is taking notes to be documented elsewhere. Discord is arguably better than Zoom for that use case.
There are some differences between distros as to whether TRIM is enabled by default (I’ve read that Ubuntu enables it by default, but Debian does not). That said, depending on what file system your SSD is formatted with, it may be enabled by default at that level. The most often recommended file systems for SSDs are Btrfs and F2FS, both of which support and enable TRIM by default (as of Linux 6.2 for Btrfs, so on an older kernel you might need to enable it manually). Most distro installers support using Btrfs as the main file system, but F2FS is more hit and miss. The safest bet is to investigate once you settle on a distro, but support should be pretty standard, even if it’s not enabled by default.
For what it’s worth, Epic Games sold Bandcamp to Songtradr in 2023. It’s still an American company, and probably isn’t meaningfully better than Epic, but at least they haven’t totally tanked the platform yet either.