  • If the only problem is that you can’t use dynamic linking (or otherwise make relinking possible), you can still legally use LGPL libraries, as long as you license the project using that library as GPL or LGPL as well.

    However, those platforms tend to be a problem for the GPL in other ways. The GPL has long been known to conflict with Apple’s App Store and similar services, for example, because it forbids imposing extra limits that restrict user freedom, and those stores have terms of service that do exactly that.


  • > If it was a community addition why would it matter? And why would they remove the codecs?

    You don’t have to be a corporation to be held liable for legal issues with hosting codecs; you just need to be big enough for lawyers to see you as an attractive target, and in a country where codec patent issues apply. There’s a very good reason why the servers for deb-multimedia (Debian’s multimedia repo), RPM Fusion (Fedora’s multimedia repo), VLC’s site, and others are all hosted in France and do not offer US-based mirrors. France is a safe haven for FOSS media codecs because its law does not consider software patentable, unlike the US and even most other EU nations.

    Fedora’s main repos are hosted in the US. Even if they weren’t, the ability for any normal user around the world to host and use mirrors is an important part of an open, community-friendly distro, and the existence of patented codecs in that repo would open any mirrors up to liability. Debian has the exact same issue, and both distros settled on the same solution: point users to a separate repo, hosted in France, which contains extra packages for patent-encumbered codecs.


  • I stopped using Arch a long time ago for this same reason. Either Fedora (or derivatives like Nobara) or an atomic/immutable distro (like Bazzite, Silverblue, Kinoite) is probably the way to go.

    I used to feel like Ubuntu was a good option for this, but it no longer is: too often it pushes undesirable changes that need manual tweaking to fix after release upgrades. Debian Stable is generally good for low-maintenance use but doesn’t keep up as well with newer hardware or with updates to video drivers and Mesa, which makes it suboptimal for typical gaming use. Debian Testing can be prone to breaking things in updates (in my experience, worse than Arch does).

    I saw another comment recommend Rocky/RHEL, but note that their kernel doesn’t support btrfs. Since you mentioned a root snapshot, I expect you’re probably using it.


  • For what it’s worth, the “Download & transfer via USB” feature was applying DRM locked to the key of the specific Kindle device you selected, giving you a file that’s incompatible with other devices even if they’re Kindles linked to the same Amazon account. For many publishers it also gives files with drastically lower image quality than the Kindle app: about one-fourth to one-third the file size. For a couple of examples, a 368MB KFX manga volume has a 125MB AZW3 file, and an 8.0MB KFX light novel has a 2.2MB AZW3 file. Those smaller AZW3 files are also similar in size to DRMed EPUB files of the same books from other markets like Kobo and Google Play, so I expect it’s a deliberate choice to limit the quality of formats that are more trivial to strip DRM from.

    The best way I’ve found to make personal backups of owned Kindle content is to use a rooted Android device to download everything through the Kindle app, copy the KFX files to a computer, extract the key in a root shell, and then use DeDRM tools on those files with that key.

    A quick and dirty shell command I’ve used for that purpose is egrep -ao 'dsn[0-9a-f]{32}' /data/data/com.amazon.kindle/databases/map_data_storage.db. The key is the 32 hex characters after the dsn prefix.
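
    Roughly, the whole extraction step looks like this from a computer with adb. The key database path is the one above; the book directory is from memory and may differ between app versions, so treat it as an assumption:

    # run as root on the device via adb; the book directory is an assumption
    adb shell su -c "egrep -ao 'dsn[0-9a-f]{32}' /data/data/com.amazon.kindle/databases/map_data_storage.db"
    # stage the downloaded books somewhere adb pull can reach, then pull them
    adb shell su -c 'cp -r /data/data/com.amazon.kindle/files /sdcard/kindle-kfx'
    adb pull /sdcard/kindle-kfx ./kindle-kfx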

    Having a rooted Android device in the first place is the biggest hurdle for being able to do that. This new jailbreak should make it possible to do something similar with e-ink kindles instead.


  • I’ve been using single-disk btrfs for my rootfs on every system for almost a decade. It’s great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (and maybe others) like separate filesystems without them actually being different partitions.
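
    As a rough sketch of what setting up that kind of layout looks like (the @-prefixed names are just a common convention, and the device path is illustrative):

    # create subvolumes on a mounted btrfs filesystem
    mount /dev/sda2 /mnt
    btrfs subvolume create /mnt/@
    btrfs subvolume create /mnt/@home
    umount /mnt
    # then mount each subvolume as if it were its own filesystem, e.g. in /etc/fstab:
    # /dev/sda2  /      btrfs  subvol=@      0 0
    # /dev/sda2  /home  btrfs  subvol=@home  0 0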

    I had used it for my NAS array too, with btrfs raid1 (on top of LUKS), but migrated that over to ZFS a couple of years ago because I wanted more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed stuck in a purgatory of never being fixed, so I moved to raidz1 instead.
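
    Creating a pool like that is basically a one-liner; a sketch with illustrative device names:

    # a single raidz1 vdev over five equal-size disks (device names illustrative)
    zpool create tank raidz1 \
      /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 /dev/disk/by-id/disk3 \
      /dev/disk/by-id/disk4 /dev/disk/by-id/disk5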

    One thing I miss is heterogeneous arrays: with btrfs I could gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it would use all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn’t do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive was the same size, and I had to feel confident it would be enough space to last me a long time, since growing it after the fact is a burden.


  • The Y axis here is not an absolute international political compass. It measures which political party each person favors and, judging by that country’s local standards, categorizes that party as either left or right.

    A rising number in the US chart means a larger number of people prefer Democrats over Republicans. It doesn’t mean that people’s stances are necessarily moving further left. Similarly, it’s no coincidence that the inflection point where the UK numbers rise sharply corresponds to Brexit: the party seen as responsible for the unpopular change lost a lot of support, but that doesn’t mean the population moved drastically more progressive in such a short time.


  • A standard called SystemReady exists. For the systems that actually follow it, you can have a single ARM OS installation image that you copy to a USB drive, boot through UEFI, and run with no problems on an Ampere server, an NXP device, an Nvidia Jetson system, and more.

    Unfortunately it’s a pretty new standard (it only dates to 2020), and Qualcomm in particular is a major holdout that hasn’t been using it.

    Just like on x86, you still need the OS to have drivers for the particular device you’re installing on, but this standard at least lets you have a unified image, and many ARM vendors have been getting better about upstreaming open-source drivers into the Linux kernel.


  • To the contrary, I would expect the sample to skew toward people who have a heavily customized X session and strong opinions about window managers, while drastically underrepresenting average GNOME users who stick with the default Wayland session. Someone who likes their custom setup can still be waiting for a Wayland equivalent, while casual Ubuntu users have been defaulted to Wayland on new non-Nvidia installs since early 2021.


  • A ground-up overhaul of the copyright system would make things so much worse, not better, considering the current climate of power. In the US, for example, the MPA, the RIAA, the Entertainment Software Association, the Association of American Publishers, and others wouldn’t want public libraries or the used market to exist at all; they would push for making every single transfer of “ownership” of any media involve a payment to the rights holder. Lawmakers are far more likely to accommodate those groups’ desires than the public good.

    The worst parts of the current copyright system are the most recent. Both the DMCA and the extension of the US copyright term to 95 years took effect in 1998, and the early 2000s saw many other countries passing laws to bring their copyright systems closer to the US’s in various ways, such as the WIPO Copyright Treaty, which took effect in 2002, and the EU’s 2006 Copyright Directive. Just about the only positive news we’ve seen in US copyright law since then is in temporary exemptions to the DMCA’s anti-circumvention rules (Section 1201), which are revisited every three years. Copyright law was far less hostile to consumers and the public before the 90s than it is now, and up until 1976 it used to be expected that most media someone consumed would enter the public domain within their lifetime.

    The digital era makes market relevance more ephemeral than ever, and yet the laws written for the digital era moved copyright in the opposite direction. Movie studios simultaneously judge whether a film succeeded almost exclusively by its first week of ticket sales and claim that depriving the public domain for 95 years is necessary. Nothing should be able to justify more than 20 years of copyright. Media formats don’t even last as long as copyright: CDs and DVDs rot, game cartridges die, servers shut down, and even books printed on today’s low-quality paper will fall apart.

    > Some of it is absurd to me, like the way something can be online but geographically restricted.

    This is a consequence of contract terms more so than copyright. One issue in copyright law that it does connect to, though, is the fact that whether the rightsholder keeps a work reasonably available on the market has no impact on whether the work retains copyright protections. If copyright law did hypothetically include that limitation, providers would become far more likely to make sure all content is available in all countries, though even then things could still vary in terms of which content is on which platform.


  • For years I’ve been using KeepassXC on desktop and Keepass2Android on mobile. Rather than sync the kdbx file between my devices, I have each device access it over the network, via SFTP, SMB, or NFS; regardless of the protocol, I need to connect to my home’s VPN to access it when away from home, since I don’t directly expose those services to the outside world.
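
    On the desktop side that can be as simple as mounting the remote directory and pointing KeepassXC at the file. An sshfs sketch, with placeholder host and paths:

    # mount the directory holding the kdbx over SFTP (host/paths are placeholders)
    sshfs user@home-server:/srv/keepass ~/mnt/keepass
    # then open ~/mnt/keepass/passwords.kdbx in KeepassXC like any local file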

    I used to also keep a second copy of the website-tied passwords in Firefox Sync, but recently tried migrating that to Proton Pass because I thought the PIN feature might help, then ultimately decided to move away from that too and start using the KeepassXC-Browser plugin instead. I considered Bitwarden as well but haven’t tried it out yet; I was somewhat deterred by seeing people say its UI seems very outdated.


  • There’s only one case I’ve found where Wi-Fi use seems acceptable in IoT: ESPHome. It’s open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over the LAN without phoning home to some remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.

    I still wouldn’t call Wi-Fi use ideal even there; mesh can help in larger homes, and Z-Wave/Zigbee radios tend to be more power efficient, though the ESP32 isn’t exactly suited for a battery-powered device that’s expected to run 24/7 regardless.


  • Something I’ve noticed that is somewhat related but tangential to your problem: the result I’ve always gotten from using compose files is that containers and volumes get assigned names that share a common prefix by default. I don’t use docker and instead prefer podman, but I would expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

    volumes:
      nextcloud:
      db:
    
    services:
      db:
        image: docker.io/mariadb:10.6
        ...
      app:
        image: docker.io/nextcloud
        ...
    

    I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither of those services overrides this behavior by specifying a container_name. I believe this prefix comes from the file-level name: key if there is one, and from the parent directory’s name otherwise.
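
    You can see the prefixing directly after bringing the stack up; roughly (output reconstructed from memory):

    podman-compose -f nextcloud/compose.yml up -d
    podman volume ls
    # DRIVER      VOLUME NAME
    # local       nextcloud_db
    # local       nextcloud_nextcloud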

    The reasons I adjust my own compose files away from the image maintainer’s recommendation include accommodating the differences between podman and docker, avoiding conflicts between exported listen ports, changing which host filesystem paths I mount into the container, and my own preferences. The only conflict I’ve had with other containers there is the exported port: zigbee2mqtt, nextcloud, and freshrss all suggest using port 8080, so I had to change at least two of them in order to run all three.


  • I have configured custom Android kernel builds to enable more USB drivers, enable module support, and tweak various other things. For one tangible example of the result: I could plug in a USB Wi-Fi adapter and have the device connect to another Wi-Fi network with the internal NIC while simultaneously broadcasting my own AP through the USB adapter. On an Android device of all things. I have also adjusted kernel builds for SBCs (like Pi clones) to get things working at all.
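
    The config side of that is mostly just flipping options before building. A rough sketch against a typical arm64 Android kernel tree; the defconfig name and the driver symbol are examples rather than anything device-specific:

    # cross-compile setup; defconfig name and config symbols are examples
    export ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
    make vendor_defconfig
    ./scripts/config --enable CONFIG_MODULES --enable CONFIG_RTL8XXXU  # example USB Wi-Fi driver
    make olddefconfig
    make -j"$(nproc)"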

    I have never seen any reason to configure a custom kernel for my own desktop/laptop systems. Default builds for the distros I’ve used have been fine for me; if I’m ever dissatisfied with anything, it’s the version number rather than the defconfig. The RHEL/Rocky kernel omits a few features I want (like btrfs), but I’d rather stick to other distros on personal systems than tweak a distro that isn’t even meant for tweaking.


  • I’ve long known about it. I don’t seriously use it, but I would if only my Wi-Fi router were fully supported. It’s an Asus one (that I got for free from T-Mobile a decade ago), so I installed Asuswrt-Merlin on it instead.

    Following the recommendation of homelab communities, I got into OPNsense (a BSD-based firewall system for x86 hardware only) last year, still keeping my Wi-Fi router as a dedicated AP. In hindsight I somewhat regret that choice and probably would’ve been better off buying a new OpenWrt-compatible router and using it to handle firewall/routing/AP all in one device, instead of wasting the power draw of another separate N100 system. I like having WireGuard and vnstat in my router now, which Merlin didn’t offer, but I know OpenWrt has those too, and I don’t have any other needs that warrant a higher-power router.


  • I’ve been using it since it felt usable enough to me in GNOME, around 2015-ish, give or take a year. GNOME leading on Wayland support is a big part of why I switched to it from Xfce back then. Nowadays KDE and others have plenty good Wayland support too (better in some ways, like allowing server-side decorations and global shortcuts), but I just haven’t felt like properly experimenting to see what I like.

    I’ve always avoided Nvidia on my desktops. I’ve stuck with either radeon or intel and never had any exceptionally big issues with them on Wayland, though other things, like hardware-accelerated video decoding, have had a history of being spotty on some drivers/GPUs.