You’re right. Unfortunately, open source has proven time and time again to be unsustainable and to burn maintainers out.
That’s a good reason for people to take the money they would have spent on a proprietary solution and instead donate it to an open source project. For me it’s not always about the cost, but what I get out of it. I’d rather the money go to the community and better it.
It’s only everything from other instances and communities that the current instance subscribes to. It doesn’t subscribe to the full pipe of everything.
What’s likely happening is people in aggregate generally subscribe to the most popular communities and those communities have the most upvoted posts.
I use Jellyfin mostly for music, and it struggles with metadata. For example, if a song has two artists and I edit the tags to correct it, it won’t update correctly and I’ll end up with a single artist named “Artist A; Artist B”.
Have you tried a packet capture with Wireshark or tcpdump to see what it’s doing? It might give better clues than a general error message.
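For instance, something like this, assuming the service talks over port 443 (adjust the interface and port for your setup):

```
# capture traffic to/from port 443 on all interfaces, save it for Wireshark
sudo tcpdump -i any -w capture.pcap port 443
```

Then open capture.pcap in Wireshark and follow the TCP stream to see where the conversation breaks down.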
I’m working on adding ActivityPub to my Hugo blog right now. I already support RSS, but I figured AP support means people can follow it from their Mastodon or even Lemmy feed, making it easy to keep up. It also enables commenting (assuming the comments don’t get taken over by spammers).
Which stops malicious usage, but doesn’t stop cases where web pages overuse pushState instead of replaceState as users move around. I’ve seen maps that would add a history entry every time the user panned the map.
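A minimal sketch of the difference, with a hypothetical map handler (the real fix depends on the app):

```typescript
// Called on every map pan. pushState adds a new history entry each time,
// so the Back button has to walk through every pan. replaceState updates
// the current entry in place and keeps history clean.
function onMapMove(lat: number, lng: number): void {
  const url = `?lat=${lat.toFixed(5)}&lng=${lng.toFixed(5)}`;
  // history.pushState(null, "", url); // bad: one history entry per pan
  history.replaceState(null, "", url); // good: overwrite the current entry
}
```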
I’m on Wayland and KDE/Plasma. It worked on GNOME, but sadly not on Plasma.
I tried self-hosting Pixelfed but gave up because I couldn’t get it working. I’m used to Docker containers that just start up by themselves, but the guide didn’t work for me. Maybe it’s time to try again.
One place it would be useful is if you’re worried about somebody breaking into your home and stealing your computer. Don’t store the key on the home computer; instead, store it on a cloud server. The home computer connects to the cloud server, authenticates itself with some secret, and if the cloud server authorizes it, the server returns the decryption key.
Then if your computer gets stolen or seized, it’ll connect from a different IP, and the cloud server can deny access or even wipe the encryption key.
This doesn’t protect against all risks, but it has its uses.
Example: https://www.ogselfhosting.com/index.php/2023/12/25/tang-clevis-for-a-luks-encrypted-debian-server
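Based on the linked Tang/Clevis approach, the binding step looks something like this (the device and URL are placeholders; unlocking at boot also needs initramfs support, which guides like that one walk through):

```
# bind an existing LUKS volume to a Tang server so it can auto-unlock
sudo clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.example.com"}'

# confirm the binding
sudo clevis luks list -d /dev/sda2
```

Steal the disk without network access to the Tang server and the volume stays locked; you keep the regular passphrase as a fallback.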
I just saw this one mention endurain, a fitness tracker. I’ve been looking for something to self host data about my health, fitness, etc. Has anyone tried this or anything else in the self-hosted or open source fitness space?
Monthly active users: a metric counting the users who were active at least once in the past month.
If you are port forwarding, I recommend not exposing it on the default port of 25565 and instead exposing it on a random high port. Then, assuming you have a domain name, create an SRV record that points at your hostname and port (see the sketch below). This will cut down on the drive-by scanners that sweep common ports, but won’t totally eliminate them. If you use the SRV record, your friends won’t even notice there’s a different port.
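As an illustration, with hypothetical names (mc.example.com needs its own A/AAAA record pointing at your IP):

```
; Minecraft clients look up _minecraft._tcp on whatever name players type in
; fields after SRV: priority, weight, port, target host
_minecraft._tcp.example.com. 3600 IN SRV 0 5 25599 mc.example.com.
```

Your friends just enter example.com in the client, and it resolves the SRV record to mc.example.com:25599 on its own.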
There are two main ways of doing geo-based load balancing:

- DNS-based (GeoDNS): the authoritative DNS server returns an IP close to whoever asked, which in practice means close to your DNS resolver (illustrated below).
- Anycast: the same IP is announced from multiple locations, and BGP routing delivers each client to the nearest one.
Of course, this doesn’t matter for companies that only have one data center.
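You can poke at the DNS-based kind with dig’s EDNS client-subnet option (assuming a dig build with +subnet and a resolver that honors it, like 8.8.8.8; the subnets here are documentation ranges):

```
# ask the same resolver for answers as if you were in two different networks
dig @8.8.8.8 +short example.com +subnet=203.0.113.0/24
dig @8.8.8.8 +short example.com +subnet=198.51.100.0/24
```

For a domain behind GeoDNS, the two queries will often return different IPs; for a single-data-center domain they’ll match.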
Sorry, what do you mean route it directly? Maybe I didn’t clarify well enough.
My DNS is routed over the VPN, but Internet traffic is routed directly. The problem is that the load balancing is based on where the DNS server is. Take Google: even though my traffic egresses directly to the Internet, bypassing the VPN, it still goes to a Google DC near my home, because that’s where my DNS server sits. Not all websites do this, so it’s not always an issue.
Yes, but if you hit a company doing DNS-based load balancing, DNS is going to return an IP that’s near your DNS server, which may not be near your device. That’s going to add latency.
I have WireGuard, and I forward DNS and my internal traffic from my phone over the VPN to my Pi-hole at home (config sketch below). All other traffic goes directly over the Internet, not the VPN. So only DNS encounters higher latency.
However, because a lot of companies do DNS-based geo load balancing, even when I’m on the East Coast all my traffic gets sent to the West Coast, since that’s where my DNS server is located. That has the biggest impact on latency.
It’s tolerable on the same continent, but once I start getting into other continents then it gets a bit slow.
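For reference, the phone-side wg-quick config for that kind of split tunnel looks roughly like this (all addresses and names here are made up):

```
[Interface]
PrivateKey = <phone-private-key>
Address = 10.0.0.3/32
# the Pi-hole at home answers all DNS queries
DNS = 10.0.0.2

[Peer]
PublicKey = <server-public-key>
Endpoint = home.example.com:51820
# only the VPN subnet and the home LAN ride the tunnel; everything else
# egresses directly (AllowedIPs = 0.0.0.0/0 would send it all through)
AllowedIPs = 10.0.0.0/24, 192.168.1.0/24
```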
I think this is a problem with applications that have a privacy-focused user base. It becomes very black and white, where any information being sent anywhere is bad. I respect that some people hold that view, and more power to them, but being pragmatic about this is important. I personally disabled this flag, and I recognize it edges into risky territory, but I also think the Mozilla CTO is somewhat correct: if the choice is between a browser that blocks everything and one that is privacy-preserving (where users can still opt for the former), businesses are more likely to adopt the privacy-preserving standards, and that benefits the vast majority of users.
Privacy is a scale. I’m all on board with Firefox: I block tons of trackers and ads, and I even use NoScript and suffer the ramifications for ideological reasons. But I also enable telemetry in Firefox, because I trust that usage metrics will benefit the product.
Oh, that would be nice. I’d use that to go into the database and fix all my broken music metadata, which I can’t seem to fix any other way.