Whether you’re really passionate about RPC, MQTT, Matrix or Wayland, tell us more about the protocols or open standards you have strong opinions on!
Unified Push.
Unbelievable that we have to rely on Google and co. for something as essential as push messages! Even among the open source community, adoption is surprisingly limited.
Nobody knows about UnifiedPush. Last time I checked, their Linux D-Bus distributor also wasn’t ready. There has to be a unified push to get it adopted.
Fuck Unified Push. Just use the Web Push standard. https://www.rfc-editor.org/rfc/rfc8030
It’s what browsers use for push messages, it’s already widely supported, it’s compatible with existing push infrastructure and users, and it’s end-to-end encrypted. IDK why Unified Push felt the need to create a new protocol when a perfectly good one already existed.
Although there is no “client side” spec, so the Unified Push client side could be useful. But they should throw away their custom backend protocol and just use Web Push.
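For anyone curious what “just use Web Push” looks like from the application-server side: delivery is basically one HTTP POST to the subscription’s push resource, per RFC 8030. A rough Python sketch; the endpoint URL is a placeholder, the payload is assumed to already be encrypted per RFC 8291, and a real sender would also add a VAPID Authorization header (RFC 8292):

```python
# Rough sketch of the RFC 8030 delivery step only. The push_resource URL is a
# placeholder (in reality it comes from the browser's PushSubscription), and
# the payload is assumed to already be encrypted per RFC 8291.
import requests

push_resource = "https://push.example.net/push/some-subscription-token"  # placeholder
encrypted_payload = b"..."  # ciphertext produced per RFC 8291, not plaintext

resp = requests.post(
    push_resource,
    data=encrypted_payload,
    headers={
        "TTL": "60",                      # seconds the push service may queue the message
        "Content-Encoding": "aes128gcm",  # RFC 8291 content coding
    },
    timeout=10,
)
print(resp.status_code)  # 201 Created means the push service accepted the message
```

The payload encryption and VAPID signing are the fiddly parts, which is where a library earns its keep.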
I wish my employer just accepted my push to use OAuth…
Do Not Track
Such a simple solution for the cookie banner issue. But it prevented websites from tricking users into allowing them to gather their data, so it had to go.
Nobody was going to honor that. That’s just giving them an extra bit of data to track you with.
It could be enforced by law
Globally?
Those cookie banners were introduced because of an EU law and are seen all over the world
Yes, seen by people visiting EU websites or companies with an EU presence. And it’s enforceable because whether or not a site sets a cookie is easily verifiable by the person on the other end, unlike whether a server quietly honors Do Not Track.
Most of those cookie banners are not even needed; you only need them for tracking cookies, not login and session cookies. But of course everyone decided it is just easier to nag all the users with a big splash screen.
A lot of them are not even doing it right: you are not allowed to hint to the user that “accept all” is the “correct” choice by having it in a different color than the others. And saying no to all should be as easy as accepting all; often it isn’t.
Basically, cookie banners are usually not needed, and when they are, they are most often incorrectly designed (not by accident).
But of course everyone decided it is just easier to nag all the users with a big splash screen.
Nope, the thing is, you’ll very rarely find a website that only uses technically necessary session/login cookies. The reason every fucking website, yes, even the one from the barber shop around the corner, has a humongous cookie banner is that every fucking website helps google and other corporations to track users across the whole internet for no reason.
LaTeX. As someone in academia, I absolutely love it. It has some issues like package incompatibility, but it’s far far better than anything else I’ve used. It’s basically ubiquitous in academia, and I wish it were the case everywhere else as well.
It’s not a standard, but it’s still an interesting piece of software, so I’ll post this here:
Joking aside, I love and hate it. Its paradigm is almost like using the C preprocessor to build a really awkward Turing-machine. TeX/LaTeX does a great job of what it was intended to do; it applies high quality typesetting rules to complex material and produces really good results. I love the output I can get with it and I will be eternally grateful that Donald Knuth decided to tackle this problem. And despite my complaints below, that gratitude is genuine. Being able to redefine something in a context-sensitive way, or to be able to rely on semantics to produce spacing appropriate to an operator vs a variable etc; these are beautiful things.
The problem is, at least once a day I’m left wishing I could just write a callable routine in a normal language with variables, types, arrays, loops and so on. You can implement all those things in TeX, but TeX doesn’t have a normal notion of strings, numbers or arrays, so it is rare that you can do a complicated thing in an efficient way, with readable code. So as a language, TeX frequently leads to cargo-cult programming. I’m not aware that you can invoke reflection after a page is output, to see what decisions on glue and breaks were made; but at the same time you can’t conditionally include something that is dependent on those decisions, since the decision will depend on what is included. This leads to some horrible conditionals combined with compiling twice, and the results are not always deterministic. Sometimes I find it’s quicker to work around things like that by writing an external program that modifies the resulting PDF output, but that seems perverse.
At the same time, there’s really nothing else out there that comes close to doing what LaTeX does, and if you have the patience, the quality of documents it can produce is essentially unbounded. The legacy of encodings, category codes, parameter limits, stack limits etc. just makes it very hard for package writers, and consumes a great deal of time for a lot of people. But maybe I am picky about things that a saner person would just live with.
A lot of very talented people have written a lot of very complex packages to save the user from these esoteric details, and as a result LaTeX is alive and well, and 99% of the time you can get the results you want, using off-the-shelf parts. The remaining 1% of the time, getting the result you want requires a level of expertise that is unreasonable to expect of users. (For comparison, I wrote an optimising C compiler and generally found it far easier to make that work as expected than some of the things I’ve tried, and failed, to do properly in LaTeX. I now have a rule: if getting some weird alignment to work takes me more than an hour, I just fake it with a PostScript file, an image, or write an external program to generate it longhand, in order to save my sanity.)
I think (and certainly hope) that LaTeX is here to stay, in much the same way that C and assembly language are. As time moves forward I think we’ll see more and more abstractions and fewer people dealing with the internals. But I will be forever grateful to the people who are experts in TeX, and who keep providing us with incredible packages.
For me it’s more pleasant than editing formulae in LibreOffice, but it still took a lot of time.
What about Typst?
The Typst compiler is open source. It is the open core of the web app and we will develop and maintain it in cooperation with the community
Try Typst now!
Create a free account to join the public beta.
Beta software marketing with “free accounts” and an open core compiler for a (probably) future paid web service tells me all I need to know.
Even though LaTeX has issues, not being an online service is not one of them.
They host a proprietary service that does all the stuff, but the compiler and spec are completely FOSS. So you can create your own implementations, which is not hard.
I don’t think they will close-source the compiler. And that’s basically everything that’s needed?
I have 0 problems with people creating a fancy proprietary implementation to get people hooked. I will never use an online editor, but why care?
Or you could also just make an open source wrapper for LaTeX and call it a day.
Nothing needs to be closed source to get people to use it.
And it isn’t :D The compiler produces PDFs, which can be read with anything. The spec is open, so you can write the code with any editor.
It just needs integration; I’ll see if I can add the syntax highlighting to Kate.
Learning LaTeX and working around its quirks seems like a much better time investment than sidegrading to something that lives on premises set by a proprietary commercial project. If someone saw LaTeX and said “I want to make some version of this that is better”, without ulterior motives, they would probably just work on improving LaTeX (which a whole lot of people do).
Fancy does not mean better, and often is in many ways worse than plain old boring.
You know Overleaf is a thing right?
Many projects need to be rewritten from scratch, I think. But I also think an easier markup language for LaTeX could be possible, keeping all the nice templates etc.
From the LaTeX project:
The experience gained from the production and maintenance of LaTeX2e (the version you have been using for many years) had a major influence on our goals for future development and on new code which is now integrated into LaTeX.
A while ago we made the decision to drop the idea of a separate LaTeX3 format that would exist in parallel to LaTeX2e, but instead decided to gradually modernize LaTeX to keep it competitive in today’s world while maintaining compatibility methods for older documents.
I think this decision was pretty much a good one.
Overleaf does not modernize LaTeX in meaningful ways. It only adds cloud functionality and a glossy appearance that you can get in dedicated editors anyway.
No, but Overleaf is just a proprietary fancy editor, like the Typst one. Meanwhile, Typst is just as usable for building editors, too.
I don’t see any arguments against Typst, really. I use Markdown all the time and find it best, but lacking. Then there’s LaTeX, which I honestly don’t want to learn, as it must be a pain to write.
Now in Typst you can write academic papers etc. just as well. All you need is free software, and it has good backing and modern tooling (Rust, Cargo), so it runs everywhere. It’s pretty cool!
It’s basically ubiquitous in academia
You mean STEM. In the humanities we do just fine without, tyvm.
IDK dude. My sister is doing a master’s in Philosophy. She uses LaTeX, and so do most others in her batch.
OK, well, to be fair, philosophers will also fuck shit up just to make a point. So I’m not sure how fair that is.
I wrote my master’s in LaTeX, and while I appreciate the structure and the fact I could use vim, it was so quirky. Having to spend half an hour to fix a non-obvious compile error, more than once, was a big distraction. I’m sure it gets better when you use it more, but I don’t think I have ever used it since. I’m not in academia and I don’t need to solve compile problems when creating an invoice or writing a letter to local government.
I personally feel like the standard should be an extended Markdown that allows LaTeX code.
I honestly just use it for my resume with a template I found, so my knowledge is extremely basic, but I really do love the concept that I can “compile” and actually see the source of my document’s formatting.
It really needs to significantly improve its live update capability. Typst is more capable in that regard.
Is it practical outside of academia? I heard the learning curve is kinda big
Nope and yep. It’s an incredible tool, but it’s got a vim-sized learning curve to really leverage it, plus other significant drawbacks. Still my beloved one-and-only when I can get away with it, but it’s a bit of a masochistic acquired taste for sure.
Template tweaking, as I imagine academia heavily relies on, is really the closest to practical it gets. You do still get beautiful results, it’s just hard to express yourself arbitrarily without really committing to the bit.
It’s got a vim-sized learning curve to really leverage it
As a regular vim user, I have to say: vim makes sense after you put some effort into learning it. I can’t say the same about LaTeX.
Outside of academia, would you say it still provides significant upside over markdown?
Markdown and LaTeX are meant for entirely different purposes. It’s somewhat analogous to HTML vs PDF. While it’s possible to write books with Markdown, it’s a vastly inferior solution compared to LaTeX or Typst (for fixed-format docs like books).
OpenTelemetry. In particular, I wish more protocols had traceparent propagation support and more software could send spans and traces to an OTLP endpoint, to construct a full picture of everything that is going on in a distributed system.
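For context, the two pieces being wished for here are fairly small. A rough Python sketch, assuming the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-http packages; the endpoint is just the default local OTLP/HTTP collector address:

```python
# Rough sketch: export spans to an OTLP endpoint and inject a W3C traceparent
# header into an outgoing request so the next hop can continue the trace.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.propagate import inject

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("example")

with tracer.start_as_current_span("handle-request"):
    headers = {}
    inject(headers)  # adds the "traceparent" (and possibly "tracestate") header
    print(headers)   # e.g. {'traceparent': '00-<trace-id>-<span-id>-01'}
    # attach `headers` to whatever outbound call you make so the next hop continues the trace
```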
Dconf
TeX. I was able to use it during school for some beautiful typesetting and formatting, but nobody I work with wants to use anything other than plain text or, unfortunately more commonly, binary WYSIWYG editor formats. It’s frustrating and ugly.
finger cyclohexane@lemmy.ml
Peer-to-peer. I would be happier thinking that every time I open some application, I’m helping it, like I2P.
🤨
Ever heard of IPFS? I really hope that will take off some time.
Unfortunately the reality of IPFS is that despite its huge funding it was poorly designed from the start and still to this day has much slower loading times than my i2pd instance (despite I2P transmitting messages through multiple encrypted proxies), to the point where the company working on the Rust implementation determined it was so bad they had to scrap the whole thing to make something that actually worked. Not to mention that I managed to have my server taken over by some kind of malware by downloading a particular piece of content.
Thanks, that was an interesting read! I always felt IPFS wasn’t ready yet, but the value it tries to provide of being a file system, I’ve found no real alternative to. Very good to read that iroh is willing to look beyond the IPFS spec to provide its values with better performance. I hope it works out.
Others have said already, but XMPP and RSS. Also, nobody mentioned NNTP yet.
I wish everything was accessible by NNTP and we had better NNTP clients. NNTP is like RSS but for forums (so, Lemmy, Reddit, or anything where you could reply to posts). Download for offline reading, read in your client, define your own formatting, sorting, filtering, your client, your rules.
If Lemmy was accessible via NNTP, I could just download all posts and comments I’m interested in and reply to them without any connection, and my replies would get synced with the server later when I connect to WiFi or something.
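To make the “your client, your rules” point concrete, here is roughly what the fetch-for-offline step looks like with Python’s nntplib (deprecated and dropped from the newest stdlib releases, so treat it as illustrative); the server and group names are placeholders:

```python
# Rough sketch: pull recent article overviews and one full article from a
# newsgroup so everything can be read offline.
import nntplib

with nntplib.NNTP("news.example.org") as srv:
    resp, count, first, last, name = srv.group("example.discussion")
    # grab the overview (subject, author, date, ...) of the last 50 articles
    resp, overviews = srv.over((max(first, last - 50), last))
    for art_num, fields in overviews:
        print(art_num, fields.get("subject"), fields.get("from"))
    # download a full article for offline reading
    resp, info = srv.article(last)
    raw_article = b"\n".join(info.lines)
```

Replies would be the same thing in reverse: compose offline, then post the queued articles once you’re back on WiFi.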
Back in the day I was a big Usenet fan. What’s the modern solution to the spam issue? At the time, folk wisdom was that the demise was being caused by spam, and that due to the nature of the protocol it was somewhat unsolvable.
I also wonder to what extent ActivityPub is the barrier to offline use. For Reddit, the Slide client had offline reading and, IIRC, posting. I have been disappointed it isn’t available for Lemmy. My guess has been it simply isn’t a priority for the devs. Maybe eventually we will get it.
I think it would be cool if RSS got put into Lemmy clients. For example, you could make a unified inbox for all accounts by automatically fetching the private RSS feed for incoming messages for every logged-in account. I have manually set this up a couple of times, but it’s tedious and completely lacks smoothness when it comes to clicking a link, replying, etc. But a client could add a little finesse to fix that.
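A rough sketch of that unified-inbox idea using the feedparser package; the feed URLs (and whatever per-account token the instance embeds in them) are placeholders:

```python
# Rough sketch: merge several private inbox feeds into one list, newest first.
import feedparser

inbox_feeds = [
    "https://lemmy.example/feeds/inbox/alice-token.xml",  # placeholder private feed
    "https://other.example/feeds/inbox/bob-token.xml",    # placeholder private feed
]

items = []
for url in inbox_feeds:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        items.append((entry.get("published_parsed"), entry.get("title"), entry.get("link")))

# one merged inbox, newest first
for published, title, link in sorted(items, key=lambda t: t[0] or (), reverse=True):
    print(title, "->", link)
```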
Probably it would be better to edit my comment, but I’ll go with a reply to myself.
To all fans of RSS: there’s this service called FeedBase that is essentially an RSS-to-NNTP gateway. You add your RSS feed to it and it becomes a newsgroup on their server, and you can subscribe to it using any NNTP client. New articles appear as new posts in that newsgroup and you can post your own replies to them. So, you get RSS but with discussions or comments.
If you try this, let me know what RSS feeds you’re reading, so we could read the articles together and have some discussion there!
P.S. This comment is not an ad. I genuinely love feedbase and use that myself.
Holy cow, that’s neat as hell! Thanks for sharing!
Many years ago I found SILC very interesting.
Idk I just wanna finger my server
I’d love to see more adoption of… I2C!
Bazillions of motherboards and SBCs support I2C and many have the ability to use it via GPIO pins or even have connectors just for I2C devices (e.g. QWIIC). Yet there’s very little in the way of things you can buy and plug in. It feels like such a waste!
There’s all sorts of neat and useful things we could plug in and make use of if only there were software to use it. For example, cheap color sensors, nifty gesture sensors, time-of-flight sensors, light sensors, and more.
There’s lm-sensors, which knows I2C and can magically understand zillions of temperature sensors and PWM things (e.g. fan control). We need something like that for all those cool devices and chips that speak I2C.
If you have an unused VGA port, you can use the DDC pins for I2C. Be sure to add ESD protection if you do this. An I2C isolator would be even better.
I2C is really not meant to be used over cables. It has a very limited common mode input voltage range and it can’t handle much capacitance on the bus.
Except that in the case of VGA (and DVI, HDMI, and DisplayPort) the i2c interface is intended for use over the cable. All of those ports have a pair of i2c pins and corresponding wires in their cables. The i2c interface is used for DDC/EDID which is how the computer can identify the capabilities and specifications of the attached display. DDC even provides some rarely-used control functionality. Probably the most useful of which is being able to control the brightness of the display from software. I use the ddcci module on Linux and it lets me control my desktop monitor brightness the same way a laptop would, which is great. I have no idea why this isn’t widely used.
Edit:
This i2c interface is widely used to control the lighting on modern graphics cards that have RGB lighting. We’ve spent a lot of time reverse engineering these chips and their i2c protocols for OpenRGB. GPU chips usually have more i2c buses than the cards have display connectors, so the RGB chip is wired to one of the unused buses. I think AMD GPUs tend to have 8 separate i2c buses but most cards only use 4 or 5 of them for display connectors. There is also an i2c interface present on RAM slots normally used for reading the SPD chip that stores RAM module specifications, timings, etc. This interface is also used for RAM modules with controllable RGB lighting.
I2C is a bit goofy though. As a byproduct of being an undiscoverable bus you basically just have to poke random addresses and guess what you’re talking to. The fact lmsensors i2c detection works as well as it does is a miracle. (Plus you get the neat issue where even the act of scanning the bus can accidentally reconfigure endpoints)
Yeah, the lack of proper discoverability on I2C truly sucks. You have to just poke random addresses and hope for the best to see if an I2C device exists on the bus. It’s a great standard, but I wish it would get updated with some sort of plug-and-play autodetection feature. A standardized device PID/VID system like USB and PCI have would be acceptable, or a standardized register that returns a part string. Anything other than blindly poking registers and hoping you’re not accidentally overvolting the CPU because the register on your expected device happens to overlap with the “overvolt the CPU” register at the same address on a different device.
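For the curious, the “blindly poking” looks about like this from userspace. A cautious sketch using the smbus2 package; the bus number is a placeholder, and as said above even a bare read can upset some devices, so don’t point this at hardware you care about:

```python
# Cautious sketch of i2cdetect-style probing: try a bare read at every valid
# 7-bit address and record which ones acknowledge.
from smbus2 import SMBus

BUS = 1  # e.g. /dev/i2c-1 (placeholder)

with SMBus(BUS) as bus:
    found = []
    for addr in range(0x03, 0x78):   # valid 7-bit address range
        try:
            bus.read_byte(addr)      # devices that ACK the read are "present"
            found.append(addr)
        except OSError:
            pass                     # nothing ACKed at this address

print("devices:", [hex(a) for a in found])
```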
XMPP
Call me old fashioned, but I still call it Jabber.
🙂
Why not matrix?
You’re going off-topic from the OP question :-) But to answer your new question: I do not trust Matrix enough when it comes to privacy. I know that this link is old, but still. https://disroot.org/en/blog/matrix-closure
Then again, I do not trust Signal that much either, but sometimes compromises need to be made to get things done. With XMPP the end user can host their own server if they wish to, without metadata going to a centralized point. And video calls via XMPP and Conversations were a pleasure to use when I used them during the Covid-19 pandemic.
I came here to say matrix but I’m not gonna lie. If XMPP had gotten the traction it deserved we wouldn’t need matrix.
I’m really into CloudEvents because I love event-driven systems, and since events can come from, or be consumed by, so many different services, having a robust spec is super duper useful.
So what problem is this solving? What are some event-driven systems that need to interoperate? Seems like even if you have a common encapsulation method, you still need code to understand and deal with the message body. Just seems like an extra layer around a JSON blob.
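For reference, a structured-mode CloudEvent is essentially a JSON envelope with a handful of required attributes (specversion, id, type, source) around the payload you already had; a minimal Python sketch with made-up values:

```python
# Minimal sketch of a structured-mode CloudEvent; the type, source and data
# values are made up for illustration.
import json
import uuid
from datetime import datetime, timezone

event = {
    "specversion": "1.0",                          # required
    "id": str(uuid.uuid4()),                       # required, unique per source
    "type": "com.example.order.created",           # required, reverse-DNS style
    "source": "https://example.com/orders",        # required, identifies the producer
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"orderId": 1234, "total": "19.99"},   # the original payload
}

print(json.dumps(event, indent=2))
```

The body still needs application-specific code either way; the spec only standardizes how the metadata around it is named and transported.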