

Looking at linuxserver/jackett
on Docker Hub, it seems it is indeed updated every day.
Congrats on finding a suitable program. It looks promising.
I don’t think such a thing exists. You would need to do some coding work to glue a few tools together. I would do “FFmpeg -> OpenCV -> Tesseract OCR -> translator”.
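A rough sketch of the glue code for the FFmpeg and OCR stages, assuming you have the ffmpeg and tesseract binaries installed plus the pytesseract and Pillow packages; the frame rate, file names, and translator step are all placeholders:

```python
# Rough sketch of the FFmpeg -> Tesseract part of the pipeline.
# Assumptions: ffmpeg and tesseract binaries on PATH, pytesseract and
# Pillow installed. Frame rate and paths are arbitrary examples.
import subprocess

def ffmpeg_frame_cmd(video: str, fps: int, out_pattern: str) -> list[str]:
    # Build the ffmpeg command that dumps `fps` frames per second as images.
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}", out_pattern]

def ocr_frame(image_path: str) -> str:
    # Run Tesseract OCR on one extracted frame.
    import pytesseract  # deferred so the rest works without it installed
    from PIL import Image
    return pytesseract.image_to_string(Image.open(image_path))

# Usage (not run here):
#   subprocess.run(ffmpeg_frame_cmd("input.mp4", 1, "frame_%04d.png"), check=True)
#   text = ocr_frame("frame_0001.png")
#   ...then feed `text` to whatever translator API you pick.
```

OpenCV would slot in between the two steps, e.g. to crop each frame to the subtitle region before OCR.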
That means only “authorized” clients equipped with the “correct” DRM module can ever play those videos. If I had to guess, it would be Widevine L3 for browsers.
I used to use a dedicated server from OneProvider in Paris.
Run Wireshark on the client to see if you actually got the reply.
Say I lived there. The BBC needs funding, I get it, but what does the BBC contribute to when I watch VoD? I’m not even watching live programmes, and the BBC contributed to none of the content. When the content is licensed via the BBC, I already paid for part of it with my subscription. That’s disgusting double dipping. If no one watches your programmes, that’s your problem; citizens have no responsibility to keep a corporation from collapsing. This shit reminds me of how NHK works in Japan.
If I understand correctly, you want a two-component setup: a PWA client for you to read the mail, and a server that acts as an IMAP client and fetches mail from all your mailboxes. The server would expose an API for the PWA to access mail content. When new mail arrives, the server pushes a beacon via the Push API. The PWA would fetch the sender and subject and display a notification. Only if you click it would the PWA fetch the body.
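For the server side, a minimal sketch of the “fetch only sender and subject” step using Python’s standard-library imaplib; host and credentials are placeholders, and pushing the beacon to the PWA (Web Push) is a separate step not shown:

```python
# Minimal sketch of the server-side headers-only fetch, stdlib only.
# Host and credentials are placeholders; the Push API beacon to the
# PWA is a separate step not shown here.
import email
import imaplib
from email.header import decode_header, make_header

def summarize(raw_headers: bytes) -> tuple[str, str]:
    # Decode From/Subject, handling RFC 2047 encoded-word headers.
    msg = email.message_from_bytes(raw_headers)
    return (str(make_header(decode_header(msg.get("From", "")))),
            str(make_header(decode_header(msg.get("Subject", "")))))

def new_mail_summaries(host: str, user: str, password: str):
    # Yield (sender, subject) for unseen mail, never downloading bodies.
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX", readonly=True)
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            # BODY.PEEK fetches headers without marking the mail as read.
            _, parts = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])")
            yield summarize(parts[0][1])
```

A real server would run this per-mailbox (or use IMAP IDLE instead of polling) and hand the summaries to the push layer.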
After a quick glance at the demo, I think SnappyMail fits the bill? It seems it can be installed as a PWA, and my browser does ask me whether I want to grant it push notification permission. However, I’m not too sure the fetch logic happens as I laid out.
While I’m using Proton right now, I’m planning to migrate to Posteo with Addy.io for aliases. However, they all cost money. If you mean free email that’s not tied to a billionaire, I can’t think of one off the top of my head. You can achieve “free” by hosting your own email server, as it sounds like you only intend to receive mail, but the electricity still costs something, plus you’re doing free labor to keep it happy.
Apologies. I never realized Americans spell it as “check”. Got confused.
Check or cheque?
What’s your setup and your goal?
I guess you could use NFS/iSCSI for images too?
I heard ActualBudget can do this, though it’s less a standalone program and more a server.
I would just get an AMD (7745HX?) mini PC with adequate RAM and call it a day. It should run almost anything you throw at it in a light setup, with minimal power usage.
Didn’t they already do such a thing before?
I highly doubt they really live-stream the video or pictures you took. I would much rather believe the OCR is done locally and only the text is sent to a server for translation.
Wouldn’t DNSSEC make the whole poisoning moot?
I don’t have a single guide for you, but I can lay out a road map.
After you have those foundations ready, you can go on and try to build a web scraper. I advise against using Scrapy. Not because it is bad, but because it is too overwhelming and abstracted for a beginner. I would instead advise using requests
for HTTP, and BeautifulSoup4 for HTML parsing. You will build a more solid foundation and can transition to Scrapy later when you need its advanced functions.
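To make that concrete, a tiny requests + BeautifulSoup4 sketch; the URL is a placeholder, and the link extraction is just an example, so swap the selectors for whatever your target page uses:

```python
# Tiny requests + BeautifulSoup4 sketch. The URL is a placeholder and
# extracting links is just an example task; adapt the selectors to
# your target page.
from bs4 import BeautifulSoup

def extract_links(html: str) -> list[str]:
    # Parse the HTML and return every hyperlink target on the page.
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)]

def scrape(url: str) -> list[str]:
    # Fetch the page, failing loudly on HTTP errors, then parse it.
    import requests  # deferred so the parsing helper works offline
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return extract_links(resp.text)

# Example (not run here): scrape("https://example.com")
```

Keeping the fetching and the parsing in separate functions also makes the parsing easy to test against saved HTML, without hitting the site every run.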
When you get stuck, don’t be afraid to pause your attempt and read the tutorials again. Head to the Python Community on Discord for interactive help. We welcome noobs, as we were once noobs too. Just don’t ever mention scraping there, as they can’t help if they suspect you’re trying to do something inappropriate, malicious, or illegal. They are notoriously against yt-dlp,
which frustrates me a bit. Phrase it nicely and in a generic way. I will be there occasionally, offering help.
Using a 7900 XTX with LMS. Speeds are all over the place, driver dependent. With QwQ-32B-Q4_K_M I get about 20 tok/s, with all VRAM filled. Phi-4 runs at about 30-40 tok/s. I can give more numbers if you can wait a bit.
If you don’t enjoy finding which driver works best, I would strongly advise against running AMD for AI workloads.