It kept fucking up double quotes and escape characters. Like, if it can’t get that right in a batch file it’s hopeless.
For real. I asked ChatGPT to help with a batch script. Literally the easiest thing it could possibly program. And guess what? Almost every example it gave was wrong. It seemed to know a lot about the command-line program I was trying to run, but it didn't really understand what it was repeating back to me.
That means it’s free.
It wasn’t for nothing. It was to get off on it. Which made it more attractive than those doing it for money.
He is though, indirectly. Lessening the visibility of actual victims. Hurting their credibility, their voices, and even potential streams of revenue.
Awfully geocentric for the nature of the universe.
Almost seems like astroturfing to me. This would wreck everyone’s perception of the apps if they all go down from traffic and DDOS.
What devs would ever consider such a clearly bad idea?
Trump sure as hell is only signing those EOs because of the titles they put on them. This fucking guy can barely read at a fifth-grade level. Sure, he put those guys in charge of things he wants, or thinks he does, because they suck up to him. He's a fucking moron.
I wouldn't care so much if he wasn't such a bitch about it. If you want to post that stuff then own up and admit it. Fucking cryptofascist.
I feel like Hollywood was trolling with Connery’s roles. They cast him as a fucking Russian sub commander with that accent!
An insult would make me no longer respect you. Why would I base work off of someone I don’t respect?
AI chatbots are sycophants. They will say literally anything if you convince them. You can get them to tell you to kill yourself or others, or anything at all. They are only dangerous if you believe them. Unfortunately, that's going to be a huge problem.
Don't worry. AI has that covered for you. In no time there will be so much misinformation online that objective truth will be a thing of the distant past.
https://huggingface.co/RASMUS/Whisper_Finnish_finetuned_small_200k_samples
This could work if you look into setting up Whisper Search for other languages as well.
I honestly don’t see a difference
Oh, that part is. But the splitting tech is built into llama.cpp
With modern methods sometimes running a larger model split between GPU/CPU can be fast enough. Here’s an example https://dev.to/maximsaplin/llamacpp-cpu-vs-gpu-shared-vram-and-inference-speed-3jpl
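For what it's worth, the split is usually controlled with a single flag. A minimal sketch, assuming a llama.cpp build with `llama-cli` available and a placeholder model path (the path, layer count, and prompt here are all made up for illustration):

```shell
# Offload the first 20 transformer layers to the GPU; the rest run on the CPU.
# --n-gpu-layers (alias -ngl) is llama.cpp's knob for the GPU/CPU split.
./llama-cli -m ./models/model.Q4_K_M.gguf --n-gpu-layers 20 -p "Hello"
```

Tuning that number up until VRAM is nearly full is usually how people find the sweet spot the linked article measures.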
fp8 would probably be fine, though the method used to make the quant would greatly influence that.
I don't know exactly how Ollama works, but I would think a more ideal model would be one of these quants:
https://huggingface.co/bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF
A GGUF model would also allow some overflow into system RAM, if Ollama has that capability like some other inference backends do.
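The overflow idea is basically per-layer arithmetic: layers that fit in VRAM go to the GPU, the rest spill into system RAM and run on the CPU. A rough sketch (the model size, layer count, and VRAM figures below are made-up placeholders, not real numbers for that quant):

```python
# Sketch: estimate how many layers of a GGUF model fit in VRAM.
# Assumes layers are roughly equal in size, which is only approximately true.

def layers_on_gpu(model_gb: float, n_layers: int, vram_gb: float) -> int:
    """Layers that fit in free VRAM go to the GPU; the rest overflow to RAM."""
    per_layer_gb = model_gb / n_layers
    fit = int(vram_gb // per_layer_gb)
    return min(fit, n_layers)

# e.g. a ~1.1 GB quant with 28 layers and ~0.5 GB of free VRAM
print(layers_on_gpu(1.1, 28, 0.5))
```

If the whole model fits, everything lands on the GPU and nothing overflows; otherwise this kind of estimate tells you what to pass as the GPU layer count.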
That version most likely burned the equivalent output of the sun to run it.