Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 59 Comments
Joined 1 year ago
Cake day: March 3rd, 2024

  • I stopped at speech recognition. That’s the only part of this that needs any complex AI; the rest is basic programming and doesn’t need a neural net at all. Think of a modern phone tree: some things get recognized as menu items, and anything beyond that gets dumped to the human still monitoring the activity.

    Good luck with the first part, though. Having worked drive-thru in my day (long ago, but not much has changed), the noise level on the input is all over the place. The human ear is very good at picking things out, so usually you can piece together what the order was. But even today’s phone trees, or a smartphone tied to Google/Siri in real time, can screw up basic words, and that’s from an algorithm that learns the user’s voice, not a different person with a different accent/volume, car noise, etc. each time.

    Also, let me add: even though the human ear is far superior at picking out nuances in a high-noise environment, many people working the drive-thru still suck at understanding even clear speech. Getting AI/LLM/whatever past even a basically incompetent human order taker would be a monumental accomplishment that could filter into so many other things. Read that as sarcastic: it hasn’t happened yet, and it won’t happen via a drive-thru cost replacement meant to increase bottom-line profits.

  • Rhaedas@fedia.io to Fediverse@lemmy.world · Censorship · 2 months ago

    Which is exactly what happened when things started growing quickly due to the first Reddit exodus. OP should try to set up their own instance without any filtering; if it gains enough users and attention, they’ll see what full openness gets them. You have to remember: sometimes censorship doesn’t mean you’re being repressed, it just means most people don’t care for, or are ignoring, your viewpoint.


  • I’ve run a local LLM on my PC for a while, so I’m familiar enough with Ollama to understand what’s going on. I’ve tried this with my Samsung Tracfone, not really expecting a lot. Surprisingly I’ve gotten all the way to getting a prompt, but then things crash and I’m kicked back to the starting terminal. Pretty sure it’s memory, so I’m now trying to use virtual memory to bump it up to the 4GB you’ve had success with (the phone looks to have 3GB actual memory, plenty of storage though).

    If it doesn’t work, I’ll try some of the others, perhaps they’re a bit smaller.

    I did get the 0.5B Qwen to run well. I’m surprised how fast it is even in CPU mode, but maybe being smaller also helps with the processing.

    Just a tip (maybe obvious to experienced users): while you do have to open the terminal, log in to Debian, start the server, and then run the model each time, remember that you can use the arrow keys in the terminal to repeat past commands, so it’s pretty quick to do. I actually missed the arrow keys the first time around because they aren’t very distinct or highlighted, but when I had to look up how to do CTRL, I realized they were right in front of me.
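    For reference, the startup sequence described above looks roughly like this. This is a sketch, assuming a Termux setup with a proot-distro Debian install and Ollama already installed inside it; the exact distro alias and model tag are assumptions and may differ on your setup.

    ```shell
    # Hypothetical sketch of the terminal steps described above.
    # Assumes: Termux + proot-distro with a Debian install, Ollama installed inside it.

    # 1. From Termux, log in to the Debian environment:
    proot-distro login debian

    # 2. Inside Debian, start the Ollama server in the background:
    ollama serve &

    # 3. Run a small model (e.g. the 0.5B Qwen mentioned above; tag is an assumption):
    ollama run qwen2:0.5b
    ```

    Once you’ve typed these once, the arrow-key history makes repeating them on each boot quick.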