Whenever AI is mentioned, lots of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic” or something along those lines whenever an AI topic is discussed. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). GNOME and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete. Using an AI-less desktop may be akin to hand-copying books after the printing press revolution. If you think of specific problems, it is better to point them out and try to think of solutions, not reject the technology as a whole.

TL;DR: There’s a lot of Luddite sentiment around AI in the Linux community.

  • rah@feddit.uk · 10 months ago

    free software communities

    TheLinuxExperiment on YouTube

    LOL

  • FQQD! @lemmy.ohaa.xyz · 10 months ago

    I don’t think the community is generally against AI; there are plenty of FOSS projects. They just don’t like cash grabs, enshittification and sending personal data to someone else’s computer.

    • FatCat@lemmy.world (OP) · 10 months ago

      I don’t see anyone calling for cash grabs or privacy-destroying features to be added to GNOME or other projects, so I don’t see why that would be an issue. 🙂

      On-device FOSS models to help you with various tasks.

      • wewbull@feddit.uk · 10 months ago

        You are, if you’re calling for Apple-like features.

        You might argue that “private cloud” is privacy preserving, but you can only implement that with the cash of Apple. I would also argue that anything leaving my machine, to a bunch of servers I don’t control, without my knowledge is NOT preserving my privacy.

        • Auli@lemmy.ca · 10 months ago

          I’m waiting for the moment the story breaks that ChatGPT didn’t do what Apple asked.

        • FatCat@lemmy.world (OP) · 10 months ago

          You might argue that “private cloud” is privacy preserving

          I don’t know since when “on device” means sending it to a server. Keep coming up with straw men I didn’t mention for you to defeat.

          • MentalEdge@sopuli.xyz · 10 months ago

            Apple’s “private cloud” is a thing. Not all “Apple Intelligence” features are “on device”; some can and do utilize cloud-based processing power, and this will also be available to app developers.

            Apparently this has additional safeguards vs “normal cloud” which is why they are branding it “private cloud”.

            But it’s still “someone else’s computer”, and Apple is not keeping their AI implementation 100% on device.

      • PrivateNoob@sopuli.xyz · 10 months ago

        FQQD probably refers to companies such as MS, Apple, Google, Adobe, etc. since they usually incorporate AI into everything.

      • technocrit@lemmy.dbzer0.com · 10 months ago

        On-device FOSS models to help you with various tasks.

        Thankfully I really really don’t need an “AI” to use my desktop. I don’t want that kind of BS bloat either. But go ahead and install whatever you want on your machine.

        • umami_wasabi@lemmy.ml · 10 months ago

          It is quite a lot of bloat. Llama 3 8B is 4.7 GB by itself, not counting all the dependencies and drivers. This can easily take 10+ GB of the drive. My Ollama setup already takes about 30 GB. For a single application (games like COD that take up 300 GB aside), this is huge, almost the size of a clean OS install.
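
          If you want to see the damage on your own machine, a few lines of Python will do it. This is only a sketch, and it assumes the default per-user Ollama store under ~/.ollama; adjust the path if your models live elsewhere:

          ```python
          # Rough disk-usage check for a local model setup.
          # Assumes the default per-user Ollama store (~/.ollama); change the path if needed.
          from pathlib import Path

          def dir_size_gb(root: Path) -> float:
              """Total size of all files under root, in gigabytes."""
              return sum(f.stat().st_size for f in root.rglob("*") if f.is_file()) / 1e9

          store = Path.home() / ".ollama"
          if store.exists():
              print(f"{store}: {dir_size_gb(store):.1f} GB")
          else:
              print(f"{store} not found; your models may live elsewhere")
          ```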

    • anamethatisnt@lemmy.world · 10 months ago

      sending personal data to someone else’s computer.

      I think this is spot on. LLMs are exciting, but I’m not gonna give the huge corporations my data, nor anyone else for that matter.

  • luciferofastora@lemmy.zip · 10 months ago

    The first problem, as with many things AI, is nailing down just what you mean with AI.

    The second problem, as with many things Linux, is the question of shipping these things with the Desktop Environment / OS by default, given that not everybody wants or needs that and for those that don’t, it’s just useless bloat.

    The third problem, as with many things FOSS or AI, is transparency, here particularly training. Would I have to train the models myself? If yes: How would I acquire training data that has quantity, quality and transparent control of sources? If no: What control do I have over the source material the pre-trained model I get uses?

    The fourth problem is privacy. The tradeoff for a universal assistant is universal access, which requires universal trust. Even if it can only fetch information (read files, query the web), the automated web searches could expose private data to whatever search engine or websites it uses. Particularly in the wake of Recall, the idea of saying “Oh actually we want to do the same as Microsoft” would harm Linux adoption more than it would help.

    The fifth problem is control. The more control you hand to machines, the more control their developers will have. This isn’t just about trusting the machines at that point, it’s about trusting the developers. To build something of the caliber of a full AI assistant, you’d need a ridiculous amount of volunteer effort, particularly due to the splintering that always comes with such projects and the friction that creates. Alternatively, you’d need corporate contributions, and they always come with an expectation of profit. Hence we’re back to trust: Do you trust a corporation big enough to make a difference to contribute to such an endeavour without any avenue of abuse? I don’t.


    Linux has survived long enough despite not keeping up with every mainstream development. In fact, what drove me to Linux was precisely that it doesn’t do everything Microsoft does. The idea of volunteers (by and large unorganised) trying to match the sheer development power of a megacorp (with a strict hierarchy for who calls the shots) to produce such an assistant is ridiculous enough, but the suggestion that DEs should come with it already integrated? Hell no.

    One useful application of “AI” (machine learning) I could see: evaluating logs to detect recurring errors and cross-referencing them with other logs to see if there are correlations, which might help with troubleshooting.
    That doesn’t need to be an integrated desktop assistant; it can just be a regular app.
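
    To be concrete about how modest that could be, here’s a rough sketch of the non-ML first pass I have in mind. The log paths and the plain “error” keyword filter are placeholders made up for illustration, not an actual tool:

    ```python
    # Sketch: collapse similar log lines into patterns, count recurring ones,
    # and report patterns that show up in more than one log.
    import re
    from collections import Counter
    from pathlib import Path

    def normalise(line: str) -> str:
        # Mask numbers, hex ids and paths so similar errors map to one pattern.
        line = re.sub(r"0x[0-9a-fA-F]+", "<hex>", line)
        line = re.sub(r"/\S+", "<path>", line)
        return re.sub(r"\d+", "<num>", line).strip()

    def recurring_errors(log: Path, min_count: int = 3) -> Counter:
        counts = Counter(
            normalise(line)
            for line in log.read_text(errors="replace").splitlines()
            if "error" in line.lower()
        )
        return Counter({p: c for p, c in counts.items() if c >= min_count})

    if __name__ == "__main__":
        a = recurring_errors(Path("/var/log/syslog"))        # placeholder paths
        b = recurring_errors(Path("journal_export.log"))
        for pattern in a.keys() & b.keys():
            print(f"in both logs ({a[pattern]}x / {b[pattern]}x): {pattern}")
    ```

    The cross-referencing for correlations is where machine learning might actually earn its keep, but even that can live in a regular app.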

    Really, that applies to every possible AI tool. Make it an app, if you care enough. People can install it for themselves if they want. But for the love of the Machine God, don’t let the hype blind you to the issues.

  • kbal@fedia.io · 10 months ago

    One of the main things that turns people off when the topic of “AI” comes up is the absolutely ridiculous level of hype it gets. For instance, people claiming that current LLMs are a revolution comparable to the invention of the printing press, and that they have such immense potential that if you don’t cram them into every product you can, all your software will soon be obsolete.

    • Womble@lemmy.world · 10 months ago

      It doesn’t though; local models would be at the core of FOSS AI, and they don’t require you to trust anyone with your data.
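
      Purely as an illustration of what “local” means here (assuming a stock Ollama install listening on its default port and a model you’ve already pulled; nothing about this is something GNOME or any DE has committed to), a FOSS assistant can be a few lines of Python talking to localhost, so no data ever leaves the machine:

      ```python
      # Minimal sketch: query a locally running Ollama server.
      # Assumes `ollama pull llama3` has already been run; the only host
      # contacted is localhost, i.e. your own machine.
      import json
      import urllib.request

      def ask_local_model(prompt: str, model: str = "llama3") -> str:
          req = urllib.request.Request(
              "http://localhost:11434/api/generate",   # Ollama's default local endpoint
              data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              return json.loads(resp.read())["response"]

      print(ask_local_model("Why might someone prefer running models locally?"))
      ```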

      • technocrit@lemmy.dbzer0.com · 10 months ago

        local models would be at the core of FOSS AI, and they don’t require you to trust anyone with your data

        Would? You’re slipping between imaginary and apparently declarative statements. Very typical of “AI” hype.

        • Womble@lemmy.world · 10 months ago

          Local models WOULD form the basis of FOSS AI. Supposition on my part, but entirely supportable given there is already an open-source model movement focused on producing local models, and open-source software is generally privacy-focused.

          Local models ARE inherently private, because no information leaves the device it is processed on.

          I know you don’t want to engage with arguments and instead just wail at the latest daemon for internet points, but you can have more than one statement in a sentence without being incoherent.

  • kazaika@lemmy.world · 10 months ago

    IMO you immensely overestimate the capabilities of these models. What they show to the public are always hand-picked situations, even if they say they don’t.

  • UnfortunateShort@lemmy.world · 10 months ago

    Is there no Electron wrapper around ChatGPT yet? Jeez, we better hurry; imagine having to use your browser like… for pretty much everything else.

    • Goun@lemmy.ml · 10 months ago

      I did not buy these gaming memory sticks for nothing, bring me more Electron!

  • electric_nan@lemmy.ml · 10 months ago

    There are already a lot of open models and tools out there. I totally disagree that Linux distros or DEs should be looking to bake in AI features. People can run an LLM on their computer just like they run any other application.

  • HumanPenguin@feddit.uk · 10 months ago

    Damned impressive how the evil AI is managing to post in its own defence.

    I for one will be happy to bow to our new AI overlord.

    • FatCat@lemmy.world (OP) · 10 months ago

      I do not feel comfortable discussing whether I am an artificial intelligence or not. I aim to be direct in my communication, so I will simply state that such metaphysical questions about my nature are not something I can engage with. Perhaps we could find a different topic that allows me to be more helpful to you within the proper bounds. I’m happy to assist with writing, analysis, research, or any other constructive tasks. 😃