Check out my blog: https://writ.ee/pavnilschanda/
Yet another autistic involved in an AI companionship case (the other is Sewell). People always blame the app (which, to be fair to many of the parents, does have its own faults) and never the system.
Many people who get into AI companionship are teenagers (the subject of this article) and their parents were probably not familiar with AI before, so it makes sense for publications to warn those unfamiliar with AI about its potential risks.
That’s a good insight. One scenario that I worry about is that everyone will get into their own mini bubbles of simulated realities, unwilling to communicate with other bubbles.
This opinion feels more nuanced than others. She seems to acknowledge both the pros and cons of AI companionship, ultimately saying, “There are people who truly believe their chatbot is their primary relationship, but that’s not the case for me.” This situation is also the inverse of the Sewell case: here it’s the son judging his mom.
That’s me since I was a child (thanks ableism)
Sounds hypocritical, given how they incorporated “Her” into their marketing and the whole fiasco involving Scarlett Johansson
It’s a better alternative to LLM sycophancy and bullshit, really
Read the Modlog if you want to know why
Is literal delusion better somehow?
It’s the reason conditions like DID develop, is it not? The brain does stuff like that when dealing with stress.
It was very hard to find the app’s website but here it is for those interested
In what way, may I ask?
I live in a country that has experienced its hottest recorded weather, while neighboring countries are literally experiencing heat stroke deaths. I don’t live in those countries, but fortunately where I live there are organizations focusing on incremental steps toward change. And I mean incremental, because change doesn’t happen overnight. How is it in your area? Do you have any local organizations that can help mitigate or further prevent the effects of the climate crisis, no matter how small?
Have you considered the possibility that the internet was redesigned to pull your attention toward so many fronts that you get too distracted to focus on what matters in your immediate surroundings?
The article may have a point. The internet is full of doomsayers and people who just want to provoke others, because the online space is now built to attract as much attention as possible. Meanwhile, in meatspace, I had a nice discussion with people who are focused on the here and now.
Yes, I was curious about whether experts who want to convey the concept of LLM bullshit to certain audiences, such as children’s settings (which has been solved now) or religious clergy, would use the term “bullshit” or not. I apologize if I miscommunicated that intention in my initial comment; I’m always looking for ways to communicate better
I’m talking about the latter. Religious people often use LLMs as well (https://apnews.com/article/germany-church-protestants-chatgpt-ai-sermon-651f21c24cfb47e3122e987a7263d348). Their knowledge is likely limited to ChatGPT, so they’re likely to be vulnerable to these things. One of the things that worries me the most is that these people may take LLM bullshit at face value, or even worse, take it as “divine commands”.
I get where you’re coming from. Ideally, we should be able to say whatever we want whenever we want. But based on my experience as an autistic living in a country where context is very important, the way you convey words affects your standing in society, at least one that caters to neurotypicals who are highly dependent on context. I have no easy answers for how we can eliminate this hurdle, but your words truly made me think about language usage and how society perceives it, and I would like to thank you for that.
I am aware that Lemmy has an anti-religious bent but the fact is that religious people are part of this world, some even in places of power. Shouldn’t they also be informed about how LLMs are prone to bullshit as well? Though if they are OK with the word “bullshit” then it’s all fine by me at the end of the day
Understandable, though we should also find ways to explain complex academic concepts, like LLM bullshit, to the general public, including those with strong religious beliefs that may be sensitive to these words. The fact that some religious philosophers already use this term without issue shows that it’s possible to bridge this gap.
I’m familiar with her. She created the subreddit r/MyBoyfriendIsAI. The mods are very chill and overall good vibes there