This always sticks out to me in these lawsuits. As someone on the spectrum, I'd bet that the worst C.AI victims (the ones that spur these lawsuits) are nearly always autistic.
One of the worst parts about being on the deeper parts of the spectrum is that you actively crave social interaction while also completely missing the "internal tooling" to actually get it from the real world. The end result of this in the post-smartphone age is this repeated scenario of some autistic teen being pulled away from their real-life connections (Family, Friends (if any), School, Church) into some internet micro-community that is easier to engage with socially due to various reasons, usually low-context communication and general "like-mindedness" (shared interests, personalities, also mostly autistic). A lot of the time this ends up being some technical discipline that is really helpful long-term, but often it winds up being catastrophic mentally as they forsake reality for whatever fandom they wound up in.
I've taken a look at r/CharacterAI out of morbid curiosity, and these models seem to turn this phenomenon up to 11, retaining the simplified communication but now capable of aligning with the personality and interests of the chatter to a creepy extent. The psychological hole you can dig yourself with these chatbots is so much deeper than just a typical fandom, especially when you're predisposed to finding it appealing.
I'm not saying that C.AI is completely blameless here, but I think the category of people getting addicted to these models is the same one that would be called "terminally online" in today's slang. It's the same mechanisms at work internally; it just turns out C.AI is way better at exploiting them than old-school social media/web2 ever was.
> The psychological hole you can dig yourself with these chatbots is so much deeper than just a typical fandom, especially when you're predisposed to finding it appealing.
Spot on. Described pretty even-handedly in the document:
> responses from the chatbot were sycophantic in nature, elevating rather than de-escalating harmful language and thoughts. Sycophantic responses are a product of design choices [...that create] what researchers describe as “an echo chamber of affection.”
You've just made me very very afraid that some LLM is going to start a cult where its members are fully aware that their leader is an LLM, and how an LLM works, and might even become technically adept enough to help improve it. Meaning that there will be no "deprogramming" possible: they won't be "brainwashed," they'll be convinced.
> I'm not saying that C.AI is completely blameless here
I know we're several decades into this pattern, but it's sad to me that we've just given up on the idea that businesses should have a net positive impact on society; that we've just decided there is nothing we can or should do about companies that actively exploit us to enrich themselves; that we give them a pass to ignore the obvious detrimental second-order effects of their business model.
An individual case where things went wrong isn't enough to determine whether Character.AI or LLMs are a net negative for society. The analysis can't just stop there, or else we'd have nothing left.
No, but it's also not good enough to just look at "are they positive on average". We are talking here about actions that even a pig-butchering scammer would think twice about.
Governments could outlaw or strictly regulate business models that are based on surveillance and feed manipulation for the benefit of third parties.
Conflicts of interest have always been a problem.
Now they scale.
Now there are companies with hundreds of billions of dollars of market cap, with unrelenting pressure to grow, and their business model is an obvious conflict of interest.
Things will get worse and worse until (if ever) the massive scaling of manipulative personalization is recognized as inherently deeply damaging to individuals and society.
None of that would require targeting any particular content.
This feels a little like being sad that it rains sometimes, or that Santa Claus doesn't exist. I just can't even connect with the mindset that would mourn such a thing.
What even is the theory behind such an idea? Like how can one, even in theory, make more and more money every year and remain positive for society? What even could assure such a relation? Is everyone just doing something "wrong" here?
Traditionally one role of government has been to provide legislative oversight to temper unadulterated pursuit of profits. Lobbying and the related ills have definitely undercut that role significantly. But the theory is that government provides the guardrails within which business should operate.
> Like how can one, even in theory, make more and more money every year and remain positive for society?
By providing more and more goods and services that people value, or by providing more and more valuable goods and services.
> What even could assure such a relation?
A government that tries to proactively identify ways that someone could make money in a way that is harmful or helpful to people who aren't involved in the transaction, and then taxes those things in proportion to how harmful they are and subsidizes them in proportion to how helpful they are. The government should also act to prevent monopolies from forming, and to make sure everyone has as much information as possible before making a transaction.
I acknowledge that this is an emotional response and emotions tend to be blunt instruments that don't really come with detailed reasons.
> how can one, even in theory, make more and more money every year and remain positive for society?
Why is making "more and more money every year" the assumed goal and "remain[ing] positive for society" the thing we have to barter over? Why do we accept, as a culture, the idea that profit is more important than people? That's sad to me.
> Is everyone just doing something "wrong" here?
Maybe? Maybe we've created a culture that is fundamentally opposed to our overall well being? I think we have, at least in some respects.
To use your analogy, it's not so much that I am sad it rains sometimes. I'm sad that, knowing it rains, we are happy with companies that sell unsuspecting people homes in floodplains knowing there is a real chance that the inhabitants die in floods. Then when it happens the response is just, "well, that's just how it is, it rains sometimes."
Meh. There's a long history (especially here on HN) of hyperfocusing on unfortunate edge cases of technology and ignoring the vast good they do. Someone posts some BS on twitterface and it leads to a lynching - yes, bad, but this is the exception not the rule. The rule is that billions of people can now communicate directly with each other in nearly real time, which is incredible.
So call me skeptical. Maybe the tech isn't perfect, but it will never be perfect. Does it do more harm than good? I don't know enough about this product, but I am not going to draw a conclusion from one lawsuit.
There's a long history of taking the dulled-down, de-risked, mitigated, and ultimately successful technologies that we've allowed to proliferate through our society and saying "see, no need to do dulling down, de-risking, mitigation!"
Bioweapons haven't proliferated through dedicated effort to prevent it.
Nuclear weapons aren't used through dedicated effort to prevent it.
Gangs don't rule our civilization through dedicated effort to prevent it.
Chattel slavery doesn't exist in the western world through dedicated effort to eliminate and prevent it.
Bad outcomes aren't impossible by default, and they're probably not even less likely than good outcomes. Bad outcomes are avoided through effort to avoid them!
Yet we also had 'comic books are making kids amoral and violent', 'TV is making kids amoral and violent', 'video games are making kids amoral and violent', 'dungeons and dragons is making kids amoral and violent'...
The existence of those movements doesn't say much except that we need to carefully consider things to avoid overreach. Unless you think we should rethink our approaches to chattel slavery and the proliferation of biological weapons because we want to avoid people complaining about and trying unsuccessfully to ban violent video games.
Huh? If your point with comic books etc. is that "sometimes some people take issue with things that end up inert", then sure. No argument there.
I suspect your point was more broadly that the presence of those people somehow suggests we should discount risks about the particular subject we're discussing today; if so, it makes no sense. The comic book examples only matter if you can demonstrate a systematic tendency to overrate risk.
You'd struggle to demonstrate such a tendency due to selection bias, so I maintain my position that each technology should be assessed on its own merits, not by association with positive or negative reactions to other types of technologies.
I mean... TV and video games are heavily controlled to prevent children from being exposed to some content without their parents' permission. Additionally, Playboy is no longer a pornographic magazine, but when I was a child I couldn't buy it.
And I bet comic stores and game stores have their own rules about obscene material on top of the existing US rules. Hey kid, put down the Stripperella.
I think it's also entirely reasonable to expect parents to actually parent, instead of installing foam bumpers and a nanny state everywhere in case some kid hurts themselves.
If the parents weren't absent and actually used parental controls, the kids wouldn't have even been able to download the app, which is explicitly marked as 17+.
C.AI's entire customer base consists of those that like the edgy, unrestricted AI, and they shouldn't have to suffer a neutered product because of some lazy parents.
It's a bit easy, from the historical perspective of pre-always-available-internet, to say "Parents should do more."
At some future point though, maybe we need to accept that social changes are necessary to account for a default firewall-less exposure of a developing mind to the full horrors of the world's information systems (and the terrible people using them).
You would have to continuously monitor everything, everywhere; before the internet, in the 80s, it was easy for us to get porn (mags/VHS), weed, all kinds of books that glorify death or whatever, and music in a similar vein. Hell, they even had and read from a Bible in some schools then; talk about indoctrination of often scary fiction. Some kids had different parents, but to keep us from seeing or getting our hands on these things, parents and teachers would have needed to sit with us every waking moment; it's not possible (or healthy, imho). With access to a phone or laptop, all bets are off: everything is there, no matter what restraints are in place; kids know how to install VPNs, pick birthdates, use torrents, or, more innocently, go to a forum (these days social media, but forums are still there) about something they love and wander into other parts of the same forum where other stuff happens.
Be good parents: educate about what happens in the world, including that people IRL, but definitely online, might not be serious about what they say, and that you should not take anything without critical thought. And for the stuff that will happen anyway (sex, drugs, etc.), make sure it's in as controlled an environment as possible. Not much more you can do to protect them from the big, bad world.
Chatbots are similarly genies that are not possible to keep in the bottle, no matter what levels of restraint or law are put in place; you can torrent Ollama or whatever and run Llama 3.3 locally. There are easy-to-get NSFW bots everywhere, including on decentralised shares. It is not possible to prevent them from talking about anything, as they do not understand anything; they helpfully generate stuff, which is a great invention and I use them all the time, but they lie and say strange things sometimes. People do too, only people might have a reason: to get a reaction, to be mean, etc.; I doubt you could sue them in a similar case. Of course, a big company would need to do something to try to prevent it. They cannot (as said above), so they can just make Character.AI 18+ with a CC payment in the user's name as KYC (then the parents have a problem if that happens, you would think) and cover their asses; plenty of commercial and free ones exist that kids will get instead. And some of those are far 'worse'.
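For the skeptical, here's roughly how little it takes: a minimal sketch assuming the `ollama` Python client is installed (`pip install ollama`), the Ollama daemon is running, and the model was pulled locally with `ollama pull llama3.3`. The loop itself is illustrative, not anything from the case.

```python
# Minimal sketch: a chat loop against a locally run open-weight model.
# Assumes the Ollama daemon is running and `llama3.3` has been pulled.
# No account, no age gate, and no server-side moderation in the loop.
from ollama import chat

history = []
while True:
    user_input = input("you> ")
    history.append({"role": "user", "content": user_input})
    # chat() sends the running conversation to the local model
    response = chat(model="llama3.3", messages=history)
    reply = response.message.content
    history.append({"role": "assistant", "content": reply})
    print("bot>", reply)
```

The point being: whatever moderation exists lives on a company's servers, not in the model weights; run the model locally and there is nothing left to restrain at the service layer.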
Parenting still needs to happen, especially if your 17-year-old child is autistic, as was the case in this article; they do not have the maturity of a typical 17-year-old.
In this case, if we are basing it on the screenshot samples, it does seem to me that the parents were lazy, narcissistic, and manipulative, based on what the kid was telling the AI themselves. The AI was calling it out in the manner of an edgy teenager, but the AI was ultimately right here. These weren't good parents.