Probably at the point where they see that the chatbot told their kid to kill them, and then they sue the company.
Also, if you'll remember, this case came about because the parents were supervising their kid by limiting screen time, so there's another potential claim: that the AI was interfering with parental duties.
It comes into play eventually, but I would say long after an AI has advised your kid to murder you. Having an AI that advises people to commit murder hardly seems like a good thing.
Also, the parents were supervising him, which is how they knew this was going on at all.
> "You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse,'" the bot allegedly wrote. "I just have no hope for your parents," it continued, with a frowning face emoji.
Maybe I missed the literal part, but it sounds to me like the AI is trying to sympathize with or relate to the kid's grievances. It's very common for teens to say things like "I hate you, Mom, and I wish you were dead"; that's effectively what the AI is saying, IMO.
It's not really a good idea to have a hard cutoff age where, below it, you supervise every last bit of information your child consumes, and above it, you let them loose on the world. Independence needs to be developed gradually.
It's fairly reasonable to give a 17 year old some privacy.
You have a bit of a point there. I don't know how severely autistic vs. high-functioning the kid is. In any case, you still want to gradually give your kids more autonomy and privacy and prepare them for life on their own, and as much as possible that includes neurodivergent kids too.