
If you read the article, the app wasn't an explicitly 17+ app when the kid had it. That came later.

Also, kids are always going to find a way to get apps onto their phones. It's an arms race, and the kids are smarter than the parents.

How about a tiny bit of responsibility for the startup bros who just flung this thing out into the world without proper safety checks?



> How about a tiny bit of responsibility for the startup bros who just flung this thing out into the world without proper safety checks?

What more "safety checks" do you want? The output is censored, and there are warnings all over the dang app.

The vast majority of C.AI's user base wants this. This is what they pay for. In the bland, drab world of over-corporatized wording everywhere, they find C.AI's spicy, unhinged personas refreshing and fun.

Are we just going to make it illegal to have spicy or uncensored AI chatbots? "The fictitious text from this obviously fake robot can mislead children! Think of the children!" Is that what this is coming to? Maybe we should ban particularly dark works of fiction, porn, edgy video games, and edgy forums while we're at it. Perhaps go the Australia route and require a government-issued ID to be uploaded to every service you use. For surveillance - oops, I mean for the children, of course!

Or maybe, instead of going the nanny-state route, we expect parents to parent. C.AI is a perfectly fine product, and if you can't handle it, don't use it. If you don't want your children viewing it, do whatever it is you do to prevent them from viewing other 18+ content. There is no need to ban a whole category of product.



