Character.ai doesn't seem to have direct monetization mechanisms. In addition, sites like HN aren't generally held responsible for everything a user says. They could try to argue that the characters are sufficiently shaped by user-generated prompts and user-driven conversations that the output is no longer the company's own speech. (Section 230)
In any case, I think society should be built in such a way that we don't have to censor or baby the models, but rather educate people on what these things really are, what makes them produce which content, and for which reasons. I don't want to live in a society where we have to helicopter-parent everyone out of fear of a single misinterpreted response from an LLM.
But they're still acting at the company's behest. If I hire a jerk to work tech support and they insult or cause damage to my customers, I don't get to say "shrug, they don't represent my company". Of course they do. They were on my payroll. I think it'd be pretty easy to argue that the AI was performing the duties of a contractor, so the company should be responsible for its misbehavior, just as if a human contractor had done it.
But with Character AI you are hiring a roleplay service that can be open-ended in terms of what you are looking for. If you are looking to roleplay with a jerk, why shouldn't you be able to do that, and in that case why should the company be held liable?
Do you think that applies to open-source models, or is it the act of performing inference that makes the business responsible? E.g., Meta's Llama does the same thing.