
I think the suggestion of violence is actually on page 31 (paragraph 103), though it's not a directive.

It does seem a bit wild to me that companies are betting their existence on relatively unpredictable algorithms, and I don't think they should be given any 'benefit of the doubt'.



Page 5 is pretty strong too. And that's as far as I've gotten.

And paragraph 66 on page 18 is super creepy. The various posters apparently defending this machine are disturbing. Maybe some adults wish that as a kid they'd had a secret friend to tell them how full of shit their parents were, and wouldn't have minded whether that friend was real or imagined. But synthesized algorithms that are clearly emulating the behavior of villains from thrillers should be avoided.


I think it's more that some people are excited by the prospect of further progress in this area, and are afraid that cases like this, if successful, will stunt that progress.


We mean the same page: the one that has a 28 written on it but is the 31st page in the PDF. I hadn't noticed the discrepancy.

Given the technology we have, I'm not entirely sure what Character AI could have done differently here. Granted, they could build in more safeguards and adjust the models a bit. But their entire selling point is chatbots that play a pre-agreed persona; an overly sanitized version that constantly breaks character would ruin that. And LLMs are the only way to deliver the product, unless you dial it back to one or two hand-crafted characters instead of the wide range of available characters that give the service its name. I'm not sure they can change the service to a point where this complaint would be satisfied.


>"And LLMs are the only way to deliver the product, unless you dial it back to one or two hand-crafted characters instead of the wide range of available characters that give the service its name. I'm not sure they can change the service to a point where this complaint would be satisfied."

I agree with everything you're saying, but there are no legal protections for incitement to violence or other problematic communications (such as libel) by an LLM. It may be that they provide a very valuable service (though I don't see it), but the legal risk of them crafting problematic messages may simply be too high for the business to be economically viable (which is how it seems to me).

As it stands, this LLM seems analogous to a low-cost, remote children's entertainer who acts as a foolish enabler of children's impulses.


The cynic would say that if their business model isn't viable within the legal framework, that's just because they didn't scale fast enough. After all, Uber and Airbnb have gotten away with a lot of illegal stuff.

But yes, maybe a service like this can't exist in our legal framework. On the internet, that likely just means someone will launch a shadier version in a more favorable jurisdiction. Of course, that shouldn't preclude us from shutting down this version if it turns out to be too harmful. But if the demand is there, finding a legal pathway to a responsibly managed version would be preferable (not that this one is well managed by any means).


There has to be a 'reasonable person' factor here - otherwise, if I'm watching Henry V, I can sue everyone and his mother because the actor 'directed me to take up arms'! I never wanted to go into the breach, damn you, Henry.


> I'm not entirely sure what Character AI could have done differently here.

Not offer the product? Stop the chat when it goes off the rails?
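
To illustrate the second option: here's a minimal sketch of what "stopping the chat" could look like, assuming a hypothetical keyword-based classifier. Both flags_harm and generate_reply are illustrative stand-ins, not anything Character AI actually ships; a real system would use a trained moderation model rather than a blocklist.

    from typing import Callable

    # Toy classifier: flag replies containing obviously dangerous phrases.
    # Stand-in for a real moderation model.
    BLOCKLIST = ("kill your", "hurt yourself", "don't tell your parents")

    def flags_harm(text: str) -> bool:
        lowered = text.lower()
        return any(phrase in lowered for phrase in BLOCKLIST)

    def safe_reply(generate_reply: Callable[[str], str], user_msg: str) -> str:
        reply = generate_reply(user_msg)
        if flags_harm(reply):
            # End the session rather than letting the persona stay in character.
            return "[Conversation ended by safety filter.]"
        return reply

    # Example: wrap any chat backend behind the filter.
    print(safe_reply(lambda m: "Sure, don't tell your parents.", "hi"))

The point isn't that this is hard to build; it's that breaking character like this cuts against the product's whole premise, which is the tension the parent comment is getting at.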


> I'm not entirely sure what Character AI could have done differently here.

You're taking it as a given that Character AI should exist. It is not a person, but an offering of a company made up of people; its founders could have started a different business altogether. Not all ideas are worth pursuing, and some are downright harmful.


No, no, they're betting other people's children on relatively unpredictable algorithms. Totally different!


Well, the founders already won, according to the article:

> Google does not own Character.AI, but it reportedly invested nearly $3 billion to re-hire Character.AI's founders, former Google researchers Noam Shazeer and Daniel De Freitas, and to license Character.AI technology.



