If you read the AI safety literature carefully (which uses the word “goal”), you'll find it isn't talking about LLMs either.


I think the Anthropic "omg blackmail" article clearly talks about both LLMs and their "goals".
