> LLMs are already being trained with RL to have goal directedness.

That might be true, but we're talking about the fundamentals of the concept. His argument is that you're never going to reach AGI/super intelligence by evolving the current concepts (mimicry), even through fine-tuning and adaptations - it'll likely be something different (and likely based on some RL technique). At least we have NO history to suggest this will be the case (hence his argument for "the bitter lesson").