
When you speak to a person, they have inner thoughts and outer actions/words. If a person sees a chess board, they will, consciously or unconsciously, evaluate the legal moves available to them and then choose one. An LLM like ChatGPT does not distinguish between inner thoughts and outer words: the words it emits when prompted are its inner thoughts. Nor does it distinguish between subconscious and conscious thought. Humans generate and discard a multitude of thoughts in the subconscious before any of them make it to the conscious layer, and most humans do not speak every conscious thought aloud; they first evaluate whether saying it is consistent with their goals.

There's already a lot of research on this, but I strongly believe that eventually the best AIs will consist of LLMs stuck in a while loop, generating a stream of consciousness that is then checked by other tools (perhaps other specialized LLMs) for factual correctness, logical consistency, goal coherence, and more. There may be multiple layers as well, to emulate subconscious, conscious, and external thoughts.
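That loop can be sketched as a generate-and-filter cycle. This is only a toy illustration of the idea: the generator and the critic below are hypothetical stubs standing in for LLM calls, not real models.

```python
def generate_thought(step):
    # Stub for the "subconscious" generator; a real system would
    # sample a candidate thought from an LLM here. Odd steps
    # deliberately produce a flawed thought for demonstration.
    return "this contradicts itself" if step % 2 else f"useful thought {step}"

def evaluate(thought):
    # Stubs for specialized critic models, one per property.
    logically_consistent = "contradicts" not in thought
    goal_coherent = True  # a real critic LLM would score this
    return logically_consistent and goal_coherent

def stream_of_consciousness(n_thoughts=6):
    # The while loop: generate many thoughts, discard most,
    # and only "speak" the ones that survive every filter.
    spoken = []
    for step in range(n_thoughts):
        thought = generate_thought(step)  # subconscious layer
        if evaluate(thought):             # conscious filter
            spoken.append(thought)        # external, spoken layer
    return spoken

print(stream_of_consciousness())
```

Half of the generated thoughts are discarded by the critic before they are ever "spoken", which is the core of the subconscious/conscious split described above.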

For now, though, in order to prompt the machine into emulating a human chess player, we will need to act as the machine's subconscious.


