
From the article.

> Occasionally it does make an illegal move, but I decided to interpret that as ChatGPT flipping the table and saying “this game is impossible, I literally cannot conceive of how to win without breaking the rules of chess.” So whenever it wanted to make an illegal move, it resigned.

But you can do even better than the OP with a few tweaks.

1. Taking the most common legal move from a sample of responses.

2. Telling GPT what all the current legal moves are and instructing it to respond only with an element from that list.

3. Ending the prompt with the current sequence of moves and having it complete from there.
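The three tweaks above can be sketched together. This is a minimal illustration, not the OP's actual code: `sample_model` is a hypothetical stand-in for one raw completion from the API, and the prompt wording is my own.

```python
# Sketch of tweaks 1-3: list the legal moves in the prompt, end the
# prompt with the move sequence, and majority-vote over several samples.
from collections import Counter

def build_prompt(moves_so_far, legal_moves):
    """Tweaks 2 and 3: state the legal moves, then finish with the
    game so far so the model simply continues the sequence."""
    return (
        "You are playing chess. Respond with exactly one move from this "
        f"list: {', '.join(legal_moves)}\n"
        f"Game so far: {' '.join(moves_so_far)}"
    )

def choose_move(sample_model, moves_so_far, legal_moves, n_samples=5):
    """Tweak 1: sample several responses and keep the most common
    *legal* one; return None (i.e. resign) if no sample is legal."""
    prompt = build_prompt(moves_so_far, legal_moves)
    samples = [sample_model(prompt).strip() for _ in range(n_samples)]
    counts = Counter(m for m in samples if m in legal_moves)
    return counts.most_common(1)[0][0] if counts else None
```

In a real harness, `sample_model` would be an API call with a nonzero temperature so the samples actually differ.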



How many 1400-rated human chess players do you have to explain every possible move to, every single move?


When you are speaking to a person, they have inner thoughts and outer actions/words. If a person sees a chess board they will either consciously or unconsciously evaluate all the legal moves available to them and then choose one. An LLM like ChatGPT does not distinguish between inner thoughts and outer actions/words. The words that it speaks when prompted are its inner thoughts. There is also no distinction between subconscious and conscious thoughts. Humans generate and discard a multitude of thoughts in the subconscious before any thoughts ever make it to the conscious layer. In addition, most humans do not immediately speak every conscious thought they have before evaluating it to see whether speaking it aloud is consistent with their goals.

There's already a lot of research on this, but I strongly believe that eventually the best AIs will consist of LLMs stuck in a while loop, generating a stream of consciousness that is checked by other tools (perhaps other specialized LLMs) for factual correctness, logical consistency, goal coherence, and more. There may be multiple layers as well, to emulate subconscious, conscious, and external thoughts.
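That loop-plus-evaluators idea can be pictured with a toy sketch. Both `generate` and the critic functions here are hypothetical stand-ins for the LLM and the evaluation tools:

```python
# Minimal sketch of the proposed architecture: candidate "thoughts"
# are generated in a loop, and only a thought that every critic
# approves is "spoken" externally.

def think_aloud(generate, critics, max_thoughts=10):
    """Generate candidate thoughts until one passes every critic;
    discarded thoughts play the role of the subconscious."""
    for _ in range(max_thoughts):
        thought = generate()
        if all(critic(thought) for critic in critics):
            return thought  # conscious, goal-consistent output
    return None  # nothing survived evaluation
```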

For now though, in order to prompt the machine into emulating a human chess player, we will need to act as the machine's subconscious.


I feel like we have very different expectations about what tools like this are good for and how to use them. When I say GPT-3 can play chess, what I mean is: I can build a chess-playing automaton where the underlying decision-making system is entirely powered by the LLM.

I, as the developer, am providing contextual information like the current board state and the legal moves, but my code doesn't actually know anything about how to play chess; the LLM is doing all the "thinking."
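That division of labor could be sketched as a driver loop. Everything here is a hypothetical stand-in: `ask_llm` for the model, `get_legal_moves`/`apply_move` for a rules engine, with an illegal pick treated as resignation, as in the article:

```python
# The harness supplies board state and legal moves; the LLM does all
# the choosing. The harness knows the rules but has no chess strategy.

def play(ask_llm, get_legal_moves, apply_move, board, max_plies=200):
    moves = []
    for _ in range(max_plies):
        legal = get_legal_moves(board)
        if not legal:
            break  # game over
        pick = ask_llm(board, legal)
        if pick not in legal:
            return moves, "resigned"  # illegal move counts as resigning
        board = apply_move(board, pick)
        moves.append(pick)
    return moves, "finished"
```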

Like, it's nuts that people aren't more amazed that there's a piece of software that can function as a chess-playing engine (and a good one) that was trained entirely generically.


Does that matter? I’m really very confused by the argument you are making.

That you may have to babysit this particular aspect of playing the game seems quite irrelevant to me.


When they are blindfolded? Almost all of them.



