Why do people always have to interpret everything in absolute terms?
It's clearly following some opening theory in all the games I've looked at so far. So yes, it is regurgitating opening moves. That's clearly not all it's doing, which is very impressive, but these are not mutually exclusive.
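If anyone wants to quantify this instead of eyeballing it: replay the game against a reference opening book and count how many moves stay in book. A minimal sketch with python-chess, assuming you have some Polyglot book file (the "book.bin" path and the move list are placeholders, not anything from the thread):

    import chess
    import chess.polyglot

    def book_depth(san_moves, book_path):
        # Count how many consecutive moves from the start of the game
        # are found in a Polyglot opening book for the current position.
        board = chess.Board()
        depth = 0
        with chess.polyglot.open_reader(book_path) as reader:
            for san in san_moves:
                try:
                    move = board.parse_san(san)
                except ValueError:
                    break  # illegal/unparseable move: certainly out of book
                if not any(entry.move == move for entry in reader.find_all(board)):
                    break
                board.push(move)
                depth += 1
        return depth

    # Placeholder example: a game opening with the Ruy Lopez.
    print(book_depth(["e4", "e5", "Nf3", "Nc6", "Bb5"], "book.bin"))

A high book depth followed by sensible novel moves would support exactly the "both at once" reading: regurgitated theory up front, something else after.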
I am responding to OP, who said "Most likely it has seen a similar sequence of moves in its training set."
From this, I take the question to be whether or not ChatGPT is repeating existing games. All you need is a single game in which it is not repeating any existing game to prove it definitively, and you can hardly play 60 moves without an error by accident.
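We obviously can't search the actual training set, but the same check works against any large PGN database: does any known game begin with exactly this move sequence? A rough sketch with python-chess (the PGN path is a placeholder for whatever database you have on hand):

    import chess.pgn

    def matches_known_game(san_moves, pgn_path):
        # True if some game in the PGN file starts with exactly this
        # move sequence, i.e. the game could be pure regurgitation.
        with open(pgn_path) as f:
            while (game := chess.pgn.read_game(f)) is not None:
                board = game.board()
                prefix = []
                for move in game.mainline_moves():
                    if len(prefix) == len(san_moves):
                        break
                    prefix.append(board.san(move))
                    board.push(move)
                if prefix == san_moves:
                    return True
        return False

One divergent game found this way doesn't close the question against the full training set, but it's the cheapest falsification available.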
I believe you're responding to a different question, something like "does ChatGPT fully understand the game of chess".
The OP was too unsophisticated in their analysis (as is TFA), no doubt. But I'm not that interested in what OP said or who was right or wrong; I'm more interested in finding out what's right.
As someone very clever once said, welcome to the end of the thought process.
We've established that:
1. It doesn't repeat entire games when the games go long enough.
2. It does repeat a lot of opening theory.
3. It seems to repeat common, partially position-independent tactical sequences even when they're illegal or don't work tactically (see the sketch after this list).
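Point 3 is the easiest to verify mechanically: replay the transcript and flag the first move that isn't legal in the actual position. A minimal sketch, again with python-chess (the move list is a made-up example, not a real transcript):

    import chess

    def first_illegal_move(san_moves):
        # Replay a move list; return the index of the first move that is
        # illegal (or unparseable) in the actual position, or None.
        board = chess.Board()
        for i, san in enumerate(san_moves):
            try:
                board.push_san(san)  # raises ValueError on illegal/ambiguous SAN
            except ValueError:
                return i
        return None

    # Made-up example: Black's "O-O" is illegal here (f8 and g8 are
    # still occupied), so this prints 5.
    print(first_illegal_move(["e4", "e5", "Nf3", "Nc6", "Bb5", "O-O"]))
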