When I look at an image of a dog on my computer screen, I don't think that there's an actual dog anywhere in my computer. Saying that these models "understand" because we like their output is, to me, no different from saying that there is, in fact, a real, actual dog.
"It looks like understanding" just isn't sufficient for us to conclude "it understands."
"It looks like understanding" just isn't sufficient for us to conclude "it understands."