I don't know what you're thinking of, but mine are.
Practice of any kind (sports, coding, puzzles) works like that.
Most of all: interactions with any other conscious entity. I carry at least intuitive expectations of how my wife / kid / co-workers / dog (if that counts) will respond to my behavior, but I'm often wrong, and have to update my model of them or of myself.
Yes, I am saying that in both cases the expectations are violated regularly. It's not obvious at all that an LLM's "perception" of its "world" is any more coherent than our perception of our world.
The question of how to evaluate whether something is conscious is totally different from the question of whether it actually is conscious.