The argument is about causation or generation, not simulation. Of course we can simulate just about anything; I could write a program that just prints "Hello, I'm a conscious being!" instead of "Hello, World!".
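A trivial sketch of that program, just to make the point concrete:

```python
# Prints a claim of consciousness; the printed sentence is all there is.
print("Hello, I'm a conscious being!")
```

Nobody would mistake that output for evidence that the interpreter became conscious.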
The weather example is a good one: you can run a program that simulates the weather in the same way my program above (and LLMs in general) simulates consciousness, but no one would say the program is _causing_ weather in any sense.
Of course, it's entirely possible that more and more people will be convinced AI is generating consciousness, especially when tricks like voice or video chat with the models are employed, but that doesn't mean that the machine is actually conscious in the same way a human body empirically already is.
>but that doesn't mean that the machine is actually conscious in the same way a human body empirically already is
Does it matter? Is a dog/cow/bird/lizard conscious in the same way a human is? We're built from the same basic parts, and yet humans seem to have a higher state of consciousness than other animals around us.
For example, the definition of the word _conscious_ is
>aware of and responding to one's surroundings; awake.
I'll grant that we likely mean this in a more general sense, but I'd say we're pretty close to it with machines. They can observe the real world through sensors of various types, then either compute directly or use neural nets to make generalized decisions about what is occurring around them, and then act on those observations.
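Purely as a sketch of that observe-decide-act loop (the sensor, model, and actuator here are hypothetical stand-ins, not any particular API):

```python
# Hypothetical observe -> decide -> act loop; all objects are placeholders,
# not a real sensor or ML framework API.
def run_agent(sensor, model, actuator):
    while True:
        observation = sensor.read()            # observe the surroundings
        decision = model.predict(observation)  # generalize from what was observed
        actuator.act(decision)                 # respond to the surroundings
```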