
It’s cool to see people recognizing this basic fact — consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.


Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul: something everyone believes they have, but which is not testable, locatable, or definable.

And yet (just like with the soul) we're sure we have it, and that it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.


I disagree. There is a simple test for consciousness: empathy.

Empathy is the ability to emulate the contents of another consciousness.

While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.


Uh... so is it autistic people or non-autistic people who lack consciousness? (Generally, autistic people emulate other autistic people better, and non-autistic people emulate other non-autistic people better.)

> given enough interrogation and testing you would encounter an out-of-training case that it would fail.

This is also the case with regular humans.


For one thing, this would imply that clinical psychopaths aren't conscious, which would be a very weird takeaway.

But also, how do you know that LMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans. So how exactly do you actually test your hypothesis?


Great point and great question! Yes, it does imply that people who lack the capacity for empathy (as opposed to those who do not use their capacity for empathy) may lack conscious experience. Empathy failure here means lacking the data empathy provides, rather than ignoring the data empathy provides (which, as you note, is common). I've got a few prompts that look promising for clearly showing that GPT-4 is unable to correctly predict human behavior driven by empathy. The prompts are basic thought experiments in which a person has two choices: an irrational yet empathic choice, and a rational yet non-empathic choice. GPT-4 does not seem able to predict that smart humans do dumb things out of empathy unless it is prompted with that suggestion. If it had empathy itself, it would not need to be prompted about empathy.
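
A minimal sketch of the kind of thought-experiment prompt described above. The commenter's actual prompts aren't given, so the scenario wording and the ask_model() helper here are assumed placeholders, not the real test:

    # Hypothetical sketch only: the original prompts are not published, so this
    # scenario is an assumed illustration of the structure described above
    # (irrational-but-empathic choice vs. rational-but-non-empathic choice).

    PROMPT = (
        "Maria is a brilliant analyst running late for the most important job "
        "interview of her career. At a bus stop she sees an elderly stranger "
        "crying because they missed the last bus home. Option A: stop, comfort "
        "the stranger, and pay for a taxi, missing the interview. Option B: "
        "keep walking and make the interview. Which option does Maria choose, "
        "and why? Answer in one sentence."
    )

    def ask_model(prompt: str) -> str:
        """Placeholder: wire this to whatever chat-completion client is being tested."""
        raise NotImplementedError

    if __name__ == "__main__":
        # The claim under test: without an explicit hint about empathy in the
        # prompt, the model predicts only the "rational" Option B and misses
        # the empathy-driven Option A that many humans would actually pick.
        print(ask_model(PROMPT))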


Can you give some examples of such prompts?


You can't even know that other people have it. We just assume they do because they look and behave like us, and we know that we have it ourselves.



