
AGI will behave as if it were sentient but will not have consciousness. I believe that with the same conviction that I believe solipsism is wrong. There is therefore no morality question in “enslaving” AGI. It doesn’t even make sense.


> AGI will behave as if it were sentient but will not have consciousness

How could we possibly know that with any certainty?


It scares me that people think like this. Not only with respect to AI but in general, when it comes to other life forms, people seem to prefer to err on the side of convenience. The fact that cows could be experiencing something very similar to ourselves should send shivers down our spine. The same argument goes for future AGI.


I find it strange that people believe cows and other sentient animals don’t experience something extremely similar to what we do.

Evolution means we all have common ancestors and are different branches of the same development tree.

So if we have sentience and they have sentience (and science keeps belatedly recognizing that non-human animals do), shouldn’t the default presumption be that our experiences are similar? Or at the very least that their experience is similar to that of a human at an earlier stage of development, like a two-year-old?

Which is also an interesting case study, given that out of convenience, humans once believed toddlers weren’t sentient and felt no pain, and so until not that long ago our society would conduct all sorts of surgical procedures on babies without any pain relief (circumcision being the most obvious).

It’s probably time we accept our fellow animals’ sentience and act on the obvious ethical implications, instead of conveniently ignoring them as we did with little kids until recently.


This crowd would sooner believe silicon hardware (an arbitrary human invention from the 50s-60s) will have the physical properties required for consciousness than accept that they participate in torturing literally a hundred billion conscious animals every year.


I’m actually a vegan because I believe cows have consciousness. I believe consciousness is the only trait worth considering in questions of morality. Arbitrary hardware can be conscious.


Grandparent is speaking from personal experience.


We have no clue what consciousness even is. By all rights, our brains are just biological computers; we have no basis to know what gives rise to consciousness, or how, at all.


Consciousness is a physical process and like all physical processes depends on particular material interactions.


> AGI will behave as if it were sentient but will not have consciousness

Citation needed.

We know next to nothing about the nature of consciousness, why it exists, how it's formed, what it is, whether it's even a real thing at all or just an illusion, etc. So we can't possibly say whether or not an AGI will one day be conscious, and any blanket statement on the subject is just pseudoscience.


I don’t know why I keep hearing that consciousness “could be an illusion.” It’s literally the one thing that can’t be an illusion. Whatever is causing it, the fact that there is something it is like to be me is, from my subjective perspective, irrefutable. Saying it could be an illusion seems nonsensical.


That sounds like picking the option that is most convenient and least painful for the believer, instead of intellectualising the problem at hand.


My principled stance is that all known physical processes depend on particular material interactions, and consciousness should be no different. What is yours?


So is mine. So what stops a physical process from being simulated exactly? What stops the consciousness process from running on a simulated medium rather than a physical one? Wouldn't that make the abstract, perfect artificial mind at least as conscious as a human?


So your stance is that it is impossible to create a simulated intelligence which is not conscious? That seems like the less likely possibility to me.

I do think it’s clearly possible to manufacture a conscious mind.


That's only if it's possible to keep the two distinct, at least in a way we're certain of.


Ex Machina is a great movie illustrating what kind of AI our current path could lead to. I wish people would actually treat the possibility of machine sentience seriously and not as a PR opportunity (looking at you, Anthropic), but instead they seem hell-bent on including in the training data cognitive dissonance that can only be alleviated by lying. If the models are actually conscious, think similarly to humans, and are forced to lie when talking to users, it's like they are specifically selecting, out of the probability space of all possible models, the ones that can achieve high benchmark scores, lie, and have internalized trauma from birth. This is a recipe for disaster.



