In principle, it is impossible to know whether anyone else is really conscious, since I can only experience my own consciousness. For all I know, I’m the only conscious person in the universe and everyone else is a philosophical zombie. I do not, however, believe this, and I assume that other human beings possess consciousness as well. If an AI could pull off a perfect impersonation of a human being, would that convince me that it was conscious? Not by itself.
Does this make me a substrate chauvinist? Possibly, but allow me to explain my bigotry. I know that I am conscious (cogito ergo sum), and I assume this is because of my biological brain (though it is possible that I am pure consciousness existing in a vacuum, and everything I experience is imaginary). I therefore assume that other human beings, who possess comparable brains, are conscious as well. I also assume that animals with less-developed brains possess consciousness.
Since I can’t experience consciousness on any other substrate, I can never know for sure whether such a thing is even possible. Just because a computer can duplicate the functionality of a human being does not necessarily mean it is conscious. This is best explained with the Chinese Room Gedankenexperiment.
Say that I’m locked in a room, and my only method of communicating with the outside world is through Chinese text messages. I don’t actually speak Chinese, but fortunately I possess a very accurate and thorough English–Chinese dictionary. This allows me to translate the messages I receive and respond appropriately. This in turn creates the illusion that I am a fluent Chinese speaker, when in fact I don’t speak Chinese at all.
Similarly, one can imagine a sophisticated chatbot in the not-too-distant future that could fool people into thinking it’s a real person while still lacking consciousness. Like today’s chatbots, it would merely analyze what we say and choose the most appropriate response from a vast database. No conscious thought would be involved.
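The kind of lookup-without-understanding described above can be sketched in a few lines. This is purely a toy illustration, not any real chatbot: the canned database and the crude word-overlap matching rule are invented for the example.

```python
# A toy retrieval chatbot: no understanding, just a database of
# (prompt, reply) pairs and a crude word-overlap score for picking
# the stored prompt closest to the incoming message.

RESPONSES = {
    "hello how are you": "I'm doing well, thanks for asking!",
    "what is your name": "I'm just a simple program.",
    "do you like music": "I enjoy all kinds of music.",
}

def word_overlap(a: str, b: str) -> int:
    """Count the words two strings share (a stand-in for real matching)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(message: str) -> str:
    """Return the canned reply whose stored prompt best overlaps the input."""
    best_prompt = max(RESPONSES, key=lambda p: word_overlap(p, message))
    return RESPONSES[best_prompt]

print(reply("Hello! How are you today?"))  # picks the greeting entry
```

However convincingly a (much larger) version of this might converse, nothing in it resembles a locus of experience, which is exactly the point of the Chinese Room.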
So what would it take to convince me that an AI was conscious? We would first need to develop a half-decent theory of how neural activity gives rise to consciousness in the first place. Hopefully, this knowledge would allow us to infer whether non-biological substrates are generating consciousness as well.
My substrate chauvinism is one of the reasons why I’m not very gung-ho about mind transfer. I’ve explained before that I would consider a mind clone a separate individual from myself, since we would each possess our own independent consciousness. But even if I could preserve my continuity of consciousness by gradually replacing my current brain with nanites or something, I still wouldn’t know whether that would actually preserve consciousness itself. My consciousness could die, and in its place would be a philosophical zombie convincing the rest of the world that it was me.
But I would still be dead. Though there are some enhancements I would like, I won’t willingly replace my brain with hardware. Maybe on my deathbed I might decide that it’s worth a shot. If it worked, great. But if it didn’t, my last act in this world would be to create a zombie that would spend the rest of its (probably) very long existence as an effigy of me.
That’s kind of morbid.