A few years ago, I disagreed with Peter Singer’s assertion that a robot could be a person, complete with attendant rights — an argument to which he then took exception.
Now, a Stanford publication illustrates one reason why, per philosopher John Searle, robots will never attain true consciousness. From “Can Joe Six-Pack Compete with Sid Cyborg?”:
Sure, machines might be able to “think” in the sense of manipulating symbols, said Searle. But when it comes to consciousness, such “thoughts” do not a mind make. Syntax (the manipulation of symbols — nothing but ones and zeroes, in this case) isn’t the equivalent of semantics (the effects of those manipulations on our consciousness: in a word, “meaning”).
“We still don’t know how the brain creates consciousness,” Searle said, arguing that to fully understand subjectivity, it will be necessary not merely to simulate brain function but to duplicate it. (A street map is not the same as the city it’s a map of.) That’s a comforting constraint for carbon-based throwbacks such as myself, who would like to feel our dominance is assured, at least for a while, by the excruciating nested complexity of the biological components-within-components-within-components of the human brain.
Works for me.
Also, robots/cyborgs — while perhaps consisting of living materials — will never actually be living organisms. I think that matters too. A lot.