By Wesley J. Smith, J.D., Special Consultant to the CBC
Oh brother. Every once in a while I like to check in to see what our friends the transhumanists are fantasizing about. The latest is apparently “machine rights” and “machine ethics,” that is, the rules that should guide our treatment of machines once they attain actual consciousness. From George Dvorsky’s “When the Turing Test is Not Enough”:
Machine consciousness is a neglected area. . . not too many people are thinking about the ethical and moral issues involved. We need to think about this preemptively. Failure to set standards and guidelines in advance could result in not just serious harm to nascent machine minds, but a dangerous precedent that will become more difficult to overturn as time passes.
Machine “minds” wouldn’t be minds. They would be super sophisticated computer programs. Nothing more.
And what would “machine rights” look like?
As I see it, qualifying artificial intellects will need to be endowed with the following rights and protections:
• The right to not be shut down against its will
• The right to not be experimented upon
• The right to have full and unhindered access to its own source code
• The right to not have its own source code manipulated against its will
• The right to copy (or not copy) itself
• The right to privacy (namely the right to conceal its own internal mental states)
• The right of self-determination
These rights will also be accompanied by those protections and freedoms afforded to any person or citizen.
Ooookay. Would the machine minds vote in person or over the Internet?
It wouldn’t be transhumanistic advocacy without expressing antipathy toward human exceptionalism:
Another particularly pernicious problem is the impact of human exceptionalism and substrate chauvinism on the topic. Traditionally, the law has divided entities into two categories: persons or property. In the past, individuals (e.g. women, slaves, children) were considered mere property. Law is evolving (through legislation and court decisions) to recognize that individuals are persons; the law is still evolving and will increasingly recognize the states or categories in between.
Extending personhood designation to those entities outside of the human sphere is a pertinent issue for animal rights activists as well as transhumanists. Given our poor track record of denying highly sapient animals such consideration, this doesn’t bode well for the future of artificially conscious agents.
We shouldn’t measure the moral worth of human beings, individual-by-individual, moment-by-moment, because it means the end of universal human rights and the very notion of human equality. Nor should animals be brought into the human moral community, although to be sure, we have duties toward them because of our (exceptional) moral agency. But we will never have “duties” toward machines we have manufactured as if they were animate beings, and for sure, no matter how sophisticated, collections of metal and silicon (or whatever materials out of which future machines may be fabricated) should never be considered part of the moral community — even if made up of organic materials such as bacteria and the like. Inanimate objects are inanimate objects. Or to put it another way, machines are not people too.
This is why I keep an eye on transhumanism. I am not worried about its utopian materialistic dreams actually happening. But the radical and eugenic ideology of transhumanism is a very different matter. It could infect our society and culture, and indeed, to some degree already has. Transhumanism may be something at which we can roll our eyes, but attacks against human exceptionalism endanger the weakest and most vulnerable (human) individuals among us. It still matters.