Why don't bots have rights?

In 2010, the European Union issued a legal directive entitled “On the protection of animals used for scientific purposes.” It defines which animals should enjoy some protection and requires the 27 member states of the European Union to follow certain rules when capturing and testing animals. It prohibits any work that would cause those animals “severe pain, suffering or distress, which is likely to be long-lasting and cannot be ameliorated.” Furthermore, all tests require independent assessments to establish that using the animals is genuinely necessary for the research in question. The protected list includes non-human primates, most mammals, and many vertebrates. (Interestingly, cyclostomes and cephalopods such as octopuses are included as honorary vertebrates.)

While the directive explicitly states that “all animals have intrinsic value,” the document is essentially an exercise in justifying why some animals are more valuable than others. But what criteria should we use if we are going to rank species? If we imagine a long line of animals stretching from amoebas to humans, at what point would you draw a line and say, “Past this point, rights”?

It is an issue that applies more and more to another thorny area: the rights of robots. Here, too, we can draw a similar line. At one end are calculators, and at the other are the transcendent, all-knowing artificial intelligences of science fiction. Where does our line fall now? When does a computer get rights? And when, if ever, is a robot a person?

Future slaves

In most science fiction stories, robots are portrayed as slaves or resources to be exploited. They always have fewer rights, if any, than the good old humans of flesh and blood. In Star Wars, robots use language, make plans, and seem distressed about all sorts of things, yet they are treated as little more than slaves. In Blade Runner, replicants – indistinguishable from “real” humans – are forced to work as soldiers or prostitutes. Star Trek even dedicates a full episode to the question of whether Data can choose his own path in life. Science fiction rarely depicts worlds in which robots of any kind are free.

This is because we have all grown up with a kind of emotional detachment from robots. They are separate and different. It is normal, in our time, to treat computers and artificial intelligence as things to be used. Their well-being, their choices, and their “lives” are trivial matters, if they matter at all.

All this presents an intriguing moral contradiction. In our legal and ethical documents (such as the EU directive above), we often refer to two concepts: sapience (complex intelligence) and sentience (the capacity for subjective experiences or “feelings”), both of which we attribute to animals in varying degrees. We do not know that animals have them, but we assume they do. Even the EU directive says animals “should always be treated as sentient creatures.” This does not mean that they are, but rather that we should act as if they are. So why are we so reluctant to extend the same attribution to complex AI?

The inverse Turing test

The Turing test is a well-known experiment in which human users ask an AI a series of questions to determine whether it is indistinguishable from a human. It is a test of intelligence and imitation. But what if we turned this on its head and gave humans a Turing test about robots? What if we asked, “What is it about robots that excludes them from rights, protection, and respect?” What makes robots different from humans?

Here are just three possible suggestions and their responses.

1. Robots lack advanced, generalized intelligence (sapience). This is certainly true, for now. A calculator is great at working with π and cos(x), but it won’t help you read road signs. Voice assistants are good at telling you the weather, but they can’t hold a conversation. There are three problems with this objection. First, many animals lack advanced intelligence, and we still respect and protect them. Second, some humans lack advanced intelligence – such as infants or those with severe intellectual disabilities – and we still grant them rights. Third, given the speed at which AI is improving, this is a threshold we may soon cross. Are we really willing to unchain computers and treat them as our equals?

2. Robots cannot feel emotions such as pain or love (sentience). This is a tricky area, not least because we don’t really know what feelings are. On a physical level, if we reduce emotions to hormones or electrochemical reactions in the brain, it seems plausible that they could be reproduced in an AI. If they were, would we then grant it legal and ethical rights? The other problem is that we run into a human-centered bias here. If your friend cries, you assume they are sad – not that they are imitating sadness. If your mom takes off her jacket, you assume she is hot. Yet when an AI displays an emotion (something far more common in sci-fi portrayals than in anything today), why do we assume it is merely imitating? Is that because…

3. Humans make and program robots. Even when the most advanced AI can “learn” from situations, it still needs a great deal of human guidance in the form of programming. Nothing so dependent on human agency, the objection goes, can be considered worthy of rights. There are two problems with this. First, while we don’t usually talk about humans being “programmed,” it’s not a stretch to say that’s exactly what our genes do. You are simply the product of your genetic wiring plus your social and parental inputs. Swap the vocabulary, and it’s little different. Second, why should dependency exclude you from rights? Dogs, children, and the elderly are all highly dependent on humans, yet we do not allow them to be mistreated or abused. (Incidentally, Aristotle made just this “dependency = slavery” argument.)

The wrong side of history

There are two questions hiding in one big issue. The first is whether AI is worthy of personhood; this is probably too big a question for now (and one that may never be answered). The second, however, is at what point AI qualifies to be treated with respect and care. When does it become wrong to exploit or abuse advanced bots?

It may well be that future generations will look back in amazement at our behavior. If we imagine a future in which conscious, sapient AI is treated just like humans, we can also imagine how shocked those beings would be by our time. If the AIs of the 22nd century are our friends, colleagues, and gaming partners, will the exploitation of the 21st century be an embarrassing topic?

We may well be on the brink of granting AI some kind of moral status. Nobody is fighting for the rights of calculators, but perhaps we should start re-evaluating how we view the impressive general AI we are creating. We have one foot on either side of a technological threshold, and it is time to re-examine our moral and societal values.

Jonny Thomson teaches philosophy at Oxford. He runs a popular account called Mini Philosophy, and his first book is Mini Philosophy: A Small Book of Big Ideas.
