Concepts such as machine ethics and robot rights are already being repeated in media around the world. After all, questions about machine ethics and robot rights have been debated ever since technological advances placed them at the center of everyday life.
Of course, this has generated enormous controversy between those who want to allow machines to make ethical decisions or hold rights, and those who believe that a new problem demands a new solution. Fundamentally, we believe the scientific community should work on developing intelligent systems capable of demonstrating that these machines and robots are safe under any circumstances.
However, the apparent abundance of research on smart-machine safety right now can be misleading. In fact, the vast majority of published articles are purely philosophical in nature and do little more than reiterate how important it is to address these issues.
The Challenge of Computer Security Engineering and AI
Even if we succeed in designing machines capable of passing a Turing test, something arguably possible since 2014, we will face additional drawbacks. For example, what about humans’ own immoral actions? Clearly, such actions should not be acceptable models for the machines we design.
We could say, then, that we need our machines to be inherently safe and law-abiding, not merely capable of reasoning like human beings.
As Robin Hanson rightly commented to the media at the time: “In the early to middle ages, when robots are not much more capable than humans, I would like peaceful, law-abiding robots to be as capable as possible, in order to be productive partners. But everything changes over time.
“In a later era where robots are far more capable than people, it should be very similar to choosing a nation to retreat to. In this case, we don’t expect to have many skills to offer, so we primarily care that they are law-abiding enough to respect our property rights. If they use the same law to keep the peace with each other that they use to keep the peace with us, we could have a long and prosperous future in whatever strange world they evoke,” he says.
Hence the relevance of Computer Security Engineering and Artificial Intelligence as fields of study.
Simulated virtual worlds: one option
Beyond the various ideas woven over the years, David Chalmers’s 2010 proposal seems the most viable. The philosopher suggested that, for safety reasons, artificial intelligence systems should first be restricted to simulated virtual worlds until their behavioral tendencies could be fully understood under controlled conditions. Many agree with this position.
Meanwhile, others argue that if those machines are never exposed to humans, how they would actually react can never truly be known. We are therefore at a crossroads.
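To make the virtual-world idea concrete, here is a minimal sketch in Python of what such confinement could look like: a hypothetical agent that can act on a simulated environment only through a restricted interface, with every action logged for later review. All names here are illustrative assumptions, not an established safety framework.

```python
import random

class SandboxedWorld:
    """A toy simulated environment: the agent can act only through
    this interface, never on anything outside the simulation."""

    ALLOWED_ACTIONS = {"left", "right", "wait"}

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.position = 0
        self.log = []  # every action is recorded for external review

    def step(self, action):
        if action not in self.ALLOWED_ACTIONS:
            # the sandbox rejects anything outside its tiny action space
            raise ValueError(f"action {action!r} rejected by sandbox")
        self.log.append(action)
        if action == "left":
            self.position -= 1
        elif action == "right":
            self.position += 1
        return self.position

def random_agent(world, steps=10):
    """A placeholder 'agent' that acts randomly inside the sandbox."""
    for _ in range(steps):
        world.step(world.rng.choice(sorted(world.ALLOWED_ACTIONS)))
    return world.log

world = SandboxedWorld(seed=42)
trace = random_agent(world)  # the full behavioral trace, available for audit
```

Because the agent’s only effector is the sandbox’s `step` method, reviewers can replay `trace` and study the system’s tendencies before any real-world deployment is even considered.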
Ideally, each generation of a self-improving machine or robot should be able to produce verifiable proof of its safety for external review. It would be catastrophic to allow a safe intelligent machine to engineer an inherently unsafe upgrade for itself.
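As an illustration of that idea, the following Python sketch accepts a candidate upgrade only if it passes an explicit battery of safety checks before replacing the policy currently in service. The invariant used (bounded outputs) and all function names are simplifying assumptions; producing genuinely verifiable safety proofs is a far harder open problem.

```python
def current_policy(x):
    """The trusted policy in service: clamps output to [-1, 1]."""
    return max(-1, min(1, x))

def bounded_output(policy, inputs=range(-100, 101)):
    """Safety invariant (assumed for illustration): outputs stay in [-1, 1]."""
    return all(-1 <= policy(x) <= 1 for x in inputs)

def passes_safety_review(candidate, checks):
    """Accept an upgrade only if every safety check approves it."""
    return all(check(candidate) for check in checks)

checks = [bounded_output]

def unsafe_upgrade(x):
    return x * 2  # violates the bound, so the review must reject it

def safe_upgrade(x):
    return max(-1, min(1, x / 2))  # stays within the bound

installed = current_policy
for candidate in (unsafe_upgrade, safe_upgrade):
    if passes_safety_review(candidate, checks):
        installed = candidate  # only verified upgrades replace the old policy
```

The design choice worth noting is that verification happens outside the upgrading system itself, which is exactly the kind of external review the paragraph above calls for.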
On the other hand, we know that certain types of research, such as human cloning, certain medical or psychological experiments on humans, and research on animals, are restricted because they fail to meet ethical conditions. Similarly, certain types of AI research fall under the category of dangerous technologies and should be restricted, especially when we talk about strong AI.
The risk of strong AI
If strong AI is allowed to develop, there will be direct competition between superintelligent machines and people, and the machines will ultimately come to dominate thanks to their capacity for self-improvement.
Ted Kaczynski has his own theory on this: “It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions.
“Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control,” he adds.
While we continue to ask what the future holds for Artificial Intelligence, we can already draw some general conclusions. For example, the focus of research has to shift, once and for all, from the purely theoretical and philosophical to the active participation of practicing computer scientists.
At the same time, it is essential to develop limited artificial intelligence systems as a way to experiment with non-anthropomorphic minds and to improve current security protocols.
Fortunately, we are pleased to report that some groundwork has begun to appear at scientific venues aimed specifically at addressing AI safety and ethics issues.
What do you think of the ethics of machines and the rights of robots?
Share it with your friends!