How confident can we be in leaving life-and-death decisions to robots? A robot will only do what it is programmed to do, so it falls to humans to code how it should handle sensitive data and act when lives are at stake. In other words, we have to anticipate these scenarios in advance and instruct robots on how to decide in life-and-death situations. From self-driving cars to military drones, highly critical matters are being handed over to robots, and we have reached a point where many people speculate that AI-driven robots will, quite literally, be executing life-and-death decisions.
Decision-making ability of Self-Driving Cars
Self-driving cars are expected to reduce the number of accidents on our roads, as McKinsey & Company reports have claimed. Yet accidents will still happen, and we have to think through how the robots should be coded to respond. We also have to work out who is accountable for those coding decisions, whether it is consumers, politicians, the marketplace, the insurance industry, or someone else. If a self-driving car encounters an obstacle in the road, it can respond in several ways: it might protect itself and its passengers, or it might swerve into another lane and put other people at risk.
So should the decision about which way to swerve depend on who is sitting in the cars? Perhaps the person who would be killed is a parent, or a prominent scientist. And what if there are small children in the self-driving car? Conceivably, the decision could be to avoid the obstacle where possible and otherwise to choose at random, as if flipping a coin. These are the core dilemmas we have to resolve as we design and build AI-driven systems. A further wrinkle is that these decision-making algorithms must also weigh outcomes short of death, such as brain or spinal injuries and other disabilities.
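To make the dilemma concrete, here is a minimal sketch of what a cost-based chooser could look like. It is illustrative only: the maneuvers, harm weights, and outcome probabilities are all invented here, and no real vehicle is programmed this way.

```python
# Hypothetical sketch of a cost-based maneuver chooser. The maneuvers,
# harm weights, and outcome probabilities are invented for illustration
# and do not reflect any real vehicle's policy.

HARM_WEIGHTS = {"fatality": 1.0, "severe_injury": 0.5, "minor_injury": 0.1}

def expected_harm(outcome_probabilities):
    """Probability-weighted harm, summed over the possible outcomes."""
    return sum(HARM_WEIGHTS[outcome] * p
               for outcome, p in outcome_probabilities.items())

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm.
    Ties are broken arbitrarily, which is itself a moral choice."""
    return min(options, key=lambda m: expected_harm(options[m]))

if __name__ == "__main__":
    options = {
        "brake_hard":   {"severe_injury": 0.2, "minor_injury": 0.5},  # 0.15
        "swerve_left":  {"fatality": 0.1, "minor_injury": 0.2},       # 0.12
        "swerve_right": {"fatality": 0.05, "severe_injury": 0.3},     # 0.20
    }
    print(choose_maneuver(options))  # prints "swerve_left"
```

Everything contentious hides in the harm weights and the estimated probabilities; the arithmetic itself is trivial, which is exactly why deciding how to code these values is a moral question and not just an engineering one.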
Military drones hunting down targets
Weaponry is another sphere where artificial intelligence and robotic automation are on the rise. An estimated 30 or so nations are reported to be investing in armed drones, although these are still operated remotely. Many drones can fly and land autonomously, but any use of their weapons is entirely commanded by human operators. Advances in the applications of artificial intelligence are improving future weapons that can find, identify, and decide to engage targets automatically.
AI-based weapons that find their own targets would be the final step in this progression of weapons automation, perhaps within the coming decades. Since World War II, countries have used "fire-and-forget" weapons such as torpedoes and missiles, which cannot be stopped once launched.
Even these, however, are not autonomous weapons: they do not decide which targets to engage. A human makes the decision to destroy the target, and the weapon simply carries it out. Some weapons do use automation to help humans make the firing decision; today's radar systems, for instance, use automation to help classify objects, but a human still makes the call to fire in the vast majority of cases.
Robots With Morality: Can Life-And-Death Decisions Be Coded?
Microsoft recently revealed just how complex building a moral robot can be. Its chatbot Tay, created to converse like a teenage girl, began behaving like a Nazi-sympathizing racist in less than twenty-four hours on social media. To be fair, Tay was never designed to be a model of morality. But plenty of robots are engaged in work with clear ethical consequences.
As robots become more sophisticated, the moral decisions facing them will become more complicated. This raises the question of how to code morality into robots, along with a deeper one: can we trust robots with moral decisions at all?
Broadly speaking, there are two approaches to building a moral robot. The first is to settle on a particular moral rule (maximize happiness, for instance), write an algorithm that encodes that rule, and design a robot that obeys the code exactly. The difficulty is picking the right rule: every ethical law, including the one just mentioned, has a multitude of exceptions and counterexamples. A rule of maximizing total happiness, for example, can endorse sacrificing one person for the modest benefit of many.
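Here is a toy sketch of this first approach, assuming the invented "maximize total happiness" rule above. The actions and happiness scores are hypothetical; the point is that the robot follows the coded rule exactly, counterexamples included.

```python
# Toy sketch of the rule-based approach: hard-code one moral rule
# ("maximize total happiness") and follow it without exception.
# The actions and happiness scores are hypothetical.

def total_happiness(action_effects):
    """Sum the happiness change an action causes for every person affected."""
    return sum(action_effects.values())

def choose_action(candidate_actions):
    """Follow the coded rule exactly: pick the action with the greatest
    total happiness, with no regard for how that happiness is
    distributed, which is exactly where the counterexamples live."""
    return max(candidate_actions,
               key=lambda a: total_happiness(candidate_actions[a]))

if __name__ == "__main__":
    # One action benefits several people slightly at one person's great expense.
    actions = {
        "do_nothing":      {"alice": 0,   "bob": 0, "carol": 0},
        "sacrifice_alice": {"alice": -10, "bob": 4, "carol": 4, "dave": 4},
    }
    print(choose_action(actions))  # "sacrifice_alice": the rule endorses it
```

The algorithm obeys its rule perfectly; the trouble is that a perfectly obeyed rule can still be the wrong rule.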
The second is to build a machine-learning robot and train it to respond to many scenarios until it produces the desired behaviour. This is roughly how humans learn ethics, but it raises the question of whether humans are good ethics teachers. If the people who chatted with Tay were the ones training an AI-driven robot, it would be unlikely to develop the right moral features.
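A minimal sketch of this second approach might look like the following, using scikit-learn and entirely invented scenario features and labels. Note where the morality actually lives: in the labels the human teachers supply.

```python
# Toy sketch of the learning-based approach: train a classifier on
# human-labelled scenarios and let it generalize. The features and
# labels are invented for illustration.

from sklearn.tree import DecisionTreeClassifier

# Each scenario: [lives_at_risk, is_intentional_harm, consent_given]
scenarios = [
    [0, 0, 1],  # harmless and consensual
    [1, 1, 0],  # deliberate harm, no consent
    [1, 0, 1],  # risky but consensual (e.g., surgery)
    [0, 1, 0],  # deliberate, non-lethal harm
]
labels = ["permissible", "impermissible", "permissible", "impermissible"]

model = DecisionTreeClassifier().fit(scenarios, labels)

# The model now judges an unseen scenario. Its verdict is only as good
# as the examples and labels its human teachers provided.
print(model.predict([[1, 1, 1]]))
```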
Putting theories into action
How to design moral robots is not merely a speculative question; philosophers and programmers are working on the problem right now and expect results in the coming decades.
The Georgia Institute of Technology, for one, is working to create robots that comply with international humanitarian law. In this setting there is already a large body of laws and instructions for robots to obey, designed by humans and ratified by states. Not every case fits the rules neatly, but the Georgia Tech researchers believe their project can succeed in the coming decades.
They found that a robot trained with the ethical response to just 4 scenarios was able to generalize and make an appropriate moral decision in the remaining 14 cases. So it does appear possible to design robots with moral features, but should we pursue that goal?
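As an illustration of what such a train-on-few, test-on-the-rest experiment measures (this is a hypothetical sketch, not the Georgia Tech system), generalization can be scored like this:

```python
# Hypothetical sketch of scoring generalization from a few trained
# scenarios to the rest, echoing the 4-train / 14-test split above.

from sklearn.neighbors import KNeighborsClassifier

def generalization_accuracy(features, labels, n_train=4):
    """Train on the first n_train scenarios, test on the remainder,
    and return the fraction of held-out scenarios judged correctly."""
    model = KNeighborsClassifier(n_neighbors=1).fit(
        features[:n_train], labels[:n_train])
    predictions = model.predict(features[n_train:])
    correct = sum(p == t for p, t in zip(predictions, labels[n_train:]))
    return correct / len(predictions)
```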
Humans make mistakes too. Even so, it is doubtful that robots will be able to resolve complicated moral decisions for the foreseeable future. And to this day, we are far from certain that we should hand the reins of decision-making over to robots, especially when the cost could be human lives.