
Student Entry
Student Name: Nicholas Slamka
Date of Submission
Topic Title: Teaching Robot Ethics
Ethics is a system of moral principles that affects how people make decisions and lead their lives (“Ethics: A General Introduction,” n.d.). Ethics covers the dilemmas of how to live a good life, our rights and responsibilities, the language of right and wrong, and making moral decisions (“Ethics: A General Introduction,” n.d.). While the ability to make ethical decisions has historically been thought of as a distinctly human trait, there has lately been considerable debate over whether robots can make ethical decisions and whether it is important for them to be able to do so.
Driverless Car Scenario
The case most often used to illustrate ethics in robotic machines is the driverless car scenario. In this scenario, a driverless car is headed toward an unavoidable crash and must either kill a group of pedestrians in its path or swerve in a way that kills the driver instead and spares the pedestrians. MIT researchers have found that most people support the utilitarian approach, in which the driver is sacrificed rather than the group of pedestrians, but that those same people would be far less willing to use the car if they were the driver in that case (Dizikes, 2016). The scenario also raises the question of who gets to make such ethical decisions: the government, the manufacturer, the driver, or perhaps some other party (“Can We Teach Robots Ethics?,” 2017). Because situations in which a collision cannot be avoided are inevitable, the driverless car must be able to decide how to resolve the dilemma of who should die, and many believe that to do so the machine must contain some kind of moral grounding.
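To make the utilitarian calculus in this dilemma concrete, below is a minimal sketch in Python of the “minimize total deaths” rule that MIT’s respondents endorsed in the abstract. Every name and number in it is hypothetical; it describes no real vehicle’s software.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_deaths: int  # estimated fatalities if this maneuver is taken

def choose_maneuver(options):
    # Pick the maneuver that minimizes expected fatalities, regardless
    # of whether the occupant or the pedestrians bear the risk.
    return min(options, key=lambda m: m.expected_deaths)

options = [
    Maneuver("continue straight", expected_deaths=5),   # the pedestrians die
    Maneuver("swerve into barrier", expected_deaths=1), # the occupant dies
]
print(choose_maneuver(options).name)  # -> swerve into barrier

A rule this simple also shows why the question of who decides matters: whoever writes the decision function, and chooses what it minimizes, is the actual moral decision-maker, not the car.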
Machine Learning
One way it is believed robots could learn to approach ethical dilemmas is through a process called machine learning. In this process a robot is first programmed to promote certain universal ethical principles, such as avoiding suffering and promoting happiness (which the robot must also learn to distinguish), and is then placed in various scenarios where it can learn to apply these principles to new situations over time (“Can We Teach Robots Ethics?,” 2017). This is seen, for example, in carebots, robots designed to help the sick and the elderly. In a scenario where the patient refuses to take their medication, the robot may respect that decision at first, since the patient’s autonomy is an important value; but if enough time elapses that the patient’s life could be in danger, the robot knows to get help, because preserving the patient’s life is the higher ethical value (“Can We Teach Robots Ethics?,” 2017) (a minimal sketch of this escalation rule appears at the end of this section). Using machine learning to teach robots ethics, however, comes with several foreseen challenges. One is that we need an agreed-upon consensus on which ethical values should serve as the parameters robots use to make decisions. For example,
“Germany’s Ethics Commission on Automated and Connected Driving has recommended to specifically programme ethical values into self-driving cars to prioritize the protection of human life above all else. In the event of an unavoidable accident, the car should be ‘prohibited to offset victims against one another’. In other words, a car shouldn’t be able to choose whether to kill one person based on individual features, such as age, gender or physical/mental constitution when a crash is inescapable.” (Polonski, 2017).
Once such an ethical consensus on what is right and wrong is reached, human morality then has to be crowdsourced so that machines can learn how to apply different ethical approaches in different scenarios (Polonski, 2017). While this approach has been found to work in programs such as the MIT Moral Machine, it is of course debatable whether it really amounts to teaching robots ethics. As ethicist Aimee van Wynsberghe argues, machines are not the actual moral agents making decisions in any of these scenarios; they are merely making calculations to determine right and wrong based on data that humans algorithmically instill in them (“Can We Teach Machines A Code of Ethics?,” 2019). This is an important distinction: humans can make ethical decisions that we independently formulate, while robots cannot, which raises the question of whether they can be said to be truly learning ethics at all.
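As a concrete illustration of the carebot behavior described earlier, here is a minimal Python version of that escalation rule. The eight-hour threshold and the wording of the responses are invented for illustration; they come from no clinical guideline or real product.

from datetime import datetime, timedelta

# Assumed limit before a missed dose becomes dangerous (hypothetical).
DANGER_THRESHOLD = timedelta(hours=8)

def respond_to_refusal(dose_due_at, now):
    # While the delay is still safe, the patient's autonomy governs.
    if now - dose_due_at < DANGER_THRESHOLD:
        return "respect refusal; remind the patient later"
    # Past the threshold, the higher value of protecting the
    # patient's life overrides autonomy.
    return "alert a caregiver; the patient may be in danger"

due = datetime(2019, 5, 1, 8, 0)
print(respond_to_refusal(due, datetime(2019, 5, 1, 9, 0)))    # respects refusal
print(respond_to_refusal(due, datetime(2019, 5, 1, 17, 30)))  # alerts caregiver

Notably, everything here that looks like moral judgment, from the ranking of life above autonomy to the threshold itself, was written in by a human, which is exactly van Wynsberghe’s point: the machine calculates; it does not morally reason.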
Learning Proper Ethical Lessons
A major concern with having robotic machines learn ethics is that, because they rely on crowdsourced data, they can end up learning the wrong lessons. This is problematic in that they can amplify racial and gender biases, and the structural discrimination that comes with such bias, in their operation. For example, just as flowers are often linked to pleasantness and insects to unpleasantness in models trained on web text, the word “female” is more often associated with the arts, humanities, and home occupations, while the word “male” is more closely associated with mathematics and engineering fields (Devlin, 2017). So when ethical machines try to help those looking for employment or a future career, they are more likely to push people into stereotypical roles based on their gender instead of giving an unbiased overview. Additionally, because such models are based on human behavior, machines supposedly trained to act ethically when reviewing job applications have been found to be 50% more likely to extend an interview offer to white Americans than to African-Americans (Devlin, 2017). In some of the worst cases, ethically trained machines have been found to “deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected” (Polonski, 2017). For a machine to be effectively “taught” to act in a fair and ethical manner, it must be programmed with a precise conception of fairness by those engineering it, free of the (likely unintentional) biases they may hold, which could be a nearly impossible task to complete.
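The gender associations described above are not anecdotal; they can be measured. Below is a minimal Python sketch of the standard technique: comparing cosine similarities between word vectors. The three-dimensional vectors are invented purely for illustration; real studies use embeddings with hundreds of dimensions trained on billions of words of web text.

import math

def cosine(u, v):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors invented to exhibit the bias pattern reported above.
vectors = {
    "male":     [0.9, 0.1, 0.2],
    "female":   [0.1, 0.9, 0.2],
    "engineer": [0.8, 0.2, 0.3],
    "artist":   [0.2, 0.8, 0.3],
}

def gender_bias(word):
    # Positive score: closer to "male"; negative: closer to "female".
    return cosine(vectors[word], vectors["male"]) - cosine(vectors[word], vectors["female"])

for word in ("engineer", "artist"):
    print(word, round(gender_bias(word), 3))
# "engineer" scores positive (male-associated), "artist" negative.

Any hiring or career-recommendation system built on top of vectors like these inherits the skew automatically, which is how the stereotyped steering described above can happen without anyone intending it.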
References
Can We Teach Machines A Code of Ethics? (2019, March 27). Retrieved from https://www.forbes.com/sites/insights-intelai/2019/03/27/can-we-teach-machines-a-code-of-ethics/#71dce2627a8f
Can We Teach Robots Ethics? (2017, October 15). Retrieved from https://www.bbc.com/news/magazine-41504285
Devlin, H. (2017, April 13). AI programs exhibit racial and gender biases, research reveals. Retrieved from https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals
Dizikes, P. (2016, June 23). Driverless cars: Who gets protected? Retrieved from http://news.mit.edu/2016/driverless-cars-safety-issues-0623
Ethics: A General Introduction. (n.d.). Retrieved from http://www.bbc.co.uk/ethics/introduction/intro_1.shtml
Polonski, V. (2017, December 19). Can we teach morality to machines? Three perspectives on ethics for artificial intelligence. Retrieved from https://medium.com/@drpolonski/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence-64fe479e25d3