Artificial intelligence (AI) is cheaper, easier to deploy, and more complex than ever before. It is used in healthcare, insurance, accounting, smart homes, and smart cities; for screening job applicants; as a personal assistant; and in numerous other human endeavors. But what are the ethical issues in a technology so interwoven with human affairs?
Questions are being asked about the jobs that AI replaces, the level of intelligence and awareness of AI, threats to human dignity, liability, morality, and even the rights of AI. There is no doubt that AI will replace manual jobs. Robots will replace humans in factories and in dangerous situations, even in war. They will replace drivers of vehicles. They will affect white-collar workers such as accountants, auditors, and perhaps even judges. They will also replace application developers and coders; the list goes on. Questions are being asked about AI bias, racism, and sexism. Who is liable for an AI mistake? An entire discipline has even emerged around designing artificial moral agents (AMAs): AI agents built to act morally.
When AI shortlists job applicants, conducts facial recognition, or tutors students, what are the biases involved, and how visible are they? AI is also self-improving, which means its original programming may have unintended consequences. Nick Bostrom, the Swedish philosopher who wrote Superintelligence, argues that, in the long term, a general AI pursuing its goals could even bring about human extinction.
These are weighty issues that go beyond the immediate future and that may affect humanity as a whole.
The Partnership on AI (www.partnershiponai.org) was formed by the major tech companies to research issues in AI safety, transparency, and labor; the economy; the people/AI interface; social influence; and the societal challenges posed by AI. It is hoped that this research will inform AI law and regulation, which remains patchy. For instance, San Francisco banned facial recognition software in May 2019; Illinois introduced regulations controlling AI hiring practices at the same time; and the Algorithmic Accountability Act of 2019 is being debated in the House Committee on Energy and Commerce. In February 2019, President Trump issued an executive order, "Maintaining American Leadership in Artificial Intelligence," which requires the Office of Management and Budget (OMB) to issue guidance to regulatory agencies within six months.
The European Union issued a set of guidelines in April 2019 on how companies and governments should develop ethical applications of AI. These guidelines cover human autonomy and dignity, safety and robustness, privacy and data governance, fairness and bias, and accountability, among other areas. The EU General Data Protection Regulation (GDPR) deals with some of these issues.
The recent focus on AI ethics as a separate activity (exemplified by the recent funding for an Institute for Ethics in AI at the University of Oxford) has puzzled some researchers, because ethics clearly should apply in all AI work. The problem is that ethical lapses have caught out many experts in the field, and a separate entity can help redress that lack of attention. An external agency can approach AI ethics from an unbiased perspective and advise the broad AI research community, helping researchers avoid ethical traps.
AI has reached a cusp: big data, internet ubiquity, computing power, and machine learning are allowing AI to advance rapidly. Will we be able to keep up and manage the ethical, social, governmental, and legal questions that AI poses?