Artificial intelligence (AI) is entering a new phase in which the narrative is shifting from hype to reality. Prominent figures in science and technology, including Stephen Hawking, Elon Musk, and Bill Gates, have already voiced concerns and fears about AI. It is a powerful, multipurpose technology with the potential to transform industries, education, the public sector, the way we do business, the way we work, and possibly every aspect of our lives.
While it is too early to lay down stringent rules to regulate AI, several governments and policy-makers across the globe have already initiated efforts through public consultations and discussions among policy-makers, academia, companies, and associated technical bodies on how to approach AI regulation.
Voice/speech recognition, video surveillance, and network/IT operations monitoring and management are already the three largest use cases for AI, according to the Artificial Intelligence Market Forecasts report published in 1Q19 by Ovum's sister firm Tractica. This shows how AI's growth has accelerated: the technology is starting to solidify within the consumer, enterprise, government, and defense sectors as organizations move from merely talking about AI to actually deploying and building solutions.
Technology moves fast, but AI regulation is still at a very early stage. In one sense this is positive: imposing regulations on an emerging technology too early would harm innovation and investment. On the other hand, delay could lead to a competition crisis and antitrust motions, and might create legal problems due to regulatory uncertainty. Even Google has recently published its Perspectives on Issues in AI Governance, in which it argues that although self- and co-regulatory approaches informed by current laws and by perspectives from companies, academia, and associated technical bodies have been largely successful at curbing inopportune AI use, this "does not mean that there is no need for action by government." The paper is a call for governments and civil society groups worldwide to make a substantive contribution to the AI governance discussion.

Google highlights five areas where government, in collaboration with wider civil society and AI practitioners, has a crucial role to play in clarifying expectations about AI's application on a context-specific basis: explainability standards (minimum acceptable standards for different industry sectors and application contexts); safety considerations; requirements for human-AI collaboration (decision-making should not be fully automated); general liability frameworks (sector-specific safe-harbor frameworks and liability caps or insurance alternatives); and approaches to appraising fairness.
Ovum has identified seven key AI regulation challenges (see Figure 1). AI security, ethics, privacy, controllability, and enforcement are the tip of the iceberg that policy-makers are struggling with. Many of these areas require international coordination and agreement, which seems to be the main obstacle to a comprehensive AI regulatory framework, given the race between several countries to lead in this strategic technology. The main competition is clearly between the US and China, but many other countries have a stake in the AI race. Although challenging, efforts to achieve an international consensus on AI regulation should be a top priority, to ensure that AI fulfills the positive half of Stephen Hawking's prediction and becomes the "best," rather than the worst, thing to happen to humanity.
Figure 1: Key AI regulatory challenges
Straight Talk is a weekly briefing from the desk of the Chief Research Officer. To receive this newsletter by email, please contact us.