Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence

Published on 19 September 2017

By Michael Guihot

My colleagues and I recently published an article arguing for the regulation of some applications of artificial intelligence (AI). In it, we argue that regulators need to consider different ways of shaping this fast-developing industry and to devise regulatory strategies based on the type and level of risk posed by each application. AI technology is developing at an extremely rapid rate. Four major factors have catalysed this explosion in development: (1) computational power continues to grow exponentially; (2) large datasets are now available for machine learning; (3) corporations such as Google, Amazon, Facebook, Apple, and Microsoft (GAFAM) have invested large amounts of money and resources in developing AI; and (4) the algorithms used to drive AI have consequently improved vastly. This cycle of investment, development, growth and further investment shows no signs of slowing any time soon.

This rapid growth in AI development has led some entrepreneurs and scientists to warn of the potentially devastating risks of runaway AI. That concern is far from universal, however, and other practitioners, entrepreneurs and scientists rightly highlight the benefits to be had from developing and applying AI in more areas. AI is already in use today in autonomous vehicles, speech and facial recognition, language translation, lip-reading, combatting spam and online payment fraud, detecting cancer, law enforcement, and logistics planning. It is also playing an increasing role in legal work, and firms large and small are adapting and adopting new technologies to better serve their clients’ legal needs.

Lawyers are trained to be cautious and to predict and avoid risks. When implementing this (relatively) new technology in new areas, we must consider the multitude of risks posed if proper care is not taken. There are enough concrete examples of existing problems in AI applications to warrant concern about the level of control that exists over them. Our paper outlines several issues that may require a regulatory response, including bias in law enforcement decisions made by AI systems; safety, particularly in relation to driverless cars; the lack of a human ‘heart’ when AI is relied on in judicial decision-making; the invasion of privacy across a vast number of applications; and the pressing problem of unemployment caused by increasing rates of AI-supported automation.

To protect society from these risks, some form of regulation is likely to be necessary. Some current laws can be adapted to suit the new technology but, as is often the case, regulation has not kept pace with developments in AI. This is partly because of dwindling government resources on the one hand and the increased power of technology companies on the other. As a result, it is now more difficult for traditional public regulatory bodies to control the development of new technologies such as AI.

In the regulatory vacuum, industry participants have begun to self-regulate by promoting soft law options such as codes of practice and standards. We argue that, despite the reduced authority of public regulatory agencies, the risks associated with AI require regulators to step up and begin to participate in the field.

In an environment where resources are scarce, governments, or public regulators, must develop new ways of regulating. Our paper proposes regulating the development of AI at the front end rather than adjusting liability after an incident has occurred (for example, through tort law). We suggest a two-step process: first, regulators can set the agenda, establish expectations and send signals to influence participants in AI development; we adopt the term ‘nudging’ to refer to this type of influencing. Second, public regulators must participate in and interact with the relevant industries. By doing this, they can gather information and knowledge about those industries, begin to assess risks, and be in a position to regulate first, if necessary, the areas that pose the greatest risk.

To conduct a proper risk analysis, regulators must know and understand the target of regulation well enough to classify the risks it poses. We have proposed an initial classification, based on the literature, that can help direct further research toward the most pressing issues and toward a deeper understanding of the various applications of AI and the relative risks they pose. By doing this, regulators can be in a position to act when and if necessary, and can help ensure that AI is developed and used for the benefit of society at large rather than merely for corporate gain.
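To make the idea of risk-based triage concrete, the short sketch below shows one way a regulator might score AI applications by the severity and likelihood of harm and sort them into tiers. It is purely illustrative: the tiers, thresholds, scores and example applications are hypothetical assumptions for this sketch, not the classification proposed in the paper.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical regulatory tiers; the paper's actual classification may differ.
    class RiskTier(Enum):
        LOW = 1     # monitor only
        MEDIUM = 2  # encourage codes of practice and standards
        HIGH = 3    # actively regulate first

    @dataclass
    class AIApplication:
        name: str
        severity: int    # 1-5: how serious is the harm if things go wrong?
        likelihood: int  # 1-5: how likely is that harm?

    def classify(app: AIApplication) -> RiskTier:
        """Assign a tier from a simple severity-times-likelihood score."""
        score = app.severity * app.likelihood
        if score >= 15:
            return RiskTier.HIGH
        if score >= 6:
            return RiskTier.MEDIUM
        return RiskTier.LOW

    # Illustrative examples only; the scores here are invented for the sketch.
    apps = [
        AIApplication("spam filtering", severity=1, likelihood=2),
        AIApplication("driverless cars", severity=5, likelihood=3),
        AIApplication("judicial decision support", severity=5, likelihood=4),
    ]
    for app in sorted(apps, key=lambda a: classify(a).value, reverse=True):
        print(f"{app.name}: {classify(app).name}")

Ranking applications this way would let a resource-constrained regulator address the highest tier first while merely monitoring the rest.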

Lawyers have an important role in developing a regulatory approach that balances the need to continue to develop new and beneficial technology with the safety and security of society. Technologists, developers, regulators, lawyers, academics and ethicists need to work together to ensure this happens.

About the Author

Michael was a lawyer and senior associate with Middletons Lawyers (now K&L Gates) in Sydney from 1999 to 2006. He then worked as in-house counsel in a global alcohol manufacturing company, specialising in the competition law, intellectual property and compliance aspects of the business, before starting a boutique commercial law firm in Sydney in 2007. Michael was managing partner of the firm, and also commercial practice group head for three years, until 2010, when he returned to university teaching. He joined QUT in 2015 as a senior lecturer and teaches Commercial and Personal Property Law, Commercial Remedies, and the new subject Artificial Intelligence, Robots and the Law.

Michael can be contacted at: michael.guihot@qut.edu.au or on 0438 717 291.

LinkedIn Profile: https://www.linkedin.com/in/michael-guihot-b51a4450/.

If you would like to chat with Michael about his blog and engage in more compelling, thought-provoking discussions about AI, don’t miss our AI in Legal Practice Summit in Sydney on October 20.

Register now.