Artificial Intelligence, Big Data and the Rule of Law

November 1, 2017 HRBDT

On 9 October 2017, Lorna McGregor participated in a debate on ‘Artificial Intelligence, Big Data and the Rule of Law’, organised by the Bingham Centre for the Rule of Law in partnership with The Law Society.

The following is taken from the event report, which you can find here:

“Prof Lorna McGregor elaborated on the opportunities and challenges posed by artificial intelligence technology and big data for human rights. She underlined that the issues at stake concern all of our human rights, not only privacy. Privacy nonetheless plays a critical role and is the linchpin to the enjoyment of all our rights, and even to what it means to be human. If privacy is at risk of being violated, it can have a chilling effect on the way we live. She emphasised that the aim is not to push back against the use of big data and artificial intelligence, but rather to understand their human rights implications and opportunities. To illustrate this, she pointed to the UN Sustainable Development Goals (SDGs): it is often said that big data and artificial intelligence are crucial to advancing the SDGs and to ensuring that no one is left behind. Another part of this debate is therefore about ensuring that big data and AI are not used solely to the benefit of the global north or of certain companies, but that we all share in the benefits. Prof McGregor emphasised that the goal is therefore to understand the risks and to ensure that a human rights framework is in place to protect against them, while still allowing us to benefit from the innovation that is taking place and that will continue to take place.

Prof McGregor then focussed on the impact of algorithmic decision-making and on algorithmic accountability. She noted that algorithmic decision-making carries significant human rights implications. As algorithms become increasingly complex and sophisticated, and as technology advances, we are moving beyond algorithms that are easy to understand. Coupled with automation and the possibility of autonomous decision-making, this is again changing the nature of the algorithmic landscape and of its interaction with big data.

She then discussed predictive policing, noting that while algorithmic decision-making can be a useful tool for police when deciding how to allocate resources to reduce crime, studies raise concerns that it will exacerbate existing inequalities and discrimination in policing, with certain offenders and certain communities being over-policed or discriminated against. Prof McGregor also noted the US case in which algorithms were used in judicial decision-making. Here she underlined not only the way in which these risk assessments are undertaken and the risk of discrimination within them, but also that when the use of algorithms for risk assessment was challenged, the response was that the outcome was sufficient, based on the idea that the technology was somehow more predictable and neutral. We therefore have to understand that technology can be as fallible as humans. Prof McGregor also noted a recently reported study suggesting that facial recognition software can be used to identify an individual’s sexuality (see e.g., media coverage here). Putting issues around accuracy to one side, she noted that this also raises questions about whether it is appropriate to use algorithmic decision-making in all circumstances, or whether there are red lines we ought to be considering in terms of where this kind of software should not be employed.

Prof McGregor noted that these examples reveal a wide range of human rights issues, from the impact algorithms have on liberty, to their effects on surveillance and privacy and the resulting chilling effect. Discrimination is a significant risk here, both through feeding algorithms with discriminatory data and through algorithms themselves acting in a discriminatory way. She also noted inequality and discrimination in terms of who is subject to algorithmic decision-making and who still gets access to a human decision-maker, and asked where the checks and balances, and fairness, are in these processes. She raised further questions about how easy it is to challenge algorithmic decision-making, highlighting the US case noted above: although there was the possibility of challenging the decision, there seemed to be a strong trust in the objectivity of technology and of algorithmic decision-making. Even where there is an element of human involvement, we also need to question how meaningful that involvement is when algorithmic decision-making is taking place. This is particularly the case where algorithms are becoming more autonomous, complex and difficult to understand.

Lastly, Prof McGregor outlined some ideas as to how we might move forward. Whilst we are late in coming to the table, it is not too late to think about what regulatory frameworks might look like, and, for example, what multi-stakeholder frameworks would mean at the national, regional and international level, in order to address some of the human rights impacts we are seeing. She also underlined that while the technology is already out there, some of its current negative effects can be rolled back. The starting point is to understand the bigger situation we are facing: a world driven by big data, algorithmic decision-making and the increasing presence of artificial intelligence, and how these are interdependent and intersect.

Prof McGregor noted that many discussions are taking place about ethics and ethical approaches, about how to make algorithms more transparent, and about how to deal with proprietary interests in algorithms. There is also much debate about whether the GDPR requires explainability of algorithmic decision-making, alongside questions around trust and fairness. All of these elements are important when thinking about algorithmic accountability. However, Prof McGregor emphasised that when we focus on ethics, we also need to remember responsibilities. We need to consider what the business and human rights framework means in this context. Are we looking for voluntary engagement or for a different model? We also need to think about how responsibility works across the algorithmic life cycle, especially when self-learning is involved: when is the developer responsible, and how does responsibility operate throughout the life cycle of an algorithm?

Prof McGregor emphasised that while ethical approaches and an understanding of the responsibilities of businesses and states are crucial to dealing with these issues, we also need to look at existing human rights frameworks in this context. She noted that international human rights law already provides a framework addressing prevention, monitoring and oversight, remedies and accountability. At the prevention stage, we are hearing discussions about how developers can build ethics in from the outset. However, we need to think about what the criteria would be here: what ought to be built into an algorithm, and are there any red lines where algorithmic decision-making ought not to form part of a particular public or private decision, given the risks to human rights? There are also questions around the design of impact assessments: how can we design impact assessments that are ongoing and can trace an algorithm as it changes to see whether there are any unintended human rights consequences, how can that be monitored, and how do we design oversight bodies that can work with proprietary interests?

Lastly, Prof McGregor highlighted the question of remedies – from a human rights perspective we also need to ensure there are models of remedies that work for individuals and groups.”

Please see here for more information about the event.