4 key recommendations for Scotland’s AI strategy
In late summer 2019 the Scottish Government announced that it would develop an AI Strategy for Scotland. The aim, as Kate Forbes MSP, Cabinet Secretary for Finance, puts it, is to: “…develop a strategy that is of Scotland, by Scotland and for Scotland.” The strategy’s scoping document was out for consultation until 22 May 2020. Open Rights Group (ORG) responded to that consultation, and this blog summarises the submission.
Trying to develop a strategy for a branch of technology that encompasses everything from image recognition in facial recognition technology, to natural language processing in chatbots, to algorithmic decision-making for proposing criminal sentences is an enormous task. It will also be an unhelpful one unless the strategy digs deeper into three key questions:
- What form of AI is going to be used?
- What sector or context will it be used in?
- What are the potential impacts of its use on the sector and the fundamental human rights of Scottish citizens?
Without exploring those key areas, or creating a framework for them to be explored, the strategy risks being nothing more than a high-level talking shop that remains inaccessible to most people in Scotland: something the team behind the strategy is clearly keen to avoid.
ORG will be encouraging the strategy to make the most of this opportunity for a public debate on these key areas, and we encourage as many of you as can to do the same.
There are 4 key recommendations included in ORG’s response:
- Assess the legal framework that underpins the form of AI to be adopted, specifically sector-based regulation, to ensure compatibility with fundamental human rights.
- Conduct an assessment of the individual rights framework in Scotland as it relates to AI, and seek to strengthen the rights-based system.
- Review the intellectual property status of public-private partnerships in the field of technical procurement, adjusting it to preserve public benefit and limit the risk of public data being exploited for private benefit.
- Conduct comprehensive sectoral impact assessments prior to the introduction of any AI initiative, exploring what effect it will have on Scotland’s people, followed by impact evaluations.
Behind these recommendations lie those key questions (What technology? What sector? What effect will it have?). AI does not get to exist beyond the law or our existing frameworks. It is a group of technologies that have to be interpreted through the standards we have set in Scotland and the principles that we believe in.
Impact assessments and assessing legal frameworks
A good example of this approach in Scotland is the Justice Sub-Committee’s report on Police Scotland’s use of facial recognition technology, released earlier this year. Police Scotland are not actively using facial recognition, but an intention to do so was expressed in the Policing 2026 strategy document from 2017. The Justice Sub-Committee picked up on that intention and held a quick consultation, ultimately deciding that there was no justification for the technology’s use and cautioning against adopting it without proper safeguards, a conclusion Police Scotland, to their credit, agreed with.
This investigation was proactive, considered the effect the technology would have in Scotland, and reflected on existing standards and principles, such as the lack of a legal framework and the overriding need to safeguard the rights to privacy and freedom of association. It should be seen as a leading example of how a proposal for the use of AI should be considered.
The AI strategy could set out an impact assessment process by which a form of AI would be assessed prior to its adoption. That process should also allow for a decision that it is not the time to adopt the AI, just as the Justice Sub-Committee decided. The assessment should incorporate wider impact assessments, such as economic effects or effects on the workforce. A host of impact assessments are in the process of being scoped and piloted; Scotland could establish best practice for such assessments.
From individual rights to collective accountability
Alongside reflecting on the legal frameworks in place, Scotland should explore the individual rights framework in place. While individual rights are a part of any legal framework, a specific focus on how an individual could hold a decision made by AI to account is necessary precisely because a strong individual rights framework is lacking.
While the General Data Protection Regulation does provide a right not to be subject to automated decision-making where it produces a legal or similarly significant effect, and access to meaningful information about the logic involved, these provisions may not be up to the task. Given the complexity of the decisions being made, the current framing of individual rights is unlikely to offer a complete remedy for algorithmic harms. Some have recommended considering the group harms of algorithmic decision-making and exploring remedies aimed at empowering or protecting groups, such as a “supercomplaint” system that would allow third-party organisations to take action against harms caused by algorithms.
Scotland’s strategy should explore the individual rights framework in place and seek to articulate new ways of improving accountability, whether by individuals or groups.
Where is the public benefit?
A strategy that only seeks to find forms of AI and add them to sectors is not a full strategy. We need to consider the effect of AI in every form, from legal frameworks, to individual rights, to economic models, to knowledge sharing. For this reason, we recommend that Scotland consider the intellectual property framework in place, in particular who benefits from the deployment of AI in the public sector by private partners.
AI requires data to be useful. The data provided creates the insights, or value, that the AI is able to extract through its processing; without either element, the beneficial outcomes do not arrive. However, if private parties are in a position to use public data to train their systems, the public sector may see some efficiency gains, but what is really improving is the learning held within the particular AI system. Even where no public benefit such as efficiency materialises, the system has still learned something that can be retained and retooled for the next contract, while the public benefit never arrives.
The most famous example of this is DeepMind’s collaboration with the NHS and the Royal Free Hospital. According to contracts available online, DeepMind claims ownership of any “Developed IP” in its dealings with the NHS. What does this mean for new versions of technologies created on the basis of patient data and NHS resources? Does DeepMind retain all the value derived from that insight? If DeepMind produces no openly available research paper, the current form of providing public benefit, then has the public lost its benefit and had its personal data used for someone else’s gain?
If what a model learns cannot be separated from the model itself, then that question needs to be answered before Scotland’s data is opened up.
The strategy should consider whether public procurement, and the trade secrecy protections available to private companies operating in the public sector with AI techniques, truly represent the creation of an “open, connected” society that makes a positive contribution internationally. This commitment comes from the strategy’s alignment with Scotland’s National Performance Framework, a set of national outcomes that Scotland aims to achieve involving national and local government, businesses, voluntary organisations, and people living in Scotland.
This first consultation was a series of questions on a scoping document, high level to say the least. It is vital that the strategy takes the time to come down from that high level and explore what it would really mean in Scotland for forms of AI to be adopted.
AI does not spread its effects equally. Scotland should use the strategy to explore what those effects are for different parts of Scotland, on our legal framework and on our principles as a people. It is only by fully assessing all of these aspects, and making those assessments repeatable, mandatory, and in conversation with the public, that we will be able to look back and say this strategy was shaped “by Scotland, and for Scotland.”
========
ORG’s Scotland Director has been invited to join the Ethics and Regulatory Frameworks Working Group for the strategy, an offer which he has accepted. It is vital that we have the voice of human rights and data privacy advocates in the room. ORG encourages everyone to engage fully with the strategy’s public engagement process.