When a decision is made that affects us, we hope the people involved are thinking carefully about its impact. But it isn’t always a human who is doing the legwork. Algorithms, including complicated AI systems, are playing an increasing role in decision-making. This can bring many positives, but we need to ensure that their use doesn’t limit the duty of care and careful consideration that we expect in decision-making. We believe that collaborating closely with the people who may be affected by decisions is a critical part of ensuring that algorithms are used responsibly within decision-making. To make sure this happens, we have helped create a set of principles that spell out how organisations should collaborate with residents around the use of these technologies.
We are hugely grateful to everyone who has worked with us to create these principles. You can read the principles in full here.
What are automated decision-making systems?
Algorithmic tools follow set rules (called algorithms) to turn data fed into them into outputs. We have used the term automated decision-making (ADM) system to refer to any use of an algorithmic tool that has a significant impact on how a decision is made.
This covers a wide range of tools, from those following straightforward rules to complicated systems that use artificial intelligence or machine learning. It also covers a range of ways that humans are (or are not) involved in the process of making decisions, or in designing, training, maintaining, and reviewing these systems.
What difference will these principles make?
We think collaborating closely with residents will be a vital part of ensuring automated decision-making systems are used appropriately.
We hope these new principles will grow this practice by:
- Providing clear and accessible guidance for those exploring the use of these technologies in the public sector, drawing from, but going beyond, the wide range of existing legislation and guidance.
- Helping those outside the public sector to hold organisations to a high standard that they can trust.
- Developing an ambitious, but feasible, standard of good practice. Encouraging organisations to sign the principles as a way of committing to work towards this.
The principles have been explicitly focussed on the public sector within Greater Manchester. But they are intended to be a resource that those working in other places and contexts can pick up and use, adapting as needed.
How do they fit into the current landscape?
There is a wide range of evolving legislation relating to the proper use of automated decision-making systems and to the proper use of community engagement. There is also a range of guidance, such as:
- The UK Government’s Algorithmic Transparency Recording Standard
- CDDO’s Data Ethics Framework
- The Alan Turing Institute’s Understanding Artificial Intelligence ethics and safety
- Principles of Community Engagement, such as the Scottish Community Development Centre’s National Standards for Community Engagement
The principles have been designed to give straightforward guidance that builds upon, and goes beyond, these existing frameworks.
How were these principles created?
These principles were created in partnership with people based within, and outside, the public sector in Greater Manchester. This work was funded by the Greater Manchester Combined Authority, and came out of earlier discussions with Foxglove Legal.
- Scoping interviews and a mixture of online and offline workshops were used to gather initial ideas about what should be in a set of principles. These were advertised through local networks. We spoke to just over 30 people through these activities, roughly half from the public sector and half from outside it.
- Our team pulled these ideas into a draft set of principles, which were shared back with participants for their feedback, and iterated based on what we heard.
- A near-finished version of the principles was shared with a wider group of people at an online event, where we created space to hear a final round of feedback and ideas for achieving impact with the principles. This included people interested in the topic beyond Greater Manchester, and just over 40 people joined the call.
A key challenge in creating these principles was striking a balance between being:
- Ambitious but feasible (avoiding an ‘all or nothing’ approach to the actions required)
- Clear in their requirements, whilst concise and accessible to read
- Aligned with existing requirements
We found aligning with the language used in the UK Government’s Algorithmic Transparency Recording Standard particularly helpful for finding a way through this.
What’s in the principles?
The principles set out an ambitious, but feasible, standard for public sector organisations. They are based around a wide-ranging definition of automated decision-making systems. They are split into:
- Guiding values for the use of ADM systems
- 8 core requirements for collaborating effectively with communities on the use of ADM systems
- 5 principles outlining what public engagement around the use of ADM systems should look like
How you can help
We need your help to have the most impact with these principles. We are particularly interested in hearing from people who:
- Work in the public sector and are interested in encouraging their organisation to sign up to the principles.
- Want to explore opportunities to apply the principles to settings beyond the public sector.
Please get in touch to find out more about the principles, or to explore how we can help. You can reach us via hello@opendatamanchester.org.uk or by calling +44 (0) 161 885 3185.
We also need your help to spread the word about these principles. Please consider sharing this post with others in your networks to help us get the word out there.
Cover image by ‘geralt’ on Pixabay https://pixabay.com/es/illustrations/inteligencia-artificial-cerebro-3382507/