
Artificial Intelligence is already being used around the world in almost every sector. Rarely is there a day that goes by without a news story about AI being used in a new way. This is no different in the public sector, and many governments are increasingly pushing civil servants and government departments to adopt AI in the hopes of reducing workload and improving efficiency.
When public bodies consider using AI in consultations, the key question is not whether AI is powerful, but whether it can be used in a way that preserves legal defensibility, transparency, and trust. This is why most UK local authorities use purpose-built consultation platforms, like Citizen Space, rather than standalone AI tools.
“Britain will be one of the great AI superpowers… That’s not boosterism or wishful thinking… This can be done, and it will be done.”
It is with this context in mind that AI is being explored as a tool to support running public consultations. Over the last few years, we’ve seen the capacity of AI to analyse data and even translate it into an accessible format at great speed.
However, alongside the opportunities the AI revolution is creating in the field of citizen engagement, there are new risks to consider. This article outlines both.
AI changes the scale of consultations, not obligations
Public consultations have existed in some form for millennia, and in something approximating their current form for at least a century. During that time, almost every democracy where they are commonly used has developed an established set of statutory, ethical and procedural frameworks that public consultations are expected to adhere to.
Although the exact requirements of a public consultation vary between countries and sectors, they typically include obligations around:
- Accessibility and inclusivity.
- Data protection and cyber security.
- Transparency, particularly in demonstrating how public input has informed decision-making.
- Evidence handling.
In practice, UK local authorities typically require consultation platforms to provide an auditable evidence trail, showing how individual submissions were handled, analysed, and reflected in outcomes.
Using AI as a tool in the public consultation process does nothing to diminish these obligations. But AI can process a large volume of data quickly, reading thousands of entries, identifying patterns and providing support with administrative tasks that may otherwise have taken facilitators hundreds of hours to work through.
It can therefore be used to save time and money, making more engagement activities feasible even where budgets are tight. This is particularly useful given that consultation participants increasingly use AI themselves to generate responses, which can considerably increase the length and complexity of inputs.
However, it is vitally important that decision-makers are clear on how data has been processed, what methodology is being used, and that results are always validated by experts. AI can confidently process and relay information, but it can also be confidently wrong. AI can miss key information, can take shortcuts without making it clear that it is doing so, and can even “hallucinate” information.
AI cannot replace human judgement, and it cannot be held responsible for its mistakes. AI should instead be understood simply as another tool, with checks and balances to ensure the tool is operating correctly.
Where AI creates genuine opportunity in a public consultation
AI has been around for many years and has even been used as a part of the GovTech toolbox for public consultations for some time. However, the AI boom and the rise of Large Language Models (LLMs) has seen it used for a number of new applications.
One of the clearest examples of how AI could be used in consultations is its ability to conduct large-scale qualitative analysis quickly. Consultations commonly generate thousands of text responses, some of them lengthy and containing multiple complex points. For a human analyst, reviewing and coding each response individually is time-consuming. AI can help to quickly reveal recurring themes and highlight common concerns. Used responsibly, it can help data analysts navigate large datasets and develop a deeper understanding.

Similarly, AI is excellent at pattern recognition. For example, AI could be used to quickly tell you that some responses were more common in certain locations and demographics. While this should be backed up with human analysis and verification, it could allow experts to be quick off the mark on developing an understanding of how the consultation is being received across different groups.
This could even be used during a consultation. It is already possible on purpose-built engagement platforms like Citizen Space to monitor consultation responses in real time, and even filter according to certain characteristics. However, this relies on a human knowing to go looking in the first place. For example, a facilitator may be monitoring whether a representative number of young people are taking part in a consultation, and can therefore take action if they aren't. AI can observe many different data points simultaneously. It may then flag an unanticipated engagement issue – for example, that not enough women from one side of a city are taking part.
Used responsibly, AI can make consultations more efficient, doing tasks that could be done by analysts but without the need for excessive administration. It should not be used as a decision-maker or an oversight body in and of itself.
The risks of poorly governed Artificial Intelligence
In the context of public consultations, purpose-built consultation platforms reduce legal and reputational risk by embedding transparency, auditability, and human oversight into the process.
When AI is not used responsibly, it introduces a number of risks that open a consultation up to legal challenge, reputational damage and public backlash. We have seen a number of cases recently where public bodies have misused AI tools in a way that has led to incorrect information being used for high-level decision-making.
The risks posed by poor AI use can be broadly sorted into the following categories:
Evidential misuse
Consultation evidence must be capable of standing up to scrutiny. Where summaries are used and common themes addressed, facilitators must be able to demonstrate:
- Which inputs have been used in order to come to those conclusions.
- Which inputs were not used, and why.
- Why some issues have been addressed in depth, while others have been addressed only briefly or not at all.
If AI has been used to analyse or summarise responses without clear oversight, there is a risk that outputs cannot be traced back to original submissions. This significantly weakens the evidential chain and makes it difficult to demonstrate how conclusions were reached. Over-reliance on automated summaries can also obscure minority views or local nuance, which may be legally or procedurally significant.
Lack of transparency
Transparency is essential to a public consultation. The ability to see what evidence was given, how it was considered and responded to, and what actions were taken due to that evidence is the very point of a consultation being public. In many cases – particularly statutory consultations – transparency is even a legal requirement.
AI systems can complicate this, particularly if organisations cannot clearly explain what tools were used, how they functioned or how their outputs informed decisions. A lack of clarity can quickly undermine confidence in the process. There is a huge difference between using an AI as a tool to perform an analysis task you understand and can oversee, and taking a back seat and assuming accuracy.
Facilitators must always remember that an AI-generated statement is not evidence in and of itself, and that ensuring transparency is the duty of the analyst overseeing the project.
Questions around legitimacy
Consultations are not just data-gathering exercises. They are a crucial part of the democratic process and are used to help governments and public bodies make important decisions.
If AI appears to be driving outcomes rather than supporting human judgement, the legitimacy of the consultation can be called into question. This is especially problematic where statutory duties require decision-makers to actively consider and respond to public input. An illegitimate consultation opens itself to legal challenge and significant backlash.
Low trust
All of the risks stated above carry a secondary risk: breaking the trust of the public. When AI is misused by public bodies, it can create a perception that those bodies are inept and that their decisions should not be trusted. What's more, AI misuse can make people feel as though they are not being listened to. If participants in a consultation believe their views are unlikely to even be read, or may be mishandled by an ungoverned AI, they are unlikely to take part or to respect decisions resulting from the consultation.
How consultation platforms use AI responsibly

Using AI for a public consultation is a delicate balance. On one side are greater efficiency and new avenues for improving the consultation process. On the other is the need to be mindful of risk, ensuring there are clear frameworks for how AI should and should not be used, and for where responsibility for oversight lies.
In practice, this approach is exemplified by purpose-built consultation platforms such as Citizen Space, which are designed around statutory consultation requirements rather than generic AI capabilities.
Responsible use of AI in public consultation platforms and consultations typically involves:
- Using AI to support analysis, not make decisions
- Ensuring all AI outputs can be traced back to original submissions, as supported by platforms designed for auditability such as Citizen Space
- Maintaining human oversight and validation
- Being transparent about where AI has been used
- Ensuring accessibility, data protection and security requirements are met
Platforms like Citizen Space – purpose-built for the requirements of the consultation process – ensure that accessibility, transparency and secure data handling remain the highest priority.
As an innovative GovTech platform, however, Citizen Space can ensure regulatory compliance while still taking advantage of the latest developments in technology. Efficiency and innovation are enhanced while trust and legitimacy are preserved. AI integration for streamlining consultation analysis is on the horizon for users in 2026.
Citizen Space is the go-to GovTech platform for engaging with citizens, managing large-scale government consultations and simplifying statutory processes. If you'd like to learn more about how our software is using Artificial Intelligence, book a free demo today.
Sign up for the Delib newsletter here to get relevant updates posted to your email inbox.