Editor’s Note: Nabila returns with a fascinating new commentary. Nabila holds a Master’s in Migration Studies and serves as a board member of the UNDP Youth Panel and as the Hub Coordinator for the Beirut branch of the Al Sharq Forum. Make sure to follow her work here.
As artificial intelligence becomes increasingly embedded in how states manage migration, we are entering a new era of digital borders, one where decisions about who gets to move, stay, or be protected may be shaped less by people and more by opaque algorithms. This shift raises fundamental questions: Can a machine assess human vulnerability? And what happens when it gets it wrong?
These questions are no longer hypothetical. Across Europe, including in the UK, governments are experimenting with AI-driven tools to streamline border control and immigration enforcement. But while the technology is advancing, rights protections have not kept pace. And that’s where civil society must step in.
When Border Control Goes Digital
Let’s start with the basics. AI in migration governance encompasses a range of technologies, from biometric databases to predictive analytics that identify and flag “risky” asylum seekers. One example in the UK is the Identify and Prioritise Immigration Cases (IPIC) system, which reportedly sifts through personal data to flag individuals for enforcement action. It is part of a broader trend toward automating decision-making across immigration processes.
In theory, this might seem like a logical solution to an overburdened system. In practice, however, it’s deeply problematic. These systems often rely on historical data that may be biased or incomplete. Worse, they make assumptions about human behavior that can’t always be quantified, let alone verified.
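To see why biased history matters, consider a deliberately simplified sketch (the data, groups, and model below are all hypothetical). It shows the core mechanism: a system “trained” on past enforcement patterns does not discover risk, it inherits it.

```python
# Deliberately simplified, hypothetical sketch: a "risk model" trained on
# historically biased enforcement records reproduces that bias as prediction.
from collections import defaultdict

# Hypothetical history: (nationality_group, was_flagged_for_enforcement).
# Past enforcement concentrated on group "B" for reasons unrelated to risk.
history = [("A", 0)] * 90 + [("A", 1)] * 10 + [("B", 0)] * 40 + [("B", 1)] * 60

# "Training" here is just learning the historical flag rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [times_flagged, total_cases]
for group, flagged in history:
    counts[group][0] += flagged
    counts[group][1] += 1

def risk_score(group: str) -> float:
    """Predicted 'risk' is nothing more than the past enforcement rate."""
    flagged, total = counts[group]
    return flagged / total

# New arrivals are scored by past enforcement patterns, not individual facts:
for group in ("A", "B"):
    print(f"group {group}: predicted risk = {risk_score(group):.0%}")
# group A: predicted risk = 10%
# group B: predicted risk = 60%   <- yesterday's bias, automated today
```

Nothing in this toy model looks at the individual. The score is inherited entirely from how a group was policed in the past, which is precisely the failure mode critics of automated flagging warn about.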
What does this mean for real people? For refugees arriving at UK borders, it could mean being flagged as a “risk” before their story is even heard. For migrants trying to regularize their status, it might mean rejection based on criteria they’re not even allowed to see. This level of automation creates what some call “digital dispossession” — where people’s rights are quietly eroded by invisible processes they can’t challenge.
In November 2024, The Guardian reported that AI tools were influencing immigration decisions in the UK without apparent public oversight. A follow-up investigation revealed that government departments had failed to register their AI systems on the required public database, raising red flags about transparency.
More Than Just a Tech Issue
The dangers of these systems aren’t just technical; they’re political. Algorithms are shaped by the priorities of those who design them. When security and control are the primary objectives, rights-based considerations often take a backseat. And when refugees, particularly from the Middle East, Africa, or South Asia, are profiled as “high risk,” these technologies risk reinforcing structural racism under the guise of objectivity.
This is especially relevant for the UK, where public discourse around migration is already highly charged. From the Rwanda deportation plan to efforts to crack down on “small boats,” the broader political environment is not exactly one of welcome. Introducing AI into this context doesn’t neutralize bias — it automates it.
Resistance from Below: Civil Society’s Role
Thankfully, civil society actors aren’t standing idly by. In the UK and beyond, activists, legal advocates, and grassroots organizations are pushing back.
Groups like Privacy International, Liberty, and the Open Rights Group have been vocal about the risks of algorithmic injustice. They’ve called for algorithmic accountability, demanding that AI systems affecting people’s rights be subject to meaningful oversight, transparency, and appeal mechanisms. Others are working with affected communities to raise awareness about digital surveillance and empower migrants to assert their rights.
At the international level, organizations such as the Center for Democracy and Technology used the UK-hosted AI Safety Summit to advocate for greater inclusion of marginalized voices in tech governance, particularly refugees, migrants, and civil society actors who are often excluded from high-level policymaking.
Learning from the SWANA Region
This isn’t just a UK issue. Across the Southwest Asia and North Africa region, AI-powered surveillance is expanding rapidly. Biometric data collection in refugee camps, facial recognition at border crossings, and predictive systems in visa processes are becoming normalized, often with limited safeguards.
In Lebanon and Tunisia, civil society organizations have challenged projects that collect refugee data without informed consent or robust data protection laws. Their resistance highlights a critical truth: civil society is not just a watchdog; it’s a source of alternative governance frameworks.
By re-politicizing discussions around AI in migration, these actors remind us that technology is never neutral. They advocate for participatory design, where migrants are not merely data points but co-creators of the systems that affect their lives. They push for collective data governance that centers dignity, justice, and transparency over control and efficiency.
The UK’s Opportunity and Responsibility
So, where does the UK go from here?
First, transparency must become non-negotiable. If AI is being used to inform or make decisions in the immigration system, the public has the right to know. A functioning register of AI tools, publicly accessible and regularly updated, is the bare minimum.
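What such a register should record is ultimately a policy question. Purely as an illustration, a single entry might capture fields like these (the schema below is an assumption for illustration, not an official specification):

```python
# Hypothetical sketch of what one entry in a public AI register might record.
# Field names are illustrative assumptions, not an official government schema.
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    system_name: str           # the tool's public name
    operating_department: str  # which department deploys it
    purpose: str               # plain-language description of what it decides
    decision_role: str         # "fully automated", "advisory", or "triage"
    data_sources: list[str]    # categories of personal data the system uses
    appeal_route: str          # how an affected person can challenge an output
    last_audit_date: str       # date of the most recent independent audit
```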
Second, legal safeguards must be in place. Refugees and migrants must have the right to challenge decisions influenced by automated systems, and those systems should be independently audited for bias and for their human rights impact.
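Independent audits can draw on well-established statistical checks. One common heuristic, borrowed from employment-discrimination practice, is the “four-fifths rule”; the sketch below applies it to invented flag rates to show how simple a first-pass disparity check can be:

```python
# Illustrative audit sketch using the "four-fifths rule", a common
# disparate-impact heuristic. All numbers below are invented for illustration.

def flag_rate(flagged: int, total: int) -> float:
    return flagged / total

# Hypothetical enforcement-flag rates produced by an automated triage system:
rates = {
    "group_A": flag_rate(30, 1000),  # 3.0% of cases flagged
    "group_B": flag_rate(55, 1000),  # 5.5% of cases flagged
}

# For an adverse outcome like being flagged, compare each group's rate with
# the least-flagged group; a ratio above 1 / 0.8 = 1.25 is a warning sign.
baseline = min(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    status = "WARN: possible disparate impact" if ratio > 1.25 else "ok"
    print(f"{group}: flag rate {rate:.1%}, ratio {ratio:.2f} -> {status}")
```

A ratio check like this is only a starting point: a genuine human rights audit would also scrutinize the training data, the features used, and the real-world consequences of being flagged.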
Finally, civil society must be included, not just as observers but as partners. People with firsthand experience of displacement and digital exclusion bring unique insight into the risks of algorithmic governance. Their voices can help reimagine what migration systems could look like: systems that are rights-based, transparent, and just.
Beyond the Algorithm
In an era where efficiency is often prioritized over equity, we must ask: What kind of border system are we creating? If we quietly allow AI to take the reins without accountability, we risk creating a migration regime that is not only more exclusionary but also less human.
The UK, with its resources, strong civil society, and significant political influence, has an opportunity to lead. However, that leadership must begin with a clear commitment: that no technology should ever come at the expense of dignity, protection, or the fundamental right to seek asylum.
The future of migration governance is being shaped right now — not just by policymakers and programmers, but by all of us. The question is, whose values will it reflect?