Author’s Note: This article focuses primarily on sexism and the disproportionate consequences of AI for women. It goes without saying that these technologies also disproportionately affect other marginalised communities, and applying a gendered lens to global AI governance naturally requires recognising and acting on this fact.
‘The brave new world we are hurtling towards reflects the terrifying global regression in women’s rights that we are currently living through.’
– Laura Bates, The New Age of Sexism
A few weeks ago, I picked up Laura Bates’s book The New Age of Sexism: How the AI Revolution is Reinventing Misogyny. Upon finishing it, I was left both shocked and enraged. As Bates makes clear, this is exactly her aim. Unfortunately, few of her upsetting observations felt surprising, and I doubt they would to many other women. Big tech companies developing AI models in the ruthless pursuit of financial profit, in the name of ‘progress’, at the cost of women? Of course. Men’s mental health is at a low, so tech companies present chatbots and sex robots as a solution? Of course. AI is trained on data biased against women, produces unequal outcomes, and replicates existing sexist prejudices? Of course.
I finished this book at a moment when the global debate on sexism and violence against women is again in sharp focus. In the Netherlands, the recent murder of a 17-year-old girl cycling home at night has reignited a reckoning with femicide and women’s safety in public spaces. Similar outrage was sparked in the US by the brutal stabbing of a young Ukrainian woman on a train. These horrific tragedies are reminders that misogyny is not a thing of the past.
We are currently facing the reality that these same forms of inequality and oppression are now being encoded into the very technologies that will shape our future. As I write from New York, where I am interning at the UN, the 80th General Assembly has just concluded. Among its high-level debates was the governance of artificial intelligence, framed by the UN’s launch of a Global Dialogue on AI Governance. Looking back on these discussions, this feels like the right moment to reflect on Bates’s observations, and on the broader risks of sexism being embedded into AI. Drawing on Bates, this article explores why her warnings are vital for current debates on AI governance; how they should be reflected in the multilateral agenda; and what the role of the UK should be in ensuring that AI governance delivers safety, dignity, and equality for all.
The New Age of Sexism
Bates published her book The New Age of Sexism earlier this year after publicly available images of her were misused to fabricate sexualised content. As the title implies, the book demonstrates how sexism is not a relic of the past but is instead finding new expression in technologies that, over the next few years, will increasingly shape our lives. Bates argues that rather than focusing solely on the potential, existential, long-term risks of AI, we must confront the immediate and equally existential harms these technologies are already inflicting on women and other marginalised groups.
Bates structures the book around a series of themes that each examine different facets of how sexism is being reinforced and reinvented through technology and artificial intelligence.* For instance, she shows how deepfakes, revenge porn, and image-based abuse have created an escalating wave of sexualised harassment and disinformation, of which women are overwhelmingly the target. A 2019 study found that ‘96 per cent of deepfakes were non-consensual sexual deepfakes, and of those, 99 per cent were made of women’. The consequences, naturally, do not remain confined to the online sphere: they spill into women’s everyday lives. Victims face reputational damage, workplace consequences, and harassment in their communities. Yet, while policy-makers and media have often framed deepfakes as a threat to democracy and have focused on political disinformation and electoral interference, far less attention has been paid to their most widespread use: the harassment and abuse of women and girls. In this way, deepfake technology does not become a tool of progress but rather a digital extension of misogyny.
She also highlights how supposedly ‘neutral’ AI systems produce discriminatory outcomes and reproduce existing prejudices. Diagnostic tools trained primarily on male data misread women’s symptoms; hiring algorithms penalise female applicants; and predictive policing tools entrench inequalities already baked into criminal justice systems. A 2025 study by the London School of Economics confirmed that Google’s widely-used AI model Gemma systematically downplays women’s physical and mental health issues compared to men’s, posing serious risks for equality in care provision. Another paper recently revealed systematic occupational stereotypes in text-to-image models, where all assessed models produced ‘nurses exclusively as women and surgeons predominantly as men’. Here again, the problem is not just technical bias: it is that AI magnifies existing blind spots in society. If women’s pain is already minimised by healthcare providers, embedding that bias into AI tools ensures it will be automated, scaled, and far harder to challenge.
Bates also examines the worlds of the metaverse and sex tech, which are often marketed as the next great frontiers of human connection. However, on virtual reality platforms, women routinely face harassment, groping, and even incidents described as ‘virtual rape’. Avatars are hyper-sexualised, and entire digital environments are built around monetising women’s bodies. These virtual spaces amplify and replicate the misogyny of the offline world. Alongside this, the rise of AI ‘companions’ and sex robots is presented as an innovation to combat (predominantly male) loneliness or support mental health. In reality, these technologies entrench troubling dynamics: chatbots designed never to refuse; robots programmed for obedience. They risk normalising objectification and consent-erasure, shaping cultural expectations about women’s roles.
To some, these examples may seem abstract or even far-fetched. Yet the testimonies gathered in Bates’s book from those using these technologies show just how immediate the risks already are. And that is why, above all, the book serves as a wake-up call: if the embedding of sexism into AI is left unaddressed, we may soon reach a point where these harms are too deeply entrenched to undo. We cannot afford to wait; we must act now.
Multilateral AI Governance at UNGA 80
All of this discussion leads me to UNGA 80.
‘We must act now’ is a phrase that we often hear when it comes to AI and other technologies. However, typically, this urgency is framed in terms of preventing a potential technological singularity that could threaten humanity. In practice, the phrase is also used in response to the growing weaponisation of AI, both in terms of offensive applications: autonomous weapons, mass surveillance, and AI-driven cyberattacks; and information manipulation: the use of AI to generate and spread misinformation, disinformation, and propaganda.
These concerns were echoed almost exactly in debates at the UNGA. Yet the issue of gender and AI was markedly absent from the conversation. AI took centre stage across multiple days of high-level debate, underscoring growing global awareness of the risks associated with its increasing use. Member states repeatedly emphasised the need for responsible AI governance, calling for universal guidelines, ethical standards, and inclusive frameworks. Many leaders acknowledged the dangers of unregulated AI, such as deepening inequality, exclusion, algorithmic manipulation, and the centralisation of power in the hands of a few actors. The newly announced Independent International Scientific Panel on AI and the Global Dialogue on AI Governance were widely welcomed as multilateral efforts to steer the technology in a fairer direction. Speakers also championed the potential of AI to drive development, improve healthcare, enhance education, and reduce digital divides, especially for low- and middle-income countries. Notably, the President of the General Assembly warned that age-old biases are being perpetuated by algorithms, referencing the targeting of women and girls by sexualised deepfakes. This remark stands out as one of the very few moments in which the gendered consequences of AI were explicitly acknowledged.
In spite of this brief recognition, the broader conversation around AI and gender at UNGA 80 remained painfully sparse. To be sure, many interventions acknowledged that AI risks deepening inequality, and several states called for inclusive, human-centred governance. However, these references remained largely abstract, rarely addressing who is most affected or naming gender or sexism explicitly. This omission may reflect the constraints of the General Debate as a high-level forum with limited time and broad scope, rather than deliberate neglect. Still, as the conversation moves into more specialised spaces such as the Global Dialogue on AI Governance and the Independent International Scientific Panel, this gap must be urgently addressed. These mechanisms have the potential to shape global norms and must therefore go beyond general principles of fairness to engage directly with gendered harms, feminist perspectives, and intersectional analysis. Without this, any vision of ‘responsible AI’ will be incomplete and risk reinforcing the very inequalities it aims to overcome.
The UK’s Role
What, then, should the UK’s role be in all of this? The UK is uniquely positioned to shape the future of AI governance as a country with a robust research infrastructure, a dynamic tech and AI community, and strong diplomatic reach. However, it must decide whether it will lead with principle or simply follow others. While its AI White Paper gestures towards fairness and accountability, the UK’s decision not to sign the Paris Declaration on AI signals a significant step back from collaborative, values-based regulation. Endorsed by 57 countries and international bodies, the Declaration shifted the conversation away from abstract talk about ‘AI safety’ towards more concrete concerns such as sustainability, labour rights, inclusion, and human dignity. By aligning, seemingly strategically, with the US decision not to sign, the UK appears to favour a growth-first, deregulatory approach that does exactly what Bates warns about: prioritising innovation and profit over sustainability, fairness, and social progress.
If the UK is to remain a credible voice in shaping global norms, it must ensure that questions of gender and power are not treated as afterthoughts in AI governance. This means, amongst other measures, mandating gender impact assessments (akin to equality impact assessments) before deployment, to identify how new systems might amplify biases related to sex, gender, or other protected characteristics, including where they intersect. It also means applying algorithmic audits and disaggregated reporting to detect unfair outcomes for women and marginalised groups, a recommendation long echoed by feminist AI frameworks. Crucially, the UK should institutionalise participatory design and co-production, embedding feminist and intersectional expertise throughout the design, oversight, and evaluation phases. This means not merely consulting such voices but involving them as true partners in governance. A compelling precedent can be found in UNESCO’s collaboration with the Argentine Ministry of Foreign Affairs through the Symposium on Feminist Contributions in Artificial Intelligence, which united government bodies, women-led AI initiatives, equality agencies, and academics to co-develop policy recommendations later adopted in national reports. The UK should adopt a similar approach, both domestically and abroad, championing inclusive governance in international forums such as those recently initiated by the UN. In doing so, Britain could help ensure that global standards for AI reflect not just technological ambition, but also a commitment to fairness.
To conclude, it goes without saying that any effort to address the gendered consequences of AI must also confront the deeper social structures that enable them. Regulation and governance can mitigate bias, but they cannot by themselves undo the inequalities that technology reflects and amplifies. As Laura Bates writes in the conclusion of The New Age of Sexism: ‘Ultimately, what will make the biggest difference is the same thing we needed to change long before the metaverse or sex robots or AI came along – it is the underlying misogyny and inequality in our society, which these products either accidentally or maliciously amplify’. Tackling the gendered harms of AI therefore requires more than technical solutions; it demands political will, cultural change, and the courage to imagine technology built on equality.
____________
* This is only a brief overview of the ground Bates covers. The book itself is extensive, filled with data, testimonies, and moments of brutal honesty that make the realities even more vivid. What I have sketched here are just some of the themes and shortened explanations.