CETaS conducts interdisciplinary research on a range of issues relating to emerging technology and national security policy. The ongoing projects below are expected to be completed in 2024-25.
To be the first to read our upcoming publications, sign up to join the CETaS Network. To view our completed projects, read our Reports and Briefing Papers.
If you would like to get involved in any of these projects, please contact the team at cetas@turing.ac.uk.
AI-Enabled Disinformation and Security Incidents: Mitigating Real-World Violence
Supported by the UK Government’s AI Security Institute (AISI), this project is exploring how AI-enabled disinformation can foment real-world violence and undermine democratic stability during or in the immediate aftermath of a serious security incident, such as a terrorist attack. The project adopts a sociotechnical approach to analyse how such disinformation interacts with other complex factors to sow division and incite physical harm, and to recommend appropriate mitigation measures.
From a public safety perspective, the hours and days after a serious security incident are crucial. It is in this short time frame that AI and other automated tools can be weaponised effectively. People with heightened emotions are likelier to suspend reason, critical judgement and reference to evidence – and to be more receptive to disinformation that exploits their deep-seated fears. Social media users react rapidly following such incidents, a process that AI can accelerate. These dynamics were evident during the UK riots of summer 2024, when targeted attacks on minority groups followed the spread of inflammatory and false details about the Southport murder suspect across many social media accounts.
The project uses international case studies to assess how AI systems interact with outbreaks of violence, and to forecast user trends and changes in societal dynamics. Through collaboration with government agencies, academia and civil society, the research will produce actionable recommendations to counter AI-driven threats to democracy. As AI continues to reshape the global information landscape, the project will be vital in helping to prevent technological advancements from eroding public trust or fuelling violence.
Public Attitudes to Data Processing for National Security

Emerging technologies are already transforming the ways in which national security and law enforcement agencies use their investigatory powers, as AI and other data processing methods offer increasing scope to automate parts of this covert information-gathering process. The impact this is having on privacy intrusion for the UK public is uncertain. Some argue that emerging technologies could worsen privacy intrusion, for example if an AI system led to personal data being incorrectly flagged as of interest to national security decision-makers. Others argue that they could improve privacy, for example by lessening the volume of data that human operators need to process, because AI can filter out most of it as irrelevant.
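To illustrate the second argument in rough terms, the sketch below shows a hypothetical automated triage step in which an AI relevance filter screens collected records so that only a small subset ever reaches a human operator. The record structure, scoring function and threshold are all invented for illustration; this does not describe any real agency system.

```python
# Illustrative sketch only: a hypothetical triage step in which an automated
# relevance filter screens collected records so that human operators see only
# a small, potentially relevant subset. All names, the scoring function and
# the threshold are assumptions for illustration, not any real pipeline.

from dataclasses import dataclass

@dataclass
class Record:
    record_id: str
    text: str

def relevance_score(record: Record) -> float:
    """Hypothetical stand-in for an AI relevance model (e.g. a classifier)."""
    keywords = {"target-term-a", "target-term-b"}  # placeholder indicators
    hits = sum(1 for word in record.text.lower().split() if word in keywords)
    return min(1.0, hits / 3)

def triage(records: list[Record], threshold: float = 0.5) -> list[Record]:
    """Pass only records scoring above the threshold to human review;
    the rest are filtered out without a human ever reading them."""
    return [r for r in records if relevance_score(r) >= threshold]

records = [
    Record("r1", "routine message"),
    Record("r2", "mentions target-term-a and target-term-a near target-term-b"),
]
for_human_review = triage(records)
print(f"{len(for_human_review)}/{len(records)} records reach a human operator")
```

The point of contention the project will probe is visible even in this toy version: the filter spares the human operator from reading most records, but the machine still processes all of them, and a mis-specified scoring function could wrongly flag innocent material.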
Despite this ongoing debate in academic, policy and legal circles, little has been done to consult the public on what they think about human versus machine intrusion in national security. For example, are the public concerned that data-driven technologies might lead to more of their data being collected by national security agencies in the long term? Would the public perceive their privacy to be better protected by automated methods that reduce human involvement in the processing of their data? And do the public think the current UK regime for investigatory powers oversight supports a level of privacy intrusion that is proportionate to the national security threat? This project addresses these questions by consulting the public directly, to understand what they really think about AI, privacy and national security.
AI for Strategic Warning
Predicting political change, both stabilisation and escalation, has long been an important function of the intelligence community (IC), which has traditionally relied on the deep expertise of human analysts to make qualitative predictions about the likelihood of belligerent activities and conciliatory responses. Human analysts have been particularly important in understanding human behaviour for assessments of leadership decision-making.
Current conflict modelling data is relatively static: it can identify the world’s persistent hot spots, but it cannot yet predict new outbreaks or escalations of violence or political instability in real time. While analysts have a growing number of data sources available to them, the data picture is fragmented and inconsistent, and there is limited ability to reliably forecast flashpoints and the escalation or de-escalation of instability (e.g. as a result of political transitions). Industry and open-source tools predominantly model intra-state rather than inter-state conflict, and do not incorporate multi-domain conflict or important contested spaces such as space and international waters. Moreover, the academic literature on social complexity identifies many factors (e.g. dissent, infighting, collective memory, public opinion, realistic information and disinformation flows) that most existing conflict modelling tools do not incorporate. Data gaps are a further challenge: building the data infrastructure and data-sharing practices needed for a performant AI-based conflict modelling tool is an enormous, complex and expensive undertaking.
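To make these limitations concrete, the sketch below shows roughly what a very basic statistical early-warning signal looks like: flagging a region for analyst attention when recent conflict-event counts rise sharply above their historical baseline. The data, window and threshold are invented, and a genuine AI-based conflict modelling tool would need to fuse far richer, multi-domain signals than this.

```python
# A minimal, hypothetical early-warning sketch: flag a region when its recent
# conflict-event count rises sharply above its historical baseline. The data
# and thresholds are invented for illustration; real conflict modelling would
# fuse many more signals (political transitions, information flows, etc.).

from statistics import mean

def escalation_alert(weekly_events: list[int], window: int = 4, factor: float = 2.0) -> bool:
    """Alert if the mean of the last `window` weeks exceeds `factor` times
    the baseline mean of all earlier weeks."""
    baseline, recent = weekly_events[:-window], weekly_events[-window:]
    if not baseline:
        return False  # not enough history to establish a baseline
    base = mean(baseline) or 1e-9  # avoid division by zero on quiet baselines
    return mean(recent) / base >= factor

# Hypothetical weekly event counts for one region: stable, then a spike.
history = [3, 4, 2, 5, 3, 4, 3, 9, 12, 15, 14]
print(escalation_alert(history))  # True: recent activity far above baseline
```

A signal like this is exactly the kind of static, single-source indicator the paragraph above criticises: it can confirm a hot spot once violence has risen, but it cannot anticipate a new outbreak or distinguish inter-state from intra-state dynamics.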
This Special Competitive Studies Project (SCSP)-CETaS study aims to develop a deeper understanding of the next frontier for AI in conflict modelling, and to assess whether AI should be adopted for this purpose.
AI and Online Criminality

The landscape of online criminality is constantly evolving in response to new technological developments. The recent explosion in popularity of generative AI systems has lowered the barriers to experimentation for online criminals, as well as for the public. Yet much remains unclear about how quickly criminal tradecraft is adapting to the pace of change in the AI space.
Researchers and the security and law enforcement community need to understand the evidence on whether AI tools are significantly empowering online criminals, and to forecast trends in such activity over the next five years. They need a detailed understanding not only of whether and how AI tools have become more integral to practices such as cyber reconnaissance, malware creation, phishing and the generation of child sexual abuse material, but also of the roles that audio, text, image and video content play in these activities. They also need to consider how criminals could increasingly use AI in areas where it has not yet reached its full potential, and how the adaptation or jailbreaking of industry AI tools could accelerate these processes. Finally, it is crucial to understand how malicious actors will commit new types of crime in response to changes in economic incentives brought about by the development of AI.
In this operating environment, the security and law enforcement community also needs a better understanding of how to effectively counter AI-enabled online criminality, now and in the future, and of the barriers to delivering such countermeasures in order to stay ahead of the threat.
By focusing on online criminality, this project will build on CETaS research into the harms created by malicious uses of AI. It will produce evidence-based analysis of how AI tools are transforming and empowering the types of criminality that the public are most likely to experience day to day, and will provide actionable suggestions for how law enforcement can counter the threat more effectively.
AI Safety and Generative AI Evaluation

CETaS has an ongoing programme of work on AI safety and generative AI evaluation. In August 2023, in the run-up to the Bletchley AI Safety Summit, CETaS co-published with the Centre for Long-Term Resilience a briefing paper titled 'Strengthening Resilience to AI Risk: A guide for UK policymakers'. This paper informed various CETaS contributions to the November 2023 AI Safety Summit, garnering extensive engagement across the UK technology and security policy community. The paper preceded a longer-form research report titled 'The Rapid Rise of Generative AI: Assessing risks to security and safety.' The most comprehensive UK-based study of the national security implications of generative AI, the report is based on extensive engagement with more than 50 experts across government, academia, industry and civil society. The report laid the foundations for follow-on papers that focused on 'Generative AI in Cybersecurity' and 'Evaluation of Malicious Generative AI Capabilities'. These outputs have been supported by various expert workshops that convened world-leading thinkers in AI safety and generative AI evaluation, forming the basis of several CETaS briefings to policymakers and presentations at international conferences.
Privacy-Preserving Moderation of Illegal Online Content

With the passage of the Online Safety Act (OSA) in 2023, online platforms now have a legal requirement to actively monitor and remove illegal content. However, while platforms need to implement comprehensive strategies and sophisticated tools capable of identifying such content, there is also a desire to limit the impact of these processes on user privacy. This is particularly the case on services that use end-to-end encryption protocols.
Current content moderation techniques suffer from limitations in their effectiveness, efficiency and impact on user privacy, so it is vital to understand the range of nascent and future methods that could improve how illegal online content is tackled.
This project will analyse nascent and future content moderation methods, including AI-based and privacy-enhancing technologies, to assist online platforms in fulfilling their new legal duties under the OSA. The research will seek to understand which metrics can be used to assess content moderation methods, as well as exploring the feasibility of effectively implementing any promising capabilities identified.
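As a rough illustration of the kinds of metrics such an assessment might draw on, the sketch below computes standard detection metrics from a labelled evaluation set. The counts are invented, and neither the OSA nor this project prescribes these particular measures; recall and false positive rate are shown because they proxy, respectively, missed harmful content and over-removal of lawful content (a privacy and free-expression cost).

```python
# A minimal sketch of metrics for assessing a content moderation classifier,
# computed from hypothetical confusion-matrix counts on a labelled test set.

def moderation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Standard detection metrics. For illegal-content detection, recall
    captures how much harmful material is missed, while the false positive
    rate captures how often lawful content is wrongly flagged."""
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
    }

# Invented results: 90 illegal items caught, 10 missed,
# and 15 lawful items wrongly flagged out of 9,915.
print(moderation_metrics(tp=90, fp=15, fn=10, tn=9900))
```

Any real assessment framework would also need to weigh factors these simple ratios omit, such as moderation latency, computational cost and the degree of access to user content each method requires.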