Mohammad Naiseh

I am involved in a range of funded research projects spanning responsible AI, explainable AI, autonomous vehicles, human-swarm systems, and digital wellbeing. These projects are funded by the European Commission, UKRI, and other international bodies.

8 research projects · 3 as PI or Co-I · €16M+ total funding · 4 countries involved
Current projects
PRESERVE: Using Big Data to Support Criminal Investigations
Role: Co-Investigator · Status: Active · Funder: EU Horizon Europe

PRESERVE aims to improve the daily investigative and preventive work of Law Enforcement Authorities (LEAs), enhance proactive threat detection and response, and provide a regulatory-compliant framework for the responsible use of AI in crime detection and prevention. The €6 million project involves partners from 8 countries across Europe.

PI: Prof. Hamid Bouchachia
Co-I: Dr. Mohammad Naiseh, Dr. Hammadi Nait Charif
Partners: Center for Security Studies (Greece), Università degli Studi di Bari (Italy), GRADIANT (Spain)
ExtremeXP: Experiment-Driven and User Experience-Oriented Analytics for Extremely Precise Outcomes
Role: Co-Investigator · Status: Active · Funder: EU Horizon Europe

ExtremeXP aims to provide accurate, precise, fit-for-purpose, and trustworthy data-driven insights by evaluating complex analytics variants and placing end users at the centre of the process. The €10 million project integrates explainable AI, machine learning, visual analytics, and knowledge engineering into a unified framework validated across crisis management, predictive maintenance, mobility, public safety, and cybersecurity.

PI: Prof. Hamid Bouchachia
Team: Dr. Mohammad Naiseh, Dr. Waqas Jamil, Dr. George Lee
Digital Wellbeing: Culture, Design, and Healthy Technology Use
Role: Collaborator · Status: Active · Funder: HBKU / Qatar

This project investigates the notion of digital wellbeing and the role of culture and digital design in both triggering concerns and mediating solutions. It proposes socio-technical solutions to enhance digital wellbeing through social expectation setting and psychological inoculation theory, with a focus on Qatari society as a case study.

Lead PI: Prof. Raian Ali
Partners: HBKU, Hamad Medical Corporation, Primary Health Care Corporation, Sidra Medicine, Bournemouth University
Past projects
REFORMIST: Mirrored Decision Support Framework for Multidisciplinary Teams in Oesophageal Cancer
Role: Co-Investigator · Status: Past · Funder: UKRI TAS Hub

This project mirrored the decision-making process of upper gastrointestinal multidisciplinary teams managing oesophageal cancer, incorporating human-selected critical variables and machine learning to standardise data-driven clinical decisions. The work ensured trustworthiness for routine clinical use through human-AI partnerships.

PI: Dr. Ganesh Vigneswaran
Co-I: Dr. Mohammad Naiseh, Prof. Gopal Ramchurn, Dr. Zoe Walters, Prof. Tim Underwood
Partners: University Hospital Southampton, University College London
Communicating Liability in Autonomous Vehicles
Role: Principal Investigator · Status: Past · Funder: UKRI TAS Hub

This project examined how liability is perceived and communicated between autonomous vehicle drivers and third-party insurers, developing integrated mental models that help each stakeholder understand their risks and responsibilities. It addressed how factors such as loss of control and operational limits affect liability perceptions.

PI: Dr. Mohammad Naiseh
Co-I: Dr. Richard Hyde, Dr. Paurav Shukla, Dr. Justyna Lisinska
Partners: Johns Hopkins University, Ultra Leap, Connected Places Catapult, Centre for Connected and Autonomous Vehicles
XHS: eXplainable Human-Swarm Systems
Role: Co-Investigator · Status: Past · Funder: UKRI TAS Hub

This project explored explainability as a factor in facilitating effective and trustworthy human-swarm interaction. It investigated foundational challenges: what questions an explainable swarm should answer, and what types of explanations a swarm is expected to generate in safety-critical human-swarm environments.

PI: Dr. Mohammad Divband Soorati
Co-I: Dr. Mohammad Naiseh, Prof. Gopal Ramchurn, Dr. Katie Parnell
Inclusive Autonomous Vehicles: Human Risk Perception and Trust Narratives
Role: Researcher · Status: Past · Funder: UKRI TAS Hub

Grounded in social psychology and behavioural economics, this project investigated how concerns with AI in autonomous vehicles can be alleviated at each level of control handover, and what mechanisms can address consumers' concerns to increase trust and well-being in human-AI interactions.

PI: Prof. Paurav Shukla
Researchers: Dr. Mohammad Naiseh, Dr. Jediah Clark, Dr. Liz Dowthwaite
Partners: Connected Places Catapult
Intersectional Approaches to Design and Deployment of Trustworthy Autonomous Systems
Role: Co-Investigator · Status: Past · Funder: UKRI TAS Hub

This project explored how intersectional approaches can inform the design and deployment of trustworthy autonomous systems towards an inclusive, fair, and just world. Focusing on the health and maritime sectors, it addressed equity issues through individual, technical, systemic, cultural, and institutional lenses.

PI: Dr. Caitlin Bentley
Co-I: Dr. Mohammad Naiseh, Dr. Laura Sbaffi, Dr. Efpraxia Zamani
Partners: NHS, University of Westminster, Maritime and Coastguard Agency