I am involved in a range of funded research projects spanning responsible AI, explainable AI, autonomous vehicles, human-swarm systems, and digital wellbeing. These projects are funded by the European Commission, UKRI, and other international bodies.
PRESERVE aims to improve the daily investigative and preventive work of Law Enforcement Authorities (LEAs), enhance proactive threat detection and response, and provide a regulatory-compliant framework for the responsible use of AI in crime detection and prevention. The €6 million project involves partners from eight countries across Europe.
ExtremeXP aims to provide accurate, precise, fit-for-purpose, and trustworthy data-driven insights by evaluating complex analytics variants and placing end users at the centre of the process. The €10 million project integrates explainable AI, machine learning, visual analytics, and knowledge engineering into a unified framework validated across crisis management, predictive maintenance, mobility, public safety, and cybersecurity.
This project investigates the notion of digital wellbeing and the role of culture and digital design both in triggering concerns and in mediating solutions. It proposes socio-technical solutions to enhance digital wellbeing, drawing on social expectation setting and psychological inoculation theory, with Qatari society as a case study.
This project mirrored the decision-making process of upper gastrointestinal multidisciplinary teams managing oesophageal cancer, combining human-selected critical variables with machine learning to standardise data-driven clinical decisions. Through human-AI partnership, the work aimed to ensure the approach was trustworthy enough for routine clinical use.
This project examined how liability is perceived and communicated between autonomous vehicle drivers and third-party insurers, developing integrated mental models that help each stakeholder understand their risks and responsibilities. It addressed how factors such as loss of control and operational limits affect liability perceptions.
This project explored explainability as a factor in facilitating effective and trustworthy human-swarm interaction. It investigated foundational challenges: what questions an explainable swarm should be able to answer, and what types of explanations a swarm is expected to generate in safety-critical human-swarm environments.
Grounded in social psychology and behavioural economics, this project investigated how concerns about AI in autonomous vehicles can be alleviated at each level of control handover, and what mechanisms can address consumers' concerns to increase trust and wellbeing in human-AI interactions.
This project explored how intersectional approaches can inform the design and deployment of trustworthy autonomous systems towards an inclusive, fair, and just world. Focusing on the health and maritime sectors, it addressed equity issues through individual, technical, systemic, cultural, and institutional lenses.