Data Science for Mental Health (DS4MH) @ The Alan Turing Institute
About Us
The vision for this interest group is to kick-start one or more projects that use contemporary data science and multi-modal data for mental health, providing insight and benefit to individuals and clinicians and contributing to fundamental research in mental health (including dementia) as well as to data science methodology. The group aims to provide an informal bridge between clinicians, charities, and data owners (such as CRIS, UKDP, and Biobank) on the one hand, and data science researchers on the other, to stimulate and align cutting-edge research in this area.
Events
Meetings
We organise monthly meetings (including half-hour invited talks) at the Turing. Meetings are organised and moderated by Iqra Ali and Yue Wu. Please join our mailing list to stay up to date.
As part of the AI UK Fringe, we jointly organised a hybrid event with the NLP interest group on AI for Mental Health Monitoring on 28th March 2024.
See here for our previous talks.
Upcoming Events
Meetings
| Date | Time | Presenter | Title |
|---|---|---|---|
| 2026.1.22 | 15:00 | Introduction | |
| | 15:05 | Aseem Srivastava (Postdoctoral Researcher, MBZUAI, UAE) and Zuhair Hasn Shaik (Research Engineer, MBZUAI, UAE) | Structure and Psycho-social Safety as Language Models Move Closer to Human. The role of AI in social and mental wellbeing is undergoing a fundamental shift, from systems that mediate human interaction to systems that increasingly function as direct social counterparts (i.e., AI companions). This shift introduces a critical challenge: as AI moves closer to humans, the cost of missing structure increases and the question of social safety becomes important. Despite their capabilities, language models remain fragile in socially vulnerable contexts, largely because current architectures lack explicit, interpretable representations of human vulnerability. In the absence of such structure, safety mechanisms based on output filtering or post-hoc moderation are insufficient, particularly in sensitive settings involving emotional support, cultural nuance, and longitudinal interaction. We argue that as AI systems transition from mediators to companions, safety must be ensured through interpretable, taxonomical representations of psychosocial states rather than surface-level control of generated text. Addressing this gap requires rethinking how vulnerability, risk, and social context are represented, evaluated, and constrained within language models, motivating a broader research agenda at the intersection of NLP, AI safety, and mental and social wellbeing. In this talk, we will outline this shift through empirical examples, discuss emerging failure modes and design principles for socially aligned language models, and highlight open research challenges for building systems that interact safely and responsibly with human vulnerability. |
| | 15:50 | After-talk discussion | |