Developing and implementing AI in medicine and care responsibly 

This case study explores how AI in medicine and care can be developed, implemented and regulated in ways that reflect the needs of marginalised and underrepresented communities.

What are the relationships between AI, care, voice and expertise? How is AI across medicine and care shaped by different voices? This case study asks how the development, regulation and implementation of AI in medicine and care can better account for the expertise, and reflect the needs, of marginalised and underrepresented voices.

Questions we're exploring

What is being lost, and what is being gained, from the disruption AI causes to care, voices, expertise and responsibility across medicine?

How is regulation accounting for these shifts, and how can it do so responsibly, in ways that meet the demands of equitable and inclusive innovation?

What are the implications of the shifting nature and locus of expertise, and of the fragmentation of responsibility, brought about by the adoption of AI across medicine and care?

People

Senior Lecturer, Centre for Biomedicine, Self and Society

Dr Nayha Sethi is a Senior Lecturer and Chancellor’s Fellow in the responsible regulation of health data and AI. Her background is in the socio-legal and ethical aspects of health research, innovation and practice, with a focus on health technologies, especially data and AI.