AI & Health Equity: Navigating the challenges of human-AI teaming in healthcare
Description
Artificial intelligence (AI) is transforming how healthcare is delivered and experienced, yet concerns remain about biases embedded in algorithms that can exacerbate inequities for marginalized groups. Much current work in “Fair-AI” aims to mitigate bias by adjusting training data, system design, or outputs. While valuable, these strategies cannot fully address the social and political roots of inequity in healthcare. An emerging alternative is to reconceptualize AI not as an autonomous tool but as a “teammate” that works alongside clinicians. Human–AI teaming emphasizes reflexive, collaborative engagement with AI systems to enhance, rather than replace, physician judgment. This presentation introduces participants to the concept of human–AI teaming and its potential to support better decision-making, improved patient outcomes, and patient-centered care. Attendees will examine the limits of current Fair-AI approaches, explore the social and technical considerations that enable effective teaming (e.g., mental models of AI), and consider novel ideas for advancing equitable AI integration in practice.
Presenters
Dr. Laura Sikstrom is a medical anthropologist and Scientist in the Krembil Centre for Neuroinformatics at the Centre for Addiction and Mental Health (CAMH), and an Assistant Professor in the Department of Anthropology at the University of Toronto. Dr. Sikstrom co-leads the Predictive Care Lab, an interdisciplinary research team exploring AI, health service delivery, and social justice. Much of her current research focuses on the growing emphasis on “Fair AI” within the computer sciences, particularly the range of sociotechnical solutions designed to mitigate algorithmic bias. Through ethnographic and computational approaches, she helps shape AI systems that are not only technically robust but also socially responsible, promoting fair, inclusive, and patient-centered mental health care.
Marta Maslej is a Staff Scientist with the Krembil Centre for Neuroinformatics (KCNI) at CAMH and an Assistant Professor in the Department of Psychiatry at the University of Toronto. At KCNI, she co-leads the Predictive Care Team, which uses interdisciplinary methods to understand how AI applications will impact mental health care. As part of this work, she focuses on using AI methods to derive insights from clinical data, with the aim of improving assessment, informing treatment decisions, and identifying and mitigating bias. She also contributes this expertise to support research and discovery in other areas, such as the study of schizophrenia, suicide prevention, addictions, and medical education.
Rounds Details
Best Practices in Education Rounds (BPER) are co-hosted by the Centre for Faculty Development, The Wilson Centre and the Centre for Advancing Collaborative Healthcare & Education.
Accreditation Details
Each BPER has been accredited for up to:
- 1 College of Family Physicians of Canada – Mainpro+ credit
- 1 Royal College of Physicians and Surgeons of Canada – Section 1 hour
Review complete accreditation details.
For more information about BPER, please click here.
Event Details
- Start date: Oct 14
- Duration: 1 hour
- Location: Online
- Cost: Free
Recommended Events
- Oct 16: Closing a medical practice requires a thoughtful and well-structured approach to ensure continuity of care, uphold professional and institutional responsibilities, preserve academic commitments and research obligations, and mitigate disruption to patients, learners, and colleagues. Come and join the discussion with our seasoned panelists and share your insights and stories!
- Oct 22: Join this practical and thought-provoking workshop to strengthen your ability to design, critique, and refine performance-based assessments that are meaningful, fair, and defensible.