I have been interested recently in the ways in which AI is being integrated into healthcare, as part of my personal goal to widen my understanding and broaden my own definition of AI. I see a growing need to do this, both for my own awareness and literacy and to show that AI is affecting curricula well beyond the ongoing kerfuffle around generative AI and assessment integrity. This panel was recommended to me by Professor Dan Nicolau Jr, who chaired the session at the recent event. It looked at the many barriers to progress in a context where early detection, monitoring, business models and data availability all shape how we practise and advance medicine, and where ageing populations present an existential threat to global healthcare systems. Watching it, I was struck by how much the potentials and barriers expressed here are likely to be mirrored in other disciplines. Medicine does seem to be an effective bellwether, though.
Some of the issues that stood out:
Data availability and validity: Just as AI for protein design can produce skewed results when trained on data from over-represented organisms, similar issues of data bias are emerging across AI applications. The challenges around electronic health records – inconsistent, incomplete and error-prone – mirror concerns about data quality in other domains.
Business models and willingness/ability to use what is available: The difficulty of monetising preventative AI applications in medicine, for example, reflects broader questions about how we value different types of AI innovation. Similarly, the need to shift mindsets from reactive to proactive approaches in healthcare parallels the cultural change required for effective AI adoption elsewhere. The panel’s comments about the human propensity NOT to use devices or take medicines that would help us are quite shocking, but still somehow unsurprising. Cracking that, according to the panel, would do more to increase life expectancy than finding a cure for cancer.
The regulatory landscape: The NHS’s procurement processes, which can stifle AI innovation, demonstrate how existing institutional frameworks may need significant adaptation. This raises important questions about how we balance innovation with appropriate oversight – something all sectors grappling with AI must address.
For me, healthcare exemplifies the complex relationship between technical capability and human behaviour. The adoption issue clearly has parallels with our willingness and openness to use novel technologies, even where they can be shown to make life better or easier. The panel’s observations about patient compliance mirror wider challenges around user adoption and engagement with AI systems. We cannot separate the technology from the human context in which it operates.