Go Ahead, A.I. — Surprise Us
By Kim Bellard, June 29, 2021
Last week I was on a fun podcast with a bunch of people who were, as usual, smarter than me, and, in particular, more knowledgeable about one of my favorite topics — artificial intelligence (A.I.), especially for healthcare. With the WHO releasing its “first global report” on A.I. — Ethics & Governance of Artificial Intelligence for Health — and with no shortage of other experts weighing in recently, it seemed like a good time to revisit the topic.
My prediction: it’s not going to work out quite like we expect, and it probably shouldn’t.
WHO’s six proposed principles are:
- Protecting human autonomy
- Promoting human well-being and safety and the public interest
- Ensuring transparency, explainability and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable
All valid points, but, as we’re already learning, easier to propose than to ensure. Just ask Timnit Gebru. When it comes to using new technologies, we’re not so good at thinking through their implications, much less at ensuring that everyone benefits. We’re more of a “let the genie out of the bottle and see what happens” kind of species, and I hope our future A.I. overlords don’t laugh too much about that.
The example I’ve been using for years: we can’t even agree on how human physicians seeing patients in other states via telehealth should be licensed/regulated, so how are we going to decide how a cloud-based healthcare A.I. should be?
A.I. is going to evolve much more rapidly than other healthcare technologies, and our existing regulatory practices may not be sufficient, especially in a global market (as we’ve seen with CRISPR). Not to be facetious, but we may need A.I. regulators to oversee A.I. clinicians/clinical support, just as we may need A.I. lawyers to handle the inevitable A.I.-related malpractice suits. Only another black box may be able to understand what a black box is doing.
I worry that we’re thinking about how we can use A.I. to make our healthcare system do more of the same, just better. I think that’s the wrong approach. We should be going back to first principles. What do we want from our healthcare system? And then, how can A.I. help get us there?
If A.I. for healthcare is a better Siri or a new decision support tool in an EHR, we’ve failed. If we’re setting the bar for A.I. to only support clinicians, or even to replicate physicians’ current functions, we’ve failed. We should be expecting much more.
For example, how can we use A.I. to democratize healthcare, putting advice and even treatment directly into people’s hands? How can we use it to make healthcare much more affordable? How can A.I. help diagnose issues sooner and deliver recommendations faster and more accurately?
In short, how can A.I. help us reorient healthcare away from the system that delivers it, and the people who work in it, and toward our health? If that means making some of them irrelevant, or at least greatly redefining their roles, so be it.
Right now, much A.I. work in healthcare seems focused primarily on granular problems, such as diagnosing specific diseases. That’s understandable, as data is most available and comparable around granular tools (e.g., imaging) or conditions (e.g., breast cancer). But our health is usually not confined within service lines. We need more macro A.I. approaches.
We might need A.I. to tell us how A.I. can not just improve our healthcare but also “fix” our healthcare system. And I’m OK with that.
This post is an abridged version of the original posting on Medium. Please follow Kim on Medium and on Twitter (@kimbbellard).