Challenges for AI in health literacy

Yesterday, I posted on LinkedIn about my upcoming panel presentation at the Health Literacy Annual Research Conference (HARC) on Oct 21 and Oct 22, 2024.

Here’s what I posted:

“The Health Literacy Annual Research Conference (HARC) is coming up next week - Monday Oct 21 and Tuesday Oct 22 2024! 

Advancing the field of health literacy is incredibly important in realizing beneficial health outcomes and overall well-being for people individually and as a society. 

I’m thrilled to be at HARC this year on a panel titled “Person-Centered Use of Artificial Intelligence for Health Literacy: Risks and Opportunities” with Sam Mendez (moderator), Vishala Mishra and Tracy Mehan next Monday, Oct 21, 2024, at 4:30 pm ET/1:30 pm PT.

By sharing research, first-hand experiences, guidance and general underpinnings of AI, we’ll explore opportunities and challenges for how AI can be applied responsibly in advancing health literacy.

The discussion is intended to provide background and insight for framing policies, processes, best practices and interactions for using AI in health communications in combination with health literacy professionals.

I’m looking forward to seeing you next week! In the meantime, feel free to DM me with questions or comments.”

Well, in response to my LinkedIn post, I got this very insightful question in the comments:

 
AI in health literacy has mad potential. What specific challenges do you foresee?
— James Stephan-Usypchuk

This is a great question!

Yes, the potential for AI in this field is indeed mad.

And, I encourage anyone who is especially interested to attend our panel discussion where we’ll talk about exactly that :) You can register for HARC here.

Kidding aside, there are indeed challenges. Some of them are not necessarily unique to health literacy. That said, they are amplified because of the sensitivity and impact of the work in this field.

For example, one major challenge currently (probably the major challenge, honestly) is building trust in AI systems, and I expect it will remain a main challenge. Accuracy and appropriateness of outputs, to name a couple of specifics, are among the current trust-related concerns. There’s also what I’m going to call “fairness and representation,” which includes things like bias and non-representation; these are trust-related concerns, too.

Creating responsible, ethical, human-centered and purpose-built applications of AI with correspondingly appropriate evaluation frameworks would help here.

Of course, doing that can be, in and of itself, a challenge for many organizations because it requires a different kind of thinking and a willingness to do the “hard work,” so to speak: the design, the modeling and the thought work.

A related challenge is the burden on users of these AI systems, whether individuals or organizations. 

Compared with more traditional enterprise software and similar productivity tools, there’s a much bigger onus on users to:

  1. Identify where AI is in play;

  2. Determine its impact; and,

  3. Develop and execute responsible and appropriate AI strategies, implementations, processes, policies, supporting tools and evaluations. 

In many cases, this can be a heavy lift. For example, for some organizations, it could mean tuning, optimizing or developing an LLM (or other language model), whether in-house or through third parties or partnerships. For others, it can mean decisions about automated translation of health literate content.

Considerations like these need to be contextualized within a holistic AI strategy and roadmap. Often, the teams and organizations facing these considerations are still building capabilities and competencies with AI, which is a pretty complex ball of wax.

So, developing and expanding AI literacy among patients, health literacy practitioners and cross-functionally within organizations is definitely a challenge worth noting here. While certainly not unique to health literacy, the realities of how this manifests itself and can be most effectively addressed are.

Beyond that, yes, there are challenges specific to health literacy. 

One of these is the challenge of responsibly and ethically equipping AI systems with sufficient context (cultural, social, medical, etc.) and relevant data to produce outputs that are effective, useful and meaningful for health literacy purposes across the spectrum of health conditions, populations, geographies, individuals and use cases.

Where and how experienced and knowledgeable health literacy professionals are “in the loop” is another challenge for AI that feels somewhat unique to this space. AI has to work within the processes, workflows, principles and mental models of the people who do the work, presenting another angle on human-centered and purpose-built design beyond the patient perspective.

In subsequent posts, we’ll dig into specifics around navigating these and similar challenges.

Temese Szalai