AI after Kigali: Navigating Hope and Hard Questions

two hands reaching for a flying object in the sky

Jul 23, 2025


Rose Wilcher


The IAS 2025 conference in Kigali was teeming with examples of artificial intelligence contributing to the HIV response. We heard about chatbots that text patients empathetic medication reminders and answer questions about side effects on demand; an AI-powered toolkit designed to improve HIV prevention care among adolescent girls and young women in South Africa; mobile vans in Nigeria that pair digital chest X-rays with AI image readers to speed detection of TB-HIV co-infection; and a variety of applications to accelerate HIV vaccine research. Together, these innovations painted a picture of AI boosting progress along the entire discovery-to-delivery continuum.

The potential feels enormous, yet the path forward warrants caution. The Lancet Global Health Commission on AI and HIV, launched last year to chart a responsible and ethical course for these tools and highlighted at the conference, will publish its recommendations in 2026. That guidance is important, but the pace of AI development demands that we grapple with some big questions now.

Three of those questions, prompted by discussions at IAS:

1. Who owns AI-powered solutions?

AI is fueled by data: clinic records, pharmacy inventories, even anonymous social media chatter. Much of this data is generated locally, yet the algorithms processing it often live on distant servers, developed by distant companies. If ministries, local implementers, and civil society groups aren’t deeply involved in shaping AI tools, or if those tools are locked behind pricey subscriptions or closed-source code, we risk repeating the cycle of donor-driven tools that shine in pilots and vanish when funding shifts. 

2. Will the solutions reach those who need them most?

Much of the hype around AI centers on its scalability and its potential to empower individuals to manage their own healthcare. However, many AI solutions assume steady internet, reliable power, and up-to-date devices, conditions that many rural clinics, informal pharmacies, and individuals lack. Without intentional design choices, such as offline functionality, affordable hardware, and support for local languages, AI could deepen inequities, leaving remote communities further behind.

3. What happens when AI makes mistakes?

Speakers in Kigali were candid about AI’s imperfections: large language models sometimes invent facts, predictive systems can drift off course, and biases can creep into algorithms. In HIV programs, wrong advice can do real harm. Safeguards, such as human oversight and accessible ways for patients and providers to challenge or question outcomes, must be built in from day one.

Looking ahead with optimism

Kigali made one thing clear: AI is not some distant promise. It is already changing everything from drug discovery to patient care. At Root to Rise, we are exploring how AI can make the introduction of new health products faster and more adaptable.

But technology alone will not be enough. The true power of AI in the HIV response will emerge when guided by people who deeply understand health systems and the communities they serve, and who know how to translate knowledge of what works into meaningful policy and programming at scale. With country ownership, genuine equity, and rigorous safety at the core, we are excited about the potential for AI to enable smarter, fairer and faster progress in HIV and other global health areas.

