AI for the Global Majority
11 May 2026

Who Governs AI in Global Health? Rethinking Power, Data, and Responsibility

As part of the AI for the Global Majority initiative, a multidisciplinary research team, led by Professor Jude Kong and including Jake Effoduh, Jim Hinton, Abbas Yazdinejad, and Maral Niazi, is examining how AI is deployed across healthcare systems in the Global Majority, and what this reveals about current gaps in governance.

Beyond principles: the challenge of implementation

Discussions on AI governance often focus on high-level principles such as fairness, accountability, and transparency. Yet in many contexts, especially across Africa and other regions of the Global Majority, the challenge lies not in defining principles, but in implementing them effectively.

The team’s research highlights a persistent gap between global frameworks and on-the-ground realities, where governance structures are often fragmented, under-resourced, or still emerging.

This raises a key question:
How can AI governance move from abstract guidelines to practical, context-sensitive tools?


A framework built around risks, rights, and rules

To address this challenge, the project proposes a governance approach structured around three core dimensions:

  • risks: identifying potential harms associated with AI deployment;
  • rights: ensuring that systems respect fundamental human and social protections;
  • rules: defining the regulatory and institutional mechanisms needed to govern AI effectively.

Rather than treating these dimensions in isolation, the framework connects technical systems with their broader societal implications, particularly in the context of public health.


Data, power, and inequality

The research also points to deeper structural issues shaping AI governance.

One major challenge is the lack of accessible and reliable data on how AI systems are deployed in healthcare settings across the Global Majority. Much of this information is not publicly available, requiring direct engagement with organisations and experts.

At the same time, concerns around digital colonialism and data extractivism are becoming increasingly central. When data is collected, processed, and monetised without local control or benefit, existing inequalities risk being reinforced.

These dynamics highlight the need to rethink not only governance frameworks, but also the distribution of power in the global AI ecosystem.


Trust and legitimacy

Another key issue is public trust.

In contexts where governance frameworks are still evolving, the deployment of AI in sensitive areas such as healthcare raises important questions about legitimacy, accountability, and consent.

Without trust, even the most advanced systems may struggle to achieve meaningful impact.


A broader shift

Taken together, these challenges suggest that AI governance cannot be approached as a purely technical or regulatory issue.

It requires a broader perspective that integrates infrastructure, institutions, and lived realities, and that recognises the diversity of contexts in which AI is deployed.

This is precisely what the team’s work seeks to operationalise, as we will see in Part 2 of this series, coming soon.


About AI for the Global Majority

AI for the Global Majority (AI4GM) is a joint initiative of the Geneva Graduate Institute, Microsoft, and the International Telecommunication Union (ITU) dedicated to supporting innovative, evidence-based, and context-sensitive research on how artificial intelligence can benefit the world’s majority populations.

Bringing together interdisciplinary teams from across regions and sectors, the initiative explores practical pathways for more inclusive, responsible, and impactful AI in areas such as governance, education, health, finance, and digital innovation.

Selected teams will present their work in Geneva as part of the AI for Good Global Summit, contributing to international discussions on the future of AI and global development.