What is your current role at the ITU and what types of projects are you involved in?
I joined the ITU 18 months ago, after a 20-year journey as an entrepreneur specialising in deep tech. The ITU has always been a unique and inspiring institution to me, not only because of its rich history but also because of the critical role it has played, and is still set to play, in global digital transformation.
As Senior Coordinator within ITU’s Telecommunication Development Bureau, I lead the Open Source Ecosystem Enabler (OSEE) initiative, which plays a key role in shaping strategic projects under the Digital Networks and Environment (DNE) department. Through OSEE, I contribute to and support related initiatives such as GovStack, the Open Source AI (OSAI) reference implementation, and the Open Source United (OSU) Common Policy Framework at the UN. My role focuses on advancing open-source ecosystems as drivers of sustainable digital transformation, in line with the UN Global Digital Compact, and on supporting member states in their digital development efforts.
What are the major challenges in AI governance in our current environment?
Today’s AI governance faces a complex set of challenges, particularly in ensuring openness, inclusiveness, and equity across diverse global contexts. One of the most pressing challenges is the need to help countries reach a comparable level of maturity in AI development and usage. Many countries still need to develop a comprehensive understanding of critical issues like digital sovereignty, inclusiveness, and transparency.
In this context, the 2024 AI Governance Day, held alongside the AI for Good Global Summit in Geneva, underscored the importance of transitioning from AI governance principles to concrete implementation strategies. The event brought together government leaders, international organisations, and industry experts to discuss practical approaches to AI regulation, standardisation, and capacity building. These discussions further reinforced the role of open-source ecosystems in fostering transparency and inclusiveness in AI governance, aligning closely with ITU’s mission and ongoing initiatives such as OSEE and OSAI.
Digital public goods, digital public infrastructures, and open-source initiatives are essential tools for addressing these gaps. However, this must be done against the backdrop of rapid AI advancements. To keep pace with this evolution, an unprecedented global push toward open-source development and the scaling of digital public infrastructures is necessary. AI, in this sense, can be considered a digital common, the benefits and risks of which should be shared equitably.
What is the role of open source in transparent and ethical AI governance?
Open source, as formally defined in 1998 by the Open Source Initiative, encompasses the rights to use, study, modify, and distribute software. Many existing AI systems, however, do not meet these criteria, as they lack the openness needed for full public scrutiny and analysis. For AI governance to achieve transparency, adherence to the fundamental principles of open source is critical.
At the ITU and the UN, we advocate for initiatives like the Open Source United community of practice and the OSAI reference implementation. These programmes emphasise the development of open-source AI that extends beyond software to include regulated access to critical data when necessary.
Ethical AI governance poses further challenges, though, as definitions of ethics can vary significantly across countries and cultures. Rather than aiming for universal ethical alignment, it may be more practical to focus on responsible AI development. This requires raising awareness of AI’s risks and applications and ensuring global representation in AI training datasets. A responsible approach to AI governance includes the promotion of transparency, inclusiveness, equitable access, and bias-free data. When applied correctly, these principles foster trust in AI systems, a necessity for digital public goods and infrastructures.
How can innovation be balanced with regulation, and what should it mean for policymakers?
Policymakers face a delicate balancing act when trying to promote innovation while establishing regulations that ensure safety and responsibility. Predicting technological innovations, their applications, and their societal impact is inherently challenging. For this reason, regulations should not stifle innovation but rather facilitate its growth by providing a clear framework within which innovation can thrive.
Effective regulation often involves market creation and tailored risk management strategies. One promising regulatory approach is to anticipate emerging innovations by identifying potential risks and defining clear rules and responsibilities for their handling. The European Union’s AI Act offers a noteworthy example of this method. It classifies AI applications into four risk categories, from minimal-risk systems to unacceptable-risk systems that are prohibited outright, thereby allowing innovation to proceed while mitigating potential harm.
Ultimately, by framing AI as a digital common (shared, accessible, and responsibly governed), policymakers can create regulatory frameworks that not only mitigate risks but also foster collaborative innovation, ensuring that AI development serves the collective good and advances equitable global progress.
This article was published in Globe #35, the Graduate Institute Review.