Robert Opp, featured on this year’s A.I. Power Index, is one of the clearest voices urging the world to slow down—not in innovation, but in assumption. As chief digital officer of the United Nations Development Programme, Opp is guiding A.I. and data strategy across more than 170 countries. That global perspective has made him deeply skeptical of the idea that A.I. delivers benefits equally, everywhere. “In reality, the benefits have been distributed unequally across and within countries,” he tells Observer.
When data sets and solutions fail to capture local languages or cultural contexts, Opp warns, A.I. doesn’t close gaps, it widens them. His mandate at UNDP is to ensure the opposite: that digital infrastructure, inclusive governance and strong foundations come first, before layering on A.I. solutions. That approach is rooted in experience. At the World Food Programme, Opp helped launch ShareTheMeal, a mobile app that raised $40 million to fight hunger. The lesson—that digital platforms succeed when they reduce friction and build trust—now informs how he thinks about embedding A.I. into humanitarian work. Under his leadership, UNDP has piloted A.I. initiatives in agriculture, health and education, proving the technology’s potential to directly improve lives if deployed responsibly.
What’s one assumption about A.I. that you think is dead wrong?
A common assumption is that A.I. will automatically deliver benefits everywhere in the same way. In reality, the benefits have been distributed unequally across and within countries. It’s highly dependent on context, for instance, on whether people have access to relevant data, affordable compute and the necessary skills. If the data sets and A.I. solutions don’t reflect local realities or languages, A.I. can actually amplify exclusion. What’s missing in the conversation is how to localize A.I. so it addresses local problems.
If you had to pick one moment in the last year when you thought “Oh shit, this changes everything” about A.I., what was it?
If I had to pick one moment in the last year, it would be the MIT report from August showing that 95 percent of companies are seeing “zero ROI” from their generative A.I. investments. That felt like a real turning point, a signal that we might finally be moving past the hype cycle and into a more sober conversation about what A.I. is actually good for. To me, it raised fundamental questions we should all be asking: Why are we building this? Do we know if it works well? Do we know who it works well for? And most importantly, how do we ensure that its benefits contribute to shared prosperity?
It also underscored the urgent need for more rigorous evaluation of A.I. tools—especially in the public sector. Without strong evidence of impact, we risk investing time, money and trust into solutions that don’t deliver. But with the right evaluations in place, we can identify which investments are truly transformative and which aren’t, ensuring that A.I. is a tool for meaningful progress rather than just another wave of tech hype.
What’s something about A.I. development that keeps you up at night that most people aren’t talking about?
Much of the data used to train A.I. is sourced from the Global North, predominantly in English, and it doesn’t capture local realities, languages or cultural context. Without diverse and inclusive datasets, A.I. will continue to misrepresent or even marginalize entire populations. This issue doesn’t make headlines as much as job displacement or safety risks, but it’s fundamental to whether A.I. can truly serve everyone.
ShareTheMeal has raised over $40 million through micro-donations. What did that teach you about how people engage with global problems through digital platforms?
It showed that digital platforms can radically reduce the friction of engagement. When people can act directly from their phones, they’re more willing to participate, even in small ways. And those small actions, aggregated at scale, can generate real impact. But more than the technology, it’s about trust: people engage when the purpose is clear, the impact is visible and the experience feels human.
You’re leading digital transformation across 170 countries with vastly different tech infrastructure. How do you build A.I. solutions that work in both Silicon Valley and rural Bangladesh?
The starting point shouldn’t be the technology itself, but the foundations: digital infrastructure, enabling policies and capacity building for people. We focus on helping countries build digital public infrastructure, the equivalent of roads and bridges, like digital ID, payments and data exchanges. Once these are in place, A.I. solutions can be layered on top in ways that are safe, inclusive and relevant to local needs. That way, whether in Silicon Valley or rural Bangladesh, the solution works because the foundations are solid.
The UN has been pushing “digital public goods” as alternatives to Big Tech platforms. What’s one digital public good that’s actually working at scale, and why?
It’s not about pushing alternatives to tech companies; it’s about opening more choices to countries that are trying to build their digital infrastructures. One digital public good that has been adopted at scale is DHIS2, an open-source, web-based software platform most commonly used as a health management information system (HMIS) but adaptable for other sectors. Originally developed by the HISP Centre at the University of Oslo, it has grown through collaboration with a global network of local HISP groups over the past three decades. DHIS2 is now used as the national HMIS in more than 80 low- and middle-income countries, covering about 3.2 billion people, and is also used in areas such as logistics and education due to its flexible, customizable design. Its global, community-based development model combines international standards with local adaptation, making it both widely implemented and locally owned.
You’ve written about South Africa prioritizing A.I. equity over A.I. growth. Should developing countries leapfrog the “move fast and break things” phase entirely?
Developing countries don’t have to repeat the mistakes of others. They have an opportunity to prioritize equity, inclusion and rights from the start, rather than retrofitting protections later. That doesn’t mean slowing down innovation. It means shaping it with guardrails so that A.I. accelerates sustainable development without leaving populations behind. In other words, putting people first.
UNDP works on everything from climate change to poverty reduction. Where is A.I. making the biggest difference in UN programs?
We’re seeing promising applications in agriculture, where A.I. provides farmers with real-time recommendations on crops. In the health sector, language models are improving access to information, such as on maternal health. And in education, A.I. can make learning more accessible, personalized and effective, benefiting both educators and students. These are areas where A.I. directly improves lives—but only if countries have the infrastructure, data and governance to make it work.
How do you balance innovation with protecting vulnerable populations when deploying A.I. in countries with limited data privacy laws?
We take a people-first approach. That means supporting countries in building robust data governance frameworks, privacy protections and trust mechanisms alongside deploying new technologies. One example is our AI Trust and Safety Re-imagination Programme, which moves beyond reactive risk management toward proactive, inclusive and context-sensitive approaches to A.I. governance. Drawing on insights from the 2025 Human Development Report, the programme strengthens local enabling environments while complementing global research and policy efforts. By engaging innovators across the public and private sectors, it re-imagines trust and safety frameworks that prioritize equity, anticipate and prevent harm and ensure A.I. development benefits communities fairly.