Wealthy countries have more access to AI but are more skeptical about its use due to past problems and privacy worries, while less developed regions are more hopeful and expect to use more AI in the future. Surveys show that trust in AI is lowest where it’s most common, and highest where it’s new and less used. Many poorer countries are excluded from major AI governance forums and risk being left behind. Policymakers are trying to boost trust and understanding with new education and infrastructure plans. In the end, clearer rules and more local involvement help people trust and use AI more.
Why do wealthier countries show more skepticism toward AI, while less developed regions are more optimistic?
A 2025 UNDP survey reveals an inverse trust curve: wealthy nations, despite higher AI access, are more skeptical due to privacy concerns and past experiences with biased systems, while lower-HDI countries show rising optimism and expect greater AI adoption, driven by anticipation of benefits and fewer negative experiences.
A fresh United Nations Development Programme survey of more than 21,000 adults in 21 countries shows that how much people trust and use artificial intelligence depends less on access to the latest gadgets and more on the country they happen to live in. The report, released in April 2025, uncovers a striking pattern: enthusiasm is rising fastest in regions with the least current AI use, while the world’s wealthiest nations report both the highest access and the lowest appetite for more.
The four tiers of AI use
| HDI level | Recent AI use in health, education or work | Share expecting “more AI” next year |
|---|---|---|
| Low / Medium | 14.4 % | ≈ 66 % |
| High (China, Brazil, etc.) | 23.6 % | ≈ 60 % |
| Very high (US, EU, Japan) | 19.0 % | 45.9 % |
Source: UNDP AI-adoption survey, 2025
The figures reveal an inverse trust curve: where AI is already common, skepticism grows; where it is scarce, optimism soars.
Behind the numbers
- Access vs. attitude. Only 14.4 % of respondents in low- or medium-HDI countries had used an AI tool at work, school or for health in the past month, yet two-thirds expect their usage to climb in the coming year.
- Wealthy nations’ hesitation. Among very high-HDI countries, barely half of users anticipate a rise in AI reliance, suggesting a confidence gap despite superior infrastructure and faster internet speeds.
- Global market size. One reason for the optimism gap is scale: the worldwide AI market is now projected to reach USD 4.8 trillion by 2033, roughly the size of Germany’s economy.
Why trust erodes where tools are abundant
Stanford’s 2025 AI Index notes that global trust in AI firms to safeguard personal data slipped from 50 % in 2023 to 47 % in 2024 [source: Stanford HAI report 2025]. The drop is sharpest in North America and Europe, where high-profile cases of biased hiring algorithms and opaque data sharing have amplified privacy fears. By contrast:
- Asia-Pacific leads in optimism: 83 % of Chinese respondents, 80 % of Indonesians and 77 % of Thais see AI as more helpful than harmful.
- Europe and North America lag: just 39 % in the US and 36 % in the Netherlands share that positive view.
What policy makers are doing
The United States is tackling the trust deficit head-on. New executive orders from April 2025 will:
- roll out AI lessons in K-12 classrooms nationwide
- fund teacher-training grants so educators can both teach and safely use AI tools
- expand AI apprenticeships for high-school students through Labor-Department partnerships
Early modelling by the Afterschool Alliance predicts the measures could lift baseline AI literacy among US students by up to 25 % within two academic years.
Bridging the governance gap
Meanwhile, 118 low- and middle-income countries remain absent from most high-level AI governance forums, according to UNCTAD’s 2025 Technology and Innovation Report. Without a seat at the table, these nations risk becoming mere data suppliers to algorithms trained and monetised elsewhere. The “Global AI Governance Action Plan” adopted at the July 2025 World AI Conference urges:
- accelerated investment in digital infrastructure across the Global South
- region-specific algorithms that respect local languages and laws
- open repositories that lower the barrier to entry for universities and start-ups outside the US-EU-China axis
Take-away for organisations
Companies and NGOs that want to widen AI adoption should note that confidence often follows transparency, not the other way around. Pilot programmes that pair local educators with open-source models, disclose training-data sources and invite community audits have already tripled user-engagement rates in Kenya’s agritech sector and cut drop-off by half in Brazil’s tele-health trials.
In short, the fastest route to higher AI uptake may not be faster chips but clearer rules, broader seats at the table and proof that the technology works for the people it is meant to serve.
What is the “inverse curve” of AI adoption and skepticism?
The inverse curve describes how adoption and trust in AI are moving in opposite directions across regions. While overall usage keeps rising, confidence in AI systems is falling, especially in wealthier countries that have had early access.
Key points from the latest UN Development Programme survey of 21,000+ people:
- 23.6 % of residents in high-HDI countries (e.g. China, Brazil) report recent AI use
- Only 19 % in very-high-HDI countries (US, Europe, Japan) say the same
- Yet two-thirds of respondents in lower-income nations expect to increase AI use next year, versus < 46 % in very-high-income nations
The result is a trust gap: more access does not automatically mean more confidence.
Why are lower-income countries more optimistic about AI?
Despite lower current usage, enthusiasm is higher because:
- Perceived benefit outweighs risk – AI is seen as a leap-frog tool for health, education and finance
- Infrastructure investments are ramping up – 40 % annual drop in the cost to run large models makes advanced tools newly affordable
- Future-first mindset – 66 % of citizens in these countries anticipate greater personal use in the next 12 months, compared with < 50 % in the richest economies
In short, expectation is ahead of experience, driving a positive outlook.
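The 40 % annual cost decline cited above compounds quickly. As a rough, hypothetical illustration (assuming the decline holds steady year after year, which the survey does not guarantee), the cost of running a model drops to under a tenth of its starting level within five years:

```python
# Hypothetical illustration of a constant 40% year-over-year decline in the
# cost of running a large model. Figures are for intuition only and are not
# drawn from the survey data.

def projected_cost(initial_cost: float, annual_drop: float, years: int) -> float:
    """Cost after `years` of a steady `annual_drop` (0.40 = 40%) decline."""
    return initial_cost * (1 - annual_drop) ** years

# Index the starting cost at 100 for readability.
for year in range(6):
    print(f"year {year}: cost index = {projected_cost(100, 0.40, year):.1f}")
# After 5 years: 100 * 0.6**5 ≈ 7.8, i.e. under 8% of the starting cost.
```

The exponential shape is the point: even if the headline rate is only approximately right, tools that were prohibitively expensive a few years ago become affordable well within a planning horizon.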
How wide is the AI governance gap between North and South?
- 118 countries (mostly Global South) are still missing from global AI governance forums
- Only 100 firms, mostly based in the US and China, account for 40 % of all private AI R&D spending
- Africa and parts of Latin America remain under-represented in rule-making, even as their data and talent are used to train global systems
Without seats at the table, these regions risk AI colonialism – value extracted locally, rules set elsewhere.
What concrete steps are governments taking to raise AI literacy?
United States
- April 2025 executive orders created a White House Task Force on AI Education
- Goals: integrate AI into K-12 curricula, fund teacher training, expand apprenticeships
- Early projection (Afterschool Alliance): baseline AI literacy among US students could rise by up to 25 % within two academic years

Global initiatives

- Global AI Governance Action Plan (July 2025) calls for accelerated digital infrastructure in the Global South
- Emphasis on open model sharing and lowering innovation barriers to close the skills gap
Is trust in AI companies actually declining worldwide?
Yes – and the drop is measurable:
- Trust to protect personal data fell from 50 % (2023) → 47 % (2024) globally
- North America & Europe show the lowest confidence (39-40 %)
- Asia-Pacific registers the highest optimism (up to 83 % in China, 80 % in Indonesia)
Driving factors: privacy breaches, biased algorithms, lack of transparency. The trend underscores that technical capability must be matched by ethical credibility if adoption curves are to stay positive.