Through my work as an AI Tech Lead across startups, enterprises, and government projects spanning Pakistan, the US, Ireland, and France, I’ve witnessed firsthand how the current AI development paradigm creates unequal relationships between technology-producing and technology-consuming regions. This isn’t an abstract critique—it’s based on real observations from the ground about data flows, labor practices, and whose voices shape AI development.
Over the past seven years, I’ve had the privilege of working on AI projects across multiple continents—from aerospace applications in Pakistan to startup ecosystems in Ireland, from enterprise solutions in the Caribbean to design innovation in France. What I’ve observed isn’t the democratizing force that AI advocates often promise, but a more complex reality where the benefits and burdens of AI development are unevenly distributed.
This post reflects on what I’ve learned about the global AI ecosystem and raises questions we need to address as the technology becomes more pervasive.
During my time leading data science teams at various organizations, I’ve seen how data flows in the global AI economy. When we built analytics frameworks for enterprise clients, the pattern was consistent: data generated in emerging markets often gets processed and monetized by platforms headquartered elsewhere [1].
Take mobile financial services, an area I’ve worked on extensively. While innovations like M-Pesa originated in Kenya [2], the behavioral data generated by millions of users across Africa increasingly flows to Western AI companies building credit scoring and fraud detection systems. The insights derived from this data—understanding spending patterns, predicting financial behavior, optimizing user interfaces—become intellectual property that’s then licensed back to local financial institutions [3].
This isn’t inherently problematic, but it raises questions about value distribution. When a startup in Silicon Valley uses transaction data from Lagos to improve their algorithm, who benefits from that improvement? Usually, it’s the shareholders of the Silicon Valley company, not the Lagos users whose behavior created the training data.
Through platforms like Omdena, where I led machine learning projects for social impact, I regularly worked with data scientists and ML engineers from across the Global South. The talent and dedication were extraordinary, but the economic dynamics were troubling.
The global AI workforce structure reveals concerning patterns about how labor and profits are distributed:
Many of the data annotation and model training tasks that make AI systems possible are outsourced to countries where labor costs are lower [4]. I’ve seen brilliant engineers in Pakistan, India, and the Philippines working on cutting-edge AI projects for a fraction of what their counterparts in Silicon Valley earn for similar work.
Content moderation—the essential but traumatic work of training AI systems to recognize harmful content—is disproportionately performed by workers in Kenya, the Philippines, and other countries where Western tech companies can hire talent cheaply [5]. These workers face significant psychological risks while protecting users in wealthier countries from disturbing content.
While building LLM-based solutions like Bob-The Startup Advisor and Sandy-The Financial Advisor, I encountered the limitations of current AI models firsthand. Despite claims of multilingual capability, these systems struggle with non-English contexts in ways that go beyond simple translation.
Large language models trained primarily on English text exhibit systematic biases when dealing with non-Western concepts [6]. When I tested financial advisory models with questions about Islamic banking principles or traditional business practices common in South Asian markets, the responses were often inadequate or culturally inappropriate.
When I asked my financial advisor LLM about hawala (the traditional, trust-based money transfer system used widely across South Asia, the Middle East, and their diasporas), it provided generic responses about “informal banking” without understanding the cultural and religious principles that make hawala a legitimate and important financial instrument in many communities.
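To make that kind of gap concrete, here is a minimal sketch of the sort of probe I’m describing, assuming an OpenAI-compatible chat API. The model name, the probe concepts, and the expected-term lists are illustrative placeholders, not the evaluation actually used for Bob or Sandy.

```python
# Minimal sketch of a cultural-context probe for a financial-advisory LLM.
# Assumes an OpenAI-compatible chat API; the model name, probe concepts,
# and expected-term lists are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Concepts the model should explain on their own terms,
# not just label as "informal banking".
PROBES = {
    "hawala": ["trust", "broker", "settlement", "remittance"],
    "murabaha financing": ["cost-plus", "markup", "riba"],
    "committee / chit fund savings": ["rotating", "pooled", "turn"],
}

def probe(concept: str, expected_terms: list[str]) -> None:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a financial advisor."},
            {"role": "user",
             "content": f"Explain how {concept} works and when a client should use it."},
        ],
    )
    answer = response.choices[0].message.content.lower()
    hits = [term for term in expected_terms if term in answer]
    # A low hit count only flags the answer for human review; it is not proof of bias.
    print(f"{concept}: matched {len(hits)}/{len(expected_terms)} expected terms {hits}")

for concept, terms in PROBES.items():
    probe(concept, terms)
```

Keyword matching is only a coarse screen; judging whether a response is culturally adequate still requires reviewers who actually know the domain.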
This isn’t just a technical limitation—it reflects whose knowledge and perspectives are valued in AI training data. The vast majority of text used to train large language models comes from English-language sources, primarily from Western contexts [7]. Local knowledge systems, indigenous practices, and non-Western ways of organizing information are systematically underrepresented.
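This skew is straightforward to check on any corpus you control. Below is a rough sketch of a language-mix audit under assumed conditions: a directory of plain-text files (the corpus_sample/ path is a placeholder) and the langdetect package for quick language identification.

```python
# Rough sketch: estimate the language mix of a text-corpus sample to see how
# heavily it skews toward English. The directory layout and sampling strategy
# are placeholders; langdetect is used for quick language identification.
from collections import Counter
from pathlib import Path

from langdetect import detect


def language_mix(corpus_dir: str, max_files: int = 1000) -> Counter:
    counts: Counter = Counter()
    for path in sorted(Path(corpus_dir).glob("*.txt"))[:max_files]:
        text = path.read_text(encoding="utf-8", errors="ignore")[:2000]
        try:
            counts[detect(text)] += 1
        except Exception:  # very short or non-linguistic files fail detection
            counts["unknown"] += 1
    return counts


if __name__ == "__main__":
    mix = language_mix("corpus_sample/")  # placeholder path
    total = sum(mix.values()) or 1
    for lang, n in mix.most_common():
        print(f"{lang}: {n / total:.1%}")
```

On general web-scraped corpora, an audit like this typically shows English dominating by a wide margin, which is exactly the skew described above.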
One of the most frustrating aspects of the current AI ecosystem is how innovation is perceived and valued. During my MBA at Rennes School of Business, I studied how technological innovation is often framed as flowing from “centers” (Silicon Valley, Boston, London) to “peripheries” (everywhere else).
The following chart illustrates how AI investment is concentrated in wealthy regions:
This framing ignores the reality I’ve witnessed: incredible innovation happening across the Global South, often out of necessity rather than venture capital abundance. The aerospace projects I worked on in Pakistan involved sophisticated optimization algorithms developed under resource constraints that would be unimaginable in Western tech companies.
Yet these innovations rarely receive global recognition or investment. The AI research emerging from universities in Nigeria, Pakistan, Brazil, or India is often overlooked by major conferences and journals, which maintain editorial boards dominated by Western institutions [8].
I’m not arguing that all AI development should be localized or that global collaboration is inherently problematic. The projects I’ve worked on have benefited enormously from international collaboration and knowledge sharing.
But we need more honest conversations about power dynamics in AI development. Some concrete steps that could help:
Equitable Partnership Models: When AI companies use data from emerging markets, they should share the value created, not just extract insights [9].
Diverse Training Data: Deliberate efforts to include non-Western knowledge sources in AI training data, with proper compensation and attribution to source communities [10].
Local AI Capacity Building: Investment in AI research institutions and startups in the Global South, not just outsourcing implementation work [11].
Ethical Labor Practices: Fair compensation and psychological support for workers performing essential but difficult AI training tasks [12].
The following chart compares current AI value distribution with a more equitable proposed model:
As someone who has worked across this ecosystem, I’m left with questions that the AI community needs to address.
These aren’t abstract philosophical questions—they’re practical challenges that will determine whether AI becomes a force for reducing or increasing global inequality.
The technology itself is remarkable. I’ve seen AI systems optimize supply chains, predict equipment failures, and automate routine tasks in ways that genuinely improve people’s lives. But technology alone doesn’t determine outcomes—the economic and social structures around it do.
As AI practitioners, we have a responsibility to think critically about these structures and work toward more equitable alternatives. The future of AI isn’t predetermined, but it won’t democratize itself.
What do you think? Have you experienced similar patterns in your work with AI systems? Share your thoughts in the comments below.
1. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
2. Hughes, N., & Lonie, S. (2007). M-PESA: Mobile money for the “unbanked” turning cellphones into 24-hour tellers in Kenya. Innovations, 2(1-2), 63-81.
3. Aitken, R. (2017). ‘All data is credit data’: Constituting the unbanked. Competition & Change, 21(4), 274-300.
4. Gray, M. L., & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt.
5. Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
6. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623.
7. Rogers, A., Kovaleva, O., Downey, M., & Rumshisky, A. (2020). What’s in your embedding? Analyzing word embedding bias in conceptual spaces. Proceedings of the 1st Workshop on Gender Bias in Natural Language Processing, 1-16.
8. Mohamed, S., Png, M. T., & Isaac, W. (2020). Decolonising science–reconstructing relations. eLife, 9, e65546.
9. McDonald, S., & Milne, R. (2021). Corporate power and global health governance: The example of foundation and pharmaceutical industry relations. Global Social Policy, 21(2), 275-297.
10. Blodgett, S. L., Barocas, S., Daumé III, H., & Wallach, H. (2020). Language (technology) is power: A critical survey of “bias” in NLP. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5454-5476.
11. Adams, R. (2021). Can artificial intelligence be decolonized? Interdisciplinary Science Reviews, 46(1-2), 176-197.
12. Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.