The Colonial Pattern: Whoever Writes the Rules Controls the Technology

In the late 1990s, I worked at UzAgroIndustrialBank in Tashkent — the institution that financed Uzbekistan’s agricultural sector, including the cotton industry that generated roughly a third of the country’s export revenues. The bank was large enough to sit at the intersection of state policy and international pressure. And from that position, I had a direct view of how governance frameworks actually arrive.

International financial institutions came with reform packages: privatization timelines, agricultural restructuring requirements, price liberalization schedules. The language was that of technical assistance. The structure was that of conditionality — help was available, but only if certain standards were adopted. Standards drafted almost entirely by institutions headquartered in Washington and London — then the twin centers of international finance, home to the World Bank, the IMF, and the EBRD — and designed for economies whose conditions differed substantially from ours.

The officials who arrived with these frameworks were, in my direct experience, capable and sincere. The problem was structural: the rules were written elsewhere, by people who had not grown cotton under Soviet irrigation systems, had not watched rural families depend on collective farm employment, and had not managed a payment system still rebuilding its institutions after seven decades of central planning. The frameworks were applied without understanding local conditions — copied from templates designed for other contexts, with Uzbekistan treated as just another transitional economy to be processed through a standard checklist. For those designing the programs, we were interchangeable with a dozen other post-Soviet republics. The particular weight of our circumstances — ecological, social, institutional — did not figure in the formula. The farmers of the Fergana Valley, the workers in cotton processing plants, the families of Samarkand Region — they had no meaningful seat at the table where the rules were being drafted.

That table. I have been thinking about it for the past two years. Because I recognize its architecture now in AI governance. The geometry is strikingly familiar. Only the subject has changed.


The thesis is uncomfortable, but I believe it is correct. The institutions now shaping AI governance are reproducing a pattern older than artificial intelligence itself: those who write the rules control the technology — and its benefits. The countries that will absorb the disruption, contribute the data labor, supply the raw materials, and live most immediately with the consequences of AI deployment are, with rare exceptions, absent from the rooms where global frameworks are being drafted. This is not merely incidental. It is a structural feature of how international governance has historically operated. Unless deliberately redesigned, AI governance will encode that structure into the technology itself — at a depth that will make it extraordinarily difficult to correct later.

I. The Architecture of External Standards

I use the term “colonial pattern” not as a slogan, but as a description of a governance structure: rule-writing concentrated at the center, implementation costs externalized to the periphery, and limited representation for those most affected. This structure does not require imperial intent. It requires only institutional asymmetry and the absence of mechanisms that would give voice to those governed.

The parallel with the 1990s development finance experience is not metaphorical. It is structural.

When international financial institutions condition assistance on adopting specific reform frameworks — privatization models, capital account liberalization, agricultural restructuring templates — three mechanisms operate simultaneously: rule-making concentrated among a small number of actors; implementation costs borne by those who had no voice in design; and compliance incentivized through access to resources rather than through demonstrated local efficacy. The rules need not be designed with exploitative intent to produce exploitative outcomes. The structure does that work regardless of intention.

AI governance is replicating these mechanisms. The EU AI Act — the most comprehensive AI regulatory framework produced by any major jurisdiction — was developed through EU institutional processes and shaped primarily within EU stakeholder ecosystems [1]. It will nonetheless shape AI development globally, because access to the EU market conditions how companies design their systems everywhere. The US NIST AI Risk Management Framework [2], the G7 Hiroshima AI Process principles [3], and the voluntary commitments from the Bletchley Park and Seoul AI Safety Summits [4] were all developed within processes dominated by a small number of technologically advanced economies.

I am not in a position to assert that this architecture is the product of deliberate design rather than structural momentum. But I observe that the effect — whatever the intent — is consistent with a strategy of global standardization on terms set by a small number of actors: frameworks that encode specific cultural values and institutional preferences into the technology itself, creating conditions under which the rest of the world must adapt to standards it did not shape. Whether this reflects conscious geopolitical calculation or simply the natural tendency of powerful institutions to universalize their own norms, the structural consequence is the same: those who set the standards of AI governance position themselves to extract the benefits, while others inherit the costs of conformity — and the deeper risk of gradual erosion of the institutional and cultural distinctiveness that makes sovereign governance meaningful in the first place.

The failure mechanism is the same as in the 1990s: standard-setting without representation produces standards that reflect the priorities and risk tolerances of those setting them — which may or may not correspond to the realities of those governed by them.

UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states in 2021, was a genuine attempt at broader inclusion [5]. But the research capacity, technical expertise, and institutional infrastructure required to meaningfully shape such a framework — rather than simply endorse it — are concentrated in a small number of countries. Signing a document and drafting it are structurally different activities. The distinction matters.

II. Extraction Without Representation

There is a second dimension to this pattern that extends beyond standard-setting: the extraction of value from populations who have no voice in how that value is governed.

The infrastructure of frontier AI depends on supply chains that run through the Global South in ways rarely visible in governance discussions. The cobalt, lithium, and rare earth elements essential for AI hardware are sourced substantially from the Democratic Republic of Congo, Chile, Argentina, Brazil, China, and Indonesia [6]. The data annotation labor that makes large language models functional — the human work of reviewing, classifying, and correcting AI outputs that shapes how these systems learn — is performed at scale in Kenya, the Philippines, India, and elsewhere, often at wages reflecting the same asymmetries as other forms of digitally outsourced labor [7]. The training data itself includes text, images, and cultural content generated by billions of people who were never consulted and receive no governance voice in return.

This is not a new pattern. It is the oldest one in the history of international economic organization: raw materials and labor flow from the periphery to the center; finished products, standards, and governance frameworks flow back. The digital economy adapted the mechanism. It did not replace it.

What is genuinely new is the invisibility of this extraction to those experiencing it. Colonial resource extraction left physical traces — mines, plantations, railways built to move commodities to ports. Data extraction leaves none visible at the point of origin. The cotton farmer in Uzbekistan knew that someone was buying his cotton at a price set elsewhere. The content creator in Lagos, the service worker in Manila, the rural user in Uttar Pradesh — they have no comparable signal that their contributions are foundational to systems whose governance they have no voice in.

III. The Center Decides, the Periphery Executes

The third dimension of the pattern is the architecture of AI governance itself: who participates in the forums where AI development norms are being established, and whose institutional interests are reflected in emerging frameworks.

The frontier AI laboratories whose systems are at the center of global governance discussions — OpenAI, Google DeepMind, Anthropic, Meta AI, Mistral, and a handful of others — are concentrated in the United States and Europe [8]. The regulatory bodies developing the most consequential AI oversight frameworks are EU and US institutions. The AI safety research community whose work informs these governance decisions is similarly concentrated geographically. This is not a conspiracy. It reflects the historical distribution of technical infrastructure, research investment, and capital accumulation. But it produces a structural consequence: the frameworks emerging from this concentrated ecosystem reflect the safety priorities, ethical assumptions, and institutional risk tolerances of a specific set of societies — and are then applied globally, often as conditions of market access or technical compatibility.

The structural analogy to what I observed in Uzbekistan in the 1990s is close. The IMF and World Bank were not malicious. They were applying frameworks developed from the experience of market economies to an economy structured entirely differently. The mismatch was structural, not intentional. And it produced consequences that fell on people who had had no voice in the frameworks they were required to adopt.

In AI governance, the consequences of analogous mismatches are likely to be more severe and harder to reverse. A failed agricultural privatization policy can, in principle, be corrected — land redistributed, subsidies restored, institutions rebuilt. An AI governance architecture that encodes one set of cultural values, risk tolerances, and economic interests into the standards governing intelligent systems deployed globally is more difficult to undo once those systems are integrated into critical infrastructure: finance, healthcare, education, public administration.

The Non-Aligned Movement arose in the 1950s and 1960s partly as a response to governance structures in which newly independent states found themselves formally equal participants in frameworks designed without them and for purposes that did not reflect their interests [9]. The question for AI governance is whether a comparable structural response will emerge — and whether it will emerge before or after the architecture becomes effectively irreversible.

IV. Why This Pattern Is More Durable in AI Than Previous Cycles

Previous cycles of internationally imposed standards — financial regulation, pharmaceutical approval, telecommunications infrastructure — were partially correctable over time. Developing countries built institutional capacity, negotiated more effectively within multilateral frameworks, and gradually gained voice in shaping the rules governing their participation in global systems. The process was slow, uneven, and incomplete. But it existed.

AI governance may be structurally harder to correct for three reasons.

First, the capability gap between AI-producing and AI-consuming countries is widening faster than it can be closed through conventional capacity-building. Current frontier AI training runs already cost hundreds of millions of dollars in computational resources alone, with next-generation systems projected to require billions [11]. Investment at this level is inaccessible to most countries, and the asymmetry it creates is not temporary. Some countries will remain rule-takers in AI governance not because they lack institutional sophistication, but because participation in rule-making at the technical frontier requires resources that are structurally unavailable to them.

Second, technical governance standards tend to become entrenched as the systems they govern are integrated into infrastructure. The governance architecture established around early AI deployments will constrain what is possible to govern later — not because it was designed to, but because technical path dependence operates regardless of design intent. The window for deliberate architectural choice is open now. It will not remain open indefinitely.

Third, and most fundamentally: the values encoded in AI systems are not neutral. Every design choice — about what constitutes private information, about how to weigh individual rights against collective interests, about what speech is harmful, about whose historical experience counts as training data — reflects the cultural and institutional context of those who made it. When those systems are deployed globally, those values travel with them. This is not merely a governance problem. It is an epistemic one: the representation of reality embedded in AI systems reflects the representational choices of those who built them.

Implications

First: the current architecture of AI governance concentrates rule-making authority in a small number of technologically advanced economies while distributing the risks and disruption of AI deployment globally. This structural inversion — those most affected have least voice; those least affected have most — is not a temporary imbalance that will self-correct through market forces or good intentions. It requires deliberate architectural redesign. And that redesign becomes harder with each passing year. As AI systems are embedded deeper into critical infrastructure, the technical path dependencies that would need to be unwound grow more extensive. The window for meaningful correction is not closed. But it is narrowing in ways that are rarely acknowledged in the forums where these decisions are made.

Second: AI governance that does not include meaningful participation from the Global South will produce frameworks that are fragile in ways their designers cannot fully anticipate. Governance designed for one set of social realities will fail when applied to others — not because the designers were negligent, but because they lacked access to the realities they were governing. This is a risk for everyone, not only for the excluded. The IMF’s shock therapy frameworks failed partly for this reason: designed for contexts they did not describe, they produced outcomes they did not predict.

Third: the populations most directly exposed to AI-driven disruption — in labor markets, in access to public services, in exposure to automated decision systems — are precisely those with the least representation in governance design. This is the defining structural feature of the colonial pattern in its current form. It is not primarily a moral problem, though it is that. It is an institutional design problem: governance without feedback from those governed will systematically fail to identify the failure modes that matter most to the people experiencing them.

Signals to Watch

Composition of delegations at major AI governance forums — not nominal representation, but substantive participation in drafting processes. If Global South delegations are consistently in the position of endorsing frameworks rather than shaping them, the structural imbalance is deepening regardless of the language of inclusion in official documents.

Whether major AI regulatory frameworks include provisions specifically designed for contexts with limited regulatory infrastructure — or whether they simply export compliance requirements that assume institutional capacity that most of the world does not possess. The EU AI Act’s conformity assessment requirements, for instance, presuppose technical audit ecosystems that do not yet exist in most countries that will need to comply.

The emergence, or absence, of South-South AI governance coalitions. The African Union’s Continental AI Strategy, the ASEAN Guide on AI Governance and Ethics, and similar regional initiatives represent early indicators [10]. Whether these mature into substantive governance architecture or remain aspirational documents will reveal whether countries outside the current centers are seeking voice or accepting the existing architecture as given.

The Questions That Remain Open

In the 1990s, Uzbekistan resisted the reform frameworks being imposed on it — not always successfully, not without real costs, but with a clarity about what was being asked and who was asking it. The government understood, whatever one may think of its methods, that accepting the rules as written meant accepting the power structure embedded in them. Accepting those conditions — rapid privatization, agricultural restructuring, price liberalization — would have produced predictable and severe consequences: mass unemployment as collective farms dissolved without alternative employment structures, food price inflation in a population already dependent on state subsidies, currency instability, and the kind of socio-economic destabilization that turned parts of the broader post-Soviet space into scenes of poverty and social collapse within a decade. I was close enough to the mechanics of the agricultural finance system to understand this was not a theoretical risk. It was the foreseeable trajectory.

I am not arguing that resistance is always the right response. I am observing that the precondition for any meaningful governance participation is clarity about whose rules are being adopted and who benefits from them. The current AI governance conversation does not, for the most part, ask this question with sufficient directness.

The table exists. The frameworks are being written. The question is not whether rules will be established — they will be — but whether the process that produces them will achieve the representational breadth that gives governance its legitimacy.

If the rules of AI governance are written without meaningful participation from the overwhelming majority of humanity — the billions of people who will live with their consequences but had no hand in drafting them — what exactly is being governed?

And if the answer is: not the interests of those people — then on what basis does that governance claim legitimacy?


Sources & Notes

[1] European Parliament and Council of the European Union. Regulation (EU) 2024/1689 (Artificial Intelligence Act). Official Journal of the European Union, 12 July 2024. eur-lex.europa.eu

[2] National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. U.S. Department of Commerce, January 2023. nist.gov

[3] G7 Hiroshima AI Process. Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and International Code of Conduct for Organizations Developing Advanced AI Systems. October 30, 2023. g7hiroshima.go.jp

[4] Bletchley Declaration on AI Safety, November 1, 2023 (signed by 28 countries and the European Union); Seoul Ministerial Statement for Advancing AI Safety, May 21, 2024. UK Department for Science, Innovation and Technology. gov.uk

[5] UNESCO. Recommendation on the Ethics of Artificial Intelligence. Adopted by the General Conference at its 41st session, November 23, 2021. 193 member states. unesdoc.unesco.org

[6] International Energy Agency. Energy Technology Perspectives 2023 and related critical minerals analysis. The DRC supplies approximately 70% of global cobalt production; China approximately 60% of rare earth elements; Indonesia approximately 40% of nickel; Australia and Chile are among the leading lithium mining jurisdictions. iea.org

[7] Perrigo, Billy. “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” TIME, January 18, 2023. time.com. See also: Gray, Mary L. and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt, 2019.

[8] Stanford HAI. AI Index Report 2024. Stanford University, 2024. Chapter on geographic concentration of frontier AI research and development. aiindex.stanford.edu

[9] Prashad, Vijay. The Darker Nations: A People’s History of the Third World. New Press, 2007. On the structural conditions that produced the Non-Aligned Movement and the governance architecture it contested.

[10] African Union. Continental Artificial Intelligence Strategy. AU, 2024. au.int; ASEAN. ASEAN Guide on AI Governance and Ethics. ASEAN, 2024. asean.org

[11] The International AI Safety Report: An Independent Scientific Assessment. January 2026. Current frontier training runs already cost approximately $500 million in computational resources alone; next-generation models are projected to require $1–10 billion. internationalaisafetyreport.org

Full essay and updated sources: okhodjaev.com/essays/the-colonial-pattern/


Oybek Khodjaev: systems transformation analyst, Founder & CEO of INVEXI LLC. Former Deputy Governor (Deputy Khokim) of Samarkand Region. Previously, Treasury Director and Deputy Chairman of the Management Board at JSC UzAgroIndustrialBank. More than thirty years’ experience in economics, banking, finance, and business across Uzbekistan and the CIS.
