Essays

These essays explore artificial intelligence not as a technological problem but as a governance challenge. They examine how AI reshapes power, institutional control, and humanity’s capacity to govern systems that increasingly escape human oversight. The analysis sits at the intersection of AI governance, institutional control, and systemic transformation. All essays are published on Substack with full email delivery.

Published

(10) The Infrastructure Question: Who Controls the Compute Controls the Future

The physical configuration of the compute stack — chip fabs, EUV lithography, high-bandwidth memory, hyperscale data centers, and energy grids — predetermines the choice space for most jurisdictions before any sovereign decision is taken. Drawing on Uzbekistan’s cotton-textile cluster reform as a structural parallel, and on U.S. semiconductor export controls, TSMC concentration, and IEA energy projections, this essay identifies the limit of matter: the second structural constraint on any attempt at AI governance correction, distinct from and operating alongside the limit of sovereign will established in Essay 9. Any governance framework that ignores either will measure its own intent rather than its effect.

April 20, 2026. Read Essay →


(9) The Sovereignty Question: Who Governs the Governors?

Every enforcement architecture in AI governance meets one structural limit — the sovereign will of a state for which the technology has become an element of strategic autonomy. The limit is neither normative nor technical, and in the AI domain it compresses into a window shorter than any prior sovereign conflict. Drawing on the IMF/Uzbekistan reforms of the 1990s, the NPT and SWIFT precedents, the live Anthropic–Pentagon dispute, the Strait of Hormuz crisis of March 2026, the October 2025 Chinese rare earth licensing cycle and its November 2025 partial suspension, and Uzbekistan’s Resolution No. 109 of March 2026, this essay opens Block 2 of the series by isolating the first structural limit on any attempt at correction. It maps how sovereign override reproduces across domains, and why the AI domain compresses the cycle of override and adjustment below the interval in which any historical enforcement coalition has formed and acted.

April 13, 2026. Read Essay →


(8) The Agency Transfer: What Happens When Machines Make Decisions Humans Used to Make

Agency transfer — the migration of consequential decisions from human judgment to automated systems — is not a binary event. It is a gradient with a threshold beyond which reversal becomes operationally non-viable on the timelines that matter. Drawing on banking automation in 1990s Uzbekistan, the live rollout of electronic prescriptions (DMED), the classical automation literature (Bainbridge, Parasuraman), and the Anthropic–Pentagon episode, this essay names the mechanism through which the correction window closes: not through crisis, but through the quiet atrophy of human institutional capacity. It proposes an operational instrument — the agency transfer audit — and identifies three dimensions of reversibility that current governance frameworks do not measure.

April 6, 2026. Read Essay →


(7) The Correction Window: When Governance Worked — and What Made It Possible

Under what structural conditions has governance historically worked? Three domains — banking after the 2008 global financial crisis, pharmaceutical regulation, and nuclear verification — reveal three elements that made enforcement real rather than performative: consequences for misrepresentation, halt authority, and independent verification with access. Drawing on the 2003 Uzbekistan correspondent account reform, Basel III stress-testing, FDA approval gates, and the IAEA inspection regime, this essay examines when correction was still possible — and why that window is closing faster for AI than for any previous domain.

March 30, 2026. Read Essay →


(6) The Pattern Closes: When Governance Fails in Real Time

The mechanisms traced across the first five essays are no longer theoretical. In March 2026, they became operational simultaneously in two domains: geopolitics and the relationship between AI companies and state power. The Joe Kent NCTC resignation and the Anthropic–Pentagon dispute demonstrate the same structural triad — performative control, information asymmetry, incentive misalignment — operating in real time. The correction window is narrowing. In AI, it changes category.

March 23, 2026. Read Essay →


(5) The Colonial Pattern: Whoever Writes the Rules Controls the Technology

The institutions shaping AI governance are reproducing a pattern older than artificial intelligence itself: whoever writes the rules controls the technology. Drawing on direct experience of IMF and World Bank conditionality in 1990s Uzbekistan, this essay traces the structural mechanisms — rule-making concentration, extraction without representation, epistemic imposition — that make AI governance more difficult to correct than any previous cycle of internationally imposed standards.

March 10, 2026. Read Essay →


(4) The Myth of Alignment: Why the AI Industry’s Central Promise Is a Question of Power, Not Technology

The AI industry’s central promise — that advanced AI systems can be reliably aligned to human values — misframes the problem it claims to solve. Alignment is not primarily a technical challenge. It is a question of power: who defines the values, who enforces them, and who bears the consequences when the gap between declared alignment and operational behavior can no longer be contained. Drawing on incentive misalignment in 1990s Uzbekistan banking, the OpenAI board crisis, and the structural exclusion of 6.4 billion people from value choices, this essay examines three levels of the alignment problem — and why only one is currently being addressed.

March 3, 2026. Read Essay →


(3) The Regulator’s Dilemma: Why You Cannot Govern What You Cannot Keep Up With

Every regulator facing a fast-moving technology confronts the same impossible constraint: understand it, move quickly, maintain legitimacy. Pick two. AI governance is attempting all three — and achieving none. Drawing on the 1990s Uzbekistan capital markets crisis and the structural limits of the EU AI Act, this essay traces the regulator’s trilemma and why it has no clean exit.

February 23, 2026. Read Essay →


(2) The Transparency Trap: Why More Data Does Not Mean More Accountability in AI Governance

AI governance instruments are rebuilding a familiar architecture: disclosure without enforceable accountability. Drawing on direct experience in Uzbekistan’s banking sector and three decades of institutional observation, this essay identifies the structural mechanisms — window dressing, regulatory capture, speed asymmetry — that make transparency a trap rather than a solution.

February 17, 2026. Read Essay →


(1) The Illusion of Control: What the Fall of the USSR Teaches Us About AI Governance

I witnessed institutions designed to last forever disintegrate in months. Now I examine how AI governance debates repeat familiar patterns of failed institutional control. Three structural failure mechanisms — performative control, incentive misalignment, information asymmetry — traced from the 1991 Soviet collapse through the 2008 financial crisis to today’s AI governance frameworks.

February 12, 2026. Read Essay →


Coming Soon

(11) The Institutional Gap: Why No Existing Institution Can Govern AI

(12) Beyond Control: What Happens When the Correction Window Closes


Subscribe to receive weekly essays: okhodjaev.substack.com