AI Governance & Systems Transformation
I’m Oybek Khodjaev — an independent systems transformation analyst examining AI governance. For over thirty years, I have worked across finance, banking, government, and business, including serving as Deputy Governor of Samarkand Region and Deputy Chairman of the Management Board at JSC UzAgroIndustrialBank. I witnessed the collapse of the Soviet Union from the inside and have spent decades observing how large systems transform, lose control, and sometimes fail.
I write about how artificial intelligence is reshaping power dynamics, institutional control, and humanity’s ability to govern what it creates. My perspective is unusual in this field: not a technologist or AI researcher, but a systems analyst who has watched governance frameworks succeed and collapse across three decades of institutional practice in emerging markets.
Latest Essays
Beyond Control: What Happens When the Correction Window Closes
The series ends here — not with a prescription, not with optimism, not with alarm. With a structural conclusion. Three limits established across eleven essays — institutional, sovereign, material — interact multiplicatively to produce one architectural impossibility: the institutional order created for previous technologies is categorically incapable of governing this one. What remains is the governance residual: partial, asymmetric, uncoordinated, insufficient for correction. And a choice — not between control and chaos, but between acknowledged dependency and unacknowledged collapse.
May 4, 2026. Read Essay →
The Institutional Gap: Why No Existing Institution Can Govern AI
The problem is not that governance is missing. The problem is a category mismatch: what real enforcement requires and what existing institutions can produce are not the same thing. Drawing on the 2022 Dok-1 Max pharmaceutical tragedy, Uzbekistan’s unified QR payment architecture of 2026, and the documented pattern of safety team dissolution at frontier AI laboratories, this essay explains why halt authority, independent access, and consequences for misrepresentation are not merely absent from AI governance — they are structurally unavailable within the current institutional order. The gap is constitutive, not correctable.
April 27, 2026. Read Essay →
The Infrastructure Question: Who Controls the Compute Controls the Future
The physical configuration of the compute stack — chip fabs, EUV lithography, high-bandwidth memory, hyperscale data centres, and energy grids — pre-determines the choice space for most jurisdictions before any sovereign decision is taken. Drawing on Uzbekistan’s cotton-textile cluster reform as a structural parallel, and on U.S. semiconductor export controls, TSMC concentration, and IEA energy projections, this essay identifies the limit of matter: the second structural constraint on any attempt at AI governance correction, distinct from and operating alongside the limit of sovereign will established in Essay 9.
April 20, 2026. Read Essay →
The Sovereignty Question: Who Governs the Governors?
Every enforcement architecture in AI governance meets one structural limit — the sovereign will of a state for which the technology has become an element of strategic autonomy. Drawing on the IMF/Uzbekistan reforms of the 1990s, the NPT and SWIFT precedents, the live Anthropic–Pentagon dispute, the Strait of Hormuz crisis, the October 2025 Chinese rare earth licensing cycle, and Uzbekistan’s Resolution No. 109 of March 2026, this essay opens Block 2 of the series by isolating the first structural limit on any attempt at correction: sovereign override, operating now within a window shorter than any prior sovereign conflict.
April 13, 2026. Read Essay →
The Agency Transfer: What Happens When Machines Make Decisions Humans Used to Make
Agency transfer — the migration of consequential decisions from human judgment to automated systems — is not a binary event. It is a gradient with a threshold beyond which reversal becomes operationally non-viable. Drawing on banking automation in the 1990s, the live rollout of electronic prescriptions in Uzbekistan, and the classical automation literature, this essay names the mechanism through which the correction window closes: not through crisis, but through the quiet atrophy of human institutional capacity.
April 6, 2026. Read Essay →
The Correction Window: When Governance Worked — and What Made It Possible
Under what structural conditions has governance historically worked? Three domains — banking after the 2008 crisis, pharmaceutical regulation, nuclear verification — reveal three elements that made enforcement real: consequences for misrepresentation, halt authority, and independent verification with access. AI governance today possesses none of them in operational form. The correction window is closing faster than in any previous domain.
March 30, 2026. Read Essay →
The Pattern Closes: When Governance Fails in Real Time
The mechanisms traced across the first five essays are no longer theoretical. In March 2026, they became operational simultaneously in two domains: geopolitics and the relationship between AI companies and state power. The correction window is narrowing — and in AI, it changes category.
March 24, 2026. Read Essay →
The Colonial Pattern: Whoever Writes the Rules Controls the Technology
The institutions shaping AI governance are reproducing a pattern older than artificial intelligence itself: whoever writes the rules controls the technology. Drawing on direct experience of IMF and World Bank conditionality in 1990s Uzbekistan, this essay traces the structural mechanisms — rule-making concentration, extraction without representation, epistemic imposition — that make AI governance more difficult to correct than any previous cycle of internationally imposed standards.
March 10, 2026. Read Essay →
The Myth of Alignment: Why the AI Industry’s Central Promise Is a Question of Power, Not Technology
The AI industry’s central promise — that advanced AI systems can be reliably aligned to human values — misframes the problem it claims to solve. Alignment is not primarily a technical challenge. It is a question of power: who defines the values, who enforces them, and who bears the consequences. The essay draws on incentive misalignment in 1990s banking and on the structural exclusion of 6.4 billion people from the value choices embedded in globally deployed systems.
March 3, 2026. Read Essay →
The Regulator’s Dilemma: Why You Cannot Govern What You Cannot Keep Up With
Every regulator facing a fast-moving technology confronts the same impossible constraint: understand it, move quickly, maintain legitimacy. Pick two. AI governance is attempting all three — and achieving none. Drawing on the 1990s Uzbekistan capital markets crisis, this essay traces the regulator’s trilemma and why it has no clean exit.
February 23, 2026. Read Essay →
All essays are also published on Substack with full email delivery.
About: Learn more about my background
Contact: Get in touch