Beyond Control: What Happens When the Correction Window Closes
Essay 12 of 12 · Beyond Control: Theory of Limits of AI Governance
May 4, 2026 · okhodjaev.com
Contents
- I. Three Limits, One Impossibility
- II. The Gradient of Irreversibility
- III. What Closure Looks Like
- IV. The Governance Residual
- V. The Asymmetry the Mainstream Governance Discourse Does Not See
- Signals to Watch
- VI. What the Series Has Established
- Sources and Notes
August 1991. I was in Tashkent when the system collapsed. Not on the day it fell — but in the months before, when it was still formally intact. The sessions continued. Reports were filed. Committees convened. From the outside, and often from inside, the machinery of governance appeared fully operational.
The question that has stayed with me is not why the system fell. It is: at what point had rollback become structurally impossible? Not the date of the coup, not the declaration of independence — but the earlier, quieter moment when the architecture of reversibility had already been dismantled while the performance of control continued without interruption.
The answer, I now believe, lies somewhere in the mid-to-late 1980s — not a single day but a threshold crossed when partial liberalization destroyed the command system’s internal coherence faster than any alternative could replace it. By then, the institutional memory of market coordination had been systematically extinguished. Human capacity to operate outside the plan had atrophied through decades of disuse. The economists who understood this — Aganbegyan, Gaidar, the reform planners whose internal reports circulated years before political acknowledgment — saw it earlier than the center charged with managing it. Those closest to the mechanism knew first. Those holding formal authority knew last.
This series began with that observation. It ends here, with its structural conclusion.
I. Three Limits, One Impossibility
The preceding eleven essays have established three structural limits on AI governance, approached from three angles. Essay 9 identified the sovereign limit: any enforcement architecture faces states for which AI has become an element of strategic autonomy, making multilateral correction non-viable at operational speed. Essay 10 identified the material limit: the physical configuration of the compute stack — semiconductor fabrication, EUV lithography, energy infrastructure, data center geography — predetermines the strategic choices available to most jurisdictions before any governance dialogue begins. Essay 11 identified the institutional limit: the category mismatch between existing governance architectures and the three elements of real enforcement — halt authority, independent access, and consequences for misrepresentation.
These are not three separate problems awaiting three separate solutions. They are one architectural impossibility, visible from three directions.
Their interaction is not additive but multiplicative. When all three limits operate simultaneously, the corrective capacity of any governance structure does not decline gradually — it collapses toward zero. Material dependence constrains sovereign choice before deliberation begins. Sovereign override nullifies institutional instruments at the moment of geopolitical stress. Institutional category mismatch prevents correction even where material capacity and political will nominally exist. Each limit annihilates the corrective effect of the others.
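The difference can be made precise with an illustrative formalization; the notation below is this essay's shorthand for the argument, not a result any of the preceding essays derives. Write c_s, c_m, and c_i for the corrective capacity that survives the sovereign, material, and institutional limits respectively, each scaled to the interval [0, 1]:

```latex
% Illustrative only: c_s, c_m, c_i are shorthand for the corrective
% capacity surviving each limit, scaled to [0, 1].
\text{additive reading: } C = \tfrac{1}{3}\left(c_s + c_m + c_i\right)
\qquad
\text{multiplicative reading: } C = c_s \, c_m \, c_i \;\le\; \min(c_s,\, c_m,\, c_i)
```

Under the additive reading, strength on two limits compensates for weakness on the third. Under the multiplicative reading, a single factor near zero drives total corrective capacity to zero regardless of how strong the other two are: that is the precise sense in which each limit annihilates the corrective effect of the others.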
This is the limit theorem this series has been building toward: not that AI is ungovernable in principle, but that the institutional order created for previous technologies is structurally incapable of governing this one. The failure is architectural, not incidental.
II. The Gradient of Irreversibility
The question of when rollback becomes structurally impossible recurs across the three historical precedents this series has used as reference points.
In nuclear weapons: not in August 1945, when the bomb existed in one jurisdiction, but by 1946, when the failure of international control had already become structurally determined. The Baruch Plan failed because its precondition — that major powers would surrender sovereign override on a matter of existential strategic importance — could never be met. The scientists understood the irreversibility of the knowledge in 1945. The policymakers acknowledged the irreversibility of the strategic configuration by 1946. Rollback was never a technical question. It was always an institutional one.
In the Soviet economy: not in 1991, but in the mid-to-late 1980s, when partial reform degraded command coherence faster than alternatives could emerge. The decisive indicator was not political but cognitive: institutional memory of non-command coordination had been extinguished — not because people had forgotten how to operate outside the plan, but because the institutional structures that sustained that capacity — training programs, documentation systems, career incentives — had been systematically dismantled. When the structure collapsed, the human competence required to operate outside it had already atrophied beyond recovery. There was no rollback, not because the decision was never taken, but because the capacity to execute it no longer existed.
In the internet: not at its invention, but in 1990–1991, when commercialization and distributed adoption created network effects that made closure self-defeating. The actors who understood first were those operating the governance and infrastructure layers, watching the agency transfer happen in real time — not the public, not the regulators who would later attempt to govern a network already operating beyond their effective reach.
The pattern across all three cases is identical to what Essay 8 documented in the agency transfer mechanism: irreversibility is not an event. It is a gradient, crossed when the atrophy of human and institutional capacity to operate without the system coincides with the hardening of path dependencies that make switching costs politically non-viable.
In AI, this gradient is compressing at velocities that none of the preceding cases approached. The interval between deployment and deep institutional integration — which measured in decades for nuclear infrastructure, in years for internet commercialization — now measures in months. The human competence required for rollback atrophies faster than institutional memory can be rebuilt within any political cycle.
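The gradient admits a toy formalization. The sketch below is illustrative only: the decay half-life, cost growth rate, and both thresholds are hypothetical placeholders, not estimates drawn from any of the three cases. What it encodes is the structure of the claim: closure holds from the first point at which either fallback competence has atrophied below what rollback requires, or switching costs have hardened past political viability. Compressing the time axis, as the AI case does, moves that point earlier without changing the logic.

```python
# Toy model of the irreversibility gradient. All parameters are
# hypothetical placeholders, not empirical estimates.
import math

def competence(t: float, half_life: float = 2.0) -> float:
    """Residual capacity to operate without the system, decaying
    exponentially with years of disuse."""
    return math.exp(-math.log(2) * t / half_life)

def switching_cost(t: float, growth: float = 0.8) -> float:
    """Path-dependency hardening: cost of reversal rises with
    integration depth (normalized so 1.0 = prohibitive)."""
    return 1 - math.exp(-growth * t)

def window_closed(t: float, min_competence: float = 0.5,
                  max_cost: float = 0.7) -> bool:
    """Closure is a state, not an event: it holds once competence has
    fallen below what rollback requires, or reversal costs exceed what
    any political cycle will bear."""
    return competence(t) < min_competence or switching_cost(t) > max_cost

# Sample the gradient yearly and report the first year closure holds.
for year in range(10):
    if window_closed(year):
        print(f"correction window closed by year {year}")
        break
```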
III. What Closure Looks Like
If irreversibility is a gradient, closure is not a single moment but a recognizable state — observable through specific institutional behaviors that follow predictably once the threshold has been crossed.
Four observable functions replace the enforcement function that Essay 7 identified as the condition of real governance. Institutions shift from maintaining corrective capacity to legitimating deployment — making continuation politically sustainable through documentation that resembles oversight without constituting it. They shift from preventing harm to distributing its consequences: investigations that end in recommendations, procedures that signal action without altering the underlying architecture. They maintain the appearance of reversibility — continuing to speak of exit options, license revocation, and provider switching, even as operational dependencies have already rendered these options non-viable. And they manage the narrative of control — publicly sustaining the claim that the correction window remains open when operational reality has already moved elsewhere.
None of these four functions requires halt authority, independent access, or consequences for misrepresentation. They require only that the documentation layer remain dense and the public account remain coherent.
This is not dishonesty in the ordinary sense. It is structural hypocrisy without individual hypocrites — the condition in which every actor behaves rationally within an incentive architecture that collectively produces misrepresentation. The Soviet planning apparatus did not conspire to misrepresent its condition in its final decade. It adapted to the only role it could still perform: producing the documentation of a system whose operational reality it could no longer shape.
The Anthropic–Pentagon dispute, documented in Essay 6, showed how quickly declared red lines become negotiable once operational dependence has formed. The DMED electronic prescription system, documented in Essay 8, shows the same dynamic in public health administration: the institutional memory of manual verification decays through disuse; by the time a failure demands fallback competence, that competence is no longer operationally available. These are not exceptional cases. They are the pattern under acceleration.
The testimony of those closest to the mechanism confirms the same dynamic. In December 2025, AI safety researcher Stuart Russell recounted in a widely viewed public interview a private conversation with the CEO of a leading AI laboratory, who described a Chernobyl-scale incident as the “best-case scenario” for triggering meaningful regulation — because governments, having been approached directly, had already declined to act. [11] The CEO continued building. The governance apparatus continued producing documentation. The narrative of control continued. This is not hypocrisy. It is the institutional logic of a closed window: those who understand the situation most clearly often have the fewest structural options to alter its trajectory.
IV. The Governance Residual
If the correction window has closed, what governance capacity actually persists? I call what remains the governance residual: the set of instruments that retain operational influence over AI deployment after formal enforcement capacity has been lost. The governance residual does not constitute control. It constitutes distributed, partial, uncoordinated pressure — insufficient to reopen the window, but capable of slowing degradation, localizing consequences, and preserving pockets of institutional memory long enough to matter.
The governance residual comprises four actor classes.
Institutions with direct financial exposure — insurers pricing tail risk, investors assessing agency transfer depth, procurement officers conditioning access on demonstrated reversal capacity — operate under incentives that do not depend on the narrative of control. When their behavior diverges from the public assurances of AI developers — when insurance terms harden while public statements remain unchanged — this divergence is the most reliable signal that the window has closed. Markets do not earn returns on optimism about governance they cannot verify.
Courts — not as sources of systemic solutions, but as arenas where formal and operational truth can be placed in proximity. Litigation does not govern. But it compels disclosure, forces the reconciliation of contractual safety commitments with operational practice, and produces a judicial record that voluntary transparency never generates. When such cases become routine rather than exceptional, they mark a closed window: the only remaining correction mechanism is point-specific and post-hoc.
Agencies that govern adjacent infrastructure — energy, communications, financial messaging, export control — without governing AI models directly. Data residency requirements, energy consumption constraints, semiconductor licensing: these instruments do not require AI expertise, but they constrain the material conditions under which deployment occurs.
States that do not produce frontier compute but can manage their own dependencies — through provider diversification, interoperability requirements, procurement conditions that price agency transfer depth. They cannot control the stack. But they can determine how dependence on the stack is embedded in their own institutions, how deep the agency transfer goes before it becomes irreversible, how much human competence is maintained alongside automated systems.
The governance residual is not enforcement. It is the architecture of last resort — the set of levers that remain when the primary control architecture has already been structurally superseded. The distribution of this residual, however, is not uniform. Its accessibility follows the same structural asymmetry that governed rule-making authority and compute concentration — and that asymmetry is the subject of the section that follows.
V. The Asymmetry the Mainstream Governance Discourse Does Not See
For jurisdictions occupying what this series has called the structural position of the Global South — defined not by geography but by four conditions: consumption without design, subordination without representation, responsibility without control, sovereignty without material power — the correction window was never as wide as it appeared for those who designed the systems now being deployed.
The AI systems deployed in healthcare, credit scoring, public administration, and judicial support across these jurisdictions were trained on non-local value systems, governed by frameworks written in distant regulatory environments, and operate on infrastructure that cannot be independently audited, modified, or replaced by the jurisdictions hosting them. When the window closes, it closes asymmetrically: jurisdictions with the least leverage lose corrective capacity earliest and most completely. The systems are inherited; the governance residual that arrives with them is borrowed.
But there is a structural inversion here that the mainstream governance discourse consistently fails to register. Jurisdictions with fewer institutional layers insulating declared control from operational reality see the mechanism more clearly, because the distance between governance documentation and operational truth is shorter. The centralized QR payment architecture documented in Essay 11 created the first real-time behavioral dataset covering an entire national economy through a single regulatory instrument, without triggering a governance response proportionate to the concentration risk it created. The same logic is now being replicated in AI deployment across multiple domains: centralized model access, standardized APIs, and the architecture of apparent local control over systems whose core parameters remain externally determined.
The analytical advantage of structural peripherality is not comfort. It is clarity. A ministry that acknowledges it cannot independently audit the AI system it deploys will, at minimum, invest in shadow manual processes and provider diversification. A ministry that maintains the assertion of full oversight will do neither — and will discover the operational gap only when the system fails under conditions the documentation did not anticipate. Jurisdictions without the institutional buffers that allow central actors to maintain the appearance of control for longer are forced to confront that gap sooner. That confrontation, undertaken without illusion, is the only honest starting point available to those who occupy the structural periphery.
Signals to Watch
The closure of the correction window does not announce itself. It is observable, however, through specific indicators that diverge from the public account:
Whether major insurers begin pricing AI-dependency depth as a distinct underwriting factor — separate from general cyber risk — while public safety assurances from developers remain unchanged. As of this writing, that divergence is not yet established in underwriting practice; when it appears, it will be the most direct market signal that corrective capacity has degraded beyond what governance documentation reflects.
Whether AI-related litigation routinely compels the disclosure of internal safety evaluations, producing judicial records that conflict with voluntary transparency materials. When courts become the primary mechanism of factual discovery about AI systems, the correction window has already closed through every other channel.
Whether any jurisdiction outside the compute-core states formally requires demonstrated rollback capacity — not declared oversight compliance — as a condition of procurement. The first such requirement will mark the first institutional acknowledgment that the governance residual, not formal governance, is the operative condition.
Whether mandatory manual-competence exercises appear in any AI-critical domain before dependence becomes total. Aviation still requires them. The domains that do not will discover the absence of that competence precisely when it is most needed.
VI. What the Series Has Established
Twelve essays, approaching the problem from twelve angles, have established one structural finding: the institutional order created for governing previous technologies is categorically incapable of governing AI — not because the people within it are incompetent or dishonest, but because the three limits interact multiplicatively to produce a governance environment in which the conditions for real enforcement cannot be simultaneously satisfied.
This is a limit theorem, not a policy recommendation. The conclusion is not that governance should be abandoned, or that effort is pointless, or that catastrophe is inevitable. The conclusion is that the frame within which governance is currently being pursued — the assumption that incremental improvement of existing institutional architectures will eventually produce effective control — is structurally inadequate to the problem it is attempting to solve.
What follows from a limit theorem is not prescription but repositioning. Those designing governance instruments need to start from the governance residual — from what actually retains influence — rather than from ideal architectures that the three limits will prevent from becoming operational. Those managing institutions need to conduct honest assessments of agency transfer depth: how much of the decision-making that was human two years ago has already been transferred, and whether the human competence required for rollback still exists or has been allowed to atrophy. Those representing jurisdictions in the structural position of the Global South need to stop treating governance frameworks designed at the center as templates and start treating them as evidence of what the center is willing to commit to — which is different from what it is willing to enforce.
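What would an honest assessment of agency transfer depth actually record? A minimal sketch, assuming nothing about any particular institution; every field name, threshold, and figure below is hypothetical:

```python
# Hypothetical assessment record for agency transfer depth. Field
# names, thresholds, and the example figures are illustrative only.
from dataclasses import dataclass

@dataclass
class DecisionClass:
    name: str
    human_share_then: float        # share of decisions made by humans two years ago
    human_share_now: float         # share still made by humans today
    months_since_manual_exercise: int  # last time fallback was actually practiced

    @property
    def transfer_depth(self) -> float:
        """Fraction of formerly human decision-making already transferred."""
        if self.human_share_then == 0:
            return 0.0
        return max(0.0, (self.human_share_then - self.human_share_now)
                        / self.human_share_then)

    def rollback_competence_at_risk(self, max_months: int = 12) -> bool:
        """Flags deep transfer combined with unpracticed fallback:
        the configuration in which rollback capacity silently atrophies."""
        return self.transfer_depth > 0.5 and self.months_since_manual_exercise > max_months

# Hypothetical example: one decision class audited two years after deployment.
verification = DecisionClass("prescription verification", 0.9, 0.2, 18)
print(f"{verification.name}: depth={verification.transfer_depth:.2f}, "
      f"rollback at risk={verification.rollback_competence_at_risk()}")
```

The design point is granularity: transfer depth and fallback competence are properties of individual decision classes, so an honest assessment has to be taken class by class rather than reported as a single institutional score.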
The series began in a hall in Tashkent, asking when rollback had become impossible while governance continued its performance. It ends with the structural answer: governance over a complex system exists only within the interval between the system’s emergence and the moment at which the operational environment adapts to the system’s logic faster than institutions can adapt to the operational environment. In AI, that interval has closed — not uniformly, not permanently in every domain, but structurally, at the level of any architecture capable of governing the system as a whole.
What remains is the governance residual: partial, asymmetric, uncoordinated, insufficient for correction. What remains is also a choice — not between control and chaos, but between acknowledged dependency and unacknowledged collapse.
Those who continue to act as though the correction window remains open are not simply mistaken. They are consuming the residual capacity that still exists — the insurance incentives, the litigation channels, the adjacent regulatory levers — in the service of a narrative that makes correction appear unnecessary until the moment it becomes impossible.
The window has closed. The pattern is complete. What we build now will not be governance of AI. It will be governance of our own relationship to systems that no longer require our permission to continue.
Sources and Notes
[1] The three elements of real enforcement — halt authority, independent access, consequences for misrepresentation: Essay 7, “The Correction Window,” March 30, 2026. okhodjaev.com
[2] Agency transfer mechanism and atrophy of human capacity: Essay 8, “The Agency Transfer,” April 6, 2026. okhodjaev.com
[3] The sovereign limit: Essay 9, “The Sovereignty Question.” okhodjaev.com
[4] The material limit (compute concentration): Essay 10, “The Infrastructure Question.” okhodjaev.com
[5] The institutional limit (category mismatch): Essay 11, “The Institutional Gap,” April 27, 2026. okhodjaev.com
[6] Anthropic–Pentagon dispute: Essays 6 and 11. Reuters timeline, February–March 2026.
[7] DMED electronic prescription system, Uzbekistan: Government of Uzbekistan announcements, December 2025–April 2026. Referenced in Essay 8.
[8] QR payment centralization, Central Bank of Uzbekistan (CBU Resolution No. 3817, March 2026): Essay 11.
[9] Historical irreversibility thresholds — Soviet economy, nuclear weapons, internet 1990s: Author’s synthesis. Supporting sources: Aganbegyan et al., Novosibirsk Report (1983); U.S. Interim Committee records, 1945–1946; NSF/DARPA privatization documentation, 1990–1991.
[10] Author’s direct professional experience: UzAgroIndustrialBank treasury operations (1990–2001); Samarkand Region public administration (2019–2022). Cross-referenced in Essays 2, 7, 8, 11.
[11] Stuart Russell, interview with Steven Bartlett, The Diary of A CEO, December 4, 2025 (2.6 million views). Russell recounts a private conversation with the CEO of a leading AI laboratory regarding the conditions under which meaningful regulation might be triggered. youtube.com/@TheDiaryOfACEO
Full series and references: okhodjaev.com/essays/
Oybek Khodjaev is a systems transformation analyst with thirty-five years of experience across economics, banking, business, and government in Uzbekistan and the CIS, including service as Deputy Governor (Deputy Khokim) of Samarkand Region (2019–2022). He is the founder and CEO of INVEXI LLC. His work on AI governance and institutional risk is published at okhodjaev.com.