The Correction Window: When Governance Worked — and What Made It Possible
Contents
- I. The Global Financial Crisis: Partial Correction Under Pressure
- II. Pharmaceutical Regulation: Halt Authority
- III. Nuclear Verification: Independent Access
- IV. Why AI Governance Has None of These Three
- Implications
- Signals to Watch
- The Questions That Remain Open
- Sources & Notes
Until September 2003, every branch of a multi-branch commercial bank in Uzbekistan maintained its own correspondent account at the Central Bank. Each branch managed its own liquidity independently. A bank could present consolidated reports showing adequate resources while individual branches operated with effectively empty accounts. The reporting architecture was intact. The capacity to execute payments was not.
In September 2003, all multi-branch banks completed the transition to a single correspondent account per institution. By May 2004, operations across the entire banking system were reflected in a unified Central Bank balance in real time [1].
What changed was not the rules. The rules had existed before. What changed was the infrastructure — the architecture that made the gap between reported and actual liquidity structurally difficult to maintain. Control arrived not when penalties were introduced, but when the technical architecture changed the behavior of participants: when the cost of misrepresentation exceeded the cost of correction.
The first six essays of this series examined how governance fails. This essay asks the opposite question: under what structural conditions has governance historically worked — even partially, even under pressure, even after catastrophic failure first?
I define the correction window as a measurable interval: the time between the moment a system’s dysfunction becomes visible to those with authority to act and the moment its consequences become irreversible. In each of the three cases below, the correction window was open long enough for meaningful reform. In AI governance, it is closing faster than in any previous domain.
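The definition can be stated compactly. The notation below is my own shorthand, not the essay's: it makes explicit the condition, used implicitly throughout, that governance succeeds only when the window is wider than the time needed to build enforcement architecture.

```latex
% Correction window W for a system S:
%   t_vis(S) — moment dysfunction becomes visible to actors with authority to act
%   t_irr(S) — moment the consequences become irreversible
W(S) = t_{\mathrm{irr}}(S) - t_{\mathrm{vis}}(S)

% Implicit success condition: the enforcement architecture, which takes
% time T_build to construct, must fit inside the open window:
T_{\mathrm{build}}(S) < W(S)
```

In the three historical cases that follow, the condition held; the essay's claim about AI governance is that it may not.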
I. The Global Financial Crisis: Partial Correction Under Pressure
In 2008, the global financial system demonstrated what happens when risk management operates as a performance for regulators rather than a genuine operational constraint [2]. Rating agencies assessed mortgage-backed securities against defined criteria. The criteria were met. The underlying risk was not captured by the criteria.
The correction window framework reveals why the crisis happened and why partial correction was possible.
When did the dysfunction become visible? Not in 2008. There are grounds to believe that risk professionals at major institutions understood the scale of exposure well before the crisis made it public. Some of that information existed inside the system long before it became visible to those with authority to act — regulators, legislators, the public. The architecture for converting that information into correction did not.
Who had access and authority to act? After the collapse, legislatures did. The Dodd-Frank Act (2010) and the Basel III framework (2010–2019) introduced stress-testing regimes that required banks to demonstrate capital adequacy under adverse scenarios — not merely report it [3][4]. For the first time, regulators could compare a bank’s declared risk position against an independently modeled stress scenario. The gap between declared and actual exposure became financially and legally consequential.
What converted visibility into correction? Not transparency alone. Banks had been producing detailed risk disclosures before the crisis. What changed was the mechanism: consequences for misrepresentation. When the gap between a bank’s reported capital position and its stress-test results became grounds for regulatory intervention — restrictions on dividends, forced capital raises, public disclosure of shortfalls — the cost of maintaining the gap exceeded the cost of closing it. The architecture changed behavior.
Why was the window still open? Because the financial system, however damaged, operated on human timescales. Legislation took two years. Implementation took a decade. The correction was partial and asymmetrical — it reduced the space for performative compliance in core capital metrics, but left room for strategic disclosure practices in less visible corners of the balance sheet. Yet the window remained open because the underlying transactions were reversible in principle: loans could be restructured, capital could be raised, institutions could be wound down under supervised resolution.
The reform was partial. It did not eliminate performative compliance. But it demonstrated that the correction window can be held open — through consequences for misrepresentation that make the gap between declared and actual risk profiles financially expensive to maintain.
Uzbekistan is currently implementing Basel III standards through a phased transition: updated capital adequacy norms including CET1, AT1, and Tier 2 requirements, conservation and countercyclical buffers, liquidity coverage ratios, and IFRS-aligned risk-weighted asset methodology [5]. The enforcement architecture born from the 2008 crisis is now reaching jurisdictions far from its origin. This is what successful — if partial — correction looks like at institutional scale.
II. Pharmaceutical Regulation: Halt Authority
The pharmaceutical industry operates under one of the few mature regulatory regimes with routine pre-deployment gating authority. The US Food and Drug Administration, and equivalent bodies in other jurisdictions, possess a structural power that no AI regulator currently holds: the ability to physically prevent a product from reaching the market until independent evaluation is complete [6].
When did the dysfunction become visible? Early and repeatedly. Thalidomide (1950s–60s) demonstrated what happens when a pharmaceutical product reaches the market without adequate independent evaluation of its risk profile. The consequences were irreversible — not merely financially, but physiologically, across generations.
Who had access and authority to act? In the United States, Dr. Frances Kelsey at the FDA refused to approve thalidomide for the American market, citing insufficient safety data. One individual, operating within a regulatory structure that granted halt authority, prevented a catastrophe that had already unfolded in dozens of other countries. The authority existed. The architecture supported its exercise [6].
What converted visibility into correction? The creation and reinforcement of mandatory pre-market approval gates — institutional checkpoints that physically prevent deployment until an independent third party has evaluated the evidence. Not a recommendation. Not a voluntary commitment. A structural impossibility of proceeding without sign-off. The registration and approval of a new original pharmaceutical product take years and proceed through distinct phases: documentary and pharmacopoeial review, laboratory analysis, preclinical studies, and multiple stages of clinical trials — each functioning as a correction window in its own right.
Why was the window still open? Because pharmaceutical deployment operates on timescales measured in years, not months. Each phase of development creates a natural pause point at which halt authority can be exercised. The correction window is built into the deployment architecture by design.
The trade-off is real on both sides. Delayed approvals cost lives when effective treatments reach patients later than they could. But deployment without independent verification costs lives when harmful products reach patients before their risk profiles are understood — thalidomide being the most documented, but far from the only, case. The multi-phase approval architecture exists precisely to hold both risks in balance: it slows deployment to prevent irreversible harm while creating structured pathways for effective treatments to reach the market. The correction window exists because the architecture creates it. Remove the architecture, and the window closes.
III. Nuclear Verification: Independent Access
The International Atomic Energy Agency operates the most developed international verification regime in existence — one in which sovereign states have accepted routine, physically invasive inspection of their most sensitive facilities by an independent third party with defined legal standing [7].
When did the dysfunction become visible? At the most extreme possible cost. Hiroshima and Nagasaki demonstrated what uncontrolled nuclear capability produces. The dysfunction was not theoretical. It was measured in hundreds of thousands of deaths.
Who had access and authority to act? Initially, no one. The IAEA was established in 1957 — twelve years after the first use of nuclear weapons — not because states wanted external inspection, but because the alternative had become existentially intolerable. The Non-Proliferation Treaty (1968) and the subsequent safeguards system created a framework in which inspectors gained physical access to declared nuclear facilities, with the authority to verify that materials were not being diverted to weapons programs [7].
What converted visibility into correction? Not trust. Not voluntary commitment. The mechanism was independent verification with access — routine, non-voluntary, physically present inspection by a third party with defined legal standing and the technical capacity to detect diversion. States did not agree to this because they valued transparency. They agreed because the cost of its absence — unchecked proliferation — was higher than the sovereignty cost of inspection.
Why was the window still open? Because nuclear weapons programs operate on timescales measured in years and decades. Enrichment facilities take years to build. Weapons-grade material accumulates slowly. The correction window was wide enough for institutional architecture to be constructed before the point of irreversibility.
States accepted invasive verification only after the price of its absence became existential. For AI frontier laboratories, no equivalent threshold has been crossed. The correction window is narrower, the deployment cycle faster, and the institutional architecture for independent verification with routine access to model internals does not exist. The window compresses with each integration of frontier AI into critical infrastructure.
IV. Why AI Governance Has None of These Three
Three domains. Three structural elements. Each emerged only after catastrophic failure made the cost of absence undeniable. Each required decades to build. Each remains imperfect.
AI governance today possesses none of them in operational form.
Consequences for misrepresentation — no major jurisdiction has established a liability framework that makes the gap between declared and actual AI safety profiles financially or legally consequential for frontier developers. Voluntary commitments without liability exposure are structurally equivalent to pre-2008 banking capital requirements without supervisory enforcement [8].
Halt authority — no regulator holds routine pre-deployment gating power over frontier AI systems. The EU AI Act provides for post-market enforcement and conformity assessment, but not for mandatory pre-deployment halt authority equivalent to pharmaceutical approval gates [9]. No regulatory body can currently prevent a frontier model from being deployed.
Independent verification with access — no AI Safety Institute has routine, non-voluntary access to model weights, training processes, and deployment configurations across all frontier laboratories simultaneously, with the legal authority to compel disclosure [10]. Access remains limited, episodic, and dependent on laboratory cooperation.
All three elements are absent. This is not an oversight. It is a structural condition. Each deployment cycle compresses the correction window further — not merely because the technology moves fast, but because agency transfer erodes the institutional capacity to reverse it. The human expertise, organizational processes, and institutional memory that existed before integration atrophy in proportion to the depth of dependence.
The correction window for AI governance is not merely narrower than in banking, pharmaceuticals, or nuclear verification. It is categorically different. In each previous domain, the deployment cycle was slow enough for institutional architecture to be constructed before consequences became irreversible. In AI, the deployment cycle is measured in months, each deployment creates dependencies that make subsequent rollback more expensive than the last, and the window does not narrow linearly — it compresses with each integration into critical infrastructure.
Implications
Governance works not when rules are written, but when the underlying architecture makes dysfunction visible, halt enforceable, and misrepresentation costly. In every domain examined here, this required catastrophic failure before institutional will materialized. Banking required a global financial crisis. Pharmaceutical regulation required thalidomide. Nuclear verification required Hiroshima.
The correction window for AI is compressing in a way that has no precedent in previous governance domains. Each deployment of a frontier AI system into critical infrastructure — finance, healthcare, logistics, public administration — creates operational dependencies. Each dependency increases the cost of rollback. The window does not narrow gradually. It narrows with each integration, because the institutional capacity to reverse course erodes proportionally to the depth of dependence.
Real enforcement in previous domains came not from the institutions that produced governance frameworks, but from actors with direct financial exposure — insurers, institutional investors, procurement officers. In AI governance, no such actor yet possesses the combination of technical access, legal authority, and market reach required to impose enforcement at the necessary scale.
Signals to Watch
Whether mandatory pre-deployment independent audit for frontier AI systems appears in any major jurisdiction. The transition from voluntary to mandatory is the structural threshold separating disclosure from enforcement.
Whether a commercial AI liability insurance market develops with standardized risk categories and pricing that diverges from companies’ public safety communications. When insurers begin systematically pricing AI tail risk, they will constitute a market-based verification mechanism independent of regulatory definitions.
The first major enforcement action against a frontier AI laboratory will show whether enforcement rests on independent technical evaluation with access — or on the laboratory's own documentation. The evidentiary basis of that case will reveal more about governance capacity than any published framework.
The Questions That Remain Open
The Uzbekistan banking reform of 2003 required twelve years of recurring payment failures before the institutional and political will to change the infrastructure materialized. Basel III required a global financial crisis. The FDA’s gating authority was strengthened after thalidomide. The IAEA’s inspection regime required Hiroshima.
In each case, the correction window was still open when the will to act appeared. The infrastructure could be rebuilt. The expertise still existed. The systems, however damaged, could still be corrected at human scale.
The question for AI governance is not whether equivalent enforcement architecture is needed. It is whether the correction window — the interval between visible dysfunction and irreversible consequence — will remain open long enough for that architecture to be built. And whether the cost of opening it will be paid in advance, through deliberate institutional design — or after the fact, through a failure whose dimensions we cannot yet fully anticipate.
In 2003, a technical reform closed the gap that twelve years of reporting improvements had failed to address. The reform worked not because it punished non-compliance, but because it made non-compliance structurally visible. That is the standard against which AI governance should be measured — and by which, as of today, it falls short.
Sources & Notes
[1] Central Bank of the Republic of Uzbekistan. Transition from decentralized to centralized correspondent account infrastructure, completed September 2003. Unified real-time balance reporting operational from May 2004. Author’s direct professional experience (UzAgroIndustrialBank / Agrobank, 1990s–2001). cbu.uz
[2] Financial Crisis Inquiry Commission. The Financial Crisis Inquiry Report. U.S. Government Publishing Office, 2011. fcic.law.stanford.edu
[3] Basel Committee on Banking Supervision. Basel III: A Global Regulatory Framework for More Resilient Banks and Banking Systems. Bank for International Settlements, December 2010 (revised June 2011). bis.org
[4] U.S. Congress. Dodd-Frank Wall Street Reform and Consumer Protection Act. Pub.L. 111–203, July 21, 2010. Title I: stress-testing requirements for systemically important financial institutions. congress.gov
[5] Central Bank of the Republic of Uzbekistan. Capital adequacy regulations aligned with Basel III framework, including CET1, AT1, Tier 2 requirements, conservation buffer, countercyclical buffer, and systemically important bank buffers. Phased implementation 2015–2026, with IFRS transition and risk-weighted asset methodology. cbu.uz
[6] U.S. Food and Drug Administration. History of FDA regulation and the thalidomide case. Kelsey, Frances O. “Autobiographical Reflections.” FDA, 2005. See also: Carpenter, Daniel. Reputation and Power: Organizational Image and Pharmaceutical Regulation at the FDA. Princeton University Press, 2010. fda.gov
[7] International Atomic Energy Agency. IAEA Safeguards: Serving Nuclear Non-Proliferation. IAEA, 2023. Treaty on the Non-Proliferation of Nuclear Weapons (NPT), 1968. iaea.org
[8] OECD. OECD AI Policy Observatory — Liability and Accountability Frameworks, 2024. As of 2026, no major jurisdiction has established enforceable liability frameworks connecting frontier AI deployment decisions to financial consequences for misrepresented safety posture. oecd.ai
[9] European Parliament. Regulation (EU) 2024/1689 — Artificial Intelligence Act. Official Journal of the European Union, 12 July 2024. eur-lex.europa.eu
[10] UK AI Safety Institute. AISI’s Approach to Evaluations, 2024. As of 2026, no jurisdiction has established a broadly empowered, routine, independent verification regime with consistent access to frontier model weights and training processes across major laboratories. aisi.gov.uk
Full essay and updated sources: okhodjaev.com/essays/the-correction-window/
Oybek Khodjaev: systems transformation analyst, Founder & CEO of INVEXI LLC. Former Deputy Governor (Deputy Khokim) of Samarkand Region. Previously, Treasury Director and Deputy Chairman of the Management Board at JSC UzAgroIndustrialBank. More than thirty years’ experience in economics, banking, finance, and business across Uzbekistan and the CIS.