Beyond Control: Theory of Limits of AI Governance
An Analytical Synthesis · Version 1.0 · May 2026
Part of the series: Beyond Control — Theory of Limits of AI Governance (Essays 1–12)
Cite as: Khodjaev, O. (2026). Beyond Control: Theory of Limits of AI Governance — An Analytical Synthesis. Version 1.0, May 2026. okhodjaev.com. https://doi.org/10.5281/zenodo.20120514
Contents
- Abstract
- I. Introduction: The Frame
- II. The Diagnostic Block: Six Failure Patterns
- III. The Measurement Problem: The Correction Window
- IV. The Degradation Mechanism: Agency Transfer
- V. The Theory of Limits: Three Structural Constraints
- VI. The Governance Residual
- VII. Three Repositioning Imperatives
- VIII. Conclusion: From Governance to Positioning
- Appendix: Series Overview and How to Cite
Abstract
AI governance has failed to produce effective oversight not because regulators are incompetent or developers are dishonest, but because the institutional architectures deployed for this purpose are structurally incapable of performing the function assigned to them. This synthesis draws on twelve analytical essays produced between February and May 2026 to establish a single structural finding: three limits — sovereign override, material predetermination, and institutional mismatch — operate simultaneously on any AI governance architecture, and their interaction is multiplicative rather than additive. When all three limits are active, the corrective capacity of governance does not decline gradually; it collapses below the threshold at which meaningful correction remains operationally viable. What follows from this finding is not a policy roadmap. It is a repositioning: from asking how to build better governance to asking what remains available for constraint when the correction window has already closed.
I. Introduction: The Frame
In the months before the Soviet Union formally dissolved in December 1991, its governance machinery remained operationally intact. Committees met. Reports were filed. Sessions convened. The performance of control continued without interruption while the architecture of reversibility had already been dismantled. The question that has remained since that period is not why the system fell, but when correction became structurally impossible even as its performance continued without acknowledgment.
That question is the frame for this document.
The dominant discourse on AI governance treats the problem as one of design: which institution should hold authority, which transparency mechanism should be mandated, which technical standard should be required. This document argues for a different frame. The problem is not that no one has yet designed the right governance architecture. The problem is that three structural limits define the space within which any architecture must operate — and within that space, the conditions for real enforcement cannot be simultaneously satisfied.
This is not a pessimistic claim. It is a structural one. A theorem about limits is not a prescription for inaction; it is a repositioning of the problem. Those who continue to pursue governance frameworks as though the three limits do not exist are not simply mistaken — they are consuming the residual corrective capacity that still exists in the service of a narrative that makes correction appear unnecessary until the moment it becomes impossible.
A Note on Method. This document is neither an academic paper nor an institutional report. It is an analytical synthesis built from practitioner observation across banking, regional governance, and investment over 35 years — brought into contact with the live dynamics of frontier AI development. The author’s position outside the research institutions whose outputs this document engages is not a limitation. It is a condition of clarity: practitioners embedded in institutional failure tend to observe certain structural dynamics earlier than institutions optimized for maintaining system continuity.
II. The Diagnostic Block: Six Failure Patterns
Before arriving at the theory of limits, it is necessary to establish the diagnostic base: the six failure patterns that recur across AI governance in real time, each with a documented historical precedent. These are not six independent problems. They are six observable manifestations of a single underlying dynamic: institutional architecture designed for a prior technological order operating on a system it was not built to govern.
Table 1: Six Governance Failure Patterns
| Failure Pattern | Historical Precedent | AI Manifestation (2026) | Observable Signal |
|---|---|---|---|
| Performative Control | Soviet Gosplan: reporting structures that documented targets while obscuring systemic failure | Anthropic safety commitments vs. Pentagon override; documentation that does not constrain deployment | Gap between public safety assurances and absence of any mechanism for independently verifying them |
| Transparency Trap | Pre-2008 AAA ratings: data abundance that produced accountability opacity | Model cards and red-teaming reports without independent access to model internals | Volume of safety documentation grows while independent audit capacity remains absent |
| Regulator’s Trilemma | Uzbekistan capital markets, 1994: regulator chooses legitimacy over technical competence and speed | EU AI Act drafting on a legislative cycle measured in years; capability cycle runs on months | Frequency of regulatory revision without change in enforcement outcomes |
| Alignment Myth | Climate COP voluntary pledges: institutional commitments eroding under competitive pressure | 0.0002% of humanity encoding values for systems deployed globally; declared vs. operational objectives diverge | Safety team dissolution at major laboratories as deployment timelines compress |
| Colonial Pattern | 1990s IFI conditionality: frameworks designed by and for creditors exported as universal standards | Data extracted from Global South contexts, models trained in California, rules drafted in Brussels | Absence of non-Western representation in safety benchmark design and model evaluation |
| Pattern Closure | OpenAI board crisis (2023): formal authority and actual locus of decision-making separate; structure holds, control evaporates | Anthropic–Pentagon blacklisting (2026): safety boundary declared by developer, contested by sovereign actor | Distance between declared governance capacity and actual correction mechanisms in operational use |
These patterns are not exhaustive; they are the recurring structural configurations most consistently visible across frontier governance disputes. The six patterns do not operate independently. They compound. Performative control creates the conditions for the transparency trap: once appearances suffice for accountability, information asymmetry is weaponized. The transparency trap enables the regulator’s trilemma: an institution that cannot read the actual risk profile of a system it oversees cannot allocate its limited capacity across the three demands of understanding, speed, and legitimacy simultaneously. The trilemma perpetuates the alignment myth: when no external party can verify whether declared objectives correspond to operational behavior, the gap between institutional and actual alignment becomes undetectable until it closes catastrophically. The alignment myth entrenches the colonial pattern: the frameworks produced by and for a narrow set of actors are exported as universal standards, while the populations most exposed to system failures have no seat in their design. The compounding of these five produces the sixth, pattern closure: the moment at which the gap between declared control and actual control becomes undeniable arrives not through deliberate disclosure but through a failure event that the governance architecture was designed to prevent.
III. The Measurement Problem: The Correction Window
If the six patterns describe what is failing, a useful governance analysis requires a metric: a way to measure how far along the failure trajectory any given system has traveled. The concept of the correction window — the interval between a system’s emergence and the moment at which rollback becomes operationally non-viable — provides that metric.
Real enforcement — as opposed to its performance — comes down to three conditions. Halt authority: the capacity to stop deployment, not after market failure but before it. Independent access: direct examination of system behavior from the inside, not reliance on self-reported documentation. Consequences for misrepresentation: a liability architecture under which the gap between declared and actual safety profiles carries enforceable financial and legal penalties.
All three are currently absent from AI governance in operational form.
Halt authority exists on paper in several jurisdictions, but no regulator holds routine pre-deployment gating power over frontier systems equivalent to pharmaceutical approval. The EU AI Act provides for post-market enforcement; it does not provide for mandatory pre-deployment halt authority that a developer cannot route around through jurisdictional choice. Independent access does not exist: model internals are inaccessible to any external auditor, and the technical capacity to evaluate emergent properties in frontier models at the level of interpretability required for safety verification does not currently exist within any regulatory body. Consequences for misrepresentation require a measurable baseline against which a false claim can be proven false; no agreed metric for AI safety exists, so the boundary between honest uncertainty and deliberate misrepresentation dissolves in genuine epistemic ambiguity.
The absence of all three is not a temporary deficit pending better legislation. It is a structural characteristic of the current governance environment — confirmed in the International AI Safety Report 2026, visible in the pattern of safety team dissolution at major laboratories as deployment accelerates, and documented in the growing distance between the sophistication of safety documentation and the mechanisms for independently verifying its claims.
The correction window is not closing. In most critical domains, it has already closed. The integration of frontier models into cloud service APIs — productized, globally distributed, programmable by millions of downstream applications — has created a dependency network that no regulator can interrupt without cascading failure across thousands of organizational functions simultaneously. That impossibility is not legislative in nature — it is structural.
IV. The Degradation Mechanism: Agency Transfer
The correction window does not close all at once. It closes through a process: the progressive migration of decision-making authority from human operators to automated systems. This migration — agency transfer — is the degradation mechanism that makes the correction window’s closure structural rather than reversible.
Agency transfer is not a technical event. It is an institutional process. Organizations do not decide, in one deliberate act, to hand control to an AI system. They accumulate small operational dependencies — in scheduling, in routing, in risk assessment, in customer interaction — each individually justified by efficiency gains. The accumulation crosses a threshold: the human competence required to operate without the system has atrophied through disuse. At that point, rollback is not merely expensive. It is operationally non-viable.
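To make the threshold dynamic concrete, the sketch below models a portfolio of accumulated operational dependencies. The Dependency structure, the weights, and the rollback limit are hypothetical illustrations chosen for the example, not measurements from any organization.

```python
# Minimal sketch (hypothetical functions, weights, and limit): each dependency is small
# and individually justified, but the accumulation can cross the point at which
# unassisted operation is no longer viable.
from dataclasses import dataclass

@dataclass
class Dependency:
    function: str            # organizational function handled by the automated system
    automation_share: float  # fraction of decisions in this function made without human judgment
    manual_fallback: bool    # does a practiced manual process still exist?

ROLLBACK_LIMIT = 0.5  # assumed: above this depth, rollback is treated as operationally non-viable

portfolio = [
    Dependency("scheduling", 0.9, manual_fallback=False),
    Dependency("routing", 0.8, manual_fallback=False),
    Dependency("risk_assessment", 0.6, manual_fallback=True),
    Dependency("customer_interaction", 0.7, manual_fallback=False),
]

# Agency transfer depth: automation share counts fully only where no practiced fallback remains.
exposed = [d.automation_share for d in portfolio if not d.manual_fallback]
depth = sum(exposed) / len(portfolio)

print(f"agency transfer depth: {depth:.2f}")
print("rollback operationally viable:", depth <= ROLLBACK_LIMIT)
```

On these assumed numbers, no single dependency looks decisive, yet the portfolio as a whole has already crossed the assumed limit; that is the gradient-with-a-threshold character of the mechanism.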
The mechanism produces what can be called validity decay at execution time: the gap between the declared scope of human oversight and the actual decision-making that takes place under automation. A risk officer who can stop a transaction exercises real control. A risk officer who can only document disagreement with a system’s recommendation exercises performative control. The documentation records the form of oversight; the reality is that the system decides. As organizations integrate AI deeper into critical functions, the population of the first category shrinks and the population of the second expands — not through policy but through accumulated operational fact.
This dynamic has particular consequences for institutions in jurisdictions that receive AI systems they did not design. Their agency transfer occurs not only at the organizational level but at the sovereign level: decisions about the capabilities, boundaries, and update cycles of critical systems are made by actors outside their territory, on timescales they cannot influence, under commercial and strategic pressures they do not control. Uzbekistan’s QR payment unification (CBU Resolution No. 3817, March 2026) — a single centralized real-time behavioral dataset controlled by a single institutional actor, with no rollback provision in the governing regulation — demonstrates the mechanism at national scale: efficiency gains are real; concentration risk is institutional, not technical.
V. The Theory of Limits: Three Structural Constraints
What the preceding sections have established is a failure gradient without a built-in corrective mechanism. Six institutional decay patterns, a narrowing correction window, and the progressive atrophy of agency through accumulated dependency — each accelerating where the three structural limits converge. The question this section addresses is why those limits cannot be overcome even in principle, regardless of institutional design or political will.
The answer consists of three structural constraints. Each is a categorical limit, not a solvable deficit. Their interaction is multiplicative: the simultaneous operation of all three does not produce a governance environment that is three times harder to navigate — it produces one in which the conditions for real enforcement cannot be simultaneously satisfied.
Limit One: Sovereign Override
Every enforcement architecture in AI governance meets a structural limit at the point where a state for which the technology has become an element of strategic autonomy declines to subordinate its interests to the framework. This limit belongs to the structure of the international order, not to any design failure of the governance framework.
The Anthropic–Pentagon dispute of early 2026 illustrates the mechanism in real time. A private actor declared a safety boundary. A sovereign actor with operational dependence on the same technology contested that boundary. The contest moved into federal court. A second sovereign, observing the contest, opened a parallel channel — with the British government approaching Anthropic with expansion offers in the wake of the American conflict (Reuters, April 2026). At no point did any multilateral body, professional standard, or industry norm function as a binding constraint. The only instruments in play were contract law, regulatory designation, litigation, and the competing interests of two states.
This is not an exception. The sequence illustrates a structural pattern visible across multiple governance domains: the Treaty on the Non-Proliferation of Nuclear Weapons has operated with three permanent non-signatories for five decades; financial sanctions architectures have consistently generated parallel payment systems as states developed the incentive and capacity to route around them; voluntary AI safety commitments follow the same structural logic as COP pledges, credible at signing and eroding under competitive pressure. The AI domain compresses the cycle: deployment runs on release cycles, not diplomatic calendars, and agency transfer atrophies the institutional competence required for fallback precisely as the sovereign stakes rise.
Limit Two: Material Predetermination
The second limit operates through matter rather than will. It is not overcome by any sovereign instrument because it does not answer to any sovereign instrument.
The physical configuration of the compute stack — EUV lithography equipment, advanced semiconductor fabrication, energy infrastructure for training at frontier scale, data center geography — predetermines the range of strategic choices available to most jurisdictions before any governance dialogue begins. The constraint is physical, not political.
A single company in the Netherlands produces the extreme ultraviolet lithography machines without which fabrication of advanced chips at leading-edge process nodes is not possible. A single company in Taiwan manufactures the overwhelming majority of frontier-class chips on which leading AI models are trained. The energy requirements for training at frontier scale exceed the spare grid capacity of many states. No regulatory instrument reaches these facts. They form the material substrate of the governance environment.
For jurisdictions that neither produce the technology nor control the compute infrastructure, sovereignty in AI policy means something structurally different from what sovereignty means in other domains. A state can issue a resolution. It cannot issue a fabrication line. The declaration of sovereignty and the material capacity to exercise it are two different things. In the AI domain, the gap between them is structural — and material predetermination constrains sovereign choice before deliberation begins.
Limit Three: Institutional Mismatch
The third limit is not a shortage of institutional capacity. It is a categorical mismatch between the architecture of existing governance institutions and the three elements of real enforcement.
Existing institutions were built on the assumption of public-law legitimacy within a defined jurisdiction, technical accessibility of the regulated system, and jurisdictional coherence between the scope of the problem and the scope of enforcement authority. In AI governance, none of these conditions holds in the form existing institutions were designed to rely on.
AI developers are headquartered in one jurisdiction, deploy across dozens, and produce consequences across hundreds. No single regulator commands authority across all three phases of the development-deployment-consequence chain. The EU can regulate AI use within its borders. It cannot regulate the development of the systems being used. The United States can restrict export of certain computational resources. It cannot control the deployment of architectures that have already become globally distributed. And the technical capacity to evaluate emergent properties in frontier AI models at the level of interpretability required for meaningful safety verification does not exist in any regulatory body — not as a temporary gap, but as an accelerating one, because the capability cycle runs on months while the institutional adaptation cycle runs on years.
The Multiplicative Interaction: Formal Statement of the Theory
When sovereign override, material predetermination, and institutional mismatch operate simultaneously on an AI governance architecture, the corrective capacity of that architecture does not decline gradually. It collapses below the threshold at which meaningful correction remains operationally viable — not because the institutions are incompetent, but because the structural conditions required for real enforcement cannot be simultaneously satisfied under current geopolitical and institutional conditions. This is a limit theorem, not a prediction. It describes the architecture of the problem, not a particular outcome.
Material predetermination constrains sovereign choice before deliberation begins: a state cannot adopt an AI governance posture independent of the compute infrastructure that foreign actors control. Sovereign override activates when a state determines that its strategic interest outweighs compliance cost — but the threshold for that determination is lowered precisely by material dependency, since a state that cannot produce alternatives has less room to enforce boundaries. Institutional mismatch means that the regulatory instruments available to address both prior limits require the simultaneous satisfaction of conditions — halt authority, independent access, consequences for misrepresentation — that the interaction of the first two limits makes unavailable.
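As a purely illustrative reading of the multiplicative claim, the toy calculation below assigns each limit a hypothetical retention factor, meaning the share of corrective capacity that would survive if that limit acted alone, and compares the per-limit values with their joint product against an assumed viability threshold. The numbers are chosen only to show the shape of the interaction, not to estimate it.

```python
# Toy calculation with hypothetical numbers; not a formal model from the series.
# Retention factor: fraction of corrective capacity surviving if only that limit applied.
RETENTION = {
    "sovereign_override": 0.6,
    "material_predetermination": 0.6,
    "institutional_mismatch": 0.6,
}

VIABILITY_THRESHOLD = 0.3  # assumed floor below which correction is treated as non-viable

# Each limit taken alone leaves capacity above the threshold...
for name, factor in RETENTION.items():
    print(f"{name:26s} alone: {factor:.2f}  viable: {factor >= VIABILITY_THRESHOLD}")

# ...but each limit scales what the others leave intact, so joint capacity is their product.
joint = 1.0
for factor in RETENTION.values():
    joint *= factor

print(f"{'all three simultaneously':26s}      : {joint:.2f}  viable: {joint >= VIABILITY_THRESHOLD}")
```

Read this way, the collapse below the threshold is a property of the interaction, not of any single limit, which is the sense in which the theorem is about architecture rather than about any one actor.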
VI. The Governance Residual
A limit theorem does not mean the end of all constraint. It means the end of the governance frame that assumed the three limits could be overcome through better institutional design. What remains is the governance residual: the set of constraint mechanisms that retain operational leverage even after the correction window has closed.
The governance residual consists of three categories. None of the three substitutes for the missing elements of real enforcement. But each operates where the three limits have not yet foreclosed all leverage.
Insurance and underwriting. When major insurers begin pricing AI-dependency depth as a distinct underwriting factor — separate from general cyber risk — they create a financial incentive for organizations to reduce dependency before mandatory regulatory frameworks compel them. When underwriting practice prices risk that public safety assurances from developers do not acknowledge, that divergence constitutes the most direct market signal that corrective capacity has degraded beyond what formal governance reflects.
Litigation and judicial record. When AI-related litigation routinely compels the disclosure of internal safety evaluations, it produces judicial records that may conflict with voluntary transparency materials. Courts cannot govern AI in the sense of pre-deployment oversight. They can, however, be the mechanism through which factual discovery creates accountability for the gap between declared and actual safety profiles — after the fact, but with consequences that formal governance currently cannot produce.
Adjacent regulatory authority. Procurement requirements, financial licensing conditions, data protection frameworks, and labor regulations each reach into AI-adjacent domains where formal AI governance cannot operate. A jurisdiction that conditions public procurement on demonstrated rollback capacity — not declared oversight compliance, but actual demonstrated human capacity to operate critical functions without the system — exercises governance through a channel that the three limits have not closed.
The governance residual is not a substitute for the governance architecture that the three limits have foreclosed. It is the honest starting point for those operating after the correction window has closed. Using the residual on narrative maintenance — on sustaining the appearance of control — consumes the last available corrective capacity. The choice between acknowledged dependency and unacknowledged collapse is the governance question that remains open.
The Structural Position of the Global South
Jurisdictions in the structural position of the Global South — consuming AI systems without contributing to their design, subordinate to governance frameworks without representation in their formation, bearing responsibility for failures without control over the systems that produce them, holding formal sovereignty without the material capacity to exercise it — face this choice first and most acutely. The correction window closes there earlier, because the institutional buffers that allow central actors to maintain the appearance of control longer are absent.
This is not a disadvantage. It is an analytical advantage. A procurement ministry that acknowledges it cannot independently audit the AI system it deploys will, at minimum, invest in shadow manual processes and provider diversification. A ministry that maintains the assertion of full oversight will do neither — and will discover the operational gap only when the system fails under conditions the documentation did not anticipate. Those who cannot maintain the illusion of control are forced to confront the governance residual sooner. That confrontation, undertaken without illusion, is the only honest starting point available anywhere in the governance environment.
VII. Three Repositioning Imperatives
The theory of limits does not produce policy recommendations. It produces a repositioning of the problem. Three structural imperatives follow from the analysis for any institution, ministry, or organization making decisions about AI deployment or governance.
Imperative 1: Audit for agency transfer depth, not governance compliance.
The diagnostic test is concrete: has your organization conducted an unassisted rollback exercise for critical functions in the past twelve months? If not, the governance documentation is a record of exposure, not of control. Governance compliance measures whether the forms of oversight are in place. Agency transfer depth measures whether the substance of oversight remains operationally viable. An organization that measures only the former will discover the gap between form and substance at the worst possible moment: when a system fails under conditions the documentation did not anticipate and the human competence required for rollback has atrophied through disuse. Governance compliance that cannot be translated into demonstrated rollback capacity is not governance. It is a narrative.
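One minimal way to operationalize the diagnostic, sketched below with hypothetical function names and an assumed twelve-month limit, is to track the date of the last unassisted rollback exercise per critical function and flag anything overdue, independently of what the compliance documentation asserts.

```python
# Minimal sketch (hypothetical records, assumed 12-month limit): flag critical functions
# whose last unassisted rollback exercise is overdue, regardless of documentation status.
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)
TODAY = date(2026, 5, 1)

# None means no unassisted rollback exercise has ever been run for that function.
last_exercise = {
    "payment_routing": date(2025, 9, 14),
    "credit_risk_scoring": None,
    "customer_onboarding": date(2024, 11, 3),
}

overdue = [
    function
    for function, exercised in last_exercise.items()
    if exercised is None or TODAY - exercised > MAX_AGE
]

# Functions listed here have governance documentation that records exposure, not control.
print("rollback exercise overdue:", overdue)
```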
Imperative 2: Inventory the governance residual before committing to compliance theater.
The governance residual — insurance exposure, litigation channels, procurement veto, adjacent regulatory authority — is finite. Institutions that direct it toward narrative maintenance consume it without constraint effect. Those that direct it toward genuine constraint levers — independent audits as conditions of coverage, rollback demonstrations as conditions of procurement, judicial disclosure requirements as conditions of settlement — may retain corrective influence even after the formal governance architecture has become operationally irrelevant. The diagnostic question is not: do we have a governance framework? It is: which components of the governance residual available to this institution are being deployed for actual constraint, and which are being deployed for appearance? The distinction determines whether residual capacity is being preserved or consumed.
Imperative 3: Treat governance architectures as evidence of commitment, not as templates.
Governance frameworks produced at the center of the AI ecosystem — the EU AI Act, the US executive orders, the voluntary safety commitments — are designed to answer the question “how should governance look?” They are optimized for legitimacy, not for correction. A governance architecture designed for appearance will report on its own intent until the correction window closes. Jurisdictions that treat these frameworks as universal templates adopt the appearance requirements without gaining the corrective capacity — which was absent in the original. The correct relationship to a framework designed elsewhere is: treat it as evidence of what the center is willing to commit to, which is different from what it is willing to enforce. Design from the governance residual outward — from what actually retains constraint leverage under the three limits — not from the ideal architecture inward.
VIII. Conclusion: From Governance to Positioning
The series began with a question asked in Tashkent in 1991, about the moment when rollback had become structurally impossible while the performance of control continued. It ends with a structural answer.
The institutional order created for governing previous technologies is categorically incapable of governing frontier AI — not because the people within it are incompetent or dishonest, but because the three limits interact multiplicatively to produce a governance environment in which the conditions for real enforcement cannot be simultaneously satisfied.
This framework does not claim that meaningful governance interventions are impossible in bounded domains or specific sectors. It argues that no existing architecture can simultaneously satisfy the conditions required for systemic frontier AI governance under current geopolitical and institutional conditions — and that this holds regardless of the quality of intention or the sophistication of institutional design.
Certain developments could partially expand the correction window within specific domains — mandatory liability regimes that make the gap between declared and actual safety profiles financially consequential, compute licensing architectures that introduce pre-market conditions at the infrastructure level, interpretability advances that provide independent access to model behavior, or sovereign compute blocs that reduce material predetermination for participating jurisdictions. None of these removes the three structural limits. But each could temporarily widen the governance residual in bounded contexts, creating intervals in which meaningful correction remains operationally viable for specific sectors or applications.
This is a limit theorem. Its implications are positional, not prescriptive.
Those designing governance instruments need to start from the governance residual, not from ideal architectures that the three limits will prevent from becoming operational. For institutions managing AI-dependent operations, the honest question is not whether a governance framework exists but whether the human competence required for rollback still exists — and what happens when the answer is no. For jurisdictions in the structural position of the Global South, the shift required is from treating frameworks designed at the center as universal templates to reading them as evidence of what the center is willing to commit to, which has consistently differed from what it is willing to enforce.
The correction window has not closed uniformly. It has not closed permanently in every domain. But it has closed structurally, at the level of any architecture capable of governing frontier AI as a whole.
What remains is the governance residual: partial, asymmetric, uncoordinated, insufficient for systemic correction. What remains is also a choice — not between control and chaos, but between acknowledged dependency and unacknowledged collapse.
Those who continue to act as though the correction window remains open are not simply mistaken. They are consuming the residual capacity that still exists in the service of a narrative that makes correction appear unnecessary until the moment it becomes impossible. What we build now will not be governance of AI. It will be governance of our own relationship to systems whose operational continuation no longer depends on meaningful human reversibility.
Appendix: Series Overview and How to Cite
Table 2: Beyond Control — Essays 1–12 with Publication Dates and DOI
| # | Title | Published | Central Thesis |
|---|---|---|---|
| 1 | The Illusion of Control · DOI | Feb 12, 2026 | Three structural failure mechanisms — performative control, incentive misalignment, information asymmetry — traced from the 1991 Soviet collapse through the 2008 financial crisis to today’s AI governance frameworks |
| 2 | The Transparency Trap · DOI | Feb 17, 2026 | AI governance instruments are rebuilding a familiar architecture: disclosure without enforceable accountability. Transparency is a trap rather than a solution |
| 3 | The Regulator’s Dilemma · DOI | Feb 23, 2026 | Every regulator facing a fast-moving technology confronts the same impossible constraint: understand it, move quickly, maintain legitimacy. Pick two |
| 4 | The Myth of Alignment · DOI | Mar 3, 2026 | Alignment is not primarily a technical challenge. It is a question of power: who defines the values, who enforces them, and who bears the consequences when the gap can no longer be contained |
| 5 | The Colonial Pattern · DOI | Mar 10, 2026 | AI governance is reproducing a pattern older than artificial intelligence: whoever writes the rules controls the technology. Rule-making concentration, extraction without representation, epistemic imposition |
| 6 | The Pattern Closes · DOI | Mar 23, 2026 | The mechanisms traced across the first five essays are no longer theoretical. In March 2026 they became operational simultaneously in geopolitics and AI–state relations |
| 7 | The Correction Window · DOI | Mar 30, 2026 | Three elements make enforcement real rather than performative: consequences for misrepresentation, halt authority, and independent verification with access. All three are currently absent from AI governance |
| 8 | The Agency Transfer · DOI | Apr 6, 2026 | Agency transfer is a gradient with a threshold beyond which reversal becomes operationally non-viable. The correction window closes not through crisis but through the quiet atrophy of human institutional capacity |
| 9 | The Sovereignty Question · DOI | Apr 13, 2026 | Every enforcement architecture in AI governance meets one structural limit: the sovereign will of a state for which the technology has become an element of strategic autonomy |
| 10 | The Infrastructure Question · DOI | Apr 20, 2026 | The physical configuration of the compute stack pre-determines the choice space for most jurisdictions before any sovereign decision is taken. The limit of matter |
| 11 | The Institutional Gap · DOI | Apr 27, 2026 | What real enforcement requires and what existing institutions can produce are not the same thing — and no amount of regulation, budget, or political will can close a gap that is structural in origin |
| 12 | Beyond Control: What Happens When the Correction Window Closes · DOI | May 4, 2026 | Three limits — institutional, sovereign, material — interact multiplicatively. The governance residual: partial, asymmetric, uncoordinated, insufficient for correction. The closure does not announce itself. It is observable |
How to Cite This Synthesis
Khodjaev, O. (2026). Beyond Control: Theory of Limits of AI Governance — An Analytical Synthesis. Version 1.0, May 2026. okhodjaev.com. https://doi.org/10.5281/zenodo.20120514
Series community: zenodo.org/communities/beyond-control-ai-governance
Glossary of Core Terms
| Term | Definition |
|---|---|
| Correction window | The interval between a system’s emergence and the moment at which rollback becomes operationally non-viable. |
| Agency transfer | The progressive migration of decision-making authority from human operators to automated systems through accumulated dependency. |
| Governance residual | The set of constraint mechanisms that retain operational leverage after the correction window has closed. |
| Sovereign override | The structural limit on enforcement architectures arising when a state’s strategic interest in a technology conflicts with the framework’s constraints. |
| Material predetermination | The physical configuration of compute, energy, and fabrication infrastructure that constrains sovereign governance choices before any policy deliberation begins. |
| Institutional mismatch | The categorical incompatibility between existing governance architectures and the three elements of real enforcement: halt authority, independent access, and consequences for misrepresentation. |
Oybek Khodjaev is the founder and CEO of INVEXI LLC, former Deputy Chairman of a major Uzbek commercial bank, and former Deputy Governor of the Samarkand Region (2019–2022). He writes on AI governance and institutional risk at okhodjaev.com as part of the analytical series Beyond Control — Theory of Limits of AI Governance.