The Agency Transfer: What Happens When Machines Make Decisions Humans Used to Make
Contents
- I. The Gradient
- II. The Atrophy Mechanism
- III. The Acceleration
- IV. The Asymmetry
- Implications
- Signals to Watch
- The Questions That Remain Open
- Sources & Notes
If the entire history of Earth — 4.54 billion years, confirmed by radiometric dating to within one percent — were compressed into the lifetime of a centenarian, anatomically modern humans would appear roughly a day and a half before death. Everything we call civilisation — agriculture, writing, cities, empires, industrial economies — would occupy the last two hours.
The Industrial Revolution would have begun three minutes ago.
Artificial intelligence as a scientific discipline — from the 1956 Dartmouth conference to the present — would have existed for under a minute.
And the technology to which we are now transferring consequential decisions — frontier AI systems deployed at scale since 2022 — arrived roughly two seconds ago.
In those two seconds, we began delegating decisions that took two hundred thousand years of institutional evolution to learn how to make.
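The compression is simple arithmetic. A minimal sketch in Python (the event ages are round figures chosen for illustration; the 200,000-year figure for anatomically modern humans is one common estimate):

```python
# Compress Earth's 4.54-billion-year history into a 100-year lifetime
# and place key events on the compressed scale. Ages are round figures.
EARTH_AGE_YEARS = 4.54e9
SCALE = 100 / EARTH_AGE_YEARS  # compressed years per real year
SECONDS_PER_YEAR = 365.25 * 24 * 3600

events_years_ago = {
    "anatomically modern humans": 200_000,
    "agriculture and the first cities": 12_000,
    "Industrial Revolution": 265,
    "AI as a discipline (Dartmouth, 1956)": 70,
    "frontier AI at scale (2022)": 4,
}

for name, age in events_years_ago.items():
    s = age * SCALE * SECONDS_PER_YEAR  # seconds before 'death'
    if s >= 86_400:
        print(f"{name}: {s / 86_400:.1f} days before the end")
    elif s >= 3_600:
        print(f"{name}: {s / 3_600:.1f} hours")
    elif s >= 60:
        print(f"{name}: {s / 60:.1f} minutes")
    else:
        print(f"{name}: {s:.1f} seconds")
```

Run it and the figures above fall out: roughly 1.6 days, 2.3 hours, 3 minutes, 49 seconds, and just under 3 seconds.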
Seen from geological scale, the transfer looks instantaneous. Seen from inside institutions, it looks like a sequence of small conveniences that no one experiences as a surrender of agency. I know what that sequence looks like — not from reading about it, but from watching it happen inside a banking system where I spent a decade.
In the late 1990s, at UzAgroIndustrialBank in Tashkent, I managed treasury operations that were still, in significant part, manual. Correspondent account balances were tracked by specialists who carried the logic of the payment system in their heads. When the first automated treasury management systems arrived, these specialists adapted — slowly, unevenly, and then, within a few years, they were gone. Not fired. Retired, reassigned, or simply no longer needed. The system ran itself. And by the time anyone might have wanted to return to manual operations — during system failures, during transition periods, during crises — the people who knew how were no longer there. The institutional memory had not been deleted. It had atrophied.
This process is happening right now, in my country, in real time. In December 2025, Uzbekistan launched the DMED electronic prescription system across Tashkent and fifteen pilot regions [1]. Paper prescriptions are being phased out. Doctors no longer write prescriptions by hand — the system generates them, controls dosages, blocks duplicate prescriptions, limits the number of concurrent medications. If the rollout proceeds as planned, the institutional capacity to manage paper-based pharmaceutical distribution will have begun to erode materially within twelve months — not through prohibition, but through disuse. The pharmacists who understood the old system are likely to retrain. The forms are likely to be discarded. The knowledge of how to operate without the system risks quietly disappearing.
The thesis: agency transfer — the migration of consequential decisions from human judgment to automated systems — is not a binary event. It is a gradient. And on that gradient, there is a threshold beyond which reversal becomes operationally non-viable on the timelines that matter. Not because the technology cannot be switched off. Because the human capacity it replaced has been allowed to decay.
The argument is not that automation is inherently harmful, or that manual systems were superior. It is that institutions that automate without preserving rollback competence often mistake convenience for resilience — and the difference becomes visible only under precisely the conditions in which it is most costly to discover.
I. The Gradient
Agency transfer operates on a spectrum that is deceptively smooth. At one end, automated systems provide information that humans use to make decisions. At the other, automated systems make decisions that humans are informed about — or not informed about at all. The progression between these points is rarely announced. It happens through convenience, efficiency, and competitive pressure — the same forces that drive adoption of any institutional technology.
The classical automation literature identified this dynamic decades ago. Bainbridge’s foundational work on the ‘ironies of automation’ showed that automation often leaves humans responsible for precisely the abnormal conditions they are least practised at handling — because routine operations, where skills are maintained through repetition, are the first to be automated [2]. Parasuraman, Sheridan, and Wickens formalised the spectrum into a model of types and levels of automation, showing that each increment of automated authority corresponds to a measurable change in human skill maintenance [3]. The empirical evidence is consistent: as automation level rises, human intervention capacity declines — not because operators become less intelligent, but because capability is perishable when unpractised.
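The spectrum itself can be stated concretely. The ten levels below paraphrase the Parasuraman–Sheridan–Wickens scale [3]; the wording is a summary, not the paper's exact text:

```python
# Ten levels of automation of decision and action selection, paraphrased
# from Parasuraman, Sheridan & Wickens (2000) [3]. Level 1 is full human
# agency; level 10 is full machine agency. The 'gradient' in this essay
# is the drift of institutional functions from low levels toward high.
AUTOMATION_LEVELS = {
    1: "Computer offers no assistance; the human does everything",
    2: "Computer offers a complete set of decision/action alternatives",
    3: "Computer narrows the selection down to a few alternatives",
    4: "Computer suggests one alternative",
    5: "Computer executes that suggestion if the human approves",
    6: "Computer allows the human limited time to veto before acting",
    7: "Computer acts automatically, then necessarily informs the human",
    8: "Computer acts, and informs the human only if asked",
    9: "Computer acts, and informs the human only if it decides to",
    10: "Computer decides and acts autonomously, ignoring the human",
}

def routine_practice_survives(level: int) -> bool:
    """From level 7 up, the human learns of decisions after the fact
    (or not at all), so routine practice of the decision has stopped."""
    return level <= 6
```

Every step up the scale is a small design choice, but only the low levels keep the human making the decision often enough to stay practised.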
The pattern is visible across every domain where automation has been introduced. In aviation, flight management systems moved from navigation aids to autopilot to automated landing and, increasingly, to systems that manage more of the flight envelope than pilots manually control [4]. Pilots who rely on automated systems for routine flight operations demonstrate measurably degraded manual flying skills — not because they are less capable, but because the skills were not exercised. In financial markets, algorithmic trading moved from execution assistance to autonomous decision-making operating at speeds no human trader can match [5]. In each case, the transfer was gradual, rational at every step, and difficult to reverse once established.
What makes the gradient dangerous is not any single step. It is the cumulative effect: at each stage, the human competence required for the previous stage degrades through disuse. Manual navigation skills degrade when GPS is available. Mental arithmetic degrades when calculators are standard. Institutional judgment degrades when algorithmic recommendations are faster, cheaper, and more defensible than human deliberation. The automation literature calls this ‘automation-induced complacency’ [8]. I would add that at institutional scale, it is not complacency. It is structural: the organisation’s hiring, training, and reward systems adapt to the automated reality, and the old competences simply cease to be reproduced.
II. The Atrophy Mechanism
The irreversibility of agency transfer is not technical. It is institutional and cognitive. This distinction matters because it is systematically misunderstood in current governance discussions, which focus almost exclusively on technical reversibility — whether a system can be switched off — rather than on institutional reversibility — whether the human capacity to operate without it can be restored.
Three dimensions of reversibility should be distinguished. Technical reversibility asks whether the system can be switched off or paused. Institutional reversibility asks whether the organisation can still perform the function without it. Cognitive reversibility asks whether individuals retain the skills and judgment the system displaced. Current governance frameworks address the first. The second and third are where the correction window closes.
Institutional memory is not stored in databases. It is stored in people — in their judgment, their pattern recognition, their tacit knowledge of how systems actually behave under stress. When those people stop practising, the memory decays. When they retire or move on, it disappears. No documentation captures what an experienced treasury officer knew about reading the payment system for signs of stress. No manual reproduces the pharmacist’s judgment about which prescriptions required a second look. These competences existed in practice, not in procedure. When the practice stopped, the competence evaporated.
The 2003 reform of Uzbekistan’s correspondent account system — which I described in Essay 7 — offers a precise illustration. Before centralisation, each bank branch managed its own liquidity. Specialists who understood the decentralised system’s informal logic were essential. After centralisation, that informal logic became irrelevant. Within two years, the specialists had moved on. By the time anyone might have needed to reconstruct the old system, the knowledge of how it actually worked — not how it was documented, but how it operated in practice — was gone [6].
It is sometimes suggested that lost competences can be deliberately rebuilt through periodic ‘rollback exercises’ or mandatory manual proficiency requirements, as aviation has done for pilots. Yet such measures presuppose both institutional will and operational slack that most adopting institutions — especially those in resource-constrained or dependent settings — do not possess. The very efficiency that drives adoption simultaneously removes the margin required to maintain parallel human capacity. Aviation’s manual proficiency checks and energy-sector ‘black start’ drills are primitive forms of this logic — and even they are under pressure as automation deepens. In most other domains, no equivalent exists.
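The arithmetic of that maintenance burden can be made visible with a stylised model. Assuming unpractised skill decays exponentially and each exercise partially restores it (the half-life and recovery fraction below are assumptions for illustration, in the spirit of the forgetting-curve literature, not calibrated estimates):

```python
import math

# Illustrative simulation: manual competence decaying through disuse,
# with and without periodic rollback exercises. All parameters assumed.
MONTHS = 60                # five years after automation is adopted
DECAY = math.log(2) / 12   # assumed: unpractised skill halves yearly

def skill_after(practice_every: int | None) -> float:
    """Skill remaining after MONTHS, starting at 1.0. A practice month
    restores half the gap back to full proficiency."""
    skill = 1.0
    for month in range(1, MONTHS + 1):
        skill *= math.exp(-DECAY)
        if practice_every and month % practice_every == 0:
            skill += 0.5 * (1.0 - skill)
    return skill

print(f"No drills after 5 years:        {skill_after(None):.0%}")
print(f"Quarterly drills after 5 years: {skill_after(3):.0%}")
```

Under these assumptions, competence without drills falls to a few percent of its original level within five years, while quarterly exercises hold it above four-fifths. The point is not the numbers, which are invented, but the shape: rollback capacity is a function of practice frequency, and practice is exactly what efficient automation removes.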
The DMED system in Uzbekistan may offer a compressed illustration of the same mechanism. If the rollout proceeds as planned, the pharmacists who understand how paper-based prescription verification worked — who could read a handwritten prescription, cross-reference it against their knowledge of the patient’s history, exercise professional judgment about interactions — will be progressively replaced by operators who confirm what the system displays. Within a generation of practitioners, the skill set is at risk of atrophy that would be difficult to reverse on operational timescales. Not through any act of destruction. Through the quiet mechanics of disuse.
The lesson is not that paper systems were better. It is that rollback capacity does not survive by default. It survives only if institutions deliberately maintain it — and the cost of that maintenance rises in direct proportion to how well the automated system performs.
III. The Acceleration
Previous agency transfers — calculators, GPS, autopilot, algorithmic trading — were domain-specific. Each transferred decision-making authority within a bounded operational context. A GPS system does not make medical decisions. An autopilot does not manage financial portfolios. The transfer was contained by the specificity of the technology.
Frontier AI systems are general-purpose. As characterised in the foundation model literature [7], they transfer agency across domains simultaneously — drafting legal documents, generating medical assessments, producing policy analysis, writing code, making procurement recommendations — in ways that collapse the traditional boundary between decision support and decision-making. This breadth is qualitatively different from any previous automation technology, because the atrophy it produces is not contained within a single professional domain. A single model interface weakens human judgment across legal, medical, policy, technical, and administrative functions within the same organisation at once — creating systemic institutional fragility rather than localised skill loss.
The speed is equally unprecedented. The Industrial Revolution transferred agency from artisans to machines over roughly a century. The digital revolution transferred agency from manual record-keeping to automated systems over roughly three decades. Frontier AI systems are transferring agency from human judgment to automated output on timelines measured in months. The DMED system in Uzbekistan is scheduled to complete its nationwide rollout within twelve months of pilot launch. In cases like this, the institutional capacity to operate without the system may degrade before anyone has formally decided whether the transfer was desirable.
The Anthropic–Pentagon episode documented in Essay 6 illustrates how quickly bargaining asymmetry can emerge once procurement, mission design, and safety constraints are tied to a single model provider [9]. When operational dependence develops within a single procurement cycle, the question of reversal becomes not theoretical but institutional — and the answer depends on whether the adopting organisation maintained the capacity to operate without the system it has come to rely on.
This is the critical asymmetry: the decision to adopt is fast, visible, and deliberate. The loss of capacity to reverse is slow, invisible, and emergent. No one decides to lose the ability to operate manually. It simply happens — through hiring patterns that no longer value the old skills, through training programs that no longer teach them, through institutional processes that no longer require them.
IV. The Asymmetry
Agency transfer does not distribute evenly. Those who develop AI systems retain the option to modify, retrain, or withdraw them. Those who adopt AI systems — governments, institutions, populations in dependent jurisdictions — absorb the transfer without retaining comparable reversal capacity. The developer holds a switch the user does not.
Translated into the language of institutional risk: this is vendor dependency with exit costs that rise over time, portability that decreases with integration depth, and fallback rights that exist formally but erode operationally. The developer retains model control, update authority, and the option to withdraw service. The adopter’s switching costs compound with each month of integration, and the trained human capacity that would constitute a fallback erodes in direct proportion to the depth of dependence. This is not a metaphor. It is the institutional economics of lock-in — amplified by the cognitive dimension that software procurement models do not capture — and the precise mechanism through which sovereignty is transferred without formal conquest.
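Stated as a toy model, with every rate an assumption chosen only to show the shape: exit cost compounds with integration depth while fallback capacity decays, and somewhere the two curves cross.

```python
# Toy lock-in model. Growth and decay rates are illustrative assumptions,
# not estimates of any real deployment.
MONTHS = 48
EXIT_COST_GROWTH = 0.08   # assumed: switching cost compounds 8% per month
FALLBACK_DECAY = 0.05     # assumed: manual capacity erodes 5% per month

exit_cost, fallback = 1.0, 1.0   # both normalised to 1.0 at adoption
for month in range(1, MONTHS + 1):
    exit_cost *= 1 + EXIT_COST_GROWTH
    fallback *= 1 - FALLBACK_DECAY
    if fallback < 0.5 and exit_cost > 2.0:   # illustrative threshold
        print(f"Month {month}: reversal operationally non-viable "
              f"(exit cost x{exit_cost:.1f}, fallback at {fallback:.0%})")
        break
```

With these invented rates the threshold arrives in month 14, comfortably inside a single procurement cycle. The rates can be argued with; the monotonic divergence of the two curves is the structural point.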
This is the colonial pattern described in Essay 5 operating through a different mechanism: not through rules imposed from outside, but through capabilities absorbed from outside. When a government adopts an AI-powered system for public administration, healthcare, or security, the institutional competence to perform those functions without the system begins to decay from the moment of adoption. If the provider withdraws the system — through commercial decision, geopolitical pressure, or the kind of sovereign override that Essay 6 documented — the adopting institution faces a gap that cannot be filled on the timelines that matter.
The populations least likely to have voice in the transfer are those most exposed to its consequences. The farmers in the Fergana Valley whose agricultural credit will increasingly be assessed by automated systems did not participate in the design of those systems. The patients in Samarkand whose prescriptions are now generated by DMED were not consulted on the system’s architecture. This is not because anyone intended to exclude them. It is because agency transfer, like every systemic process this series has examined, operates through structural dynamics that do not require intent to produce exclusion.
Implications
First: agency transfer is the mechanism through which the correction window described in Essay 7 closes. It is not the only mechanism, but it is the most durable: once human capacity to perform a function has atrophied, restoring it requires not merely reversing a technical decision but rebuilding institutional competence that may have taken decades to develop. The correction window does not merely narrow. It changes category: from a reversible configuration problem to a loss of institutional capacity.
Second: the organisations most likely to detect dangerous levels of agency transfer are those with direct operational exposure — insurers pricing system-dependence risk, procurement officers managing vendor lock-in, military planners conducting operational continuity assessments. These actors have institutional incentives to measure what governance frameworks do not: the actual depth of dependence, not the declared level of human oversight.
Third: leading AI governance frameworks reviewed by the author do not yet treat agency transfer assessment as a required evaluation criterion. The EU AI Act addresses risk classification, transparency, and human oversight requirements — but not the structural erosion of institutional capacity to exercise that oversight over time [10]. Human-in-the-loop requirements assume the human retains the competence to intervene. Agency transfer erodes precisely that assumption.
Fourth: in practical terms, governance frameworks need an operational instrument. An agency transfer audit would ask three questions: which decisions have migrated to automated systems; which human competences have now gone long enough without practice to degrade; and what it would cost — in time, staff, training, and error tolerance — to operate manually again within the next quarter. As long as no institution is required to answer these questions, agency transfer will continue to close silently: adoption decisions are explicit, but atrophy is emergent.
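A hypothetical sketch of the record such an audit would produce follows; the field names simply restate the three questions, and nothing here is an existing framework's schema:

```python
from dataclasses import dataclass

# Hypothetical audit record for agency transfer. The structure restates
# the three questions in the text; it is not a standard schema.
@dataclass
class AgencyTransferAuditItem:
    decision: str                  # which decision has migrated to automation
    months_since_last_manual: int  # how long the competence has gone unpractised
    manual_fallback_cost: str      # time, staff, training, error tolerance

    def reversal_viable(self, max_unpractised_months: int = 12) -> bool:
        """Crude test: competence practised within the window is assumed
        recoverable next quarter; beyond it, atrophy dominates."""
        return self.months_since_last_manual <= max_unpractised_months

audit = [
    AgencyTransferAuditItem("prescription verification", 14,
                            "retrain pharmacists; months of error tolerance"),
    AgencyTransferAuditItem("treasury liquidity management", 3,
                            "two specialists on standby; days"),
]
for item in audit:
    print(f"{item.decision}: reversible within a quarter = {item.reversal_viable()}")
```

The example rows are invented. What matters is that each one forces an institution to write down a number it currently does not track.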
Signals to Watch
- Whether any major institution — government, military, financial regulator — conducts a formal agency transfer audit: a systematic assessment of which decisions have migrated to automated systems, which human competences have degraded as a result, and what the institutional cost of reversal would be at current dependence levels. The absence of such audits, as AI integration accelerates, is itself a signal.
- Whether mandatory rollback exercises — periodic operations conducted without AI systems to maintain human competence — are introduced in any critical infrastructure domain. Aviation has long maintained manual flying proficiency requirements. Finance, healthcare, and public administration have no equivalent for AI-dependent operations.
- Whether AI liability insurance begins pricing agency transfer depth — distinguishing between organisations that retain demonstrated reversal capacity and those that have become operationally dependent without maintaining alternatives. When insurers price the difference, the market has recognised what governance has not.
The Questions That Remain Open
In every domain I have worked in — banking, government, crisis management — the most dangerous condition is not the absence of a system. It is the belief that a system is present when the capacity behind it has quietly disappeared. The reports still arrive. The dashboards still display. The governance architecture still looks intact. But the human judgment that was supposed to be the last line of defence has atrophied through years of disuse — and no one noticed, because the system was working.
The question is not whether AI makes better decisions than humans in specific domains. In many cases, it demonstrably does. The question is what happens when the systems we have come to depend on fail — through technical failure, through adversarial attack, through withdrawal by their providers, through the kind of sovereign override that Essay 6 documented — and the humans who are supposed to step in no longer possess the competence to do so.
What is the institutional equivalent of a dead man’s switch — a mechanism that would enforce the halt authority, independent verification with access, and real consequences for misrepresentation identified in Essay 7 — once human agency has already atrophied through years of disuse?
No major governance framework, to the author’s knowledge, currently operationalises such a switch as a requirement. The question is whether that omission will be corrected before dependence hardens into institutional fact — and before the capacity it was meant to protect has already gone.
My father — today a Professor and Academician at Samarkand Institute of Economics and Statistics — continues to work. He witnessed the end of one system. He is watching the beginning of another.
Sources & Notes
[1] Government of Uzbekistan. Electronic prescription system (DMED) launched December 2025 in Tashkent and 15 pilot regions. Sources: Gazeta.uz, 13 December 2025; Kun.uz, 24 September 2025 (citing Cabinet of Ministers resolution). Full national rollout planned by end of 2026.
[2] Bainbridge, Lisanne. “Ironies of Automation.” Automatica, Vol. 19, No. 6, 1983. Foundational analysis of how automation removes the conditions under which operators maintain the skills needed to intervene when automation fails. sciencedirect.com
[3] Parasuraman, R., Sheridan, T.B., and Wickens, C.D. “A Model for Types and Levels of Human Interaction with Automation.” IEEE Transactions on Systems, Man, and Cybernetics, Vol. 30, No. 3, 2000. ieee.org
[4] Ebbatson, M. et al. “The Loss of Manual Flying Skills in Pilots of Highly Automated Airliners.” International Journal of Aviation Psychology, 2010. See also: Federal Aviation Administration. “Operational Use of Flight Path Management Systems: Final Report.” FAA, 2013. faa.gov
[5] Kirilenko, A. et al. “The Flash Crash: High-Frequency Trading in an Electronic Market.” Journal of Finance, 2017. See also: U.S. Securities and Exchange Commission / CFTC Joint Report. “Findings Regarding the Market Events of May 6, 2010.” sec.gov
[6] Central Bank of Uzbekistan. Centralisation of correspondent account infrastructure, completed autumn 2003. Author’s direct professional experience in the sector (UzAgroIndustrialBank, 1990s–2001). See also Essay 2 and Essay 7 of this series. cbu.uz
[7] Bommasani, R. et al. On the Opportunities and Risks of Foundation Models. Stanford CRFM, 2021. arxiv.org
[8] Parasuraman, R. and Manzey, D. “Complacency and Bias in Human Use of Automation.” Human Factors, 2010. Foundational research on automation-induced skill degradation and complacency effects.
[9] Reuters. “Anthropic sues to block Pentagon blacklisting over AI use restrictions.” March 9, 2026. reuters.com. See also Essay 6 of this series: okhodjaev.com/essays/the-pattern-closes/
[10] European Parliament. Regulation (EU) 2024/1689 — Artificial Intelligence Act. Official Journal of the European Union, 12 July 2024. Article 14 on human oversight requirements. eur-lex.europa.eu
[11] Carr, N. The Glass Cage: Automation and Us. W.W. Norton, 2014. Chapter 3 on the paradox of automation: the better the automated system, the more critical — and more degraded — the human contribution when it fails.
[12] International AI Safety Report 2025. The International Scientific Report on the Safety of Advanced AI. May 2025. internationalaisafetyreport.org
Full essay and updated sources: okhodjaev.com/essays/the-agency-transfer/
Oybek Khodjaev: systems transformation analyst, Founder & CEO of INVEXI LLC. Former Deputy Governor (Deputy Khokim) of Samarkand Region. Previously, Treasury Director and Deputy Chairman of the Management Board at JSC UzAgroIndustrialBank. More than thirty years’ experience in economics, banking, finance, and business across Uzbekistan and the CIS.
Published April 06, 2026