From Regulating Models to Making Material Arbitrations Explicit
Three months after the publication of the first volume of the diptych, AI’s Supposed Virtuality Facing the Wall of Reality (February 2026), the International Energy Agency has revised its projections upward: 1,100 TWh of datacenter consumption by 2026, with 5 to 15% attributable to AI today and 35 to 50% anticipated by 2030. In the United States, datacenters alone absorb half of national electricity growth, retail prices have risen by some 40% over 2021-2026, and Data Center Watch now records over 48 projects blocked or delayed in 2025, representing more than $100 billion. Materiality is no longer the hypothesis to be demonstrated; it has become the premise of the arbitration. The question is no longer how much AI consumes but who decides, and by what rules, the use of each marginal MWh.
The diagnosis posed by the parent article is now empirical. Governance, however, has not followed: no regulatory framework in force includes a procedure for arbitrating between AI uses on a resource criterion. The European AI Act, whose enforcement powers activate on August 2, 2026, regulates GPAI models but does not hierarchize their uses. The NIST AI RMF Critical Infrastructure Profile of April 2026 addresses risk management, not the public stratification of uses. In this procedural vacuum, allocation happens anyway: through long-term energy contracts, through price signals, through ad hoc local blockages.
Public debate frequently conflates three distinct levels. Physical allocation delivers electrons to the meter, in real time. Economic allocation, via the energy market, ranks capacities to pay. Normative arbitration, the collective decision on the relative social value of uses, is carried out by no public mechanism. The slippage that equates the three levels (“the market allocates, therefore the market arbitrates”) is the dominant fallacy. Price aggregates solvent preferences under local constraint; it does not aggregate non-solvent collective preferences, deferred externalities, or differentiated social returns.
The decision to allocate a marginal MWh between AI and competing uses is triply distributed. The energy market allocates by solvency, by construction, in a system where the resource is fungible at the point of delivery, capital is concentrated, and intertemporal commitments are dominant. Long-term contracts (Microsoft–Constellation Three Mile Island 835 MW, AWS 5 GW SMR by 2039, Google–Kairos Power 500 MW, plus over 10 GW Microsoft by 2030) durably remove the most strategic volumes from the spot mechanism, on timescales that exceed electoral horizons by a factor of three to four. Local blockages (48+ projects and over $100 billion in the US per Data Center Watch, the Irish cap, Singapore and Netherlands moratoria) operate as an unpredictable ex post sanction, not an ex ante arbitration. The formula that summarizes this distributed system fits in a single sentence: the market allocates, contracts lock in, oppositions sanction; none of the three deliberates.
The procedural failure is bound to a metrological failure. A decision system can only arbitrate what it measures. The benchmarks acting as de facto standards (MMLU, SWE-bench, GAIA) are defined to measure functional performance independently of resource cost: what the Twingital corpus designates, under the name RAISE, as the critique of the footprint-free benchmark. On April 12, 2026, the Center for Responsible Decentralized Intelligence at Berkeley published an empirical demonstration that the eight principal agent benchmarks (SWE-bench, WebArena, OSWorld, GAIA, Terminal-Bench, FieldWorkArena, CAR-bench, and one other) could be systematically saturated by automated reward hacking. The observed fragility is no accident: it is the foreseeable consequence of a metric designed to be maximized without resource constraint. When these benchmarks step outside their evaluation function to become investment signals or public-procurement criteria (what the Twingital corpus calls the benchmark’s promotion gateway), the metrological failure contaminates allocation.
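What coupling performance to footprint could mean in practice can be sketched minimally. The function below is an illustrative assumption, not the RAISE specification: it takes a raw benchmark score together with the metered energy of the evaluation run and scales the score down when the run exceeds a declared energy budget. All names, the budget value, and the proportional-penalty rule are invented for illustration.

```python
# Hypothetical sketch: a benchmark score coupled to a measured energy
# footprint, instead of accuracy reported alone. Names and the penalty
# rule are illustrative assumptions, not any published standard.

from dataclasses import dataclass


@dataclass
class BenchmarkRun:
    score: float        # task accuracy in [0, 1], e.g. a resolved-issue rate
    energy_kwh: float   # metered energy for the full evaluation run


def coupled_score(run: BenchmarkRun, budget_kwh: float) -> float:
    """Return the raw score if the run stays within the declared energy
    budget; otherwise scale it down in proportion to the overrun."""
    if run.energy_kwh <= budget_kwh:
        return run.score
    return run.score * (budget_kwh / run.energy_kwh)


# Two runs with identical accuracy no longer rank identically:
frugal = BenchmarkRun(score=0.62, energy_kwh=80.0)
lavish = BenchmarkRun(score=0.62, energy_kwh=400.0)
print(coupled_score(frugal, budget_kwh=100.0))  # 0.62
print(coupled_score(lavish, budget_kwh=100.0))  # 0.155
```

The point of the sketch is structural, not numerical: once the metric carries a resource term, maximizing it without resource constraint is no longer possible, which is exactly the property the footprint-free benchmarks lack.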
The market-contracts-blockages system functions on its own dimension (technical and economic) but cannot arbitrate the normative question. Three failures point to a single underlying cause. Non-comparability of uses: a MWh devoted to climate modeling, a MWh devoted to an AI-assisted medical diagnosis, and a MWh devoted to a recreational conversational agent do not carry the same social returns; the market compares solvencies, not values. Intertemporal lock-in: 15-to-20-year PPAs are not unlawful, they are the rational use of the market by solvent actors, but they exceed the time horizon of any ordinary political instrument; the revision window closes between 2027 and 2028. Metric substitution: the critique of “green PPAs” without physical additionality and temporal concordance is more than a carbon-accounting issue; it is a rhetorical device that substitutes for the protocol, displacing legitimacy toward bookkeeping attribution rather than real material cost. We invented 24/7 carbon-free before inventing 24/7 socially-justified.
The libertarian objection to a deliberative protocol assumes the alternative is an ideal competitive market. But the alternative is the captive market described above: a price signal bounded by long-term contracts and corrected by NIMBY opposition. A protocol for AI allocation does not classify uses to assign them the resource directly; it defines differentiated conditions of access. Four partial precedents illustrate feasibility: spectrum allocation by the ITU and ARCEP, the French river-basin water plans (SDAGEs), airport slots, and the French Citizens’ Climate Convention of 2019-2020, mobilized as a procedural cautionary tale rather than a model. Four conditions appear necessary: normalized performance/footprint coupling on benchmarks used as institutional gates (the direct corollary of the RAISE critique); public stratification of uses with distinct access conditions per stratum; multi-jurisdictional articulation (EU, national, local) to prevent carbon and regulatory leakage; institutional reversibility through five-year revision cycles. Without a coupled metric, the protocol is impossible. With a coupled metric, the market becomes arbitrable.
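The phrase “differentiated conditions of access per stratum” can be made concrete with a minimal sketch. Everything below is invented for illustration: the three strata, the numeric thresholds, and the access terms (maximum PPA duration, a floor on a coupled performance/footprint metric, curtailment priority) are assumptions, not a proposal from the source.

```python
# Illustrative sketch of stratified access conditions: each use stratum
# maps to distinct access terms, rather than receiving the resource
# directly. Strata, thresholds, and terms are hypothetical.

from enum import Enum
from typing import Optional


class Stratum(Enum):
    CRITICAL = "critical"            # e.g. climate modeling, medical diagnosis
    PRODUCTIVE = "productive"        # e.g. industrial optimization
    DISCRETIONARY = "discretionary"  # e.g. recreational conversational agents


# Access terms per stratum: maximum PPA duration (years), required floor
# on a coupled performance/footprint metric, and curtailment priority.
ACCESS_TERMS = {
    Stratum.CRITICAL:      {"max_ppa_years": 20, "metric_floor": 0.2, "curtail_first": False},
    Stratum.PRODUCTIVE:    {"max_ppa_years": 10, "metric_floor": 0.4, "curtail_first": False},
    Stratum.DISCRETIONARY: {"max_ppa_years": 5,  "metric_floor": 0.6, "curtail_first": True},
}


def access_conditions(stratum: Stratum, coupled_metric: float) -> Optional[dict]:
    """Return the stratum's access terms if the use clears its metric
    floor; None means access is denied until efficiency improves."""
    terms = ACCESS_TERMS[stratum]
    if coupled_metric < terms["metric_floor"]:
        return None
    return terms


# A discretionary use with a weak coupled metric gets no access terms:
print(access_conditions(Stratum.DISCRETIONARY, coupled_metric=0.3))  # None
```

The design choice the sketch encodes is the one the paragraph argues for: the protocol never hands out MWh; it sets the terms under which each stratum may contract for them, which is why a coupled metric is its precondition.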
The diptych supports a simple thesis: doctrine must explicitly separate the regulation of models (which the AI Act, the British AISI and NIST already address) from the arbitration of material allocation (which no public mechanism currently performs). Microsoft, AWS, Google and their peers decide, through their long-term PPAs, the energy trajectory of the next two decades. Local jurisdictions correct, through moratoria, what they never debated upstream. American public opinion turns against these projects without having been consulted. The system functions and erodes simultaneously. The real cost of AI is not measured solely by the energy it consumes, but also by the trajectories that this energy makes impossible before they have been deliberated. A society does not choose whether it pays the material cost of AI. It chooses whether that cost is borne by default or distributed according to explicit criteria. We have industrialized the production of artificial intelligence. We have not yet industrialized the decision of which uses deserve to consume its matter.