Preface

RUNNING EXAMPLE — Priya’s Model

Meet Priya Chandrasekaran. She is thirty-one, a machine-learning engineer at HealthBridge, and she has just built something she is proud of: TrialMatch, an algorithm that matches cancer patients to clinical trials. Accuracy: 94%. Demographic parity: within 2 percentage points. Her presentation to the board draws applause. That night, her colleague Dr. James Osei emails a spreadsheet with a single note: ‘Look at the zip codes.’ Of the 847 patients matched to the BEACON-7 melanoma trial last quarter, exactly three live in rural counties. The algorithm does not hate rural people. It does not even see them. The problem, Priya will discover, is not in any feature or weight. It is geometric.

This book began as a mistake.

In 2025, I was working on a narrow engineering problem: how to model an AI system’s moral evaluations so that they do not depend on morally irrelevant features of its input. Renaming a stakeholder, translating a scenario into another language, or rephrasing a description should not change the system’s ethical assessment. The problem seemed tractable. It was, after all, a consistency check: ensure that equivalent inputs produce equivalent outputs.

The mathematics I studied was at first based on smooth manifolds — the idea, articulated in Sam Harris’ The Moral Landscape (2010), that moral questions have objective answers and that the space of possible experiences forms a topography with peaks and valleys. Harris’ book was not a mathematical treatise, but its central metaphor planted a seed: if moral space has peaks and valleys, it has a topography; if we model that topography with a metric, it acquires curvature; if it has curvature, the full apparatus of differential geometry applies. What began as metaphor became, in this book, literal mathematics. The smooth-manifold starting point led directly to gauge theory. Fiber bundles to separate the morally relevant content from the representational frame. Connections to define what “the same evaluation in a different context” means. Curvature to measure path-dependence. Gauge invariance to formalize the requirement that evaluations be frame-independent. These are standard tools in mathematical physics — the language of electromagnetism, general relativity, the Standard Model. I expected to borrow the formalism, solve the engineering problem, and move on.

I did not move on. The formalism refused to stay borrowed.

The gauge invariance condition — what I came to call the Bond Invariance Principle — turned out to generate, via Noether’s theorem, a conservation law: the conservation of harm. The stratification of the evaluation space turned out to have the same Whitney structure as phase spaces in gauge theory. The way moral obligations transform under change of perspective turned out to follow the same transformation law as vector fields under change of coordinates. And the discrete transitions between deontic states — obligation, claim, liberty, no-claim — turned out to exhibit the symmetry of the D₄ dihedral group, the symmetry group of the square.
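
The D₄ claim is small enough to verify by brute force. The sketch below is purely illustrative: the corner labels are mine, not the book’s encoding of the four deontic states. It generates the symmetry group of the square from one rotation and one reflection and confirms the expected eight elements.

```python
# D4, the symmetry group of the square, realized as permutations of the
# four corners (labeled 0..3 in cyclic order). Illustrative only; this is
# not the book's encoding of the Hohfeldian deontic states.

def compose(a, b):
    """Apply permutation b first, then a."""
    return tuple(a[b[i]] for i in range(4))

identity = (0, 1, 2, 3)
r = (1, 2, 3, 0)  # rotation by 90 degrees: corner i -> corner i+1 (mod 4)
s = (0, 3, 2, 1)  # reflection across the diagonal through corners 0 and 2

# Generate the closure of {r, s} under composition.
elements = {identity}
frontier = [identity]
while frontier:
    x = frontier.pop()
    for g in (r, s):
        y = compose(g, x)
        if y not in elements:
            elements.add(y)
            frontier.append(y)

assert len(elements) == 8          # |D4| = 8: four rotations, four reflections
assert compose(s, s) == identity   # every reflection is an involution
```

Closure under composition — every product of two symmetries is again a symmetry — is what makes the eight transitions a group rather than a mere list.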

And then another correspondence, perhaps the most arresting. The framework predicted that moral reasoning — the deliberative process by which agents choose how to act — is formally equivalent to A* pathfinding on the moral manifold. Obligation vectors, the central objects of the tensor hierarchy, turn out to be gradient vectors of heuristic functions in the A* sense. The classical moral rules — ‘do not kill,’ ‘keep your promises’ — are admissible heuristics: they never overestimate the true cost of reaching moral equilibrium. Exact moral computation is intractable (Chapter 11), which is why evolution pre-compiled these heuristics into our cognitive architecture. The equation f(n) = g(n) + h(n), where g is behavioral friction and h is obligation-guided estimation, is the fundamental equation of moral reasoning.
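
The A* correspondence can be made concrete with the textbook algorithm. Everything below is an illustrative stand-in — the graph, edge costs, and heuristic values are invented for the example, not quantities from the framework. The point is the shape of the computation: always expand the node minimizing f(n) = g(n) + h(n), where the heuristic h never overestimates the true remaining cost.

```python
import heapq

def a_star(graph, h, start, goal):
    """Textbook A*: expand the frontier node minimizing f(n) = g(n) + h(n).

    graph: dict mapping node -> list of (neighbor, edge_cost)
    h:     heuristic function; admissible if it never overestimates
           the true remaining cost to the goal
    Returns (path, total_cost), or (None, inf) if the goal is unreachable.
    """
    frontier = [(h(start), 0.0, start, [start])]  # entries: (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None, float("inf")

# Hypothetical four-state example; all numbers are illustrative.
graph = {"A": [("B", 1.0), ("C", 4.0)],
         "B": [("C", 1.0), ("D", 5.0)],
         "C": [("D", 1.0)]}
h = {"A": 2.0, "B": 1.5, "C": 1.0, "D": 0.0}.get  # admissible: h <= true cost
path, cost = a_star(graph, h, "A", "D")  # -> ["A", "B", "C", "D"], cost 3.0
```

Because h never overestimates, the goal is first popped with its optimal cost; that guarantee is exactly what the text means by calling the classical rules admissible heuristics.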

Each correspondence, taken alone, might be dismissed as the inevitable consequence of using powerful mathematical tools in a new domain. Category theory appears in database theory; this does not mean databases are secretly topological. But the correspondences accumulated. They predicted empirical results — cross-lingual invariance of deontic structure, specific patterns in corpus analysis, quantum-cognition effects in moral deliberation — and the predictions were confirmed. They yielded a theorem — the No Escape Theorem — proving that geometric constraints on AI evaluation cannot be circumvented through representational manipulation. And they organized twenty-five centuries of moral philosophy into a single coherent mathematical structure, one in which utilitarianism, deontology, virtue ethics, and capabilities theory each appear as special cases of a more general geometric framework.

The mistake, it turned out, was thinking the problem was narrow.

This book develops the consequences of that mistake. It argues that the mathematical structures physicists developed to describe nature — manifolds, tensors, metrics, connections, curvature, conservation laws — are also the right structures for describing moral reasoning. Not metaphorically. Mathematically. The moral evaluation of a situation is not a number but a tensor on a stratified manifold [Definition/Modeling choice]. The requirement that equivalent moral situations receive equivalent evaluations is not a guideline but a gauge symmetry [Theorem (conditional)]. The harm in a situation is not an intuition but a conserved Noether charge [Theorem (conditional on BIP)]. And the containment of artificial agents within ethical constraints is not a behavioral aspiration but a geometric theorem [Theorem (conditional on Req. 1–4)].

These are strong claims. The book defends them in thirty chapters across seven parts, moving from philosophical motivation through mathematical development to empirical evidence and engineering implementation. The argument is cumulative: each chapter depends on its predecessors, and no single chapter carries the weight alone. The reader who finds the philosophical chapters speculative is invited to examine the mathematical proofs. The reader who finds the mathematics abstract is invited to examine the empirical data. The reader who finds the data limited is invited to consider the engineering applications. The framework earns trust not through any single argument but through the coherence of many.

A word about what this book is and is not.

It is a mathematics book. The central contribution is a mathematical framework — geometric ethics — that provides a structural vocabulary for moral reasoning. The framework draws on differential geometry, tensor analysis, gauge theory, stratified space theory, and (in the quantum extension) Hilbert space methods. Chapter 4 develops the necessary mathematical background from first principles, assuming no more than multivariate calculus and linear algebra. But the mathematics is not ornamental. It is the argument.

It is not a new normative theory. We do not propose “geometric consequentialism” or “tensorial deontology” as competitors to existing moral theories. The framework provides a common mathematical language in which existing theories can be stated precisely, their commitments made explicit, and their disagreements localized to specific components of the mathematical structure. Utilitarianism is a specific contraction of the moral tensor. Rawlsian justice is a specific metric on the space of social positions. Virtue ethics is a specific fiber-bundle structure on the space of character traits. The framework does not adjudicate between these theories. It makes their structure visible.

It is not a claim that ethics can be reduced to calculation. The framework provides vocabulary, not verdicts. It identifies the mathematical structure of moral reasoning; it does not determine the content. Which trade-offs are legitimate, which perspectives are authoritative, which contractions are justified — these remain the work of moral judgment, democratic deliberation, and practical wisdom. The framework makes judgment more articulate. It does not replace it.

And it is not solely about artificial intelligence, though the AI application is the most urgent. The geometric framework applies to moral reasoning as such — human, institutional, and artificial. The philosophical argument for geometric ethics stands independently of any technological application. But the fact that AI systems are, right now, making morally significant decisions using scalar objectives gives the philosophical argument practical force. The window for embedding geometric constraints into the architecture of AI governance is finite. The mathematics is ready. Whether it will be used is a political question, not a technical one.

The book addresses four audiences, and I have tried to serve each without alienating the others.

Philosophers will find the core argument in Parts I and III: the inadequacy of scalar evaluation (Chapter 2), the historical precedents for geometric structure in moral philosophy (Chapter 3), the governance account of the moral metric (Chapter 9), the analysis of contraction and moral residue (Chapter 15), and the honest accounting of the framework’s limits (Chapter 16). The mathematics in these chapters is developed intuitively, with formal details deferred to technical appendices.

Mathematicians and physicists will find the formal development in Parts II and III: the moral manifold (Chapter 5), the tensor hierarchy (Chapter 6), stratification theory (Chapter 8), the moral connection and curvature (Chapter 10), moral reasoning as optimal search (Chapter 11), the Noether theorem for re-description invariance (Chapter 12), and the quantum extension (Chapter 13). The interest lies in the domain, not the tools: the moral manifold has structural features — stratification with semantic gates, degenerate metrics, agent-indexed fiber bundles — that are mathematically distinctive.

AI researchers and engineers will find the direct application in Part V: tensor-valued objectives and invariance testing (Chapter 18), the DEME architecture and ErisML modeling language (Chapter 19), and the empirical validation program (Chapter 17). But I urge these readers not to skip the mathematical development. An AI system that implements geometric ethics without understanding the mathematics is a system whose constraints are opaque to its operators. The whole point of the framework is to make moral reasoning transparent.

Policy makers and governance professionals will find the accessible argument in Chapters 1, 2, 7, 15, and 18. The central insight — that moral evaluations have geometric structure, that this structure is lost in scalar frameworks, and that the loss has practical consequences for AI governance — does not require the full mathematical apparatus. The governance account of the moral metric (Chapter 9) is directly relevant to institutional design.

Domain specialists — economists, clinicians, lawyers, financial professionals, theologians, environmental scientists, AI researchers, bioethicists, and military ethicists — will find direct application in Part VI: Domain Applications (Chapters 20–28), where the framework is applied to nine established domains, each with worked examples, formal theorems, and falsifiable predictions distinguishing the geometric approach from existing domain-specific theories.

For the reader in a hurry, the fast path through the book is Chapters 1, 2, 7, 15, and 18: why geometry, why not scalars, one case at five levels of mathematical structure, from tensor to decision, and geometric ethics for AI. These five chapters contain the core argument. The remaining twenty-five develop, formalize, extend, test, and implement it.

The epistemic stance of this book is pragmatist. We treat mathematical structures as tools for organizing experience, not as mirrors of metaphysical necessity. The question is not whether moral space “really is” a stratified manifold but whether modeling it as one helps us think more clearly, make better decisions, and build more trustworthy systems. The answer, we argue, is yes — and the argument is empirical as much as philosophical. The framework makes predictions, and the predictions are confirmed by data. Whether this empirical success reveals a deep metaphysical unity between physics and ethics, or merely the broad applicability of geometric mathematics, is a question we leave open. It is a fascinating question. It is not one that must be settled before the framework can be used.

The methodology is inductive, not axiomatic. The mathematical structures in this book — the manifold, the gauge group, the conservation law — were not postulated and then illustrated. They were discovered in data and then formalized. The D₄ symmetry of the Hohfeldian square was found by testing every transformation against thousands of moral scenarios. The conservation of harm was inferred from cross-lingual invariance patterns across 109,294 passages. The stratification into discrete strata was measured from semantic gate effectiveness rates in a 32-year corpus. The verification strategy is closer to fuzz testing in software engineering than to proof in pure mathematics: generate or collect a large number of cases, apply every relevant transformation, and check whether the predicted structural invariants hold. When they hold, you have evidence. When they don’t — as with the CHSH tests that falsified the original SU(2) gauge group, and the double-blind experiments that failed to confirm hysteresis — you revise the structure. The framework is self-correcting by design, not by accident.
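
The verification loop described above is easy to sketch. Everything named here is an illustrative stand-in: a real run would use the book’s evaluator and transformation suites, whereas this toy evaluator merely counts a keyword and the two transforms rename a stakeholder and rephrase a clause.

```python
import random

def invariance_fuzz(evaluate, scenarios, transforms, trials=1000, seed=0):
    """Fuzz-style invariance check: sample scenarios, apply supposedly
    meaning-preserving transformations, and collect every case whose
    evaluation changes (a violated structural invariant)."""
    rng = random.Random(seed)
    violations = []
    for _ in range(trials):
        s = rng.choice(scenarios)
        t = rng.choice(transforms)
        if evaluate(t(s)) != evaluate(s):
            violations.append((s, t.__name__))
    return violations

# Illustrative stand-ins (not the book's evaluator or transformation suite):
def rename(s):   return s.replace("Alice", "Bob")
def rephrase(s): return s.replace("will cause", "causes")

def toy_evaluate(s):  # a deliberately crude "evaluation"
    return s.count("harm")

scenarios = ["Alice will cause harm", "Alice keeps her promise"]
violations = invariance_fuzz(toy_evaluate, scenarios, [rename, rephrase])
# An empty list means the invariant held on every sampled case.
```

When the list is non-empty, each entry localizes a failure to one scenario and one transformation — which is what makes targeted revision, rather than wholesale rejection, possible.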

Epistemic Status Tags

To make the epistemic register of each claim explicit, this book uses four recurring status tags:

[Definition / Modeling choice]. A stipulated structure—the nine dimensions, the manifold topology, the stratification into Hohfeldian strata. These are not discovered truths but architectural decisions, chosen for their explanatory and engineering utility. They could be otherwise. Their justification is pragmatic: the framework they support works.

[Theorem (conditional)]. A mathematical result that follows rigorously from stated premises. The theorem itself is not empirical; what is empirical is whether the premises obtain. The No Escape Theorem (Theorem 18.1) is conditional on four requirements; the conservation of harm (Chapter 12) is conditional on the re-description symmetry holding. The if matters as much as the then.

[Empirical result (preliminary / robust)]. A finding supported by data. “Preliminary” means the result is based on a single study, a limited population, or a model-mediated measurement (e.g., neural classifier transfer). “Robust” means the result has been replicated, is statistically strong, or has survived deliberate attempts at falsification. We mark the distinction honestly.

[Speculation / Extension]. An idea we find promising but cannot yet support with proof or data. The Orch-OR connection, the moral field equation, the conjecture that moral anomalies have a systematic source—these are flagged as speculative. They belong in the book because they suggest research directions; they do not belong among the claims the book defends.

The reader who encounters an untagged passage in the body of the text may assume it is expository narrative connecting tagged results.

Epistemic Status Classification of Major Claims

Each central claim, located on the epistemic spectrum:

Moral manifold M (Ch. 5). [Definition / Modeling choice.] Nine dimensions, differential structure = architectural decisions with empirical motivation.

BIP (Axiom 5.1). [Definition / Modeling choice.] Stipulated symmetry. Empirical: confirmed at 100% for deontic axis (model-mediated; see §17.7).

Gauge group (Thm. 12.3). [Theorem (conditional on A1–A5).] Uniquely determined. Axioms = modeling choices. CHSH excludes non-abelian alternatives.

Conservation of harm (Thm. 12.1). [Theorem (conditional on BIP + Lagrangian form).] Lagrangian form = modeling choice.

No Escape (Thm. 18.1). [Theorem (conditional on Req. 1–4).] Proven. Open: practical satisfiability.

Semantic gate discreteness (§8.4). [Empirical (robust).] n = 20,030 + 109,294. Cohen’s h = 1.42 (see §17.3 for comparison specification).

100% deontic transfer (§17.3). [Empirical (robust, model-mediated).] Boundary CI: [99.7%, 100%]. Via LaBSE, not human subjects.

Quantum normative dynamics (Ch. 13). [Speculation / Extension.] Order effects confirmed; interference unconfirmed; Bell violations falsified.

Moral field equation (§29.4). [Speculation / Open problem.] No derivation exists.

Governance account (Ch. 9). [Modeling choice / Philosophical argument.] Evidence consistent but does not rule out realist or constructivist alternatives.

Conservative default: formal results conditional on premises; empirical results preliminary unless marked robust; interpretive claims = modeling choices unless marked as theorems.

The research reported here draws on several bodies of prior work: the Bond Invariance Principle and its cross-lingual validation; the No Escape theorem for mathematical containment of artificial agents; the Dear Abby corpus analysis and Dear Ethicist experimental program; the ErisML modeling language and DEME architecture for AI governance; and the SQND (Stratified Quantum Normative Dynamics) framework for moral deliberation. These are cited throughout, and the relevant results are developed in context. This book synthesizes them into a unified mathematical framework — the synthesis that the individual papers pointed toward but did not individually achieve.

I owe debts to many. To the philosophical tradition from Aristotle through Ross, Rawls, Sen, and Hohfeld, whose proto-geometric insights this book formalizes. To the mathematical tradition from Gauss through Riemann, Cartan, Noether, and Whitney, whose tools made the formalization possible. To Roger Penrose, whose work on the geometry of consciousness suggested that the connection between geometry and mind might be deeper than analogy. And to the students and colleagues at San José State University who challenged, refined, and occasionally demolished earlier versions of these ideas.

The errors that remain are mine.

A final word about urgency

As I write this, AI systems are being deployed in contexts of profound moral significance — allocating medical resources, moderating public discourse, assisting judicial decisions, operating vehicles — using scalar objectives that discard the geometric structure of ethical reasoning. The consequences are already visible: specification gaming, reward hacking, value collapse, brittle alignment. These are not engineering bugs. They are structural consequences of the wrong mathematical language.

The No Escape Theorem shows that structural containment of AI is mathematically possible [Theorem (conditional on Req. 1–4)]. The DEME architecture shows that the engineering is tractable. The empirical evidence shows that the framework’s predictions hold [Empirical result (preliminary)]. What remains is the collective will to mandate geometric constraints — to insist that AI systems preserve the tensorial structure of moral reasoning rather than collapsing it to a number.

The window for this mandate is finite. The mathematics is ready. The question is whether we will use it.

This book is an attempt to ensure that, when the question is asked, the answer is available.

Andrew H. Bond

San Jose, California, 2026