Chapter 16: Moral Uncertainty and the Limits of Geometric Determination
RUNNING EXAMPLE — Priya’s Model
Priya catalogues her uncertainties. Empirical: she is 85% sure the rural exclusion is systematic (she needs more data from Appalachian counties). Metric: even if she fixes the algorithm, she does not know the right weight between medical suitability and access equity—reasonable people disagree. Theory: she is unsure whether HealthBridge has an obligation to actively counter historical healthcare disparities or merely to avoid worsening them. These three types of uncertainty are distinct and resist collapse into a single confidence interval. The robust core—what survives all three—is small but real: Mrs. Voss and patients like her are being harmed, and HealthBridge knows it. That core is enough.
16.1 What the Framework Cannot Settle
The geometric framework developed in this book provides structure, precision, and new questions. The moral manifold M gives moral situations a coordinate system. The tensor hierarchy gives moral evaluations direction and rank. The metric encodes trade-offs. The connection and curvature capture path-dependence. The conservation laws constrain re-description. The quantum extension models superposition and interference. The collective agency tensor captures emergent obligations. And contraction (Chapter 15) compresses all this structure into the scalar verdicts that action requires.
But the framework does not settle all moral questions. It provides vocabulary, not verdicts. It tells us what shape moral uncertainty has, not what the right answer is. It identifies where disagreement lives in the tensor, not which side is correct.
This chapter examines the limits of geometric determination — the places where the framework appropriately falls silent or yields only partial guidance. These limits are not defects. They are honest acknowledgments of genuine indeterminacy in moral life. A framework that pretended to answer every question would be overreaching; one that identifies where indeterminacy lies is providing information that scalar frameworks cannot.
16.2 Three Types of Moral Uncertainty
Type 1: Empirical Uncertainty
The first and most familiar type is uncertainty about the facts: what will happen, who will be affected, what the consequences will be.
Example. A physician considering a treatment is uncertain whether the patient will respond. An investor choosing between projects is uncertain which will succeed. A voter evaluating candidates is uncertain what policies they will implement.
This uncertainty is exogenous to the moral framework. It concerns the inputs — the situation, the consequences, the states of the world — not the moral structure applied to them. The geometric framework handles empirical uncertainty through the uncertainty tensor Σμν introduced in Chapter 6 (§6.6):
Σ^{μν}(p) = E[(δO^μ)(δO^ν)]
The uncertainty tensor is a symmetric (2,0)-tensor encoding the shape of empirical uncertainty — which dimensions are most uncertain, which covary, and which vary independently. The moral risk of a decision (Chapter 6) depends on the alignment between the uncertainty tensor and the interest covector:
σ_S² = Σ^{μν} I_μ I_ν
When the principal axes of uncertainty coincide with the dimensions the stakeholder cares most about, the moral risk is high. When uncertainty lies along irrelevant dimensions, the risk is low.
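The moral-risk contraction can be written out in a few lines. The following is an illustrative sketch (not drawn from any reference implementation), treating the uncertainty tensor as an ordinary covariance matrix and the interest covector as a list of weights:

```python
# Illustrative sketch: the moral risk sigma_S^2 = Sigma^{mu nu} I_mu I_nu
# as a quadratic form over plain Python lists. Names are this example's own.

def moral_risk(sigma, interest):
    """Contract the uncertainty tensor twice with the interest covector.

    sigma    -- symmetric n x n covariance matrix (list of lists)
    interest -- interest covector I_mu as a list of length n
    """
    n = len(interest)
    return sum(sigma[mu][nu] * interest[mu] * interest[nu]
               for mu in range(n) for nu in range(n))

# Uncertainty concentrated on dimension 0; compare two stakeholders:
sigma = [[4.0, 0.0], [0.0, 0.1]]
aligned = moral_risk(sigma, [1.0, 0.0])     # cares about the uncertain axis
orthogonal = moral_risk(sigma, [0.0, 1.0])  # cares about the quiet axis
```

When the uncertainty lies along a dimension the stakeholder ignores, the risk collapses, which is exactly the alignment effect the surrounding text describes.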
What the framework settles. Empirical uncertainty is, in principle, reducible — more evidence, better models, and improved prediction can shrink Σμν toward zero. The framework tells us the shape of uncertainty and how it interacts with moral evaluation, but it does not resolve the uncertainty itself. That requires evidence, not geometry.
Computational realization (February 2026). The DEME V3 reference implementation operationalizes the uncertainty tensor through rank-3 MoralTensor objects that jointly represent obligation vectors and their per-dimension uncertainty. Beyond the covariance structure Σμν, V3 provides distributional risk measures: Conditional Value-at-Risk (CVaR) for tail-risk-sensitive evaluation, Value-at-Risk (VaR) for threshold-based uncertainty, and confidence-weighted evaluation that modulates the contraction weights wμ by epistemic confidence on each dimension. The rank-6 sample tensor enables Monte Carlo estimation of these risk measures, with s samples providing convergence guarantees for the tail statistics. This makes the abstract uncertainty tensor a concrete computational object with well-defined risk semantics.
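For concreteness, the tail statistics mentioned above can be sketched generically. This is an illustrative Monte Carlo estimator only, not DEME V3's actual interface, whose function names and signatures are not reproduced in this book:

```python
# Generic sketch of the distributional risk measures described in the text:
# VaR as an alpha-quantile threshold, CVaR as the mean of the worst
# alpha-fraction of sampled satisfaction scores. Estimators are simplified.

def cvar(samples, alpha=0.1):
    """Conditional Value-at-Risk: mean of the worst alpha-tail of samples."""
    ordered = sorted(samples)                 # ascending: worst scores first
    tail = max(1, int(len(ordered) * alpha))  # size of the alpha-tail
    return sum(ordered[:tail]) / tail

def var(samples, alpha=0.1):
    """Value-at-Risk: a simple alpha-quantile estimator."""
    ordered = sorted(samples)
    return ordered[max(0, int(len(ordered) * alpha) - 1)]

# Toy Monte Carlo sample of satisfaction scores (illustrative numbers):
scores = [0.9, 0.8, -0.5, 0.7, 0.85, -0.4, 0.6, 0.95, 0.75, 0.65]
```

On the sample above, CVaR averages the two worst outcomes and is therefore at least as pessimistic as VaR at the same level, which is the tail-sensitivity the text attributes to CVaR-based evaluation.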
Type 2: Metric Uncertainty
The second type is uncertainty about the metric: which trade-offs are correct, whether values are commensurable, what distance structure moral space has.
Example. An allocation committee may agree that welfare and justice both matter but disagree about their relative weights — the diagonal elements g11 and g33. A policy maker may be uncertain whether economic welfare and environmental integrity are commensurable at all — whether g16≠0 or g16=0.
This uncertainty is structural. It concerns the moral framework itself, not its inputs. It is the uncertainty addressed in Chapter 9: different accounts of the metric’s origin (realist, constructivist, expressivist, governance) yield different answers to the question of which metric is correct — and within any account, the metric may be underdetermined.
Representing metric uncertainty. Metric uncertainty can be encoded as a distribution over metrics — a probability measure P on the space of admissible metrics Gadm (Chapter 9, §9.6):
P : G_adm → [0,1],   ∫_{G_adm} dP(g) = 1
The expected metric is:
ḡ_{μν} = ∫_{G_adm} g_{μν} dP(g)
But the expected metric may not be the right metric to use. Using the expected metric corresponds to “averaging across theories,” which may produce incoherent trade-offs — an average of a utilitarian metric and a Rawlsian metric may be neither utilitarian nor Rawlsian.
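The incoherence worry can be made concrete with toy numbers (illustrative only, not the text's). Averaging a metric that trades welfare and justice 1:1 with one that weights justice nine times as heavily yields a 5:1 rate that neither community endorses:

```python
# Toy illustration of metric averaging. Metrics are plain matrices;
# the numbers and names are this example's own, not the book's.

def expected_metric(metrics, credences):
    """Credence-weighted average of metric matrices (lists of lists)."""
    n = len(metrics[0])
    return [[sum(c * g[i][j] for g, c in zip(metrics, credences))
             for j in range(n)] for i in range(n)]

g_util = [[1.0, 0.0], [0.0, 1.0]]    # welfare and justice traded 1:1
g_rawls = [[1.0, 0.0], [0.0, 9.0]]   # justice weighted nine times welfare
g_bar = expected_metric([g_util, g_rawls], [0.5, 0.5])
# g_bar trades justice at 5x welfare, a rate neither source metric endorses.
```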
What the framework settles. The framework makes metric uncertainty precise: we are uncertain about gμν, specifically about the off-diagonal terms linking dimensions μ and ν. This localization of disagreement is a genuine advance — it turns “we disagree about ethics” into “we disagree about g13” (the trade-off rate between welfare and justice). But the framework does not resolve the disagreement.
Type 3: Theory Uncertainty
The third and deepest type is uncertainty about the moral theory: whether consequentialism, deontology, virtue ethics, or some other view gives the correct account of morality.
Example. A thoughtful agent may assign 40% credence to consequentialism, 35% to deontology, and 25% to virtue ethics. Each theory recommends a different action. What should the agent do?
This uncertainty is foundational. Different theories correspond to different configurations within the geometric framework:
Different interest covectors. Utilitarianism weights Dimension 1 (welfare); Kantianism weights Dimension 2 (duty) and Dimension 4 (autonomy); care ethics weights Dimension 7 (care). These are different Iμ.
Different contraction procedures. Utilitarianism uses summative contraction; Rawlsianism uses maximin; deontology uses lexicographic. These are different C (Chapter 15).
The geometric framework provides a common language in which all these theories can be stated with precision. But it does not adjudicate between them. It is metatheoretically neutral — it provides the vocabulary for articulating theories, not the criterion for choosing among them.
What the framework settles. It makes the disagreement localizable. Two theories that seem to disagree about everything may, in tensorial form, agree on the manifold, the obligation fields, and eight of nine metric components, disagreeing only on g27 (the coupling between duty and care). This localization is analytically valuable — it turns an intractable dispute into a tractable one — but it does not resolve the dispute.
16.3 Representing Moral Uncertainty Tensorially
The Theory-Space Tensor
Chapter 3 (§3.7) introduced moral uncertainty as a vector in theory space:
|ψ⟩ = ∑_k √(c_k) |T_k⟩
where ck is the credence in theory Tk and |Tk⟩ is the state corresponding to that theory. Chapter 13 developed this into the full quantum formalism — density matrices, decoherence, measurement.
For the classical treatment of this chapter, we work with a theory-space covariance tensor:
Definition 16.1 (Theory Covariance Tensor). Let {T1,…,Tm} be a set of moral theories under consideration. The theory covariance tensor is a symmetric (0,2)-tensor on theory space:
Θ_{jk} = Cov(v_j, v_k) = E[(v_j − v̄_j)(v_k − v̄_k)]
where v_j is the verdict (satisfaction score) under theory T_j, v̄_j is its mean, and the expectation is over the agent’s credence distribution.
The diagonal elements Θjj measure the agent’s uncertainty within theory Tj — how confident the agent is about the theory’s own verdict. The off-diagonal elements Θjk measure the correlation between theories — whether evidence for one tends to support the other.
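A minimal sketch of estimating Θ_{jk}, under the simplifying assumption (not made explicit in the text) that each theory's verdict is observed across a set of probability-weighted scenarios:

```python
# Sketch of the theory covariance tensor as a weighted covariance of
# per-theory verdicts across scenarios. The sampling model is an assumption
# of this example; the text leaves it open.

def theory_covariance(verdicts, weights):
    """verdicts[s][j]: verdict of theory j in scenario s; weights sum to 1."""
    m = len(verdicts[0])
    mean = [sum(w * v[j] for v, w in zip(verdicts, weights)) for j in range(m)]
    return [[sum(w * (v[j] - mean[j]) * (v[k] - mean[k])
                 for v, w in zip(verdicts, weights))
             for k in range(m)] for j in range(m)]

# Two theories whose verdicts move together across four scenarios:
verdicts = [[1.0, 0.9], [-1.0, -0.8], [0.5, 0.4], [-0.5, -0.5]]
weights = [0.25, 0.25, 0.25, 0.25]
theta = theory_covariance(verdicts, weights)
```

Theories that move together across scenarios show a positive off-diagonal entry, the correlation the definition is after.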
The Joint Uncertainty
The full structure of moral uncertainty combines all three types:
Σ^{μν}_total = Σ^{μν}_empirical + Σ^{μν}_metric + Σ^{μν}_theory
where:
Σ^{μν}_empirical is the uncertainty about facts (which point in M we occupy)
Σ^{μν}_metric is the uncertainty about the metric (how to measure distances between points)
Σ^{μν}_theory is the uncertainty about the full evaluative framework
These three types of uncertainty are additive — each contributes independently to the total uncertainty tensor — but they are not equally reducible. Empirical uncertainty can, in principle, be reduced to zero by gathering sufficient evidence. Metric uncertainty can be narrowed by governance but not eliminated. Theory uncertainty may be irreducible.
The Moral Risk of Theory Uncertainty
The total moral risk — the variance of the satisfaction score under all sources of uncertainty — is:
σ_S² = Σ^{μν}_total I_μ I_ν
This expression shows that moral risk depends on the alignment between the total uncertainty and the interest covector. An agent whose interests happen to lie along dimensions where the theories agree (low Σtheory) faces low theory risk, even if the theories diverge sharply on other dimensions.
16.4 Decision Under Moral Uncertainty
The Three Standard Approaches
How should an agent act when uncertain which moral theory is correct? The philosophical literature offers three principal approaches, each of which has a precise tensorial formulation.
Approach 1: “My Favorite Theory” (MFT). Act according to whichever theory you find most plausible:
S_MFT = S_{T_{k*}},   k* = argmax_k c_k
In tensorial terms, MFT contracts over the theory index using a delta function: all weight is placed on one theory, and all others are discarded. The contraction loss (Chapter 15) is maximal among the standard approaches — MFT discards all information from non-favored theories.
Advantages. Simple, decisive. Does not require intertheoretic comparison (§16.5). Respects the internal logic of the favored theory.
Disadvantages. Ignores the tails of the credence distribution. An agent who is 51% utilitarian and 49% deontologist acts as if the deontological considerations have zero weight — a discontinuity that seems irrational.
Approach 2: “Maximize Expected Choiceworthiness” (MEC). Weight each theory’s recommendation by credence and maximize the weighted sum:
S_MEC = ∑_k c_k S_{T_k}
In tensorial terms, MEC contracts over the theory index using a weighted average: each theory contributes in proportion to the agent’s credence. This is a summative contraction (Chapter 15, §15.5) over theory space.
Advantages. Uses all available information. Responds smoothly to changes in credence. Produces moderate, “hedged” verdicts.
Disadvantages. Requires intertheoretic comparisons of value. Is “3 units of utilitarian goodness” comparable to “3 units of Kantian rightness”? Without a common scale, the weighted average is undefined. MEC assumes the moral-theory-space analogue of a metric — a way of comparing magnitudes across theories — that may not exist (§16.5).
Approach 3: Moral Hedging. Choose actions that are reasonably good under multiple theories, even if not optimal under any single one:
S_hedge = max_a [ ∑_k c_k S_{T_k}(a) − λ √( ∑_{j,k} c_j c_k (S_{T_j}(a) − S_{T_k}(a))² ) ]
In tensorial terms, hedging penalizes the variance of the verdict across theories rather than maximizing the expected value alone. It is risk-averse with respect to moral uncertainty: it prefers actions that are safe bets across multiple theories over actions that are brilliant under one theory and catastrophic under another.
Advantages. Robust to mis-estimation of credences. Avoids catastrophic outcomes under plausible theories. Captures the intuition that, under moral uncertainty, caution is warranted.
Disadvantages. May be excessively conservative. Can produce bland, uncommitted verdicts that no theory actively endorses. In extreme cases, hedging may paralyze action if no option is reasonably good under all theories.
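The three rules can be compared on a shared toy verdict table. This sketch assumes the verdicts are already on a common scale, which is precisely the assumption §16.5 interrogates:

```python
import math

# Sketch of MFT, MEC, and hedging on toy numbers (this example's own).
# S[a][k] is option a's satisfaction score under theory k; c[k] is credence.

def mft(S, c):
    """My Favorite Theory: optimize under the highest-credence theory."""
    k_star = max(range(len(c)), key=lambda k: c[k])
    return max(range(len(S)), key=lambda a: S[a][k_star])

def mec(S, c):
    """Maximize Expected Choiceworthiness: credence-weighted average."""
    return max(range(len(S)),
               key=lambda a: sum(ck * s for ck, s in zip(c, S[a])))

def hedge(S, c, lam=1.0):
    """Variance-penalized average, following the S_hedge formula above."""
    def score(a):
        ev = sum(ck * s for ck, s in zip(c, S[a]))
        spread = math.sqrt(sum(c[j] * c[k] * (S[a][j] - S[a][k]) ** 2
                               for j in range(len(c)) for k in range(len(c))))
        return ev - lam * spread
    return max(range(len(S)), key=score)

c = [0.51, 0.49]       # near-even credences between two theories
S = [[1.0, -1.0],      # option 0: brilliant under T0, catastrophic under T1
     [0.4, 0.4]]       # option 1: decent under both
```

With near-even credences, MFT follows its 51% favourite into the brilliant-but-risky option, while MEC and hedging both choose the safe one, illustrating the discontinuity criticized above.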
Geometric Comparison
The three approaches can be compared geometrically in theory space:
| Approach | Contraction Over Theory Index | Information Used | Information Discarded |
|---|---|---|---|
| MFT | Delta function at k* | Best theory’s verdict | All other theories |
| MEC | Weighted average by credence c_k | All verdicts, weighted | Variance across theories |
| Hedging | Variance-penalized average | All verdicts + covariance | Nothing (but conservative) |
MFT is a point in theory space. MEC is the centroid of the credence distribution. Hedging is the point that minimizes the credence-weighted distance to all theories simultaneously. Each occupies a different location in the landscape of possible contraction procedures, and each sacrifices different information (Chapter 15).
16.5 The Intertheoretic Comparison Problem
The Core Difficulty
MEC and hedging both require comparing values across theories: “How good is outcome X according to utilitarianism compared to how good outcome Y is according to Kantianism?” This comparison requires a meta-metric — a metric on theory space that allows magnitudes to be compared across theories.
The intertheoretic comparison problem is the problem of defining this meta-metric. It is hard for several reasons:
Different value scales. Utilitarianism uses unbounded utilities (in principle, utility can be any real number). Kantianism uses categorical permissions and prohibitions (permissible/impermissible, a binary). Virtue ethics uses character assessments (virtuous/vicious, with degrees). These are not just different numbers — they are different kinds of mathematical objects, not naturally comparable.
No natural exchange rate. What is one unit of utilitarian welfare “worth” in units of Kantian rightness? The question seems ill-formed. There is no natural conversion factor, and stipulating one is arbitrary.
Normalization is contentious. One approach is to normalize each theory’s verdicts to a common scale (e.g., mapping each theory’s best and worst outcomes to 0 and 1). But the normalization is itself a moral choice: which outcomes are “best” and “worst” under each theory, and whether the extreme cases should anchor the comparison, are disputed questions.
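The contested normalization can nonetheless be stated precisely. The sketch below anchors each theory's worst and best verdicts over the option set to 0 and 1; the anchoring choice itself remains the disputed moral step the text flags:

```python
# Sketch of best/worst range normalization across theories. The raw
# verdict scales below are illustrative stand-ins for the kinds of
# mathematical objects the text mentions.

def normalize(verdicts_by_theory):
    """verdicts_by_theory[k][a]: theory k's raw verdict on option a.

    Maps each theory's worst and best verdicts to 0 and 1; a theory
    indifferent over all options maps everything to 0.5.
    """
    out = []
    for vs in verdicts_by_theory:
        lo, hi = min(vs), max(vs)
        out.append([(v - lo) / (hi - lo) if hi > lo else 0.5 for v in vs])
    return out

raw = [[0.0, 50.0, 100.0],   # utilitarian utilities (unbounded scale)
       [-1.0, 1.0, 1.0]]     # deontic verdicts (impermissible/permissible)
norm = normalize(raw)
```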
The Geometric Formulation
In the geometric framework, the intertheoretic comparison problem is the problem of defining a metric on theory space Θ:
d_Θ(T_j, T_k) = √( (Θ^{-1})^{jk} Δv_j Δv_k )
Without such a metric, weighted averaging is undefined — we are adding apples and oranges. With such a metric, MEC and hedging become well-defined — but the choice of Θ is itself a moral commitment, at a meta-level above the theories being compared.
Partial Comparability
A productive middle position: intertheoretic comparison is partially defined. Some cross-theory comparisons are clearly valid:
If utilitarianism rates an action as “massively harmful” and Kantianism rates it as “clearly impermissible,” the theories agree on the direction if not the magnitude. The negative valence is comparable even if the scales are not.
If two theories rank actions in the same order (theory A prefers x to y to z; theory B also prefers x to y to z), the ordinal comparison is valid even if the cardinal magnitudes are incommensurable.
If theories share some dimensions of evaluation (both utilitarianism and care ethics include a welfare component), the comparison along those shared dimensions is well-defined.
Where comparison is defined, MEC and hedging can operate. Where it is undefined, the agent faces genuine indeterminacy — not mere uncertainty, but a structural gap in the evaluative framework. The geometric framework makes this gap precise: it is the set of theory pairs (Tj,Tk) for which the meta-metric Θjk is undefined or degenerate.
16.6 Robust Obligations
What Survives Moral Uncertainty
Despite the depth of moral uncertainty, some conclusions are remarkably robust: they hold under a wide range of theories, metrics, and contraction procedures.
Definition 16.2 (Robust Obligation). An obligation Oμ is robust if the corresponding component of the satisfaction score contributes positively under all (or almost all) plausible theories, metrics, and contractions:
I^{(k)}_μ O^μ > 0   for all k ∈ {1, …, m}
That is, the obligation “points in a good direction” from the perspective of every theory under consideration.
Examples of robust obligations:
The obligation not to torture innocents for amusement. This obligation has O1>0 (welfare), O2>0 (rights), O7>0 (care), and every plausible theory assigns positive weight to these dimensions. The obligation is robust.
The obligation to give some weight to others’ welfare. Dimension 1 (welfare) has positive I1(k) under utilitarianism, deontology (welfare matters even if it is not paramount), virtue ethics (benevolence is a virtue), and care ethics (welfare of the cared-for is central).
The obligation to keep promises, ceteris paribus. Dimension 2 (duty) is weighted positively by every theory that recognizes obligations — which is to say, every theory.
The Robust Core
The set of robust obligations defines a robust core — a subset of the obligation space that survives all plausible contractions:
C_robust = { O ∈ T_p M : I^{(k)}_μ O^μ > 0 for all k }
This is a cone in the tangent space — a set closed under positive scaling and under addition of any two elements. The robust core is the geometric formalization of the set of moral conclusions that are “safe” — that any reasonable agent should endorse, regardless of their theoretical commitments.
The size of the robust core depends on how much the theories agree. If all theories assign similar weights (similar Iμ(k)), the robust core is large — most obligations are robust. If theories diverge sharply, the robust core shrinks — fewer obligations survive all theories.
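Computationally, membership in the robust core is a sign check of I^{(k)}_μ O^μ against every theory's interest covector. A sketch with illustrative covectors:

```python
# Sketch of robust-core membership. The three interest covectors below
# are toy stand-ins for theories weighting (welfare, duty, care) differently.

def in_robust_core(obligation, interest_covectors):
    """True iff the obligation scores strictly positive under every theory."""
    return all(sum(i * o for i, o in zip(I, obligation)) > 0
               for I in interest_covectors)

interests = [[1.0, 0.1, 0.1],   # utilitarian-leaning
             [0.2, 1.0, 0.1],   # duty-leaning
             [0.2, 0.1, 1.0]]   # care-leaning
```

An obligation positive on every dimension is robust; one that violates duty sharply is endorsed by one theory and rejected by another, and so falls outside the core.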
Stability Under Perturbation
Proposition 16.1 (Stability of Robust Obligations). An obligation O in the robust core is stable under small perturbations of the credence distribution. Specifically, if O ∈ C_robust and the credences c_k are perturbed by δc_k with ∑_k δc_k = 0 and |δc_k| < ϵ, then O remains in the robust core for sufficiently small ϵ.
Proof. The MEC verdict S_MEC = ∑_k c_k I^{(k)}_μ O^μ > 0 for all O ∈ C_robust, since each term is positive. A perturbation of c_k changes S_MEC by ∑_k δc_k I^{(k)}_μ O^μ, which is bounded in magnitude by mϵ · max_k |I^{(k)}_μ O^μ|. For sufficiently small ϵ, the perturbation does not change the sign of S_MEC. ▫
Clarification on scope. The robust core is defined by the condition I^{(k)}_μ O^μ > 0 for all theories k, which is independent of the credence vector c. The non-trivial content of Proposition 16.1 is therefore not robust-core membership (which is credence-independent) but the MEC aggregate verdict: for an obligation O in the robust core, the weighted sum ∑_k c_k S_{T_k} is strictly positive for all credence distributions c with full support, and is Lipschitz-stable under small perturbations of c. The robust core itself is structurally stable: it is an open convex cone (Proposition 16.2), and membership in an open set is preserved under small perturbations of the obligation vector.
Robust obligations are insensitive to how you distribute credence among theories. Whether you are 60% utilitarian and 40% Kantian, or 20% utilitarian and 80% Kantian, the robust obligations survive. This stability is their practical significance: they provide action-guidance even under deep theoretical uncertainty.
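This credence-insensitivity can be verified numerically. A toy check (illustrative covectors, not the text's) sweeping the credence split for an obligation whose per-theory scores are both positive:

```python
# Numeric check of the stability claim: for an obligation in the robust
# core, the MEC aggregate never changes sign however credence is split.

def mec_verdict(obligation, interests, credences):
    """Credence-weighted sum of per-theory scores I^{(k)}_mu O^mu."""
    return sum(c * sum(i * o for i, o in zip(I, obligation))
               for I, c in zip(interests, credences))

interests = [[1.0, 0.2],   # theory 1's interest covector
             [0.3, 1.0]]   # theory 2's interest covector
O = [1.0, 1.0]             # robust: per-theory scores 1.2 and 1.3

verdicts = [mec_verdict(O, interests, [c0, 1.0 - c0])
            for c0 in [0.0, 0.25, 0.5, 0.75, 1.0]]
```

Every convex combination of two positive scores stays positive, which is the whole of the proof above in miniature.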
16.7 Residual Indeterminacy
What Remains Undecided
Even after identifying the robust core, residual indeterminacy persists. Some moral questions lie outside the robust core — in the region where theories genuinely disagree and no contraction procedure settles the matter.
Sources of residual indeterminacy:
1. Theories genuinely disagree. Two theories assign opposite signs to the same obligation component: utilitarianism says I1O1>0 (the action improves welfare) while deontology says I2O2<0 (the action violates a right). The obligation is not in the robust core — it is “good” under one theory and “bad” under another. No geometric structure can resolve this; it is a genuine moral disagreement.
2. The metric is underdetermined. Two communities have adopted different metrics through legitimate governance processes (Chapter 9), and both metrics satisfy all the admissibility constraints (§9.6). The framework declares both metrics admissible but does not rank them. The choice between them is a governance question, not a geometric one.
3. The appropriate contraction is contested. Even with agreement on the obligation field and the metric, two agents may disagree about the contraction procedure (Chapter 15). One uses summative contraction (all dimensions count equally); the other uses lexicographic contraction (rights have priority). The framework reveals the disagreement but does not adjudicate.
4. Multiple options are tied or incommensurable. After contraction, two options may yield the same scalar score — a tie. Or the metric may be degenerate along the direction connecting them, making comparison impossible. In either case, the framework yields no determinate verdict.
Is Residual Indeterminacy a Problem?
Some philosophers — and some AI designers — want moral frameworks to be complete: to deliver a verdict for every question. The geometric framework suggests this aspiration is misguided.
Argument from structure. If the moral manifold has regions of genuine curvature (Chapter 10), then path-dependent evaluation is a structural feature of moral space, not an epistemic failure. Two agents who traverse different paths of experience will arrive at genuinely different moral orientations — not because one is wrong, but because the geometry maps different paths onto different moral states. Demanding a unique verdict would require flattening the curvature — destroying the geometric structure that makes the framework useful.
Argument from pluralism. Chapter 9 argued for structured pluralism: the space of admissible metrics is large enough to permit genuine diversity. Different communities may legitimately adopt different metrics, each satisfying all admissibility constraints. Demanding a unique metric would collapse the pluralism into a monism that the framework explicitly rejects.
Argument from the contraction theorem. The inevitability of moral residue (Chapter 15, Proposition 15.1) means that every contraction sacrifices information. A “complete” framework that produced a unique verdict for every question would be performing a specific contraction — and that contraction would have a specific residue. The residue would encode precisely the considerations that the “complete” framework ignores. Completeness is achieved only by ignoring something, and what is ignored retains moral significance.
The Value of Precise Indeterminacy
The framework’s contribution to residual indeterminacy is not resolution but precision. Scalar frameworks face indeterminacy too, but they cannot characterize it — they can only say “we don’t know.” The geometric framework can say:
We are uncertain about the metric, specifically about g13 (the welfare-justice trade-off rate).
The uncertainty lies in the subspace spanned by theories T2 and T4.
The robust core excludes options a and d (which are good under some theories and bad under others), leaving b and c as the viable options.
Options b and c are incommensurable under the current metric: g_{μν}(b^μ − c^μ)(b^ν − c^ν) is undefined because g is degenerate along the direction b − c.
This is detailed, useful, and honest. It narrows the space of uncertainty, identifies what further information could resolve it, and acknowledges what cannot be resolved.
16.8 Decision Strategies Under Indeterminacy
When the Framework Falls Silent
When the robust core does not determine a unique action and the available decision procedures (MFT, MEC, hedging) give conflicting advice, what should the agent do?
Deliberative extension. Seek more information — about the facts (reducing Σempirical), about the operative values (narrowing Σmetric), or about the relative merits of theories (refining Σtheory). Moral inquiry is not a one-shot computation but an ongoing process that can shrink the indeterminate region over time.
Procedural resolution. When indeterminacy cannot be resolved substantively, resolve it procedurally. Use a fair procedure — a coin toss, a vote, a deliberative process, a lottery — to select among the indeterminate options. The geometric framework does not specify the procedure, but it constrains it: the procedure must be BIP-compliant (invariant under admissible re-descriptions) and must not violate the conservation of harm (Chapter 12).
Deferred contraction. When possible, defer the decision until circumstances force it (Chapter 15, §15.8). Maintain the tensorial structure, keep options open, and let the moral landscape clarify before committing. This is not indecision; it is a strategic response to genuine indeterminacy.
Transparency about indeterminacy. Acknowledge that the decision is made under irreducible moral uncertainty. Communicate the indeterminacy honestly rather than pretending to a certainty the situation does not support. This has practical consequences: decisions made under acknowledged indeterminacy may generate different residue (more symmetrically distributed across the foregone alternatives) than decisions made under false certainty.
16.9 Moral Uncertainty and AI Systems
Why AI Faces Moral Uncertainty Acutely
AI systems face moral uncertainty with special urgency:
They cannot deliberate indefinitely. Computational constraints force decisions within time limits. An autonomous vehicle facing an imminent collision cannot spend hours in moral deliberation. The system must have a pre-computed contraction procedure ready to deploy.
They cannot access full tensorial structure. AI systems work with models — necessarily simplified representations of the moral landscape. The model may capture the broad structure of the manifold while missing fine-grained features (local curvature, stratum boundaries, metric variations).
They must choose a contraction. The choice of objective function is a choice of moral contraction under uncertainty (Chapter 15, §15.11). The AI designer must commit to a specific aggregation of uncertain moral considerations — MFT, MEC, hedging, or some other procedure — before deployment.
They operate across diverse moral contexts. An AI system deployed globally encounters populations with different metrics (Chapter 9, §9.8). The system must either adopt a single metric (imposing one community’s trade-offs on all users) or adapt its metric to context (raising questions about moral consistency).
Design Implications
The geometric framework suggests several design principles for AI systems under moral uncertainty:
1. Represent uncertainty explicitly. The system should maintain and track the uncertainty tensor Σ^{μν}_total — not just a scalar “confidence score” but the full structured uncertainty, decomposed into empirical, metric, and theory components.
2. Identify the robust core. Before optimizing under a specific theory, the system should compute the robust core Crobust — the set of actions that are positively evaluated under all plausible theories. Actions within the robust core are safe regardless of the theory. Actions outside the core are risky — good under some theories, bad under others.
3. Default to the robust core. Under significant moral uncertainty, the system should prefer actions in the robust core over actions outside it, even if some non-robust actions have higher expected value under the system’s best-guess theory. This is a form of moral hedging: prioritize safety over optimality when the theory is uncertain.
4. Escalate when outside the robust core. When the available actions are all outside the robust core — when every option is bad under some plausible theory — the system should escalate to human judgment. This is not a failure of the AI; it is a correct response to a situation where the moral uncertainty exceeds the system’s capacity to resolve it.
5. Log and audit moral uncertainty. The system should record its uncertainty tensor at each decision point, making the structure of its moral uncertainty available for post-hoc review. This enables accountability: if a decision turns out badly, the uncertainty log reveals whether the system had adequate warrant for its contraction choice.
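Design principles 2–4 combine into a simple decision policy. The following is a schematic sketch; the function and data shapes are illustrative, not drawn from any deployed system:

```python
# Sketch of principles 2-4: compute the robust core, prefer actions inside
# it, and escalate to human judgment when the core is empty.

def choose_or_escalate(actions, interests, best_guess):
    """actions: {name: obligation vector}; interests: per-theory covectors;
    best_guess: index of the system's highest-credence theory.

    Returns (chosen_action_or_None, escalated_flag).
    """
    def score(vec, I):
        return sum(i * v for i, v in zip(I, vec))

    # Principle 2: identify the robust core among available actions.
    robust = [name for name, vec in actions.items()
              if all(score(vec, I) > 0 for I in interests)]
    if robust:
        # Principle 3: within the core, use the best-guess theory to rank.
        best = max(robust,
                   key=lambda n: score(actions[n], interests[best_guess]))
        return best, False
    # Principle 4: no robust option exists, so escalate rather than guess.
    return None, True

interests = [[1.0, 0.2], [0.2, 1.0]]
actions = {"a": [1.0, 1.0],    # robust: positive under both theories
           "b": [2.0, -1.0]}   # higher under theory 0, but not robust
```

A higher-scoring but non-robust action is passed over in favour of the robust one; when no robust option exists, the policy escalates rather than guessing.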
16.10 The Modesty of the Framework
What Geometric Ethics Claims
The geometric framework makes specific, bounded claims:
It claims: Moral phenomena have tensor structure that scalar frameworks lose.
It does not claim: Tensorial analysis resolves all moral questions.
It claims: Different moral theories can be represented as different structures — different interest covectors, different metrics, different contractions — on a common manifold.
It does not claim: The framework adjudicates between theories.
It claims: Making structure explicit enables clearer thinking about moral questions.
It does not claim: Clearer thinking always yields determinate answers.
It claims: Moral uncertainty has shape — it is structured, directional, decomposable into types, and partially reducible.
It does not claim: The shape alone resolves the uncertainty.
Why Modesty Is a Strength
A framework that overclaims invites refutation. A framework that accurately characterizes its scope invites use.
The geometric framework is useful because it is modest. It provides:
A common vocabulary for stating different moral theories with precision, making disagreement localizable.
A structural analysis of uncertainty, decomposing it into types, measuring its shape, and identifying the robust core.
A geometric analysis of contraction, making explicit the information lost and the residue generated by any decision procedure.
Invariance constraints (BIP, conservation of harm) that apply regardless of which theory is correct, narrowing the space of admissible moral operations.
A bridge to implementation, providing the mathematical structures that AI systems need to represent and reason about moral quantities.
What the framework does not provide is the content: which metric is correct, which theory is right, how to weight competing values. These remain the province of moral judgment, democratic deliberation, and practical wisdom. The framework provides the stage, the lighting, and the stage directions. The actors — human agents, communities, institutions, AI systems — must still perform.
16.11 Summary
| Type of Uncertainty | Locus | Tensor Encoding | Reducibility |
|---|---|---|---|
| Empirical | Facts of the situation | Σ^{μν}_empirical | In principle, fully reducible by evidence |
| Metric | Trade-off structure | Σ^{μν}_metric | Partially reducible by governance |
| Theory | Choice of framework | Θ_{jk} or Σ^{μν}_theory | May be irreducible |
| Decision Approach | Contraction Type | Advantages | Disadvantages |
|---|---|---|---|
| MFT | Delta function | Simple, decisive | Ignores non-favored theories |
| MEC | Weighted average | Uses all information | Requires intertheoretic comparison |
| Hedging | Variance-penalized | Robust, cautious | May be excessively conservative |
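The three contraction approaches in the table can be sketched numerically. The code below is a minimal illustration, not part of the framework's formal apparatus: the theory names, credences, and satisfaction scores S_{T_k}(a) are invented for the example.

```python
from statistics import pstdev

# Hypothetical satisfaction scores S_{T_k}(a): one row per theory,
# one entry per candidate action. All numbers are illustrative.
S = {
    "T1": [0.9, 0.4, 0.6],
    "T2": [0.2, 0.8, 0.6],
    "T3": [0.5, 0.5, 0.6],
}
credence = {"T1": 0.5, "T2": 0.3, "T3": 0.2}  # c_k, sums to 1
actions = range(3)

def argmax(score_fn):
    return max(actions, key=score_fn)

# MFT ("My Favorite Theory"): act only on the highest-credence theory.
favorite = max(credence, key=credence.get)
mft_choice = argmax(lambda a: S[favorite][a])

# MEC: maximize the credence-weighted average satisfaction.
def mec(a):
    return sum(credence[k] * S[k][a] for k in S)

mec_choice = argmax(mec)

# Hedging: MEC minus a penalty on cross-theory spread (population std).
lam = 1.0
hedged_choice = argmax(lambda a: mec(a) - lam * pstdev(S[k][a] for k in S))
```

With these numbers, MFT and MEC both pick the action the favorite theory rates highest, while hedging shifts to the action all three theories score identically, illustrating the "robust, cautious" entry in the table.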
The geometric framework does not eliminate moral uncertainty. It characterizes it — giving it shape, structure, and location in the tensor. It identifies the robust core — the set of obligations that survive all plausible theories. It makes the residual indeterminacy precise — showing exactly where the framework falls silent and why. And it provides design principles for AI systems that must act under moral uncertainty.
The modesty of the framework is its deepest strength. By clearly delineating what it can and cannot settle, it avoids both the arrogance of frameworks that claim to resolve all moral questions and the despair of frameworks that claim none can be resolved. Some moral questions have determinate answers (the robust core). Some have answers that depend on governance choices (the metric). Some have answers that depend on the path of moral experience (holonomy). And some are genuinely indeterminate — not because we lack information, but because the structure of moral space is richer than any scalar verdict can capture.
This is not a deficiency. It is a description of the moral world.
Technical Appendix
Proposition 16.2 (Structure of the Robust Core). The robust core C_robust = {O : I^{(k)}_μ O^μ > 0 for all k} is a convex cone in T_pM. It is nonempty if and only if the interest covectors {I^{(k)}} are not in “general opposition” — that is, if and only if there exists a direction in which all theories agree.
Proof. The set {O : I^{(k)}_μ O^μ > 0} is a half-space for each k. The robust core is the intersection of m half-spaces, which is a convex cone (possibly empty). The cone is nonempty iff the half-spaces have a common interior point — iff there exists O with I^{(k)} · O > 0 for all k — iff the convex hull of {I^{(k)}} in T_p^*M does not contain the origin. □
Corollary. If all theories assign positive weight to some dimension μ (i.e., I^{(k)}_μ > 0 for all k), then the robust core is nonempty — the unit vector e_μ along that dimension is in the core.
Application. Since all mainstream ethical theories assign positive weight to at least the welfare dimension (I^{(k)}_1 > 0), the robust core is nonempty. There always exist obligations that all theories endorse — at minimum, pure welfare improvements with no costs along other dimensions.
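Proposition 16.2 and its corollary admit a direct computational check: membership in the robust core is a finite conjunction of inner-product tests. A minimal sketch, with invented interest covectors over hypothetical dimensions (welfare, autonomy, fairness):

```python
# Hypothetical interest covectors I^{(k)}, one per theory, over the
# dimensions (welfare, autonomy, fairness). Numbers are illustrative.
covectors = [
    [1.0, 0.5, -0.2],   # theory 1
    [0.8, -0.3, 0.6],   # theory 2
    [0.4, 0.1, 0.1],    # theory 3
]

def in_robust_core(O, covectors):
    """O lies in C_robust iff I^{(k)} . O > 0 for every theory k."""
    return all(sum(i * o for i, o in zip(I, O)) > 0 for I in covectors)

# Corollary check: if some dimension mu has I^{(k)}_mu > 0 under every
# theory, the unit vector e_mu along that dimension is in the core.
dims = len(covectors[0])
agreed_dims = [mu for mu in range(dims)
               if all(I[mu] > 0 for I in covectors)]
```

Here only the welfare dimension carries positive weight under all three covectors, so `agreed_dims` contains just that dimension and the unit welfare vector `[1.0, 0.0, 0.0]` passes `in_robust_core`, matching the Application paragraph.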
Proposition 16.3 (Irreducibility of Theory Uncertainty). Theory uncertainty Σ^{μν}_theory cannot be reduced by gathering more empirical evidence if the theories agree on the facts and disagree only on the metric or the contraction procedure.
Hypothesis clarification. The irreducibility claim holds under the hypothesis that all theories in the space T agree on the empirical description of the situation (the point p ∈ M) and disagree only on the evaluative structure (the metric g, the interest covector I, or the contraction procedure). If theories disagree on the empirical facts — for example, on the consequences of an action — then gathering evidence can in principle resolve the disagreement and reduce U_T.
Proof. Empirical evidence reduces Σ^{μν}_empirical by narrowing the distribution over points in M. Theory uncertainty Σ^{μν}_theory arises from the distribution over interest covectors and metrics, which is independent of the location in M. Hence empirical evidence does not affect Σ^{μν}_theory. □
Moral content. This proposition formalizes the common experience that moral disagreements often persist even when all parties agree on the facts. The disagreement is not empirical but structural — it lies in the metric, the interest covector, or the contraction procedure. No amount of factual evidence can resolve a structural disagreement.
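The distinction Proposition 16.3 draws can be made concrete in a toy calculation: evidence narrows the spread of beliefs about the facts, while the spread across theories' evaluations of an agreed fact is untouched. Everything below (the harm variable, the two weighting functions, the sample values) is invented for illustration.

```python
from statistics import mean, pstdev

# Empirical uncertainty: belief over the true harm level h.
prior_belief = [0.2, 0.5, 0.8]         # wide spread before evidence
posterior_belief = [0.48, 0.50, 0.52]  # narrowed after gathering data

# Theory uncertainty: each theory evaluates the same fact differently.
# These weightings are fixed features of the theories, not of the data.
def eval_T1(h):
    return -1.0 * h   # theory 1's weighting of harm

def eval_T2(h):
    return -2.5 * h   # theory 2's weighting of harm

h = mean(posterior_belief)  # the fact both theories now agree on
theory_spread = abs(eval_T1(h) - eval_T2(h))

empirical_spread_before = pstdev(prior_belief)
empirical_spread_after = pstdev(posterior_belief)
```

Gathering data shrinks the empirical spread by an order of magnitude, but `theory_spread` is a function of the theories' weightings alone; no further data collection changes it.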
Proposition 16.4 (Hedging as Minimax Regret). Under certain regularity conditions, the hedging strategy

a* = argmax_a [ Σ_k c_k S_{T_k}(a) − λ · std_k(S_{T_k}(a)) ]

converges, as λ → ∞, to the minimax-regret strategy:

a* = argmin_a max_k [ max_{a'} S_{T_k}(a') − S_{T_k}(a) ]

That is, extreme hedging minimizes the worst-case regret across all theories. The hedging parameter λ interpolates between MEC (λ = 0) and minimax regret (λ → ∞).
Proof. Write the objective as J(a; λ) = S̄(a) − λ·σ(a), where S̄(a) = Σ_k c_k S_{T_k}(a) is the credence-weighted mean satisfaction and σ(a) = std_k(S_{T_k}(a)) is the cross-theory standard deviation. As λ → ∞, the penalty term λ·σ(a) dominates, so maximizing J becomes equivalent to minimizing σ(a). Define the regret of action a under theory T_k as r_k(a) = max_{a'} S_{T_k}(a') − S_{T_k}(a) ≥ 0. Under the regularity condition that the worst-case regret is achieved by a single theory, the maximizer of J converges, as λ → ∞, to the minimizer of max_k r_k(a): a* = argmin_a max_k [max_{a'} S_{T_k}(a') − S_{T_k}(a)], which is the minimax-regret strategy. For λ = 0, J(a; 0) = S̄(a), the MEC objective. Hence λ interpolates between MEC and minimax regret. □
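The interpolation claim can be exercised numerically. The sketch below, with an invented two-theory satisfaction table, checks that the variance-penalized objective reproduces the MEC choice at λ = 0 and the minimax-regret choice for large λ; it is a demonstration on one example, not a proof.

```python
from statistics import pstdev

# Illustrative satisfaction table S_{T_k}(a): one row per theory.
S = [
    [1.0, 0.0, 0.7],   # T1: strongly favors a0, tolerates a2
    [0.1, 0.9, 0.7],   # T2: strongly favors a1, tolerates a2
]
credence = [0.7, 0.3]  # c_k
actions = range(3)

def hedged_choice(lam):
    """argmax_a [ sum_k c_k S_{T_k}(a) - lam * std_k(S_{T_k}(a)) ]"""
    def J(a):
        col = [row[a] for row in S]
        return sum(c * s for c, s in zip(credence, col)) - lam * pstdev(col)
    return max(actions, key=J)

def minimax_regret_choice():
    """argmin_a max_k [ max_{a'} S_{T_k}(a') - S_{T_k}(a) ]"""
    best = [max(row) for row in S]  # each theory's best attainable score
    return min(actions,
               key=lambda a: max(b - row[a] for b, row in zip(best, S)))
```

At λ = 0 the hedged objective reduces to S̄(a) and selects T1's favored action; by λ = 5 it has moved to the consensus action a2, which is also the minimax-regret choice.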
❖
The geometric framework is modest in what it claims. It does not claim to settle all moral questions. It claims that moral questions have structure — tensor structure — and that attending to this structure enables more precise reasoning, more honest acknowledgment of uncertainty, and more responsible action under indeterminacy.
Some moral questions have robust answers: obligations that every plausible theory endorses, actions that lie in the robust core regardless of how credences are distributed. These answers are the framework’s strongest contribution — moral conclusions that are insensitive to deep theoretical disagreement.
Other moral questions have context-dependent answers: obligations that depend on the metric (a governance choice), the contraction (a procedural choice), or the path of moral experience (a holonomic fact). These answers are not arbitrary — they are constrained by structural requirements (admissibility, BIP compliance, conservation of harm) — but they are not unique.
And some moral questions have no determinate answer at all: situations where the robust core is empty, the metric is degenerate, and the contraction generates unavoidable residue no matter which option is chosen. These are the genuine dilemmas — the cases where the geometry of moral space is too curved, too stratified, too high-dimensional for any scalar verdict to capture without loss.
The framework’s honesty about these limits is not a weakness. It is a precise characterization of the moral landscape as it is — structured, rich, partially determinate, and irreducibly complex.