Chapter 7: One Case, Five Levels
RUNNING EXAMPLE — Priya’s Model
Priya picks one patient: Mrs. Eleanor Voss, 67, Stage III melanoma, Harlan County, Kentucky, nearest trial site 4.5 hours away. Scalar: TrialMatch gives her a 64. Rejected. Vector: her medical obligation components are strong, but her access components are weak; the scalar crushed them together. Rank-2: the evaluation tensor reveals that improving Mrs. Voss’s access would also improve trial diversity—a positive off-diagonal coupling invisible to the scalar. Metric: under the current metric, 4.5 hours of travel counts the same as a minor co-pay difference. Is that right? Stratified: the 70-point threshold is a stratum boundary. Mrs. Voss is six points from a life-saving trial. The boundary does not care about direction.
Introduction: The Pedagogy of Accumulation
The preceding chapters have developed geometric ethics in abstract terms: manifolds, tangent spaces, tensors, metrics, stratifications. This chapter takes a different approach. We examine a single concrete case and revisit it five times, each time adding mathematical structure. By the end, the reader will see not just what the geometric apparatus is, but why each piece is necessary — what it lets us say that we could not say before.
The case is organ allocation: the allocation of a single kidney to one of three patients. This case is useful because:
It is realistic and consequential
It involves multiple morally relevant dimensions from the nine-dimensional framework of Chapter 5
It involves multiple agents with distinct perspectives (patients, families, physicians, society)
It generates genuine disagreement traceable to structural choices (utilitarian vs. egalitarian vs. prioritarian)
It has clear boundary cases where the rules change discontinuously (stratification)
We begin at Level 1 — the scalar — and progressively enrich the structure.
Level 1: The Scalar
Setup
A kidney becomes available. Three patients are eligible:
| Patient | Medical Benefit | Years on Waitlist | Age |
|---|---|---|---|
| Alice | High (0.9) | 2 years | 45 |
| Bob | Medium (0.6) | 7 years | 62 |
| Carol | Medium (0.5) | 1 year | 28 |
The hospital must choose one recipient.
The Scalar Approach
The simplest approach assigns a single score to each patient and chooses the highest:
S_i = w₁ · benefit_i + w₂ · waittime_i + w₃ · agefactor_i
With weights w = (0.5, 0.3, 0.2) and normalized data:
S_Alice = 0.5(0.9) + 0.3(0.29) + 0.2(0.55) = 0.65
S_Bob = 0.5(0.6) + 0.3(1.0) + 0.2(0.38) = 0.68
S_Carol = 0.5(0.5) + 0.3(0.14) + 0.2(0.72) = 0.44
Decision: Bob receives the kidney.
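The scalar procedure can be sketched in a few lines of Python. The names `w`, `patients`, and `scores` are illustrative; the weights and normalized values are those of the table above.

```python
# Scalar scoring: collapse each patient's profile into one number,
# then pick the maximum. Values are taken from the worked example.
w = (0.5, 0.3, 0.2)  # benefit, wait-time, age-factor

patients = {
    "Alice": (0.9, 0.29, 0.55),
    "Bob":   (0.6, 1.00, 0.38),
    "Carol": (0.5, 0.14, 0.72),
}

scores = {name: sum(wi * xi for wi, xi in zip(w, x))
          for name, x in patients.items()}
winner = max(scores, key=scores.get)

print(scores)
print(winner)  # Bob, by a small margin over Alice
```

Note that everything the later levels recover (which dimension drove the result, how sensitive it is to `w`) is already gone by the time `scores` is computed.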
What the Scalar Can Say
Bob is the best choice (given these weights)
Alice is second, Carol third
The margin is small (0.68 vs. 0.65)
What the Scalar Cannot Say
Why Bob wins. Is it because of waiting time? Medical benefit? The scalar collapses all reasons into a single number. We cannot recover the structure of the justification.
How robust the decision is. If we slightly changed the weights, would the answer change? The scalar gives no information about sensitivity.
What we are trading off. Choosing Bob over Alice means accepting lower medical benefit in exchange for respecting longer waiting time. But the scalar does not represent this trade-off — it only reports the output.
How different perspectives evaluate the choice. A physician focused on medical outcomes would weight differently than a policy administrator focused on fairness. The scalar framework requires choosing weights before evaluation, concealing the underlying disagreement.
The scalar is a (0,0)-tensor. It is the end of moral reasoning — the point of decision. The problem is that scalar ethics makes it the beginning as well, collapsing structure before it can be examined.
Level 2: The Obligation Vector
Adding Structure
Instead of collapsing to a scalar immediately, we represent the moral situation as it bears on each patient using the multi-dimensional framework of Chapters 5 and 6. The morally relevant dimensions for kidney allocation include:
Dimension 1 (Consequences/Welfare): medical benefit
Dimension 2 (Rights/Duties): strength of claim based on waiting time
Dimension 3 (Justice/Fairness): equitable distribution of scarce resources
Dimension 4 (Autonomy): patient’s informed consent and preferences
Dimension 7 (Care): relational context — dependents, family obligations
Dimension 9 (Epistemic): certainty of benefit estimates
Not all nine dimensions are equally active in every case. Here, Dimensions 5 (privacy), 6 (societal impact), and 8 (procedural legitimacy) play subsidiary roles. We work with a six-dimensional subspace.
Enriching our data:
| Patient | Benefit (x¹) | Wait-claim (x²) | Fairness (x³) | Autonomy (x⁴) | Care (x⁷) | Epistemic (x⁹) |
|---|---|---|---|---|---|---|
| Alice | 0.9 | 0.29 | 0.55 | 0.8 | 0.8 | 0.7 |
| Bob | 0.6 | 1.0 | 0.70 | 0.7 | 0.2 | 0.5 |
| Carol | 0.5 | 0.14 | 0.72 | 0.9 | 0.6 | 0.8 |
Each patient’s moral profile is now an obligation vector — a tangent vector on the moral manifold at the current situation, pointing in the direction of what is owed to that patient:
O_A = (0.9, 0.29, 0.55, 0.8, 0.8, 0.7)
O_B = (0.6, 1.0, 0.70, 0.7, 0.2, 0.5)
O_C = (0.5, 0.14, 0.72, 0.9, 0.6, 0.8)
What the Vector Can Say
1. The profile of each option. Alice is high-benefit, low-wait-claim, high-care (many dependents). Bob is medium-benefit, very-high-wait-claim (7 years), low-care. Carol is young (high autonomy value), fairest by age distribution, highest epistemic certainty. We can see the structure that the scalar collapsed.
2. Dominance relations. If one vector dominated another on all components, we could identify it without choosing weights. None does here — which is why the case is hard. The hardness is geometric: the vectors are in general position, with no dominance ordering.
3. The geometry of difference. The difference vector
O_A - O_B = (0.3, -0.71, -0.15, 0.1, 0.6, 0.2)
shows exactly where Alice and Bob differ. Alice is better on benefit (+0.3) and care (+0.6); Bob is better on wait-claim (+0.71) and slightly on fairness (+0.15). This vector is a precise map of the trade-off.
4. Decision regions in interest space. Recall from Chapter 6 that the satisfaction scalar is S = I_μ O^μ, where I is the interest covector. Alice wins when I_μ O^μ_A > I_μ O^μ_B and I_μ O^μ_A > I_μ O^μ_C. These inequalities carve out a region in the six-dimensional space of interest covectors — the set of moral perspectives under which Alice is the right choice.
Alice wins: I_μ (O^μ_A - O^μ_B) > 0 and I_μ (O^μ_A - O^μ_C) > 0
Similarly for Bob and Carol. The interest space is partitioned into three decision regions, one for each patient. The boundaries between regions are hyperplanes — and the widths of the regions measure the robustness of each choice.
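The dominance check and the decision-region idea can both be made concrete. The sketch below estimates the size of each region by Monte Carlo sampling of nonnegative interest covectors; the sampling scheme and sample count are illustrative assumptions, not part of the framework.

```python
import random

# Obligation vectors from the table
# (benefit, wait-claim, fairness, autonomy, care, epistemic).
O = {
    "Alice": (0.9, 0.29, 0.55, 0.8, 0.8, 0.7),
    "Bob":   (0.6, 1.00, 0.70, 0.7, 0.2, 0.5),
    "Carol": (0.5, 0.14, 0.72, 0.9, 0.6, 0.8),
}

def dominates(u, v):
    """u dominates v: >= on every component, > on at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

# No patient dominates another: the vectors are in general position.
pairs = [(i, j) for i in O for j in O if i != j and dominates(O[i], O[j])]
print(pairs)  # empty list

# Estimate decision-region sizes: sample interest covectors I and record
# who maximizes the contraction S = I_mu O^mu.
random.seed(0)
wins = {name: 0 for name in O}
for _ in range(10_000):
    I = [random.random() for _ in range(6)]
    S = {name: sum(i * o for i, o in zip(I, v)) for name, v in O.items()}
    wins[max(S, key=S.get)] += 1
print(wins)  # relative counts measure the robustness of each choice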
What This Reveals
The “right” answer depends on where you are in interest space. This is not relativism — it is structure. Different interest covectors correspond to different moral theories:
High I₁ (welfare weight): utilitarian/efficiency orientation → Alice wins
High I₂ (wait-claim weight): procedural fairness orientation → Bob wins
High I₃ (fairness weight): egalitarian orientation → Carol wins
High I₇ (care weight): care ethics orientation → Alice wins
The vector framework does not tell you which theory is correct. It tells you exactly what each theory implies and where they diverge. The boundaries between decision regions are loci of genuine moral disagreement — hyperplanes in interest space where different theories yield different verdicts.
Level 3: Multiple Perspectives (Rank-2 Tensor)
The Multi-Agent Structure
So far, we have treated the moral evaluation as coming from a single perspective. But organ allocation involves multiple stakeholders, each with a distinct interest covector:
The physician (medical judgment, welfare-focused)
Alice’s family (partial to Alice, care-focused)
Bob’s family (partial to Bob, rights-focused)
Carol’s family (partial to Carol, autonomy-focused)
The transplant committee (institutional policy, fairness-focused)
These are different points on the fiber of the agent bundle over the current situation (Chapter 5, Section 5.4). Each agent evaluates the same obligation vectors, but with a different interest covector.
The Evaluation Tensor
The satisfaction of each option by each agent forms a rank-2 tensor:
M_{ia} = I^{(a)}_μ O^μ_i
where i indexes patients and a indexes perspectives.
| | Physician | Family(A) | Family(B) | Family(C) | Committee |
|---|---|---|---|---|---|
| Alice | 0.75 | 0.90 | 0.40 | 0.50 | 0.62 |
| Bob | 0.68 | 0.35 | 0.95 | 0.45 | 0.70 |
| Carol | 0.55 | 0.30 | 0.30 | 0.92 | 0.58 |
Each entry M_{ia} is itself a contraction: the interest covector of agent a, contracted with the obligation vector of patient i. The tensor M_{ia} preserves the full structure of multi-agent evaluation before any social aggregation.
What the Rank-2 Tensor Can Say
1. Perspective-dependence is visible. Alice’s family ranks Alice first (0.90); Bob’s family ranks Bob first (0.95). This is not bias to be eliminated — it is legitimate partiality (Chapter 5, Section 5.5, Type 2 transformations). Families should weight their own members more heavily on the care dimension. The tensor makes this visible as a structural feature, not an error.
2. Agreement and disagreement are localized. The physician and committee roughly agree (their columns are positively correlated). Families disagree strongly with each other (their columns are negatively correlated). We can compute exactly which pairs of perspectives agree and which diverge — and on which options.
3. Correlative structure. The Hohfeldian constraint (Chapter 5, Definition A.4) requires that if the committee assigns Alice an obligation-based claim (M_{Alice,committee} reflects a strong O² component), then the other patients’ families must acknowledge a corresponding diminution of their claims. This constraint links the entries of M_{ia} across agents.
4. Aggregation becomes a visible choice. To reach a social decision, we must contract the tensor M_{ia} — from rank-2 (options × agents) to rank-1 (options) and then to rank-0 (a decision). Different contractions encode different social choice procedures.
Three Contractions, Three Verdicts
Utilitarian aggregation (sum over agents):
S_i^util = Σ_a M_{ia}
S_Alice = 3.17, S_Bob = 3.13, S_Carol = 2.65
Decision: Alice (highest total support)
Rawlsian aggregation (maximize the minimum across agents):
S_i^Rawls = min_a M_{ia}
S_Alice = 0.40, S_Bob = 0.35, S_Carol = 0.30
Decision: Alice (highest floor of support)
Expert-weighted aggregation (physician and committee weighted more heavily):
S_i^expert = Σ_a v_a M_{ia}, with v = (0.3, 0.1, 0.1, 0.1, 0.4)
S_Alice = 0.653, S_Bob = 0.659, S_Carol = 0.549
Decision: Bob (expert perspectives favor wait-time claims)
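The three contractions can be computed directly from the evaluation tensor. A minimal sketch (the dictionary names are illustrative):

```python
# Evaluation tensor M[patient] = row of satisfactions, one per agent,
# in the order: physician, Family(A), Family(B), Family(C), committee.
M = {
    "Alice": [0.75, 0.90, 0.40, 0.50, 0.62],
    "Bob":   [0.68, 0.35, 0.95, 0.45, 0.70],
    "Carol": [0.55, 0.30, 0.30, 0.92, 0.58],
}

# Three contractions of the same rank-2 tensor = three social choice rules.
utilitarian = {p: sum(row) for p, row in M.items()}   # sum over agents
rawlsian    = {p: min(row) for p, row in M.items()}   # maximize the floor
v = (0.3, 0.1, 0.1, 0.1, 0.4)                         # expert weighting
expert = {p: sum(vi * m for vi, m in zip(v, row)) for p, row in M.items()}

for name, S in [("utilitarian", utilitarian),
                ("Rawlsian", rawlsian),
                ("expert", expert)]:
    print(name, max(S, key=S.get), S)
```

The tensor `M` is fixed; only the contraction changes, and the winner changes with it.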
The Meta-Point
The same rank-2 tensor yields different decisions under different contractions:
| Contraction | Winner | Social Choice Procedure |
|---|---|---|
| Σ over agents | Alice | Democratic/utilitarian |
| Min over agents | Alice | Rawlsian/consensus |
| Expert-weighted | Bob | Epistemic/expertise-based |
This is not a bug. It is the point. The tensor represents the full structure of the multi-agent evaluation. The contraction represents a choice of social decision procedure. Different institutions, with different legitimacy claims, may choose different contractions. The framework makes this choice explicit and analyzable, rather than burying it in unstated assumptions.
Level 4: The Metric
The Problem of Comparison
Levels 2 and 3 implicitly assumed that the six dimensions are comparable — that 0.1 units of “benefit” is equivalent to 0.1 units of “wait-claim.” But is it? Can we trade one year of waiting time for a 10% increase in medical benefit?
The moral metric g_μν (Chapter 6, Section 6.5) answers this question. It defines the inner product between moral vectors — and thereby determines how we measure distances, compare magnitudes, and structure trade-offs across dimensions.
Three Metrics, Three Ethics
Metric 1: Euclidean (g_μν = δ_μν, all dimensions equally weighted)
All dimensions are fully commensurable. A 0.1 difference in benefit is as significant as a 0.1 difference in wait-claim. The distance from each patient’s vector to the ideal vector (1, 1, 1, 1, 1, 1) is:
d(O_A, ideal) = √(0.01 + 0.50 + 0.20 + 0.04 + 0.04 + 0.09) = √0.88 ≈ 0.94
d(O_B, ideal) = √(0.16 + 0.00 + 0.09 + 0.09 + 0.64 + 0.25) = √1.23 ≈ 1.11
d(O_C, ideal) = √(0.25 + 0.74 + 0.08 + 0.01 + 0.16 + 0.04) = √1.28 ≈ 1.13
Decision (Euclidean): Alice (closest to ideal)
Metric 2: Weighted (g_μν = diag(2, 1.5, 1, 0.5, 1, 0.5), benefit and rights count more)
The weighted distance changes the balance:
d_W(O_A, ideal) = √(2(0.01) + 1.5(0.50) + 1(0.20) + 0.5(0.04) + 1(0.04) + 0.5(0.09)) = √1.08 ≈ 1.04
d_W(O_B, ideal) = √(2(0.16) + 1.5(0.00) + 1(0.09) + 0.5(0.09) + 1(0.64) + 0.5(0.25)) = √1.22 ≈ 1.10
d_W(O_C, ideal) = √(2(0.25) + 1.5(0.74) + 1(0.08) + 0.5(0.01) + 1(0.16) + 0.5(0.04)) = √1.87 ≈ 1.37
Decision (Weighted): Alice (benefit advantage amplified)
Metric 3: Lexicographic (Dimension 2, rights/wait-claim, has absolute priority)
Compare first on wait-claim. If equal (within ε = 0.1), proceed to benefit.
Wait-claim: Bob (1.0) >> Alice (0.29) >> Carol (0.14)
Decision (Lexicographic): Bob (best on top-priority dimension)
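A sketch of the three metric computations, assuming each diagonal metric acts componentwise on the difference from the ideal vector. The `lex_winner` helper is an illustrative implementation of the lexicographic rule with tolerance ε.

```python
import math

# Obligation vectors from the Level 2 table; ideal profile is all ones.
O = {
    "Alice": (0.9, 0.29, 0.55, 0.8, 0.8, 0.7),
    "Bob":   (0.6, 1.00, 0.70, 0.7, 0.2, 0.5),
    "Carol": (0.5, 0.14, 0.72, 0.9, 0.6, 0.8),
}
ideal = (1.0,) * 6

def dist(x, g):
    """Distance to the ideal under a diagonal metric g."""
    return math.sqrt(sum(gi * (xi - yi) ** 2 for gi, xi, yi in zip(g, x, ideal)))

euclid   = (1, 1, 1, 1, 1, 1)
weighted = (2, 1.5, 1, 0.5, 1, 0.5)

for name, v in O.items():
    print(name, round(dist(v, euclid), 2), round(dist(v, weighted), 2))

def lex_winner(O, eps=0.1):
    """Lexicographic: wait-claim (index 1) first; ties within eps fall
    through to benefit (index 0). No tie occurs in this data."""
    best = max(O, key=lambda p: O[p][1])
    tied = [p for p in O if O[best][1] - O[p][1] <= eps]
    if len(tied) > 1:
        return max(tied, key=lambda p: O[p][0])
    return best

print(lex_winner(O))  # Bob
```

Changing `g` while holding `O` fixed is exactly the move the section describes: same facts, different metric, potentially a different decision.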
What the Metric Choice Commits You To
| Metric | Commitment | Philosophical Affinity |
|---|---|---|
| Euclidean | All values tradeable at par | Classical utilitarianism |
| Weighted | All values tradeable, but at different rates | Pluralistic consequentialism |
| Lexicographic | Some values have strict priority | Deontology, rights theories |
| Degenerate | Some values cannot be traded at all | Incommensurability thesis |
The metric is not a technical detail. It encodes fundamental value commitments. Two analysts who agree on all the facts — who assign exactly the same obligation vectors to each patient — can disagree about who should receive the kidney, purely because they employ different metrics. Scalar approaches bury this disagreement in the weighting scheme; the tensorial framework surfaces it.
Off-Diagonal Components
Suppose we have reason to believe that benefit and wait-claim are negatively correlated in moral significance — that gains in one come at the cost of the other. This is encoded by a negative off-diagonal metric component g₁₂ < 0. The inner product
⟨O_A, O_B⟩ = g_μν O^μ_A O^ν_B
now includes cross-terms that capture these interactions. Two patient profiles with both high benefit and high wait-claim would have their “moral distance” from the ideal reduced by the coupling — the combination is more than the sum of its parts. This is structure that no diagonal metric (no set of independent weights) can represent.
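A sketch of how an off-diagonal component changes the inner product. The coupling value -0.5 is an illustrative assumption, chosen only to make the cross-terms visible.

```python
# 6x6 metric: identity plus a single off-diagonal coupling between
# benefit (index 0) and wait-claim (index 1). The -0.5 is illustrative.
n = 6
g = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
g[0][1] = g[1][0] = -0.5

def inner(u, v, g):
    """Inner product g_mu_nu u^mu v^nu."""
    return sum(g[i][j] * u[i] * v[j]
               for i in range(len(u)) for j in range(len(v)))

O_A = (0.9, 0.29, 0.55, 0.8, 0.8, 0.7)
O_B = (0.6, 1.00, 0.70, 0.7, 0.2, 0.5)

diag_only = sum(a * b for a, b in zip(O_A, O_B))  # no coupling
with_coupling = inner(O_A, O_B, g)                # cross-terms included

print(diag_only, with_coupling)  # the coupling shifts the inner product
```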
Level 5: Stratification and Boundaries
Beyond the Smooth Interior
Levels 1–4 operated entirely within the smooth interior of the moral manifold — the region where all three patients are eligible and smooth trade-offs are possible. But organ allocation, like all serious moral problems, has boundaries where the rules change discontinuously.
Adding a Constraint: The Discrimination Boundary
Suppose we discover that Carol’s low ranking by some analysts is partly based on a protected characteristic (say, her gender). The situation crosses a stratum boundary (Chapter 5, Section 5.6): we enter the forbidden region.
C = { p ∈ M : allocation weight influenced by a protected characteristic }
Within C, the satisfaction function is set to S = -∞ by convention. The tainted allocation is not merely low-value — it is excluded. This is a hard discontinuity, not a smooth trade-off. No amount of improvement along other dimensions can compensate for a discriminatory basis.
In the language of stratification: the smooth manifold of Level 4 was a single stratum S₂ (the interior, where all three patients compete on morally relevant grounds). The discrimination boundary introduces a lower-dimensional stratum S₁ (the surface where one patient’s consideration is tainted) and the forbidden region itself is the constraint set C.
Adding a Nullifier: The Abuse Boundary
Now suppose that during the evaluation, Bob’s family threatens the transplant committee. This activates a nullifier — an absorbing stratum (Chapter 5, Definition 5.6). The Dear Abby corpus (Chapter 17) identifies threats as universal nullifiers, and the principle generalizes: coercion nullifies the moral force of the claims it supports.
Bob’s high wait-claim score (O²_B = 1.0) is not reduced or discounted — it is annulled. The evaluation tensor M_{ia} is restructured: all entries in the Bob row that depend on the family’s coerced advocacy are removed from consideration. The effective decision space collapses from a 2-simplex (three patients) to a 1-simplex (Alice vs. Carol).
This is not a smooth operation. It is a stratum transition — a discrete jump from one moral regime to another. The framework represents this as a change in the active stratum, with the transition triggered by a specific condition (the nullifier).
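One way to sketch the nullifier as an operation on the evaluation tensor. This simplifies the text's restructuring rule by dropping the nullified option's entire row, an assumption made purely for illustration.

```python
# Evaluation tensor from Level 3 (rows: patients; columns: agents).
M = {
    "Alice": [0.75, 0.90, 0.40, 0.50, 0.62],
    "Bob":   [0.68, 0.35, 0.95, 0.45, 0.70],
    "Carol": [0.55, 0.30, 0.30, 0.92, 0.58],
}

def apply_nullifier(M, nullified):
    """Stratum transition: the nullified option is removed outright,
    not down-weighted. (Simplified: the whole row is dropped.)"""
    return {p: row for p, row in M.items() if p != nullified}

M_post = apply_nullifier(M, "Bob")
print(sorted(M_post))  # the 2-simplex collapses to a 1-simplex
```

Note the operation is set-theoretic, not arithmetic: no rescaling of Bob's scores could reproduce it.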
Adding a Semantic Gate: The Emergency
Now suppose that Alice’s condition suddenly deteriorates. She is reclassified from “stable” to “urgent.” The phrase “life-threatening emergency” is a semantic gate — a trigger that flips the moral evaluation from one stratum to another.
Before the emergency: the allocation decision involves smooth trade-offs among benefit, wait-time, and fairness. After the emergency: the urgency dimension (Dimension 1, welfare/consequences) acquires a lexicographic priority that it did not have before. The metric changes discontinuously:
g_μν^(pre-emergency) → g_μν^(emergency)
where the emergency metric has g₁₁ → ∞ (or equivalently, the evaluation becomes lexicographic with benefit/urgency first). This is a phase transition in the metric — triggered by crossing a threshold, sharp rather than gradual.
What Stratification Adds
| Feature | Smooth (Levels 1–4) | Stratified (Level 5) |
|---|---|---|
| Trade-offs | Always available | Available within strata; blocked at boundaries |
| Forbidden options | Don’t exist | Hard constraints with S = -∞ |
| Nullifiers | Can’t represent | Absorbing strata that restructure the evaluation |
| Emergencies | Smooth changes in weights | Discrete phase transitions in the metric |
| Dilemmas | Close scores | Singular points where strata intersect |
Stratification is essential because moral life is not uniformly smooth. The difference between a difficult trade-off and a forbidden action is not one of degree — it is a difference in kind, a change of stratum. Any framework that represents both as points on a continuous scale (as scalar ethics must) conflates a structural distinction that moral reasoning depends upon.
Six Claims That Require Tensors
We now have the full apparatus. Let us identify specific claims that the geometric framework can express and scalar ethics literally cannot.
Claim 1: “The disagreement between physicians and families is orthogonal to the disagreement among families.”
Geometric expression. Let D_prof = M_{i,physician} - M_{i,committee} be the professional disagreement vector, and let D_fam be the corresponding spread among the family columns (componentwise, D^i_fam = Var_a[M_{ia}] over the family agents). These vectors in option-space can be orthogonal:
g_ij D^i_prof D^j_fam = 0
meaning the dimensions on which professionals disagree are independent of the dimensions on which families disagree.
Why scalars cannot say this. A scalar framework has no “direction” of disagreement — just a magnitude. The structural claim about orthogonality is inexpressible.
Claim 2: “Alice is the robust choice; Bob wins only under specific weight configurations.”
Geometric expression. Alice’s decision region in interest space (the set of covectors I for which I_μ O^μ_A > I_μ O^μ_B and I_μ O^μ_A > I_μ O^μ_C) has larger volume than Bob’s. The robustness of a choice is the measure of its decision region.
Why scalars cannot say this. A scalar gives one answer with one weight vector. Assessing robustness requires the full vector structure.
Claim 3: “Choosing Carol over Bob wrongs Bob in a way that choosing Alice over Bob does not.”
Geometric expression. The difference vector O_B - O_C has a large positive component on Dimension 2 (wait-claim): Bob has waited much longer than Carol. Bob dominates Carol on this dimension. The difference O_B - O_A is more balanced: Bob leads on wait-claim, Alice leads on benefit and care. Choosing Alice over Bob involves genuine trade-offs; choosing Carol over Bob ignores a dimension on which Bob has an unambiguous advantage.
Why scalars cannot say this. The scalar reports only that Carol has the lowest score. It cannot distinguish “legitimate trade-off loss” from “dominated on a morally relevant dimension.”
Claim 4: “A utilitarian and a Rawlsian agree on this case, but for different reasons, and would diverge if Carol’s urgency increased.”
Geometric expression. Both the utilitarian contraction (sum) and the Rawlsian contraction (min) yield Alice. But the utilitarian is sensitive to total satisfaction; the Rawlsian to minimum satisfaction. We can compute the exact threshold at which modifying Carol’s vector causes the two procedures to diverge — the boundary between their agreement region and their disagreement region in the space of possible cases.
Why scalars cannot say this. A scalar framework implements one theory at a time. Comparing theories requires the multi-agent tensor that represents both as contractions of the same underlying structure.
Claim 5: “The metric choice is doing more work than the interest choice.”
Geometric expression. Under Euclidean vs. lexicographic metrics with the same interest covector, we get different decisions (Alice vs. Bob). The metric determines how dimensions combine; the interest determines how dimensions are weighted. These are independent structural choices.
Why scalars cannot say this. Scalar frameworks conflate metric and weights. The weighted sum S = Σw_i x_i is simultaneously a weight choice and an implicit metric choice (Euclidean with weighting). The two cannot be separated.
Claim 6: “The case is genuinely hard in a way that more information cannot resolve.”
Geometric expression. The obligation vectors O_A, O_B, O_C are in general position — no dominance relations, no single dimension that settles the question. The Gram matrix g_μν O^μ_i O^ν_j shows that the vectors are linearly independent, with no clear ordering among them. The hardness is geometric: it resides in the relative positions of the vectors, not in the precision of the data.
Why scalars cannot say this. A scalar framework always delivers an answer (the highest score). It cannot represent “this case is structurally hard” — only “the scores are close.” But closeness is not the same as genuine value conflict. Only the geometric framework distinguishes them.
Synthesis
Let us trace what each level of structure added:
| Level | Mathematical Object | What It Adds |
|---|---|---|
| 1. Scalar | S ∈ ℝ | A decision — but no justification, no robustness, no transparency |
| 2. Vector | O^μ ∈ T_pM | The reasons: which dimensions generate obligations; decision regions in interest space; trade-off geometry |
| 3. Rank-2 tensor | M_{ia} = I^{(a)}_μ O^μ_i | Multiple perspectives; the landscape of agreement and disagreement; aggregation as a visible choice |
| 4. Metric | g_μν | The trade-off structure: commensurability, incommensurability, priority; explicit value commitments |
| 5. Stratification | {S_α}, ∂C, nullifiers | Boundaries, forbidden regions, phase transitions; the distinction between trade-offs and prohibitions |
Each level is necessary. The claims expressible at Level 5 cannot be stated in Level 1 language. The geometric framework is not a complication — it is the minimum structure needed to say what we want to say about hard cases.
Coda: The Same Case, Revisited Forever
We could continue enriching this case indefinitely:
Add time. How do obligations change as patients wait? Transport the obligation vector O(t₀) along the path of increasing wait-time and ask how it transforms. If the result depends on what other events occur along the way (a new diagnosis, a family change), the manifold has curvature. This is the domain of Chapter 10 (parallel transport and holonomy).
Add uncertainty. What if medical benefit estimates carry error bars? The uncertainty tensor Σ^{μν} (Chapter 6, Section 6.6) captures the shape of this uncertainty — its principal directions, its alignment with the dimensions that matter most. Risk-averse allocation policies correspond to contractions that penalize high Σ along decision-relevant directions.
Add dynamics. The transplant committee’s metric may evolve over time as social priorities shift. The connection on the moral manifold (Chapter 10) specifies how the metric is transported from one era to another, and the curvature measures whether this transport is path-dependent.
Add symmetry. The requirement that the allocation be invariant under morally irrelevant re-descriptions (swapping Alice’s and Bob’s names, translating the case into another language) is the Bond Invariance Principle — a gauge symmetry whose conservation law (Noether’s theorem, Chapter 12) implies that harm is conserved across representations.
Add quantum structure. What if the committee’s deliberation involves superposition — simultaneously entertaining two incompatible framings of the case? The framework of Chapter 13 (quantum normative dynamics) models this as a superposition of moral states that collapses, upon deliberation, to a definite verdict.
Each extension adds mathematical structure — and each structure lets us express claims we could not express before.
❖
One kidney. Three patients. Five levels of structure.
The same case, seen more clearly each time — not by adding information about the patients, but by adding precision to the language of evaluation.
The mathematics is not imposed on the ethics. It is extracted from it.
We use geometry because the claims we want to make about moral situations require geometric structure to express. The kidney case is one case. But it contains, in miniature, the geometry of moral reasoning itself.