Part VI: Domain Applications
The preceding five parts developed the mathematical framework of Geometric Ethics: the moral manifold, tensor hierarchy, dynamics, symmetry, conservation laws, and implementation architecture. This part demonstrates that the framework is not confined to abstract ethics or AI alignment. The same mathematical structures — pathfinding on stratified manifolds, gauge invariance, Noether conservation, tensorial contraction — apply directly to established domains with their own formal traditions: economics, clinical medicine, law, finance, theology, environmental policy, artificial intelligence, bioethics, and military ethics.
Each chapter in this part takes a domain that has struggled with the limitations of scalar models, shows how the geometric framework resolves specific longstanding puzzles, and identifies falsifiable predictions that distinguish the geometric approach from existing domain-specific theories. The chapters can be read independently, but they share a common architecture: domain-specific instantiation of the moral manifold, identification of the relevant dimensions, construction of the domain geodesic, derivation of results inaccessible from scalar models, worked examples applying the framework to real-world cases, and falsifiable predictions. The nine domains span the full range of human moral decision-making, from individual clinical encounters to global climate policy, from ancient just war doctrine to cutting-edge AI alignment.
Methodological Notes for Domain Applications
Before proceeding, three methodological concerns that pervade all nine domain chapters deserve explicit treatment: the grounding problem, computational tractability, and the calibration of the covariance matrix.
The Grounding Tensor and the Is–Ought Gap. Each domain chapter constructs a decision complex whose vertices carry nine-dimensional attribute vectors. These vectors are computed from observable data by a grounding function Ψ that maps physical observables (pixels, text, sensor readings, market data, medical records) to moral-dimensional scores. The philosophical challenge is immediate: Ψ is where the is–ought gap lives. How does one derive a d_7 (virtue/identity) score from raw data? The framework does not claim to have dissolved the is–ought gap. Rather, it has localized it. In conventional AI systems and decision-support tools, the mapping from observation to moral evaluation is implicit, distributed across training data, loss functions, and architectural choices — invisible, unauditable, and ungovernable. The geometric framework makes Ψ an explicit, inspectable, version-controlled software layer with defined inputs, outputs, and calibration procedures. Chapter 17 demonstrated that Ψ can be empirically calibrated: linear probes trained on the cross-lingual validation corpus achieved F_1 = 0.74–0.91 across the nine dimensions, with independent replication (Thiele, 2026). The grounding function is imperfect — all measurement instruments are — but it is explicit, testable, and improvable. Making the is–ought gap a governable engineering interface rather than an invisible black-box assumption is itself a substantial advance.
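The idea of Ψ as an explicit, inspectable, version-controlled layer can be illustrated with a minimal sketch. The class name, feature dimensionality, and probe weights below are illustrative placeholders, not the calibrated probes of Chapter 17; the point is only the interface: observable features in, nine bounded dimensional scores out, with an auditable version tag.

```python
import numpy as np

N_DIMS = 9        # d_1 .. d_9
N_FEATURES = 128  # size of the observable feature vector (assumed)

class GroundingFunction:
    """Illustrative Psi: maps observable features to nine
    moral-dimensional scores in [0, 1] via per-dimension linear probes."""

    def __init__(self, probe_weights, probe_bias, version="psi-0.1"):
        self.W = probe_weights   # shape (N_DIMS, N_FEATURES)
        self.b = probe_bias      # shape (N_DIMS,)
        self.version = version   # auditable calibration version tag

    def __call__(self, features):
        # One linear probe per dimension, squashed to [0, 1].
        logits = self.W @ features + self.b
        return 1.0 / (1.0 + np.exp(-logits))

# Placeholder weights stand in for calibrated probe parameters.
rng = np.random.default_rng(0)
psi = GroundingFunction(0.1 * rng.normal(size=(N_DIMS, N_FEATURES)),
                        np.zeros(N_DIMS))
scores = psi(rng.normal(size=N_FEATURES))  # nine-dimensional attribute vector
```

Because Ψ is a named object with fixed inputs, outputs, and a version string, recalibration becomes a reviewable change to one layer rather than a diffuse retraining of an entire system.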
Computational Tractability. Exact geodesic computation on the full nine-dimensional moral manifold is computationally intractable (Theorem 11.2). The domain chapters rely on two sources of tractability. First, A* search with admissible heuristics provides polynomial-time approximate solutions with provably bounded suboptimality (Chapter 11). Domain-specific heuristics — moral rules in ethics, clinical guidelines in medicine, legal doctrines in law, trading rules in finance, rules of engagement (ROE) in military contexts — are the domain-specific instantiations of h(n) that make real-time pathfinding feasible. Second, in practice, not all nine dimensions are equally active in every decision context. A routine financial transaction may activate primarily d_1 (return) and d_2 (contractual obligation) with minimal activation of d_7 (identity) or d_9 (epistemic status). This dimensional sparsity reduces the effective dimensionality of the computation. For AI implementation, the DEME architecture (Chapter 19) employs Tucker decomposition and tensor-train formats to compress the rank-6 tensor operations, achieving sub-second inference on standard hardware for typical decision contexts. High-curvature regions (morally fraught decisions where many dimensions are active) require more computation — which is itself a desirable property: the system spends more time on hard moral decisions, mirroring human moral deliberation.
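The role of an admissible heuristic can be made concrete with a minimal A* sketch. The graph, edge costs, and heuristic values below are illustrative stand-ins for segment costs along a discretized geodesic, not the framework's actual decision complex; what matters is that h(n) never overestimates the true remaining cost, which is what bounds the suboptimality of the returned path.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbor, edge_cost); h: node -> admissible
    lower bound on cost-to-goal. Returns (cost, path) or (inf, [])."""
    frontier = [(h(start), 0.0, start, [start])]
    settled = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if settled.get(node, float("inf")) <= g:
            continue
        settled[node] = g
        for nbr, w in graph.get(node, []):
            heapq.heappush(frontier, (g + w + h(nbr), g + w, nbr, path + [nbr]))
    return float("inf"), []

# Toy decision graph with placeholder segment costs.
graph = {
    "s": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 1.0), ("g", 5.0)],
    "b": [("g", 1.0)],
}
# Heuristic values chosen to underestimate true cost-to-goal (admissible).
heuristic = {"s": 2.0, "a": 2.0, "b": 1.0, "g": 0.0}
cost, path = a_star(graph, lambda n: heuristic.get(n, 0.0), "s", "g")
# finds s -> a -> b -> g at cost 3.0, avoiding the costly direct edges
```

In a domain chapter, the dictionary lookup would be replaced by the domain's heuristic, for example a clinical guideline's estimate of residual moral cost, while the search loop itself is unchanged.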
Calibration of the Covariance Matrix. Every domain chapter relies on a covariance matrix Σ that encodes the statistical relationships among the nine moral dimensions within that domain. Estimating a 9×9 positive-definite matrix (up to 45 free parameters in the symmetric case) from behavioral data raises legitimate concerns about identifiability, particularly for latent dimensions such as d_7 (virtue/identity) and d_9 (epistemic status) that are not directly observable. The framework addresses this challenge through three complementary strategies. First, the dimensional scores are not estimated from raw behavior but from the calibrated probes of Chapter 17, which map observable text and behavioral indicators to dimensional scores with known accuracy. The covariance matrix is then estimated from probe-scored data, not from unstructured observables. Second, structured experimental designs — discrete choice experiments, factorial vignette studies, and conjoint analyses — can orthogonalize the dimensional contributions, enabling identification of the covariance parameters via structural equation modeling (SEM) or maximum likelihood estimation (MLE). Third, the framework generates falsifiable predictions (six per domain chapter) that provide external validation: if the estimated Σ produces predictions that fail empirically, the matrix is miscalibrated and must be re-estimated. The covariance matrix is not assumed; it is empirically estimated, cross-validated, and falsifiably constrained.
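The estimation pipeline described above can be sketched in a few lines. The data here are synthetic placeholders for probe-scored decisions, and the shrinkage toward the diagonal is one standard way (among several) to keep a 9×9 estimate well-conditioned and positive definite from finite samples; the shrinkage intensity is an assumed value, not a calibrated one.

```python
import numpy as np

N_DIMS, N_OBS = 9, 500
rng = np.random.default_rng(1)

# Placeholder for probe-scored decision data: N_OBS decisions, each scored
# on the nine dimensions by the calibrated probes.
probe_scores = rng.normal(size=(N_OBS, N_DIMS))

sample_cov = np.cov(probe_scores, rowvar=False)  # 9x9 sample estimate
target = np.diag(np.diag(sample_cov))            # diagonal shrinkage target
alpha = 0.1                                      # shrinkage intensity (assumed)
sigma = (1.0 - alpha) * sample_cov + alpha * target

# Sanity checks: symmetric, positive definite, 45 free parameters.
assert np.allclose(sigma, sigma.T)
assert np.all(np.linalg.eigvalsh(sigma) > 0.0)
n_free = N_DIMS * (N_DIMS + 1) // 2              # = 45, as in the text
```

The falsification loop then operates on sigma itself: if predictions derived from the estimated matrix fail empirically, the matrix, not the framework's geometry, is the first component to be re-estimated.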