Chapter 16: Open Questions
“In mathematics you don’t understand things. You just get used to them.” — John von Neumann (attributed)
RUNNING EXAMPLE — JUDGE RIVERA’S DOCKET
It is 2045. Judge Elena Rivera, now seventy-three years old, sits in her chambers for the last time. Tomorrow is her retirement ceremony. On her desk, as always, are cases — but the cases look different from the ones she handled twenty years ago.
The federal courts adopted the Geometric Legal Analysis Framework (GLAF) in 2038, after a decade of pilot programs and academic development. Every federal judge now has access to a JurisGraph successor — Version 7.2, colloquially called “the manifold” — that constructs the judicial complex from the case database, scores attribute vectors, computes Mahalanobis distances, detects Wilson loops, and flags potential gauge violations. The system’s Legal Bond Index has been reduced to 0.02 through years of calibration, debiasing, and adversarial testing.
Rivera was one of the first judges to use the tool. She watched it evolve from the flawed prototype she tested in 2026 into a system that she trusts — not to make decisions for her, but to make the structure of her decisions visible. When she writes an opinion, she can see where her analysis sits on the eight-dimensional manifold, how far it is from the nearest precedent, and whether her reasoning creates any non-trivial Wilson loops in the citation network.
But Rivera also knows what the tool cannot do. Five problems have resisted solution for the entire twenty years of geometric jurisprudence. They are the open questions of the field — the places where the mathematics is incomplete, the data is insufficient, or the legal system’s structure exceeds the framework’s current capacity. These five problems are the inheritance she leaves to the next generation of judges, lawyers, and mathematician-jurists.
This chapter surveys these open problems. Each one is a frontier — a place where the map ends and the territory continues.
The Five Open Problems
The geometric framework developed in this book is, by design, incomplete. The judicial complex is a model of legal reasoning, not a replica. The eight dimensions are an approximation, not a derivation. The Mahalanobis metric is calibrated, not proved. And the topological theorems are conditional on the model’s assumptions, not absolute truths about the legal system.
This incompleteness is a feature, not a bug. A framework that claimed completeness would be either trivial or false. The geometric framework claims only that legal reasoning has geometric structure and that the structure is partially capturable by the tools of differential geometry, algebraic topology, and gauge theory. The five open problems identified below are the specific points where “partially” matters most — where the framework’s current tools are insufficient and new mathematics, new data, or new legal theory is required.
Open Problem 1: Metric Calibration
The Problem
The legal metric — the Mahalanobis distance d_M(c_i, c_j) = \sqrt{\Delta \mathbf{v}^T \Sigma^{-1} \Delta \mathbf{v}} — is the foundation of the entire framework. It determines which cases are “close” (strong precedent), which paths are “short” (compelling arguments), and which boundaries are “near” (hard cases). Every computation in the framework depends on the metric.
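As a concrete anchor, the metric can be sketched in a few lines of code. Everything below is synthetic (random stand-ins for scored case vectors and the calibration corpus); the point is only the shape of the computation: estimate \Sigma from scored cases, then evaluate the quadratic form per query.

```python
# Minimal sketch of the legal metric: Mahalanobis distance between two
# 8-dimensional attribute vectors under a covariance matrix estimated from
# a corpus of scored cases. All data here is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(200, 8))       # stand-in for scored case vectors
sigma = np.cov(corpus, rowvar=False)     # calibrated covariance estimate

def d_M(v_i, v_j, cov):
    """d_M = sqrt(delta^T cov^{-1} delta); solve() avoids forming cov^{-1}."""
    delta = v_i - v_j
    return float(np.sqrt(delta @ np.linalg.solve(cov, delta)))

c_i, c_j = rng.normal(size=8), rng.normal(size=8)
print(round(d_M(c_i, c_j, sigma), 3))
```

Every calibration question below (domain, window, labeling) enters through the single argument `cov`.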
But the metric is calibrated, not derived. The covariance matrix \Sigma is estimated from data — from the scored attribute vectors of decided cases. And the attribute vectors are themselves estimated from natural language by linear probes trained on expert-labeled data. At every stage, the calibration introduces uncertainty: measurement error in the dimension scores, sampling error in the covariance estimation, and labeling error in the expert judgments.
The open problem is: How should the metric be calibrated, and how sensitive are the framework’s conclusions to calibration choices?
What We Know
The Algorithmic Jurisprudence manuscript demonstrated that a Mahalanobis metric calibrated on a corpus of 10,000 federal court opinions produces retrieval results that correlate well with expert judgments (Spearman’s \rho = 0.72 between Mahalanobis distances and expert similarity ratings). This is encouraging but not conclusive. The correlation is computed on a specific corpus, in a specific legal domain (federal civil rights cases), using a specific embedding model. Whether the calibration transfers to other domains (commercial law, criminal law, family law), other jurisdictions (state courts, international tribunals), or other legal traditions (civil law, Islamic law, customary law) is unknown.
Several specific calibration questions remain open:
Domain dependence. Should the covariance matrix be estimated globally (one \Sigma for all of law) or locally (a separate \Sigma_D for each legal domain D)? The choice matters enormously. A global \Sigma treats the correlation between statutory authority and remedial scope as a universal property of law; a local \Sigma allows this correlation to differ between commercial law and constitutional law. The correct answer is almost certainly local, but the granularity of the domains (how finely the law should be partitioned) is an open empirical question.
Temporal stability. The covariance matrix changes over time as the law evolves. The correlation between precedential constraint (d_6) and entitlement structure (d_1) was probably stronger in 1850, when stare decisis was near-absolute, than in 2025, when the Supreme Court has overruled precedent at an accelerating rate. Should the metric be re-estimated annually? Every decade? After every landmark decision? The framework provides no guidance on the temporal dynamics of calibration.
Cross-rater reliability. The dimension scores depend on expert labeling, and experts disagree. Two legal scholars may assign different entitlement scores to the same case, reflecting genuine differences in legal interpretation rather than measurement error. The framework treats inter-rater disagreement as noise to be averaged out, but it may be signal — evidence that the “true” dimension score is indeterminate, and the metric is inherently uncertain.
Sensitivity analysis. How much do the framework’s conclusions change when the metric changes? If a small perturbation to \Sigma changes the nearest precedent, flips the materiality assessment of a breach, or creates a spurious Wilson loop, the framework’s conclusions are fragile and the calibration must be more precise. If the conclusions are robust to small perturbations, the calibration need only be approximate. A systematic sensitivity analysis — computing the framework’s outputs under perturbations to \Sigma and measuring the resulting variation — has not been performed.
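A minimal version of such a sensitivity probe can be sketched directly: perturb \Sigma and count how often the nearest precedent changes. The data, the perturbation scale, and the retrieval rule below are all illustrative assumptions, not a calibrated protocol.

```python
# Toy sensitivity probe: perturb the covariance matrix and check whether the
# nearest precedent (by Mahalanobis distance) flips. All data is synthetic;
# the point is the shape of the analysis, not the numbers.
import numpy as np

rng = np.random.default_rng(1)
cases = rng.normal(size=(50, 8))     # synthetic scored precedents
query = rng.normal(size=8)           # the case at bar
sigma = np.cov(rng.normal(size=(500, 8)), rowvar=False)

def nearest(cov):
    """Index of the precedent minimizing squared Mahalanobis distance."""
    deltas = cases - query
    d2 = np.einsum("ij,jk,ik->i", deltas, np.linalg.inv(cov), deltas)
    return int(np.argmin(d2))

baseline = nearest(sigma)
flips = 0
for _ in range(200):
    noise = rng.normal(scale=0.05, size=(8, 8))
    perturbed = sigma + (noise + noise.T) / 2   # keep the perturbation symmetric
    flips += nearest(perturbed) != baseline
print(f"nearest precedent changed in {flips}/200 perturbations")
```

A high flip rate under small perturbations would be evidence of fragility; a systematic study would vary the perturbation scale and track all framework outputs, not just retrieval.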
What Would Solve It
A complete solution would require:
- A calibration protocol specifying the corpus size, domain structure, temporal window, and labeling procedure required for reliable metric estimation.
- A sensitivity theorem bounding the change in framework outputs as a function of the change in \Sigma: |\Delta \text{output}| \leq L \cdot \|\Delta \Sigma\| for some Lipschitz constant L.
- An uncertainty quantification procedure that propagates calibration uncertainty through the framework’s computations, producing confidence intervals rather than point estimates for distances, breach magnitudes, and Wilson loop holonomies.
None of these currently exist. The metric calibration problem is the most practically urgent of the five open problems, because every deployment of the framework — every time a court uses JurisGraph or its successors — depends on the metric being adequately calibrated.
RUNNING EXAMPLE — RIVERA IN 2045: THE CALIBRATION QUESTION
Rivera recalls the early days of geometric jurisprudence. In 2028, a district judge in the Southern District of New York used the manifold to assess a breach of contract case. The system identified a material breach with breach magnitude \beta = 0.83. The opposing party’s expert challenged the metric calibration, arguing that the covariance matrix used was estimated from a corpus of technology contracts and was inappropriate for the real estate contract at issue. Using a real-estate-specific covariance matrix, the expert showed that the breach magnitude dropped to \beta = 0.41 — below the materiality threshold.
The case went to the Second Circuit, which held that the choice of covariance matrix is a question of law, not a question of fact, and that courts must specify and justify the calibration used. The Second Circuit’s opinion — Greenfield Properties v. Apex Development, 2029 — became the foundational case on metric calibration, but it did not resolve the deeper question: what makes a calibration “correct”?
Twenty years later, the question remains open. Different circuits use different calibration protocols. The Ninth Circuit estimates \Sigma from the most recent 10,000 cases in the relevant domain. The Second Circuit uses a rolling 20-year window. The Fifth Circuit uses a fixed national corpus. The resulting metrics are similar but not identical, and Rivera has seen cases where the choice of calibration changes the outcome.
She writes in her retirement notes: “We have built the manifold, but we have not fully understood its metric. The next generation must.”
Open Problem 2: Cross-Jurisdictional Invariance
The Problem
Chapter 14 established that each national legal system is a separate manifold with its own metric, gauge group, and constitutional subcomplex. Treaties connect the manifolds via connection maps. But the framework assumes that the eight legal dimensions — entitlement, factual nexus, procedural posture, statutory authority, constitutional conformity, precedential constraint, remedial scope, public interest — are universal: the same eight dimensions apply to every legal system, though the calibration differs.
Is this assumption correct?
The open problem is: Are the eight legal dimensions universal across legal traditions, or do different traditions require different dimensional structures?
The Universality Question
The eight dimensions were identified empirically from the analysis of common-law and civil-law traditions. They appear to cover the major axes of legal analysis in these traditions. But the world’s legal traditions are far more diverse:
Islamic law (Shari’a) has dimensions that do not map neatly onto the eight: the distinction between fard (obligatory), mustahabb (recommended), mubah (permissible), makruh (discouraged), and haram (forbidden) creates a five-level normative scale on the entitlement dimension, not a binary right/no-right. The maqasid al-shari’a (objectives of Islamic law — preservation of religion, life, intellect, lineage, and property) define a different public interest dimension than the common-law version.
Customary law in many African and indigenous traditions operates without the procedural posture dimension (d_3) as it is understood in Western law. Standing, jurisdiction, and exhaustion of remedies are not gatekeeping functions in systems where disputes are resolved by community elders based on relational proximity rather than procedural authority.
Chinese law combines elements of civil law (statutory codes), socialist legal theory (law as an instrument of state policy), and traditional Confucian jurisprudence (law as a last resort, with preference for mediation and social harmony). The public interest dimension (d_8) is weighted differently — social stability and collective welfare may dominate individual rights in ways that the current framework does not capture.
If the eight dimensions are not universal, the multi-manifold framework of Chapter 14 requires modification. Instead of different calibrations on the same eight dimensions, different traditions would have different numbers and types of dimensions. The treaty connection maps would need to bridge not just different calibrations but different dimensional structures — mapping from an eight-dimensional common-law manifold to a potentially different-dimensional Islamic law manifold.
What Would Solve It
A solution would require:
Cross-cultural empirical study. Score a corpus of cases from diverse legal traditions (common law, civil law, Islamic law, customary law, Chinese law) on the eight dimensions and test whether the dimensions capture the structure of legal analysis in each tradition. If they do not, identify the additional or alternative dimensions required.
Dimensional reduction analysis. For each legal tradition, perform principal component analysis on the scored cases to determine the empirical dimensionality of legal analysis. If some traditions have fewer effective dimensions (because some dimensions are redundant in that tradition) or more (because the tradition distinguishes axes that the common-law framework merges), the dimensional structure must be adapted.
Inter-manifold mapping theory. Develop the mathematics for connection maps between manifolds of different dimensionality. This requires a generalization of the fiber bundle model from Chapter 14, where the fibers over different points in the transaction space have different dimensions.
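The dimensional reduction analysis above can be sketched concretely. The snippet below builds synthetic "scored cases" with three genuine axes embedded in eight nominal dimensions and recovers the effective dimensionality from the singular-value spectrum; the 90% variance threshold is an illustrative choice, not a principled cutoff.

```python
# Sketch of the dimensional reduction analysis: the effective dimensionality
# of a tradition's scored cases, taken as the number of principal components
# needed to explain 90% of variance. Data is synthetic by construction.
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 3))        # 3 genuine axes of variation
noise = 0.05 * rng.normal(size=(300, 5))  # 5 near-degenerate axes
scores = np.hstack([latent, noise])       # 8 nominal dimensions

def effective_dim(X, threshold=0.90):
    """Number of principal components explaining `threshold` of variance."""
    X = X - X.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    var = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var), threshold) + 1)

print(effective_dim(scores))  # → 3
```

Run per tradition, a result below eight would suggest redundant dimensions in that tradition; a spectrum with no clear elbow would suggest missing ones.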
This problem is both mathematically deep (it requires extending the framework to variable-dimensional manifolds) and empirically demanding (it requires extensive cross-cultural legal analysis). It is unlikely to be solved without sustained collaboration between mathematicians, comparative law scholars, and legal anthropologists.
RUNNING EXAMPLE — RIVERA IN 2045: THE UNIVERSALITY DEBATE
The cross-jurisdictional invariance problem became politically salient in 2035, when the International Court of Justice adopted a geometric framework for analyzing treaty disputes. The ICJ’s version used the eight-dimensional model, calibrated on international case law.
China and several African nations objected. The Chinese delegation argued that the framework’s constitutional conformity dimension (d_5) was calibrated to Western-style constitutional review and did not capture the role of the National People’s Congress in Chinese constitutional interpretation. The African Union’s legal committee argued that the procedural posture dimension (d_3) privileged adversarial procedures over customary mediation.
A compromise was reached in 2037: the ICJ adopted a flexible-dimension model that allows parties to propose additional or alternative dimensions for cases involving non-Western legal traditions. But the compromise created its own problems — how do you compute Mahalanobis distances when the two parties are operating in different-dimensional spaces? The question remains unresolved.
Rivera followed the debate from her chambers. She had handled enough international cases to know that the eight dimensions were not truly universal — but also that they captured something real. “The dimensions are like Newton’s laws,” she told her clerks. “Wrong in detail, but right enough to build bridges. The question is whether we need Einstein-level corrections or just better calibration.”
Open Problem 3: Computability of the Fundamental Group
The Problem
The topological constitutionality theorem (Chapter 7) uses path homology to detect constitutional violations. Path homology is computable in polynomial time for finite directed graphs — this is one of the framework’s most attractive features. But path homology is not the only topological invariant of the judicial complex. The fundamental group \pi_1(\mathcal{K}) — the group of all directed loops in the complex, up to homotopy — contains strictly more information than the first path homology group \widetilde{H}_1^{\text{path}}(\mathcal{K}).
The relationship between the two is well known in algebraic topology: the first homology group is the abelianization of the fundamental group. \widetilde{H}_1^{\text{path}} detects commutative topological features — it can count the number of independent loops but cannot distinguish the order in which loops are traversed. \pi_1 detects non-commutative features — it can distinguish loops that, when composed in different orders, produce different topological outcomes.
The open problem is: Is the fundamental group of the judicial complex computable, and does it detect legal inconsistencies that path homology misses?
Why It Matters
Legal reasoning is non-commutative. The order in which legal arguments are applied can change the outcome. Applying the burden-shifting framework of McDonnell Douglas before addressing standing produces a different analysis than addressing standing first. This non-commutativity is captured by the semi-direct product structure of the Hohfeldian gauge group (Chapter 5) and by the non-commutativity of legal operations (demonstrated empirically in the Geometric Ethics data, with 16,798 commutator measurements).
If legal reasoning is non-commutative, then the fundamental group — which captures non-commutative topological structure — may detect legal inconsistencies that the abelian path homology misses. There could exist legal contradictions that are invisible to \widetilde{H}_1^{\text{path}} but visible to \pi_1: two directed loops that, when composed in one order, are contractible (consistent), but when composed in the other order, are non-contractible (inconsistent).
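A standard example from algebraic topology makes the gap concrete (this is textbook material, not a claim about any particular judicial complex). For the figure-eight space, two loops a and b joined at a point:

```latex
\pi_1(S^1 \vee S^1) \;=\; F_2 \;=\; \langle a, b \rangle,
\qquad
H_1(S^1 \vee S^1) \;=\; \mathbb{Z}^2 \;=\; F_2 / [F_2, F_2].
```

The commutator loop aba^{-1}b^{-1} (traverse a, then b, then each in reverse) maps to zero in H_1 but is non-trivial in \pi_1: homology reports no net cycle, while the fundamental group records that composing the two loops in different orders leaves a residue. A non-commutative legal contradiction would be precisely a judicial analogue of this commutator.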
The Computability Challenge
For general topological spaces, the fundamental group is undecidable — there is no algorithm that, given a finite presentation of a topological space, determines whether the fundamental group is trivial. This is a consequence of the Adyan-Rabin theorem, which reduces the word problem for finitely presented groups (known to be undecidable) to the triviality problem for fundamental groups.
However, the judicial complex is not a general topological space. It is a finite directed graph with additional structure: edge weights, Hohfeldian labels, hierarchical ordering, and temporal constraints. The question is whether these constraints make the fundamental group computable in the legal setting.
Several sub-problems must be resolved:
The directed fundamental group. The standard fundamental group \pi_1 is defined for topological spaces using continuous paths. For directed graphs, the appropriate analogue is the directed fundamental group (or fundamental groupoid) — the group of directed loops up to directed homotopy. This group has been studied (Grandis, 2003; Goubault, 2000) but is less well-understood than the standard fundamental group.
Finite presentation. The judicial complex, being finite, has a finitely presented fundamental group. The word problem for finitely presented groups is undecidable in general, but many specific classes of finitely presented groups (free groups, abelian groups, hyperbolic groups) have decidable word problems. The question is whether the fundamental groups arising from judicial complexes fall into a decidable class.
Legal relevance. Even if the fundamental group is computable, does it detect legal inconsistencies that matter? The answer depends on whether non-commutative legal contradictions — inconsistencies that depend on the order of argument composition — are a real phenomenon or a theoretical curiosity. If they are real, the fundamental group is legally essential. If they are not, the path homology (which is computable and already implemented) is sufficient.
What Would Solve It
- A classification theorem for the fundamental groups of judicial complexes, showing that they belong to a decidable class (e.g., residually finite groups, automatic groups, or CAT(0) groups).
- An example of a non-commutative legal contradiction — a pair of legal arguments that are consistent when composed in one order but inconsistent when composed in the other — demonstrating that the fundamental group detects a genuine legal phenomenon.
- An algorithm for computing the directed fundamental group of a finite weighted directed graph, with complexity bounds suitable for judicial complexes of realistic size.
RUNNING EXAMPLE — RIVERA IN 2045: THE FUNDAMENTAL GROUP
The fundamental group problem was Rivera’s personal mathematical obsession. In 2033, she attended a conference at the Institute for Advanced Study where a topologist presented a proof that the fundamental groups of citation networks are residually finite — they can be approximated by finite groups to arbitrary precision. This result implied that the word problem is decidable for these groups, and therefore that the fundamental group is computable in principle.
But “in principle” is not “in practice.” The computation requires exponential time in the worst case, and even the best implementations could handle judicial complexes of only a few thousand vertices — far smaller than the millions of vertices in the real federal case database.
Rivera had seen one case where she believed a non-commutative contradiction existed. A circuit split in the Seventh Circuit involved two lines of precedent on qualified immunity. Applying the precedent on excessive force before the precedent on reasonable suspicion produced one result; applying them in the reverse order produced another. The path homology detected a non-trivial cycle, but it could not distinguish the two orderings — it saw only that a loop existed, not that the loop’s behavior depended on the direction of traversal.
She reported this case to the IAS group, and it became Example 4.2 in their 2034 paper. But no court has yet used the fundamental group in a decision. The tool exists in theory; it waits for the computation to become practical.
Open Problem 4: Dynamic Complexes
The Problem
The judicial complex \mathcal{K} as defined in Chapter 3 is static — it is a fixed structure computed from the current state of the case law. But the law changes. New cases are decided, old precedents are overruled, statutes are enacted and repealed, and constitutional amendments alter the topological constraints. The judicial complex at time t is different from the complex at time t + \Delta t.
The open problem is: How should the framework model the dynamics of the judicial complex — the evolution of the manifold over time?
Why Static Models Are Insufficient
The static model works for a snapshot analysis: given the current state of the law, compute the nearest precedent, detect Wilson loops, and evaluate gauge invariance. But many of the most important legal questions are dynamic:
Overruling. When the Supreme Court overrules a precedent, the judicial complex changes discontinuously — edges are removed, weights are reassigned, and the topology may change. The static model can compare the pre-overruling and post-overruling complexes, but it cannot model the process of overruling — the gradual erosion of precedent through distinguishing, questioning, and narrowing that precedes the formal overruling.
Legislative change. A new statute adds vertices and edges to the complex. An amendment modifies existing edges. A repeal removes vertices and edges. The static model can compare the pre-statute and post-statute complexes, but it cannot model the legislative process — the interaction between proposed statutes, committee amendments, and final enactment.
Doctrinal evolution. Legal doctrine evolves through a sequence of cases, each modifying the complex incrementally. The McDonnell Douglas burden-shifting framework was not established in a single case — it emerged through a sequence of cases over decades, each refining the framework’s elements. The static model cannot capture this evolutionary process.
Approaches to Dynamics
Several mathematical frameworks are available for modeling dynamic complexes:
Persistent homology. The theory of persistent homology (Edelsbrunner, Letscher, and Zomorodian, 2002) tracks topological features as a parameter changes. Applied to the judicial complex, the parameter could be time, producing a filtration \mathcal{K}_{t_1} \subset \mathcal{K}_{t_2} \subset \cdots \subset \mathcal{K}_{t_n} where each inclusion adds the cases decided between t_i and t_{i+1}. The persistent homology of this filtration tracks which topological features (loops, holes, connected components) are born, persist, and die as the complex grows.
The challenge is that the judicial complex does not only grow — it also shrinks (when precedents are overruled and edges are removed). Standard persistent homology handles monotone filtrations (the complex only grows); the legal setting requires zigzag persistence (the complex can grow and shrink), which is mathematically more complex and computationally more expensive.
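A toy version of the growth-and-shrinkage bookkeeping can be sketched without full zigzag machinery: track the cycle rank \beta_1 = E - V + C of the undirected citation graph at each time step. The cases and the overruling event below are hypothetical, and snapshot counts are a much weaker invariant than zigzag persistence, which also matches features across snapshots.

```python
# Toy sketch: track the cycle rank (first Betti number of the underlying
# undirected graph, beta_1 = E - V + C) of a citation network as cases are
# added and a precedent is overruled. Case names are hypothetical, and this
# is a snapshot-by-snapshot count, not true zigzag persistence.

def betti_1(edges):
    """beta_1 = E - V + C for the undirected graph spanned by `edges`."""
    vertices = {v for e in edges for v in e}
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    components = len(vertices)
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return len(edges) - len(vertices) + components

# A filtration that grows, then shrinks when "Case B" is overruled.
timeline = [
    ("add",    ("A", "B")),   # B cites A
    ("add",    ("B", "C")),   # C cites B
    ("add",    ("A", "C")),   # C also cites A: closes a loop
    ("remove", ("A", "B")),   # B overruled: its citation edge is removed
]

edges, history = set(), []
for op, edge in timeline:
    if op == "add":
        edges.add(edge)
    else:
        edges.discard(edge)
    history.append(betti_1(edges))

print(history)  # [0, 0, 1, 0]: a loop is born at step 3 and dies at step 4
```

Zigzag persistence would additionally certify that the loop dying at step 4 is the same loop born at step 3, which is exactly the matching problem the snapshot counts ignore.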
Sheaves on the time axis. A sheaf assigns data (the judicial complex) to each open set of a topological space (intervals of time) and specifies how the data on overlapping intervals are related. A sheaf model of the dynamic judicial complex would assign a complex \mathcal{K}_I to each time interval I and specify restriction maps between complexes on nested intervals.
Stochastic dynamics. If the law’s evolution is modeled as a stochastic process (new cases arrive randomly, precedent erosion follows a decay function, legislative changes are Poisson events), the dynamic complex becomes a random simplicial complex, an object studied in the rapidly growing field of stochastic topology. This approach would allow probabilistic statements about the complex’s future evolution: “With probability p, a Wilson loop in the current complex will be resolved within n years.”
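A minimal illustration of the stochastic approach, under the crude assumption that loop-resolving decisions arrive as a Poisson process with an invented rate parameter (nothing here is estimated from real case law):

```python
# Toy Monte Carlo under an assumed model: loop-resolving decisions arrive as
# a Poisson process with rate `lam` per year. The rate 0.2/year is purely
# illustrative. Estimates P(loop resolved within `years`).
import random

random.seed(0)

def prob_resolved(lam, years, trials=100_000):
    """Fraction of trials where the first resolution event falls within `years`."""
    hits = sum(random.expovariate(lam) <= years for _ in range(trials))
    return hits / trials

est = prob_resolved(0.2, 5)
print(f"P(resolved within 5 years) ≈ {est:.3f}")  # exact value: 1 - e^(-1) ≈ 0.632
```

A serious version would replace the constant rate with erosion-dependent hazards and couple the event stream to the complex's topology, which is exactly the missing prediction framework.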
What Would Solve It
- A dynamic complex formalism that models the judicial complex as a time-varying object, with operations for case addition, edge modification, precedent overruling, and legislative change.
- A persistence theory for directed complexes that handles both growth and shrinkage — zigzag persistence for path homology on directed graphs.
- A prediction framework that uses the dynamic complex’s history to forecast its future evolution: which Wilson loops are likely to be resolved, which precedents are likely to be overruled, and where new topological features are likely to emerge.
RUNNING EXAMPLE — RIVERA IN 2045: THE DYNAMIC MANIFOLD
The dynamic complex problem hit Rivera’s courtroom directly in 2040. She was adjudicating a telecommunications regulation case when a Supreme Court decision — issued while her case was under submission — overruled a key precedent on which both parties had relied. The manifold shifted under her feet.
JurisGraph 5.0 (the version in use at the time) had no mechanism for handling the change. It had been calibrated on the pre-overruling complex. The dimension scores, the edge weights, the nearest precedents — all were based on a judicial complex that no longer existed. Rivera had to wait three weeks for the system to be recalibrated on the post-overruling complex.
JurisGraph 7.2, the version Rivera uses in 2045, handles dynamic changes better. It maintains a rolling window complex, updated daily as new opinions are filed. It tracks precedent erosion scores — a measure of how much each case has been distinguished, questioned, or limited by subsequent cases — and adjusts edge weights accordingly. When a precedent is overruled, the system performs an immediate topological recomputation, flagging all cases in the current docket that are affected.
But the system still cannot predict which precedents will be overruled, which Wilson loops will be resolved, or where new inconsistencies will emerge. It is reactive, not predictive. Rivera believes that a predictive dynamic complex — one that can forecast the evolution of the manifold — would be the most transformative advance in geometric jurisprudence. But the mathematics for such a system does not yet exist.
Open Problem 5: International Law and Multi-Manifold Coherence
The Problem
Chapter 14 developed the multi-manifold framework for international law: each nation is a manifold, treaties are edges, and conflicts of law are manifold selection problems. But the chapter left unresolved the deepest question: Is there a coherent global legal manifold, or is international law irreducibly fragmented?
In mathematical terms: does the treaty network \mathcal{T} define a consistent fiber bundle, or does the bundle have non-trivial curvature everywhere — making every cross-border transaction subject to irreconcilable legal contradictions?
The Coherence Question
A coherent global legal manifold would be one where the international legal bundle is flat — where parallel-transporting a legal position around any closed loop in the transaction space returns to the same position. Flatness means that the order in which you cross borders does not matter: going from the US to the EU to China and back produces the same legal position as going from the US to China to the EU and back.
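The flatness test can be made concrete under a strong simplifying assumption: that each border crossing acts on a legal-position vector by a linear connection map. The matrices below are invented for illustration (not derived from any treaty text); the check is whether the composed map around the loop is the identity.

```python
# Toy holonomy check, assuming each border crossing acts on a legal-position
# vector by a linear connection map. The 2x2 matrices are hypothetical.
# The bundle is flat around the loop iff the composed map is the identity.
import numpy as np

US_to_EU = np.array([[1.0, 0.2], [0.0, 1.0]])   # hypothetical re-weighting
EU_to_CN = np.array([[0.9, 0.0], [0.1, 1.0]])
CN_to_US = np.array([[1.0, 0.0], [0.0, 1.1]])

# Parallel transport around the closed loop US -> EU -> CN -> US.
holonomy = CN_to_US @ EU_to_CN @ US_to_EU

flat = np.allclose(holonomy, np.eye(2))
print(flat)  # False: transporting a position around the loop distorts it
```

A real computation would replace the linear maps with the treaty connection maps of Chapter 14 and quantify the deviation of the holonomy from the identity, domain by domain.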
The evidence suggests that the global legal bundle is far from flat. The curvature is non-zero in many transaction domains:
Data protection. The EU-US-China triangle studied in Chapter 14 has non-trivial curvature. The Vertex Dynamics impossibility region is a manifestation of this curvature — there is no compliant position for EU personal data processed in China.
Taxation. The global tax system has extensive curvature. A multinational corporation can structure its operations to take advantage of mismatches between national tax systems (double non-taxation through treaty shopping), or it can be caught in double taxation (the same income taxed by two or more jurisdictions). The OECD’s Base Erosion and Profit Shifting (BEPS) project is an attempt to reduce the curvature of the international tax bundle — to harmonize national tax systems enough that the holonomy of tax obligations around a closed loop is trivial.
Human rights. Different nations’ human rights obligations are defined by different treaties (ICCPR, ECHR, ACHR, ACHPR) with different substantive content and different enforcement mechanisms. Parallel-transporting a “right to freedom of expression” from the ECHR system to the ACHR system produces a different right — different exceptions, different balancing tests, different enforcement mechanisms. The curvature is non-zero.
The Fragmentation Problem
The International Law Commission has studied the fragmentation of international law — the phenomenon where different international legal regimes (trade, human rights, environmental law, investment law) develop independently and sometimes contradict each other. In geometric terms, fragmentation is the existence of multiple overlapping fiber bundles with inconsistent connections — the trade bundle, the human rights bundle, and the environmental bundle cover the same transaction space but with different fibers and different connection maps.
The fragmentation problem asks whether these overlapping bundles can be unified into a single coherent structure. If they can, international law has a consistent global geometry, and the contradictions between regimes are resolvable. If they cannot, international law is irreducibly fragmented, and the contradictions are structural — built into the geometry of the international system.
What Would Solve It
- A curvature computation for the international legal bundle, quantifying the non-trivial holonomy in specific transaction domains (data protection, taxation, human rights).
- A flatness theorem identifying conditions under which the curvature vanishes — conditions on the treaty network that ensure cross-jurisdictional consistency. The OECD’s BEPS framework could be analyzed as an attempt to satisfy these conditions in the taxation domain.
- A fragmentation metric that measures the degree of inconsistency between overlapping legal regimes, providing a quantitative foundation for the ILC’s fragmentation study.
- A coherence conjecture — either proving that the global legal bundle can be made flat (by appropriate treaty design) or proving that flatness is impossible (that some curvature is inherent in any multi-sovereign system).
RUNNING EXAMPLE — RIVERA IN 2045: THE GLOBAL MANIFOLD
The international coherence problem became Rivera’s most personally meaningful open question after her experience with the Vertex Dynamics case. She had seen, in a single case, how the disconnection of the treaty network created legal impossibility — how a company could be trapped between contradictory obligations from three sovereign legal systems with no treaty path between them.
In the two decades since, the treaty landscape has changed. The EU and the United States negotiated a comprehensive digital trade agreement in 2032 that addressed data protection, AI regulation, and digital taxation. China participated in the negotiations but ultimately did not sign, citing sovereignty concerns over the agreement’s data localization provisions.
The 2032 agreement reduced the curvature of the EU-US legal bundle in the digital domain. But it created a sharper discontinuity between the EU-US bloc and China — the treaty edge bound two manifolds more tightly while leaving the third more isolated. Rivera saw this pattern repeat across domains: bilateral and multilateral agreements reduce local curvature but may increase global fragmentation by creating tightly connected blocs with sharp boundaries between them.
She believes the coherence question is not merely a mathematical problem — it is the central geopolitical question of the twenty-first century. Can the world’s legal systems be made consistent enough to support a global economy, or will they fragment into competing blocs with irreconcilable legal geometries?
Rivera does not know the answer. But she has learned, over forty years on the bench, that the right framework for asking the question is worth more than a premature answer. And the geometric framework — for all its open problems and incomplete calibrations — is, she believes, the right framework.
Toward a Research Programme
What Is Needed
The five open problems are not independent. They form an interconnected web:
- Metric calibration (Problem 1) affects cross-jurisdictional invariance (Problem 2), because the calibration protocol must handle multiple legal traditions with potentially different dimensional structures.
- Cross-jurisdictional invariance (Problem 2) feeds into international coherence (Problem 5), because the multi-manifold framework requires a consistent way to compare legal positions across systems with different dimensions.
- Computability of the fundamental group (Problem 3) intersects dynamic complexes (Problem 4), because the fundamental group of a time-varying complex is harder to compute and to interpret than that of a static one.
- Dynamic complexes (Problem 4) are required for metric calibration (Problem 1), because the metric must be recalibrated as the complex evolves.
Solving any one problem fully would likely require progress on at least two others. The research programme is inherently interdisciplinary — requiring contributions from mathematics (algebraic topology, differential geometry, computational complexity), computer science (NLP, machine learning, algorithm design), law (comparative law, international law, jurisprudence), and the social sciences (legal anthropology, political science, economics).
The Modesty Principle
Throughout this book, we have maintained a principle of modesty: the geometric framework does not replace legal judgment; it makes legal judgment auditable. The framework does not determine verdicts; it provides a structural vocabulary for analyzing the reasoning that produces verdicts.
The open problems reinforce this modesty. The metric is calibrated, not derived — and the calibration is uncertain. The dimensions may not be universal. The topological invariants may not be computable in practice. The dynamics are not yet modeled. The international system may be irreducibly fragmented.
These uncertainties do not undermine the framework. They define its boundaries — the frontier between what the framework can do and what it cannot yet do. Every mature scientific framework has such a frontier. General relativity has singularities. Quantum mechanics has the measurement problem. The geometric framework for law has the five open problems.
The framework’s contribution is not to have solved all problems of legal reasoning. It is to have translated the problems of legal reasoning into a mathematical language where they can be stated precisely, investigated rigorously, and — eventually — solved.
The Promise
If the five open problems are solved — or even substantially advanced — the consequences for legal practice would be profound:
Metric calibration would give courts a reliable, validated metric for legal distance, enabling consistent precedent selection, reproducible argument evaluation, and meaningful comparison of legal systems.
Cross-jurisdictional invariance would provide a principled framework for comparative law — a way to translate legal concepts between traditions without losing structural information.
Computability of the fundamental group would unlock a deeper level of topological analysis — detecting non-commutative legal contradictions that the current tools cannot see.
Dynamic complexes would transform the framework from a snapshot tool into a temporal instrument — tracking the evolution of legal doctrine, predicting overrulings, and modeling legislative impact.
International coherence would provide the mathematical foundation for global legal harmonization — or, if coherence is impossible, would identify the structural barriers to harmonization and guide treaty design to minimize conflict.
RUNNING EXAMPLE — RIVERA’S LAST MORNING
Rivera closes the folder on her desk. Tomorrow she will give her retirement speech, and she has been asked to reflect on how the law has changed during her forty years on the bench.
She will not use the words “manifold” or “gauge invariance” or “path homology” in her speech. Her audience — judges, lawyers, law professors, a few journalists — does not speak that language. But she will say that the most important change she has witnessed is not a change in doctrine or technology. It is a change in aspiration.
When she was appointed to the bench in 2005, the aspiration of the legal system was correctness — getting the right answer in each case. By the time she retires in 2045, the aspiration has expanded. The legal system still aspires to correctness, but it also aspires to consistency — getting the same answer in equivalent cases, regardless of who the judge is, what the defendant looks like, or how the lawyer writes the brief. And it aspires to transparency — making the structure of legal reasoning visible, so that the reasoning can be audited, challenged, and improved.
These aspirations — consistency and transparency — are the aspirations that the geometric framework serves. Not by replacing judges with algorithms, but by giving judges a vocabulary for the structure they already intuit. Rivera has always known when a case was “close” to a boundary, when a precedent was “far” from the facts, when a legal argument was “short” and compelling or “long” and tortured. The geometric framework gave her words — mathematical words — for what she already knew.
The open problems are the work that remains. The metric must be calibrated. The dimensions must be tested for universality. The topology must be computed. The dynamics must be modeled. The international system must be mapped.
Rivera puts on her coat and walks out of the courthouse for the last time as a sitting judge. The manifold continues without her.
Chapter Summary
Metric calibration (Open Problem 1): The Mahalanobis metric is calibrated from data, and the calibration introduces uncertainty. A calibration protocol, a sensitivity theorem, and an uncertainty quantification procedure are needed.
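The calibration uncertainty described above can be made concrete in a few lines. In this sketch (the attribute data are synthetic and the procedure is one plausible choice, not the GLAF protocol), the metric is the inverse of a covariance matrix fitted to a finite case sample, and a bootstrap over that sample shows how much the distance between two fixed cases moves with the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 synthetic cases scored on the eight legal dimensions.
cases = rng.normal(size=(200, 8))

def mahalanobis(x, y, sample):
    """Mahalanobis distance under a covariance fitted to `sample`."""
    cov = np.cov(sample, rowvar=False)
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

x, y = cases[0], cases[1]

# Bootstrap: refit the metric on resampled case sets and re-measure d(x, y).
boot = [
    mahalanobis(x, y, cases[rng.integers(0, len(cases), len(cases))])
    for _ in range(200)
]
print(f"d(x, y) = {mahalanobis(x, y, cases):.2f} "
      f"+/- {np.std(boot):.2f} (bootstrap s.d.)")
```

The spread of the bootstrap distribution is exactly the quantity a sensitivity theorem would have to bound: if the spread is large relative to typical inter-case distances, precedent rankings are not stable under recalibration.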
Cross-jurisdictional invariance (Open Problem 2): The eight legal dimensions may not be universal across legal traditions. Cross-cultural empirical study, dimensional reduction analysis, and inter-manifold mapping theory are needed.
Computability of the fundamental group (Open Problem 3): The fundamental group of the judicial complex captures non-commutative topological features that path homology misses. Whether it is computable for judicial complexes, and whether it detects genuine legal contradictions, are open questions.
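The computability question has an easy base case worth seeing. For a judicial complex with no 2-cells — a bare citation graph — the fundamental group is computable and free, of rank |E| − |V| + c, where c is the number of connected components; attaching 2-cells adds relations, and it is the resulting group presentations where undecidability (of the Adyan-Rabin kind) can enter. A minimal sketch with an invented toy graph, using real case names only as labels:

```python
def free_rank(vertices, edges):
    """Rank of pi_1 of an undirected graph: |E| - |V| + #components."""
    parent = {v: v for v in vertices}

    def find(v):
        # Union-find with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)

    components = len({find(v) for v in vertices})
    return len(edges) - len(vertices) + components

# A toy citation graph with one independent cycle.
vertices = ["Roe", "Casey", "Dobbs", "Glucksberg"]
edges = [("Roe", "Casey"), ("Casey", "Dobbs"),
         ("Dobbs", "Roe"), ("Dobbs", "Glucksberg")]
print(free_rank(vertices, edges))  # -> 1: pi_1 is free on one generator
```

The open problem is what survives of this tractability once the complex carries the 2-cells that encode doctrinal relations, and whether the loops the group detects correspond to genuine legal contradictions rather than artifacts of the construction.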
Dynamic complexes (Open Problem 4): The judicial complex evolves over time, and the static model cannot capture overruling, legislative change, or doctrinal evolution. A dynamic complex formalism, a persistence theory for directed complexes, and a prediction framework are needed.
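A first step toward the persistence theory called for above is already implementable at dimension zero (the directed and zigzag versions the problem actually demands are much harder). In this sketch, with invented data, each citation edge carries the year it appeared; processing edges in time order records when doctrinal clusters merge — the birth-death bookkeeping that persistence generalizes.

```python
def merge_events(vertices, timed_edges):
    """Return (year, absorbed_root, surviving_root) cluster-merge events."""
    parent = {v: v for v in vertices}

    def find(v):
        # Union-find with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    events = []
    for year, a, b in sorted(timed_edges):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            events.append((year, ra, rb))
    return events

# Toy doctrine: two case clusters form independently, then a 2020
# citation bridges them into a single doctrinal component.
vertices = ["A", "B", "C", "D"]
timed_edges = [(1990, "A", "B"), (2005, "C", "D"), (2020, "B", "C")]
print(merge_events(vertices, timed_edges))
```

What this cannot do is exactly what the problem statement says is missing: edges here only appear, never disappear, so overruling — the deletion of a simplex — is invisible, and that is where zigzag persistence rather than ordinary persistence becomes necessary.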
International coherence (Open Problem 5): The global legal system may be irreducibly fragmented — the international legal bundle may have non-trivial curvature everywhere. A curvature computation, a flatness theorem, a fragmentation metric, and a coherence conjecture are needed.
The five problems are interconnected: progress on any one likely requires progress on at least two others. The research programme is inherently interdisciplinary.
The framework’s contribution is not to have solved all problems of legal reasoning but to have translated them into a mathematical language where they can be stated precisely and investigated rigorously.
Notes on Sources
The five open problems extend the discussion in the Algorithmic Jurisprudence manuscript (§13.4). Metric calibration connects to the broader problem of metric learning in machine learning — see Kulis (2013, Metric Learning: A Survey) and Bellet, Habrard, and Sebban (2015, Metric Learning). Cross-jurisdictional invariance connects to comparative law methodology — see Zweigert and Kötz (1998, Introduction to Comparative Law), Glenn (2014, Legal Traditions of the World), and Menski (2006, Comparative Law in a Global Context). The computability of the fundamental group is discussed in the context of the Adyan-Rabin theorem — see Adyan (1957) and Rabin (1958). Directed homotopy theory is developed by Grandis (2003, Directed Algebraic Topology) and Goubault (2000). Persistent homology was introduced by Edelsbrunner, Letscher, and Zomorodian (2002) and is surveyed in Edelsbrunner and Harer (2010, Computational Topology). Zigzag persistence is developed by Carlsson and de Silva (2010). Sheaves on topological spaces are standard — see Bredon (1997, Sheaf Theory). The fragmentation of international law is studied by the ILC (2006, Report of the Study Group on Fragmentation) and Koskenniemi (2006). The OECD BEPS project is documented in OECD (2015, Explanatory Statement). The concept of a coherence conjecture for the global legal system is original to the Geometric Law framework.