Working paper · in preparation
HA-FUGA
A polyphonic adversarial validation protocol for multi-agent AI systems.
The problem
Multi-agent AI systems are increasingly used to produce findings on complex, high-stakes questions: system design at scale, scientific synthesis, multi-stakeholder coordination. These are questions whose answers no single body of expertise can produce alone.
Two structural gaps appear as these systems scale. First, agents are typically organized by technical function — collect, classify, summarize — rather than by the dimensions of complexity that the underlying problem presents. Second, validation is added as a downstream layer rather than emerging from the structure of agent interaction itself. The result is systems that produce confident outputs without auditable reasoning and without an internal mechanism for productive disagreement.
The protocol
HA-FUGA addresses both gaps by organizing agents not by function but by dimensions of complexity, drawn from the Horizons Architecture notation. Each dimension asks a question that none of the others can answer; the protocol holds them in productive tension through an interaction matrix and a polyphonic council of three frontier models that submit findings to adversarial debate. Validation does not arrive after the fact: it is a property of the structure.
| Dimension | Question it asks |
|---|---|
| Legacy | What is this for? |
| Community | Who is in, who is missing? |
| Learning | What do we not yet know? |
| Technology | Which tools serve the purpose? |
| Context | Which external forces operate? |
| Projects | Which projects materialize the legacy? |
Five principles
- Adversarial by Architecture — validation emerges from the structure of agent interaction.
- Dimensional Incident Response — anomalies trigger detection, containment, and eradication.
- Integrity Chain — every finding carries a verifiable identity, evidence hash and access manifest.
- Polyphonic Debate — three frontier models in MORE, SAMRE and anti-conformity modes; consensus is not forced.
- Cross-Dimensional Audit — each dimension audits the others from its specific question.
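The Integrity Chain principle above can be sketched in a few lines. The following is a minimal illustration, not the protocol's actual implementation: the function names, the field layout, and the choice of SHA-256 are all assumptions. It shows only the structural idea that each finding carries a verifiable identity, an evidence hash, and a link to the previous seal, so tampering anywhere breaks every later link.

```python
import hashlib
import json

def seal_finding(dimension, claim, evidence, prev_hash, agent_id):
    """Seal one finding into the chain (illustrative sketch).

    Each link carries a verifiable identity (agent_id), an evidence
    hash, and the hash of the previous link, so altering an earlier
    finding invalidates every seal that follows it.
    """
    evidence_hash = hashlib.sha256(evidence.encode()).hexdigest()
    payload = json.dumps({
        "dimension": dimension,
        "claim": claim,
        "evidence_hash": evidence_hash,
        "agent_id": agent_id,
        "prev": prev_hash,
    }, sort_keys=True)
    seal = hashlib.sha256(payload.encode()).hexdigest()
    return {"payload": payload, "seal": seal}

def verify_chain(chain):
    """Re-derive every seal and check the prev-hash links."""
    prev = "genesis"
    for link in chain:
        if hashlib.sha256(link["payload"].encode()).hexdigest() != link["seal"]:
            return False
        if json.loads(link["payload"])["prev"] != prev:
            return False
        prev = link["seal"]
    return True
```

Because each payload embeds the previous seal, verifying the chain front to back is enough to detect any retroactive edit.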
Four metrics
- DSS (Dimensional Synergy Score) — quality of interaction between dimensions.
- ACR (Adversarial Convergence Rate) — debate convergence; productive band 0.6–0.9.
- EIS (Evidence Integrity Score) — traceability of findings to source.
- DCS (Dimensional Coverage Score) — substantive contribution from every dimension.
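The section above names the metrics without giving their formulas. The reductions below are illustrative stand-ins for two of them, showing only the shape of each score; the function names and the simple ratios are assumptions, not the paper's definitions.

```python
def acr(converged_debates, total_debates):
    """Adversarial Convergence Rate (illustrative): share of council
    debates that converged. The productive band is 0.6-0.9: below it
    the council agrees on almost nothing; above it, dissent has
    collapsed into conformity."""
    return converged_debates / total_debates

def acr_in_productive_band(value, low=0.6, high=0.9):
    """Check whether an ACR value sits in the productive band."""
    return low <= value <= high

def dcs(findings_per_dimension):
    """Dimensional Coverage Score (illustrative): fraction of the six
    dimensions that contributed at least one substantive finding."""
    return sum(1 for n in findings_per_dimension.values() if n > 0) / 6
```

Note that full convergence (ACR = 1.0) falls outside the productive band by design: forced consensus is treated as a failure mode, not a success.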
The kernel — the M₆ₓ₆ interaction matrix
The interaction matrix M₆ₓ₆ records the influence weights between every pair of dimensions. It is the formal kernel of coherence in the system: where each pair of voices reinforces, contradicts, or simply ignores the other becomes a measurable property of the structure rather than an after-the-fact intuition. Reading the matrix exposes synergies and tensions that no single-axis decomposition can recover.
The matrix is not static. As the cycle runs, the cross-dimensional audits update each cell with the quality of interaction observed between the two dimensions on this specific question. The matrix at the end is a compressed picture of how the dimensions actually engaged each other on the proposal — not a generic schema of how they could.
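The update dynamics described above can be sketched as follows. The exponential-moving-average rule, the 0.3 blend factor, and the function names are assumptions made for illustration; the protocol itself does not specify this update.

```python
def update_matrix(M, i, j, observed_quality, alpha=0.3):
    """Audit feedback: blend the observed interaction quality between
    dimensions i and j into the running influence weight. The EMA
    update rule is an illustrative assumption."""
    M[i][j] = (1 - alpha) * M[i][j] + alpha * observed_quality
    return M

def strongest_and_weakest(M, labels):
    """Read the matrix: return the off-diagonal pairs with the highest
    weight (synergy) and the lowest (tension or mutual neglect)."""
    pairs = [(M[i][j], labels[i], labels[j])
             for i in range(len(M)) for j in range(len(M)) if i != j]
    return max(pairs), min(pairs)
```

Reading the extremes of the off-diagonal cells is what makes synergies and tensions a measurable property of the run rather than an after-the-fact intuition.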
Architecture in brief
Six dimensional agents — one per dimension — read the same input from their respective questions. A conductor coordinates the cycle; a council of three frontier models (Claude, GPT, Gemini) submits findings to adversarial debate. Each finding is sealed into an integrity chain, then promoted into the M₆ₓ₆ matrix. The architecture is fractal: the same structure operates at three nested scales (normative, strategic, operational), and the comparison between matrices at different scales reveals coherences and incoherences that no single-scale analysis can detect.
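The cycle just described reduces to a short control flow. In this sketch the agents, the audit, and the council are stand-in callables; only the ordering of the four steps reflects the architecture, everything else is a placeholder.

```python
def run_cycle(proposal, agents, audit, council, on_validated):
    """One HA-FUGA cycle reduced to its control flow (illustrative).

    agents  : dict mapping dimension name -> callable reading the proposal
    audit   : callable(auditing_dimension, finding) -> bool
    council : callable(finding) -> bool (adversarial debate, abstracted)
    """
    # 1. Six dimensional agents read the same input, each from its question.
    findings = {dim: ask(proposal) for dim, ask in agents.items()}

    # 2. Cross-dimensional audit: each dimension audits every other finding.
    audits = {(auditor, dim): audit(auditor, findings[dim])
              for auditor in agents for dim in agents if auditor != dim}

    # 3. The council debates each finding that survived all its audits.
    validated = [finding for dim, finding in findings.items()
                 if all(audits[(other, dim)] for other in agents if other != dim)
                 and council(finding)]

    # 4. Promote validated findings, e.g. into the interaction matrix.
    for finding in validated:
        on_validated(finding)
    return validated
```

The same loop runs unchanged at each of the three nested scales; only the proposal and the granularity of the findings differ.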
The live demo
A reference implementation runs the protocol end-to-end on user-supplied questions across two domains (AI safety policy, postal innovation). The browser receives a typed Server-Sent Events stream of every step the system takes: dimensional agents activating, findings emitted, cross-dimensional audits passing or failing, the council's adversarial votes with their reasoning, and the final M₆ₓ₆ matrix and metrics. A reactive hexagonal visualization makes the structure of the cycle visible — nodes pulse on each finding, edges illuminate on each audit, the central core pulses on each council validation.
Every completed cycle is persisted as a snapshot keyed by a share identifier, allowing any third party to replay the full event sequence at zero token cost. This makes the protocol auditable in a way that ordinary AI systems are not: one can not only inspect the conclusion, but watch the deliberation that produced it, exactly as it happened.
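The snapshot-and-replay mechanism amounts to persisting the event sequence under its share identifier and re-emitting it on demand. A minimal sketch, assuming a dict-like key-value store; `persist_snapshot` and `replay` are hypothetical names, and the event shape is invented for illustration.

```python
import json

def persist_snapshot(store, share_id, events):
    """Persist a completed cycle's full event sequence, keyed by its
    share identifier (store is any dict-like key-value backend)."""
    store[share_id] = json.dumps(events)

def replay(store, share_id):
    """Replay the recorded deliberation event by event, at zero token
    cost: no model is called, the stored stream is simply re-emitted."""
    for event in json.loads(store[share_id]):
        yield event
```

Because replay is a pure read of the stored stream, a third party sees the deliberation exactly as it happened, in order, without re-running any model.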
Four contributions
- Theoretical — first formal application of the Horizons Architecture notation to multi-agent AI validation. Organizing agents by dimensions of complexity, rather than by technical function, yields a tractable framework for problems that resist single-axis decomposition.
- Methodological — a multi-agent system whose principle of organization is dimensional. Coherence between agents emerges as a property of the system, not as an after-the-fact verification step.
- Protocol — HA-FUGA itself: an integrating contribution that brings together AIR, AIP, TRiSM, D3, FREE-MAD and AuditBench / Petri over a dimensional ontology. No published framework offers this configuration.
- Practical — original end-to-end implementation under institutional confidentiality. Demonstrates execution at scale with full integrity-chain auditability across heterogeneous data and stakeholder boundaries.
Status & inquiries
HA-FUGA is in active development at Horizons Architecture. A working paper formalizing the protocol is in preparation; the practical implementation referenced above is currently under institutional confidentiality. Inquiries regarding collaboration, integration, or access to the working paper draft may be directed to info@horizonsarchitecture.ai.