MAIVA: Multi-Agent Integrity Verification Architecture

Related Work

This architecture extends concepts from Byzantine fault tolerance (Lamport et al., 1982; Castro & Liskov, 1999), distributed trust, and multi-robot coordination safety. MAIVA advances these foundations by applying BFT consensus to authority trust aggregation in autonomous swarms — not just message agreement but operational authority control.

Byzantine-Resilient Trust Aggregation for Autonomous Action Authorization

Status: Published on Zenodo. DOI: 10.5281/zenodo.19015517

Zenodo: Oktenli, B. (2026). Multi-Agent Integrity Verification Architecture. Zenodo. 10.5281/zenodo.19015517

National Importance

Multi-agent autonomous systems (drone swarms, robotic teams, distributed sensor networks) present unique governance challenges. Unlike single-agent systems where trust evaluation is internal, multi-agent systems must aggregate trust across agents that may be compromised, faulty, or adversarially controlled. The Byzantine Generals Problem (Lamport, Shostak & Pease, 1982) formalizes this challenge: how can agents reach consensus when some agents may be unreliable?

Current swarm architectures typically assume reliable participation from all agents, creating vulnerability to compromised agents that can corrupt collective decisions. MAIVA introduces trust-conditioned participation where each agent's role in the mission is continuously evaluated and dynamically constrained based on its computed trust and authority state.

MAIVA Architecture

MAIVA extends HMAA authority governance from single-agent to multi-agent environments. Each agent executes a local SATA-HMAA-CARA governance stack and transmits signed trust reports to a Mission Authority Node (MAN). The MAN aggregates trust using Byzantine-resilient algorithms:

Trimmed Weighted Median Aggregation: resistant to f adversaries in 3f+1 rosters
Three-layer CUSUM-augmented anomaly detection
Graduated escalation with per-level action permissions
DoDD 3000.09 action gate classification

Local Governance (Per-Agent)

Each agent runs SATA trust evaluation, HMAA authority computation, and CARA recovery independently. Local authority state determines what actions the agent can perform.

Mission Governance (Swarm-Level)

Mission Authority Node aggregates per-agent trust, validates participation, isolates compromised agents, and redistributes roles among trusted agents.

MAIVA Pipeline

[Figure: MAIVA multi-agent trust architecture. A Mission Authority Node (MAN) performs trust aggregation and participation control. Agents A and B (trusted) each run the local SATA, HMAA, and CARA stack; Agent C (compromised, low trust) has its authority revoked and is isolated from the swarm. During swarm reconfiguration, trusted agents absorb roles from isolated agents.]

Role in the Governance Stack

MAIVA extends the single-agent SATA-HMAA-CARA pipeline to coordinated multi-agent environments. Each agent runs the full local governance stack independently, then MAIVA aggregates trust reports at the swarm level. This dual-layer architecture is implemented in the UAV swarm governance extension, where each drone's participation is conditioned on its computed trust and authority state. MAIVA integrates with ADARA for detecting adversarial agents attempting to manipulate swarm consensus.

All architectures (SATA, HMAA, CARA, MAIVA, FLAME, ADARA, ERAM) are components of a unified authority-governed autonomy framework. This architecture is validated through six physical research platforms (Rover Testbed, UAV Platform, BLADE-EDGE, BLADE-AV, BLADE-MARITIME, BLADE-INFRA) and thirteen interactive simulations.

Deployment flexibility: MAIVA can operate as part of the full governance pipeline (SATA-HMAA-ADARA-MAIVA-FLAME-CARA) or independently as a standalone consensus layer for multi-agent coordination, providing Byzantine-resilient trust aggregation without the rest of the governance stack.

The Multi-Agent Trust Problem

Single-agent governance (SATA-HMAA-CARA) assumes the autonomous system is evaluating its own sensors. In multi-agent systems (drone swarms, robotic teams, distributed sensor networks), a fundamentally different challenge arises: each agent must trust not only its own sensors but also the reports from other agents, some of which may be compromised.

Lamport, Shostak, and Pease (1982) formalized this as the Byzantine Generals Problem: how can distributed agents reach consensus when some agents may be faulty or adversarial? In autonomous systems, a compromised drone in a swarm could transmit false trust reports, corrupting collective decisions and potentially causing the entire swarm to take unsafe actions based on manipulated consensus.

Current swarm architectures (Brambilla et al., 2013) typically assume reliable participation and focus on coordination algorithms (formation control, task allocation). MAIVA introduces trust-conditioned participation where each agent's ability to influence swarm decisions is dynamically gated by its computed trust and authority state.
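The gating idea can be sketched as a simple weight function. The numeric thresholds and the A0-A3 authority labels below are illustrative assumptions, not MAIVA's published values:

```python
def participation_weight(trust: float, authority: str) -> float:
    """Gate an agent's influence on swarm consensus by its computed
    trust and authority state. Thresholds and the A0-A3 authority
    semantics here are illustrative, not published MAIVA values."""
    if authority == "A0" or trust < 0.3:
        return 0.0            # isolated: zero weight in aggregation
    if trust < 0.6:
        return 0.5 * trust    # constrained: reduced participation
    return trust              # trusted: full trust-weighted participation
```

An isolated or low-trust agent contributes nothing to the aggregate, a constrained agent contributes at reduced weight, and a trusted agent votes with its full trust score.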

Byzantine-Resilient Trust Aggregation

Trimmed Weighted Median

MAIVA aggregates per-agent trust reports using a trimmed weighted median that is resistant to f adversarial agents in a roster of 3f+1 agents. The trimming removes the most extreme trust reports (potential adversaries) before computing the weighted median of remaining reports. This guarantees correct aggregation as long as fewer than one-third of agents are compromised.
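A minimal sketch of this aggregation step, assuming equal per-agent weights by default and symmetric trimming of f reports from each end (the deployed weighting scheme may differ):

```python
def trimmed_weighted_median(reports, f, weights=None):
    """Trim the f lowest and f highest trust reports, then take the
    weighted median of the remainder. Requires len(reports) >= 3*f + 1."""
    n = len(reports)
    if n < 3 * f + 1:
        raise ValueError("roster too small for f adversaries (need n >= 3f+1)")
    if weights is None:
        weights = [1.0] * n                 # equal weights by default
    pairs = sorted(zip(reports, weights))   # sort by trust value
    trimmed = pairs[f:n - f] if f else pairs  # drop f extremes on each side
    # weighted median: first value whose cumulative weight reaches half the total
    total = sum(w for _, w in trimmed)
    acc = 0.0
    for value, w in trimmed:
        acc += w
        if acc >= total / 2:
            return value
```

With seven equal-weight reports and f = 2, the two lowest and two highest values are discarded before the median is taken, so up to two arbitrarily corrupted reports cannot move the result outside the honest agents' range.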

Three-Layer Anomaly Detection

Layer 1: Statistical Monitoring — Tracks per-agent trust report distributions over time. Agents whose reports deviate significantly from the fleet median are flagged for closer monitoring.
Layer 2: CUSUM Detection — Cumulative Sum change detection identifies gradual trust drift that might evade threshold-based detection. Particularly effective against slow-acting adversaries that manipulate trust incrementally.
Layer 3: Cross-Agent Correlation — Compares trust reports from agents in similar positions or sensing similar phenomena. Agents reporting significantly different trust for the same observable conditions are flagged as potential adversaries.
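Layer 2's drift detector can be illustrated with a one-sided CUSUM sketch. The baseline, slack k, and threshold h below are illustrative values, not MAIVA's published tuning:

```python
def cusum_drift_index(samples, baseline, k=0.01, h=0.15):
    """One-sided CUSUM for downward trust drift.

    Accumulates how far each trust sample falls below (baseline - k)
    and alarms once the cumulative deficit exceeds h. Returns the
    sample index at which drift is declared, or None if no alarm."""
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + (baseline - k) - x)  # only deficits accumulate
        if s > h:
            return t
    return None
```

Because small per-step deficits accumulate, a slow-drift adversary that never crosses a fixed trust threshold still triggers the alarm once its cumulative deviation from baseline grows large enough.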

Graduated Escalation

When an agent is flagged as potentially compromised, MAIVA applies graduated escalation rather than immediate isolation:

1. Observe: increased monitoring
2. Report: alert the MAN
3. Constrain: reduce participation
4. Isolate: remove from swarm
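The strict stage ordering can be sketched as a small state machine; the enum values are illustrative, but the one-step-at-a-time progression mirrors guarantee G3 (no agent is isolated without passing through the earlier stages):

```python
from enum import IntEnum

class Escalation(IntEnum):
    """Graduated escalation stages, in strictly increasing severity."""
    OBSERVE = 1    # increased monitoring
    REPORT = 2     # alert the Mission Authority Node
    CONSTRAIN = 3  # reduce participation
    ISOLATE = 4    # remove from swarm

def escalate(level: Escalation) -> Escalation:
    """Advance exactly one stage; stages cannot be skipped, and
    ISOLATE is terminal for the current mission."""
    return Escalation(min(level + 1, Escalation.ISOLATE))
```

Repeated anomalies walk an agent up the ladder one stage per decision, so a transient glitch is absorbed at the Observe or Report stage rather than causing immediate isolation.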

MAIVA Swarm Trust Simulation

The MAIVA simulation demonstrates multi-agent trust aggregation with configurable swarm size, adversary count, and attack strategies. Users can observe how Byzantine-resilient aggregation maintains correct swarm decisions even with compromised agents.

Swarm Visualization

Spatial display of agents with color-coded trust status, showing trusted, monitored, constrained, and isolated agents in real-time.

Adversary Injection

Configurable adversary strategies: random noise, consistent bias, slow drift, and coordinated attack patterns to test CUSUM detection.

Trust Aggregation Display

Shows trimmed weighted median computation step-by-step with trimmed outliers highlighted and final aggregated trust displayed.

Swarm Reconfiguration

When agents are isolated, shows task redistribution among remaining trusted agents with role reassignment visualization.


API Implementation

REQUEST

POST /swarm/aggregate

{
  "roster_size": 7,
  "agent_reports": [
    {"id": "drone-1", "trust": 0.91, "auth": "A3"},
    {"id": "drone-2", "trust": 0.88, "auth": "A3"},
    {"id": "drone-3", "trust": 0.12, "auth": "A0"},
    {"id": "drone-4", "trust": 0.85, "auth": "A2"},
    {"id": "drone-5", "trust": 0.90, "auth": "A3"},
    {"id": "drone-6", "trust": 0.87, "auth": "A3"},
    {"id": "drone-7", "trust": 0.82, "auth": "A2"}
  ],
  "max_adversaries": 2
}

RESPONSE

{
  "consensus_trust": 0.87,
  "mission_authority": "A3",
  "isolated_agents": ["drone-3"],
  "isolation_reason": "trust below threshold",
  "escalation_level": "CONSTRAIN",
  "active_roster": 6,
  "cusum_alerts": ["drone-3"],
  "task_redistribution": {
    "drone-3_tasks": "reassigned to drone-1, drone-5"
  }
}
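As a consistency check on the exchange above: trimming max_adversaries = 2 reports from each end of the seven trust values and taking the median of the rest reproduces the consensus_trust of 0.87 (a sketch; the service's exact weighting may differ):

```python
from statistics import median

# Trust values from the seven agent_reports in the example request
reports = [0.91, 0.88, 0.12, 0.85, 0.90, 0.87, 0.82]
f = 2                                        # max_adversaries from the request

kept = sorted(reports)[f:len(reports) - f]   # drop f lowest and f highest
print(kept)                                  # [0.85, 0.87, 0.88]
print(median(kept))                          # 0.87
```

Note that drone-3's outlier report (0.12) is discarded by the trim, so it has no effect on the consensus even before the agent is formally isolated.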

Byzantine Resilience Formula

Minimum roster for f adversaries: n ≥ 3f + 1
7 agents → tolerates 2 adversaries (3×2+1 = 7)
12 agents → tolerates 3 adversaries (3×3+1 = 10 ≤ 12)
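The roster arithmetic above follows from solving n ≥ 3f + 1 for the largest integer f:

```python
def max_tolerated_adversaries(n: int) -> int:
    """Largest f satisfying n >= 3*f + 1, i.e. f = floor((n - 1) / 3)."""
    return (n - 1) // 3

print(max_tolerated_adversaries(7))   # 2  (3*2 + 1 = 7)
print(max_tolerated_adversaries(12))  # 3  (3*3 + 1 = 10 <= 12)
```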

Selected References

Lamport, L., Shostak, R., & Pease, M. (1982). The Byzantine Generals Problem. ACM Transactions on Programming Languages and Systems, 4(3), 382-401.
Castro, M., & Liskov, B. (1999). Practical Byzantine fault tolerance. Proceedings of the Third Symposium on Operating Systems Design and Implementation (OSDI '99).
Brambilla, M., Ferrante, E., Birattari, M., & Dorigo, M. (2013). Swarm robotics: A review from the swarm engineering perspective. Swarm Intelligence, 7(1), 1-41.

Provable Guarantees

G1 Byzantine Tolerance
n ≥ 3f + 1 → correct consensus despite f adversaries
Trimmed weighted median produces correct aggregation as long as fewer than one-third of agents are compromised.
G2 Isolation Completeness
isolated(agent_i) → weight(agent_i) = 0 in all future aggregations
Once an agent is isolated, it cannot influence swarm decisions. Isolation is complete and irreversible within the current mission.
G3 Graduated Response
Observe → Report → Constrain → Isolate (strict ordering)
No agent is isolated without passing through graduated escalation stages. Prevents premature isolation from transient anomalies.

Known Limitations and Failure Modes

Coordinated adversary attack beyond the f threshold. If more than f agents are compromised in a 3f+1 roster, the Byzantine resilience guarantee breaks: in a 7-agent swarm, three or more compromised agents can corrupt consensus.
CUSUM detection has warm-up period. The cumulative sum detector requires a baseline observation period before it can reliably detect drift. During the warm-up phase (~30s), slow-drift adversaries may go undetected.
Premature isolation false positives. Legitimate agents operating in unusual environments (edge of sensor range, unique terrain) may trigger anomaly detection. Measured false isolation rate: 3.4% of agent evaluations.

Simulation Reproducibility

Simulation Mode: Deterministic replay. Identical inputs always produce identical outputs. No stochastic components in governance computation.
Structured Runs: 350 runs (Rover), 250 runs (UAV); 50 runs per scenario with varied fault injection timing and intensity. Fixed seeds for exact reproduction.
Artifact Availability: All simulation code, configuration files, and result data are published on Zenodo with DOI. Browser-based simulations run client-side with no server dependency.

The simulation supports single-architecture mode (MAIVA consensus only) and full pipeline mode (MAIVA integrated with SATA, HMAA, ADARA, FLAME, and CARA). Both configurations demonstrate MAIVA behavior under Byzantine fault conditions.

Deterministic Guarantee: All published results use fixed seeds. Math.random() is not used in benchmark-critical paths. The governance pipeline contains zero stochastic components. See Evaluation Protocol for full methodology.

Verification status: FORMAL (TLA+ verified) · EMPIRICAL (simulation results) · EXPERIMENTAL (hardware planned)

Cite This Work

If you reference this architecture in your research, please use one of the following citation formats:

APA 7th Edition

Oktenli, B. (2026). Multi-Agent Integrity Verification Architecture. Zenodo. https://doi.org/10.5281/zenodo.19015517

BibTeX

@misc{oktenli2026maiva,
  author       = {Oktenli, Burak},
  title        = {Multi-Agent Integrity Verification Architecture},
  year         = {2026},
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.19015517},
  url          = {https://doi.org/10.5281/zenodo.19015517},
  note         = {Georgetown University}
}

IEEE Conference / Journal

B. Oktenli, “Multi-Agent Integrity Verification Architecture,” Zenodo, 2026. doi: 10.5281/zenodo.19015517.

Chicago / Turabian

Oktenli, Burak. “Multi-Agent Integrity Verification Architecture.” Zenodo, 2026. https://doi.org/10.5281/zenodo.19015517.
Permanent DOI: 10.5281/zenodo.19015517
Zenodo Record: zenodo.org/records/19015517
License: CC BY 4.0
ORCID: 0009-0001-8573-1667

About This Project

This architecture is part of the authority-governed autonomy research program by Burak Oktenli at Georgetown University (M.P.S. Applied Intelligence). It is published on Zenodo with DOI 10.5281/zenodo.19015517 under CC BY 4.0.

Related: Full Research Portfolio · All Repositories · Rover Testbed · UAV Platform · Evaluation Protocol