This architecture extends concepts from Byzantine fault tolerance (Lamport et al., 1982; Castro & Liskov, 1999), distributed trust, and multi-robot coordination safety. MAIVA advances these foundations by applying BFT consensus to authority trust aggregation in autonomous swarms: not just message agreement, but operational authority control.
Byzantine-Resilient Trust Aggregation for Autonomous Action Authorization
Published on Zenodo: Oktenli, B. (2026). Multi-Agent Integrity Verification Architecture. Zenodo. 10.5281/zenodo.19015517
Multi-agent autonomous systems (drone swarms, robotic teams, distributed sensor networks) present unique governance challenges. Unlike single-agent systems where trust evaluation is internal, multi-agent systems must aggregate trust across agents that may be compromised, faulty, or adversarially controlled. The Byzantine Generals Problem (Lamport, Shostak & Pease, 1982) formalizes this challenge: how can agents reach consensus when some agents may be unreliable?
Current swarm architectures typically assume reliable participation from all agents, creating vulnerability to compromised agents that can corrupt collective decisions. MAIVA introduces trust-conditioned participation where each agent's role in the mission is continuously evaluated and dynamically constrained based on its computed trust and authority state.
MAIVA extends HMAA authority governance from single-agent to multi-agent environments. Each agent executes a local SATA-HMAA-CARA governance stack and transmits signed trust reports to a Mission Authority Node (MAN). The MAN aggregates trust using Byzantine-resilient algorithms:
Trimmed Weighted Median Aggregation: resistant to f adversaries in 3f+1 rosters
Three-layer CUSUM-augmented anomaly detection
Graduated escalation with per-level action permissions
DoDD 3000.09 action gate classification
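The page does not specify the signature scheme for the trust reports each agent transmits to the MAN. As a minimal sketch, assuming a shared-key HMAC stands in for the signature (real deployments would presumably use per-agent asymmetric keys), a report and its MAN-side verification might look like:

```python
import hmac, hashlib, json

def sign_report(agent_id: str, trust: float, auth: str, key: bytes) -> dict:
    """Build a trust report and attach an HMAC over its canonical JSON encoding."""
    body = {"id": agent_id, "trust": trust, "auth": auth}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return body

def verify_report(report: dict, key: bytes) -> bool:
    """MAN-side check: recompute the HMAC and compare in constant time."""
    body = {k: report[k] for k in ("id", "trust", "auth")}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(report["sig"], expected)

key = b"demo-shared-key"  # illustrative; not part of the published architecture
report = sign_report("drone-1", 0.91, "A3", key)
assert verify_report(report, key)
assert not verify_report(dict(report, trust=0.99), key)  # tampering is detected
```

Signing matters here because the aggregation's 3f+1 bound only holds if an adversary cannot forge extra reports on behalf of honest agents.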
Each agent runs SATA trust evaluation, HMAA authority computation, and CARA recovery independently. Local authority state determines what actions the agent can perform.
Mission Authority Node aggregates per-agent trust, validates participation, isolates compromised agents, and redistributes roles among trusted agents.
MAIVA extends the single-agent SATA-HMAA-CARA pipeline to coordinated multi-agent environments. Each agent runs the full local governance stack independently, then MAIVA aggregates trust reports at the swarm level. This dual-layer architecture is implemented in the UAV swarm governance extension, where each drone's participation is conditioned on its computed trust and authority state. MAIVA integrates with ADARA for detecting adversarial agents attempting to manipulate swarm consensus.
All architectures (SATA, HMAA, CARA, MAIVA, FLAME, ADARA, ERAM) are components of a unified authority-governed autonomy framework. This architecture is validated through six physical research platforms (Rover Testbed, UAV Platform, BLADE-EDGE, BLADE-AV, BLADE-MARITIME, BLADE-INFRA) and thirteen interactive simulations.
Deployment flexibility: MAIVA can operate as part of the full governance pipeline (SATA-HMAA-ADARA-MAIVA-FLAME-CARA) or as a standalone consensus layer for multi-agent coordination, providing Byzantine-resilient trust aggregation without the rest of the governance stack.
Single-agent governance (SATA-HMAA-CARA) assumes the autonomous system is evaluating its own sensors. In multi-agent systems (drone swarms, robotic teams, distributed sensor networks), a fundamentally different challenge arises: each agent must trust not only its own sensors but also the reports from other agents, some of which may be compromised.
Lamport, Shostak, and Pease (1982) formalized this as the Byzantine Generals Problem: how can distributed agents reach consensus when some agents may be faulty or adversarial? In autonomous systems, a compromised drone in a swarm could transmit false trust reports, corrupting collective decisions and potentially causing the entire swarm to take unsafe actions based on manipulated consensus.
Current swarm architectures (Brambilla et al., 2013) typically assume reliable participation and focus on coordination algorithms (formation control, task allocation). MAIVA introduces trust-conditioned participation where each agent's ability to influence swarm decisions is dynamically gated by its computed trust and authority state.
MAIVA aggregates per-agent trust reports using a trimmed weighted median that is resistant to f adversarial agents in a roster of 3f+1 agents. The trimming removes the most extreme trust reports (potential adversaries) before computing the weighted median of remaining reports. This guarantees correct aggregation as long as fewer than one-third of agents are compromised.
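The trimming step can be sketched as follows, assuming reports are sorted by trust value and the weighted median is taken as the smallest trust at which cumulative weight reaches half the total (the page does not specify how weights are derived, so the weighting convention here is an assumption):

```python
def trimmed_weighted_median(reports, f):
    """Byzantine-resilient aggregation sketch: drop the f lowest- and f
    highest-trust reports, then take the weighted median of the survivors.
    Requires n >= 3f + 1 so at least f + 1 honest reports survive trimming."""
    if len(reports) < 3 * f + 1:
        raise ValueError("roster too small: need n >= 3f + 1 for f adversaries")
    survivors = sorted(reports)[f:len(reports) - f] if f else sorted(reports)
    total = sum(w for _, w in survivors)
    acc = 0.0
    for trust, w in survivors:
        acc += w
        if acc >= total / 2:   # weighted-median crossing point
            return trust

# Uniform weights on the seven trust values from the API example on this page:
# drone-3's outlier 0.12 is trimmed away and the consensus comes out to 0.87.
reports = [(0.91, 1), (0.88, 1), (0.12, 1), (0.85, 1),
           (0.90, 1), (0.87, 1), (0.82, 1)]
print(trimmed_weighted_median(reports, f=2))  # → 0.87
```

With uniform weights this reduces to the plain median of the surviving reports, which is why the outlier has no influence on the result.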
When an agent is flagged as potentially compromised, MAIVA applies graduated escalation rather than immediate isolation: the agent moves through monitored and constrained states, each with its own per-level action permissions, before being fully isolated from the roster.
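A minimal sketch of such an escalation ladder, using the trust-status levels that appear elsewhere on this page (trusted, monitored, constrained, isolated); the numeric thresholds are illustrative assumptions, not values published with the architecture:

```python
from enum import IntEnum

class Escalation(IntEnum):
    TRUSTED = 0    # full participation
    MONITOR = 1    # participates; reports are cross-checked
    CONSTRAIN = 2  # influence reduced; restricted to low-risk actions
    ISOLATE = 3    # excluded from aggregation; tasks redistributed

# Illustrative thresholds -- the published architecture does not fix these values.
THRESHOLDS = [(0.80, Escalation.TRUSTED),
              (0.60, Escalation.MONITOR),
              (0.30, Escalation.CONSTRAIN)]

def escalation_level(trust: float, cusum_alert: bool) -> Escalation:
    """Map a trust score to an escalation level; a CUSUM alert bumps the
    level one step, so detection tightens constraints gradually rather
    than isolating an agent outright."""
    level = Escalation.ISOLATE
    for threshold, candidate in THRESHOLDS:
        if trust >= threshold:
            level = candidate
            break
    if cusum_alert and level < Escalation.ISOLATE:
        level = Escalation(level + 1)
    return level
```

Under this sketch an agent at trust 0.85 with a CUSUM alert is moved to MONITOR rather than isolated, while an agent at trust 0.12 (like drone-3 in the API example) lands at ISOLATE directly.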
The MAIVA simulation demonstrates multi-agent trust aggregation with configurable swarm size, adversary count, and attack strategies. Users can observe how Byzantine-resilient aggregation maintains correct swarm decisions even with compromised agents.
Spatial display of agents with color-coded trust status, showing trusted, monitored, constrained, and isolated agents in real-time.
Configurable adversary strategies: random noise, consistent bias, slow drift, and coordinated attack patterns to test CUSUM detection.
Shows trimmed weighted median computation step-by-step with trimmed outliers highlighted and final aggregated trust displayed.
When agents are isolated, shows task redistribution among remaining trusted agents with role reassignment visualization.
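The CUSUM detection these adversary strategies are designed to stress can be sketched as a one-sided cumulative-sum statistic over an agent's trust residuals (a single-layer sketch; the three-layer variant and its tuning are not specified here, and `drift` and `h` are illustrative parameters):

```python
def cusum_alerts(residuals, drift=0.05, h=0.5):
    """One-sided CUSUM over a stream of residuals (e.g. the absolute gap
    between an agent's reported trust and the swarm consensus). The statistic
    accumulates deviations beyond `drift` and fires once it crosses the
    decision threshold `h`, catching slow drifts that any single-sample
    threshold check would miss."""
    s, alerts = 0.0, []
    for r in residuals:
        s = max(0.0, s + r - drift)  # accumulate excess deviation, floor at 0
        alerts.append(s > h)
    return alerts

# A slow-drift adversary: each report deviates a little more from consensus.
stream = [0.02 * k for k in range(1, 11)]  # 0.02, 0.04, ..., 0.20
alerts = cusum_alerts(stream)
print(alerts.index(True))  # the drift is flagged only after it accumulates
```

No single sample in this stream exceeds even a loose per-sample threshold, yet the accumulated statistic still flags the agent, which is the point of CUSUM-augmented detection.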
POST /swarm/aggregate
{
"roster_size": 7,
"agent_reports": [
{"id": "drone-1", "trust": 0.91, "auth": "A3"},
{"id": "drone-2", "trust": 0.88, "auth": "A3"},
{"id": "drone-3", "trust": 0.12, "auth": "A0"},
{"id": "drone-4", "trust": 0.85, "auth": "A2"},
{"id": "drone-5", "trust": 0.90, "auth": "A3"},
{"id": "drone-6", "trust": 0.87, "auth": "A3"},
{"id": "drone-7", "trust": 0.82, "auth": "A2"}
],
"max_adversaries": 2
}
{
"consensus_trust": 0.87,
"mission_authority": "A3",
"isolated_agents": ["drone-3"],
"isolation_reason": "trust below threshold",
"escalation_level": "CONSTRAIN",
"active_roster": 6,
"cusum_alerts": ["drone-3"],
"task_redistribution": {
"drone-3_tasks": "reassigned to drone-1, drone-5"
}
}
Minimum roster for f adversaries: n ≥ 3f + 1
7 agents → tolerates 2 adversaries (3×2+1 = 7)
12 agents → tolerates 3 adversaries (3×3+1 = 10 ≤ 12)
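The bound above can be checked mechanically; solving n ≥ 3f + 1 for the largest integer f gives f = ⌊(n − 1)/3⌋ (`max_adversaries` is an illustrative helper name, not part of the published API):

```python
def max_adversaries(n: int) -> int:
    """Largest f satisfying n >= 3f + 1, i.e. f = floor((n - 1) / 3)."""
    return (n - 1) // 3

assert max_adversaries(7) == 2    # matches the 7-agent example above
assert max_adversaries(12) == 3   # 3*3 + 1 = 10 <= 12
assert max_adversaries(6) == 1    # one agent short of tolerating f = 2
```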
The simulation supports single-architecture mode (MAIVA consensus only) and full pipeline mode (MAIVA integrated with SATA, HMAA, ADARA, FLAME, and CARA). Both configurations demonstrate MAIVA behavior under Byzantine fault conditions.
Deterministic Guarantee: All published results use fixed seeds. Math.random() is not used in benchmark-critical paths. The governance pipeline contains zero stochastic components. See Evaluation Protocol for full methodology.
If you reference this architecture in your research, please use one of the following citation formats:
@misc{oktenli2026maiva,
author = {Oktenli, Burak},
title = {Multi-Agent Integrity Verification Architecture},
year = {2026},
publisher = {Zenodo},
doi = {10.5281/zenodo.19015517},
url = {https://doi.org/10.5281/zenodo.19015517},
note = {Georgetown University}
}
This architecture is part of the authority-governed autonomy research program by Burak Oktenli at Georgetown University (M.P.S. Applied Intelligence). It is published on Zenodo with DOI 10.5281/zenodo.19015517 under CC BY 4.0.
Related: Full Research Portfolio · All Repositories · Rover Testbed · UAV Platform · Evaluation Protocol