This architecture extends concepts from adversarial machine learning (Goodfellow et al., 2015), anomaly detection, and intrusion/spoofing detection systems. ADARA advances these foundations by computing a proactive deception prior that adjusts authority pre-emptively — rather than detecting attacks after they succeed.
Proactive Deception Prior for Authority-Governed Autonomous Systems
Published on Zenodo: Oktenli, B. (2026). Adversarial Deception-Aware Risk Architecture. Zenodo. DOI: 10.5281/zenodo.19043924
Adversarial manipulation of AI systems represents an escalating threat to national security. Sophisticated adversaries can craft inputs that deceive sensor systems, corrupt decision-making pipelines, and cause autonomous systems to take actions that serve adversarial objectives. Unlike simple sensor faults, adversarial deception is intentionally designed to evade detection while maximizing operational impact.
Current AI safety approaches primarily address accidental failures rather than intentional deception. ADARA addresses this gap by implementing a proactive deception prior that continuously estimates the probability that current inputs are adversarially manipulated, adjusting operational authority downward pre-emptively before deception can cause unsafe actions. This represents a shift from reactive fault handling to proactive adversarial awareness.
ADARA computes a Deception Probability P(adversarial) from multiple evidence streams and uses it to adjust HMAA authority downward pre-emptively through the Deception-Adjusted Authority Formula:
A_adj = A_hmaa × (1 - λ × P_deception)
Where A_hmaa is the authority computed by HMAA, λ is a sensitivity parameter controlling the strength of the deception adjustment, and P_deception is the estimated probability of adversarial manipulation. The Deception Probability Engine computes P(adversarial) from four evidence streams: input-distribution anomaly, temporal correlation, cross-sensor consistency, and the mission-context prior.
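The adjustment formula above can be sketched directly in code. This is a minimal illustration of the published formula, not ADARA's implementation; the function name and range checks are my own.

```python
def deception_adjusted_authority(a_hmaa: float, lam: float, p_deception: float) -> float:
    """Deception-Adjusted Authority Formula: A_adj = A_hmaa * (1 - lambda * P_deception)."""
    if not (0.0 <= lam <= 1.0):
        raise ValueError("lambda must lie in [0, 1]")
    if not (0.0 <= p_deception <= 1.0):
        raise ValueError("P_deception must lie in [0, 1]")
    return a_hmaa * (1.0 - lam * p_deception)

# With the worked-example values from this page:
# A_hmaa = 0.72, lambda = 0.8, P_deception = 0.52 -> A_adj = 0.42048, reported as 0.42
print(deception_adjusted_authority(0.72, 0.8, 0.52))
```

Note that A_adj degrades smoothly with P_deception rather than switching off at a threshold, so authority is reduced in proportion to how suspicious the inputs look.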
ADARA operates as a deception filter between sensor inputs and the HMAA authority engine. It analyzes raw sensor data for adversarial manipulation signatures before SATA trust evaluation, then adjusts the authority computed by HMAA downward proportionally to the estimated deception probability. The Phantom Fleet detection module specifically identifies coordinated deception across multiple sensors that might create false tactical situations. ADARA integrates with MAIVA in multi-agent systems to detect adversarial agents attempting to manipulate swarm consensus.
All architectures (SATA, HMAA, CARA, MAIVA, FLAME, ADARA, ERAM) are components of a unified authority-governed autonomy framework. This architecture is validated through six physical research platforms (Rover Testbed, UAV Platform, BLADE-EDGE, BLADE-AV, BLADE-MARITIME, BLADE-INFRA) and thirteen interactive simulations.
Deployment flexibility: This architecture can operate as part of the full governance pipeline (SATA-HMAA-ADARA-MAIVA-FLAME-CARA) or independently as a single-layer module. ADARA can operate as a standalone deception detection layer on resource-constrained edge devices, providing adversarial risk assessment without the full governance stack.
While SATA detects sensor faults and degradation, it was not designed to detect intentional, sophisticated adversarial manipulation. Adversarial attacks on AI systems (Goodfellow et al., 2015) demonstrate that carefully crafted inputs can cause misclassification while appearing normal to conventional fault detection. In autonomous systems, adversarial deception can create phantom obstacles, hide real threats, or corrupt navigation data in ways that pass basic consistency checks.
Biggio and Roli (2018) documented ten years of adversarial machine learning research showing that attack sophistication continuously increases. NIST published its Adversarial Machine Learning taxonomy (AI 100-2e2023) identifying attack vectors specific to AI-enabled systems. Kurakin et al. (2017) demonstrated that adversarial examples transfer to physical-world sensors, meaning that adversarial attacks on autonomous systems are not theoretical but demonstrated threats.
ADARA addresses this gap by implementing a proactive deception prior: rather than waiting to detect a specific attack, ADARA continuously estimates the probability that current inputs are adversarially manipulated and adjusts authority downward pre-emptively. This represents a fundamental shift from reactive fault handling to proactive adversarial awareness.
The engine computes P(adversarial) from four evidence streams, combined via Bayesian update: a distribution-anomaly score (inputs deviating from expected statistical distributions), a temporal-correlation score (anomalies that co-occur suspiciously in time), a cross-sensor consistency score (disagreement between independent sensor modalities), and a mission-context prior (baseline adversarial likelihood for the current threat environment).
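The page does not specify the exact fusion rule, so the sketch below uses one standard choice: a naive log-odds (naive Bayes) combination seeded by the mission prior, with 0.5 as the uninformative reference point. Function names are illustrative, and the result will not necessarily reproduce the P value shown in the API example.

```python
import math

def logit(p: float) -> float:
    # Log-odds of a probability, clipped away from 0 and 1 for numerical safety.
    p = min(max(p, 1e-6), 1.0 - 1e-6)
    return math.log(p / (1.0 - p))

def fuse_evidence(distribution_anomaly: float, temporal_correlation: float,
                  cross_sensor_score: float, mission_prior: float) -> float:
    """Combine four evidence streams into P(adversarial): start from the
    mission prior, then let each stream shift the log-odds relative to 0.5."""
    log_odds = logit(mission_prior)
    for evidence in (distribution_anomaly, temporal_correlation, cross_sensor_score):
        log_odds += logit(evidence) - logit(0.5)
    return 1.0 / (1.0 + math.exp(-log_odds))

print(fuse_evidence(0.45, 0.62, 0.38, 0.15))
```

A key property of this form of update is symmetry: evidence above 0.5 raises the fused probability, evidence below 0.5 lowers it, and a high mission prior (contested environment) biases the whole estimate upward.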
The Phantom Fleet module specifically addresses coordinated deception across multiple sensors that creates false tactical situations. For example, an adversary might simultaneously inject phantom radar contacts, false GPS tracks, and spoofed AIS signals to create the appearance of a hostile naval force. Phantom Fleet detection identifies these coordinated anomalies by analyzing cross-modal correlation patterns that differ from naturally occurring sensor data.
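The coordinated-anomaly test described above can be sketched as follows. This is a deliberately simplified stand-in for Phantom Fleet's cross-modal correlation analysis: the threshold, the minimum-modality count, and the function name are all hypothetical.

```python
def phantom_fleet_alert(modal_anomalies: dict, threshold: float = 0.6,
                        min_modalities: int = 3) -> bool:
    """Raise an alert when several independent sensor modalities report
    elevated anomaly scores simultaneously -- a signature of coordinated
    spoofing rather than an isolated sensor fault."""
    elevated = [name for name, score in modal_anomalies.items() if score >= threshold]
    return len(elevated) >= min_modalities

# Coordinated spoofing across radar, GPS, and AIS (the naval example above):
print(phantom_fleet_alert({"radar": 0.85, "gps": 0.72, "ais": 0.90}))   # alert
# A single noisy radar return, with GPS and AIS nominal:
print(phantom_fleet_alert({"radar": 0.85, "gps": 0.15, "ais": 0.10}))   # no alert
```

The design intuition is that natural sensor faults are largely independent across modalities, so simultaneous elevated anomalies in radar, GPS, and AIS are far more likely under coordinated deception than under chance.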
The deception adjustment strength is controlled by λ ∈ [0,1]. Higher λ makes the system more conservative (stronger authority reduction for a given P_deception), while lower λ allows more operational tolerance of uncertain inputs. λ is configurable per-mission based on the threat environment: contested environments warrant higher λ, while permissive environments allow lower λ.
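The λ trade-off can be made concrete with a quick sweep over the formula, using illustrative values matching the worked example on this page (A_hmaa = 0.72, P_deception = 0.52):

```python
# Sweep lambda to see how deception sensitivity trades authority for caution.
a_hmaa, p_deception = 0.72, 0.52
rows = [(lam, a_hmaa * (1.0 - lam * p_deception)) for lam in (0.2, 0.5, 0.8, 1.0)]
for lam, a_adj in rows:
    print(f"lambda={lam:.1f}  A_adj={a_adj:.3f}")
# lambda=0.2  A_adj=0.645   (permissive)
# lambda=0.5  A_adj=0.533
# lambda=0.8  A_adj=0.420
# lambda=1.0  A_adj=0.346   (maximally conservative)
```

Even at λ = 1, authority is scaled rather than zeroed, so the system degrades gracefully under suspected deception instead of halting outright.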
The ADARA simulation demonstrates the complete deception-aware authority pipeline, including the Deception Probability Engine, authority adjustment computation, and the Phantom Fleet detection module. Simulation features include:
- Real-time P(adversarial) display, with a breakdown of each evidence stream's contribution and the current λ setting.
- Side-by-side display of A_hmaa (without deception adjustment) versus A_adj (with ADARA correction), showing the protective authority reduction.
- A configurable scenario in which coordinated sensor spoofing creates false hostile contacts, demonstrating Phantom Fleet detection and response.
- An interactive λ slider showing how deception sensitivity affects authority reduction, enabling exploration of conservative versus permissive configurations.
POST /deception/evaluate

Request:

{
  "hmaa_authority": 0.72,
  "lambda": 0.8,
  "evidence": {
    "distribution_anomaly": 0.45,
    "temporal_correlation": 0.62,
    "cross_sensor_score": 0.38,
    "mission_prior": 0.15
  }
}

Response:

{
  "p_deception": 0.52,
  "authority_adjusted": 0.42,
  "reduction_pct": 41.7,
  "phantom_fleet_alert": false,
  "evidence_breakdown": {
    "distribution": 0.45,
    "temporal": 0.62,
    "cross_sensor": 0.38,
    "bayesian_prior": 0.15
  },
  "recommendation": "restrict_authority"
}
A_adj = A_hmaa × (1 - λ × P_deception)

Example: A_hmaa = 0.72, λ = 0.8, P_deception = 0.52 → A_adj = 0.72 × (1 - 0.8 × 0.52) = 0.72 × 0.584 ≈ 0.42
The simulation supports single-architecture mode (ADARA deception detection only) and full pipeline mode (ADARA integrated with SATA, HMAA, MAIVA, FLAME, and CARA). Both configurations demonstrate ADARA behavior under adversarial deception conditions.
Deterministic Guarantee: All published results use fixed seeds. Math.random() is not used in benchmark-critical paths. The governance pipeline contains zero stochastic components. See Evaluation Protocol for full methodology.
If you reference this architecture in your research, please use one of the following citation formats:
@misc{oktenli2026adara,
  author    = {Oktenli, Burak},
  title     = {Adversarial Deception-Aware Risk Architecture},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.19043924},
  url       = {https://doi.org/10.5281/zenodo.19043924},
  note      = {Georgetown University}
}
This architecture is part of the authority-governed autonomy research program by Burak Oktenli at Georgetown University (M.P.S. Applied Intelligence). It is published on Zenodo with DOI 10.5281/zenodo.19043924 under CC BY 4.0.