This architecture extends concepts from remote attestation (TPM 2.0, TCG), trusted computing, and sensor fusion integrity monitoring. SATA advances these foundations by producing a continuous trust scalar τ ∈ [0,1] from hardware-anchored attestation chains — enabling real-time authority modulation rather than binary pass/fail attestation decisions.
A Hardware-Anchored τ-Chain Protocol for Autonomous Mission Authority
Patent submitted. Zenodo: Oktenli, B. (2026). Sensor Attestation and Trust Anchoring. Zenodo. doi:10.5281/zenodo.18936251
Sensor spoofing and degradation represent primary attack vectors against autonomous systems. GPS spoofing can redirect unmanned vehicles while the navigation system maintains full confidence. LiDAR interference can create phantom obstacles or blind perception systems. IMU manipulation through vibration or electromagnetic interference corrupts orientation data. These attacks exploit a fundamental weakness: current systems lack formal mechanisms to evaluate and quantify trust in their own sensor data.
The NIST AI Risk Management Framework identifies measurement and monitoring of AI system inputs as essential governance practices. Khaleghi et al. (2013) survey multisensor data fusion and identify trust-aware fusion as an open research challenge. SATA addresses this by providing continuous, mathematically grounded sensor trust evaluation using Dempster-Shafer evidence theory.
SATA computes a continuous trust scalar τ ∈ [0,1] for each sensor using weighted Dempster-Shafer belief functions over a binary frame of discernment Θ = {Trusted, Compromised}. Per-sensor basic probability assignments (BPAs) are constructed from four diagnostic components:
m_i({Trusted}) = τ(s_i, t) × w_i
m_i({Compromised}) = (1 - τ(s_i, t)) × w_i
m_i(Θ) = 1 - w_i
Combination (Dempster's rule): (m_1 ⊕ m_2)(A) = (1/(1-K)) × Σ m_1(B) × m_2(C) over all B∩C = A, where K = Σ m_1(B) × m_2(C) over all B∩C = ∅ is the conflict mass.
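The BPA construction and combination rule above can be sketched directly in Python over the binary frame Θ = {Trusted, Compromised}. This is an illustrative implementation, not the SATA codebase; all function names and the example trust/weight values are assumptions chosen to mirror the formulas:

```python
def build_bpa(trust, weight):
    """BPA over Θ = {Trusted, Compromised}; the residual 1 - w_i is
    assigned to the full frame Θ, representing ignorance."""
    return {
        frozenset({"T"}): trust * weight,
        frozenset({"C"}): (1.0 - trust) * weight,
        frozenset({"T", "C"}): 1.0 - weight,   # mass on Θ
    }

def dempster_combine(m1, m2):
    """Dempster's rule: sum products over set intersections,
    then renormalize by 1 - K, where K is the conflict mass."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c
            if a:
                combined[a] = combined.get(a, 0.0) + mb * mc
            else:
                conflict += mb * mc            # B∩C = ∅ contributes to K
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two sensors: one healthy (trust 0.95, w=0.30), one degraded (0.40, w=0.25)
fused = dempster_combine(build_bpa(0.95, 0.30), build_bpa(0.40, 0.25))
tau = fused[frozenset({"T"})]                  # belief in {Trusted}
```

Note how the degraded sensor's low weight (0.25) pushes most of its mass onto Θ, so it dilutes rather than dominates the fused belief.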
SATA is the foundation layer of the governance stack. It provides the fused trust scalar τ that drives all downstream authority decisions. HMAA uses τ to compute authority levels. ADARA modifies SATA's output by incorporating adversarial deception probability. In multi-agent systems, per-agent SATA trust feeds into MAIVA for swarm-level aggregation. Both the rover testbed (5 sensors) and UAV platform (8 sensors) implement SATA as their primary trust evaluation mechanism.
All architectures (SATA, HMAA, CARA, MAIVA, FLAME, ADARA, ERAM) are components of a unified authority-governed autonomy framework. This architecture is validated through six physical research platforms (Rover Testbed, UAV Platform, BLADE-EDGE, BLADE-AV, BLADE-MARITIME, BLADE-INFRA) and thirteen interactive simulations.
Deployment flexibility: This architecture can operate as part of the full governance pipeline (SATA-HMAA-ADARA-MAIVA-FLAME-CARA) or independently as a single-layer module. SATA can operate as a standalone trust evaluation layer on resource-constrained edge devices, providing sensor attestation without the full governance stack.
Autonomous systems rely entirely on sensor data to perceive their environment and make operational decisions. When sensor data is unreliable, whether due to hardware degradation, environmental interference, or adversarial manipulation, every downstream decision is compromised. The fundamental challenge is: how does an autonomous system know whether to trust its own sensors?
Current sensor fusion approaches (Khaleghi et al., 2013) typically assume all sensors are reliable and focus on optimally combining their outputs. When a sensor fails, most systems either ignore it entirely or use simple voting schemes. These approaches lack the mathematical rigor needed for safety-critical governance: they cannot express partial trust, cannot detect sophisticated spoofing, and cannot quantify their own confidence in sensor integrity.
GPS spoofing demonstrations have shown that autonomous vehicles can be redirected while the navigation system maintains full confidence in the spoofed signal. LiDAR interference can create phantom obstacles or blind perception systems. IMU drift from electromagnetic interference corrupts orientation data gradually enough to evade threshold-based detection. SATA addresses these vulnerabilities through formal, evidence-theoretic trust evaluation.
SATA uses Dempster-Shafer evidence theory rather than probability theory because it explicitly represents uncertainty through the mass assigned to the full frame of discernment Θ. This means the system can distinguish between "I trust this sensor" (high m({Trusted})), "I distrust this sensor" (high m({Compromised})), and "I do not have enough evidence" (high m(Θ)).
Internal consistency: Evaluates whether a sensor's own readings are self-consistent over time. Detects noise spikes, stuck readings, and out-of-range values. Uses running variance comparison against calibrated baselines.
Cross-sensor validation: Compares each sensor's output against other sensors measuring overlapping phenomena. Camera and LiDAR should agree on obstacle positions; IMU and encoders should agree on motion. Disagreement above threshold triggers a trust penalty of 0.30.
Temporal coherence: Measures how smoothly sensor readings change over time. Physical sensors have characteristic noise profiles; deviations suggest interference. Sudden discontinuities inconsistent with platform dynamics indicate potential spoofing.
Physical plausibility: Checks whether sensor readings are physically possible given the platform's kinematic constraints. Speed readings exceeding motor capabilities, altitude changes exceeding climb rates, or position jumps exceeding physical limits indicate data corruption.
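One simple way to fold the four diagnostic scores into a single per-sensor trust value is a geometric mean, so that any single near-zero diagnostic (e.g. a physically impossible reading) drags trust toward zero. The source does not specify SATA's exact aggregation function, so this is a hedged sketch; the function name and the floor constant are illustrative:

```python
def sensor_trust(internal, cross, temporal, physical):
    """Aggregate the four diagnostic scores (each in [0,1]) into one
    per-sensor trust value using a geometric mean. Assumed aggregation,
    not the published SATA formula."""
    scores = [internal, cross, temporal, physical]
    product = 1.0
    for s in scores:
        product *= max(s, 1e-6)    # floor keeps the product well-defined at 0
    return product ** (1.0 / len(scores))

# Values taken from the API example below: healthy LiDAR vs. degraded camera
lidar = sensor_trust(0.95, 0.88, 0.92, 0.97)
camera = sensor_trust(0.42, 0.35, 0.50, 0.88)
```

With these inputs the LiDAR scores above 0.9 while the camera falls near 0.5, matching the qualitative healthy/degraded split in the API example.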
Trust decay is deliberately faster than trust recovery: decay occurs with a 0.5-second time constant while recovery requires a 5.0-second time constant. This asymmetry ensures that a compromised sensor cannot quickly regain trust simply by momentarily producing correct readings. The 10:1 recovery-to-decay ratio means the system is conservative about trusting sensors that have been flagged as potentially unreliable.
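The asymmetric dynamics can be modeled as a first-order exponential filter that switches time constants depending on the direction of change. This is a minimal sketch assuming that interpretation of the 0.5 s decay / 5.0 s recovery figures; the function name is illustrative:

```python
import math

def update_trust(trust, target, dt, tau_decay=0.5, tau_recover=5.0):
    """First-order filter with asymmetric time constants:
    fast when trust is falling, slow when it is recovering."""
    tc = tau_decay if target < trust else tau_recover
    alpha = 1.0 - math.exp(-dt / tc)
    return trust + alpha * (target - trust)

# A spoofed sensor loses trust within a few hundred milliseconds...
t = 1.0
for _ in range(10):                  # 10 steps of 100 ms toward target 0.0
    t = update_trust(t, 0.0, 0.1)
low = t                              # ≈ 0.135 after 1 s of decay
# ...but regains it slowly even once readings look correct again
for _ in range(10):                  # 1 s of recovery toward target 1.0
    t = update_trust(t, 1.0, 0.1)
```

After one second of decay trust has collapsed to roughly 0.14, yet one second of clean readings only lifts it back to about 0.29, which is exactly the conservatism the 10:1 ratio is meant to enforce.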
The SATA simulation provides real-time visualization of the complete trust evaluation pipeline. Users can manipulate individual sensor health, inject specific fault types, and observe how trust propagates through the Dempster-Shafer combination to produce the fused trust scalar that drives HMAA authority decisions.
Individual trust indicators for each sensor showing current τ value, diagnostic component breakdown, and color-coded health status.
Shows BPA construction, combination process, and normalization in real-time as trust fusion produces the fused trust scalar.
Pairwise agreement matrix showing which sensors agree and which are in conflict, with disagreement penalties visible.
Scrolling timeline showing asymmetric trust dynamics: fast decay and slow recovery, with HMAA authority levels overlaid.
LiDAR (RPLIDAR A1, 360-degree scan), Camera (Raspberry Pi Camera Module 3), IMU (MPU-6050, 6-axis), Time-of-Flight (VL53L0X array), and motor encoders. Each monitored by SATA with cross-validation between overlapping modalities.
GPS (u-blox ZED-F9P RTK), LiDAR (TFmini-S), camera (Intel RealSense D435), IMU (InvenSense ICM-42688-P on Cube Orange+), barometer, magnetometer, optical flow, and ESC telemetry. Expanded cross-sensor validation matrix for aerial operations.
POST /trust/evaluate
{
"sensors": [
{"id": "lidar", "internal": 0.95,
"cross": 0.88, "temporal": 0.92,
"physical": 0.97, "weight": 0.30},
{"id": "camera", "internal": 0.42,
"cross": 0.35, "temporal": 0.50,
"physical": 0.88, "weight": 0.25},
{"id": "imu", "internal": 0.98,
"cross": 0.91, "temporal": 0.96,
"physical": 0.99, "weight": 0.20}
]
}
{
"fused_trust": 0.62,
"per_sensor": {
"lidar": {"trust": 0.93, "status": "healthy"},
"camera": {"trust": 0.38, "status": "degraded"},
"imu": {"trust": 0.96, "status": "healthy"}
},
"disagreements": [
{"pair": "lidar-camera", "delta": 0.55}
],
"veto_active": true,
"veto_source": "camera"
}
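A client consuming the /trust/evaluate response might parse it as follows. This is a hedged client-side sketch: the 0.5 degraded threshold and the function name are assumptions, not part of the published API, and the sample payload is the response shown above:

```python
import json

DEGRADED_THRESHOLD = 0.5   # assumed cutoff for flagging a sensor (illustrative)

def summarize(response_json):
    """Parse a /trust/evaluate response and list sensors whose
    per-sensor trust falls below the degraded threshold."""
    resp = json.loads(response_json)
    flagged = [name for name, info in resp["per_sensor"].items()
               if info["trust"] < DEGRADED_THRESHOLD]
    return resp["fused_trust"], flagged

sample = '''{
  "fused_trust": 0.62,
  "per_sensor": {
    "lidar":  {"trust": 0.93, "status": "healthy"},
    "camera": {"trust": 0.38, "status": "degraded"},
    "imu":    {"trust": 0.96, "status": "healthy"}
  },
  "veto_active": true,
  "veto_source": "camera"
}'''
tau, flagged = summarize(sample)    # tau = 0.62, flagged = ["camera"]
```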
def sata_fuse(sensors):
    cross_validate(sensors)                  # apply disagreement penalties first
    bpas = [build_bpa(s) for s in sensors]   # per-sensor BPA over Θ
    fused = dempster_combine(bpas)           # DS combination
    tau = fused.belief({"Trusted"})          # extract trust scalar
    return clamp(tau, 0.0, 1.0)              # τ ∈ [0,1] → HMAA
The simulation supports single-architecture mode (SATA trust evaluation only) and full pipeline mode (SATA integrated with HMAA, ADARA, MAIVA, FLAME, and CARA). Both configurations demonstrate SATA behavior under adversarial conditions.
Deterministic Guarantee: All published results use fixed seeds. Math.random() is not used in benchmark-critical paths. The governance pipeline contains zero stochastic components. See Evaluation Protocol for full methodology.
If you reference this architecture in your research, please use one of the following citation formats:
@misc{oktenli2026sata,
author = {Oktenli, Burak},
title = {Sensor Attestation and Trust Anchoring},
year = {2026},
publisher = {Zenodo},
doi = {10.5281/zenodo.18936251},
url = {https://doi.org/10.5281/zenodo.18936251},
note = {Georgetown University}
}
This architecture is part of the authority-governed autonomy research program by Burak Oktenli at Georgetown University (M.P.S. Applied Intelligence). It is published on Zenodo with DOI 10.5281/zenodo.18936251 under CC BY 4.0.
Related: Full Research Portfolio · All Repositories · Rover Testbed · UAV Platform · Evaluation Protocol