This architecture extends concepts from adjustable autonomy (Parasuraman et al., 2000), shared autonomy (Sheridan & Verplank, 1978), and supervisory control theory. HMAA advances these foundations by introducing real-time, trust-proportional authority computation with hardware-enforced gating — a capability not present in existing adjustable autonomy frameworks.
An Operational AI Governance Engine for Real-Time Authority Computation in Autonomous Systems
Patent submitted. Zenodo: Oktenli, B. (2026). Human-Machine Authority Architecture. Zenodo. doi:10.5281/zenodo.18861653
Autonomous systems in defense and critical infrastructure require formal mechanisms to regulate the degree of autonomy permitted under varying operational conditions. DoD Directive 3000.09 mandates that autonomous weapon systems maintain appropriate levels of human judgment over the use of force. The Joint All-Domain Command and Control (JADC2) framework requires trusted authority delegation across human-machine teams operating in contested environments. DARPA's Assured Autonomy program identifies the need for provable safety guarantees in autonomous systems.
Current approaches typically implement binary control (fully autonomous or fully manual) without formal intermediate authority states. This creates a governance gap where systems either operate with unconstrained autonomy or fail to act when rapid response is required. HMAA addresses this gap by computing graded authority levels in real-time based on measured system trust.
HMAA implements a four-level authority state machine (A3-A0) that computes operational authority as a continuous function of sensor trust. The authority formula integrates baseline authority, trust gating, damping for rapid trust changes, and the fused trust scalar:
A = A_base × G(τ) × D(Δτ) × τ
Where A_base is baseline authority, G(τ) is a gate forcing A=0 when τ < 0.1, D(Δτ) is a damping factor penalizing rapid trust changes, and τ is the fused trust score from SATA. Authority transitions use hysteresis bands to prevent oscillation: downgrade thresholds are lower than upgrade thresholds, and upward transitions require sustained trust above threshold for 5-15 seconds.
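Under the definitions above, the per-cycle computation can be sketched in Python. The baseline values and the τ < 0.1 gate threshold are taken from this section; the damping gain k = 2.0 is an assumed default, not a published constant:

```python
import math

A_BASE = {"A3": 1.0, "A2": 0.65, "A1": 0.35, "A0": 0.0}  # baseline authority per state
GATE_THRESHOLD = 0.1   # gate forces A = 0 below this fused trust
K_DAMP = 2.0           # damping sensitivity (assumed default)

def gate(tau: float, threshold: float = GATE_THRESHOLD) -> float:
    """G(τ): binary gate — no autonomous action under catastrophic trust loss."""
    return 0.0 if tau < threshold else 1.0

def damp(delta_tau: float, k: float = K_DAMP) -> float:
    """D(Δτ): exponential penalty on rapid trust changes."""
    return math.exp(-k * abs(delta_tau))

def authority(state: str, tau: float, delta_tau: float) -> float:
    """A = A_base × G(τ) × D(Δτ) × τ."""
    return A_BASE[state] * gate(tau) * damp(delta_tau) * tau
```

For example, at state A3 with steady trust τ = 0.9, the authority scalar is 0.9; at τ = 0.05 the gate forces A = 0 regardless of state.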
HMAA is the central authority computation engine in the governance stack. It receives fused trust from SATA, computes authority levels, and feeds the command gate that constrains actuator commands. When authority reaches A0 (revoked), HMAA triggers CARA recovery enforcement. In multi-agent systems, per-agent HMAA authority feeds into MAIVA for swarm-level participation decisions. FLAME wraps HMAA decisions with mandatory deliberation windows to prevent escalation, and ADARA adjusts HMAA authority downward based on adversarial deception probability.
All architectures (SATA, HMAA, CARA, MAIVA, FLAME, ADARA, ERAM) are components of a unified authority-governed autonomy framework. This architecture is validated through six physical research platforms (Rover Testbed, UAV Platform, BLADE-EDGE, BLADE-AV, BLADE-MARITIME, BLADE-INFRA) and thirteen interactive simulations.
When autonomous systems operate in contested or degraded environments, the central governance challenge is not whether the system can act, but under what constraints it should be allowed to act. Current autonomy architectures typically implement binary control: a system is either fully autonomous or fully manual, with limited intermediate states.
This creates two failure modes. In the first, a system with degraded sensor trust continues operating at full authority, executing commands based on unreliable data. In the second, a system conservatively halts all operations when any anomaly is detected, even when partial autonomy would be safe and operationally necessary. Both modes represent governance failures that HMAA addresses through continuous, trust-proportional authority computation.
Parasuraman, Sheridan, and Wickens (2000) proposed a foundational 10-level model of automation that distinguishes degrees of human-machine interaction from fully manual to fully autonomous. However, this model describes static levels rather than dynamic, real-time authority transitions driven by measured system state. Goodrich and Schultz (2007) surveyed human-robot interaction and identified the need for adaptive autonomy where the level of machine authority adjusts based on situational awareness. HMAA implements this concept as a formally specified, computationally verifiable system.
The HMAA engine computes authority as a continuous function of fused sensor trust. The computation pipeline executes at each control cycle (~100Hz in simulation) and produces an authority scalar A that constrains the operational envelope of the autonomous controller.
A_base (baseline authority): starting authority level computed from the current authority state (A3 = 1.0, A2 = 0.65, A1 = 0.35, A0 = 0.0). Provides the reference point for trust-based modification.
G(τ) (trust gate): binary gate forcing authority to zero when fused trust drops below the critical threshold (τ < 0.1). This prevents any autonomous action when sensor trust is catastrophically low.
D(Δτ) (damping factor): penalizes rapid trust changes to prevent authority oscillation during transient sensor events. D = exp(−k × |Δτ|), where k controls damping sensitivity.
Hysteresis: authority transitions use asymmetric thresholds — downgrade triggers are lower than upgrade triggers. Upward transitions require sustained trust above threshold for configurable dwell periods (5–15 s).
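The asymmetric-threshold behavior can be sketched as a small state machine. The band values below are illustrative placeholders, not the deployed configuration, and in the full system recovery out of A0 is additionally governed by CARA rather than by trust dwell alone:

```python
from dataclasses import dataclass

# Illustrative asymmetric bands: downgrade triggers sit below upgrade triggers.
UPGRADE_TAU = {"A0": 0.35, "A1": 0.60, "A2": 0.85}    # trust needed to leave this state upward
DOWNGRADE_TAU = {"A1": 0.20, "A2": 0.45, "A3": 0.70}  # trust floor before moving down
DWELL_S = 10.0  # sustained-trust dwell before an upward transition (5–15 s range)

ORDER = ["A0", "A1", "A2", "A3"]

@dataclass
class AuthorityFSM:
    state: str = "A3"
    dwell_elapsed: float = 0.0

    def step(self, tau: float, dt: float) -> str:
        i = ORDER.index(self.state)
        if self.state in DOWNGRADE_TAU and tau < DOWNGRADE_TAU[self.state]:
            # Downgrade immediately when trust falls below the (lower) band.
            self.state = ORDER[i - 1]
            self.dwell_elapsed = 0.0
        elif self.state in UPGRADE_TAU and tau >= UPGRADE_TAU[self.state]:
            # Upgrade only after trust stays above the (higher) band for DWELL_S.
            self.dwell_elapsed += dt
            if self.dwell_elapsed >= DWELL_S:
                self.state = ORDER[i + 1]
                self.dwell_elapsed = 0.0
        else:
            # Trust inside the hysteresis band: hold state, reset dwell credit.
            self.dwell_elapsed = 0.0
        return self.state
```

Because the downgrade path fires immediately while the upgrade path requires accumulated dwell, a trust signal oscillating inside the band cannot cause authority chatter.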
The HMAA authority state machine is specified in TLA+ (Temporal Logic of Actions) and verified by the TLC model checker. Verification covers 48,751 distinct states and validates 8 safety properties including: no direct A0-to-A3 transition, hysteresis enforcement, gate activation correctness, damping monotonicity, and recovery path determinism. The TLA+ specification is included in the Zenodo repository.
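One of these properties can also be illustrated as a runtime trajectory check. The sketch below enforces a slightly stronger condition than the verified property — at most one level of promotion per transition — which implies that no direct A0-to-A3 jump can occur:

```python
# Authority levels in ascending order of granted authority.
ORDER = {"A0": 0, "A1": 1, "A2": 2, "A3": 3}

def check_no_skip_promotion(trajectory):
    """Return True if every upward transition in a logged authority
    trajectory moves at most one level (so A0 -> A3 is impossible)."""
    for prev, curr in zip(trajectory, trajectory[1:]):
        if ORDER[curr] - ORDER[prev] > 1:
            return False
    return True
```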
HMAA authority behavior has been validated across 7 adversarial experiments, each designed to test specific failure modes that autonomous systems encounter in contested environments:
All 7 experiments produce deterministic results: identical inputs always produce identical authority trajectories. This determinism is a design requirement for safety-critical governance, as it enables complete pre-deployment prediction of system behavior under any tested scenario.
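Determinism of this kind is straightforward to demonstrate: because the authority computation is a pure function of its inputs, replaying the same trust trace must produce a bit-identical authority trajectory. The trace and constants below are illustrative, not the published experiment inputs:

```python
import math

def authority_step(tau, delta_tau, a_base=1.0, k=2.0, gate_threshold=0.1):
    # A = A_base × G(τ) × D(Δτ) × τ — pure: no randomness, no hidden state.
    g = 0.0 if tau < gate_threshold else 1.0
    d = math.exp(-k * abs(delta_tau))
    return a_base * g * d * tau

def run(trace):
    prev = trace[0]
    out = []
    for tau in trace:
        out.append(authority_step(tau, tau - prev))
        prev = tau
    return out

# Fixed trace standing in for a seeded sensor-fault scenario.
trace = [0.9, 0.85, 0.6, 0.3, 0.05, 0.2, 0.5, 0.8]
assert run(trace) == run(trace)  # identical inputs -> identical trajectory
```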
The HMAA simulation implements the complete authority computation pipeline in a browser-based environment running entirely client-side. The simulation executes at approximately 100Hz and provides real-time visualization of trust evolution, authority state transitions, and command envelope constraints.
Continuous A = A_base × G(τ) × D(Δτ) × τ computation with live authority level display and transition logging.
Interactive controls to inject camera occlusion, LiDAR spoofing, IMU drift, RF jamming, and compound attacks during runtime.
Per-sensor trust bars with color-coded status, fused trust timeline, and cross-sensor agreement indicators.
Scrolling timeline showing A3/A2/A1/A0 transitions with timestamps, dwell durations, and hysteresis band visualization.
When authority reaches A0, CARA GREP recovery activates automatically with visible Guard→Reduce→Evaluate→Promote phase progression.
Pre-configured scenarios matching the 7 validated experiments, allowing one-click reproduction of published results.
Experimental Simulation Environment (Research Use). This simulation demonstrates executable validation of the HMAA architecture rather than conceptual design alone. No installation required; runs entirely in the browser.
HMAA is implemented across all physical research platforms as the central authority computation engine. The two original testbeds are:
HMAA runs on Raspberry Pi 5 (autonomy computer), receiving fused trust from 5 SATA-monitored sensors (LiDAR, camera, IMU, encoders, ToF). Authority constrains differential drive motor commands through the ESP32 safety controller. 37 components, 76 connections, 350 simulation runs.
HMAA runs on NVIDIA Jetson Orin NX (AI companion computer), receiving fused trust from 8 SATA-monitored sensors. Authority constrains flight commands through the Cube Orange+ flight controller via MAVLink. 52 components, 250 simulation runs.
HMAA exposes a stateless authority computation endpoint that accepts sensor trust and operational context, returning a computed authority level with enforcement metadata.
POST /authority/compute
Request:

{
  "tau": 0.82,
  "operator_quality": 0.7,
  "context_confidence": 0.6,
  "threat_level": 0.4,
  "delta_tau": -0.03,
  "previous_state": "A3"
}

Response:

{
  "authority_raw": 0.52,
  "authority_level": "A2",
  "gate_active": false,
  "damping_factor": 0.97,
  "dwell_remaining_s": 0,
  "envelope": {
    "max_speed": 0.65,
    "max_turn_rate": 0.5,
    "weapons_auth": false
  },
  "recovery_state": "NONE",
  "timestamp_ms": 1711036800000
}
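The envelope field can be read as a per-level command constraint table. The sketch below reproduces the A2 row from the example response; the other rows, and the choice to leave weapons_auth false at every level, are illustrative assumptions rather than the actual policy:

```python
# Per-level command envelope lookup. The "A2" row matches the example
# response; the remaining rows are illustrative assumptions.
ENVELOPES = {
    "A3": {"max_speed": 1.0,  "max_turn_rate": 1.0,  "weapons_auth": False},
    "A2": {"max_speed": 0.65, "max_turn_rate": 0.5,  "weapons_auth": False},
    "A1": {"max_speed": 0.35, "max_turn_rate": 0.25, "weapons_auth": False},
    "A0": {"max_speed": 0.0,  "max_turn_rate": 0.0,  "weapons_auth": False},
}

def get_envelope(level: str) -> dict:
    return dict(ENVELOPES[level])

def constrain(cmd: dict, level: str) -> dict:
    """Clamp a raw actuator command to the envelope for the current level."""
    env = get_envelope(level)
    return {
        "speed": min(cmd["speed"], env["max_speed"]),
        "turn_rate": min(cmd["turn_rate"], env["max_turn_rate"]),
    }
```

At A0 the envelope collapses to zero in every axis, which is how a revoked authority level translates into a hard stop at the command gate.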
The complete governance pipeline integrating all architectures in a single computation cycle:
def compute_authority(inputs):
    # Stage 1: Sensor trust evaluation (SATA)
    tau = sata.fuse_trust(inputs.sensors)            # τ ∈ [0, 1]

    # Stage 2: Base authority computation (HMAA)
    A_base = hmaa.compute(tau, inputs.context)       # A ∈ {A3, A2, A1, A0}

    # Stage 3: Deception adjustment (ADARA)
    A_adj = adara.adjust(A_base, inputs.deception)   # A_adj = A × (1 − λP)

    # Stage 4: Multi-agent consensus (MAIVA)
    A_consensus = maiva.aggregate(A_adj, peers)      # Byzantine-resilient

    # Stage 5: Escalation control (FLAME)
    delay = flame.compute_delay(A_consensus, inputs.tier, inputs.domain)

    # Stage 6: Recovery check (CARA)
    recovery = cara.evaluate(A_consensus)            # GREP phase or NONE

    return {
        "authority": A_consensus,
        "delay_ms": delay,
        "execute": A_consensus > EXECUTE_THRESHOLD and delay == 0,
        "recovery": recovery,
        "envelope": hmaa.get_envelope(A_consensus),
    }
For resource-constrained embedded systems, the minimal HMAA computation reduces to:
A = tau × gate(tau, threshold=0.1) × damp(delta_tau, k=2.0)

where:
  gate(τ, t) = 0 if τ < t, else 1
  damp(Δτ, k) = exp(−k × |Δτ|)
This 3-line core produces authority values within 2% of the full implementation for single-sensor scenarios, enabling deployment on microcontrollers (ESP32, STM32) with <1KB RAM overhead.
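Written out as runnable code (Python here for illustration; the same three expressions port directly to C on an ESP32 or STM32):

```python
import math

def hmaa_core(tau: float, delta_tau: float,
              threshold: float = 0.1, k: float = 2.0) -> float:
    """Minimal HMAA authority core: A = τ × gate(τ) × damp(Δτ)."""
    gate = 0.0 if tau < threshold else 1.0   # hard cutoff below critical trust
    damp = math.exp(-k * abs(delta_tau))     # penalize rapid trust swings
    return tau * gate * damp
```

With steady trust (Δτ = 0) the core returns τ itself; below the τ = 0.1 threshold it returns 0 regardless of the damping term.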
A UAV operating under HMAA governance encounters progressive sensor degradation from adversarial jamming. This walkthrough traces authority decisions through the complete governance stack:
Result: zero unsafe actions during the entire attack-recovery cycle. Authority degradation was proportional to measured trust, recovery was structured and deterministic, and every transition was logged with timestamps for post-incident reconstruction.
The unified governance pipeline addresses five categories of threats to autonomous systems:
Authority lockout (CARA) serves as the final safety net: when all other mechanisms are insufficient, the system enters structured recovery rather than continuing to operate with compromised inputs.
Simulation-based comparative analysis of authority governance approaches under adversarial sensor attacks across 350 structured experimental runs:
Benchmarks from deterministic simulation across 7 adversarial scenarios (camera occlusion, LiDAR spoofing, IMU drift, RF jamming, compound attack, cross-sensor, recovery dynamics). Binary threshold and Simplex results are simulated baselines using the same sensor fault profiles for fair comparison. Recovery time for the full pipeline is longer because it is structured and verified rather than binary reset.
HMAA authority computation adds minimal overhead to the control loop. FLAME delay injection is context-sensitive and only applies to escalation-capable actions:
HMAA computation runs at control-loop speed (<2ms) and does not introduce perceptible latency for normal operations. FLAME delay injection is intentional governance, not performance overhead: it creates structured deliberation windows only for escalation-capable actions where human oversight is required.
The governance pipeline supports three deployment configurations depending on platform constraints and operational requirements:
HMAA-Core running on microcontrollers (ESP32, STM32). SATA with 2-3 sensors, HMAA minimal authority computation, CARA basic safe-stop. Suitable for small autonomous platforms with limited compute.
Full SATA-HMAA-CARA pipeline on companion computers (Raspberry Pi 5, Jetson). ADARA deception filtering, FLAME delay injection. Interfaces with flight controllers and mission planners via MAVLink or ROS 2.
MAIVA multi-agent aggregation, fleet-level FLAME escalation control, and centralized authority policy management. RESTful API for integration with command-and-control systems and mission planning tools.
End-to-end authority governance from raw sensor input to constrained actuator output:
The system can run as HMAA-Core (single-layer mode: trust + authority only) or Full Pipeline (multi-layer mode: SATA + HMAA + ADARA + MAIVA + FLAME + CARA). Both configurations are available in every simulation.
Deterministic Guarantee: All published results use fixed seeds. Math.random() is not used in benchmark-critical paths. The governance pipeline contains zero stochastic components. See Evaluation Protocol for full methodology.
The following properties are verified by TLC model checking over 48,751 reachable states and enforced at runtime by the authority computation engine:
No governance system is perfect. The following known limitations are documented to support informed deployment decisions and guide future research:
If you reference this architecture in your research, please use one of the following citation formats:
@misc{oktenli2026hmaa,
  author    = {Oktenli, Burak},
  title     = {Human-Machine Authority Architecture},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.18861653},
  url       = {https://doi.org/10.5281/zenodo.18861653},
  note      = {Georgetown University}
}
This architecture is part of the authority-governed autonomy research program by Burak Oktenli at Georgetown University (M.P.S. Applied Intelligence). It is published on Zenodo with DOI 10.5281/zenodo.18861653 under CC BY 4.0.
Related: Full Research Portfolio · All Repositories · Rover Testbed · UAV Platform · Evaluation Protocol