George H. Heilmeier, a former DARPA director, devised nine questions (the Heilmeier Catechism) that every research proposal should answer. This page answers all nine for AUTHREX in plain English. Each answer is under 200 words: no jargon, no architecture acronyms, no evasions.
Build a safety layer that sits between an autonomous system and the physical world, and limits what the system is allowed to do based on how much its sensors can be trusted right now.
Think of it as a virtual force field. When sensor data is clean and consistent, the autonomous system has full authority. When sensors disagree, when there is evidence of jamming, or when the system is being actively deceived by an adversary, authority drops automatically and irreversible actions are blocked. The human stays in command of what matters.
Today's answer: binary kill switches, watchdog timers, pre-flight safety checks, and operator override. All of these are binary, fully on or fully off: the system is either "in autonomy" or "not in autonomy."
The problem: adversaries don't attack when the system is off. They attack during the most confusing moments of operation, when sensor data is degraded but not obviously broken. Today's systems either proceed on bad data (and fail catastrophically) or trip a kill switch (and abort the entire mission). There is no middle ground.
Runtime assurance approaches from AFRL's Safe Autonomy team (Dr. Kerianne Hobbs and others) have started closing this gap for specific platforms. AUTHREX generalizes that work: the same governance pipeline runs on a drone, a self-driving car, a ship, and a power grid, because the underlying authority computation is domain-agnostic.
What is new is three things working together, not any one of them individually:
1. Graded authority, not binary. Four authority levels (A0–A3) that change continuously based on trust. The system isn't "on" or "off"; it's operating at whatever authority level its current sensor trust can support. Downgrades happen in milliseconds; upgrades have deliberate delays to prevent oscillation.
2. Hardware-anchored trust. Sensor trust scores are computed in hardware (FPGA) and cryptographically attested. An attacker who compromises the software cannot fake the trust score. This is what makes the governance layer tamper-evident rather than just "another piece of software."
3. Formally verified state machine. The authority state transitions are verified in TLA+: the model checker explores all 48,751 reachable states and confirms that no unsafe state is among them. This is not a test; it is an exhaustive proof that the system cannot enter an unsafe state, regardless of inputs.
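The asymmetric timing in item 1 (instant downgrades, deliberate upgrades) can be sketched as a small governor loop. This is an illustrative model only: the level names follow the A0–A3 scheme above, but the trust thresholds and dwell time are invented for the example, not AUTHREX's deployed values.

```python
# Authority levels, lowest to highest: A0 blocks irreversible actions,
# A3 is full autonomy.
LEVELS = ["A0", "A1", "A2", "A3"]

# Hypothetical minimum trust score for each level (illustrative values).
THRESHOLDS = {"A3": 0.9, "A2": 0.7, "A1": 0.4}

UPGRADE_DWELL_S = 2.0  # trust must hold this long before stepping up a level


class AuthorityGovernor:
    """Downgrades take effect at once; upgrades wait out a dwell window."""

    def __init__(self):
        self.level = "A3"
        self._upgrade_since = None  # time trust first supported a higher level

    def _supported_level(self, trust):
        for lvl in ("A3", "A2", "A1"):
            if trust >= THRESHOLDS[lvl]:
                return lvl
        return "A0"

    def update(self, trust, now):
        target = self._supported_level(trust)
        cur, tgt = LEVELS.index(self.level), LEVELS.index(target)
        if tgt < cur:
            self.level = target            # downgrade immediately
            self._upgrade_since = None
        elif tgt > cur:
            if self._upgrade_since is None:
                self._upgrade_since = now  # start the deliberation window
            elif now - self._upgrade_since >= UPGRADE_DWELL_S:
                self.level = LEVELS[cur + 1]  # step up one level at a time
                self._upgrade_since = None
        else:
            self._upgrade_since = None
        return self.level
```

The one-level-at-a-time upgrade behind a dwell window is what keeps a noisy trust score from oscillating the system between authority levels.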
Why will it succeed? The underlying mathematics is standard and well established: Dempster-Shafer evidence fusion, Byzantine fault tolerance, CUSUM anomaly detection. The novelty is the integration. Four U.S. provisional patents filed (March 2026); 24 publications with DOIs; and 13 validated simulations totaling 2,800+ runs show the approach works across the drone, automotive, maritime, and infrastructure domains.
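Of the standard pieces named above, Dempster-Shafer fusion is the one that turns disagreeing sensors into a single trust mass. A minimal sketch of Dempster's rule of combination over a two-hypothesis frame ({trusted, compromised}), with made-up mass values:

```python
def combine(m1, m2):
    """Dempster's rule: combine two basic mass assignments.

    Keys are frozensets of hypotheses; mass on the full frame encodes
    "don't know". Mass landing on an empty intersection is conflict,
    and is normalized away.
    """
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}


# Frame of discernment: the sensor is Trusted or Compromised.
T, C = frozenset({"T"}), frozenset({"C"})
TC = T | C  # full frame = uncertainty

# Made-up evidence from two cross-checking sources:
gps = {T: 0.6, C: 0.1, TC: 0.3}
imu = {T: 0.7, C: 0.1, TC: 0.2}
fused = combine(gps, imu)
```

Conflicting evidence (one source says trusted, the other compromised) is discarded and the remaining mass renormalized, so two mildly agreeing sources end up with more belief in "trusted" than either held alone.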
Who cares:
DoD program offices running AI-enabled weapons (DARPA, AFRL Safe Autonomy, Navy ONR, Army AI Integration Center). They need DoDD 3000.09 compliance for lethal autonomous systems. AUTHREX provides the formal authority layer that directive requires.
Autonomous vehicle manufacturers (Mobileye, Waymo, Tesla, and Tier 1 suppliers). ISO 26262 ASIL-D safety targets are difficult to meet without a formal authority framework. AUTHREX's BLADE-AV variant targets this directly.
Critical infrastructure operators (NERC CIP–regulated utilities, water systems after Oldsmar, SCADA operators). They need governance for AI integrated into ICS environments.
Allied defense ministries facing AIS spoofing, GPS jamming, and contested-domain operations.
If successful: fewer AI-caused friendly-fire incidents, fewer autonomous-vehicle fatalities from sensor failures, and a credible path for the U.S. and its allies to deploy increasingly autonomous systems without ceding human command authority to algorithms that can be fooled.
Technical:
False positives that overly restrict autonomy (e.g., a drone returns to base unnecessarily in benign conditions). Mitigation: conservative deliberation windows, extensive per-domain tuning, and ADARA's calibration against known adversarial signatures.
Latency budget overrun on extreme-edge platforms. Mitigation: the current FPGA implementation holds a 5 ms end-to-end budget on Zynq UltraScale+; for platforms that need lower latency, a dedicated-silicon pathway is planned.
Adoption:
Integration cost for existing platforms. Mitigation: BLADE-SDK designed for drop-in integration with MAVLink, ROS 2, AUTOSAR, and ICS/SCADA stacks. No autonomy software rewrites required.
Regulatory/policy:
DoDD 3000.09 interpretation may shift; ISO 26262 update cycles slow. Mitigation: architecture is standards-configurable; the governance logic remains constant while the compliance mapping adapts.
Research to date has been bootstrapped (independent research, no federal funding yet), and hardware platform costs are already documented.
For federally funded pathway: SBIR Phase I ($250K / 6 months) would fund hardening of the FPGA governance bitstream and a third-party safety audit. Phase II ($1.7M / 24 months) funds full MIL-STD-810G qualification and DO-178C review for airworthiness. Total to a TRL 6 deliverable: ~$2M over 30 months.
Current status (April 2026): TRL 2–4, depending on platform. Rover and UAV testbeds are at TRL 4 (components validated in a laboratory environment). BLADE platforms are at TRL 2–3 (technology concept formulated; proof of concept demonstrated in simulation).
Path to TRL 6 (demonstration in operationally relevant environment): 24–30 months with SBIR Phase I/II funding. Key milestones:
Months 0–6: FPGA bitstream hardening, formal safety case review, independent red team assessment.
Months 6–18: Platform integration with a defense prime (Shield AI, Anduril, or equivalent). Live-fire testing at a DoD range with governance instrumentation.
Months 18–30: Multi-domain demonstrations. Deliverable: TRL 6 on at least two platforms (UAV + ground or UAV + maritime).
Midterm (6 months):
An independent red team demonstrates that AUTHREX correctly detects and mitigates five canonical attack scenarios: GPS spoofing (drone), an adversarial patch on a camera (AV), AIS spoofing (ship), CAN bus injection (ground vehicle), and a SCADA replay attack (infrastructure). Target: 100% detection and zero false positives in benign operation, over 10,000 simulation runs per scenario.
A formal verification report confirms, after each design change, that the TLA+ model checker still finds no reachable unsafe state.
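Several of the midterm scenarios (GPS spoofing, replay, injection) present as step or drift changes in sensor residuals, which is exactly what the CUSUM detector named earlier is designed to flag. A minimal one-sided CUSUM sketch; the drift and threshold parameters are illustrative tuning, not AUTHREX's:

```python
def cusum_alarm(samples, mean, k=0.5, h=5.0):
    """One-sided CUSUM: return the index of the first alarm, or None.

    k is the drift allowance (slack per sample) and h the decision
    threshold; both are illustrative tuning values for this sketch.
    """
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mean) - k)  # accumulate excess above mean + k
        if s > h:
            return i
    return None


# Nominal residuals hover near zero; a spoofing-style step change follows.
benign = [0.1, -0.2, 0.3, 0.0, -0.1] * 20
attack = benign + [2.0] * 10
```

Because single benign samples never exceed the drift allowance, the sum stays pinned at zero in normal operation; a sustained step change accumulates quickly and trips the threshold within a few samples.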
Final (24–30 months):
Live demonstration at a DoD range: a BLADE-EDGE-equipped UAV operating in an actively jammed RF environment correctly downgrades authority, blocks unsafe engagements, and completes mission via authority-governed fallbacks. Observers from at least two program offices.
Third-party audit report certifying DoDD 3000.09 compliance and ISO 26262 ASIL-D traceability.
Government transition:
Primary path: SBIR Phase I → Phase II → Phase III (sole-source contract with a defense prime). AUTHREX's reference implementations for five domains (drone, AV, maritime, ICS, DEW) are drop-in for primes already building autonomous systems. The BLADE-SDK wraps the governance pipeline so integration does not require rewriting autonomy software.
Secondary path: Direct licensing to DoD program offices via the four U.S. provisional patent applications.
Commercial transition:
BLADE-AV targets the ISO 26262 ASIL-D market (autonomous vehicles). BLADE-INFRA targets NERC CIP–regulated utilities. Both are large commercial markets with existing demand for formal safety governance. Revenue model: IP licensing to Tier 1 suppliers + SDK support contracts.
Research transition:
All research output is CC BY 4.0 (Zenodo) and MIT (code). Formal specifications, simulation code, and experimental results are openly reproducible to support academic follow-on work, including by AFRL's Safe Autonomy research team and partner university labs.
If these answers satisfy the "so what?" question, the technical depth behind each one is available one click away.