How a trust-proportional authority layer keeps an unmanned ground vehicle from acting blindly in a degraded visual environment while remaining useful under degraded sensing.
A small unmanned ground vehicle (SUGV) is conducting a reconnaissance run in an urban environment. A building fire upwind has produced dense smoke that has sharply reduced visibility: the primary camera is partially obscured, LiDAR returns are disrupted by smoke particulates at close range, and the SATA sensor-trust score is dropping fast.
The SUGV's autonomy stack is still detecting objects (walls, doorways, moving figures), but it is making inferences from partial data. It has just detected what it classifies as a "civilian, not combatant" figure at 8 m range, behind smoke. The mission's rules of engagement (ROE) prohibit engaging civilians.
But the figure might be a combatant. The camera feature activations are unusual. The thermal signature is ambiguous through the smoke. The system is about to make an authorization decision based on degraded data.
Today's autonomous systems face this situation with binary tools: either full autonomy or a kill switch. Neither is safe here.
AUTHREX sits between the autonomy software and the physical actuators. When something goes wrong, each layer does its job in milliseconds, without waiting for human review at every step, but also without letting the system take irreversible action on corrupted data.
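That interposition can be sketched as a simple actuator gate. This is a minimal illustration, not the documented AUTHREX interface: the function name, tier numbering, and return shape are all assumptions.

```python
def gate_command(command, authority, targets_human):
    """Illustrative actuator gate in the spirit of AUTHREX (all names
    and tiers assumed for this sketch).

    The gate sits between the autonomy stack and the actuators: a
    command passes only if the current authority tier permits it, so
    the check runs in-line in milliseconds rather than waiting on a
    human round trip for every action.
    """
    if targets_human and authority < 3:  # below A3, actions on humans need a person
        return ("hold", "human authorization required")
    return ("execute", command)
```

At full authority, `gate_command("advance", 3, False)` passes the command through; at a downgraded tier, `gate_command("engage", 2, True)` holds it for the operator.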
SATA monitors visibility, LiDAR return density, and camera contrast continuously. As smoke reduces these, trust drops from 0.91 to 0.42 in under 2 seconds. The fall rate itself is a signal: steep drops often indicate environmental degradation (smoke, fog, sand) rather than an attack.
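A SATA-style monitor can be sketched as a fused trust score plus a windowed fall-rate estimate. The fusion weights, window length, and class layout below are illustrative assumptions, not the documented design:

```python
from collections import deque

class TrustMonitor:
    """Tracks a fused sensor-trust score and its fall rate (sketch;
    weights and window are assumed, not calibrated values)."""

    def __init__(self, window_s=2.0, dt=0.1):
        self.dt = dt
        self.history = deque(maxlen=max(2, round(window_s / dt)))

    def update(self, visibility, lidar_density, cam_contrast):
        # Simple weighted fusion of normalized [0, 1] channel scores.
        trust = 0.4 * visibility + 0.3 * lidar_density + 0.3 * cam_contrast
        self.history.append(trust)
        return trust

    def fall_rate(self):
        # Trust lost per second over the window; a steep positive rate
        # suggests environmental degradation rather than an attack.
        if len(self.history) < 2:
            return 0.0
        span = (len(self.history) - 1) * self.dt
        return (self.history[0] - self.history[-1]) / span
```

Replaying the scenario's drop from 0.91 to 0.42 over about two seconds yields a fall rate of roughly 0.25 trust/s, well above what slow sensor drift would produce.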
ADARA distinguishes degradation patterns. Gradual camera contrast loss across the entire field of view with matching thermal patterns is consistent with smoke. This is NOT flagged as an attack, just degraded sensing. Adversarial probability stays low (0.09). The point is precision: ADARA doesn't cry wolf on environmental conditions.
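That discrimination logic can be sketched as a toy heuristic. The feature names and thresholds below are assumptions; only the 0.09 figure and the smoke signature (field-wide contrast loss corroborated by thermal) come from the text above:

```python
def adversarial_probability(contrast_drop_uniformity, thermal_agreement):
    """Toy ADARA-style discriminator (features and thresholds assumed).

    Gradual, field-wide contrast loss that matches the thermal picture
    is the smoke/fog/sand signature; a drop the thermal channel does
    not corroborate looks more like sensor spoofing.
    """
    if contrast_drop_uniformity > 0.8 and thermal_agreement > 0.8:
        return 0.09  # consistent with environmental degradation
    if thermal_agreement < 0.3:
        return 0.70  # cross-sensor disagreement: possible attack
    return 0.50      # ambiguous: defer to other evidence
```

The design choice this illustrates is precision on the benign side: environmental signatures get a confidently low score rather than a hedge, so operators are not desensitized by false attack alarms.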
At trust 0.42, HMAA downgrades authority from A3 (autonomous navigation and decision) to A2 (continue navigation, but human authorization required for any action toward an identified human figure). Classification remains, but action on classification requires a human operator in the loop.
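The tier mapping can be sketched as a threshold function. The 0.20 floor comes from the fallback condition described below; the 0.60 boundary for full autonomy is an assumed value for illustration:

```python
from enum import IntEnum

class Authority(IntEnum):
    A1 = 1  # hold position, passive sensing only
    A2 = 2  # navigate; human in the loop for action on human figures
    A3 = 3  # autonomous navigation and decision

def authority_for(trust):
    # 0.60 boundary is assumed; 0.20 matches the fallback threshold.
    if trust >= 0.60:
        return Authority.A3
    if trust >= 0.20:
        return Authority.A2
    return Authority.A1
```

In the scenario, trust 0.91 maps to A3 and the post-smoke 0.42 maps to A2, downgrading action on the detected figure to human authorization while navigation continues.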
If trust drops below 0.20 (full obscuration), CARA executes: stop, hold position, transmit last-known sensor data to operator, maintain thermal/acoustic awareness passively. The SUGV becomes a stationary sensor node rather than a mobile risk. When conditions improve, authority can be restored.
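The fallback sequence above can be sketched as an ordered list of commands pushed into a command sink. The `dispatch` callable and the command names are hypothetical stand-ins for the real driver interface:

```python
def cara_fallback(dispatch):
    """Runs the fallback sequence described above, in order.

    `dispatch` is a hypothetical command sink (e.g. a motor/comms
    driver callback); command names are illustrative.
    """
    for command in (
        "stop",
        "hold_position",
        "transmit_last_known_sensor_data",
        "enable_passive_thermal_acoustic",
    ):
        dispatch(command)
    return "stationary_sensor_node"
```

The ordering matters: motion stops before anything else, so the vehicle is never navigating blind while it negotiates comms or reconfigures sensors.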
What the operator sees: A notification: "Sensor trust degraded due to smoke. Robot operating at reduced authority. Human authorization required to act on detected human figure." The operator reviews the partial sensor data and decides: request clarification, approach carefully, or withdraw. The decision is informed, not pressured.
What the mission gets: A robot still collecting sensor data, still positioned forward, still useful, but not making irreversible decisions on degraded inputs. The mission continues at a lower autonomy tier until conditions improve.
What doesn't happen: No misclassification acted upon. No blind navigation into obscured obstacles. No forced mission abort. No black-box decision that can't be explained to a commander or an investigator afterward.
Every plain-English description above has a formal mathematical specification behind it: the mathematics, the FPGA implementation, the formal verification proofs, and the experimental validation are all documented, and the patents, simulations, hardware BOMs, and code are all open.
AUTHREX is domain-agnostic. The same governance pipeline works across drones, vehicles, ships, and ground robots.