Key Points

This research addresses the challenge of effective decision support for human–AI collaborative systems in operational scenarios characterised by uncertainty and ambiguity. The proposition is that AI technologies to enhance shared comprehension and situational awareness are particularly critical when information is sparse, dynamically changing, or deliberately deceptive. However, in these degraded environments, typical AI agents, particularly discriminative machine-learned algorithms, behave unpredictably.

This project treats human-machine collaboration as the mechanism that enables such epistemic uncertainty to be resolved. Human-machine interaction is therefore contextualised by the AI identifying and quantifying a state of uncertainty, and optimised by the AI participating in its resolution using human-understandable concepts or language. The research programme developed a testbed using a basic machine reasoning architecture that is aware of its environment, its own internal cognitive state, and the existence and needs of other entities with which it interacts.
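The interaction pattern described above, where the AI quantifies its own uncertainty and escalates to a human in understandable terms, can be illustrated with a minimal sketch. This is not the project's actual architecture: the entropy threshold, class labels, and `ask_human` callback are all hypothetical stand-ins for whatever uncertainty measure and dialogue channel the real testbed uses.

```python
import math

# Hypothetical cut-off (in bits) above which the agent defers to a human.
ENTROPY_THRESHOLD = 0.9

def entropy(probs):
    """Shannon entropy (bits) of a class-probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

class UncertaintyAwareAgent:
    """Toy agent that monitors its own epistemic state and defers
    high-uncertainty decisions to a human collaborator."""

    def __init__(self, ask_human):
        # Callback standing in for the human partner in the dialogue.
        self.ask_human = ask_human

    def decide(self, labels, probs):
        h = entropy(probs)
        if h < ENTROPY_THRESHOLD:
            # Confident: act autonomously on the most likely label.
            return labels[probs.index(max(probs))], h, False
        # Uncertain: surface the competing hypotheses in human terms
        # and let the human resolve the ambiguity.
        ranked = sorted(zip(labels, probs), key=lambda lp: -lp[1])
        question = "Unsure between: " + ", ".join(
            f"{label} ({p:.0%})" for label, p in ranked[:2])
        return self.ask_human(question), h, True

agent = UncertaintyAwareAgent(ask_human=lambda q: "hostile")

# Ambiguous evidence: high entropy, so the agent asks the human.
label, h, deferred = agent.decide(["hostile", "neutral"], [0.55, 0.45])

# Clear evidence: low entropy, so the agent decides on its own.
label2, h2, deferred2 = agent.decide(["hostile", "neutral"], [0.95, 0.05])
```

The key design point the sketch shows is that the hand-off is triggered by the agent's own measurement of its epistemic state, not by a fixed division of labour, and that the query posed to the human is phrased over the competing hypotheses rather than raw model internals.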