Heuristic-based method for conflict discovery of shared control between humans and autonomous systems - A driving automation case study
Abstract
The paper proposes a heuristic-based prospective method to discover possible conflicts of shared control between humans and autonomous systems. This method adapts the Competence-Availability-Possibility-to-act (CAP) triplet, which represents the autonomy characteristics of decision-makers such as humans or autonomous systems. CAP-based autonomy is decomposed into several scenarios of shared control within a workspace or between workspaces. Conflicting decisions between humans and autonomous systems are conflicts of autonomy relating to competence, availability, or possibility to act; they are identified by applying heuristics adapted from deductive, inductive, abductive, and counterfactual reasoning principles. The heuristic-based method comprises four main steps: verification of shared control, identification of discovery parameters, discovery of possible conflicting decisions, and validation of these conflicts. It was applied to a driving automation case study involving two autonomous systems: Lane Keeping Assist (LKA) and Automated Cruise Control (ACC). The identified conflicts of autonomy reveal possible confusions between the reasoning of a driver and that of an autonomous system, sometimes resulting in dangerous situations. These conflicts were validated and analyzed in two ways by drivers with at least two years of driving experience: through a survey gathering qualitative feedback from 43 drivers and during a tutorial on human reliability assessment involving 17 participants. The results demonstrate the ability of the heuristic-based method to detect possible conflicting decisions or sources of conflict between humans and machines. The proposed approach can therefore support the design of shared control processes between humans and autonomous systems through the implementation of technical learning or pedagogical abilities, the improvement of alarm systems to control human attention or to avoid confusion between the intentions of humans and machines, and the development of training programs or driving lessons to increase user awareness of such conflicts.
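As a minimal illustrative sketch, not taken from the paper itself, the CAP triplet and the conflict-discovery idea summarized above can be expressed in code. The class name, attributes, and the simplified heuristics below are assumptions introduced here for illustration only and do not reproduce the paper's actual rules.

```python
from dataclasses import dataclass

@dataclass
class CAP:
    """Hypothetical CAP triplet for one decision-maker on a given task."""
    competence: bool          # knows how to perform the task
    availability: bool        # is free to take the task over
    possibility_to_act: bool  # has the physical/functional means to act

    def autonomous(self) -> bool:
        # A decision-maker is treated as autonomous for the task only
        # when all three CAP attributes hold at the same time.
        return self.competence and self.availability and self.possibility_to_act

def discover_conflicts(human: CAP, machine: CAP, same_intention: bool) -> list[str]:
    """Toy heuristics flagging possible conflicts of autonomy on a shared task."""
    conflicts = []
    if human.autonomous() and machine.autonomous() and not same_intention:
        conflicts.append("both agents can act but intend different actions")
    if not human.autonomous() and not machine.autonomous():
        conflicts.append("no agent currently holds full CAP for the task")
    for name, agent in (("human", human), ("machine", machine)):
        if agent.possibility_to_act and not agent.competence:
            conflicts.append(f"{name} can act without the required competence")
    return conflicts

# Assumed example: the driver intends to change lane while a lane-keeping
# system (e.g. LKA) intends to keep the current lane.
driver = CAP(competence=True, availability=True, possibility_to_act=True)
lka = CAP(competence=True, availability=True, possibility_to_act=True)
print(discover_conflicts(driver, lka, same_intention=False))
```

Such a sketch only illustrates the general principle that conflicts of autonomy arise from mismatched CAP states or intentions; the paper's method derives its heuristics from deductive, inductive, abductive, and counterfactual reasoning rather than from fixed boolean rules.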