ARLINGTON, Va. – Humans and machines have worked together since the Industrial Revolution, but errors born of misunderstanding are common. To help close that gap, U.S. military researchers are launching an artificial intelligence and machine learning program to help humans and machines get along better than ever before.
Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Virginia, released a solicitation in August 2016 (DARPA-BAA-16-53) for the Explainable Artificial Intelligence (XAI) project.
XAI centers on machine learning and human-computer interaction, and seeks to create a suite of machine learning techniques that produce explainable models that, when combined with explanation techniques, enable end users to understand, trust, and manage the emerging generation of artificial intelligence (AI) systems.
Dramatic success in machine learning has led to an explosion of new AI capabilities that can produce autonomous systems that perceive, learn, decide, and act on their own, DARPA researchers say. Although these systems offer tremendous benefits, their effectiveness is limited by the machine’s inability to explain its decisions and actions to human users.
XAI seeks to create machine learning and human-computer interaction tools to enable an end user who depends on decisions, recommendations, or actions produced by an AI system to understand the rationale for the system's decisions.
An intelligence analyst who receives recommendations from a big data analytics algorithm, for example, needs to understand why the algorithm has recommended certain activities for further investigation. Similarly, a test operator of a newly developed autonomous system must understand why the system makes its decisions to decide how to use it in future missions.
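The kind of per-decision rationale the analyst needs can be sketched in a few lines. The snippet below is a minimal illustration, not DARPA code: a linear scoring model whose per-feature contributions (weight times feature value) double as a human-readable explanation of why one activity was flagged. All feature names and weights here are invented for the example; real analytics models are far more opaque, which is precisely the problem XAI targets.

```python
# Minimal sketch of an "explainable" recommender: a linear model whose
# per-feature contributions serve as the rationale for each decision.
# Feature names and weights are invented for illustration only.

WEIGHTS = {
    "unusual_hours": 2.0,
    "new_contact": 1.5,
    "large_transfer": 3.0,
}

def score(activity):
    """Total suspicion score: sum of weight * feature value."""
    return sum(WEIGHTS[f] * activity.get(f, 0.0) for f in WEIGHTS)

def explain(activity):
    """Rank each feature's contribution to the score, largest first."""
    contribs = {f: WEIGHTS[f] * activity.get(f, 0.0) for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)

activity = {"unusual_hours": 1.0, "new_contact": 0.0, "large_transfer": 0.5}
print(score(activity))  # 2.0*1.0 + 1.5*0.0 + 3.0*0.5 = 3.5
for feature, contribution in explain(activity):
    print(f"{feature}: {contribution:+.1f}")
```

An analyst reading the ranked contributions can see that the flag was driven chiefly by `unusual_hours`, and can judge whether to pursue the recommendation. Delivering that kind of rationale for modern learned systems, where the decision logic is not a transparent weighted sum, is what the XAI program sets out to do.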
XAI tools will help provide end users with an explanation of individual decisions, enable users to understand the system's overall strengths and weaknesses, convey an understanding of how the system will behave in the future, and perhaps show how to correct the system's mistakes.
The XAI project aims at three related research and development challenges: how to produce more explainable models; how to design the explanation interface; and how to understand the psychological requirements for effective explanations.
For the first challenge, XAI seeks to develop machine-learning techniques to produce explainable models. For the second challenge, the program anticipates integrating state-of-the-art human-computer interaction (HCI) techniques with new principles, strategies, and techniques to generate effective explanations. For the third challenge, XAI plans to summarize, extend, and apply current psychological theories of explanation.
The program has two technical areas: the first develops an explainable learning system that contains an explainable model and an explanation interface; the second involves psychological theories of explanation.