When many people think about the progress of AI and its impact on work, they envision a world where robots and software do all of the work, leaving little room for the jobs humans used to do.
Perhaps that’s the future, or perhaps it’s not. It’s certainly not the future that the Defense Advanced Research Projects Agency (DARPA) sees for the AI and human workforce. DARPA is the agency that helped usher in the Internet, the original expert systems of the 1960s through 1980s, and the big data analysis and machine learning systems that laid the foundation for natural language processing, self-driving cars, and personal assistant bots. Now DARPA is leading efforts to make AI and humans even more collaborative co-workers.
AI has proven some of its value in the form of highly targeted, specialized systems. It also delivers increasingly reliable large-scale data analysis when paired with high-quality training data. But most AI and machine learning systems simply don’t adapt well to changing conditions, and they can’t explain or show the work behind their results.
That’s where AI Next comes in. According to the agency, AI Next “seeks to explore new theories and applications that could make it possible for machines to adapt to changing situations. DARPA sees this next generation of AI as a third wave of technological advance, one of contextual adaptation.”
In September 2018, DARPA announced more than $2 billion in funding toward projects that could get that job done. These efforts were discussed this week at the DARPA AI Colloquium, where agency officials explained program components that included:
- DARPA’s Media Forensics program that is developing scalable, automated tools to spot media fakes by looking at the digital, physical and semantic integrity of images and video. DARPA hopes it may become possible to rapidly flag and filter manipulated content.
- AI has a massive computing problem: many new AI applications are extremely computationally expensive, yet microchip performance is no longer increasing as fast as it once did, so future AI applications will require new approaches. DARPA hopes specialized chips will meet this challenge, and its Software-Defined Hardware program is developing new approaches.
- Machine learning is vulnerable to adversarial AI attacks, in which maliciously crafted inputs are introduced to poison systems and cause classification errors. DARPA’s GARD program is exploring why machine learning systems are brittle and building new systems that can be defended against such attacks.
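The kind of maliciously crafted input described above can be surprisingly small. The sketch below (illustrative only, not GARD’s actual work) uses the well-known fast gradient sign method against a hand-built logistic-regression classifier; all weights and inputs are invented for the example, and the point is simply that a tiny, targeted perturbation flips the predicted class.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" linear classifier: predicts class 1 when w.x + b > 0.
# The weights and bias here are made up for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

x = np.array([0.4, 0.1, 0.2])          # a benign input, classified as class 1
clean_score = sigmoid(w @ x + b)

# FGSM-style attack: nudge each feature by epsilon in the direction that
# lowers the class-1 score. For a linear model, that direction is -sign(w).
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)
adv_score = sigmoid(w @ x_adv + b)

print(f"clean score:       {clean_score:.3f}")  # above 0.5 -> class 1
print(f"adversarial score: {adv_score:.3f}")    # below 0.5 -> flipped
print(f"L-inf perturbation: {np.max(np.abs(x_adv - x)):.2f}")
```

Each feature moved by at most 0.3, yet the classification flipped; defenses like those GARD pursues aim to make models robust to exactly this kind of bounded perturbation.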
Additionally, the agency will push forward the Assured Autonomy program, which seeks to guarantee the safety of AI systems by establishing foundational techniques for the assurance, verification, and validation of learning-enabled systems.
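One common way to make a learning-enabled system assurable is a runtime monitor in the spirit of the Simplex architecture: the learned component acts only while its commands stay inside a verified safety envelope, and a simple, conservatively verified fallback takes over otherwise. The sketch below is illustrative, not the Assured Autonomy program’s actual tooling; the controllers, bounds, and units are all invented for the example.

```python
def learned_controller(speed):
    # Stand-in for an ML policy; imagine this came from training.
    return 2.5 * speed            # may command unsafe accelerations

def fallback_controller(speed):
    # Simple, conservatively verified baseline: gently brake.
    return -0.5

MAX_ACCEL = 3.0                   # verified safety envelope (m/s^2, assumed)

def assured_command(speed):
    cmd = learned_controller(speed)
    if abs(cmd) <= MAX_ACCEL:     # runtime check: is the command in-envelope?
        return cmd, "learned"
    return fallback_controller(speed), "fallback"

print(assured_command(1.0))       # learned command is within the envelope
print(assured_command(5.0))       # learned command exceeded it -> fallback
```

The safety argument then rests on the small, checkable monitor and fallback rather than on the opaque learned policy itself, which is the essence of verification and validation for learning-enabled systems.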
“With AI Next, we are making multiple research investments aimed at transforming computers from specialized tools to partners in problem-solving,” said agency director Dr. Steven Walker. “Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible. We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them,” Walker said.
While it may be some time before this research impacts enterprises, it’s on the way. Improving the reliability and explainability of AI, streamlining power and performance, and ushering in the next generation of AI technologies, ones capable of contextual reasoning, will be disruptive and provide widespread commercial benefits.