NNSA’s Office of Defense Nuclear Nonproliferation Research and Development program (DNN R&D) works to advance our Nation’s capabilities to detect and monitor nuclear material production and movement, weapons development, and detonations across the globe. As the United States continues to pursue more comprehensive arms control agreements that require a robust verification regime, the technology for detection grows more important than ever.
That’s why it was the perfect time for members of the DNN R&D team to take the lead on a recent technical workshop that speaks to the future of proliferation detection through one of the hottest topics in tech today – artificial intelligence, or AI. On Sept. 15 and 16, a workshop titled “Next-Gen AI for Proliferation Detection: Accelerating the Development and Use of Explainability Methods to Design AI Systems Suitable for Nonproliferation Mission Applications” showcased presentations from the National Laboratories and university partners that push the state of the art in AI. Seven Department of Energy laboratories participated, as well as NNSA’s Sandia National Laboratories, Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and the Nevada National Security Site. The meeting provided a forum for discussions with mission partners on challenges and opportunities for AI in nuclear nonproliferation, and fostered new collaborations across the Nuclear Security Enterprise.
AI is a broad, evolving topic, and about as popular as it is misunderstood. The workshop, led by Senior Program Manager Angie Sheffield, focused on a much-needed development within the field called “explainability.” Aptly named, explainability is a characteristic of AI systems that makes it easier for humans to understand the inner workings of a model – enabling them to build better models and obtain more useful results.
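To illustrate the idea in miniature (a hypothetical sketch, not a system or method discussed at the workshop): an additive scoring model is inherently explainable, because each input feature’s contribution to the final decision can be read off directly – the kind of transparency that explainability methods aim to recover from more complex models. The feature names and weights below are purely illustrative.

```python
# Hypothetical sketch of an "explainable" additive model. Each feature's
# contribution to the score is explicit, so a human can see exactly why
# the model produced a given result. Names and weights are illustrative.

WEIGHTS = {"signal_strength": 0.8, "sensor_noise": -0.3, "event_duration": 0.5}

def score(features):
    """Return the overall score plus a per-feature attribution breakdown."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = score({"signal_strength": 1.2, "sensor_noise": 0.5, "event_duration": 2.0})
# 'why' shows how much each feature pushed the score up or down,
# rather than leaving the prediction as an unexplained number.
```

A black-box model would return only `total`; explainability research seeks principled ways to produce something like `why` for far more complex systems, such as deep neural networks.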
“Excluding explainability severely limits our ability to achieve an AI system’s full predictive power, as well as its ability to find widespread acceptance,” said Sheffield. “Explainability is critical when applying AI to the high-consequence, low-likelihood events we encounter in the nonproliferation and proliferation detection domains. Leveraging the expertise of DOE and NNSA’s National Laboratories, we are overcoming gaps where current AI capabilities fall short in building explainable AI systems that are suitable for nuclear nonproliferation missions. In fact, while our goal is proliferation detection, the potential for impact extends far beyond nuclear nonproliferation. The challenges posed by the nuclear security domain are so demanding that, in building AI to detect early nuclear proliferation, we believe that the National Laboratories will advance the entire field of AI.”
Conventional AI and machine learning techniques are inadequate for the highly technical and high-consequence domain of nuclear nonproliferation; nuclear proliferation detection demands the development of the next generation of AI methods and technologies to build systems that are suitable for nuclear security missions. In this workshop, AI experts and mission partners collaborated to identify best practices and opportunities for AI applications in future nonproliferation work. NNSA’s National Laboratories took this chance to highlight their advancements in AI and explainability, providing a foundation for future research and capabilities.
Despite the complexity of this highly technical topic, the nonproliferation R&D team wasn’t only concerned with the science of AI. Equally important are the people who will get us to that next scientific achievement, and the teams that they form. In coordination with program leadership, the workshop organizers dedicated themselves to highlighting another critical need in their STEM experience – diversity.
Long-term, their goals include improving gender equity and representation, highlighting the work of women and minorities in STEM, and contributing to NNSA’s larger initiative to recruit a talented and diverse workforce. One way to start this conversation was to consider inclusion and diversity throughout the planning of the event, which featured panelists and participants from different backgrounds and perspectives – striving to balance representation across age, race, and gender. Meeting coordinators see this event as the first of many steps to reach out to future employees of NNSA and to take a more active role in building a workforce that champions diversity.
“The Next-Gen AI for Proliferation Detection Meeting is committed to driving inclusive diversity and representation across our field. Data shows diverse teams generate more innovative outcomes, which is critical given our national security mission and cutting-edge research,” noted Sheffield. “As a very new and highly multi-disciplinary field, AI also presents opportunities to encourage and promote inclusive diversity in our program and the Nuclear Security Enterprise.”
This meeting is the first in a series on the next generation of AI to enable nuclear proliferation detection. The next event will be held in January 2021.