A safe space for AI development on DOD networks
- By Lauren C. Williams
- May 01, 2019
Air Force researchers working with artificial intelligence code may soon have a platform that gives them secure access to educated end-users and outside developers, algorithms, mission data and computational hardware.
Because the Defense Department does not allow unvetted software or code on its networks, it’s difficult for developers to experiment with cutting-edge tools. But the Air Force is looking to dismantle some of those barriers with its Air Force Cognitive Engine (ACE) software platform.
“We’re trying to create a software ecosystem to hook up the core infrastructures that are required for successful AI development — that’s people, algorithms, data and computational resources,” said Maj. Michael Seal, director for the Air Force’s Autonomy Capability Team 3, which leads the ACE program.
The traditional DOD way, Seal said during the Defense Department’s April 25 Lab Day, is for people to use the network they’re assigned, draw on data pooled from an amalgamation of sources and work with whatever tools are approved and available.
“It’s not like I’m at home pecking away at Python script deciding I want the latest version of Pandas [an open-source data analysis tool] and I install it,” Seal said. “It doesn’t work that way.”
That top-down approach, however, doesn’t work very well with artificial intelligence efforts.
“Right now, a lot of the cutting-edge AI tools live on the level of code base, not software. AI researchers and implementers work from code,” Seal explained. “But our networks and our policies around what goes on our networks are built around software, not code.”
To bridge that gap, he said, ACE is architecting a space where code-based tools can be packaged to be better accepted by DOD’s systems.
The effort also aims to encourage sharing of those tools — both across DOD’s in-house efforts and with private-sector partners.
“If you find your preferred AI business, their team has a toolbox they prefer to work with that can’t get through the door to our network system because most of it hasn’t been approved or cleared for our activities,” Seal said. “What we want to do is make a conduit to meet them on our networks, if not with the absolute most cutting-edge release of a package [then] a recent release so they have tools to work with that are still familiar.”
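In spirit, the conduit Seal describes amounts to a vetting gate: only specific, cleared releases of a package get through to the network. The sketch below is purely illustrative — the package names and version numbers are invented, and this is not ACE’s actual design — but it shows the kind of allow-list check such a gate implies.

```python
# Hypothetical sketch of a package vetting gate (invented names/versions;
# not ACE's actual mechanism). Only exact releases that have been cleared
# for the network are allowed through.

APPROVED_RELEASES = {
    "pandas": {"0.25.3", "1.0.3"},   # recent-but-vetted releases
    "numpy": {"1.18.2"},
}

def is_approved(package: str, version: str) -> bool:
    """Return True only if this exact release has been cleared."""
    return version in APPROVED_RELEASES.get(package, set())

# A cleared recent release passes; the absolute latest may not.
print(is_approved("pandas", "1.0.3"))   # True
print(is_approved("pandas", "9.9.9"))   # False
print(is_approved("scipy", "1.4.1"))    # False: package not vetted at all
```

The trade-off Seal points to is visible here: developers may not get the newest release of every tool, but they get a familiar, recent one that has already been cleared.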
ACE is in a developmental beta phase, which will help shape the program’s architecture going forward. Production version 1.0 is expected to be released in summer 2020. Hosting applications is proving to be an early challenge, but Seal said the goal is for ACE to be platform agnostic so it can work with cloud, local or edge computing.
“It’s not as soon as we would like, but the software challenges underlying this are research level,” he said. “There’s a lot of requirements around architectural development.”
For the next six months, Seal said, the program will work with the Joint Artificial Intelligence Center on predictive maintenance problems, while also conducting demonstration and feedback activities with intelligence, surveillance and reconnaissance cells.
This article was first posted to FCW, a sibling site to GCN.
About the Author
Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.
Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.
Williams graduated with a master’s in journalism from the University of Maryland, College Park and a bachelor’s in dietetics from the University of Delaware. She can be contacted at firstname.lastname@example.org, or follow her on Twitter @lalaurenista.