This Q&A was conducted as part of the NextGov Emerging Technology Summit. Gilman Louie, CEO of LookingGlass, was interviewed by James Hanson, Group Publisher, Federal and Technology Markets at GovExec and VP of NextGov. (This interview has been edited for length and clarity.)
Hanson: We’ve heard a lot about AI and Cyber, so how will AI change cyber?
Louie: Cyber has traditionally been about having machines and sensors figure out what's happening at the edges of your network boundaries, with humans in the middle analyzing the data. Network operators set up the response system, and if there's an attack, humans work on ways to remediate the damage. It's a very human-centric approach.
Even the way our organizations are set up is based on humans, such as CISOs, network operators, and cyber warriors.
With AI, we will go from a human-centric world to an algorithm-centric world. This will allow us to go from responding at human speed to responding at machine speed. How we approach cyber is going to fundamentally change. It's like we're in the era of biplanes and somebody is about to introduce jets.
Hanson: In the near term, what does that mean for federal agencies?
Louie: We are in a transitional phase where we are going to see an increasing amount of human-machine teaming. That's where the humans, the machines, and the algorithms are going to work together to not only defend but pre-position to be able to shape the battlespace for an organization. That's going to steadily increase to a point where the humans are no longer the tacticians who physically monitor, wait for the red light to go off, and pick up the phone to call someone.
In the new world, the tactician is going to be the algorithm. The humans are going to be the orchestrators focusing on the strategy while the algorithm will be executing the strategy. And then we will see a gradual increase to the point where the algorithms will still be following human guidelines but will begin to operate more autonomously as the systems become more sophisticated.
This is critical because the attackers are going to be using equally sophisticated algorithms to probe, attack, shape, and break apart brittleness in your AI defenses.
Hanson: That's a great point. As you build out AI defense systems, how do you ensure resiliency in those systems against attack?
Louie: The only way to properly use these systems is to have confidence in them. And the only way to have confidence is through testing the systems.
We’re going to need to test these systems against billions of attacks to make sure the algorithms are not brittle, but also that they fail gracefully.
It's also not an all-or-nothing situation where it's either all machine learning or all rules-based. I still believe there is a role for rules-based systems. The AI can operate within a framework, and that framework has hard boundaries designed, tested, and evaluated by humans.
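The framework Louie describes can be sketched in code. Below is a minimal, hypothetical illustration (the action types, protected targets, and confidence threshold are invented for the example, not drawn from any LookingGlass or DoD system): a machine-learning model proposes a response, but a human-designed, rules-based gate decides whether the action executes or escalates to an operator.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "block_ip", "quarantine_file"
    target: str        # host or asset the action would affect
    confidence: float  # the model's confidence in its recommendation

# Hypothetical hard boundaries: designed, tested, and evaluated by humans,
# not learned by the model.
ALLOWED_ACTIONS = {"block_ip", "quarantine_file", "alert_analyst"}
PROTECTED_TARGETS = {"10.0.0.1"}   # critical assets the AI may never act on
MIN_CONFIDENCE = 0.9               # below this, a human makes the call

def within_boundaries(action: Action) -> bool:
    """Rules-based gate: the model proposes, the framework disposes."""
    if action.kind not in ALLOWED_ACTIONS:
        return False               # action type not sanctioned by policy
    if action.target in PROTECTED_TARGETS:
        return False               # hard boundary: protected assets are off-limits
    if action.confidence < MIN_CONFIDENCE:
        return False               # low-confidence recommendations escalate
    return True

def execute(action: Action) -> str:
    """Run the action only if it stays inside the human-defined framework."""
    if within_boundaries(action):
        return f"executed {action.kind} on {action.target}"
    return f"escalated {action.kind} on {action.target} to human operator"
```

In this sketch the algorithm is the tactician and the human is the orchestrator: people set the boundaries once, test them, and the gate enforces them at machine speed.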
In addition, we need a lot more sensors on our networks, because we need to know not just what's coming in, but what's going out. A great example is the SolarWinds attack. If we had instrumented our networks with the right sensors and we were sharing information and collaborating across our agencies, we would have seen multiple agencies talking to command-and-control nodes that had no business talking to those nodes.
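The cross-agency detection pattern described here can be illustrated with a short sketch. The hostnames, agency names, and known-good list below are all hypothetical; the point is the logic: an outbound destination that is not on a known-good list and is being contacted by multiple agencies at once is exactly the signal that would have stood out.

```python
# Hypothetical egress records shared across agencies: (agency, destination_host)
KNOWN_GOOD = {"updates.example.gov", "api.vendor.example.com"}

def flag_suspicious_egress(connections):
    """Return unapproved destinations contacted by two or more agencies --
    the cross-agency command-and-control pattern described above."""
    seen = {}
    for agency, dest in connections:
        if dest not in KNOWN_GOOD:
            seen.setdefault(dest, set()).add(agency)
    return {dest: agencies for dest, agencies in seen.items() if len(agencies) >= 2}

conns = [
    ("Treasury", "updates.example.gov"),    # expected traffic, ignored
    ("Treasury", "c2.badhost.example"),     # unexpected outbound connection
    ("Commerce", "c2.badhost.example"),     # same node, different agency
    ("Energy",   "api.vendor.example.com"), # expected traffic, ignored
]
alerts = flag_suspicious_egress(conns)
# flags "c2.badhost.example", contacted by both Treasury and Commerce
```

No single agency's logs make the pattern obvious; it only emerges when egress data is pooled, which is the collaboration argument Louie is making.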
We're not just going to throw out machine learning and see what happens. That's why our report at the Defense Department came out with very specific guidelines about how we need to test, deploy, and use AI in autonomous systems. We were the first government in the world to publish ethical standards for the use of AI. We think all nations, whether they're competitors or even adversaries, need to act responsibly in how they use these technologies.
Hanson: As we develop technologies like 5G, which will open a higher threat aperture, how do you see us protecting ourselves from this new emerging technology?
Louie: I worked on the Defense Innovation Board's 5G effort, particularly on how the US government, and the Defense Department in particular, should be thinking about the challenges of the technology. First of all, 5G needs to run on a trusted infrastructure. A lot of the concerns with ZTE and Huawei illustrate the challenge of whether we can trust the network.
Number two is the importance of secure communications across the systems. If we want to have joint or all-domain operations, systems need to be able to talk to each other with a level of trust. 5G means we can talk to the edges and move data around very quickly. So we're going to need to have security on the edges and in the cloud.
While 5G will create a much bigger attack surface, we will also have better capabilities to collect data as well.
Hanson: This really gets at the heart of what we do at NextGov in trying to help agencies understand this landscape. I want to close on a big-picture thought that I'd like to get your opinion on. You mentioned several use cases and the future of technology. In looking at the cyber domain in general, where do you think we're going in the next 5-10 years?
Louie: A lot of people think of cyber as its own domain of warfare, but I think that's a mistake. Cyber touches everything. We must think about protecting those critical systems in a different way than we have in the past.
Cyber isn't an IT problem; it is really an enterprise problem. The key question is: can you trust the system? If you're in an F-35, you need to know if that missile actually hit something. If you're sitting in an information operations center, you need to know if the threat you see is real or what your adversary wants you to see.
Until we see that cyber isn’t just its own field, but actually touches everything, we will continue to be vulnerable and behind the curve.
If you’re interested in learning more about how LookingGlass can help prepare your cyber team to deal with security issues in emerging technologies, contact us today.