The presentation discusses the two main branches of artificial intelligence in cybersecurity: symbolic and non-symbolic approaches. The focus is on the use of cognitive agents to enhance human performance in cybersecurity and the need for autonomous agents to keep up with the pace of attacks.
- There are two main branches of artificial intelligence in cybersecurity: symbolic and non-symbolic approaches
- Non-symbolic approaches excel at classification, clustering, and predictive analytics, while symbolic approaches require knowledge engineering and are better suited to more complicated problems
- Cognitive agents can enhance human performance in cybersecurity by handling routine tasks, freeing human experts to focus on the most important ones
- Autonomous agents are necessary to keep up with the pace of attacks and the use of intelligent agents by adversaries
- The presentation includes an anecdote about discovering a Java RAT (remote access trojan) running on a significant portion of machines in the business area
- The goal of human-machine symbiosis in cybersecurity is to move humans from being "in the loop" to being "on the loop"
The presenter recounted discovering a Java RAT running on a significant portion of machines in their business area, despite concerns about people developing and running it locally. The RAT hid itself well and worked as intended, though the presenter noted their IT team would have been angry had it been run against the local production environment.
One of the greatest challenges in developing capable cyberspace operators is building realistic environments for training events. While many organizations have developed technologies and techniques for replicating enterprise-scale networks, the harder problem is realistically populating those networks with synthetic persons. Whether we are training network defenders or penetration testers, we want to pit them against adaptive and intelligent adversaries who can continuously put their skills to the test. In either case, we also need rich ecosystems in which realistic user agents exchange messages, interact with the web, and occasionally assist (or hinder) the efforts of the attackers and defenders.
This talk describes our research and development of a family of Cyberspace Cognitive (CyCog) agents that can behave like attackers, defenders, or users in a network. The attacker agent (CyCog-A) was developed to train defenders, while its defensive counterpart (CyCog-D) was intended to help develop penetration testers. The user agent (CyCog-U), on the other hand, is more versatile in that it can support either type of training. Furthermore, since these synthetic users are models of actual users on a network, they can display behaviors that either hinder or assist attackers and defenders.
Our experiences and successes point to current gaps as well as future threats and opportunities. From the need for scalable cyberspace mapping techniques to our work in modeling behaviors to the lessons learned in human-machine teaming, the CyCog family of agents is opening a new dimension in cyberspace operations research and development.