Welcome

The workshop will focus on the application of artificial intelligence to problems in cyber security. This year’s AICS emphasis will be on human-machine teaming within the context of cyber security problems, specifically exploring collaboration between human operators and AI technologies. The workshop will address applicable areas of AI, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction. Further, cyber security application areas, with a particular emphasis on the characterization and deployment of human-machine teaming, will be the focus. Additional areas with similar challenges and solution spaces (e.g., genomic big data, astronomy, and cyberbiosecurity) can also be discussed.

As cyber security has rapidly matured, data collection has become easier to instrument and implement. This has led to a massive increase in the amount of data that must be analyzed to achieve situational awareness, a scale that is beyond human capabilities. Additionally, with concurrent advancements in machine learning, there are algorithms and tools with an impressive ability to automatically analyze and classify massive amounts of data in complex scenarios, although deploying them in specific domains can be challenging. Together, these developments have created an environment of increased reliance on AI-based systems to help humans cope with the scale of cyber security problems.

Because humans must interact with at least parts of these AI systems, many challenges arise. Principal among them are: 1) determining optimal techniques to improve AI performance given targeted, limited human input; 2) understanding the extent to which the interaction between humans and AI introduces an attack surface for adversarial techniques to influence the performance of both the human and computer systems; 3) establishing and quantifying trust between humans and AI systems; 4) providing explainable AI where humans are required to do ‘last mile’ synthesis of information provided by a black box algorithm; and 5) defining the scope in which an AI system can operate autonomously in distinct cyber security domains while maintaining safety. A successful framework for the interaction between humans and AI is extremely important as machine learning based AI capabilities become incorporated into everyday life. Human-computer interactions will continue to increase, and if they are not accurate, robust, trustworthy, explainable, and safe, the systems will be prone to failure even if the underlying algorithms and/or people are individually effective.

For this workshop we consider general challenges 1-5 in the domain of cyber security as a focus application area. Cyber security is difficult to perform because of its high reliance on subject matter expertise to recognize anomalies in cyber data. Because AI systems are not yet well suited for these context-generating tasks in cyber security, there is a human-in-the-loop requirement for most cyber security applications. Cyber security thus provides a unique case study in exploring the relationship between AI systems and humans, because each relies on input from, and must parse output from, the other.

Understanding and addressing challenges associated with systems that involve human-machine teaming requires collaboration among several research and development communities, including artificial intelligence, cyber security, game theory, machine learning, human factors, and formal reasoning. This workshop is structured to encourage a lively exchange of ideas between researchers in these communities from the academic, public, and commercial sectors.