The workshop will focus on the application of artificial intelligence to problems in cybersecurity. While AI and ML have shown an astounding ability to automatically analyze and classify large amounts of data in complex scenarios, these techniques are still not widely adopted in real-world security settings, especially in cyber systems. The workshop will address AI technologies and their applications in security, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction.

This year, the workshop will emphasize applications of generative AI, including LLMs, to cybersecurity problems, as well as adversarial attacks on such models.

In general, AI techniques are still not widely adopted in many real-world cybersecurity settings. There are many reasons for this, including practical constraints (power, memory, etc.), a lack of formal guarantees within a practical real-world model, and a lack of meaningful explanations. Moreover, in the face of improved automated systems security (better hardware security, better cryptographic solutions), cyber criminals have amplified their efforts with social attacks such as phishing and the spread of misinformation, some of which are now easier to construct using LLMs and other generative AI techniques. These large-scale attacks are cheap and need to succeed for only a tiny fraction of all attempts to be effective. The result is a complex cybersecurity battlefield in which actors that do not adopt the latest advances in security or AI can suffer huge losses. We invite work at the intersection of AI (all AI topics in AAAI) and cybersecurity that helps improve the understanding of this complex space.