Speakers
Synopsis
The buzz around Artificial Intelligence (AI) is everywhere. Since last year, almost every conference has included AI among its key topics of discussion. From traditional machine learning to chatbots, content generation, and deepfakes, Generative AI (Gen AI) promises to transform industries. Yet, amidst the excitement, a key question remains: how do we safeguard this powerful technology without slowing business growth and innovation?
This session is aimed at CISOs and other security practitioners looking to address an essential, yet often overlooked, aspect of AI: security governance. Many organisations have already defined responsible AI frameworks, but only a few have gone further to create a comprehensive framework that extends beyond responsible AI principles and establishes robust security measures throughout the AI lifecycle.
The challenge of securing AI is further intensified by the evolving regulatory landscape. Governments around the world are scrambling to define and enforce AI safety standards. Europe recently passed the first comprehensive regulation on AI (the EU AI Act), while in the United States various states are considering bills that address specific aspects of AI use. Australia does not yet have a law specifically regulating AI; instead, AI is governed through existing regulations. However, as AI adoption increases, targeted AI regulation may well be introduced in the near future.
Additionally, numerous AI security frameworks exist, each with its own strengths and weaknesses. Deciding which framework to adopt can feel overwhelming, and security teams must stay ahead of the curve to strike the right balance between mitigating risks and fostering innovation.