Machine learning poisoning: How attackers can manipulate AI models for malicious purposes

Thursday 28 November, 1:00 pm - 1:40 pm

Speakers

Shahmeer Amir

CEO
Speeqr

Synopsis

Machine learning and artificial intelligence are increasingly used across industries, including cybersecurity. These technologies have shown great potential in detecting and mitigating cyber threats, but they also carry their own risks. One of the most significant is the machine learning poisoning attack.

Machine learning poisoning attacks involve an attacker manipulating the data or the learning algorithm used by an AI model to compromise its accuracy or functionality. This type of attack is particularly dangerous because it can go undetected for a long time, and it can be challenging to trace its origins. A successful poisoning attack can result in the AI model making incorrect decisions, which can lead to a security breach or data loss.
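To make the idea concrete, here is a minimal, hypothetical illustration of one common poisoning technique: injecting mislabeled training points to skew what the model learns. The data, the toy nearest-centroid "model", and all numbers below are illustrative assumptions for this sketch, not material from the session.

```python
import random

random.seed(0)  # deterministic toy example

def make_data(n=200):
    # Two 1-D clusters: class 0 around x=0, class 1 around x=5 (toy data).
    half = n // 2
    return ([(random.gauss(0.0, 1.0), 0) for _ in range(half)]
            + [(random.gauss(5.0, 1.0), 1) for _ in range(half)])

def train_centroids(data):
    # "Training" here is just computing each class's mean feature value.
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in sums}

def accuracy(centroids, data):
    # Classify each point by its nearest class centroid.
    hits = sum(1 for x, y in data
               if min(centroids, key=lambda c: abs(x - centroids[c])) == y)
    return hits / len(data)

train, test = make_data(), make_data()
clean_model = train_centroids(train)

# The attacker injects points with features near -3 but labeled class 1,
# dragging the learned class-1 centroid toward class 0's region.
poison = [(random.gauss(-3.0, 0.5), 1) for _ in range(100)]
poisoned_model = train_centroids(train + poison)

acc_clean = accuracy(clean_model, test)
acc_poisoned = accuracy(poisoned_model, test)
```

After poisoning, the decision boundary shifts and test accuracy drops, even though the model trained without error, which is why such attacks can go unnoticed until the model misbehaves in production.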

The session will cover practical steps organizations can take to prevent machine learning poisoning attacks, including data validation, monitoring the performance of deployed AI models, and adversarial training. Attendees will learn how to implement these measures and keep their systems protected against poisoning.

The presentation will also include case studies of high-profile machine learning poisoning attacks, highlighting their impact on the targeted organizations, how the attacks were executed, and the lessons learned from them.
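One of the preventive measures mentioned above, data validation, can be sketched in miniature: screening each class's training points for values that sit implausibly far from that class's typical range. The median-absolute-deviation threshold and the sample data below are illustrative assumptions, not a production recipe or the speaker's method.

```python
import statistics

def filter_outliers(data, k=3.0):
    """Drop (feature, label) points more than k median-absolute-deviations
    from their class median -- a crude screen for injected poison points."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    keep = []
    for y, xs in by_label.items():
        med = statistics.median(xs)
        mad = statistics.median(abs(x - med) for x in xs) or 1e-9
        keep += [(x, y) for x in xs if abs(x - med) <= k * mad]
    return keep

# Hypothetical training set: plausible points for each class, plus a batch
# of injected outliers labeled class 1 to drag its statistics off course.
poisoned = ([(4.0 + 0.1 * i, 1) for i in range(20)]   # plausible class-1 data
            + [(0.1 * i, 0) for i in range(20)]       # plausible class-0 data
            + [(-30.0, 1)] * 5)                       # injected poison points
cleaned = filter_outliers(poisoned)
```

This kind of screen only catches statistically obvious poison; subtler attacks motivate the complementary measures the session discusses, such as ongoing performance monitoring and adversarial training.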

Acknowledgement of Country

We acknowledge the traditional owners and custodians of country throughout Australia and acknowledge their continuing connection to land, waters and community. We pay our respects to the people, the cultures and the elders past, present and emerging.