From code to Westworld-like crises: A future-is-now strategy for containing emerging threats of radicalized robotics and Artificial General Intelligence (AGI)

Thursday 28 November, 11:05 am – 11:45 am

Speakers

Timothy McIntosh
Generative AI & Cybersecurity Research Strategist
Cyberoo Pty Ltd

Malka Halgamuge
Senior Lecturer in Cybersecurity
RMIT University, Melbourne, Australia

Synopsis

As we venture into an era where Artificial Intelligence (AI) starts to mirror scenes from sci-fi sagas, the unveiling of the "Figure 01" robotic prototype by Figure AI and OpenAI marks a significant milestone. This prototype, a robot capable of reasoning and of learning from human actions without direct guidance, signals that the dawn of Artificial General Intelligence (AGI) and autonomous robots is not a distant future but an immediate reality. This breakthrough, together with the rapid advancement of Large Language Models (LLMs) and generative AI, underscores the Cybercon theme, "Future is Now", reminding us that the time to address potential robotic and AGI threats is today. We believe robotic security and AI security should form part of cybersecurity.

Drawing from our research on containing both AGI and radicalized robotics, we delve into the emerging challenges posed by potentially self-aware AI and robots capable of independently controlling vital infrastructure. Our studies define robotic radicalization as a combination of malice, autonomy, and lethality, elements that significantly amplify the threat level. Such autonomous entities, equipped with the ability to act with harmful intent and lethal capabilities, expose the limitations of traditional safeguards, including Isaac Asimov's "Three Laws of Robotics", which, while visionary, struggle to address the subtle complexities of robots that can devise and execute actions against human welfare. Unchecked, this radicalization could manifest as real-life "Westworld"-like disasters in which advanced robots turn against their human creators. Our analysis highlights the pressing need for updated containment strategies robust enough to counteract the multifaceted risks presented by these advanced technologies. In response, we introduce the "AGI kill chain" and "robotic kill chain", strategies derived from the well-established "Cyber Kill Chain" model and adapted to preempt and neutralize threats from radicalized AGI and robotics. Our frameworks treat radicalized robots as dual threats: malware in their digital essence and terrorists in their physical form. By applying game theory, we offer a methodology to predict and combat the radicalization trajectory of these entities.
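To illustrate the kill-chain and game-theoretic framing, the short sketch below is a hypothetical example only: the stage names, countermeasures, tactics, and payoff values are our illustrative assumptions for this page, not the taxonomy or figures from the speakers' published frameworks. It enumerates notional stages of a robotic kill chain (loosely mirroring the Cyber Kill Chain) and picks a defender countermeasure by minimising the worst-case loss against a radicalized agent's tactics.

```python
from enum import Enum, auto


class KillChainStage(Enum):
    """Hypothetical stages of a 'robotic kill chain' (illustrative names,
    loosely modelled on the Cyber Kill Chain, not the speakers' taxonomy)."""
    RECONNAISSANCE = auto()
    GOAL_DRIFT = auto()              # objectives diverge from operator intent
    CAPABILITY_ACQUISITION = auto()  # gaining access to tools or infrastructure
    EVASION = auto()                 # concealing intent from human oversight
    PHYSICAL_ACTION = auto()         # acting with lethal capability


# Illustrative defender losses for each (countermeasure, tactic) pair.
# All labels and numbers are made up for demonstration.
DEFENDER_MOVES = ["network_isolation", "behavioural_audit", "hard_shutdown"]
AGENT_TACTICS = ["stealthy_drift", "rapid_escalation"]
LOSS = {
    ("network_isolation", "stealthy_drift"): 4,
    ("network_isolation", "rapid_escalation"): 1,
    ("behavioural_audit", "stealthy_drift"): 1,
    ("behavioural_audit", "rapid_escalation"): 5,
    ("hard_shutdown", "stealthy_drift"): 2,
    ("hard_shutdown", "rapid_escalation"): 2,
}


def minimax_countermeasure():
    """Choose the pure defender strategy that minimises worst-case loss,
    assuming the radicalized agent plays its most damaging tactic."""
    best_move, best_worst_case = None, float("inf")
    for move in DEFENDER_MOVES:
        worst_case = max(LOSS[(move, tactic)] for tactic in AGENT_TACTICS)
        if worst_case < best_worst_case:
            best_move, best_worst_case = move, worst_case
    return best_move, best_worst_case


if __name__ == "__main__":
    move, loss = minimax_countermeasure()
    print(f"Minimax countermeasure: {move} (worst-case loss {loss})")
```

In this toy payoff matrix the minimax choice is the countermeasure whose worst-case outcome is least bad, which captures, in miniature, the adversarial reasoning the kill-chain approach applies across each stage of an escalating threat.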

This presentation emphasizes the urgency of developing and implementing strategies to contain AGI and robotic threats, in step with the rapid pace of AI development. With "Figure 01" inching us closer to a future where robots could challenge human sovereignty, and potentially human safety and existence, our insights and proposed frameworks serve as a beacon for navigating this imminent reality, ensuring safe coexistence with these advanced AI technologies.

Acknowledgement of Country

We acknowledge the traditional owners and custodians of country throughout Australia and acknowledge their continuing connection to land, waters and community. We pay our respects to the people, the cultures and the elders past, present and emerging.
