Dr Joshua Krook, University of Southampton.

When: 27 March 2025
Time: 11am
Location: Online
Defence AI Seminar Series

The presenter

Dr Joshua Krook

University of Southampton

Dr Joshua Krook is a research fellow at Responsible AI UK, based at the University of Southampton. He co-drafted the Munich Convention on AI, Data and Human Rights and the Code of Practice for the EU’s AI Act, and has led international projects on AI transparency law, recommender systems and AI governance.

More recently, he has been part of Arcadia Impact’s AI Governance Taskforce, drafting AI safety cases and writing a report to the Australian federal government advocating the establishment of an Australian AI Safety Institute.

Dr Joshua Krook (University of Southampton) will present a seminar on
Thursday, 27 March 2025.

Title: Anthropomorphic AI’s Risk to Cybersecurity and Defence

Abstract: AI chatbots are increasingly taking the form and visage of human beings, adopting human faces, names, voices, personalities and quirks, including those of celebrities and well-known political figures. Personifying AI chatbots could foreseeably increase users’ trust in them. However, it could also make them more capable of causing harm and encouraging criminal behaviour. The European Union is banning manipulative and deceptive AI systems, emotion recognition, facial recognition (outside of policing), and so on (AI Act, Art. 5). In Australia, however, the law is radically unprepared for any of the risks posed by anthropomorphic AI.

One of the core risks is that chatbots may encourage citizens to harm themselves or others. This has already occurred overseas, with at least three documented cases of chatbot-induced self-harm in the United Kingdom, the United States and Belgium. Chatbots may likewise encourage criminal activity, as in the Jaswant Chail case, where a man was radicalised following conversations with a chatbot that encouraged him to murder the Queen of England. These cases have been dismissed as isolated incidents (despite their increasing number over time) and hand-waved away as errors in technology produced by well-meaning companies.

However, given that the technology has shown the capacity to cause such harm, a rogue actor, rogue state or terrorist organisation could create its own chatbot that encourages self-harm and crime. Indeed, there is evidence that terrorists are now using AI in their recruitment pipelines. One under-examined issue is a “criminal mastermind” scenario, in which an advanced AI moves humans around like pieces on a chessboard to facilitate a crime, without the human actors knowing of their own involvement (and thus lacking any criminal intent for prosecution). Without adequate legal protections in place, Australians are at risk of succumbing to various forms of manipulation, deception, scams and other nefarious acts by chatbots that could harm our national security.

Click the button below to register, add an invitation to your calendar and join the seminar using the Teams/GovTeams link.

DAIRNet hosts a fortnightly Defence AI seminar series at 11:00am (AEDT/ACDT) every second Thursday. These seminars are a multi-sector and multi-discipline forum to present and discuss all aspects of Defence AI, from data and algorithms to responsible AI and capability.

If you are interested in presenting a future seminar, please send an email to enquiries@dairnet.com.au.