AJCAI and Defence AI Symposium

When: 01-05 Dec 2025
Time: 9am
Location: Australian National University, Canberra

DAIRNet is proud to support the Australasian Joint Conference on Artificial Intelligence (AJCAI).

AJCAI is a leading forum for innovative AI research and collaboration. As part of the conference, DAIRNet will host the Defence AI Symposium on Monday, 1 December, bringing together Defence, academia, and industry to exchange insights, discuss shared priorities, and explore emerging opportunities in the AI landscape.

DAIRNet-hosted Defence AI Symposium

The symposium centres on the theme of “AI Adoption: From Concept to Capability”, inviting discussions on how we can accelerate the responsible, effective, and secure integration of AI across Defence operations.

Presenters

Dr Mel McDowall

Director of DAIRNet

Dr Mel McDowall is the Director of the Defence AI Research Network (DAIRNet), an initiative of the Department of Defence hosted by the University of South Australia. The role draws on her experience in transdisciplinary research and operational roles. Having worked in many industries, including clinical, agriculture, and Defence, Mel has experience with many steps of the “bench to bedside/farm-gate/capability” journey. This includes basic and applied research, business improvement and governance, advocacy, contract management, territory management, and clinical support.

Transdisciplinary teams, science communication and outreach are critical for research. Highlights of Mel’s career include receiving a Young Tall Poppy Award (2005), attending Science Meets Parliament (2014-2016) as an advocate for reproductive medicine research, and participating in the CSIRO OnPrime program (2016). More recently, Mel was the 2023 winner of the Technology category of the Women in Technology awards and a finalist in the Defence Connect “Women in Leadership” award (2023). Mel holds a PhD in Medicine (2005) and a Master of Business Administration (2018), both from the University of Adelaide.

Mr Mike Moroney

Executive Director of the Defence Artificial Intelligence Centre (DAIC)
Defence

Mike Moroney was appointed Executive Director of the Defence AI Centre in October 2025. Before joining the Australian Public Service, he served 23 years in the Royal Australian Air Force, where he led several technology and innovation teams.

As a Wing Commander, Mike was the AI Lead for Jericho Disruptive Innovation, where he led AI integration into capabilities, championed Responsible AI practices, and elevated AI readiness by adapting Air Force’s people, partnerships, practices, and digital infrastructure to be fit-for-AI.

Mike also led the Augmented Aviation Asset Intelligence and CASG Next Generation Acquisition and Sustainment teams, implementing initiatives in AI, additive manufacturing, and Industry 4.0. Mike has also led modelling and simulation activities for the E-7A Wedgetail program.

In addition to his technology leadership roles, Mike deployed as part of Operation ASTUTE and has led teams involved in operations coordination, supply chain management, acquisition and sustainment in support of numerous ADF aircraft fleets, including the AP-3C Orion, P-8A Poseidon, E-7A Wedgetail and C-130J Hercules. Mike holds Master’s degrees in Business and Systems Engineering, as well as an Advanced Diploma of Aviation.

Dr Sacha Babuta

Director of the Centre for Emerging Technology and Security (CETaS), and Director of National Security and Policy
The Alan Turing Institute

Dr Sacha Babuta is the Director of the Centre for Emerging Technology and Security (CETaS) and Director of National Security and Policy at The Alan Turing Institute, the UK’s national institute for data science and AI. He leads a multidisciplinary research team covering a range of security and technology issues. He previously worked within the UK Government as AI Futures Lead at the Centre for Data Ethics and Innovation, and before that as a Research Fellow at the Royal United Services Institute (RUSI), where he led the institute’s research on digital policing, technology, and national security.

He is an Honorary Lecturer at University College London (UCL), and was previously Chair of the Essex Police Data Ethics Committee, a Research Associate at the National Centre for Gang Research (University of West London) and an Associate Fellow at the University of Bristol. He has sat on various advisory committees and provided evidence to numerous parliamentary inquiries and legislative reviews.

Sacha holds an undergraduate degree in Linguistics and an MSc in Crime Science, both from UCL, and a Doctorate in Criminology from the University of West London. His academic research focuses on data-driven approaches to violence risk management.

Anna Knack

Senior Research Associate in The Alan Turing Institute’s Defence and National Security Grand Challenge and Lead Researcher at CETaS
The Alan Turing Institute

Anna Knack is a Senior Research Associate in the Turing’s Defence and National Security Grand Challenge and Lead Researcher at CETaS. She leads the AI for Data-Driven Advantage (AIDA) workstream, focusing on AI assurance, human-machine teaming, cyber defence, and analysis and decision support in defence and national security.

Previously, Anna was Deputy Coordination Lead of the Technology, Disruption & Uncertainty research workstream at RAND, where she used futures methods to explore military innovation, cybersecurity, cybercrime and counter-terrorism. Her research has informed strategy and policy for organisations including the UK MoD, US DoD, UK Strategic Command, UK Cabinet Office, European Commission, EDA, and Europol. She has briefed senior policymakers across national and international bodies and presented at major conferences.

Anna holds an MA in International Relations from the University of Utrecht and a BA in Politics and Social Policy from the University of York.

Presentation title: Assuring AI-enabled uncrewed systems: Identifying promising practice for Defence

Abstract: This study aimed to identify solutions to UK Defence AI assurance and exploitation challenges, and involved user research across the UK Military Commands. The project team developed a commander’s guide for assuring computer vision-based uncrewed systems, drawing on Dstl-funded fundamental research at the Turing to summarise operational error risks that may not be apparent to commanders. The study then explored defence AI assurance challenges and blockers through interviews with users, and identified promising practice from the Turing’s corpus of basic and applied AI research and engineering experience for defence and national security, the latest literature, and targeted workshops with AI TEVV and quality assurance leads from allied nations, including standards bodies, academics, international legal experts, and industry technical assurance teams. The promising practice identified informed the development of a workflow for the go/no-go decision on AI capability development and an associated system card template.
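To make the deliverables concrete, the sketch below shows what a minimal system card and go/no-go gate might look like in code. It is a hypothetical illustration only; the field names and decision rule are assumptions, not the template or workflow produced by the study.

```python
from dataclasses import dataclass, field

@dataclass
class SystemCard:
    """Illustrative system card for an AI-enabled uncrewed system.
    All fields are assumed for this sketch, not taken from the study."""
    system_name: str
    intended_use: str                        # operational context the system was assured for
    known_error_modes: list[str] = field(default_factory=list)   # e.g. degraded performance in low light
    assurance_evidence: list[str] = field(default_factory=list)  # e.g. TEVV reports, red-team results
    decision: str = "undetermined"

def go_no_go(card: SystemCard, min_evidence: int = 2) -> str:
    """Toy gate: proceed with capability development only if enough
    assurance evidence has been recorded against the intended use."""
    card.decision = "go" if len(card.assurance_evidence) >= min_evidence else "no-go"
    return card.decision
```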

Prof Toni Erskine

Professor of International Politics in the Coral Bell School of Asia Pacific Affairs
The Australian National University

Toni Erskine is a Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at the Australian National University (ANU). She is the recipient of the International Studies Association’s 2024-25 Distinguished Scholar Award in International Ethics, an Associate Fellow of the Leverhulme Centre for the Future of Intelligence at Cambridge University, and Chief Investigator on the Australian Government-funded project ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’.

She previously served as Director of the Coral Bell School of Asia Pacific Affairs, ANU (2018-2023), Editor of International Theory: A Journal of International Politics, Law, and Philosophy (2019-2023), and Academic Lead for the United Nations Economic and Social Commission for Asia and the Pacific / Association for Pacific Rim Universities project ‘AI for Social Good’ (2021-2023).

Professor Erskine’s research sits at the intersection of International Relations, international security, and moral and political philosophy. Her research explores the impact of artificial intelligence on world politics and organised violence; the ‘institutional moral agency’ and responsibility of formal organisations; the ethics and laws of war; human protection in the face of mass atrocity crimes and the role of joint action and informal coalitions in response to global crises and existential threats; cosmopolitan theories and their critics; and the prospect (she’s sceptical) of AI-driven systems as ‘synthetic moral agents’.

Presentation title: ‘Computer Says “War”? AI and Resort-to-Force Decision Making in a Context of Rapid Change and Global Uncertainty’

Abstract: AI-enabled systems already contribute indirectly to state-level decisions on whether and when to wage war through their use in intelligence collection and analysis. I will begin by maintaining that increasingly direct contributions to state-level resort-to-force decision making by existing AI-enabled systems are imminent and inevitable. This prospect raises a host of ethical, legal, political, and geo-political challenges. I will then highlight four recent, on-going developments that create a backdrop of rapid change and global uncertainty against which AI-enabled systems will infiltrate and influence these state-level deliberations: i) the tendency to misperceive the latest AI-enabled technologies as increasingly “human”; ii) the changing role of “big tech” in the global competition over military applications of AI; iii) a conspicuous blind spot in current discussions surrounding international regulation; and iv) the emerging reality of an “AI-nuclear weapons nexus.” Together these factors will affect the trajectory of the phenomenon of AI-informed war initiation – and how we respond to the accompanying challenges.

Dr Damian Copeland

Director of Article 36 Legal

Dr Damian Copeland is an internationally recognised legal scholar and practitioner with expertise in international humanitarian law and the legal review of emerging military technologies. As the Director of Article 36 Legal, he specialises in the legal review of new weapons, means, and methods of warfare, including autonomous and AI-enhanced systems.

Dr Copeland’s academic background includes a Bachelor of Laws (Hons), a Master of Laws (Merit), and a PhD focused on the legal review of autonomous weapon systems. His military career spanned over three decades with the Australian Army, including deployments to Iraq, Afghanistan, East Timor, Cambodia, and Somalia, providing him with extensive operational experience that complements his legal expertise.

Dr Copeland’s scholarly contributions, notably his book ‘A Functional Approach to the Legal Review of Autonomous Weapon Systems’, have significantly shaped discussions on integrating legal and ethical considerations into the development and deployment of advanced military technologies. Since 2023, he has chaired the Australian-led annual Expert Meetings on the Legal Review of Autonomous Weapons.

Presentation title: A functional approach to the legal review of autonomous weapon systems

Abstract: The unique characteristics of autonomous weapon systems (AWS) challenge the traditional approach to legal reviews. Traditionally, reviews focus on the design and effects of a weapon, with the outcome determining its legality per se, without detailed consideration of its contextual application on the battlefield. This approach rests on the presumption of lawful use, on the basis that combatants and their commanders bear legal responsibility for any unlawful employment of the weapon during armed conflict. However, when an AWS performs functions that directly engage the rules of international humanitarian law (IHL) regulating the means and methods of warfare, this presumption becomes insufficient. A weapon itself is not a legal entity and cannot bear responsibility under international law for violations such as war crimes.

This presentation proposes a functional approach to legal reviews that bridges this methodological gap. By assessing how an AWS performs specific functions in its intended operational context, this approach enables a more rigorous evaluation of compliance with IHL and supports analysis of related quasi-legal risks, such as ensuring context-appropriate human control, and addressing other risks relevant to the lawful and responsible use of autonomous systems.

Dr Phyo San

Lead Research Scientist, Defence

Dr Phyo San is a Lead Research Scientist in Artificial Intelligence and Machine Learning at the Defence Science and Technology Group (DSTG), Australia. She leads strategic initiatives to operationalise AI across Defence, focusing on the development of trusted, scalable, and data-driven solutions for mission-critical applications.

With over 15 years of experience across academia, industry, and government, Dr San brings deep technical expertise and a strong understanding of AI governance, ethics, and deployment in complex environments. Her work bridges cutting-edge research with real-world implementation, advancing Defence capabilities through responsible and innovative use of AI.

Dr San is passionate about fostering interdisciplinary collaboration and translating research into impact. She actively engages with Defence stakeholders and research partners to ensure AI technologies are not only technically robust but also aligned with operational needs and ethical standards.

Presentation title: Operationalising Trusted AI: From Algorithms to Mission Impact

Abstract: Translating Artificial Intelligence (AI) from research concepts into operational capability remains a key focus in advancing Defence transformation. This presentation highlights practical approaches to operationalising AI across Defence use cases, moving beyond experimentation toward trusted, deployable, and scalable solutions. It introduces a framework addressing key enablers such as data readiness, model assurance, system integration, and governance to ensure AI delivers measurable mission outcomes. The session also highlights how operationalised AI enhances mission capability through effective human-machine teaming, improving situational awareness, decision-making, and operational performance. By bridging the gap between concept development and real-world application, this work demonstrates a clear pathway for transforming AI innovation into mission-ready capability that strengthens Defence effectiveness and resilience.

Dr Kathryn Kasmarik

Professor of Computer Science, School of Systems and Computing
University of New South Wales, Australian Defence Force Academy

Dr Kathryn Kasmarik is a Professor of Computer Science at the University of New South Wales, Canberra at the Australian Defence Force Academy. Her current research interests lie in building swarming robots that can evolve their own collective behaviours.

Kathryn completed a Bachelor of Computer Science and Technology at the University of Sydney, including a study exchange at the University of California, Los Angeles (UCLA). She graduated with First Class Honours and the University Medal in 2002. She completed a PhD in computer science through National ICT Australia and the University of Sydney in 2007. She moved to UNSW Canberra in 2008.

Kathryn has published over 150 articles in peer-reviewed conferences and journals. Her research has attracted over $4 million in funding from the Australian Research Council and the Defence Science and Technology Group, among other sources. She was Deputy Head of School (Teaching) for the School of Engineering and IT from 2018 to 2021, and Acting Head of School establishing the new School of Systems and Computing at UNSW Canberra from 2023 to 2024.

Presentation title: Human-swarm interaction: design and evaluation

Abstract: This talk will present approaches to human-swarm interaction using different touch screen gestures. The talk will consider the differences between the needs of single-vehicle teleoperation and multi-vehicle teleoperation, which influence the design of indirect interaction mechanisms for human-swarm teaming. Two different swarm algorithms will be considered: boid swarms and particle swarm optimisation. These algorithms will be considered within two applications: (1) a search and rescue task in benign terrain where operators are responsible for mission setup and monitoring, and (2) a move-to-target task in the presence of different types of hazards where operators are responsible for interpreting information about hazards that is not otherwise detectable by the swarm. The composition of roles within each human-swarm team will be discussed. Metrics permitting operators to understand swarm health in these settings will be presented, as well as results of a user study showing the benefits of human-swarm teaming.
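For readers unfamiliar with boid swarms, the sketch below shows the three classic steering rules (separation, alignment, cohesion) that drive them. It is a generic illustration of the algorithm, not the swarm implementation used in this research; the weights and neighbourhood radius are arbitrary assumptions.

```python
import numpy as np

def boids_step(pos, vel, radius=1.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=0.1):
    """One update of the classic boid rules for N agents in 2D.
    pos, vel: (N, 2) arrays; weights and radius are illustrative choices."""
    acc = np.zeros_like(vel)
    for i in range(len(pos)):
        offsets = pos - pos[i]                 # vectors from agent i to all agents
        dist = np.linalg.norm(offsets, axis=1)
        nbr = (dist > 0) & (dist < radius)     # neighbours within the radius
        if not nbr.any():
            continue
        acc[i] += -w_sep * offsets[nbr].sum(axis=0)            # separation: steer away
        acc[i] += w_ali * (vel[nbr].mean(axis=0) - vel[i])     # alignment: match headings
        acc[i] += w_coh * (pos[nbr].mean(axis=0) - pos[i])     # cohesion: move towards centre
    vel = vel + acc
    return pos + vel * dt, vel
```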

Dr Ami Drory

Senior Research Scientist in Autonomy and AI
Defence

Dr Ami Drory is a Senior Research Scientist in Autonomy & AI at the Defence Science & Technology Group (DSTG), a member of the Defence AI Centre (DAIC), and an adjunct at the Australian National University (ANU). Before joining DSTG, he was a Senior Research Fellow in Machine Learning at UNSW’s National Facility for Human–Robot Interaction Research, focusing on cognitive load estimation for military UAV operators.

He previously held a professorship in Biomechanical Engineering at the University of Nottingham, UK, leading research on remote neuromuscular disease monitoring during the pandemic and on the development of autonomous vehicle systems. Earlier, at Northwestern University and the Rehabilitation Institute of Chicago in the US, he researched machine learning assistive technologies for clinical prediction and diagnostics in the rehabilitation of veterans, patients with stroke or Parkinson’s disease, and infants with cerebral palsy.

He also worked at the Australian Institute of Sport, enhancing athlete performance for the Olympics, and researched IMU-based sensor networks for motion capture at the University of Sydney. During his academic career, he has taught courses in biomechanics, computer vision, machine learning, bioengineering, biomedical engineering, ergonomics, human mechanics, and human factors at some of the world’s best universities. Dr Drory earned his PhD in Engineering and Computer Science from ANU, developing computer vision and machine learning techniques for markerless and surface geometry estimation, recognition and tracking for biomechanics applications.

Presentation title: HMT for military adoption of AI/ML – Are we dumbing down humans?

Abstract: HMT is expected to enhance Army’s range and lethality, enabling small combat teams to generate asymmetric advantage and to create secondary benefits including force protection, increased mass and scale, and more rapid decision-making. Machines, however, in the context of AI/ML have different strengths than humans. Their utilisation on the battlefield to achieve increased mass, scale and rapid decision-making is not an attritable substitution of like for like. The challenge for HMT is to integrate the strengths of each instead of reverting to the mean joint capability. In this presentation, common approaches to HMT will be challenged. Is considering HMT at the current developmental stage of AI/ML and autonomous systems a distraction? Are we blunting human capabilities? What are the implications of embracing uncertainty in the performance of machines?

Prof Lexing Xie

Professor of Computer Science, The Australian National University (ANU)

Lexing Xie is a Professor of Computer Science at the Australian National University (ANU), where she leads the ANU Computational Media Lab and directs the ANU-wide Integrated AI Network. Her research spans machine learning, computational social science, and computational economics, with a particular focus on online optimisation, neural networks for sequences, human-centred NLP, and applied problems such as distributed online markets and decision-making by humans and machines.

Lexing received the 2023 ARC Future Fellowship and the 2018 Chris Wallace Award for Outstanding Research. Her research has garnered seven best paper and best student paper awards at ACM and IEEE conferences between 2002 and 2019. Among her editorial roles, she served as the inaugural Editor-in-Chief of the AAAI International Conference on Web and Social Media (ICWSM) and was the Program Co-Chair of ACM Multimedia 2024. Prior to joining ANU, she was a Research Staff Member at the IBM T.J. Watson Research Center in New York. She holds a PhD in Electrical Engineering from Columbia University and a BS in Electrical Engineering from Tsinghua University.

Presentation title: LLM-Assisted Comparison of Complex Documents

Abstract: Professional documents such as regulations, standards, policies, and technical specifications are often lengthy, intricate, and difficult to interpret, particularly when comparing multiple sources. Understanding these documents requires expertise in both their content and structure. To support this process, we present DocDiff, a system that leverages large language models to cluster, align, and compare complex documents. This approach enables stakeholders, including policymakers, regulators, and industry practitioners, to uncover meaningful discrepancies, obligations, and enforcement structures embedded in professional texts, thereby improving clarity, consistency, and compliance.
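By way of illustration, the alignment step of such a pipeline might pair sections across two documents by embedding similarity, with unmatched sections flagged as candidate discrepancies. The sketch below is a toy under stated assumptions, not the DocDiff implementation: embed() is a stand-in for a real LLM or sentence-embedding model.

```python
import numpy as np

def embed(sections: list[str]) -> np.ndarray:
    """Stand-in embedder: one vector per section. A real system would call
    an LLM or sentence-embedding model here (assumed, not DocDiff's API)."""
    rng = np.random.default_rng(0)  # deterministic so the sketch runs end to end
    return rng.normal(size=(len(sections), 384))

def align_sections(doc_a: list[str], doc_b: list[str], threshold: float = 0.5):
    """Pair each section of doc_a with its most similar section of doc_b;
    sections below the threshold are left unmatched for human review."""
    a, b = embed(doc_a), embed(doc_b)
    a /= np.linalg.norm(a, axis=1, keepdims=True)  # normalise so the dot product
    b /= np.linalg.norm(b, axis=1, keepdims=True)  # equals cosine similarity
    sim = a @ b.T
    pairs = []
    for i, row in enumerate(sim):
        j = int(row.argmax())
        pairs.append((i, j) if row[j] >= threshold else (i, None))
    return pairs
```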

Dr Yue Liu

Research Fellow, The Australian National University (ANU)

Yue Liu is a Research Fellow at the Australian National University; before that, he was a Research Engineer in the Software Engineering for AI team at Data61, CSIRO. Yue received the “Early Career in Science Award” from Data61 in 2024.

His research interests include responsible AI, AI engineering, and AI for HSE. Yue holds a PhD in Computer Science and Engineering from the University of New South Wales. He is active in the research community, serving on program committees and in editorial roles at leading AI and software engineering venues.

Mr Xiuyuan (Jack) Yuan

Undergraduate Researcher, The Australian National University (ANU)

As a researcher, Jack has conducted studies on the topics of Large Language Models, Responsible AI and Human-in-the-loop AI systems. He is also an active member of the research community, participating in symposiums, workshops and conferences.

Captain Adam Allica

Director Maritime Asymmetric Capability Exploration, Defence

Captain Allica joined the RAN in 1986, qualified as a Maritime Warfare Officer in 1988 and as a Navigator in 1990, was dux of the Principal Warfare Officer Surface Warfare course in 1996, and was selected for Command in the inaugural ‘Junior Command’ initiative in 2000. Throughout his career at sea, Captain Allica served in HMA Ships Jervis Bay, Ipswich, Parramatta, Torrens, Cessnock, Gawler, Sydney, Darwin and Newcastle, and commanded HMAS Fremantle (2000-02), conducting immigration and fisheries operations in Australia’s Northern Waters.

Captain Allica’s operational postings included deployments to the Middle East in HMAS Sydney (1993) for Operation Damask, East Timor (1998-2000), and the Persian Gulf in HMAS Newcastle (2005) for Operations Slipper and Catalyst. Shore postings included periods as a Navigation Instructor (1992), Bridge Simulator Instructor (1995), and with the Sea Training Group as Fleet Anti-Submarine and Fleet Electronic Warfare Officer (2002), where he was awarded a CDF Gold Commendation.

Outside Navy, Captain Allica spent eight years in the commercial sector as a Management Consultant, ICT Project and Program Manager, and in senior management roles within ICT companies, as well as establishing his own start-up company. He re-joined Navy in 2013 at the Directorate of Navy Continuous Innovation (DNCI), where he has held roles as Project Manager, Deputy Director, and Director. In 2017 Captain Allica introduced AI technologies to Navy and was instrumental in developing the Defence AI capability, delivering the first Robotic Process Automation bots in Navy in 2019 and contributing through AUKUS Pillar 2. Other outcomes include the use of AI/ML on the dark web for supply chain assurance, now managed permanently for the whole of Defence by the CASG Supply Network Analysis Program (SNAP).

In 2019 Captain Allica was appointed as the inaugural Director General Warfare Innovation Navy, with responsibility for Robotic and Autonomous Systems, AI, Modelling & Simulation, and Innovation. He developed policy to conduct agile capability development through experimentation and prototyping, which led to Navy’s first Large Uncrewed Underwater Vessel (LUUV). His leadership and reinvigoration of the Autonomous Warrior operational experimentation (OPEX) program from 2020 to 2024 saw him plan and execute the inaugural AUKUS Maritime Big Play OPEX event in 2024, which received high praise from the CDF and AUKUS partners.

Captain Allica holds a Master of Business Administration (University of New South Wales’ Australian Graduate School of Management), a Graduate Diploma of Management (Defence Studies) (University of Canberra) and a Graduate Certificate in Maritime Studies (University of Wollongong), and is a Graduate of the Australian Institute of Company Directors.

LEUT Jessica Craig

Royal Australian Navy officer
Information Advantage Centre of Excellence (IACOE), Defence

Lieutenant Jessica Craig is a Royal Australian Navy officer currently serving in the Information Advantage Centre of Excellence (IACOE). Established in 2024, the IACOE is focused on scaling and optimising human-centred Information Operations (IO) within the Australian Defence Force (ADF), and integrating emerging technologies, such as artificial intelligence, to enhance situational awareness and counter adversarial influence.

Lieutenant Craig holds a Master of Public Relations and Advertising from the University of New South Wales and a Bachelor of Arts from Western Sydney University. She has previously served as the Officer in Charge of Influence Activities within the Fleet Information Effects Unit and has contributed to IO cells across multiple domestic and international exercises.

Her current project interests include the application of AI in IO planning, particularly in understanding and countering the spread of mis/disinformation and developing strategies to protect the cognitive resilience and will to fight of ADF personnel.

FLTLT Jorge Alvarino Diaz

Digital Technology Officer, Jericho Disruptive Innovation
Defence

FLTLT Jorge Alvarino Diaz is a Digital Technology Officer working with the Jericho AI Integration Team, Defence.

Natasha Karner

Research Associate in The Alan Turing Institute’s Defence and National Security Grand Challenge and Researcher at CETaS
The Alan Turing Institute

Natasha Karner is a Research Associate in the Turing’s Defence and National Security Grand Challenge. She works in the AI for Data-Driven Advantage (AIDA) defence policy workstream, which focuses on identifying solutions to the technical and policy challenges of responsible AI adoption for strategic advantage in defence. She is also a researcher at CETaS. Her recent and ongoing research projects include AI assurance in defence and national security, integrating AI into command decision-making, and geospatial data safety and security.

Natasha’s PhD dissertation explored the governance of Autonomous Weapons Systems (AWS), including deliberations under the auspices of the United Nations Convention on Certain Conventional Weapons (CCW). She is a Junior Associate Fellow at NATO Defense College and a member of BASIC’s Emerging Voices Network for nuclear weapons policy.