The presenters
WGCDR S. Kate Conroy
Responsible AI Lead, Jericho Disruptive Innovation, Royal Australian Air Force
WGCDR S. Kate Conroy née Devitt is a Specialist Capability Officer (Responsible AI Lead) in the Royal Australian Air Force and works in AI Safety and Assurance for the Queensland Government. She is a lead SME co-author of the National Framework for the Assurance of Artificial Intelligence in Government (2024). As Chief Scientist of the Trusted Autonomous Systems Defence Cooperative Research Centre, she led delivery of the Responsible AI for Defence (RAID) Toolkit (2023) and the Robotics and Autonomous Systems Gateway (RAS-Gateway) (2022), a digital hub providing resources and tools that help the Australian autonomous systems ecosystem navigate regulatory frameworks with greater certainty and efficiency.
Amy Plant
Product Manager A3I Program, Jericho Disruptive Innovation, Royal Australian Air Force
Amy Plant is an Agile project manager at the Department of Defence, working at the intersection of participatory design and disruptive technologies. As a Product Owner within Air Force’s Jericho Disruptive Innovation team, Amy leads the development of digital products that incorporate emerging technologies to improve aviation asset management outcomes. With a comprehensive background in the creative industries, where she led economic and community development initiatives, Amy is committed to fostering innovative and sustainable solutions that emphasise human-centred design approaches. During her previous employment at the Western Australian non-profit organisation FORM Building a State of Creativity, Amy contributed to numerous award-winning visual arts productions, including The Canning Stock Route Project, Field of Light: Avenue of Honour, and the PUBLIC Silo Trail.
WGCDR S. Kate Conroy and Amy Plant, Jericho Disruptive Innovation, Royal Australian Air Force, will present a seminar on Thursday, 22 August 2024.
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy (November 2023) calls on endorsing states to develop appropriate measures to ensure responsible AI and autonomy at relevant stages of the technology lifecycle. This talk will consider existing and emerging human-centred methods for the test and evaluation of human factors relevant to responsible AI in the military domain, as well as methodological gaps. The talk will discuss human-centred methods to demonstrate compliance with:
- Military AI Frameworks
- Military Responsible AI (RAI) Toolkits
- RAI international agreements in the military domain
The talk will also consider the impacts of AI on ethical dimensions including wellbeing, autonomy, and justice across diverse stakeholders (e.g. civilians, industry, academia, government, NGOs, the public, and military personnel). The aim of the event is to promote human-centred assurance methodologies that justify AI solution design, features, function, and context of use.
Click the button below to register, add an invitation to your calendar, and join the seminar using the Teams/GovTeams link.