Title: A System-level Approach to AI Safety and Guardrails: Ensuring Testing, Transparency, and Accountability Through Standards. Dr Liming Zhu from DATA61, CSIRO.

When: 07 March 2024
Time: 11am
Location: Online
Defence AI Seminar Series

The Presenter

Dr Liming Zhu

Research Director, Software and Computational Systems, DATA61, CSIRO

Dr Liming Zhu is a Research Director at CSIRO’s DATA61 and a conjoint full professor at the University of New South Wales (UNSW). He is the chairperson of Standards Australia’s blockchain committee and contributes to its AI trustworthiness committee. He is a member of the OECD.AI expert group on AI Risks and Accountability, as well as a member of the Responsible AI at Scale think tank at Australia’s National AI Centre.

His research program innovates in areas of AI/ML systems, responsible/ethical AI, software engineering, blockchain, regulation technology, quantum software, privacy and cybersecurity.

He has published over 300 papers on software architecture, blockchain, governance and responsible AI. He delivered the keynote “Software Engineering as the Linchpin of Responsible AI” at the International Conference on Software Engineering (ICSE) 2023. His two upcoming books, “Responsible AI: Best Practices for Creating Trustworthy AI Systems” and “Engineering AI Systems: DevOps and Architecture Approaches”, will be published by Addison-Wesley in 2024.

Dr Liming Zhu from DATA61, CSIRO, will present a seminar on Thursday, 7 March 2024.

This talk examines how AI models are only one component of the broader AI system. External factors – from deployment contexts to access to sensitive data and the use of tools – shape an AI system’s safety profile: they can introduce vulnerabilities, but they also present opportunities for risk mitigation through context-specific, system-level guardrails. We advocate a holistic system approach that integrates model evaluation with system-level testing and adopts a supply chain perspective to enhance transparency, accountability, and shared responsibility, so that safety risks can be evaluated and mitigated collaboratively across the AI supply chain.

The talk will also explore how Australia’s new AI safety standard represents a significant step in this direction. Rather than competing solely in large-scale or frontier model training and evaluation, this approach positions Australia as a world leader in the system-level approach. It also benefits small-to-medium enterprises, which primarily construct systems using third-party models, fostering a diverse and inclusive AI ecosystem.


Register to receive a calendar invitation and join the seminar using the Teams/GovTeams link.

DAIRNet hosts a fortnightly Defence AI Seminar Series at 11:00am (AEST/AEDT) every second Thursday. These seminars are a multi-sector and multi-discipline forum to present and discuss all aspects of Defence AI, from data and algorithms to responsible AI and capability.

If you are interested in presenting a future seminar, please send an email to enquiries@dairnet.com.au.