PROGRAMME
In 2023 the workshop was held on a single day, according to the following schedule.
Day 1 - Monday, August 21st, 2023 - 09:00 to 17:50 CST
INVITED SPEAKERS
The 2023 keynotes were given by Paul Lukowicz (DFKI Kaiserslautern, Germany) and François Terrier (CEA List, France); see the Technical Sessions below for details.
BEST PAPER AWARD
The Program Committee (PC) designated up to three papers as candidates for the AISafety Best Paper Award.
The best paper was selected from the candidate papers for the 2023 edition by a vote of the workshop’s Chairs during the workshop.
At the workshop’s closing, the authors of the winning paper received a certificate bearing the name of the award, the title of the paper, and the names of its authors.
The AISafety 2023 Best Paper Award was granted to Nicola Franco, Daniel Korth, Jeanette Miriam Lorenz, Karsten Roscher and Stephan Günnemann, for: Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection.
ORGANIZING COMMITTEE
- Gabriel Pedroza, ANSYS, France
- Xiaowei Huang, University of Liverpool, UK
- Xin Cynthia Chen, ETH Zurich, Switzerland
- Andreas Theodorou, Umeå University, Sweden
- Nikolaos Matragkas, CEA LIST, France
STEERING COMMITTEE
- Huascar Espinoza, KDT JU, Belgium
- Mauricio Castillo-Effen, Lockheed Martin, USA
- José Hernández-Orallo, Universitat Politècnica de València, Spain
- Richard Mallah, Future of Life Institute, USA
- John McDermid, University of York, UK
PROGRAMME COMMITTEE
- Simos Gerasimou, University of York, UK
- Jonas Nilsson, NVIDIA, USA
- Morayo Adedjouma, CEA LIST, France
- Brent Harrison, University of Kentucky, USA
- Alessio R. Lomuscio, Imperial College London, UK
- Brian Tse, Affiliate at University of Oxford, China
- Michael Paulitsch, Intel, Germany
- Ganesh Pai, NASA Ames Research Center, USA
- Rob Alexander, University of York, UK
- Vahid Behzadan, University of New Haven, USA
- Chokri Mraidha, CEA LIST, France
- Ke Pei, Huawei, China
- Orlando Avila-García, Arquimea Research Center, Spain
- I-Jeng Wang, Johns Hopkins University, USA
- Chris Allsopp, Frazer-Nash Consultancy, UK
- Andrea Orlandini, ISTC-CNR, Italy
- Agnes Delaborde, LNE, France
- Rasmus Adler, Fraunhofer IESE, Germany
- Roel Dobbe, TU Delft, The Netherlands
- Vahid Hashemi, Audi, Germany
- Juliette Mattioli, Thales, France
- Bonnie W. Johnson, Naval Postgraduate School, USA
- Roman V. Yampolskiy, University of Louisville, USA
- Jan Reich, Fraunhofer IESE, Germany
- Fateh Kaakai, Thales, France
- Francesca Rossi, IBM and University of Padova, USA
- Javier Ibañez-Guzman, Renault, France
- Jérémie Guiochet, LAAS-CNRS, France
- Raja Chatila, Sorbonne University, France
- François Terrier, CEA LIST, France
- Mehrdad Saadatmand, RISE Research Institutes of Sweden, Sweden
- Alec Banks, Defence Science and Technology Laboratory, UK
- Roman Nagy, Argo AI, Germany
- Nathalie Baracaldo, IBM Research, USA
- Toshihiro Nakae, DENSO Corporation, Japan
- Gereon Weiss, Fraunhofer IKS, Germany
- Philippa Ryan Conmy, Adelard, UK
- Stefan Kugele, Technische Hochschule Ingolstadt, Germany
- Colin Paterson, University of York, UK
- Davide Bacciu, Università di Pisa, Italy
- Timo Sämann, Valeo, Germany
- Sylvie Putot, École Polytechnique, France
- John Burden, University of Cambridge, UK
- Sandeep Neema, DARPA, USA
- Fredrik Heintz, Linköping University, Sweden
- Simon Fürst, BMW Group, Germany
- Mario Gleirscher, University of Bremen, Germany
- Mandar Pitale, NVIDIA, USA
- Leon Kester, TNO, The Netherlands
- Bernhard Kaiser, ANSYS, Germany
TECHNICAL SESSIONS
The presentation files are available via the talk title links below.
Keynote 1 - Paul Lukowicz (DFKI Kaiserslautern, Germany), Safety risks of AI: Intelligence, Complexity, and Stupidity
Abstract: TBD
Session 1 - Robustness of AI via OoD and Unknown-Unknowns Detection – Chair: David Bossens (University of Southampton, UK)
- Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection - Nicola Franco, Daniel Korth, Jeanette Miriam Lorenz, Karsten Roscher, Stephan Günnemann
- Unsupervised Unknown Unknown Detection in Active Learning - Prajit T. Rajendran, Huascar Espinoza, Agnes Delaborde, Chokri Mraidha
> Debate Panel - Session Discussants: Presenters and Session Chair.
Session 2 - AI Robustness, Adversarial Attacks and Reinforcement Learning - Chair: Anqi Liu (Johns Hopkins University, USA)
- PerCBA: Persistent Clean-label Backdoor Attacks on Semi-Supervised Graph Node Classification – Xiao Yang, Gaolei Li, Chaofeng Zhang, Meng Han, Wu Yang
- Distribution-restrained Softmax Loss for the Model Robustness - Chen Li, Hao Wang, Jinzhe Jiang, Xin Zhang, Yaqian Zhao, Weifeng Gong
- Fear Field: Adaptive constraints for safe environment transitions in Shielded Reinforcement Learning - Haritz Odriozola-Olalde, Nestor Arana-Arexolaleiba, Maider Zamalloa, Jon Perez-Cerrolaza, Jokin Arozamena-Rodríguez
> Debate Panel - Session Discussants: Presenters and Session Chair.
Session 3 - AI Governance and Policy/Value Alignment – Chair: François Terrier (CEA-LIST, France)
- An open source perspective on AI and alignment with the EU AI Act - Diego Calanzone, Andrea Coppari, Riccardo Tedoldi, Giulia Olivato, Carlo Casonato
> Debate Panel - Session Discussants: Presenters and Session Chair.
Keynote 2 - François Terrier (Program Director of CEA List, France) - No Trust without regulation! European challenge on regulation, liability and standards for trusted AI
The explosion in the performance of Machine Learning (ML) and the potential of its applications strongly encourage us to consider its use in industrial systems, including for critical functions such as decision-making in autonomous systems. While the AI community is well aware of the need to ensure the trustworthiness of AI-based applications, it still leaves too much to one side the issue of safety and its corollary, regulation and standards, without which no level of safety can be certified, whether the systems are slightly or highly critical.
The process of developing and qualifying safety-critical software and systems in regulated industries such as aerospace, nuclear power, railways or automotive has long been well rationalized and mastered. These industries use well-defined standards, regulatory frameworks and processes, as well as formal techniques, to assess and demonstrate the quality and safety of the systems and software they develop. However, the low level of formalization of specifications, together with the uncertainty and opacity of machine-learning-based components, makes them difficult to validate and verify using most traditional critical-systems engineering methods. This raises the question of qualification standards, and therefore of regulations adapted to AI. With the AI Act, the European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values. The question then becomes: “How can we rise to the challenge of certification and propose methods and tools for trusted artificial intelligence?”
Session 5 - AI Trustworthiness, Explainability and Testing - Chair: Prajit T. Rajendran (CEA-LIST, France)
- Empirical Optimal Risk to Quantify Model Trustworthiness for Failure Detection - Shuang Ao, Stefan Rueger, Advaith Siddharthan
- Weight-based Semantic Testing Approach for Deep Neural Networks - Amany Alshareef, Nicolas Berthier, Sven Schewe, Xiaowei Huang
- AI for Safety: How to use Explainable Machine Learning Approaches for Safety Analyses – Iwo Kurzidem, Simon Burton, Philipp Schleiss
> Debate Panel - Session Discussants: Presenters and Session Chair.