
A PRELIMINARY SET OF LANDSCAPE CATEGORIES

The figure below proposes a draft schema of seven categories for classifying and discussing AI Safety knowledge. The categories interact with and depend on one another. The purpose of this classification is to promote structured discussion towards a consistent view of AI Safety, and to situate the field with respect to other disciplines in computer science, project management, and the social sciences. The taxonomy is fully open to amendment during the workshop or at future meetings.

[Figure: AI Safety Landscape — a preliminary set of seven categories (CategoriesWeb.png)]
AI Safety Foundations

This category covers foundational concepts, characteristics, and problems related to AI Safety that need special consideration from a theoretical perspective. It includes concepts such as uncertainty, generality, and value alignment, as well as characteristics such as autonomy levels, safety criticality, and types of human-machine and environment-machine interaction. The group is intended to collect cross-category concerns in AI Safety.

Specification and Modelling

This category focuses on how to describe the needs, designs, and actual operation of safety-critical systems from different perspectives (technical concerns) and at different abstraction levels. It includes the specification and modelling of risk management properties (e.g., hazards, failure modes, mitigation measures), as well as safety-related requirements, training, behavior, and quality attributes in AI-based systems.
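
As a minimal sketch, risk management properties can be captured as machine-readable records that link hazards to failure modes and mitigation measures. The Python below is purely illustrative; the field names (identifier, failure_modes, mitigations) and identifiers are assumptions, not terms drawn from any particular safety standard.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: machine-readable records for risk management
# properties (hazards, failure modes, mitigation measures).
# Field names and IDs are hypothetical, not from a safety standard.

@dataclass
class Mitigation:
    identifier: str
    description: str

@dataclass
class Hazard:
    identifier: str
    description: str
    failure_modes: List[str] = field(default_factory=list)
    mitigations: List[Mitigation] = field(default_factory=list)

hazard = Hazard(
    identifier="HAZ-001",
    description="Pedestrian not detected in low light",
    failure_modes=["camera underexposure", "training distribution shift"],
    mitigations=[Mitigation("MIT-001", "Redundant infrared-based detection")],
)
print(hazard.identifier, "->", [m.identifier for m in hazard.mitigations])
```

Records of this kind make hazards and their mitigations traceable across design artefacts, which is one way the specification concerns above can feed the assurance activities described later.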

Verification and Validation

This category concerns design-time approaches to ensure that an AI-based system meets its requirements (verification) and behaves as expected (validation). The range of techniques covers any formal/mathematical, model-based simulation, or testing approach that provides evidence that an AI-based system satisfies its defined (safety) requirements, does not deviate from its intended behavior, and does not cause unintended consequences.
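
For instance, a testing-based approach samples many operating conditions and checks a formalized safety requirement on each. The sketch below assumes a hypothetical planner (plan_speed) and an invented requirement, chosen purely to show the pattern of evidence collection.

```python
import random

# A minimal testing-based verification sketch. plan_speed stands in
# for an AI-based component; the safety property is an assumed
# requirement, invented for illustration.

def plan_speed(obstacle_distance: float) -> float:
    # Stand-in for a learned planner: slow down near obstacles.
    return min(10.0, obstacle_distance / 3.0)

def safety_property(distance: float, speed: float) -> bool:
    # Assumed requirement: commanded speed never exceeds half the
    # free distance ahead.
    return speed <= distance / 2.0

random.seed(0)
for _ in range(1_000):
    d = random.uniform(0.0, 100.0)
    assert safety_property(d, plan_speed(d)), f"violation at distance={d}"
print("1,000 sampled inputs satisfy the safety requirement")
```

Sampling alone cannot prove the absence of violations; that gap is exactly what motivates the runtime techniques in the next category.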

Runtime Monitoring and Enforcement

The increasing autonomy and learning nature of AI-based systems make their verification and validation (V&V) particularly challenging, because we cannot collect an epistemologically sufficient quantity of evidence at design time to ensure correctness. Runtime monitoring covers the gaps left by design-time V&V by observing the internal states of a given system and its interactions with external entities, with the aim of determining whether system behavior is correct or predicting potential risks. Enforcement deals with runtime mechanisms that self-adapt, optimize, or reconfigure system behavior, with the aim of supporting a fallback from the (anomalous) current state to a safe system state.
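
As a concrete (hypothetical) sketch, a monitor can wrap an unverified controller, check each output against a safety invariant, and enforce a fallback to a known-safe state on violation. All names, values, and the invariant below are illustrative assumptions.

```python
# A minimal sketch of a runtime monitor with a safety enforcer.
# The speed limit, controller, and fallback are hypothetical
# placeholders chosen for illustration.

SPEED_LIMIT = 5.0  # assumed safety invariant: |command| <= 5 m/s

def nominal_controller(request: float) -> float:
    # Stand-in for a learned controller whose outputs cannot be
    # fully verified at design time.
    return request * 1.2

def safe_fallback() -> float:
    # Enforcement: reconfigure to a known-safe state (stop).
    return 0.0

def monitored_step(request: float) -> float:
    command = nominal_controller(request)
    if abs(command) > SPEED_LIMIT:    # monitor: observe and check
        command = safe_fallback()     # enforce: fall back safely
    return command

print(monitored_step(3.0))   # within the invariant -> 3.6
print(monitored_step(12.0))  # violation detected -> fallback to 0.0
```

The design choice here is that the monitor only needs the invariant to be trustworthy, not the learned component itself, which is what lets it compensate for incomplete design-time evidence.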

Human-Machine Interaction

As autonomous systems progressively take over cognitive human tasks, certain human-machine interaction issues become more critical, such as loss of situation awareness or overconfidence. Other issues include collaborative missions that need unambiguous communication to manage the self-initiative to start or hand over tasks; safety-critical situations in which earning and maintaining trust is essential during operational phases; and cooperative human-machine decision tasks where understanding machine decisions is crucial to validating safe autonomous actions.

Process Assurance and Certification

Process assurance comprises the planned and systematic activities that ensure system lifecycle processes conform to their requirements (including safety) and quality procedures. In our context, it covers the management of the different phases of AI-based systems, including training and operational phases, the traceability of data and artefacts, and the people involved. Certification implies a (legal) recognition that a system or process complies with industry standards and regulations, to ensure that it delivers its intended functions safely. Certification is challenged by the inscrutability of AI-based systems and by the inability to ensure functional safety under uncertain and exceptional situations prior to operation.

Safety-related Ethics, Security and Privacy

While these are large fields in their own right, we are interested in their intersection with, and dependencies on, safety issues. Ethics becomes increasingly important as autonomy (with learning and adaptive abilities) involves the transfer of safety risks, responsibility, and liability, among others. AI-specific security and privacy issues must be considered with regard to their impact on safety. For example, malicious adversarial attacks can be studied with a focus on situations that drive a compromised system toward a dangerous state.
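
To make the safety link concrete, the sketch below mounts a fast-gradient-sign-style perturbation on a toy linear model. The model, weights, and perturbation budget are invented for illustration and are not drawn from the workshop material; it shows only the general mechanism by which a small input change shifts a decision score.

```python
import numpy as np

# Illustrative only: a fast-gradient-sign-style perturbation against
# a toy linear scorer, showing how a small, bounded input change can
# push a system toward an unsafe decision. Weights, the decision
# convention, and the budget epsilon are arbitrary assumptions.

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # toy model weights
b = 0.1
x = rng.normal(size=8)        # a benign input

def score(v: np.ndarray) -> float:
    return float(w @ v + b)   # assumed decision rule: safe if score > 0

# For a linear model, the gradient of the score w.r.t. the input is w.
epsilon = 0.5                 # attacker's per-feature perturbation budget
x_adv = x - epsilon * np.sign(score(x)) * np.sign(w)

print("benign score:     ", round(score(x), 3))
print("adversarial score:", round(score(x_adv), 3))
```

Studying such attacks from a safety perspective means asking not just whether the score changes, but whether the induced decision can place the overall system in a dangerous state.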
