Robotic systems are set to be deployed in a wide variety of real-world settings, from roads to shopping malls, offices, airports, and healthcare facilities. To perform consistently well in these environments, however, robots must be able to cope with uncertainty, adapting to unexpected changes in their surroundings while ensuring the safety of nearby humans.
Robotic systems that can adapt independently to uncertainty in situations where humans could be at risk are referred to as "safety-critical self-adaptive" systems. While many robotics researchers are working to develop such systems and improve their performance, a clear and general theoretical framework defining them is still lacking.
Researchers at the University of Victoria in Canada recently carried out a study aimed at clearly defining the concept of a "safety-critical self-adaptive system." Their paper, pre-published on arXiv, provides a valuable framework that can be used to classify these systems and differentiate them from other robotic solutions.
"Self-adaptive systems have been extensively studied," Simon Diemert and Jens Weber wrote in their paper. "This paper proposes a definition of a safety-critical self-adaptive system and then describes a taxonomy to classify adaptations into different types, based on their impact on the system's safety and the system's safety case."
The key objective of Diemert and Weber's work was to formalize the notion of "safety-critical self-adaptive systems" so that it could be better understood by roboticists. To do this, the researchers first proposed clear definitions of two terms, namely "safety-critical self-adaptive system" and "safe adaptation."
According to their definition, to be a safety-critical self-adaptive system, a robot must meet three key criteria. First, it must satisfy Weyns' external principle of adaptation, which essentially means that it must be able to independently handle changes and uncertainty in its environment, in the system itself, and in its goals.
To be safety-critical and self-adaptive, a system must also satisfy Weyns' internal principle of adaptation, which suggests that it must evolve from within, modifying its behavior in response to the changes it experiences. To do this, it must comprise a managed system and a managing system.
In this framework, the managed system performs the system's basic functions, while the managing system adapts the managed system over time. Finally, the managed system must be capable of handling safety-critical functions (i.e., actions that, if performed poorly, can lead to accidents and adverse events).
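The managed/managing split described above can be sketched minimally in code. The class and method names below are hypothetical, loosely inspired by the paper's water-heater example rather than taken from it:

```python
class ManagedSystem:
    """Performs the basic system function: here, heating water (hypothetical example)."""
    def __init__(self):
        self.setpoint = 50.0  # target water temperature in degrees C

class ManagingSystem:
    """Monitors the managed system and adapts its configuration over time."""
    def __init__(self, managed):
        self.managed = managed

    def adapt(self, observed_temp):
        # Example adaptation policy: lower the setpoint if the water runs too hot.
        if observed_temp > 55.0:
            self.managed.setpoint = 45.0

heater = ManagedSystem()
manager = ManagingSystem(heater)
manager.adapt(observed_temp=60.0)
print(heater.setpoint)  # 45.0
```

Note that the managing system never performs the heating itself; it only reconfigures the managed system, which matches the division of responsibilities in the framework.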
The researchers' definition of "safe adaptation," on the other hand, is based on two key ideas. These are that the managed part of a self-adaptive system is responsible for hazards in the environment, while the managing part is responsible for any changes to the configuration of the managed system. Based on these two concepts, Diemert and Weber define "safe adaptation" as follows:
“A safe adaptation option is an adaptation option that, when applied to a managed system, does not result in or contribute to the managed system’s reaching a dangerous state,” the researchers wrote in their paper. “A safe adaptation action is an adaptation action that, while being carried out, does not create or contribute to a hazard. It follows that a safe adaptation is one in which all adaptation options and adaptation actions are safe.”
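Under this definition, an adaptation engine would only ever select options that cannot lead the managed system into a dangerous state. A minimal sketch of such a guard is shown below; the option format and hazard predicate are illustrative assumptions, not constructs from the paper:

```python
def is_safe_option(option, hazard_predicate):
    """An adaptation option is safe iff applying it cannot reach a dangerous state."""
    return not hazard_predicate(option)

def select_safe_adaptations(options, hazard_predicate):
    # Discard any option that would create or contribute to a hazard.
    return [opt for opt in options if is_safe_option(opt, hazard_predicate)]

# Hypothetical water-heater example: setpoints above 60 degrees C risk scalding.
options = [{"setpoint": 45}, {"setpoint": 70}, {"setpoint": 55}]
scalding_hazard = lambda opt: opt["setpoint"] > 60
print(select_safe_adaptations(options, scalding_hazard))
# [{'setpoint': 45}, {'setpoint': 55}]
```

A full treatment would also need to check the adaptation *actions* (the transitions between configurations), since the definition requires both the options and the actions to be hazard-free.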
To better define what "safe adaptation" means, and what distinguishes it from other forms of "adaptation," Diemert and Weber also devised a new taxonomy that can be used to classify the different adaptations carried out by self-adaptive systems. This taxonomy focuses specifically on the safety of, or risks associated with, different adaptations.
"The taxonomy expresses the classification criteria and then describes the specific criteria that a self-adaptive system's safety case must satisfy, depending on the type of adaptations performed," Diemert and Weber wrote in their paper. "Each type in the taxonomy is illustrated using an example of a safety-critical self-adaptive water heating system."
The taxonomy defined by Diemert and Weber classifies the adaptations performed by robotic or computational self-adaptive systems into four broad categories, referred to as type 0 (non-interference), type I (fixed assurance), type II (constrained assurance) and type III (dynamic assurance). Each of these adaptation categories is associated with specific rules and characteristics.
The recent work by this team of researchers could guide future studies aimed at developing self-adaptive systems designed to operate in safety-critical settings. Ultimately, it could also help to better understand the capabilities of these systems in various real-world applications.
"The next step for this line of investigation is to validate the proposed taxonomy, demonstrating that it is able to classify all kinds of safety-critical self-adaptive systems and that the obligations imposed by the taxonomy are appropriate, using a systematic literature review and case studies," Diemert and Weber conclude in their paper.
Simon Diemert, Jens H. Weber, Safety-Critical Adaptation in Self-Adaptive Systems. arXiv:2210.00095v1 [cs.SE], arxiv.org/abs/2210.00095
© 2022 Science X Network
Citation: A clear definition and classification of safety-critical self-adaptive robotic systems (2022, October 21). Retrieved October 21, 2022 from https://techxplore.com/news/2022-10-definition-classification-taxonomy-safety-critical-self-adaptive.html
This document is subject to copyright. Notwithstanding any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.