AV Safety

Verifiable Safety for Autonomous Vehicles

Figure: System architecture of Synergistic Simplex

Safety in autonomous vehicles (AV) remains a key challenge. An inherent limitation of traditional autonomous driving agents is that the responsibilities of mission (navigation) and safety (collision avoidance) are diffused throughout the system. Without a separation between mission-critical and safety-critical tasks, the optimization goals of the system become ill-formed. Worse, the safety problem itself becomes intermingled with mission concerns and correspondingly convoluted.

An example of this intermingling is the Deep Neural Network (DNN) used for object detection in AV. These solutions couple the tasks of determining object existence and object class, e.g., vehicle or pedestrian. This approach works incredibly well for mission-critical systems, enabling a plethora of new applications, including AV. However, in safety-critical systems, obstacle existence detection is a necessary requirement while object classification is not; the inter-dependency between the two allows errors in one task, e.g., object classification, to cause critical faults in the other, i.e., obstacle existence detection. Furthermore, such intermingled, complex tasks require complex solutions. While DNNs have been deployed quite successfully to fulfill these complex requirements, their inherent limitations with respect to complete verification and analysis make them unsuitable for safety-critical tasks.
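
To make the coupling concrete, the following minimal Python sketch contrasts a mission channel that filters detections by class confidence with a safety channel that keeps any detection with sufficient evidence of existence, regardless of class. The Detection fields, threshold values, and function names are illustrative assumptions, not the interface of any specific detector or of the referenced work.

# Hypothetical sketch: decoupling obstacle *existence* from object *classification*.
from dataclasses import dataclass

@dataclass
class Detection:
    bbox: tuple            # (x_min, y_min, x_max, y_max)
    objectness: float      # confidence that *something* occupies this region
    class_scores: dict     # e.g., {"vehicle": 0.3, "pedestrian": 0.3}

def mission_objects(detections, class_threshold=0.5):
    """Mission channel: keep detections whose best class score is confident.
    A classification error here can suppress a real obstacle."""
    return [d for d in detections
            if max(d.class_scores.values()) >= class_threshold]

def safety_obstacles(detections, existence_threshold=0.2):
    """Safety channel: keep any detection with sufficient evidence that an
    obstacle exists, regardless of which class it is believed to be."""
    return [d for d in detections if d.objectness >= existence_threshold]

# A detection with strong objectness but ambiguous class is dropped by the
# mission channel yet retained by the safety channel.
dets = [Detection((10, 2, 12, 4), objectness=0.9,
                  class_scores={"vehicle": 0.3, "pedestrian": 0.3})]
assert mission_objects(dets) == [] and safety_obstacles(dets) == dets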

This impasse, created by DNN solutions being necessary for AV mission capabilities yet unsuitable for safety-critical tasks, can be resolved by decoupling the responsibilities of mission and safety. The separate fulfillment of safety and mission requirements has proven successful in system architectures like Simplex, yielding systems that perform with high performance in typical conditions while providing verifiably safe behavior in safety-critical ones. Thus far, however, such a separation has not been achieved in AV. A minimal sketch of Simplex-style arbitration appears below.
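
The following Python sketch illustrates the Simplex switching idea under simplifying assumptions: a verified monitor decides, step by step, whether the high-performance (mission) command may be used or whether control must fall back to a simple, verifiable safety controller. The controller and monitor interfaces, the 1-D gap state, and the numeric thresholds are hypothetical and only meant to convey the arbitration pattern, not the architecture of the referenced work.

def simplex_step(state, high_performance_ctrl, safety_ctrl, is_safe):
    """Run both controllers; use the mission command only while the verified
    monitor confirms it keeps the system inside the proven-safe envelope,
    otherwise fall back to the verified safety controller."""
    mission_cmd = high_performance_ctrl(state)   # e.g., DNN-based planner
    safety_cmd = safety_ctrl(state)              # simple, verifiable controller
    return mission_cmd if is_safe(state, mission_cmd) else safety_cmd

# Illustrative usage with a 1-D "distance to obstacle" state:
high_perf = lambda s: {"accel": 1.0}             # aggressive progress
safe_ctrl = lambda s: {"accel": -2.0}            # verified braking action
monitor = lambda s, cmd: s["gap_m"] > 15.0 or cmd["accel"] <= 0.0

print(simplex_step({"gap_m": 30.0}, high_perf, safe_ctrl, monitor))  # mission command
print(simplex_step({"gap_m": 8.0}, high_perf, safe_ctrl, monitor))   # safety fallback

The key design property is that only the monitor and the safety controller need to be verified; the high-performance component may remain arbitrarily complex.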

Perception of obstacles remains a critical safety concern for AV. Real-world incidents have shown that the autonomy faults leading to fatal collisions often originate in obstacle existence detection. Therefore, we first target such faults while developing a generalized framework that is extensible to other fault types.

This research aims to create a safety layer for AV that bears the primary responsibility for maintaining the system's safety and whose behavior is verifiable.

References

2024

  1. Journal
    Perception simplex: Verifiable collision avoidance in autonomous vehicles amidst obstacle detection faults
    Ayoosh Bansal, Hunmin Kim, Simon Yu, Bo Li, Naira Hovakimyan, Marco Caccamo, and Lui Sha
    Software Testing, Verification and Reliability, 2024
  2. Conference
    Synergistic Perception and Control Simplex for Verifiable Safe Vertical Landing
    Ayoosh Bansal, Yang Zhao, James Zhu, Sheng Cheng, Yuliang Gu, Hyung Jin Yoon, Hunmin Kim, Naira Hovakimyan, and Lui R Sha
In AIAA SCITECH 2024 Forum, 2024

2022

  1. Conference
    Verifiable obstacle detection
    Ayoosh Bansal, Hunmin Kim, Simon Yu, Bo Li, Naira Hovakimyan, Marco Caccamo, and Lui Sha
In 2022 IEEE 33rd International Symposium on Software Reliability Engineering (ISSRE), 2022
  2. Preprint
    Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles
    Ayoosh Bansal, Simon Yu, Hunmin Kim, Bo Li, Naira Hovakimyan, Marco Caccamo, and Lui Sha
    arXiv preprint arXiv:2209.01710, 2022

2021

  1. Conference
    Risk Ranked Recall: Collision Safety Metric for Object Detection Systems in Autonomous Vehicles
    Ayoosh Bansal, Jayati Singh, Micaela Verucchi, Marco Caccamo, and Lui Sha
In 2021 10th Mediterranean Conference on Embedded Computing (MECO), 2021