RSB Team at the 2nd Annual Automotive Functional Safety Forum in Berlin

2024-11-19

Table of Contents

  1. Safety Insights Berlin
  2. Overview and Observations
  3. ISO 26262 Revision 3
  4. Predictive Maintenance and Unified Safety
  5. AI Safety in Autonomous Driving
  6. End-to-End Safety
  7. Safety of Software for Perception Systems
  8. Open Source in Automotive Safety
  9. AI vs. Standard Compilers
  10. IEEE P2285: Streamlining Safety Data
  11. Safety AI Technology
  12. RSB: Staying Ahead

Safety Insights Berlin

At RSB Automotive Consulting, we recently had the privilege of attending the 2nd Annual Automotive Functional Safety Forum in Berlin on October 28th and 29th. Over these two days, our representatives delved into the latest industry discussions on automotive safety. The forum covered a spectrum of topics, from foundational safety protocols to emerging, sometimes provocative innovations. Some of the proposed solutions were consistent with our existing expertise, reinforcing our strategic goals. Others surprised us with fresh approaches to industry challenges, while still others sparked constructive debate, prompting us to revisit certain assumptions. We invite you to read on for a nuanced look at the themes that resonated, challenged, and inspired us.

Overview and Observations

The forum took place in the welcoming setting of the NH Collection Berlin Mitte Friedrichstrasse, with Riccardo Vincelli from Renesas expertly guiding the discussions. The setup was almost reminiscent of a university seminar: around 35 participants were seated in neatly arranged rows, creating an atmosphere of focused, academic-style engagement. Over two intensive days, we attended 33 presentations (17 on the first day and 16 on the second), most delivered live, with a few remote speakers joining from the U.S. Given the depth and specificity of the material, it was an exhausting yet highly engaging event for all involved.

source: personal archive

The overarching theme was the evolving standards in functional safety, with a special focus on system architecture and software design to ensure compliance. The sheer number of regulations, which continues to expand, underscored the complexity of the field. In addition to the standards, prominent topics included SOTIF (Safety of the Intended Functionality), cybersecurity, AI/ML, and the future of automotive technology.

With these foundational discussions in mind, we’ll now delve into the specific challenges and key takeaways related to functional safety.

ISO 26262 Revision 3

One of the most significant topics discussed at the forum was ISO 26262 Revision 3, which is currently under development and is expected to be released around 2026/27. This revision is crucial as it aims to address several emerging challenges in automotive safety standards:

  • Artificial Intelligence and Machine Learning: The upcoming version will incorporate updated requirements tailored specifically for AI and ML applications within automotive systems, including an extension of Annex C for machine-learning configuration and guidelines for handling training data
  • Predictive Maintenance: The new standard will likely include provisions for predictive maintenance strategies that can proactively identify degrading faults before they lead to failures
  • Fail Operational Systems: As vehicles evolve towards greater autonomy, ensuring that electrical/electronic (E/E) systems can maintain operational capabilities post-fault detection is becoming increasingly critical. Thus, fail-operational architectures will be emphasized
  • Safety of Intended Functionality (SOTIF): This concept will be integrated more comprehensively into ISO 26262 to address potential hazards arising from system behavior outside intended operating conditions

The table below presents other key relevant standards related to safety and reliability that were also discussed at the conference.

Standard | Description
ISO 26262 rev. 3 | Upcoming revision addressing AI/ML integration and predictive maintenance
IEC 61508 rev. 3 | Updates aligning with ISO 26262 for broader functional safety applications
ISO/TR 9968 | Guidelines for new energy vehicles focusing on functional safety principles
ISO/TS 5083 | Safety for automated driving systems, awaiting finalization and publication
ISO/IEC TR 5469 | Functional safety for AI systems, being extended into ISO/IEC TS 22440
ISO/PAS 8926 | Use of pre-existing software
JA 1020 | Recommendations for the Rust programming language in safety-related systems, first draft expected in Q4/2024
source: conference presentations

The forum underscored both the complexity and the importance of ongoing dialogue among industry experts. As functional safety evolves with automotive tech, it’s essential to stay proactive in adapting our standards to prioritize safety. But with each new strategy and regulation, there’s a growing concern about adding unnecessary complexity. While these frameworks aim to enhance safety, there’s a risk that, in trying to cover every potential issue, we may be creating systems that are harder to understand, implement, and monitor effectively. Are we genuinely addressing the key safety challenges, or are we layering on processes that make it harder to see the core issues? This added complexity could ultimately dilute our focus, making it more challenging to maintain the practical, day-to-day safety measures that matter most. Balancing innovation with simplicity will be crucial to ensure that our safety practices remain effective and manageable.

Predictive Maintenance and Unified Safety

At the conference, predictive maintenance, recently formalized in TR 9839 (August 2023), emerged as a key focus area. This approach utilizes data analytics and condition monitoring to anticipate faults, allowing automotive systems to proactively address potential issues before they compromise safety. Yet, predictive maintenance is only one part of a broader shift toward dependability engineering, where functional safety (FuSa), Safety of the Intended Functionality (SOTIF), and cybersecurity are converging into a unified framework. This integrated approach acknowledges that with the rise of connected vehicles, the line between functional and cyber-related failures is increasingly blurred. Dependability engineering thus aims to address both accidental failures and malicious threats in tandem, enhancing reliability while safeguarding against cyber risks. As this field evolves, we expect shared standards to emerge, enabling a streamlined, resilient approach to safety and security that meets the demands of modern vehicle technology.
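
As a purely illustrative sketch of the condition-monitoring idea behind predictive maintenance (not taken from TR 9839 or any vendor implementation; the signal, thresholds, and horizons below are our own assumptions), the snippet fits a simple trend to a monitored health indicator and requests maintenance before a failure limit is projected to be reached.

```python
# Minimal condition-monitoring sketch: estimate the linear degradation trend of a
# health indicator and flag a maintenance request before it crosses a failure limit.
# Purely illustrative; limit, horizon, and signal are assumptions, not from any standard.

from statistics import mean

FAILURE_LIMIT = 100.0   # indicator value at which the component is considered failed (assumed)
WARNING_HORIZON = 50    # remaining samples below which maintenance is requested (assumed)

def remaining_useful_life(samples: list[float]) -> float | None:
    """Estimate how many samples remain until FAILURE_LIMIT, using a least-squares line."""
    n = len(samples)
    if n < 2:
        return None
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
            sum((x - x_bar) ** 2 for x in xs)
    if slope <= 0:          # no degradation trend detected
        return None
    return (FAILURE_LIMIT - samples[-1]) / slope

def maintenance_required(samples: list[float]) -> bool:
    rul = remaining_useful_life(samples)
    return rul is not None and rul < WARNING_HORIZON

# Example: a slowly drifting health indicator (e.g., a temperature or leakage current)
history = [70 + 0.4 * i for i in range(60)]
print(maintenance_required(history))    # True once the projected margin shrinks
```

The point of the sketch is the shift in mindset it illustrates: the system acts on a projected fault, before any safety goal is violated, rather than reacting to a detected failure.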

AI Safety in Autonomous Driving

A significant shift is underway toward formalizing AI safety in automated driving, with the anticipated release of the ISO/PAS 8800 standard in the latter half of 2024. This standard will provide comprehensive guidance for managing AI safety across an automotive project’s entire lifecycle, not limited solely to data handling. ISO/IEC TR 5469, already applied in some Driver Monitoring Systems (DMS) with supervisory components, further underscores the importance of robust, end-to-end safety oversight as vehicles become increasingly autonomous.

On the regulatory horizon, the EU’s Automated Driving Systems (ADS) Act is expected to establish safety standards for vehicle design, testing, and real-world performance criteria, while UL 4600 offers frameworks to support detailed safety cases across key areas of autonomous driving. These standards collectively raise critical questions about what it means for autonomous systems to be “safe enough.”

For those who couldn’t join us at the conference but are interested in diving deeper into these topics, we highly recommend Philip Koopman’s “How Safe Is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety”. The book delves into the nuances of benchmarking safety, comparing AI to human drivers under various conditions, and determining acceptable levels of risk. Koopman’s insights provide essential context for understanding the standards that will shape the future of autonomous vehicle safety.

source: amazon.com

End-to-End Safety

At the conference, the RSB team directly asked about the safety assurances for GPUs and NPUs in AI-driven automotive systems, but nobody addressed it openly. In private, hardware vendors admitted they couldn’t fully guarantee the safety of these processing units. This raises a critical issue: without a clear safety commitment from hardware providers, how can we ensure true end-to-end safety?

In our view, this uncertainty demands greater accountability from platform providers. Documenting hardware limitations in the safety case is essential, as only with transparency about hardware reliability can we build genuinely safe systems. Until this gap is addressed, the industry faces an unresolved risk that, in our opinion, needs far more attention.

Safety of Software for Perception Systems

Attending this forum, we found the discussions surrounding Safety of Software for Perception Systems particularly enlightening. One critical point raised was that no vendor currently supports mixed criticality within a single virtual machine (VM). This limitation means that any Quality Management (QM) software must be isolated within a separate gateway to ensure safety compliance. The diversity in approaches among hypervisor vendors presents both challenges and opportunities; we can glean valuable insights from their methodologies and integrate them into our platform strategies. However, legacy software and open-source solutions remain significant concerns due to their lack of qualification in many cases. This gap highlights an urgent need for robust frameworks that can ensure these components meet necessary safety standards.

Moreover, this topic closely ties into Safety of Intended Functionality (SOTIF) as outlined in ISO 21448, which emphasizes understanding how software behaves under various conditions. It is crucial that we not only develop perception systems that are functionally safe but also ensure they operate reliably within their intended environments.

For those interested in exploring these concepts further, we recommend reviewing some insightful resources:

While advancements are being made in integrating various software components safely within automotive systems, we must remain vigilant about ensuring proper segregation between different criticality levels. The current requirement for QM software to reside in separate gateways not only complicates system architecture but also raises questions about efficiency and resource utilization. As we strive for innovation in perception systems under SOTIF guidelines, it is essential that we do not overlook these foundational challenges. Balancing innovation with rigorous safety evaluations will be key as we navigate this complex terrain moving forward.

Open Source in Automotive Safety

Can open-source software truly drive the future of automotive safety? This intriguing question set the tone for one of the forum’s most thought-provoking discussions, centered on the growing role of OSS in critical vehicle systems. While OSS offers a faster time-to-market, cost efficiencies, and access to a broad developer community, it also brings new questions around liability, safety standards, and certification. The conversation underscored OSS’s undeniable appeal—along with a reminder that adopting it in a safety-critical context like automotive comes with unique responsibilities and challenges.

  • Accelerated Development with OSS: OSS can speed up development significantly by providing pre-built components backed by community support. Numerous studies and reports highlight how OSS accelerates time-to-market, reducing the need for extensive custom coding and allowing developers to leverage existing, thoroughly vetted solutions.
  • Reliability from a Global Community: OSS projects such as Linux are refined through constant community feedback and testing, reinforcing their robustness. With contributions and revisions from developers worldwide, OSS benefits from an extensive review process and broad exposure to varied use cases.
  • Broad Talent Pool for Development: The OSS community creates a valuable talent pool for automotive companies, offering expertise and scalability that would be challenging to build internally. This wide network of developers familiar with OSS platforms proves advantageous for recruitment and expanding project teams.
  • Clarifying Responsibility and Liability: Unlike proprietary software, accountability in OSS often spans vendors, OEMs, and integrators. For instance, Red Hat supports its OSS solutions commercially, clarifying liability terms for users. Structured Development Interface Agreements (DIAs) help to define roles and manage risks more effectively in OSS applications.
  • Addressing Safety Standards Gaps: Integrating OSS in compliance with automotive safety standards, such as ISO 26262, can be challenging since these standards are requirement-driven and not inherently code-based. While there’s no universal safety standard for OSS, the V-model can often be adapted with customized approaches to fit code-driven projects.
  • Quality KPIs: OSS quality is often measured by Key Performance Indicators (KPIs) focused on community activity, review rigor, and code maturity. Software bill of materials (SBOM) standards from the Linux Foundation help ensure traceability and quality.
  • Ensuring ASIL Compliance: OSS components vary in their Automotive Safety Integrity Level (ASIL) ratings. While Zephyr RTOS targets compliance with ASIL D standards, Red Hat Linux supports up to ASIL B, making it essential to align safety requirements with specific applications and use contexts.
  • Hardware Dependency: Certification for OSS often depends on the target hardware to ensure safety compliance. Safety-related software often undergoes re-validation for each new hardware configuration, which is standard practice in safety-critical systems.
  • Considerations for Mass Production: OSS demonstrates potential for mass production within automotive safety systems. Established OSS projects, like Linux, have been adapted to meet safety and scalability needs, proving feasible for broader deployment across the industry.
  • Open-Source AI in Automotive Safety: Open-source AI frameworks show promise in the automotive sector, though challenges remain with safety certification, data privacy, and model verification. New AI-specific safety guidelines are emerging, aiming to better support OSS adoption in autonomous driving systems.

Open-source software is undoubtedly shaping the future of automotive safety, offering both immense potential and complex challenges. With the right balance of innovation and accountability, OSS could revolutionize safety standards and speed up the development of autonomous driving. The journey ahead is exciting, and the industry is watching closely.

AI vs. Standard Compilers

For us, a key highlight of the forum was Dr. Oscar Slotosch’s presentation on “Compilers for AI vs. Standard Software Compilers.” His talk offered a thought-provoking look at the unique demands and evolving landscape of AI compilers compared to their traditional counterparts. Here’s a breakdown of the key points from his session that are especially relevant to our field:

  • Purpose & Standards: Traditional compilers for languages like C/C++ adhere to established ISO standards and come with qualification suites that ensure consistent, reliable outputs. AI compilers, however, lack these frameworks. Instead, they are designed to translate high-level models—such as matrix operations and neural networks—into optimized code for GPUs, TPUs, or ASICs using tools like Google’s TVM. This variance from traditional practices highlights a pressing need for AI compiler standards.
  • Compilation & Optimization: Standard compilers convert high-level language directly into machine code while focusing on efficiency and speed. In contrast, AI compilers operate at a higher abstraction level, where they optimize tensor and matrix operations to support efficient neural network execution, often narrowing 64-bit floats to 32-bit for enhanced performance on GPUs, especially those powered by CUDA. This down-casting poses safety and accuracy questions that the field must address (a brief numerical sketch follows this list).
  • Verification & Qualification: In traditional software, compiler verification relies on ISO 26262 compliance, providing a clear framework for qualification. With AI compilers, however, tools like Validator.com offer some degree of application-specific compliance checks, but there is no universal standard for AI compiler qualification. This gap suggests the need for new AI-specific safety guidelines as AI applications grow more complex and data-driven.
  • Input Complexity: Traditional compilers are built to handle large volumes of code with minimal data, while AI compilers process much smaller codebases but rely on massive datasets for model training. This shift places unique demands on validation practices, further emphasizing the need for different qualification approaches in AI development.
  • Additional Considerations: Tool qualification for AI development within frameworks like ISO 26262 remains feasible, though modifications are required to account for AI’s data-centric nature. Additionally, achieving safety and quality benchmarks may increasingly depend on “proven-in-use” argumentation—leveraging open-source validation, data insights, and innovative methods—to ensure safety compliance even as AI compilers continue to evolve.
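
To make the precision question concrete, here is a small, self-contained numerical sketch of our own (not taken from Dr. Slotosch’s material): the same matrix product is evaluated in 64-bit and 32-bit floating point, and the resulting drift is measured. In a safety-related perception pipeline, such drift would need to be bounded and argued in the safety case.

```python
# Illustration of precision loss when the same computation runs in float64 vs float32.
# Our own toy example; real AI compilers apply many more transformations (fusion,
# tiling, quantization) whose numerical effect must likewise be analysed.

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))

ref = a.astype(np.float64) @ b.astype(np.float64)   # "golden" high-precision result
low = a.astype(np.float32) @ b.astype(np.float32)   # what a GPU-oriented compiler may emit

abs_err = np.abs(ref - low.astype(np.float64)).max()
rel_err = abs_err / np.abs(ref).max()
print(f"max absolute error: {abs_err:.3e}, max relative error: {rel_err:.3e}")
```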

At RSB, we recognize the critical need for rigorous safety protocols and foresee a future where AI-specific guidelines will address these gaps. Our takeaway from Dr. Slotosch’s session is that the industry may need to adapt current safety standards or even develop new ones to address AI’s data-driven demands. For AI in safety-critical applications, achieving compliance may depend on a blend of innovative, proven-in-use arguments, data validation methods, and potentially, contributions from open-source communities to ensure comprehensive safety assurance.

PS: For those interested in a deeper dive into this topic, we highly recommend Professor Slotosch’s podcast on Spotify, Validas Tools & Library Qualification, where he explores these advancements and challenges in compiler technology.

source: open.spotify.com

IEEE P2285: Streamlining Safety Data

Imagine a world where safety-critical data, from fault modes to failure rates, could seamlessly transfer across different tools, industries, and supply chain levels. This is precisely the ambition behind IEEE’s P2285 standard, a transformative effort to standardize data formats for functional safety (FuSa) information, facilitating interoperability across the dependability lifecycle. By unifying how essential safety data—such as fault modes, risk assessments, and failure rates—is structured and shared, P2285 is set to promote compatibility, traceability, and compliance with established safety standards like ISO 26262 and IEC 61508.

With the rise of autonomous systems, IEEE P2285 seeks to simplify safety and reliability data sharing across systems and supply chains, cutting time and costs by creating a universal, tool-agnostic format. Aligned with IEEE Std 2851-2023, the P2285 standard provides best practices for platform-independent data exchange in key areas such as soft error testing, base failure rates, and RAS architecture. This consistent framework enables engineers to maintain aligned safety practices across critical applications.
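
The standard’s actual schema was not presented at the forum, so purely to illustrate the kind of tool-agnostic exchange P2285 targets, the sketch below serializes a failure-rate and fault-mode record as JSON from Python. Every field name here is our own assumption, not P2285’s schema.

```python
# Hypothetical, illustrative record of the kind of safety data IEEE P2285 aims to make
# exchangeable between tools; field names are our own assumptions, NOT the standard's schema.

import json

safety_record = {
    "component": "brake-control-ecu",
    "standard_context": ["ISO 26262", "IEC 61508"],
    "base_failure_rate_fit": 120.0,          # failures in time (per 1e9 device-hours)
    "fault_modes": [
        {"name": "stuck-at", "distribution": 0.4, "diagnostic_coverage": 0.99},
        {"name": "transient-bit-flip", "distribution": 0.6, "diagnostic_coverage": 0.90},
    ],
    "source_tool": "vendor-fmeda-tool",
    "schema_version": "illustrative-0.1",
}

# A tool-agnostic text format lets the next tool in the supply chain ingest the data
# without manual re-entry, which is the interoperability goal described above.
print(json.dumps(safety_record, indent=2))
```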

Ultimately, P2285 represents a forward-thinking vision for safety-critical industries, positioning safety engineering as a more collaborative, streamlined process. When data flows freely and predictably, engineers can focus more on innovation and less on integration, a change that promises to shape the future of safe, reliable technology.

Safety AI Technology

Last but not least, we discussed Predictive Maintenance (PdM) — a topic gaining traction due to its potential to enhance safety and reliability within functional safety systems. The insights shared about ISO PAS 8800 highlighted how advances in sensor technology and computational power have revolutionized Prognostics and Health Management (PHM). These innovations enable real-time data monitoring and accurate failure predictions.

The IEEE Working Group emphasized that PdM should be viewed as a critical safety mechanism against degrading faults. Key hardware architectural metrics such as the single-point fault metric (SPFM) are influenced by PdM strategies. Moreover, defining “Failure Mode Coverage” based on statistical parameters can significantly enhance our understanding of PdM effectiveness.
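
As a rough numerical sketch of why failure-mode coverage matters for this metric, the snippet below evaluates the SPFM in a simplified form (one minus the uncovered share of the safety-related failure rate) and shows how adding coverage for a degrading fault mode, for example through a PdM mechanism, improves it. The failure-rate numbers are invented for illustration and are not from the presentations.

```python
# Rough illustration of how failure-mode coverage feeds the single-point fault metric
# (SPFM), here in the simplified form 1 - (uncovered failure rate / total failure rate).
# Failure rates (in FIT) and coverage values below are invented for illustration only.

def spfm(fault_modes):
    """fault_modes: list of (fit_rate, coverage) tuples for safety-related fault modes."""
    total = sum(fit for fit, _ in fault_modes)
    uncovered = sum(fit * (1.0 - cov) for fit, cov in fault_modes)
    return 1.0 - uncovered / total

baseline = [(100.0, 0.90), (50.0, 0.60), (20.0, 0.0)]   # last mode: degrading fault, no coverage
with_pdm = [(100.0, 0.90), (50.0, 0.60), (20.0, 0.90)]  # PdM adds coverage for that fault

print(f"SPFM without PdM: {spfm(baseline):.3f}")   # ~0.706
print(f"SPFM with PdM:    {spfm(with_pdm):.3f}")   # ~0.812
```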

As we embrace these advanced predictive techniques, we must integrate them within our existing frameworks while ensuring compliance with established safety standards. The proactive maintenance strategies supported by PdM promise reduced downtime while contributing significantly to overall system availability — an imperative as we advance toward more complex automotive systems. Balancing innovation with rigorous safety evaluations will be paramount as we navigate this evolving landscape effectively.

RSB: Staying Ahead

Attending events like the 2nd Annual Automotive Functional Safety Forum gives us the opportunity to explore the latest advancements and engage in discussions that shape the future of the automotive sector. It inspires us to remain at the forefront of these dynamic changes, embrace new challenges, and redefine safety standards.

At RSB Automotive Consulting, we believe that where innovation is born, that’s where we belong. If you share our passion for automotive innovation—whether you’re a specialist looking for new projects or a company seeking talented pioneers—we invite you to explore our Functional Safety Services (Functional Safety – RSB Automotive Consulting) and reach out. Together, let’s build a safe and forward-thinking automotive industry.