This repository demonstrates practical applications of AI Safety Assurance principles, aligned with:
- AMLAS (Assurance of Machine Learning for Autonomous Systems)
- IEEE Systematic Literature Review (2022) on AI Safety Assurance
- ACM Taxonomy of Machine Learning Safety (2022)
- Explainable AI (XAI) using PeBEx and SHAP
- Policy Modeling considerations for AI systems in the context of public policy and national security
- `notebooks/` – Jupyter notebooks demonstrating AMLAS stages, explainability techniques, and the safety checklist
- `data/` – Sample data and models
- `scripts/` – Automation scripts for black-box testing, safety envelopes, and runtime error detection
- `aml_safety_argument_pattern.md` – AMLAS safety argument patterns in markdown
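To illustrate the kind of runtime check the `scripts/` directory covers, here is a minimal sketch of a safety envelope: a set of bounds a prediction must satisfy before it is acted on. The class and threshold names (`SafetyEnvelope`, `min_confidence`, `max_input_drift`) are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of a runtime safety-envelope check.
# Names and thresholds are illustrative, not the repo's actual interface.

from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    """Bounds within which a model output is considered safe to act on."""
    min_confidence: float   # reject low-confidence predictions
    max_input_drift: float  # reject inputs far from the training distribution

    def check(self, confidence: float, drift_score: float) -> bool:
        """Return True when the prediction falls inside the envelope."""
        return (confidence >= self.min_confidence
                and drift_score <= self.max_input_drift)

envelope = SafetyEnvelope(min_confidence=0.8, max_input_drift=0.3)
print(envelope.check(confidence=0.92, drift_score=0.1))  # inside the envelope -> True
print(envelope.check(confidence=0.95, drift_score=0.7))  # drift too high -> False
```

A real deployment would route rejected predictions to a fallback or human review rather than simply returning a boolean.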
This project includes starter notebooks to demonstrate key AI Safety Assurance practices:
- `explainability_runtime.ipynb`: Demonstrates runtime explainability using SHAP and PeBEx methods, aligned with AMLAS Stage 5 (Model Verification) and the ACM ML Safety Taxonomy.
- `taxonomy_checklist_demo.ipynb`: Provides a checklist-based mapping of project safety techniques to the ACM ML Safety Taxonomy.
- `aml_safety_scoping.ipynb`: Template notebook for documenting AMLAS Stages 1–2 scoping and safety requirements.
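The explainability notebook presumably uses the `shap` library directly. As a dependency-free illustration of the underlying idea, the sketch below scores each feature of a toy black-box model by how much the output changes when that feature is reset to a baseline. This is a simple leave-one-feature-out attribution, not SHAP itself, and the model and baseline are made up for the example.

```python
# Black-box attribution sketch in the spirit of the runtime explainability
# notebook. Uses leave-one-feature-out scoring, not the shap library;
# the toy model and baseline are illustrative only.

def predict(features):
    """Toy black-box model: a fixed linear scorer."""
    weights = [0.5, -0.2, 0.9]
    return sum(w * x for w, x in zip(weights, features))

def attribute(features, baseline):
    """Score each feature by the output change when it is reset to baseline."""
    full = predict(features)
    scores = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = baseline[i]
        scores.append(full - predict(perturbed))
    return scores

x = [1.0, 2.0, 3.0]
base = [0.0, 0.0, 0.0]
print([round(s, 6) for s in attribute(x, base)])  # -> [0.5, -0.4, 2.7]
```

For a linear model these scores equal the exact per-feature contributions; SHAP generalizes the same baseline-comparison idea to nonlinear models by averaging over feature coalitions.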
These examples show how current AI Safety Assurance research can be applied to policy- and mission-relevant AI governance needs, including alignment with the NIST AI RMF.
AI Safety Assurance is critical to the responsible adoption of AI within national security and intelligence communities. This demo aligns with needs articulated in the NIST AI RMF and the ODNI Responsible AI framework.
- AMLAS (2021)
- IEEE SLR on AI Safety Assurance (2022)
- ACM ML Safety Taxonomy (2022)
- Latest Explainable AI approaches (2025)
- Policy Modeling with AI (2024)