The EU AI Act's provisions for high-risk AI systems become applicable on August 2, 2026 — 4 months away. Articles 9-15 and Annex IV mandate extensive documentation, risk management, and traceability for high-risk AI systems. Fines reach up to €35 million or 7% of global annual turnover for non-compliance.
Rivet already has the infrastructure for this:

- STPA schema covers safety analysis (losses, hazards, UCAs, loss scenarios, control structure)
- ASPICE schema covers the V-model traceability chain (stakeholder → system → software → verification)
- Cybersecurity schema covers ISO 21434 / ASPICE SEC.1-4
- `extends` and bridge schemas (Spec-driven development: schema packages, bridges, guide API, CRUD CLI, #93) enable layering

**What's missing:** artifact types that directly map to the EU AI Act's Annex IV documentation requirements, plus traceability rules that enforce the Act's specific obligations.
Strategic value: No open-source traceability tool targets EU AI Act compliance for AI-enabled safety-critical systems. DOORS, Polarion, and codebeamer require expensive licenses and don't natively understand the Act's structure. Rivet can be the git-native, schema-validated compliance backbone — especially for teams already using STPA/ASPICE schemas.
### Articles 9-15: Obligation-specific artifact types

| Article | Obligation | Artifact type | Notes |
|---|---|---|---|
| Art. 9 | Risk management system | `risk-management-process` | Documents the continuous, iterative risk management process |
| Art. 9(2a) | Known risk identification | `risk-assessment` | Individual risk entries with likelihood/severity |
| Art. 9(2b) | Foreseeable misuse risks | `misuse-risk` | Risks from reasonably foreseeable misuse |
| Art. 9(2c) | Post-market emerging risks | `emerging-risk` (links to `post-market-plan`) | Risks discovered after deployment |
| Art. 9(4) | Risk mitigation measures | `risk-mitigation` | Targeted measures for identified risks |
| Art. 10 | Data governance | `data-governance-record` | Training/validation/test data provenance and quality |
| Art. 11 | Technical documentation | Schema itself (Annex IV coverage) | The schema enforces Annex IV completeness |
| Art. 12 | Record-keeping / logging | `logging-specification` | What events are logged, retention, access |
| Art. 13 | Transparency | `transparency-record` | Information provided to deployers, user-facing docs |
| Art. 14 | Human oversight | `human-oversight-measure` | Specific human intervention capabilities |
| Art. 15(1) | Accuracy | `accuracy-evaluation` | Accuracy metrics and methodology |
| Art. 15(2) | Robustness | `robustness-evaluation` | Resilience testing, adversarial evaluation |
| Art. 15(3) | Cybersecurity | `cybersecurity-evaluation` | Security testing, vulnerability assessment |
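For concreteness, a pair of artifact files instantiating two of these types might look like the following. The file layout and every field other than the artifact types and the `mitigates`/`leads-to` link names from the mapping above are illustrative assumptions, not settled design:

```yaml
# artifacts/risks/RA-001.yaml — hypothetical risk-assessment artifact
id: RA-001
type: risk-assessment
title: Misclassification of pedestrians in low-light conditions
likelihood: medium      # illustrative fields for the "likelihood/severity" entries
severity: high
links:
  - type: leads-to
    target: SYS-001     # an ai-system-description artifact
---
# artifacts/risks/RM-001.yaml — hypothetical risk-mitigation artifact
id: RM-001
type: risk-mitigation
title: Augment training data and add runtime confidence gating
links:
  - type: mitigates
    target: RA-001      # would satisfy the risk-has-mitigation rule (Art. 9(4))
```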
### Traceability rules

```yaml
traceability-rules:
  # Every AI system must have a complete risk management process
  - name: system-has-risk-management
    description: Every AI system description must be covered by a risk management process (Art. 9)
    source-type: ai-system-description
    required-backlink: manages-risk-for
    from-types: [risk-management-process]
    severity: error

  # Every identified risk must have a mitigation measure
  - name: risk-has-mitigation
    description: Every risk assessment must have at least one mitigation measure (Art. 9(4))
    source-type: risk-assessment
    required-backlink: mitigates
    from-types: [risk-mitigation]
    severity: error

  # Every AI system must have data governance documentation
  - name: system-has-data-governance
    description: Training/validation data must be governed (Art. 10)
    source-type: design-specification
    required-backlink: governs
    from-types: [data-governance-record]
    severity: error

  # Every AI system must have monitoring measures
  - name: system-has-monitoring
    description: AI system must have monitoring and control documentation (Art. 12)
    source-type: ai-system-description
    required-backlink: monitors
    from-types: [monitoring-measure]
    severity: error

  # Every AI system must have human oversight documentation
  - name: system-has-human-oversight
    description: Human oversight measures must be documented (Art. 14)
    source-type: ai-system-description
    required-backlink: overseen-by
    from-types: [human-oversight-measure]
    severity: error

  # Every AI system must have accuracy, robustness, cybersecurity evaluations
  - name: system-has-accuracy
    description: Accuracy metrics must be documented (Art. 15(1))
    source-type: design-specification
    required-backlink: evaluates
    from-types: [accuracy-evaluation, performance-evaluation]
    severity: error

  - name: system-has-robustness
    description: Robustness evaluation must be documented (Art. 15(2))
    source-type: design-specification
    required-backlink: evaluates
    from-types: [robustness-evaluation, performance-evaluation]
    severity: warning

  # Every system must reference applicable standards
  - name: system-has-standards
    description: Applicable harmonised standards must be listed (Annex IV §7)
    source-type: ai-system-description
    required-backlink: applied-to
    from-types: [standards-reference]
    severity: warning

  # Every system must have a conformity declaration
  - name: system-has-declaration
    description: EU declaration of conformity required (Annex IV §8)
    source-type: ai-system-description
    required-backlink: declares
    from-types: [conformity-declaration]
    severity: error

  # Every system must have post-market monitoring
  - name: system-has-post-market
    description: Post-market monitoring plan required (Art. 72, Annex IV §9)
    source-type: ai-system-description
    required-backlink: monitors-post-market
    from-types: [post-market-plan]
    severity: error

  # Risk management must cover foreseeable misuse
  - name: risk-covers-misuse
    description: Risk management should identify foreseeable misuse risks (Art. 9(2b))
    source-type: risk-management-process
    required-backlink: identified-by
    from-types: [misuse-risk]
    severity: warning

  # Transparency records must exist
  - name: system-has-transparency
    description: Transparency information for deployers must be documented (Art. 13)
    source-type: ai-system-description
    required-backlink: transparency-for
    from-types: [transparency-record]
    severity: error
```
### Bridge schemas for STPA/ASPICE composition
The EU AI Act schema should compose with existing safety schemas:
```yaml
# Bridge: eu-ai-act ↔ stpa
# Maps STPA safety analysis to EU AI Act risk management
schema:
  name: eu-ai-act-stpa-bridge
  version: "0.1.0"
  extends: [eu-ai-act, stpa]

link-types:
  - name: risk-identified-by-stpa
    inverse: stpa-identifies-risk
    description: Risk assessment derived from STPA hazard/loss analysis
    source-types: [risk-assessment]
    target-types: [hazard, sub-hazard, loss]

  - name: mitigation-from-constraint
    inverse: constraint-provides-mitigation
    description: Risk mitigation derived from STPA system constraint
    source-types: [risk-mitigation]
    target-types: [system-constraint, controller-constraint]

traceability-rules:
  - name: stpa-hazards-map-to-risks
    description: STPA hazards should be linked to EU AI Act risk assessments
    source-type: hazard
    required-backlink: risk-identified-by-stpa
    from-types: [risk-assessment]
    severity: warning
```
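The ASPICE bridge planned for Phase 2 could follow the same pattern. The sketch below is an assumption about its shape: the link-type names and the ASPICE-side target types (`test-case`, `test-result`) are illustrative guesses, not part of the proposal:

```yaml
# Bridge: eu-ai-act ↔ aspice (sketch)
# Maps ASPICE verification evidence to EU AI Act performance evaluation
schema:
  name: eu-ai-act-aspice-bridge
  version: "0.1.0"
  extends: [eu-ai-act, aspice]

link-types:
  - name: evaluation-verified-by
    inverse: verifies-evaluation
    description: Performance evaluation backed by an ASPICE verification result
    source-types: [performance-evaluation, accuracy-evaluation, robustness-evaluation]
    target-types: [test-case, test-result]   # assumed ASPICE-schema artifact types
```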
## EU AI Act Requirements Mapping

### Annex IV: Technical Documentation (9 mandatory sections)

The schema maps each Annex IV section to one or more artifact types:

| Artifact type | Primary link |
|---|---|
| `ai-system-description` | — |
| `design-specification` | `satisfies → ai-system-description` |
| `data-governance-record` | `governs → design-specification` |
| `third-party-component` | `used-by → design-specification` |
| `monitoring-measure` | `monitors → ai-system-description` |
| `performance-evaluation` | `evaluates → design-specification` |
| `risk-assessment` | `leads-to → ai-system-description` |
| `risk-mitigation` | `mitigates → risk-assessment` |
| `lifecycle-change` | `modifies → design-specification` |
| `standards-reference` | `applied-to → ai-system-description` |
| `conformity-declaration` | `declares → ai-system-description` |
| `post-market-plan` | `monitors-post-market → ai-system-description` |
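A fragment of the core schema declaring the first rows of this mapping might look like the following; the top-level layout mirrors the bridge example above, but the `artifact-types` key and the `description` wording are assumptions about the schema format, not settled design:

```yaml
# schemas/eu-ai-act.yaml (sketch)
schema:
  name: eu-ai-act
  version: "0.1.0"

artifact-types:
  - name: ai-system-description
    description: General description of the AI system (Annex IV §1)
  - name: design-specification
    description: Detailed design and development documentation

link-types:
  - name: satisfies
    source-types: [design-specification]
    target-types: [ai-system-description]
```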
## Phases

### Phase 1: Core schema + Annex IV artifact types

- `schemas/eu-ai-act.yaml` with all artifact types from the mapping above
- `rivet init --schema eu-ai-act` with starter artifacts
- `examples/eu-ai-act/`

### Phase 2: Bridge schemas

- `eu-ai-act-stpa-bridge.yaml` — map STPA analysis to risk management
- `eu-ai-act-aspice-bridge.yaml` — map ASPICE verification to performance evaluation
- `eu-ai-act-cybersecurity-bridge.yaml` — map ISO 21434 to Art. 15(3) cybersecurity

### Phase 3: Compliance dashboard views

### Phase 4: Export for notified bodies

- `rivet export --format eu-ai-act-report` — structured report following Annex IV sections
- `rivet export --format eu-ai-act-dossier` — full compliance dossier with all linked artifacts

## References

- `stpa.yaml`, `aspice.yaml`, `cybersecurity.yaml` — composition targets