
Pitloom


The Pippin Pitloom

Automated transparency, woven from the ground up.

Under development -- NOT FOR PRODUCTION

Overview

Pitloom automates the generation of SPDX 3-compliant SBOMs for Python projects, documenting the composition and provenance of software systems. By reading metadata directly from Python packages and AI models (GGUF, ONNX, PyTorch, Safetensors), it creates standardized SPDX 3 JSON artifacts. It also offers native Hatchling integration, allowing users to hook into the build process to generate SBOMs automatically.

Features

  • SPDX 3 support: Generates SBOMs in SPDX 3 JSON-LD format
  • Multi-backend metadata extraction: Reads project metadata from pyproject.toml (PEP 621 [project]), Poetry ([tool.poetry]), and setuptools (setup.cfg / setup.py)
  • Dependency tracking: Automatically includes project dependencies in the SBOM
  • AI/ML model metadata: Extracts metadata from model files (GGUF, ONNX, PyTorch, Safetensors) for SPDX AI profile
  • License detection: Detects SPDX license identifiers from project metadata and license text, using LicenseID
  • Metadata provenance: Tracks the source of each metadata field for transparency and auditability
  • Standards compliant: Follows SPDX 3 specification and modern Python packaging standards

Installation

Install Pitloom using pip:

pip install pitloom

For a development install (lint + test) with pip >= 25:

pip install --group dev -e .

Or with uv:

uv sync --group dev

Optional model format support

Install extras to enable metadata extraction from model files:

pip install -e ".[aimodel]"       # all supported local AI model formats
pip install -e ".[huggingface]"   # HuggingFace Hub model metadata

Or install individual local formats:

pip install -e ".[fasttext]"      # fastText models
pip install -e ".[gguf]"          # GGUF models
pip install -e ".[onnx]"          # ONNX models
pip install -e ".[safetensors]"   # Safetensors models

Usage

Command line

Project SBOM

Generate an SBOM for a Python project in the current directory:

loom .

Specify output file:

loom /path/to/project -o sbom.spdx3.json

AI model SBOM

Generate an SBOM for a single AI model file, without a Python project directory. The model is treated as an ai_AIPackage root element. The output file is written to the current working directory:

loom -m path/to/model.safetensors
loom -m path/to/model.onnx
loom -m path/to/model.gguf

Supported local formats: GGUF, ONNX, Safetensors, PyTorch (.pt/.pth), Keras, HDF5, NumPy, fastText.

HuggingFace model SBOM

Pass a HuggingFace URL or model ID directly - no local file required. Pitloom fetches metadata from the Hub (model card, config.json, tokenizer_config.json, and generation_config.json) and produces an enriched ai_AIPackage SBOM with architecture, hyperparameters, license, language, and linked training datasets.

# Full URL
loom -m https://huggingface.co/mistralai/Mistral-7B-v0.1

# URL with tree path (stripped automatically)
loom -m https://huggingface.co/mistralai/Mistral-7B-v0.1/tree/main

# Bare model ID
loom -m Qwen/Qwen3-235B-A22B

Requires huggingface_hub:

pip install pitloom[huggingface]

Common model SBOM options

Specify the output file explicitly:

loom -m model.safetensors -o my-model.spdx3.json
loom -m mistralai/Mistral-7B-v0.1 -o mistral.spdx3.json

Pretty-print the output:

loom -m model.gguf --pretty
loom -m Qwen/Qwen3-235B-A22B --pretty

Set creator metadata:

loom -m model.safetensors --creator-name "Alice" --creator-email "alice@example.com"

Show help:

loom -h

Python API

The SBOM generator can be used programmatically:

from pathlib import Path
from pitloom.core.creation import CreationMetadata
from pitloom.assemble import generate_sbom, generate_ai_model_sbom

# Generate SBOM for a Python project
generate_sbom(
    project_dir=Path("/path/to/project"),
    output_path=Path("sbom.spdx3.json"),
    creation_info=CreationMetadata(
        creator_name="Your Name",
        creator_email="your@example.com",
    ),
    pretty=False,
)

# Generate an SBOM for a standalone AI model file
generate_ai_model_sbom(
    model_path=Path("model.safetensors"),
    output_path=Path("model.spdx3.json"),
    creation_info=CreationMetadata(creator_name="Your Name"),
    pretty=True,
)

# Generate an SBOM from a HuggingFace model repository (no local file needed)
from pitloom.assemble import generate_huggingface_sbom

generate_huggingface_sbom(
    model_source="mistralai/Mistral-7B-v0.1",  # or full URL
    output_path=Path("mistral.spdx3.json"),
    creation_info=CreationMetadata(creator_name="Your Name"),
    pretty=True,
)

Hatchling build hook

Pitloom can embed an SBOM automatically into every wheel you build by acting as a Hatchling build hook. The SBOM is placed at .dist-info/sboms/sbom.spdx3.json inside the wheel, following PEP 770.

Adding Pitloom to your build requirements

Add loom to your project's build requirements:

[build-system]
requires = ["hatchling", "pitloom"]
build-backend = "hatchling.build"

Registering the hook

Enable the hook by adding a section to your pyproject.toml:

[tool.hatch.build.hooks.pitloom]
# All fields are optional. Defaults are shown.
enabled = true
sbom-basename = "package-name"      # name part only (no extension); default "sbom"
creator-name = "SBOM Creator"       # defaults to "Pitloom"
creator-email = "mail@example.com"  # defaults to None
creation-datetime = "2026-04-01T00:00:00Z"  # Date and time in ISO 8601 UTC format
fragments = []  # extra SPDX fragment paths (relative to project root)

The full SBOM filename is {sbom-basename}.spdx3.json - e.g., the default produces sbom.spdx3.json. Setting sbom-basename = "mypackage-1.0" would produce mypackage-1.0.spdx3.json.

That is all. Running hatch build or python -m build will now generate and embed the SBOM automatically - no extra commands needed.

Merging AI/ML fragments

For AI-powered software, you can track model and dataset provenance during training using pitloom.loom, then include those fragments in the wheel SBOM:

[tool.hatch.build.hooks.pitloom]
fragments = [
    "fragments/train_run.spdx3.json",
    "fragments/eval_run.spdx3.json",
]

Fragments listed under [tool.hatch.build.hooks.pitloom] are merged together with any fragments already listed under [tool.pitloom].
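For example, fragments can be declared in both places and end up in one merged wheel SBOM (a hypothetical layout; the file paths are illustrative):

```toml
[tool.pitloom]
fragments = ["fragments/data_prep.spdx3.json"]

[tool.hatch.build.hooks.pitloom]
fragments = ["fragments/train_run.spdx3.json"]
```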

Resulting wheel structure

mypackage-1.0-py3-none-any.whl
└── mypackage-1.0.dist-info/
    └── sboms/
        └── sbom.spdx3.json   <- PEP 770
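After a build, you can confirm the SBOM landed in the expected PEP 770 location using Python's standard zipfile module (a minimal sketch; the wheel filename below is illustrative):

```python
import zipfile

def find_sboms(wheel_path: str) -> list[str]:
    """Return paths of SBOM files stored under .dist-info/sboms/ in a wheel.

    Wheels are plain zip archives, so no packaging tooling is needed.
    """
    with zipfile.ZipFile(wheel_path) as whl:
        return [
            name for name in whl.namelist()
            if ".dist-info/sboms/" in name and name.endswith(".json")
        ]

# e.g. find_sboms("dist/mypackage-1.0-py3-none-any.whl")
```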

Python tracking decorator

Developers can easily annotate scripts or Jupyter notebooks to generate external SBOM fragments that Pitloom will merge during the build process:

from pitloom import loom

# Use as a function decorator...
@loom.shoot(output_file="fragments/sentiment_model.json")
def train_model():
    loom.set_model("sentiment-clf")
    loom.add_dataset("imdb-reviews", dataset_type="text")
    # ... training logic ...

# ...or use as a context manager
with loom.shoot(output_file="fragments/sentiment_model.json"):
    loom.set_model("sentiment-clf")
    loom.add_dataset("imdb-reviews", dataset_type="text")

Example

Generate an SBOM for the sentimentdemo project:

# Clone the sentimentdemo repository
git clone https://github.com/bact/sentimentdemo.git

# Generate SBOM
loom sentimentdemo

The generated SBOM will include:

  • Project metadata (name, version, description)
  • Project dependencies with version constraints
  • SPDX relationships between components
  • Creator and creation timestamp information
  • Metadata provenance tracking for transparency

Metadata provenance

Pitloom tracks the source of each metadata field in the SBOM using the SPDX 3 comment attribute. This enables answering questions like:

"Why does the SBOM say the concluded license is MIT?"

"Where did the version number come from?"

Provenance examples

For a package with metadata extracted from various sources:

{
  "type": "software_Package",
  "name": "mypackage",
  "software_packageVersion": "1.2.3",
  "comment": "Metadata provenance: name: Source: pyproject.toml | Field: project.name; version: Source: src/mypackage/__about__.py | Method: dynamic_extraction; dependencies: Source: pyproject.toml | Field: project.dependencies"
}

The provenance information shows:

  • Package name: Extracted from pyproject.toml -> project.name
  • Version: Dynamically extracted from src/mypackage/__about__.py
  • Dependencies: Listed in pyproject.toml -> project.dependencies

This transparency is crucial for:

  • Auditability: Understanding where SBOM data comes from
  • Trust: Verifying the accuracy of metadata
  • Machine consumption: Automated tools can parse provenance
  • Human review: Manual inspection of data sources
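Because the provenance string follows a regular "field: Source: ... | Key: value" layout, automated tools can split it back into structured data. A minimal parsing sketch (the exact comment format shown above is treated as illustrative, not a stable API):

```python
def parse_provenance(comment: str) -> dict[str, dict[str, str]]:
    """Parse a Pitloom-style provenance comment into per-field details.

    Assumes the 'field: Source: ... | Key: value' layout from the
    example above; entries are separated by '; '.
    """
    prefix = "Metadata provenance: "
    if not comment.startswith(prefix):
        return {}
    result: dict[str, dict[str, str]] = {}
    for entry in comment[len(prefix):].split("; "):
        # Split off the SBOM field name, then break the details
        # into 'Key: value' pairs separated by ' | '.
        field, _, details = entry.partition(": ")
        result[field] = dict(
            part.split(": ", 1) for part in details.split(" | ")
        )
    return result
```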

Project structure

See docs/implementation/summary.md for the canonical, up-to-date project tree.

Development

Running tests

pytest

Running linter

ruff check src/ tests/

Building the package

pip install build
python -m build

Roadmap

See docs/design/roadmap.md.

References

For more information about implementing AI BOM using SPDX specification, see Karen Bennet, Gopi Krishnan Rajbahadur, Arthit Suriyawongkul, and Kate Stewart, “Implementing AI Bill of Materials (AI BOM) with SPDX 3.0: A Comprehensive Guide to Creating AI and Dataset Bill of Materials”, The Linux Foundation, October 2024.

License

  • Source code: Apache License 2.0.
  • Documentation: Creative Commons Attribution 4.0 International.
  • Test fixture AI models: Individual files are licensed under Apache-2.0, CC0-1.0, or MIT. See tests/fixtures/README.md for details. Note that these are available in the source repository only and are not included in the distribution packages.

Name

A pit loom is a traditional handloom built into a ground-level pit to house its internal mechanisms and the weaver's legs. This "grounded" design provides stability and precision during the weaving process.

We use the loom as a metaphor for the tool's function: it weaves disparate threads of metadata into a cohesive SBOM, creating a transparent, structured "fabric" for the software build.
