Metadata-Version: 2.4
Name: usdm4_protocol
Version: 0.4.0
Summary: USDM4 Protocol Import - M11, CPT and Legacy formats
License-Expression: AGPL-3.0-only
Requires-Python: >=3.12
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: usdm4>=0.22.0
Requires-Dist: raw_docx>=0.14.0
Requires-Dist: beautifulsoup4
Requires-Dist: python-dateutil
Provides-Extra: ai
Requires-Dist: anthropic>=0.62.0; extra == "ai"
Requires-Dist: d4k_ms_base>=0.3.0; extra == "ai"
Provides-Extra: ai-gemini
Requires-Dist: google-genai>=1.0.0; extra == "ai-gemini"
Requires-Dist: d4k_ms_base>=0.3.0; extra == "ai-gemini"
Provides-Extra: pdf
Requires-Dist: pymupdf>=1.25.0; extra == "pdf"
Provides-Extra: pdf-docling
Requires-Dist: docling>=2.43.0; extra == "pdf-docling"
Provides-Extra: all
Requires-Dist: anthropic>=0.62.0; extra == "all"
Requires-Dist: google-genai>=1.0.0; extra == "all"
Requires-Dist: d4k_ms_base>=0.3.0; extra == "all"
Requires-Dist: pymupdf>=1.25.0; extra == "all"
Requires-Dist: docling>=2.43.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=8.2.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
Requires-Dist: pytest-mock>=3.14.0; extra == "dev"
Requires-Dist: python-dotenv>=1.0.0; extra == "dev"
Requires-Dist: anyio>=4.9.0; extra == "dev"
Requires-Dist: ruff>=0.12.0; extra == "dev"
Dynamic: license-file

# usdm4_protocol

A unified Python package for importing clinical trial protocol documents into [USDM v4](https://github.com/data4knowledge/usdm4) (Unified Study Definitions Model). Converts protocol documents from three industry formats into a structured, standards-compliant USDM JSON representation.

## Supported Formats

**ICH M11** — The Clinical Electronic Structured Protocol Template defined by the [ICH M11 guideline](https://www.ich.org/). Accepts `.docx` files following the M11 section structure. Optional AI-assisted extraction is available via the Anthropic Claude API.

**TransCelerate CPT** — The Common Protocol Template defined by [TransCelerate BioPharma](https://www.transceleratebiopharmainc.com/). Accepts `.docx` files in CPT format.

**Legacy PDF** — Freeform sponsor protocol documents in PDF format. Uses HTML-based extraction with a pluggable PDF converter backend (docling or PyMuPDF).

## Installation

The package requires Python 3.12 or later.

```bash
# Core package (M11 and CPT DOCX import)
pip install usdm4_protocol

# With lightweight PDF support (~20 MB, suitable for Docker/Fly.io)
pip install "usdm4_protocol[pdf]"

# With high-accuracy PDF support via docling (large, includes ML models)
pip install "usdm4_protocol[pdf-docling]"

# With AI-assisted extraction (Anthropic Claude)
pip install "usdm4_protocol[ai]"

# With AI-assisted extraction (Google Gemini)
pip install "usdm4_protocol[ai-gemini]"

# Everything
pip install "usdm4_protocol[all]"
```

The extras are quoted so that shells such as zsh do not interpret the square brackets as glob patterns.

## Quick Start

### Unified Entry Point

The `USDM4Protocol` class provides a single interface across all formats:

```python
from usdm4_protocol import USDM4Protocol

protocol = USDM4Protocol()

# Import an M11 DOCX
wrapper = protocol.from_m11("path/to/protocol_m11.docx")

# Import a CPT DOCX
wrapper = protocol.from_cpt("path/to/protocol_cpt.docx")

# Import a legacy PDF
wrapper = protocol.from_pdf("path/to/protocol.pdf")

# Auto-detect format from file extension
wrapper = protocol.from_file("path/to/protocol.docx")

# Access the USDM JSON
json_str = wrapper.to_json()

# Check for errors
print(protocol.errors.dump(0))
```

### Format-Specific Handlers

Each format also has its own handler class for direct use:

```python
from usdm4_protocol.m11 import USDM4M11
from usdm4_protocol.cpt import USDM4CPT
from usdm4_protocol.legacy import USDM4Legacy

# M11 with AI-assisted extraction
m11 = USDM4M11()
wrapper = m11.from_docx("protocol.docx", use_ai=True)

# CPT
cpt = USDM4CPT()
wrapper = cpt.from_docx("protocol.docx")

# Legacy PDF with explicit converter choice
legacy = USDM4Legacy()
wrapper = legacy.from_pdf("protocol.pdf", pdf_converter="pymupdf")
```

### Exporting

Convert USDM JSON back to an HTML document view:

```python
protocol = USDM4Protocol()

# Export as M11-structured HTML
html = protocol.to_html("usdm_output.json", template="M11")

# Export as CPT-structured HTML
html = protocol.to_html("usdm_output.json", template="CPT")

# Generate data views (title page, etc.)
views = protocol.data_views("usdm_output.json")
```

## PDF Converter Options

The legacy PDF handler supports two backends for converting PDF to HTML. The converter is selected via the `pdf_converter` parameter.

| Converter | Install Extra | Size | Best For |
|-----------|--------------|------|----------|
| **PyMuPDF** | `pdf` | ~20 MB | Docker deployments, Fly.io, lightweight environments |
| **docling** | `pdf-docling` | ~2 GB+ | Maximum accuracy, complex table extraction, GPU-accelerated environments |

In `"auto"` mode (the default), docling is preferred when available, falling back to PyMuPDF. Both converters produce HTML output that feeds into the same downstream extraction pipeline.

```python
# Explicit selection
wrapper = protocol.from_pdf("protocol.pdf", pdf_converter="pymupdf")
wrapper = protocol.from_pdf("protocol.pdf", pdf_converter="docling")

# Auto-select best available (default)
wrapper = protocol.from_pdf("protocol.pdf")
```

## Processing Pipeline

All three handlers follow the same internal pipeline, implemented in a shared `BaseImport` base class:

```
Load → Extract → Assemble → Wrapper
```

1. **Load** — Reads the source document and converts it to a normalised HTML representation. For DOCX formats this uses `raw_docx`; for PDF it uses the selected converter (docling or PyMuPDF). The legacy PDF path additionally cleans the HTML (removing the table of contents) and splits it into logical sections.

2. **Extract** — Parses the HTML to identify structured data: title page fields, sponsor information, study design, amendments, inclusion/exclusion criteria, Schedule of Activities tables, and section content. Extraction uses a two-layer architecture:
   - **Common layer** (`common/extract/`): Format-agnostic extractors including AI-assisted extraction (`ContentExtractor`, `IEExtractor`), combined AI + heuristic row classification (`CombinedRowClassifier`), and section discovery utilities.
   - **Format layer** (e.g., `m11/import_/extract/`): Format-specific code that locates sections and delegates to common extractors.

3. **Assemble** — Maps the extracted data into USDM v4 model objects via `usdm4`, producing a `Wrapper` instance containing the complete study definition.
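The three stages above can be sketched as a minimal template-method base class. This is an illustrative shape only; the names and signatures of the real `BaseImport` class are internal to the package and may differ.

```python
# Illustrative sketch of the Load → Extract → Assemble pipeline shape.
from abc import ABC, abstractmethod


class PipelineSketch(ABC):
    """Three-stage import pipeline: each format handler supplies the stages."""

    def process(self, filepath: str):
        html = self.load(filepath)     # source document → normalised HTML
        data = self.extract(html)      # HTML → structured fields
        return self.assemble(data)     # fields → USDM wrapper object

    @abstractmethod
    def load(self, filepath: str) -> str: ...

    @abstractmethod
    def extract(self, html: str) -> dict: ...

    @abstractmethod
    def assemble(self, data: dict): ...
```

Each concrete handler (M11, CPT, legacy) then only overrides the stage that differs, while the orchestration stays in one place.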

## Schedule of Activities (SoA) Extraction

The SoA extraction pipeline converts protocol schedule tables into USDM4 timeline, epoch, encounter, and activity objects. It runs eight feature extractors in sequence on each SoA table: ActivityRow, Notes, Timepoints, Epochs, Visits, Windows, Activities, and Conditions.

A `CombinedRowClassifier` runs both an AI classifier (single Claude API call) and a heuristic classifier against the table's header rows, then merges their results using consensus logic. Where both agree, the result is used directly. Where they conflict, the AI classification is preferred. The merged classification provides row-type hints to each feature extractor, so they can target the correct row without re-scanning.

SoA section discovery (for legacy PDFs) uses a multi-pass strategy with 10 search terms, exhaustive matching across all candidate sections, and OCR-tolerant fallback. Sections containing `<table>` elements are preferred over those without. When a matched section has no table, child subsections are checked automatically.

When no epoch row exists in the SoA table but timepoints are present, a default "Study Period" epoch is synthesised automatically, allowing the assembler to build a schedule timeline even for simple tables.

## Package Structure

```
src/usdm4_protocol/
├── __init__.py              # USDM4Protocol unified entry point
├── common/                  # Shared utilities
│   ├── ai/                  # AI providers (Claude, fallback)
│   ├── assemble/            # USDM assembly logic
│   ├── base_import.py       # BaseImport — shared Load→Extract→Assemble pipeline
│   ├── extract/             # Common extractors (IE criteria, row classifiers, section finder)
│   ├── html/                # HTML table expansion, BeautifulSoup helpers
│   └── load/                # Shared load utilities
├── m11/                     # ICH M11 handler
│   ├── import_/             # Load DOCX → Extract → Assemble
│   ├── export/              # USDM → HTML export
│   ├── specification/       # M11 section definitions (YAML)
│   ├── elements/            # M11 element definitions
│   └── views/               # Document and data views
├── cpt/                     # TransCelerate CPT handler
│   ├── import_/             # Load DOCX → Extract → Assemble
│   └── views/               # Document views
├── legacy/                  # Legacy PDF handler
│   └── import_/
│       ├── load/            # PDF → HTML conversion
│       │   ├── to_html.py          # Factory selecting converter backend
│       │   ├── to_html_base.py     # Abstract base class
│       │   ├── to_html_docling.py  # Docling implementation
│       │   ├── to_html_pymupdf.py  # PyMuPDF implementation
│       │   ├── clean_html.py       # HTML normalisation (TOC removal)
│       │   └── split_html.py       # Section splitting by numbered headings
│       └── extract/         # HTML → structured data
└── soa/                     # Schedule of Activities extractor
    ├── soa_extractor.py     # Main SoA extraction orchestrator
    ├── soa_model.py         # SoA data → HTML table generation
    └── features/            # Feature extractors (epochs, visits, timepoints,
                             # activities, windows, conditions, notes) and
                             # heuristic row classifier
```

## Development

```bash
# Clone and install with dev dependencies
git clone https://github.com/data4knowledge/usdm4_protocol.git
cd usdm4_protocol
pip install -e ".[dev,ai,pdf]"

# Run tests
pytest

# Lint
ruff check src/ tests/
```

### Test Structure

Tests mirror the source layout under `tests/`:

- `tests/m11/` — Extraction, assembly, export, specifications, views
- `tests/soa/` — Schedule of Activities feature extraction, model generation, row classification
- `tests/common/` — Shared extractors, AI providers, HTML utilities, combined row classifier
- `tests/legacy/` — PDF loading, HTML cleaning/splitting, IE extraction, USDM integration
- `tests/cpt/` — CPT import, title page extraction, document views

Integration tests (in `test_integration.py` and `test_*_ai_integration.py` files) require real test files and/or API keys. They are marked with `@pytest.mark.integration` and `@pytest.mark.ai` respectively. A `.test_env` file is loaded at test collection time via `conftest.py` to provide environment variables such as `ANTHROPIC_API_KEY` and `GEMINI_API_KEY`. Integration tests use session-scoped caching to avoid redundant protocol loading across test methods.
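A minimal sketch of how a `conftest.py` can load `KEY=VALUE` pairs from a `.test_env` file at collection time; the project's actual `conftest.py` may parse differently, and the helper name here is assumed.

```python
import os
from pathlib import Path


def load_test_env(path: Path) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and comments,
    and export them without overriding pre-set variables."""
    loaded: dict[str, str] = {}
    if not path.exists():
        return loaded
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        loaded[key.strip()] = value.strip()
        os.environ.setdefault(key.strip(), value.strip())
    return loaded
```

Using `os.environ.setdefault` means variables already set in the shell (for example in CI) take precedence over the file.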

## Dependencies

Core: `usdm4`, `raw_docx`, `simple_error_log`, `beautifulsoup4`, `python-dateutil`

Optional: `pymupdf` (lightweight PDF), `docling` (high-accuracy PDF), `anthropic` + `d4k_ms_base` (Claude AI extraction), `google-genai` + `d4k_ms_base` (Gemini AI extraction)

## Testing

### Commands

The default `addopts` in `pyproject.toml` excludes `integration` and `ai` markers and enables coverage, so plain `pytest` runs unit tests with coverage reporting.

| Command | What runs | Speed |
|---|---|---|
| `pytest` | Unit tests only (default excludes integration + ai) | Fast (seconds) |
| `pytest -m "not ai"` | Unit + integration tests, no API calls | Medium |
| `pytest -m ""` | Full suite including AI integration tests | Slow (requires API keys) |

### Coverage

Coverage is enabled by default via `addopts` in `pyproject.toml`. To run without coverage:

```bash
pytest -o "addopts=-m 'not integration and not ai'"
```

### Building the Package

```bash
python3 -m build --sdist --wheel
```

### Publishing

```bash
twine upload dist/*
```

## License

AGPL-3.0-only. See the `LICENSE` file for the full text.
