Metadata-Version: 2.4
Name: incept-qti-sdk
Version: 0.5.3
Summary: QTI 3.0 item authoring and upload SDK
Project-URL: Homepage, https://github.com/trilogy-group/incept_qti_converter
Project-URL: Repository, https://github.com/trilogy-group/incept_qti_converter
Project-URL: Changelog, https://github.com/trilogy-group/incept_qti_converter/blob/main/CHANGELOG.md
Author: Trilogy Group
License-Expression: MIT
License-File: LICENSE
Keywords: assessment,education,ims,qti
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Education
Requires-Python: >=3.10
Requires-Dist: beautifulsoup4>=4.12
Requires-Dist: httpx>=0.27
Requires-Dist: latex2mathml>=3.0
Requires-Dist: lxml>=5.0
Requires-Dist: mistune>=3.0
Requires-Dist: pydantic>=2.0
Requires-Dist: python-dotenv>=1.0
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.24; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: ruff; extra == 'dev'
Description-Content-Type: text/markdown

# QTI 3.0 Conversion SDK

Build assessment items using Python data models. The SDK converts them into valid, renderable QTI 3.0 XML — handling identifiers, response processing, scoring, and content transformation internally.

## Installation

```bash
pip install incept-qti-sdk
```

Or with `uv`:

```bash
uv add incept-qti-sdk
```

For development (with test/lint tools):

```bash
git clone https://github.com/trilogy-group/incept_qti_converter.git
cd incept_qti_converter
pip install -e ".[dev]"
```

## Quickstart

```python
from qti_sdk import (
    QuestionItem, QtiBuilder, ChoiceConfig, Choice,
    SummativeBehavior, ItemPackaging,
)

item = QuestionItem(
    question="What is the capital of France?",
    behavior=SummativeBehavior(),
    packaging=ItemPackaging(difficulty=0.3),
    interactions=[
        ChoiceConfig(choices=[
            Choice(id="A", content="London"),
            Choice(id="B", content="Paris", correct=True),
            Choice(id="C", content="Berlin"),
        ]),
    ],
)

result = QtiBuilder().build(item)
print(result.item_xml)
```

To upload items to TimeBack, copy `.env.example` to `.env`, fill in your credentials, then:

```python
import asyncio
from qti_sdk import TimeBackConfig, upload_qti_package, get_job_status

config = TimeBackConfig.from_env()
upload = asyncio.run(upload_qti_package([result], config))
print(f"Job: {upload.job_id}")
```
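
The import above includes `get_job_status`, which the snippet does not use. Continuing the same session, a minimal polling sketch; the `get_job_status(job_id, config)` call signature is an assumption, so check the function's signature in your installed version:

```python
# Continuing the snippet above; the exact get_job_status signature is assumed.
status = asyncio.run(get_job_status(upload.job_id, config))
print(f"Status: {status}")
```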

## CLI Usage

The SDK ships a `qti-sdk` command (also available as `python -m qti_sdk`) for use from any language or shell script. Input is QuestionItem JSON; output is structured JSON on stdout.

### Build items into QTI XML

Takes QuestionItem JSON (a single object or an array) and writes QTI 3.0 XML files to an output directory. A manifest JSON is emitted to stdout.

```bash
# From a file
qti-sdk build -i items.json -o ./qti_output

# From stdin (e.g. piped from a TypeScript generator)
cat items.json | qti-sdk build -o ./qti_output
```

Output directory structure:

```
qti_output/
  items/
    choice_abc123.xml
    text_entry_def456.xml
  stimuli/
    stim_aabbccdd.xml
```

Manifest JSON on stdout:

```json
{
  "output_dir": "./qti_output",
  "items": [
    {
      "item_id": "choice_abc123",
      "item_xml": "items/choice_abc123.xml",
      "stimulus_xml": "stimuli/stim_aabbccdd.xml",
      "stimulus_id": "stim_aabbccdd",
      "metadata": { "interaction_type": "choiceInteraction" }
    }
  ]
}
```
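
Because the manifest is the only thing written to stdout (progress logs go to stderr), another process can drive the CLI and parse the result directly. A minimal sketch in Python; any language with a JSON parser works the same way:

```python
import json
import subprocess

# Invoke the CLI and capture the manifest it prints to stdout.
proc = subprocess.run(
    ["qti-sdk", "build", "-i", "items.json", "-o", "./qti_output"],
    capture_output=True, text=True, check=True,
)
manifest = json.loads(proc.stdout)
for entry in manifest["items"]:
    print(entry["item_id"], "->", entry["item_xml"])
```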

### Upload items to TimeBack

Builds items, packages them into a QTI ZIP, and uploads the result to TimeBack in one step. Credentials are read from environment variables (or a `.env` file).

```bash
# Set credentials (or put them in .env)
export TIMEBACK_PLATFORM_API_URL="https://api.timeback.example.com"
export TIMEBACK_APPLICATION_CLIENT_ID="your-client-id"
export TIMEBACK_APPLICATION_CLIENT_SECRET="your-client-secret"

# Upload and return immediately with a job ID
qti-sdk upload -i items.json

# Upload and wait for processing to complete
qti-sdk upload -i items.json --poll
```

Output (without `--poll`):

```json
{
  "job_id": "abc-123-def",
  "warnings": []
}
```

Output (with `--poll`):

```json
{
  "job_id": "abc-123-def",
  "warnings": [],
  "status": "COMPLETED",
  "last_updated": "2026-03-01T12:00:00Z",
  "item_statuses": [
    { "item_id": "uuid-1", "status": "COMPLETED", "last_updated": "...", "message": null }
  ]
}
```

Additional flags: `--media-root DIR` (resolve local asset paths), `--save-zip PATH` (save the assembled ZIP for debugging), `--stimulus-mode inline` (inline stimuli instead of separate files), `--emit-end-attempt` (include `qti-end-attempt-interaction` in adaptive items for renderers without a wrapper Submit button).

### Check job status

```bash
qti-sdk status JOB_ID

# Poll until the job finishes
qti-sdk status JOB_ID --poll
```

### Dump JSON Schema

```bash
# All models
qti-sdk schema

# Specific model
qti-sdk schema --model QuestionItem
```

## Curriculum Standards

Every uploaded item needs at least one `curriculum_standards` entry. When labels are not pre-resolved to CASE UUIDs, you must also set **`document`** and **`course`** to the **exact titles** of the CFDocument and course on TimeBack (matched case-insensitively). The SDK lists documents and courses from the API for your configured environment, then resolves human-readable standard labels (e.g. `CCSS.MATH.CONTENT.3.OA.A.1`) to CASE UUIDs at upload time.

### Choosing `document` and `course`

Use the same titles shown in TimeBack (or returned by the CASE / curriculum APIs), for example:

| `document` | `course` |
|------------|----------|
| `Common Core Standard` | `3rd Grade` |
| `Common Core Standard` | `5th Grade` |
| `Common Core Standard` | `Algebra 1` |

Names must match **exactly**, aside from letter case. If no document or course matches, the upload fails with a clear error.

### Python usage

```python
from qti_sdk import ItemPackaging, StandardAlignment

packaging = ItemPackaging(
    difficulty=0.5,
    document="Common Core Standard",
    course="3rd Grade",
    curriculum_standards=[
        StandardAlignment(label="CCSS.MATH.CONTENT.3.OA.A.1"),
    ],
)
```

### JSON usage

```json
{
  "packaging": {
    "difficulty": 0.5,
    "document": "Common Core Standard",
    "course": "3rd Grade",
    "curriculum_standards": [
      {"label": "CCSS.MATH.CONTENT.3.OA.A.1"}
    ]
  }
}
```

### How standard resolution works

1. `document` and `course` locate the course tree via TimeBack's CASE / curriculum APIs (with in-memory caching per upload batch).
2. `curriculum_standards[].label` values are matched against standard codes in that tree.
3. The SDK resolves each label to a CASE UUID and includes it in the uploaded package.

### Validation

The SDK validates all items **before** making any network calls:

- Items missing `curriculum_standards` are rejected (TimeBack requires at least one).
- Items with unresolved standard labels but missing `document` or `course` are rejected.
- All errors are collected and reported per-item in a single `ValidationError`, so you can fix everything in one pass.
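
Since validation runs before any network call, a failing batch can be caught and corrected with no partial uploads. A minimal sketch, assuming the same `upload_qti_package` entry point as in the Quickstart and a list of `BuildResult` objects named `results`:

```python
import asyncio
from qti_sdk import TimeBackConfig, ValidationError, upload_qti_package

config = TimeBackConfig.from_env()
try:
    upload = asyncio.run(upload_qti_package(results, config))
except ValidationError as exc:
    # Per-item problems are collected into this single exception.
    print(exc)
```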

### Logging and verbosity

Progress logs go to stderr (not stdout), so they never interfere with structured JSON output.

```bash
qti-sdk upload -i items.json              # INFO-level logs (default)
qti-sdk upload -i items.json --verbose    # DEBUG-level logs
qti-sdk upload -i items.json --quiet      # WARNING only
```

### Error handling

On failure, the CLI writes a structured JSON error to stdout and exits with a non-zero code:

| Exit code | Meaning |
|-----------|---------|
| 0 | Success |
| 1 | Validation error (bad JSON, missing env vars) |
| 2 | Build error (SDK internal) |
| 3 | Upload / network error |
| 4 | Poll timeout |

```json
{
  "error": "ValidationError",
  "message": "4 validation error(s)",
  "details": [
    { "loc": ["question"], "msg": "Field required", "type": "missing" }
  ]
}
```

### Input format

The CLI accepts a QuestionItem JSON — either a single object or an array. The QuestionItem schema is available via `qti-sdk schema --model QuestionItem`.

A minimal example:

```json
{
  "question": "What is 2 + 2?",
  "behavior": { "type": "summative" },
  "interactions": [
    {
      "type": "choice",
      "choices": [
        { "id": "A", "content": "3" },
        { "id": "B", "content": "4", "correct": true },
        { "id": "C", "content": "5" }
      ]
    }
  ],
  "packaging": { "difficulty": 0.2 }
}
```

## Examples

The [`examples/`](examples/) directory contains runnable scripts for every interaction type:

| Example | What it demonstrates |
|---------|---------------------|
| `example_mcq.py` | Multiple-choice (summative, adaptive, formative) |
| `example_text_entry.py` | Fill-in-the-blank (single and multi-blank) |
| `example_extended_text.py` | Essay / free-response items |
| `example_match.py` | Matching and categorization |
| `example_media.py` | Media interaction (video/audio, play count tracking) |
| `example_order.py` | Sequencing / ordering (**temporarily disabled** — `OrderConfig` raises `ValidationError` at construction) |
| `example_pci.py` | Portable Custom Interactions (graphs, number lines) |
| `example_composite.py` | Multi-part items (mixed interaction types) |
| `example_scoring.py` | Weighted scoring, partial credit, score expressions |
| `example_template.py` | Randomized item variants with template variables |
| `example_upload.py` | Build + upload + poll job status |

## Error Handling

All SDK exceptions inherit from `QtiSdkError`:

```python
from qti_sdk import QtiSdkError, ValidationError, BuildError, UploadError

try:
    result = builder.build(item)
except ValidationError:
    ...  # invalid model input
except BuildError:
    ...  # XML generation failure
except QtiSdkError:
    ...  # catch-all for any SDK error
```

## Logging

**CLI users:** Logs go to stderr automatically. Use `--verbose` for debug output, `--quiet` for warnings only. See [CLI Usage](#cli-usage) above.

**Python users:** The SDK uses Python's standard `logging` module under the `qti_sdk` namespace:

```python
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger("qti_sdk").setLevel(logging.DEBUG)
```

| Logger | Levels used | What it logs |
|--------|-------------|-------------|
| `qti_sdk.upload.auth` | DEBUG | Granted OAuth scopes |
| `qti_sdk.upload.uploader` | INFO, ERROR, WARNING | Progress milestones, upload failures, asset download issues |
| `qti_sdk.upload.case_resolver` | INFO | Course tree loading, code-to-UUID mapping counts |

---

# Architecture & Data Model

## Goal

Provide a mandated set of input data models that generator authors use to produce assessment items. The SDK converts these models into valid, renderable QTI 3.0 XML — handling identifier correlation, response processing, and content transformation internally.

Scope: **items, stimuli, and companion materials** (no test/section assembly). Adaptive items supported. Composite items (multiple interactions in one item) supported. Template processing (randomized item variants) supported. Presentation attributes deferred. Inter-resource associations (stimulus refs, dependencies) are handled automatically based on field presence.

**Quick-reference for generator authors:** [`CONVENTIONS.md`](CONVENTIONS.md)
**Full field-by-field schema mapping, composite semantics, scoring chain, template processing, and validation rules:** [`QTI_SCHEMA_MAPPING.md`](QTI_SCHEMA_MAPPING.md)
**Accessibility support status and rollout policy:** [`docs/accessibility_support.md`](docs/accessibility_support.md)
**Custom grading API contract** (for `api_scoring`): [`docs/custom_grading_api.md`](docs/custom_grading_api.md)

---

## Core Design Decision

**Unified `QuestionItem` container with an explicit behavior discriminator.**

All items are constructed as a single `QuestionItem` that carries content + assessment semantics. A `behavior` field declares _how_ the item should behave — which response processing pattern, feedback reveal strategy, and scoring approach to use. An `interactions` list holds one or more typed interaction configs.

Generator authors:

1. Create a **`QuestionItem`** with a shared question stem
2. Pick a **behavior** (how the item scores and gives feedback)
3. Add one or more **interaction configs** (what kind of question — `ChoiceConfig`, `TextEntryConfig`, etc.)
4. Fill in **optional fields** (stimulus, companion materials, accessibility catalog, template, scoring dimensions, feedback)

They never write QTI identifiers, response processing logic, or outcome declarations.

---

## Interaction Types (7 configs)

Interaction-specific fields live on the config. Shared fields (`question`, `stimulus`, `companion_materials`, `accessibility_catalog`, `behavior`, `feedback`, `template`, `scoring_dimensions`) live on `QuestionItem`. Multiple interactions in one item produce a composite item automatically.

| Config | Covers | Key interaction-specific fields |
|---|---|---|
| `ChoiceConfig` | MCQ, multi-select | `choices[]`, `max_selections`, `shuffle`, `score_map`, `score_expression` |
| `TextEntryConfig` | Fill-in-the-blank (single/multi) | `answers`, `prompt` (with `<blank>` placeholders), `case_sensitive`, `tolerance`, `correct_expression` |
| `ExtendedTextConfig` | Essay, SAQ, FRQ | `expected_length`, `scoring_mode` |
| `MatchConfig` | Matching pairs, categorization | `source_set[]`, `target_set[]`, `correct_mapping[]`, `shuffle`, `score_map` |
| `MediaConfig` | Video/audio with play count tracking | `media_type`, `sources[]`, `autostart`, `min_plays`, `max_plays`, `loop` |
| `OrderConfig` | Sequencing (**temporarily disabled** — raises `ValidationError`) | `items[]`, `correct_order[]`, `shuffle`, `partial_credit` |
| `PCIConfig` | PCI-driven (graph, number-line, etc.) | `interaction_type`, `data_attributes`, `properties`, `interaction_markup`, `scoring` (match-correct \| external) |

All scorable configs also support `score_expression` (typed DSL) and `target_dimension` (multi-dimensional scoring). Each config has optional `prompt` and `label` fields for composite items. `PCIConfig` does not generate standard QTI interaction elements — unmodeled standard interactions (gap-match, inline-choice, etc.) need first-class configs when needed. PCI modules are resolved via `PCI_MODULE_REGISTRY` (4 registered platform modules) or explicit `module` + `data_item_path_uri`; unresolvable PCIs are rejected at construction time.
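
For example, a multi-select item with partial credit uses the same `QuestionItem` shape as the Quickstart MCQ. A sketch using fields from the table above; the `score_map` shape (choice id to points) is an assumption, so verify against `qti-sdk schema --model QuestionItem`:

```python
from qti_sdk import Choice, ChoiceConfig, QuestionItem, SummativeBehavior

item = QuestionItem(
    question="Select all prime numbers.",
    behavior=SummativeBehavior(),
    interactions=[
        ChoiceConfig(
            max_selections=2,
            shuffle=True,
            choices=[
                Choice(id="A", content="2", correct=True),
                Choice(id="B", content="4"),
                Choice(id="C", content="5", correct=True),
            ],
            # Assumed shape: choice id -> points awarded for selecting it.
            score_map={"A": 0.5, "B": 0.0, "C": 0.5},
        ),
    ],
)
```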

For full field definitions: [`QTI_SCHEMA_MAPPING.md` §1](QTI_SCHEMA_MAPPING.md#1-the-proposed-model). For usage rules per interaction type: [`CONVENTIONS.md`](CONVENTIONS.md).

---

## Behavior Types (assessment patterns)

Each behavior is a parameterized type (discriminated union) that maps to a pre-built response processing template inside the SDK.

| Behavior                    | Use case                      | Parameters                                                                 |
| --------------------------- | ----------------------------- | -------------------------------------------------------------------------- |
| `SummativeBehavior`         | High-stakes test, one attempt | None. No feedback. SCORE only.                                             |
| `FeedbackEnabled`           | Practice, formative, adaptive | `adaptive` (`false`=non-adaptive, `true`=adaptive), `policy` (FeedbackPolicy). Feedback content via `Feedback` entries on the item/interaction. |
| `ExternalGraded`            | Essay, FRQ, complex rubric    | None. No automated scoring.                                                |
| `api_scoring`               | External API-based grading    | `ApiScoringConfig(endpoint, mastery_value?, extra_fields?)`. See [`docs/custom_grading_api.md`](docs/custom_grading_api.md) for the API contract. |

Feedback content (hints, explanations, solution steps, learning content, answer reveals) lives on `Feedback` entries attached at three levels: sub-items (Choice, OrderItem, MatchItem), interaction configs, and the QuestionItem. The behavior's `FeedbackPolicy` controls default reveal timing.

The SDK maps Feedback entries to QTI elements automatically: choice-level → `qti-feedback-inline`, interaction-level → `qti-feedback-block`, item-level → `qti-modal-feedback` or `qti-feedback-block`.
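
A sketch of a practice item using `FeedbackEnabled` with choice-level `Feedback` entries. The keyword names follow the descriptions above, but the exact constructor signatures, the `feedback` field name on `Choice`, and the import locations are assumptions:

```python
from qti_sdk import (
    Choice, ChoiceConfig, Feedback, FeedbackEnabled, ItemPackaging, QuestionItem,
)

item = QuestionItem(
    question="Which planet is closest to the Sun?",
    behavior=FeedbackEnabled(adaptive=False),
    packaging=ItemPackaging(difficulty=0.3),
    interactions=[
        ChoiceConfig(choices=[
            Choice(
                id="A", content="Mercury", correct=True,
                feedback=[Feedback(type="explanation",
                                   content="Correct: Mercury orbits closest to the Sun.")],
            ),
            Choice(
                id="B", content="Venus",
                feedback=[Feedback(type="hint",
                                   content="Venus is the second planet from the Sun.")],
            ),
        ]),
    ],
)
```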

---

## Shared Building Blocks (optional fields on `QuestionItem`)

| Block                  | Purpose                                                                                                                               |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| `Stimulus`             | Text, image, audio, or video passage. Union type discriminated by `type` field. Accepts a single stimulus or a list (multi-document). |
| `CompanionMaterials`   | Tools available during the item: `calculator?: basic\|standard\|scientific`, `ruler?: bool`, `protractor?: bool`                      |
| `LearningContent`      | Structured remediation material (text/video). Used as content in `Feedback` entries.                                                   |
| `Feedback`             | First-class feedback entity. `type` (explanation/hint/solution_steps/learning_content/answer_reveal), `content` (markdown or LearningContent), optional `show_when` (ShowCondition). |
| `FeedbackPolicy`       | Default show conditions per feedback type. Lives on `FeedbackEnabled` behavior.                                                        |
| `ScoringDimension`     | Named scoring dimension with `max_score` and `scoring_method` (human/machine). Interactions target dimensions via `target_dimension`. |
| `TemplateConfig`       | Randomized item variants: `variables[]` (random integers/floats), `constraints[]`, `computed_values`. `correct_expression` on `TextEntryConfig` computes answers at delivery time. |
| `ContextVariable`      | Advanced escape hatch for raw `qti-context-declaration` elements. Most generators will never use this.                                |
| `AccessibilityCatalog` | Per-element accessibility alternatives. List of `{target_path, support, content}` where `target_path` points to model fields (for example `question`, `interactions[0].prompt`, `interactions[0].choices[B]`), `support` is a QTI card type, and `content` is inline HTML/SSML or a file reference. |
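
A sketch combining several of these blocks on one externally graded item. The class and field names come from the table above, but the exact constructor signatures and import locations are assumptions:

```python
from qti_sdk import (
    CompanionMaterials, ExtendedTextConfig, ExternalGraded,
    QuestionItem, ScoringDimension,
)

item = QuestionItem(
    question="Explain why the interior angles of a triangle sum to 180 degrees.",
    behavior=ExternalGraded(),
    companion_materials=CompanionMaterials(calculator="basic", protractor=True),
    scoring_dimensions=[
        # "name" is an assumed field; max_score and scoring_method are listed above.
        ScoringDimension(name="reasoning", max_score=4, scoring_method="human"),
    ],
    interactions=[ExtendedTextConfig(expected_length=200)],
)
```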

---

## Presence-Driven Associations

The SDK auto-generates QTI associations and separate resource files based on which optional fields are populated. Generator authors never deal with identifiers, hrefs, or cross-file references.

### Stimulus → separate file or inline (mode-driven)

When `stimulus` is present on the `QuestionItem`, the SDK generates stimulus content in one of two modes, selected at build time:

**Mode 1: Separate files (default)**
1. Generates a **separate** `qti-assessment-stimulus` XML file containing the stimulus body (text/image/audio/video processed through the content pipeline)
2. Adds a `qti-assessment-stimulus-ref` element on the item with `identifier` and `href` pointing to the stimulus file
3. Identifier/href correlation is guaranteed by the SDK — a single internal ID produces the stimulus filename, the ref's `href`, and both `identifier` values. Generator authors never see or manage these.

**Mode 2: Inline**
1. Stimulus content is injected directly into `<qti-item-body>` before the interaction element
2. No separate stimulus file generated, no `<qti-assessment-stimulus-ref>` element
3. Manifest has no stimulus resources or dependencies

### Companion Materials → `qti-companion-materials-info`

When `companion_materials` is present on the `QuestionItem`, the SDK generates `qti-companion-materials-info` with the appropriate children:

- `companion_materials.calculator = "scientific"` → `<qti-calculator>` with type attribute
- `companion_materials.ruler = true` → `<qti-rule>`
- `companion_materials.protractor = true` → `<qti-protractor>`

No identifier correlation needed — this is a self-contained element on the item.

### Accessibility Catalog → `qti-catalog-info`

When `accessibility_catalog` is present on the `QuestionItem`, the SDK generates `qti-catalog-info` containing one `qti-catalog` per target element, each with `qti-card` entries for the supplied support types:

- `{support: "spoken", content: "<speak>...</speak>"}` → `<qti-card support="spoken">` with SSML content
- `{support: "glossary-on-screen", content: "..."}` → `<qti-card support="glossary-on-screen">` with HTML content
- `{support: "sign-language", content: "https://...mp4"}` → `<qti-card support="sign-language">` with `<qti-file-href>`
- `{support: "long-description", content: "..."}` → `<qti-card support="long-description">` with HTML content

The SDK wires `data-catalog-idref` attributes on corresponding item-body elements to link each catalog entry to the content it describes. Generator authors supply `target_path` values (for example `question`, `interactions[0].prompt`, `interactions[0].choices[B]`, `interactions[0].label`) and the builder resolves them to the correct XML element.

Only cards that are supplied get generated. An item with just `spoken` entries gets a catalog with only spoken cards — no empty placeholder cards for other support types.
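
A sketch of catalog entries in the `{target_path, support, content}` shape described above; whether the field accepts plain dicts or `AccessibilityCatalog` objects is an assumption:

```python
from qti_sdk import Choice, ChoiceConfig, QuestionItem, SummativeBehavior

item = QuestionItem(
    question="What is the capital of France?",
    behavior=SummativeBehavior(),
    interactions=[
        ChoiceConfig(choices=[
            Choice(id="A", content="London"),
            Choice(id="B", content="Paris", correct=True),
        ]),
    ],
    accessibility_catalog=[
        {"target_path": "question", "support": "spoken",
         "content": "<speak>What is the capital of France?</speak>"},
        {"target_path": "interactions[0].choices[B]", "support": "glossary-on-screen",
         "content": "<p>Paris: the capital city of France.</p>"},
    ],
)
```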

### Stimulus mode selection

The mode is selected via the builder API:

```python
# At build time
builder = QtiBuilder()
result = builder.build(item, stimulus_mode="separate")   # default
result = builder.build(item, stimulus_mode="inline")

# Or post-processing transform (backwards-compatible)
from qti_sdk.builders.transforms import inline_stimulus, inline_all_stimuli
result = inline_stimulus(result)          # single BuildResult
results = inline_all_stimuli(results)     # list of BuildResults
```

Items without a stimulus pass through unchanged regardless of mode.

### End-attempt interaction (adaptive)

By default, adaptive items do **not** emit `qti-end-attempt-interaction` or the `RESPONSE_END_ATTEMPT` response declaration. This avoids a duplicate Submit button when the renderer (e.g. qti-3-player) provides its own wrapper Submit. To include them for renderers that need an in-body end-attempt control:

```bash
# CLI
qti-sdk build -i items.json -o out/ --emit-end-attempt
```

```python
# Python API
result = builder.build(item, emit_end_attempt=True)
```

### Manifest Dependencies (auto-wired at package time)

When exporting a batch of items, the SDK generates `imsmanifest.xml` with:

- Each item as a `resource` (type `imsqti_item_xmlv3p0`)
- Each stimulus as a `resource` (type `imsqti_stimulus_xmlv3p0`)
- `dependency` elements on items that reference stimuli, wired by matching identifiers
- `qtiMetadata` per resource — auto-derived from model type and behavior (interaction type, feedback type, scoring mode)

Generator authors call a single `package(items)` method. The manifest is fully derived.

---

## Content Processing Pipeline (reusable library)

All text fields that contain student-visible content go through:

1. **LaTeX → MathML** — `$...$` and `$$...$$` delimiters converted via MathJax. Existing MathML and SVG protected.
2. **Template variable substitution** — `@@var@@` tokens are replaced with `<qti-printed-variable>` elements (when `TemplateConfig` is present). Tokens are protected through the Markdown stage via placeholders.
3. **Markdown → HTML** — Markdown parsed by `mistune` with `strikethrough` + `table` plugins.
4. **HTML → XML** — Void elements self-closed. Entities fixed for XML compliance.

This pipeline ships as a library. Generator authors can pre-process their content through it, or pass raw markdown/LaTeX and let the converter handle it.
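
The pipeline's own entry points are not documented in this README, but two libraries from this distribution's dependency list, `latex2mathml` and `mistune`, can be used to preview what the LaTeX and Markdown stages produce. A sketch (not the SDK's internal API, and not necessarily the exact engines it invokes at build time):

```python
import mistune
from latex2mathml.converter import convert as latex_to_mathml

# LaTeX -> MathML preview.
print(latex_to_mathml(r"\frac{1}{2} x^2"))

# Markdown -> HTML with the strikethrough and table plugins and hard line breaks.
to_html = mistune.create_markdown(plugins=["strikethrough", "table"], hard_wrap=True)
print(to_html("A **bold** claim with ~~strikethrough~~."))
```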

Image fields go through a separate chain: probe dimensions, decide inline vs. side placement, generate sized `<img>` tags.

### Markdown Capability Status

Current behavior is implemented in `qti_sdk/content/markdown_converter.py` (`mistune.create_markdown(plugins=["strikethrough", "table"], hard_wrap=True)`), then normalized by `qti_sdk/content/xml_utils.py`.

| Capability | Typical syntax | Current support status | Notes / when needed |
|---|---|---|---|
| Headers | `#`, `##`, `###` | Supported | Renders to `<h1>`...`<h6>`. |
| Bold | `**text**` | Supported | Renders to `<strong>`. |
| Italic | `*text*` or `_text_` | Supported | Renders to `<em>`. |
| Inline code | `` `code` `` | Supported | Renders to `<code>`. |
| Fenced code blocks | Triple backticks | Supported | Renders to `<pre><code>...</code></pre>`. |
| Strikethrough | `~~text~~` | Supported | Enabled via `strikethrough` plugin. |
| Tables | Pipe table syntax | Supported | Enabled via `table` plugin. |
| Lists / blockquotes / links | Standard Markdown | Supported | Handled by mistune core parser. |
| Single newline to line break | `line1` + newline + `line2` | Supported | `hard_wrap=True` emits `<br/>`. |
| Raw inline/block HTML passthrough | `<sub>`, `<div>`, etc. | Not supported (escaped) | Tags are escaped as text (`&lt;...&gt;`) in the current pipeline. |
| Subscript (Markdown syntax) | `H~2~O` | Not supported | No subscript plugin/extension configured. |
| Superscript (Markdown syntax) | `x^2^` | Not supported | No superscript plugin/extension configured. |

This table reflects **current behavior**, not target behavior. If richer typography is required in future content (chemistry formulas, exponents, semantic inline HTML), treat subscript/superscript and HTML passthrough as explicit backlog capabilities.

---

## How the Converter Works

```
Generator output (QuestionItem)
    │
    ├── behavior field ──► selects response processing template
    ├── interactions[] ──► selects item body structure (single or composite)
    ├── content fields ──► content processing pipeline ──► XHTML
    │
    ├── stimulus present? ──► YES ──► generate stimulus XML + stimulus-ref
    │                         NO  ──► skip
    │
    ├── template present? ──► YES ──► emit template declarations + processing
    │                         NO  ──► skip
    │
    ├── presence-driven ──► companion_materials, accessibility_catalog,
    │   associations         feedback, scoring_dimensions, context_declarations
    │
    ▼
QtiBuilder.build(item) assembles item XML:
    1. Context declarations (if present)
    2. Response declarations (derived from interactions + correct answers + score maps)
    3. Outcome declarations (SCORE, MAXSCORE, FEEDBACK, per-dimension outcomes)
    4. Template declarations + processing (if template present)
    5. Item body (interactions + stimulus-ref + processed content + feedback blocks)
    6. Response processing (scoring chain: score_expression > score_map > binary)
    7. Feedback elements (if feedback entries present)
    │
    ▼
Output:
    item.xml ─────── QTI 3.0 assessment item (all identifiers correlated)
    stimulus.xml ─── QTI 3.0 stimulus (only if stimulus field present)
```

All identifier wiring is internal. The generator author never sees `RESPONSE`, `SCORE`, `FEEDBACK_optionA`, or stimulus ref identifiers. For composite items, per-interaction identifiers (`RESPONSE_1`, `SCORE_1`, `FEEDBACK_1`, etc.) are auto-generated and correlated.
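
A sketch of a composite item (two interactions on one `QuestionItem`); the optional `label` fields come from the interaction table above, and the `TextEntryConfig` keyword names are assumptions:

```python
from qti_sdk import (
    Choice, ChoiceConfig, QtiBuilder, QuestionItem, SummativeBehavior, TextEntryConfig,
)

# Two interactions on one item => composite; RESPONSE_1/RESPONSE_2, SCORE_1/SCORE_2
# and so on are generated and correlated internally, never in the model.
item = QuestionItem(
    question="Read the passage, then answer both parts.",
    behavior=SummativeBehavior(),
    interactions=[
        ChoiceConfig(
            label="Part A",
            choices=[
                Choice(id="A", content="True"),
                Choice(id="B", content="False", correct=True),
            ],
        ),
        TextEntryConfig(
            label="Part B",
            prompt="Give one word of justification: <blank>.",
            answers=["evidence"],
        ),
    ],
)
result = QtiBuilder().build(item)
```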

---

## Validation Pipeline

| Tier                    | What                                                            | How                                                                  |
| ----------------------- | --------------------------------------------------------------- | -------------------------------------------------------------------- |
| **Input validation**    | Model shape, required fields for declared behavior              | Pydantic validators on the mandated models                           |
| **XSD validation**      | Generated XML conforms to QTI 3.0 schema                        | XSD validation against official schema                               |
| **Semantic validation** | Identifier correlation, correct answer references valid choices | Programmatic checks (guaranteed by builder, but belt-and-suspenders) |
| **Render validation**   | Item renders correctly in a QTI player                          | Feed to renderer, screenshot, LLM compares source vs. rendered       |

Tier 4 is recommended during onboarding (first 50 items per generator) and optional for steady-state.

---

## Adaptability Strategy

### When behavior needs to change

A generator author needs a new feedback strategy or scoring approach that doesn't match existing behaviors.

**Path:** Add a new behavior type with its parameters. Write the response processing template (~50-100 lines). Register it in the converter. Ship an SDK update. No model changes needed on `QuestionItem` — the new behavior type carries its own content.

### When a new interaction type is needed

An interaction not covered by the 6 standard config types (e.g., hotspot on image, slider, drawing).

**Path:** Define a new interaction config with its specific fields. Write the item body generator and supported behavior templates. Register it in `QtiBuilder`. Ship an SDK update.

**Escape hatch:** `PCIConfig` covers any PCI-based interaction immediately — for registered modules (`number-line-question`, `graph-based-question`, `graph-plot-points-question`, `graph-line-inequality-question`) the author provides `interaction_type` + `data_attributes` and the SDK handles module resolution. Use `correct_values` for match-correct scoring (format depends on module: space-separated `"x y"` for point type, decimal for float, string for string). For custom modules, the author provides explicit `module` + `data_item_path_uri`. For `graph-plot-points-question`, use increment/label attributes (`data-graph-increment-value`, `data-graph-label-interval`, etc.) instead of `data-graph-x-step`/`data-graph-y-step`.
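
A sketch of this escape hatch for a registered module; the `data-*` attribute names and the `correct_values` format shown here are illustrative assumptions, so consult the module's own documentation for the attributes it actually reads:

```python
from qti_sdk import PCIConfig, QuestionItem, SummativeBehavior

item = QuestionItem(
    question="Plot 3/4 on the number line.",
    behavior=SummativeBehavior(),
    interactions=[
        PCIConfig(
            # Registered module: resolved via PCI_MODULE_REGISTRY, so no explicit
            # module / data_item_path_uri is needed here.
            interaction_type="number-line-question",
            data_attributes={
                "data-min": "0",   # illustrative attribute names (assumptions)
                "data-max": "1",
            },
            correct_values=["0.75"],  # decimal format for float-valued modules
        ),
    ],
)
```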

### When the model is too restrictive

A generator needs a field or structure the model doesn't support.

**Path:** Add the field as optional. Existing generators are unaffected. Only the behavior templates that use the new field need updating.

### When full custom scoring is needed

Rarely, someone needs score computation logic that no template covers.

**Path:** Two escape hatches, in order of preference:

1. **Score Expression DSL** — a typed, serializable expression tree (`SumExpr`, `ProductExpr`, `MapResponseExpr`, etc.) that compiles 1:1 to QTI expression elements. Covers weighted sums, scaled scores, bonus/penalty math. Lives on the model, round-trips via JSON, LLM-friendly. See the sketch after this list.
2. **Builder hooks** — non-serializable build-time extensibility on the builder class (not the model). Override specific builder methods to inject custom XML fragments into response processing. Full power, requires QTI knowledge.
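
A sketch of option 1, the Score Expression DSL; `SumExpr`, `ProductExpr`, and `MapResponseExpr` are named above, but their constructor arguments, literal handling, and export locations are assumptions:

```python
# Assumed import location and keyword arguments for the expression classes.
from qti_sdk import Choice, ChoiceConfig, MapResponseExpr, ProductExpr, SumExpr

# Hypothetical weighted score: double the mapped response score, plus a 1-point bonus.
expression = SumExpr(terms=[
    ProductExpr(factors=[2.0, MapResponseExpr()]),
    1.0,
])

config = ChoiceConfig(
    choices=[
        Choice(id="A", content="Yes", correct=True),
        Choice(id="B", content="No"),
    ],
    score_map={"A": 1.0, "B": 0.0},   # consumed by MapResponseExpr
    score_expression=expression,      # takes priority over score_map and binary scoring
)
```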

---

## What We Don't Build Yet

- **Test assembly** (sections, ordering, branching, time limits)
- **Presentation/style attributes** (CSS, ARIA beyond defaults)
- **Accessibility feature rollout gating** (new SDK accessibility support is staged behind confirmed player support; see [`docs/accessibility_support.md`](docs/accessibility_support.md))
- **Accessibility content production** (the SDK accepts and wires catalog entries supplied by generators, but does not produce SSML, sign language video, braille files, or simplified text itself — generators own that pipeline)
- **Result reporting** (QTI results format — delivery-side concern, not authoring)
- **QTI 2.x export** (the SDK targets QTI 3.0 only — see [`QTI_VERSION_EXPORT_GUIDE.md`](QTI_VERSION_EXPORT_GUIDE.md) for a full analysis of what legacy version export would require and the implementation path)

These can be layered on later without changing the item-level models.

---

## Summary

| Decision              | Choice                                                     | Rationale                                                       |
| --------------------- | ---------------------------------------------------------- | --------------------------------------------------------------- |
| Item container        | Unified `QuestionItem` with `interactions[]` list          | Shared fields declared once. Composite items are simply `len(interactions) > 1`. |
| Interaction types     | 6 typed configs inside `QuestionItem`                      | Covers K-12 + SAT + AP. PCIConfig for the rest.                 |
| Model vs. template    | Both — model carries content, `behavior` selects template  | Decouples what you say from how it's scored                     |
| Behavior variation    | Parameterized types + pre-built templates                  | New behaviors don't change models. 4 types, expandable.         |
| Scoring               | Priority chain: `score_expression` > `score_map` > binary | Weighted, partial-credit, and DSL scoring without raw XML       |
| Associations          | Presence-driven: stimulus, companion materials, a11y catalog, template, scoring dimensions, feedback | Field populated → SDK generates refs, files, declarations, and dependencies |
| Content processing    | Reusable library (LaTeX→MathML→`@@var@@`→Markdown→HTML→XML) | Same pipeline for all interaction types                         |
| Escape hatches        | PCIConfig (PCI) + Score Expression DSL + context declarations + builder hooks | Handles the 10% without blocking the 90%               |
| Validation            | 4-tier: input → XSD → semantic → render                    | Catches errors at each level                                    |
