Metadata-Version: 2.4
Name: interview_kit
Version: 0.3.0
Summary: Adaptive voice-interview engine.
Project-URL: Homepage, https://github.com/szijderveld/interview_kit
Project-URL: Documentation, https://github.com/szijderveld/interview_kit/blob/main/docs/integration.md
Project-URL: Issues, https://github.com/szijderveld/interview_kit/issues
Project-URL: Changelog, https://github.com/szijderveld/interview_kit/blob/main/CHANGELOG.md
Project-URL: Source, https://github.com/szijderveld/interview_kit
Author-email: szijderveld <szijderveld@gmail.com>
License-Expression: Apache-2.0
License-File: LICENSE
Keywords: anthropic,interview,livekit,llm,voice-agent
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Typing :: Typed
Requires-Python: >=3.11
Requires-Dist: aiosqlite>=0.22.1
Requires-Dist: anthropic<1.0,>=0.40
Requires-Dist: httpx>=0.28.1
Requires-Dist: pydantic<3,>=2
Requires-Dist: pyyaml<7,>=6
Provides-Extra: voice
Requires-Dist: livekit-agents[cartesia,deepgram,silero]<2,>=1.0; extra == 'voice'
Requires-Dist: livekit-api<2,>=1.0; extra == 'voice'
Description-Content-Type: text/markdown

# interview_kit

Adaptive voice-interview engine. The operator defines a Conversation — a
persona for the interviewer, a purpose, and a list of Goals (each with a
"what good looks like" standard). The engine runs the conversation as a
voice agent (LiveKit + Deepgram STT + Anthropic LLM + Cartesia TTS),
adapts mid-call (clarifying, drilling down, skipping redundant goals), and
produces a structured Extract mapping every claim back to who said it
and when. This package is the engine — storage, web layer, link domain,
and UI are the consumer's responsibility.

## Install

Requires Python 3.11+.

```sh
pip install interview_kit
```

The `voice` extra pulls in LiveKit and the audio plugins:

```sh
pip install "interview_kit[voice]"
```

## Smoke test (no API key)

```sh
interview_kit demo
```

This runs the full agent loop against a synthetic respondent and a
deterministic fake LLM, then prints the transcript and the structured
Extract.

## Quickstart

Save the following as `interview.yaml`:

```yaml
persona:
  system_prompt: You are running a discovery interview about morning routines.
  style: neutral
  voice_id: demo-voice
purpose: Understand the interviewee's morning routine.
background:
  interviewee_role: staff engineer
  interviewee_expertise: end-to-end pipeline ownership
goals:
  - id: routine
    intent: Map the morning routine
    standard: At least two rituals named with timing.
  - id: exceptions
    intent: Find common exception paths
    standard: At least one exception flow named.
```
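For orientation, each goal entry above maps onto a simple record: an id, an intent, and a "what good looks like" standard. A sketch with stdlib dataclasses (the class below is an illustrative stand-in; the package itself models goals with Pydantic, and this is not its real API):

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """Illustrative stand-in for the package's goal model (not the real API)."""
    id: str        # stable identifier the Extract refers back to
    intent: str    # what the interviewer is trying to learn
    standard: str  # "what good looks like": when the goal counts as met

# The two goals from interview.yaml, expressed as records:
goals = [
    Goal("routine", "Map the morning routine",
         "At least two rituals named with timing."),
    Goal("exceptions", "Find common exception paths",
         "At least one exception flow named."),
]
```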

Then, with `ANTHROPIC_API_KEY` set:

```python
import asyncio
from interview_kit import Conversation, Engine
from interview_kit.testing.simulators import RamblyKnowledgeableSimulator

async def main() -> None:
    engine = Engine.with_defaults()
    template = Conversation.from_yaml("interview.yaml")
    conv = await engine.create_conversation(**template.model_dump(exclude={"id"}))
    extract = await engine.simulate_session(conv.id, RamblyKnowledgeableSimulator())
    print(extract.model_dump_json(indent=2))

asyncio.run(main())
```

## Production / voice integration

See [docs/integration.md](docs/integration.md) for the FastAPI + LiveKit
AgentServer wiring, `ConversationStore` and `EventSink` implementation
guidance, and the operational gaps the consumer must close.
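As a rough illustration of the kind of adapter a consumer supplies, an in-memory event sink might look like the sketch below. The `emit` method name and signature are assumptions made for this example, not the package's actual contract; see docs/integration.md for the real `EventSink` interface.

```python
import asyncio
import time
from dataclasses import dataclass, field

@dataclass
class InMemoryEventSink:
    """Illustrative sink that buffers engine events for inspection in tests.

    The emit() signature here is a guess at the shape of such a hook;
    consult docs/integration.md for the real EventSink contract.
    """
    events: list[tuple[float, str, str, dict]] = field(default_factory=list)

    async def emit(self, conversation_id: str, kind: str, payload: dict) -> None:
        # A production implementation would forward to a queue, a
        # websocket, or the consumer's ConversationStore instead.
        self.events.append((time.time(), conversation_id, kind, payload))

async def _demo() -> None:
    sink = InMemoryEventSink()
    await sink.emit("conv-1", "turn.completed", {"speaker": "interviewer"})
    assert sink.events[0][2] == "turn.completed"

asyncio.run(_demo())
```

The same shape works for a `ConversationStore` stub: hold state in memory for tests, then swap in a durable backend for production.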

## Development

See [CONTRIBUTING.md](CONTRIBUTING.md) for the source checkout, test, and
local-voice workflows.
