jeevesagent.model.echo

A trivial model that echoes the last user message back, in chunks.

Useful for exercising the agent loop end-to-end without API keys or network access. It emits one text chunk per word, followed by a single finish chunk carrying a synthetic usage record.

Classes

EchoModel

Echo-style model for tests and demos.

Module Contents

class jeevesagent.model.echo.EchoModel(*, prefix: str = 'Echo: ', chunk_delay_s: float = 0.0, cost_per_token: float = 0.0)[source]

Echo-style model for tests and demos.

async complete(messages: list[jeevesagent.core.types.Message], *, tools: list[jeevesagent.core.types.ToolDef] | None = None, temperature: float = 1.0, max_tokens: int | None = None) tuple[str, list[jeevesagent.core.types.ToolCall], jeevesagent.core.types.Usage, str][source]

Single-shot echo. Returns the echoed user prompt as one string with synthetic usage. There is no per-token chunking; this method serves the non-streaming hot path (agent.run()).
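The non-streaming contract above (one string plus the `(text, tool_calls, usage, finish_reason)` return shape) can be sketched as follows. The `Message` and `Usage` classes here are illustrative stand-ins for `jeevesagent.core.types`, and the body is an assumption about how an echo model might behave, not the library's actual implementation.

```python
import asyncio
from dataclasses import dataclass

# Hypothetical stand-ins for jeevesagent.core.types.Message and Usage.
@dataclass
class Message:
    role: str
    content: str

@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0

async def complete(messages, *, prefix="Echo: "):
    """Echo the last user message as one string with synthetic usage.

    Mirrors the documented return shape:
    (text, tool_calls, usage, finish_reason).
    """
    last_user = next(m for m in reversed(messages) if m.role == "user")
    text = prefix + last_user.content
    usage = Usage(input_tokens=len(last_user.content.split()),
                  output_tokens=len(text.split()))
    return text, [], usage, "stop"

msgs = [Message("system", "be brief"), Message("user", "ping")]
text, calls, usage, reason = asyncio.run(complete(msgs))
print(text)  # Echo: ping
```

Because the reply is deterministic, a test can assert on the exact tuple, which is what makes this kind of model useful in an agent test suite.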

async stream(messages: list[jeevesagent.core.types.Message], *, tools: list[jeevesagent.core.types.ToolDef] | None = None, temperature: float = 1.0, max_tokens: int | None = None) collections.abc.AsyncIterator[jeevesagent.core.types.ModelChunk][source]

Streaming echo. Yields one text chunk per word of the echoed reply, then a single finish chunk carrying a synthetic usage record.

name: str = 'echo'
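The streaming behavior the module describes (one text chunk per word, then a single finish chunk with synthetic usage) can be sketched as below. `Chunk` is an illustrative stand-in for `jeevesagent.core.types.ModelChunk`; its fields are assumptions, not the real type.

```python
import asyncio
from dataclasses import dataclass

# Illustrative stand-in for jeevesagent.core.types.ModelChunk
# (an assumption; the real chunk type and fields may differ).
@dataclass
class Chunk:
    kind: str        # "text" or "finish"
    text: str = ""
    tokens: int = 0  # synthetic usage, set on the finish chunk only

async def echo_chunks(prompt: str, prefix: str = "Echo: "):
    """Yield one text chunk per word, then a single finish chunk."""
    words = (prefix + prompt).split()
    for i, word in enumerate(words):
        sep = " " if i < len(words) - 1 else ""
        yield Chunk("text", word + sep)
    yield Chunk("finish", tokens=len(words))

async def collect(prompt: str) -> list[Chunk]:
    return [c async for c in echo_chunks(prompt)]

chunks = asyncio.run(collect("hello world"))
print("".join(c.text for c in chunks))     # Echo: hello world
print(chunks[-1].kind, chunks[-1].tokens)  # finish 3
```

Concatenating the text chunks reconstructs exactly what `complete` would return, which lets a test compare the streaming and non-streaming paths against each other.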