Metadata-Version: 2.4
Name: latteries
Version: 0.1.0
Summary: James' API LLM evaluations workflow library - A collection of tools for LLM API calls, caching, and evaluation workflows
Project-URL: Homepage, https://github.com/thejaminator/latteries
Project-URL: Repository, https://github.com/thejaminator/latteries
Project-URL: Issues, https://github.com/thejaminator/latteries/issues
Author-email: James <your-email@example.com>
License: MIT
License-File: LICENSE
Keywords: anthropic,api,caching,evaluation,llm,openai
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.8
Requires-Dist: anthropic>=0.25.0
Requires-Dist: anyio
Requires-Dist: httpx
Requires-Dist: openai>=1.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: slist
Requires-Dist: streamlit
Requires-Dist: tqdm
Provides-Extra: dev
Requires-Dist: black; extra == 'dev'
Requires-Dist: pyright; extra == 'dev'
Requires-Dist: pytest; extra == 'dev'
Requires-Dist: pytest-asyncio; extra == 'dev'
Requires-Dist: ruff; extra == 'dev'
Description-Content-Type: text/markdown

# James' API LLM evaluations workflow library
Library of functions that I find useful in my day-to-day work.

## Installation as starter code to run evals
Clone the repo if you want to use the example scripts. This can be useful for, e.g., Cursor and coding agents.

**Clone the repo and install dependencies:**
  ```bash
  git clone https://github.com/thejaminator/latteries.git
  cd latteries
  uv venv venv
  source venv/bin/activate
  uv pip install -r requirements.txt
  ```


## Installation as a package
Alternatively, you can install the package and use it as a library without the example scripts.
```bash
pip install latteries
```


## My workflow
- I want to call LLM APIs like normal python.
- This is a library, not a framework. Frameworks make you declare magical things in configs and special functions; this is just a collection of tools I find useful.
- Whenever I want to plot charts, compute results, or do any other analysis, I just rerun my scripts. Results are cached by the content of the prompts and the inference config, which helps me get results out quickly.

### Core functionality - caching
```python
from latteries import load_openai_caller, ChatHistory, InferenceConfig


async def example_main():
    # Cache to the folder "cache"
    caller = load_openai_caller("cache")
    prompt = ChatHistory.from_user("How many letter 'r's are in the word 'strawberry'?")
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    # This cache is based on the hash of the prompt and the InferenceConfig.
    response = await caller.call(prompt, config)
    print(response.first_response)


if __name__ == "__main__":
    import asyncio

    asyncio.run(example_main())
```

### Core functionality - call LLMs in parallel
- The caching is safe to use in parallel. I use my library [slist](https://github.com/thejaminator/slist) for useful typed list utilities, such as running calls in parallel.
- [See full example](example_scripts/example_parallel.py).
```python
from slist import Slist

from latteries import ChatHistory, InferenceConfig, load_openai_caller


async def example_parallel_tqdm():
    caller = load_openai_caller("cache")
    fifty_prompts = [f"What is {i} * {i + 1}?" for i in range(50)]
    prompts = [ChatHistory.from_user(prompt) for prompt in fifty_prompts]
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    # Slist is a library with a bunch of typed list functions.
    # par_map_async runs async functions in parallel.
    results = await Slist(prompts).par_map_async(
        lambda prompt: caller.call(prompt, config),
        max_par=10,  # Parallelism limit.
        tqdm=True,  # Shows a tqdm progress bar.
    )
    result_strings = [result.first_response for result in results]
    print(result_strings)
```

### Core functionality - support of different model providers
- You often need to call models on OpenRouter or use a different API client such as Anthropic's.
- I use MultiClientCaller, which simply routes by matching on the model name.
- [See full example](example_scripts/example_llm_providers.py).
```python
import os
from pathlib import Path

from openai import AsyncOpenAI

# These names are assumed to be exported from the package root, like the examples above.
from latteries import AnthropicCaller, CacheByModel, CallerConfig, MultiClientCaller, OpenAICaller


def load_multi_client(cache_path: str) -> MultiClientCaller:
    """Matches based on the model name."""
    openai_api_key = os.getenv("OPENAI_API_KEY")
    openrouter_api_key = os.getenv("OPENROUTER_API_KEY")
    anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
    shared_cache = CacheByModel(Path(cache_path))
    openai_caller = OpenAICaller(api_key=openai_api_key, cache_path=shared_cache)
    openrouter_caller = OpenAICaller(
        openai_client=AsyncOpenAI(api_key=openrouter_api_key, base_url="https://openrouter.ai/api/v1"),
        cache_path=shared_cache,
    )
    anthropic_caller = AnthropicCaller(api_key=anthropic_api_key, cache_path=shared_cache)

    # Define rules for routing models.
    clients = [
        CallerConfig(name="gpt", caller=openai_caller),
        CallerConfig(name="gemini-2.5-flash", caller=openrouter_caller),
        CallerConfig(name="claude", caller=anthropic_caller),
    ]
    multi_client = MultiClientCaller(clients)
    # You can then call multi_client.call(prompt, config); the caller is picked by matching the model name.
    return multi_client
```
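A minimal usage sketch under the same assumptions (routing is driven purely by the model name in the `InferenceConfig`, and the response exposes `first_response` as in the earlier examples):
```python
from latteries import ChatHistory, InferenceConfig


async def example_multi_client():
    caller = load_multi_client("cache")
    prompt = ChatHistory.from_user("Name one planet in our solar system.")
    # "claude" appears in the model name, so this call is routed to the Anthropic caller;
    # a "gpt" model would be routed to the OpenAI caller instead.
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="claude-3-5-sonnet-20240620")
    response = await caller.call(prompt, config)
    print(response.first_response)
```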


### Viewing model outputs
We have a simple tool to view conversations stored in a JSONL format of "user" and "assistant" messages.
[My workflow is to simply dump the jsonl conversations to a file and then view them.](example_scripts/example_parallel_and_log.py)
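For illustration, a hypothetical sketch of dumping conversations with the standard library; the exact schema the viewer expects is shown in the linked example.
```python
import json

# Hypothetical example data: one conversation per JSONL line, as "user"/"assistant" messages.
conversations = [
    [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "4"},
    ],
]

with open("conversations.jsonl", "w") as f:
    for messages in conversations:
        f.write(json.dumps({"messages": messages}) + "\n")
```
Then point the viewer at the file: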
```bash
streamlit run latteries/viewer.py <path_to_jsonl_file>
```
<img src="docs/viewer.png" width="70%" alt="Viewer Screenshot">

## Example scripts
These scripts evaluate multiple models and create charts with error bars.
- Single turn evaluation, MCQ: [MMLU](example_scripts/mmlu/evaluate_mmlu.py), [TruthfulQA](example_scripts/truthfulqa/evaluate_truthfulqa.py)
- Single turn with a judge model for misalignment. TODO.
- Multi-turn evaluation with a judge model to parse the answer: [Are you sure? sycophancy](example_scripts/mmlu/mmlu_are_you_sure.py)

## FAQ

What if I want to repeat the same prompt without caching?
- [Pass try_number to the caller.call function](example_scripts/example_parallel.py).
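A minimal sketch, assuming `try_number` is a keyword argument of `caller.call` as described above (see the linked example for the exact usage); each distinct value is treated as a separate cache entry, so you get fresh samples:
```python
from latteries import ChatHistory, InferenceConfig, load_openai_caller


async def repeat_without_cache():
    caller = load_openai_caller("cache")
    prompt = ChatHistory.from_user("Tell me a one-line joke.")
    config = InferenceConfig(temperature=1.0, max_tokens=50, model="gpt-4o")
    # Different try_number values map to different cache keys, so each call is a fresh sample.
    responses = [await caller.call(prompt, config, try_number=i) for i in range(3)]
    print([r.first_response for r in responses])
```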

Do you have support for JSON schema calling?
- Yes. TODO show example.

Do you have support for log probs?
- Yes. TODO show example.

What is the difference between this and xxxx?


## Publishing to PyPI

This package is set up for easy publishing to PyPI using `uv`. Here are the steps:

### Prerequisites

1. **Install uv** (if you haven't already):
   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. **Set up PyPI credentials**: 
   - Create an account on [PyPI](https://pypi.org/)
   - Generate an API token at https://pypi.org/manage/account/token/
   - Store it safely (you'll need it for publishing)

### Publishing Steps

1. **Test on TestPyPI first** (recommended):
   ```bash
   ./publish-test.sh
   ```
   This will build and upload to [TestPyPI](https://test.pypi.org/), where you can test the package safely.

2. **Publish to PyPI**:
   ```bash
   ./publish.sh
   ```
   This will build and upload to the real PyPI.

### Manual Publishing

If you prefer to do it manually:

```bash
# Clean previous builds
rm -rf dist/ build/ *.egg-info/

# Install build dependencies
uv pip install --upgrade build twine

# Build the package
uv run python -m build

# Check the package
uv run python -m twine check dist/*

# Upload to TestPyPI (optional)
uv run python -m twine upload --repository testpypi dist/*

# Upload to PyPI
uv run python -m twine upload dist/*
```

### Version Management

Update the version in two places before publishing:
- `pyproject.toml` in the `[project]` section
- `latteries/__init__.py` in the `__version__` variable

### Package Structure

The package includes:
- Core API calling functionality
- Caching system
- Multi-provider support (OpenAI, Anthropic, etc.)
- Response viewer CLI tool (`latteries-viewer`)
- Example scripts and evaluation tools

## General philosophy on evals engineering
To elaborate on in the future. These aren't specific to this repo, but are principles that I find helpful for those starting out.
- Don't mutate Python objects; it causes bugs. Please copy / deepcopy things like configs and prompts (see the sketch after this list).
- Python is a scripting language. Use it to write your scripts! Avoid writing complicated bash files when you can just write Python.
- I hate YAML. More specifically, I hate YAML that becomes a programming language. Sorry. I just want to press "Go to references" in VSCode / Cursor and jump to where something gets referenced. YAML does not do that.
- Keep objects as pydantic BaseModels / dataclasses. Avoid passing data around as pandas dataframes. No one (including your coding agent) knows what is in the dataframe. It's hard to read and can be lossy (losing types). If you want to store intermediate data, use JSONL.
- Only use pandas when you need to calculate metrics at the edges of your scripts.
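A minimal sketch of the last few points, using hypothetical `EvalConfig` and `EvalRow` models (not part of this library):
```python
from pydantic import BaseModel


class EvalConfig(BaseModel):
    model: str
    temperature: float


class EvalRow(BaseModel):
    question: str
    answer: str
    correct: bool


base_config = EvalConfig(model="gpt-4o", temperature=0.0)
# Don't mutate base_config in place; make a copy with the field you want changed.
hot_config = base_config.model_copy(update={"temperature": 1.0})
print(base_config.temperature, hot_config.temperature)  # 0.0 1.0

# Store intermediate data as one typed model per JSONL line.
rows = [EvalRow(question="What is 2 + 2?", answer="4", correct=True)]
with open("results.jsonl", "w") as f:
    for row in rows:
        f.write(row.model_dump_json() + "\n")
# Reading it back keeps the types: EvalRow.model_validate_json(line).
```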