Metadata-Version: 2.4
Name: mwin
Version: 0.1.2
Summary: Track OpenAI, Claude, Gemini, and OpenAI-compatible models, and get suggestions to improve your agent system.
Author-email: yanghui <dasss90ovo@gmail.com>
Project-URL: Homepage, https://github.com/yanghui1-arch/mwin.git
Requires-Python: >=3.12
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: click>=8.3.0
Requires-Dist: more-itertools>=10.8.0
Requires-Dist: openai>=1.108.2
Requires-Dist: requests>=2.32.5
Requires-Dist: uuid6>=2025.0.1
Dynamic: license-file

# AT
AT: Track, log, and evaluate AI models. Supports OpenAI, Claude, the Google API, and custom PyTorch models.<br/>
Our goal is to make LLM applications more valuable and to improve LLM capabilities effortlessly.

# Quickstart
You can install AT with pip. (The package is not published on PyPI yet.)
```bash
pip install aitrace
```
Or install from source:
```bash
git clone https://github.com/yanghui1-arch/AITrace.git
cd AITrace
pip install -e .
```
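Once installed, the `aitrace` command used below should be on your PATH. A quick sanity check (assuming the console script is named `aitrace`, as in the configure step below, and exposes click's default help option):
```bash
aitrace --help
```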
Next, configure AT through the CLI.
```bash
aitrace configure
```
The command asks for an AITrace API key, which you can get after logging in at `http://localhost:5173`.
Finally, use `@track` to track your LLM inputs and outputs:
```python
from aitrace import track, LLMProvider  # LLMProvider is needed for the track_llm argument below
from openai import OpenAI

openai_apikey = 'YOUR API KEY'  # key for the OpenAI-compatible endpoint (DeepSeek in this demo)


@track(
    project_name="aitrace_demo",
    tags=['test', 'demo'],
    track_llm=LLMProvider.OPENAI,    
)
def llm_classification(film_comment: str):
    prompt = "Please classify the film comment as happy, sad, or other. Just output the result. Don't output anything else."
    cli = OpenAI(base_url='https://api.deepseek.com', api_key=openai_apikey)
    # Keep the model's answer so the decorator can log it as the return value.
    classification = cli.chat.completions.create(
        messages=[{"role": "user", "content": f"{prompt}\nfilm_comment: {film_comment}"}],
        model="deepseek-chat"
    ).choices[0].message.content
    llm_counts(film_comment=film_comment)
    return classification

@track(
    project_name="aitrace_demo",
    tags=['test', 'demo', 'second_demo'],
    track_llm=LLMProvider.OPENAI,
)
def llm_counts(film_comment: str):
    prompt = "Count the film comment words. just output word number. Don't output anything others."
    cli = OpenAI(base_url='https://api.deepseek.com', api_key=openai_apikey)
    return cli.chat.completions.create(
        messages=[{"role": "user", "content": f"{prompt}\nfilm_comment: {film_comment}"}],
        model="deepseek-chat"
    ).choices[0].message.content

llm_classification("Wow! It sucks.")
```

AT will then log your LLM trace. Trace visualization is not supported yet, but it is under active development. Contributions are welcome.
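The same `@track` decorator should also work on plain helper functions so that their inputs and return values are logged alongside the LLM calls. A minimal sketch (assuming `tags` and `track_llm` may be omitted, which the example above does not confirm):
```python
from aitrace import track

@track(project_name="aitrace_demo")  # assumption: tags and track_llm are optional
def truncate_comment(film_comment: str) -> str:
    # Plain Python function; the decorator is assumed to log the input and return value.
    return film_comment[:100]

truncate_comment("Wow! It sucks.")
```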

# Development
The AT project uses uv as its package manager. If you are new to uv, see the [uv official docs](https://docs.astral.sh/uv/guides/projects/#creating-a-new-project).
```bash
uv sync

# Windows
.venv\Scripts\activate
# Linux / macOS
source .venv/bin/activate
```
You can see more detailed debug information by passing `--log-level=DEBUG`, or by setting the `AT_LOG_LEVEL` environment variable: `set AT_LOG_LEVEL=DEBUG` on Windows, or `export AT_LOG_LEVEL=DEBUG` on Linux and macOS.
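For example, to run the Quickstart script with debug logging enabled (`demo.py` is a hypothetical filename for the tracking example shown above):
```bash
# Linux / macOS
export AT_LOG_LEVEL=DEBUG
python demo.py

# Windows (cmd)
set AT_LOG_LEVEL=DEBUG
python demo.py
```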
