Metadata-Version: 2.4
Name: bsy-clippy
Version: 0.1.0
Summary: Terminal client for interacting with an Ollama server
Author: Sebas
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests<3,>=2.28
Dynamic: license-file

# bsy-clippy

`bsy-clippy` is a lightweight Python client for interacting with an [Ollama](https://ollama.ai) server.  

It supports both **batch (stdin) mode** for one-shot prompts and **interactive mode** for chatting directly in the terminal.  
You can also load **system prompts** from a file to guide the LLM’s behavior.

---

## Features

- Connects to Ollama API over HTTP (`/api/generate`).
- Defaults to:
  - IP: `172.20.0.100`
  - Port: `11434`
  - Model: `qwen3:1.7b`
  - Mode: `batch` (wait for full output)
  - Bundled system prompt file that can be overridden with `--system-file`
- Configurable parameters:
  - `-i` / `--ip` → Ollama server IP
  - `-p` / `--port` → Ollama server port
  - `-M` / `--model` → model name
  - `-m` / `--mode` → output mode (`stream` or `batch`)
  - `-t` / `--temperature` → sampling temperature (default: `0.7`)
  - `-s` / `--system-file` → path to a text file with system instructions
  - `-u` / `--user-prompt` → extra user instructions prepended before the data payload
  - `-r` / `--memory-lines` → number of conversation lines to remember in interactive mode
  - `-c` / `--chat-after-stdin` → process stdin once, then drop into interactive chat
- Two modes of operation:
  - **Batch mode** (default) → waits until the answer is complete, then prints only the final result.
  - **Stream mode** → shows the response in real time; tokens appear as they are generated.
- Colored terminal output:
  - **Yellow** = streaming tokens (the model’s “thinking” in progress).
  - **Default terminal color** = final assembled answer.
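
Under the hood, these defaults translate into an HTTP request against `/api/generate`. The sketch below is illustrative (the `build_request` helper is hypothetical, not the package's actual code); the field names `model`, `prompt`, `stream`, and `options.temperature` follow the public Ollama API, and the default values mirror the list above:

```python
# Illustrative sketch of the request bsy-clippy presumably builds.
# build_request is a hypothetical helper, not the package's real API.
import json

def build_request(prompt,
                  ip="172.20.0.100",
                  port=11434,
                  model="qwen3:1.7b",
                  mode="batch",
                  temperature=0.7):
    url = f"http://{ip}:{port}/api/generate"
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": mode == "stream",   # batch mode waits for the full answer
        "options": {"temperature": temperature},
    }
    return url, json.dumps(payload)

url, body = build_request("Hello!")
print(url)  # http://172.20.0.100:11434/api/generate
```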

---

## Installation

### pipx (recommended)

```bash
pipx install .
```

After updating the source, reinstall with `pipx reinstall bsy-clippy`.

### pip / virtual environments

```bash
pip install .
```

---

## Usage

### System prompt file

By default, `bsy-clippy` loads a bundled prompt (`Be very brief. Be very short.`).  
You can change this with `--system-file` or disable it via `--no-default-system`.

Example **bsy-clippy.txt**:

```
You are a helpful assistant specialized in cybersecurity.
Always explain your reasoning clearly, and avoid unnecessary markdown formatting.
```

These lines will be sent to the LLM before every user prompt.

### User prompt parameter

Use `--user-prompt "Classify the following log:"` when piping data so the model receives:

```
system prompt (if any)

user prompt text

data from stdin or interactive input
```
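
The layout above can be sketched as a simple join, skipping empty sections (`assemble_prompt` is a hypothetical helper used to illustrate the ordering, not the package's actual function; the log line is made-up sample data):

```python
def assemble_prompt(system_prompt, user_prompt, data):
    """Join the three sections in the order shown above,
    dropping any that are empty."""
    parts = [p.strip() for p in (system_prompt, user_prompt, data)
             if p and p.strip()]
    return "\n\n".join(parts)

print(assemble_prompt("Be very brief. Be very short.",
                      "Classify the following log:",
                      "sshd[42]: Failed password for root"))
```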

### Interactive memory

Set `--memory-lines 6` (or `-r 6`) to keep the last six conversation lines (user + assistant) while chatting.  
Only the final assistant reply (not the thinking traces) is stored and sent back on the next turn.
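
A sliding window like this can be modeled with `collections.deque` (an illustrative sketch of the behavior, not the package's implementation):

```python
from collections import deque

MEMORY_LINES = 6  # the value passed to -r / --memory-lines

# maxlen makes the oldest lines fall off automatically
memory = deque(maxlen=MEMORY_LINES)

def remember(role, text):
    memory.append(f"{role}: {text}")

def context():
    # what would be prepended to the next prompt
    return "\n".join(memory)

for i in range(5):
    remember("You", f"question {i}")
    remember("LLM", f"answer {i}")

print(context())  # only the last six lines survive
```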

### Chat after stdin

Use `-c` / `--chat-after-stdin` to process piped data first and then remain in interactive mode with the response (and any configured memory) available:

```bash
cat sample.txt | bsy-clippy -u "Summarize this report" -r 6 -c
```

After the initial answer prints, you can continue the conversation while the tool remembers the piped data and the model’s reply.

---

### Interactive mode (default = batch)

Run without piping input:

```bash
bsy-clippy
```

Example session in **batch mode**:

```
You: Hello!
Hello! How can I assist you today? 😊
```

To force **streaming mode**:

```bash
bsy-clippy --mode stream
```

Streaming session looks like:

```
You: Hello!
LLM (thinking): <think>
Reasoning step by step...
</think>
Hello! How can I assist you today? 😊
```
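
In stream mode, Ollama's `/api/generate` endpoint emits newline-delimited JSON chunks, each carrying a `response` fragment and a `done` flag. The sketch below assembles them from canned chunks instead of a live connection, as a rough model of what the client does with the yellow streaming output:

```python
import json

# Simulated NDJSON lines, shaped like Ollama's stream-mode output.
chunks = [
    '{"response": "Hello", "done": false}',
    '{"response": "! How can I help?", "done": false}',
    '{"response": "", "done": true}',
]

answer = []
for line in chunks:
    obj = json.loads(line)
    answer.append(obj["response"])   # printed incrementally while streaming
    if obj["done"]:
        break                        # final chunk signals completion

print("".join(answer))
```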

---

### Batch mode (stdin)

Pipe input directly:

```bash
echo "Tell me a joke" | bsy-clippy
```

Output:

```
Why don’t scientists trust atoms? Because they make up everything!
```

---

### Forcing modes

```bash
bsy-clippy --mode batch
bsy-clippy --mode stream
```

---

### Adjusting temperature

```bash
bsy-clippy --temperature 0.2
bsy-clippy --temperature 1.2
```

---

### Custom server and model

```bash
bsy-clippy --ip 127.0.0.1 --port 11434 --model llama2
```

---

## Requirements

See [`requirements.txt`](requirements.txt).
