Metadata-Version: 2.4
Name: ollama-remote-chat-cli
Version: 0.1.0
Summary: Remote CLI client for Ollama servers - Network-ready chat interface with history tracking and inference control
Home-page: https://github.com/Avaxerrr/ollama-remote-chat-cli
Author: Avaxerrr
Author-email: Avaxerrr@users.noreply.github.com
License: MIT
Project-URL: Bug Tracker, https://github.com/Avaxerrr/ollama-remote-chat-cli/issues
Project-URL: Source Code, https://github.com/Avaxerrr/ollama-remote-chat-cli
Keywords: ollama,chat,cli,remote,llm,ai,terminal,client
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: End Users/Desktop
Classifier: Topic :: Communications :: Chat
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Operating System :: OS Independent
Classifier: Environment :: Console
Requires-Python: >=3.7
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.31.0
Requires-Dist: python-dotenv>=1.0.0
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: license-file
Dynamic: project-url
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# Ollama Chat CLI

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.7+](https://img.shields.io/badge/python-3.7+-blue.svg)](https://www.python.org/downloads/)

A feature-rich command-line interface for Ollama with chat history, inference settings, and beautiful UI.

## Features

- Beautiful Claude Code-style interface
- Chat history with session management
- Configurable inference settings (temperature, top_p, context window, etc.)
- Real-time context tracking and token usage
- Easy model switching and management
- Search through chat history
- Support for local and remote Ollama servers
- Secure configuration with .env file

## Requirements

- Python 3.7+
- Ollama installed and running
- pip (Python package manager)

## Quick Start

### 1. Clone the Repository

```bash
git clone https://github.com/Avaxerrr/ollama-remote-chat-cli.git
cd ollama-remote-chat-cli
```

### 2. Install Dependencies

```bash
pip install -r requirements.txt
```

Or install the project as an editable package:
```bash
pip install -e .
```

### 3. Configure Your Connection

Copy the example environment file:
```bash
cp .env.example .env
```

Edit `.env` with your preferred connection method (see below).
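
The CLI reads these values with python-dotenv. As a rough, stdlib-only sketch of the same idea (`parse_env` and `get_setting` are illustrative names, not the project's actual functions), real environment variables take precedence over `.env` defaults, matching dotenv's default behavior:

```python
import os

def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"').strip("'")
    return values

def get_setting(name: str, env_file: dict, default: str) -> str:
    """A real environment variable wins over the .env file, which wins over the default."""
    return os.environ.get(name) or env_file.get(name, default)

env = parse_env("OLLAMA_HOST=http://localhost:11434\nOLLAMA_MODEL=llama2\n")
host = get_setting("OLLAMA_HOST", env, "http://localhost:11434")
```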

### 4. Run the CLI

```bash
python -m ollama_chat
# or run the entry module directly:
python ollama_chat/cli.py
```

Or if installed as a package:
```bash
ollama-chat
# or the short alias:
oc
```

## Configuration Methods

### Method 1: Local Connection (Default)

If Ollama is running on the same machine:

```env
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama2
```

**Setup:**
1. Install Ollama from [ollama.ai](https://ollama.ai)
2. Run: `ollama serve`
3. Pull a model: `ollama pull llama2`
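
Once the server is up, you can verify it is reachable from Python using only the standard library — Ollama's root endpoint answers a plain GET with the text "Ollama is running" (the function names below are illustrative, not part of this project):

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_banner(text: str) -> bool:
    """True if the response body is Ollama's health banner."""
    return text.strip() == "Ollama is running"

def check_ollama(host: str = "http://localhost:11434", timeout: float = 3.0) -> bool:
    """Return True if an Ollama server answers at `host`."""
    try:
        with urlopen(host, timeout=timeout) as resp:
            return is_banner(resp.read().decode("utf-8", "replace"))
    except (URLError, OSError):
        return False
```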

---

### Method 2: Hostname Connection (.local / mDNS)

For connecting to another computer on your local network using its hostname:

```env
OLLAMA_HOST=http://my-computer.local:11434
OLLAMA_MODEL=gemma3:12b
```

**Setup:**
1. Find your computer's hostname:
   - **Windows**: `hostname` in CMD
   - **Mac**: System Preferences → Sharing → Computer Name
   - **Linux**: `hostname` in terminal
   
2. Ensure mDNS/Bonjour is working:
   - **Windows**: Install [Bonjour Print Services](https://support.apple.com/kb/DL999)
   - **Mac/Linux**: Built-in
   
3. Test connection: `ping my-computer.local`

4. Configure firewall to allow port 11434

**Common hostnames:**
- `desktop.local`
- `macbook.local`
- `xav-pcx.local`
- `server.local`
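
To check resolution programmatically, `socket.getaddrinfo` goes through the same system resolver the HTTP client uses, so it surfaces mDNS problems before any request is made (`resolves` is an illustrative helper, not part of this project):

```python
import socket

def resolves(hostname: str, port: int = 11434) -> bool:
    """True if the hostname resolves to at least one address."""
    try:
        return len(socket.getaddrinfo(hostname, port)) > 0
    except socket.gaierror:
        return False

# resolves("my-computer.local") should return True once mDNS is working
```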

---

### Method 3: Static IP Address

If your computer has a fixed IP on your network:

```env
OLLAMA_HOST=http://192.168.1.100:11434
OLLAMA_MODEL=mistral
```

**Setup:**
1. Set a static IP on your Ollama server machine
2. Find your IP address:
   - **Windows**: `ipconfig` → IPv4 Address
   - **Mac**: `ifconfig en0 | grep inet`
   - **Linux**: `ip addr show`
   
3. Configure firewall to allow port 11434

4. Test: Open browser to `http://192.168.1.100:11434` (should show "Ollama is running")

---

### Method 4: Dynamic IP (Current Session)

For temporary connections when IP changes:

```env
OLLAMA_HOST=http://192.168.1.XXX:11434
OLLAMA_MODEL=llama2
```

**Setup:**
1. Find current IP (see Method 3)
2. Update `.env` each time IP changes
3. Consider using a hostname (Method 2) instead for a permanent setup

---

### Method 5: Remote Server

For connecting to Ollama on a remote server:

```env
OLLAMA_HOST=https://ollama.myserver.com
OLLAMA_MODEL=llama3.3
```

**Setup:**
1. Set up Ollama on remote server
2. Configure reverse proxy (nginx/caddy) with SSL
3. Open firewall port (11434 or custom)
4. Test connection: `curl https://ollama.myserver.com/api/tags`
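
The `/api/tags` endpoint returns JSON shaped like `{"models": [{"name": ...}, ...]}`. A small, illustrative helper (not part of this project) for listing model names on a possibly remote server:

```python
import json
from urllib.request import urlopen

def model_names(payload: dict) -> list:
    """Extract model names from a decoded /api/tags response."""
    return [m["name"] for m in payload.get("models", [])]

def list_remote_models(host: str, timeout: float = 5.0) -> list:
    """Fetch and decode /api/tags from an Ollama host URL."""
    with urlopen(host.rstrip("/") + "/api/tags", timeout=timeout) as resp:
        return model_names(json.load(resp))
```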

---

### Method 6: Docker

If running Ollama in Docker:

```env
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama2
```

**Setup:**
```bash
docker run -d -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama pull llama2
```

---

### Method 7: WSL (Windows Subsystem for Linux)

To reach Ollama running on the Windows host from inside WSL:

```env
OLLAMA_HOST=http://host.docker.internal:11434
OLLAMA_MODEL=llama2
```

Note that `host.docker.internal` typically resolves only when Docker Desktop is installed. Otherwise, use the Windows host's IP, which is WSL's default gateway:
```bash
ip route | grep default | awk '{print $3}'
```
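
If you want to automate that lookup, the same extraction the `awk` one-liner performs can be done in Python (`gateway_from_ip_route` is an illustrative name):

```python
import subprocess

def gateway_from_ip_route(output: str):
    """Return the default-gateway IP from `ip route` output, or None."""
    for line in output.splitlines():
        parts = line.split()
        if parts[:2] == ["default", "via"] and len(parts) >= 3:
            return parts[2]
    return None

def windows_host_ip():
    """Run `ip route` inside WSL and pick out the Windows host IP."""
    out = subprocess.run(["ip", "route"], capture_output=True, text=True).stdout
    return gateway_from_ip_route(out)
```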

---

## Available Commands

| Command | Description |
|---------|-------------|
| `/help` | Show all available commands |
| `/models` | List available models |
| `/switch` | Switch to a different model |
| `/pull` | Download a new model |
| `/delete` | Delete a model |
| `/host` | Change Ollama host URL |
| `/config` | Show current configuration |
| `/settings` | Configure inference settings (temperature, etc.) |
| `/modelinfo` | Show detailed model information |
| `/history` | View chat history |
| `/search` | Search chat history |
| `/clear` | Clear conversation context |
| `/new` | Start a new chat session |
| `/multi` | Enter multi-line input mode |
| `/exit` | Exit the chat |
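
The project's internal dispatch isn't shown here, but a slash-command router like this typically boils down to a small lookup table. A minimal, hypothetical sketch (none of these names come from the actual codebase):

```python
def handle_command(line: str, handlers: dict) -> str:
    """Route a `/command arg...` line to its handler."""
    name, _, arg = line[1:].partition(" ")
    handler = handlers.get(name)
    if handler is None:
        return f"Unknown command: /{name} (try /help)"
    return handler(arg)

handlers = {
    "help": lambda arg: "commands: /help /models /switch /exit ...",
    "switch": lambda arg: f"switched to model {arg}",
}
```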

## Inference Settings

Configure AI behavior with `/settings`:

- **Temperature** (0.0-2.0): Controls creativity
  - 0.1-0.3 for coding/math
  - 0.6-0.8 for general chat
  - 1.0-1.5 for creative writing

- **Top P** (0.0-1.0): Nucleus sampling (default: 0.9)
- **Top K** (1-100): Token choice limits (default: 40)
- **Context Window** (128-32768): Conversation memory (default: 2048)
- **Max Output** (1-4096): Response length (default: 512)
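
These settings map onto the `options` object of Ollama's REST API, where the context window is `num_ctx` and the output cap is `num_predict`. A sketch of a `/api/chat` payload using the defaults listed above (the 0.7 temperature is just a middle-of-the-road pick, and `build_options` is an illustrative helper, not the project's code):

```python
def build_options(temperature=0.7, top_p=0.9, top_k=40, num_ctx=2048, num_predict=512):
    """Assemble Ollama's per-request `options` dict."""
    return {
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "num_ctx": num_ctx,
        "num_predict": num_predict,
    }

payload = {
    "model": "llama2",
    "messages": [{"role": "user", "content": "Hello!"}],
    "options": build_options(temperature=0.2),  # low temperature for coding/math
}
```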

## Troubleshooting

### "Could not connect to Ollama"

1. **Verify Ollama is running:**
   ```bash
   curl http://localhost:11434
   # Should return: "Ollama is running"
   ```

2. **Check firewall settings:**
   - Windows: Allow port 11434 in Windows Firewall
   - Mac: System Preferences → Security → Firewall Options
   - Linux: `sudo ufw allow 11434`

3. **For hostname issues:**
   ```bash
   # Test if hostname resolves
   ping your-hostname.local
   
   # If it fails, use IP address instead
   ```

4. **For Docker:**
   ```bash
   docker ps  # Verify container is running
   docker logs ollama  # Check logs
   ```

### "Model not found"

```bash
# List available models
ollama list

# Pull the model you want
ollama pull llama2
```

### "Permission denied"

```bash
# Make sure the config file is readable and writable by you
chmod 644 ~/.ollama_chat_config.json
```

## File Structure

```
ollama-remote-chat-cli/
├── ollama_chat/
│   ├── __init__.py
│   ├── __main__.py
│   ├── cli.py           # Main CLI application
│   ├── ui.py            # UI components and banner
│   ├── api.py           # Ollama API client
│   ├── config.py        # Configuration management
│   ├── commands.py      # Command handlers
│   └── history.py       # Chat history manager
├── .env.example         # Example configuration
├── .gitignore          
├── setup.py            
├── requirements.txt    
└── README.md
```

## Security Notes

- `.env` file is gitignored and won't be committed
- Config files are stored in your home directory
- Never commit your actual `.env` file to Git
- For remote connections, use HTTPS with proper SSL certificates

## License

MIT License - feel free to use and modify!

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

If you encounter any issues, please open an issue on GitHub.
