Metadata-Version: 2.4
Name: automatiq
Version: 0.1.0a4
Summary: Record browser sessions and reverse-engineer them into scraping scripts.
Author-email: Kanishq Vijay <stonesteel27@gmail.com>
License-Expression: MIT
Project-URL: Homepage, https://github.com/StoneSteel27/AutomatiQ
Project-URL: Repository, https://github.com/StoneSteel27/AutomatiQ
Project-URL: Issues, https://github.com/StoneSteel27/AutomatiQ/issues
Keywords: scraping,browser-automation,llm,agent,recording,reverse-engineering
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Internet :: WWW/HTTP
Classifier: Topic :: Software Development :: Code Generators
Classifier: Topic :: Software Development :: Testing
Classifier: Typing :: Typed
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: instructor~=1.14.5
Requires-Dist: litellm~=1.81.13
Requires-Dist: pydantic~=2.12.5
Requires-Dist: pydantic-pick~=0.2.0
Requires-Dist: python-dotenv~=1.2.1
Requires-Dist: pyyaml~=6.0.3
Requires-Dist: rich>=13.0.0
Requires-Dist: zendriver~=0.15.2
Requires-Dist: mss~=10.1.0
Requires-Dist: numpy~=2.4.0
Requires-Dist: imageio-ffmpeg~=0.6.0
Requires-Dist: requests~=2.32.5
Requires-Dist: curl_cffi~=0.14.0
Requires-Dist: beautifulsoup4~=4.14.3
Requires-Dist: magika~=1.0.1
Requires-Dist: ipython~=9.10.0
Provides-Extra: dev
Requires-Dist: pre-commit>=3.6.0; extra == "dev"
Requires-Dist: ruff>=0.3.0; extra == "dev"
Requires-Dist: build>=0.10.0; extra == "dev"
Requires-Dist: twine>=5.0.0; extra == "dev"
Dynamic: license-file

<p align="center">
  <img src="https://raw.githubusercontent.com/StoneSteel27/AutomatiQ/main/assets/automatiq_banner.svg" alt="AutomatiQ" width="600">
</p>

<p align="center">
  <em>Your <span style="color:#00FFC8;font-weight:bold">activity</span>, into <span style="color:#FF009E;font-weight:bold">automation</span>.</em>
</p>

<p align="center">
  <a href="https://discord.gg/8j7dFWMMDA">
    <img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord">
  </a>
</p>

# AutomatiQ

> **Alpha (v0.1.0)**: Work in progress. Things will break, change, and improve.

Record a browser session, and an AI agent reverse-engineers it into a standalone Python script.

## Install

```bash
pip install automatiq
```

### Install from source

```bash
git clone https://github.com/StoneSteel27/AutomatiQ.git
cd AutomatiQ
pip install -e .
```

### Dev setup

```bash
pip install -e ".[dev]"
pre-commit install
```

This installs `ruff`, `build`, `twine`, and `pre-commit` hooks (lint + format on every commit).

## Configuration

On first run, AutomatiQ creates `~/.automatiq/config.toml` with commented defaults. Edit it to override models, timeouts, recording settings, etc.

```toml
[models]
agent    = "gemini/gemini-3-flash-preview"
recorder = "gemini/gemini-3.1-flash-lite-preview"
# base_url = "http://localhost:11434/v1"   # Ollama / LM Studio / vLLM

[agent]
max_steps       = 60
sandbox_timeout = 60

[recording]
fps                   = 3
segment_pad           = 2
merge_gap_threshold   = 1.5
max_frames_per_prompt = 8
```
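The `[recording]` knobs suggest the recorder samples frames at `fps`, merges nearby bursts of activity whose gap is under `merge_gap_threshold` seconds, and pads each merged segment by `segment_pad` seconds. A hypothetical sketch of that merging step, purely to illustrate the settings (an assumption about their semantics, not AutomatiQ's actual implementation):

```python
def merge_segments(segments, gap_threshold=1.5, pad=2.0):
    """Merge (start, end) activity spans whose gap is below the threshold,
    then pad each merged span. Mirrors the [recording] defaults above."""
    merged = []
    for start, end in sorted(segments):
        if merged and start - merged[-1][1] <= gap_threshold:
            # Gap is small enough: extend the previous span.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    # Pad both edges, clamping the start at zero.
    return [(max(0.0, s - pad), e + pad) for s, e in merged]

print(merge_segments([(0.0, 3.0), (4.0, 6.0), (20.0, 22.0)]))
# [(0.0, 8.0), (18.0, 24.0)]
```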

Priority: **CLI flag** > `~/.automatiq/config.toml` > built-in defaults.
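That per-key precedence can be pictured as a layered merge, sketched here hypothetically (not AutomatiQ's internal code; the keys are illustrative):

```python
# Built-in defaults sit at the bottom of the stack.
DEFAULTS = {"max_steps": 60, "model": "gemini/gemini-3-flash-preview"}

def resolve(defaults, file_config, cli_flags):
    """CLI flag > config file > built-in default, resolved per key."""
    merged = dict(defaults)
    for layer in (file_config, cli_flags):
        # Later layers override earlier ones; unset keys fall through.
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

print(resolve(DEFAULTS, {"max_steps": 80}, {"model": "openai/gpt-4o"}))
# {'max_steps': 80, 'model': 'openai/gpt-4o'}
```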

Set your API key in a `.env` file at the project root (any [litellm](https://docs.litellm.ai/docs/providers)-supported model works):

```
GEMINI_API_KEY=your-key-here
```

## Run

```bash
# Record a session, then have the agent build a scraper
automatiq run https://example.com

# Or run each step separately
automatiq record https://example.com   # just record
automatiq agent                        # build scraper from last recording

```

CLI flags override config:

```bash
automatiq run https://example.com --model openai/gpt-4o --max-steps 80
```

## What it does

1. **Record:** Opens Chrome, captures your browsing (video, network requests, user actions).
2. **Agent:** An LLM investigator reads the session dump, experiments in a sandboxed IPython environment, and produces a working scraping script.
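The dependency list (`requests`, `curl_cffi`, `beautifulsoup4`) hints at what the generated scripts use for fetching and parsing. As a rough, stdlib-only flavor of the extraction step such a script performs (the HTML and class names are illustrative, not real AutomatiQ output):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

# Stand-in for a fetched page body; a real generated script would download it.
page = '<ul><li><a href="/item/1">One</a></li><li><a href="/item/2">Two</a></li></ul>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/item/1', '/item/2']
```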

## Requirements

- Python 3.11+
- A supported LLM API key (Gemini, OpenAI, OpenRouter, or any OpenAI-compatible endpoint via `--base-url`)

## License

MIT
