Metadata-Version: 2.4
Name: conjure-llm
Version: 0.0.2
Summary: Zero-dependency programming — replace library imports with LLM-generated, verified code
Author: Thin Signal
License: MIT
Keywords: llm,code-generation,supply-chain-security,zero-dependency,mlx,apple-silicon
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: MacOS
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Security
Classifier: Topic :: Software Development :: Code Generators
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pyyaml>=6.0
Requires-Dist: mlx-lm>=0.20
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Provides-Extra: eval
Requires-Dist: transformers; extra == "eval"
Dynamic: license-file

# conjure

Zero-dependency programming — replace library imports with LLM-generated, verified code.

Every library you import is attack surface you didn't write. Conjure replaces thousands of transitive dependencies with one model binary and human-readable YAML specs.

```
Traditional app:
  App → pip install → 200 packages → 1500 transitive deps → 8M LOC of stranger code

Conjure app:
  App → .yaml spec files → embedded LLM → verified code → cached
  Trust chain: one model file (checksummed)
```

## Install

```bash
pip install conjure-llm
```

## Quick Start

```python
import conjure

# First call: generates, verifies, and caches (~10s)
result = conjure.invoke("base64_encode", data="hello")
print(result)  # "aGVsbG8="

# Second call: cache hit (0.3ms)
result = conjure.invoke("base64_encode", data="world")
```

## How It Works

1. Write a YAML spec describing what you need:

```yaml
spec: edit_distance
version: 1.0.0
description: |
  Compute the Levenshtein edit distance between two strings.
function: levenshtein
input:
  s1: str
  s2: str
output: int
examples:
  - input: { s1: "kitten", s2: "sitting" }
    output: 3
  - input: { s1: "", s2: "hello" }
    output: 5
constraints:
  no_imports: true
  max_lines: 400
```

2. Conjure generates a self-contained Python implementation (no imports), verifies it against your examples, and caches it:

```python
result = conjure.invoke("levenshtein", s1="kitten", s2="sitting")
# Returns: 3
```

The generated code is:
- **Import-free** — enforced by AST analysis, not heuristics (see the sketch after this list)
- **Verified** — must pass all spec examples before caching
- **Sandboxed** — runs with restricted builtins and timeouts
- **Cached** — content-addressed by spec + model + seed (sub-ms on repeat calls)
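
A minimal sketch of how the import check, the sandbox, and the cache key could work. The names `contains_imports`, `run_sandboxed`, and `cache_key` are illustrative, not Conjure's actual internals:

```python
import ast
import builtins
import hashlib
import signal

# Whitelist of builtins the generated code may use (illustrative subset).
ALLOWED_BUILTINS = (
    "abs", "len", "min", "max", "range", "enumerate", "zip", "sum",
    "sorted", "list", "dict", "set", "tuple", "str", "int", "float", "bool",
)

def contains_imports(source: str) -> bool:
    """Walk the AST: catches `import x`, `from x import y`, and bare
    `__import__` references, including ones a regex would miss."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return True
        if isinstance(node, ast.Name) and node.id == "__import__":
            return True
    return False

def run_sandboxed(source: str, func_name: str, kwargs: dict, timeout_s: int = 5):
    """Exec generated code with restricted builtins and a hard timeout.
    signal.alarm is Unix-only, which is fine since Conjure targets macOS."""
    if contains_imports(source):
        raise ValueError("generated code contains imports")
    env = {"__builtins__": {n: getattr(builtins, n) for n in ALLOWED_BUILTINS}}
    exec(compile(source, "<generated>", "exec"), env)

    def on_timeout(signum, frame):
        raise TimeoutError(f"exceeded {timeout_s}s")

    signal.signal(signal.SIGALRM, on_timeout)
    signal.alarm(timeout_s)
    try:
        return env[func_name](**kwargs)
    finally:
        signal.alarm(0)

def cache_key(spec_text: str, model_id: str, seed: int) -> str:
    """Content address: same spec + model + seed hits the same cache entry.
    The exact key layout is an assumption."""
    return hashlib.sha256(f"{spec_text}\n{model_id}\n{seed}".encode()).hexdigest()
```

Walking the AST rather than grepping for the word `import` is what makes the check robust to aliasing and string tricks.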

## Results

On **ConjureEval-100** (100 specs across 20 categories):

| Metric | Rate |
|--------|------|
| pass@1 (first attempt) | **70.0%** |
| pass@3 (best of 3 attempts) | **87.9%** |

The retry pipeline recovers roughly 18 additional percentage points (70.0% → 87.9%) by feeding the model its own verification errors, as sketched below.
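
A hedged sketch of such a loop; `generate` and `verify` stand in for Conjure's model call and example checker, which this README does not document:

```python
def generate_with_retries(spec: dict, generate, verify, max_attempts: int = 3) -> str:
    """Regenerate on failure, appending the verifier's error text to the
    next prompt so the model can repair its own mistakes."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        source = generate(spec, feedback=feedback)    # model call (placeholder)
        ok, error = verify(source, spec["examples"])  # run spec examples (placeholder)
        if ok:
            return source
        feedback = f"Attempt {attempt} failed verification:\n{error}\nFix the code."
    raise RuntimeError(f"{spec['spec']}: no passing candidate in {max_attempts} attempts")
```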

### Attack Surface Reduction

Across 5 real Python applications:

| Application | Transitive deps (pip) | Transitive deps (Conjure) | LOC reduction |
|-------------|:---------------------:|:-------------------------:|:-------------:|
| Flask blog | 13 | 0 | **15×** |
| FastAPI service | 15 | 0 | **17×** |
| CLI tool | 5 | 0 | **6×** |
| Web scraper | 17 | 0 | **20×** |
| File sync | 8 | 0 | **9×** |

### Translation Feasibility

52% of a typical Flask application's imports can be automatically replaced by Conjure specs.
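
To give a flavor of the analysis behind that number, here is a minimal import scan of the kind `conjure translate` would need. This is an illustration, not the tool's actual implementation:

```python
import ast
from pathlib import Path

def project_imports(root: str) -> set[str]:
    """Collect top-level module names imported anywhere under `root`."""
    found: set[str] = set()
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(a.name.split(".")[0] for a in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
    return found

# Intersect the result with the names your spec library covers to
# estimate the replaceable fraction for a given project.
```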

## CLI

```bash
# Invoke a function
conjure invoke base64_encode -k data=hello

# Pre-generate all specs
conjure build --spec-dir specs/stdlib

# Evaluate specs
conjure eval --spec-dir specs/stdlib --output results.md

# Analyze a project for translation
conjure translate my_project/ --output conjure_specs/

# Compare attack surface
conjure audit --packages "flask,requests,pyyaml" --specs specs/stdlib
```

## Model

Conjure uses **Qwen3.5-9B-OptiQ-4bit** via [MLX](https://github.com/ml-explore/mlx) on Apple Silicon:

- 5 GB model memory (4-bit mixed-precision quantization)
- Instruct mode with recommended sampling (temp=0.6, top_p=0.95; see the sketch below)
- Runs entirely on-device — no API keys, no cloud, no network
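
Internally this corresponds to ordinary mlx-lm calls, roughly as below. The checkpoint path and prompt are placeholders, and the sampler plumbing varies across mlx-lm versions, so treat this as a sketch:

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

# Placeholder path: point at the 4-bit Qwen checkpoint Conjure ships with.
model, tokenizer = load("path/to/Qwen3.5-9B-OptiQ-4bit")

# Instruct mode: wrap the request in the model's chat template.
messages = [{"role": "user", "content":
             "Write a self-contained Python function levenshtein(s1, s2). No imports."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       tokenize=False)

# The recommended sampling from above: temp=0.6, top_p=0.95.
sampler = make_sampler(temp=0.6, top_p=0.95)
code = generate(model, tokenizer, prompt=prompt, max_tokens=512, sampler=sampler)
```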

## Included Specs

30 curated stdlib specs covering common library patterns:

| Category | Specs |
|----------|-------|
| Encoding | base64_encode, base64_decode, hex_encode, hex_decode, rot13 |
| String | slugify, camel_to_snake, snake_to_camel, reverse_words |
| Collections | group_by, flatten, flatten_dict, chunk, deduplicate |
| Data parsing | csv_parse, json_parse, url_parse |
| Math | gcd, fibonacci, statistics, levenshtein |
| Algorithms | binary_search, run_length_encode, matrix_multiply |
| Other | is_palindrome, word_count, truncate, frequency_count, deep_get |

Plus 970 additional specs across 20 categories available in the full distribution.
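
For instance, calling one of the string specs (the `text` parameter name is a guess; the spec's `input:` block is authoritative):

```python
import conjure

# `text` is an assumed input name for the slugify spec.
print(conjure.invoke("slugify", text="Hello, World!"))  # e.g. "hello-world"
```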

## Limitations

- **Complex algorithms**: SHA-256 and recursive-descent parsers are too complex for a 9B model to generate reliably
- **Apple Silicon only**: Requires MLX (macOS with M-series chip)
- **Cold start**: First generation takes 3-100s; the result is cached permanently afterward
- **Pure functions only**: Stateful protocols, FFI, OS access are out of scope

## License

MIT
