Metadata-Version: 2.4
Name: meta-reasoning
Version: 0.0.1
Summary: Reasoning is not a property of the model — it is an emergent dynamic of external control.
Author: Tommaso Bredariol
License: MIT
Project-URL: Homepage, https://github.com/tommasobredariol/meta-reasoning
Project-URL: Repository, https://github.com/tommasobredariol/meta-reasoning
Keywords: llm,reasoning,cognitive-control,meta-reasoning,ai
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: openai>=1.0
Requires-Dist: pydantic>=2.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: mypy>=1.0; extra == "dev"
Requires-Dist: ruff>=0.4; extra == "dev"
Dynamic: license-file

<p align="center">
  <img src="https://raw.githubusercontent.com/tictacguy/Meta-Reasoning/main/docs/static/logo_gh.png" alt="SIR Logo" width="500">
</p>

<p align="center">
  <strong>Cognitive Heteronomy for LLMs</strong>
</p>

<p align="center">
  <a href="https://pypi.org/project/meta-reasoning/"><img src="https://img.shields.io/pypi/v/meta-reasoning" alt="PyPI"></a>
  <a href="https://pypi.org/project/meta-reasoning/"><img src="https://img.shields.io/pypi/pyversions/meta-reasoning" alt="Python"></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License"></a>
</p>

---

> **Reasoning is not a property of the model — it is an emergent dynamic of external control.**

An SDK that rejects the illusion of autonomous LLM reasoning. Instead of treating language models as cognitive agents, Meta-Reasoning introduces **cognitive heteronomy**: reasoning is governed, observed, and mutated from the outside.

The model doesn't think. It executes. The thinking happens in the architecture around it.

---

## Core Thesis

LLMs are generative substrates, not minds. What is commonly called "reasoning" is pattern replay — not deliberation. This SDK externalizes all meta-cognitive functions into a **Cognitive Controller** that:

- **Observes** the *form* of reasoning (not its content)
- **Measures** trajectory, redundancy, stall, and premature convergence
- **Mutates** the reasoning process through formal constraint operators
- **Records** cognitive trajectories in an **Epistemic Ledger**

No self-reflection. No "think step by step". No autonomous agents.

## Architecture

<p align="center">
  <img src="https://raw.githubusercontent.com/tictacguy/Meta-Reasoning/main/docs/static/architecture.png" alt="Architecture" width="100%">
</p>

### Level 1 — Generative Substrate (LLM)
Produces text and structures. Decides nothing. Stateless by design.

### Level 2 — Cognitive Controller
The heart. Semantically blind — it doesn't evaluate truth, it evaluates *cognitive form*:
- Entropy of reasoning moves
- Strategy repetition index
- Depth without novelty
- Constraint violation rate
- Premature closure score
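
Because the controller is semantically blind, each of these signals can be computed from the move sequence alone. As an illustration (not the SDK's actual `metrics.py` API — the function names here are assumptions), the first two signals might look like:

```python
import math
from collections import Counter

def move_entropy(moves: list[str]) -> float:
    """Shannon entropy (in bits) of the distribution of cognitive moves.

    Low entropy signals a repetitive strategy; high entropy signals
    varied exploration. Only move *labels* are inspected, never content.
    """
    counts = Counter(moves)
    total = len(moves)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def repetition_index(moves: list[str]) -> float:
    """Fraction of moves that repeat their immediate predecessor."""
    if len(moves) < 2:
        return 0.0
    repeats = sum(1 for a, b in zip(moves, moves[1:]) if a == b)
    return repeats / (len(moves) - 1)
```

A trace of four identical `deduction` moves has entropy 0.0 and repetition index 1.0: the controller flags a stall without ever reading what was deduced.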

### Level 3 — Epistemic Ledger
Not RAG. Not content memory. A structural trace of:
- Cognitive transformations attempted
- Strategies that produced stall
- Failure maps that prevent regression
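
A minimal sketch of what such a structural trace could look like (illustrative only — entry fields and method names are assumptions, not the SDK's actual `ledger.py`):

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class LedgerEntry:
    cycle: int
    mutations_applied: list[str]   # operators in force during this cycle
    moves_used: list[str]          # the cognitive moves actually emitted
    outcome: str                   # e.g. "progress", "stall", "collapse"

@dataclass
class EpistemicLedger:
    entries: list[LedgerEntry] = field(default_factory=list)

    def record(self, entry: LedgerEntry) -> None:
        self.entries.append(entry)

    def stalled_strategies(self) -> set[tuple[str, ...]]:
        """Move sequences that previously produced a stall — the
        'failure map' the controller consults to avoid regression."""
        return {tuple(e.moves_used) for e in self.entries
                if e.outcome == "stall"}

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump([asdict(e) for e in self.entries], f, indent=2)
```

Note that nothing semantic is stored: no answers, no claims, only the shape of what was tried and how it ended.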

## Key Concepts

### Structured Output Protocol
Every LLM generation must include a formal reasoning trace:
```json
{
  "content": "...",
  "reasoning_trace": {
    "moves": ["assumption", "deduction", "analogy"],
    "depth": 4,
    "confidence_markers": 2,
    "abstraction_level": "medium"
  }
}
```
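
Since pydantic is a declared dependency, a trace like this can be validated structurally before the controller ever sees it. The models below are a sketch of that protocol, not the SDK's actual `types.py` definitions:

```python
from typing import Literal

from pydantic import BaseModel, Field

class ReasoningTrace(BaseModel):
    moves: list[str]
    depth: int = Field(ge=0)
    confidence_markers: int = Field(ge=0)
    abstraction_level: Literal["low", "medium", "high"]

class StructuredOutput(BaseModel):
    content: str
    reasoning_trace: ReasoningTrace

# Validate a raw generation against the protocol
raw = {
    "content": "...",
    "reasoning_trace": {
        "moves": ["assumption", "deduction", "analogy"],
        "depth": 4,
        "confidence_markers": 2,
        "abstraction_level": "medium",
    },
}
parsed = StructuredOutput.model_validate(raw)
```

A generation that omits the trace, or reports a negative depth, fails validation and never enters the cognitive loop.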

### Cognitive Move Taxonomy
A finite, observable alphabet:
`assumption` · `deduction` · `induction` · `abduction` · `analogy` · `contradiction` · `enumeration` · `compression` · `narrative_simulation`
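
Because the alphabet is finite, it maps naturally onto a `str`-valued `Enum` (which also matches the `m.value` access in the Quick Start below; the exact class name in `types.py` is an assumption):

```python
from enum import Enum

class CognitiveMove(str, Enum):
    """The finite, observable alphabet of reasoning moves."""
    ASSUMPTION = "assumption"
    DEDUCTION = "deduction"
    INDUCTION = "induction"
    ABDUCTION = "abduction"
    ANALOGY = "analogy"
    CONTRADICTION = "contradiction"
    ENUMERATION = "enumeration"
    COMPRESSION = "compression"
    NARRATIVE_SIMULATION = "narrative_simulation"
```

Any move label outside this alphabet raises a `ValueError` at parse time, so the controller only ever reasons over known symbols.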

### Mutation Operators
The controller doesn't say "reason better". It says:
- **BAN**: "deduction is forbidden"
- **REQUIRE**: "you must use analogy"
- **LIMIT_DEPTH**: "max 2 reasoning steps"
- **FORCE_COMPRESSION**: "reduce to 2 concepts"
- **INVERT_CAUSALITY**: "reverse the causal direction"
- **REQUIRE_CONTRADICTION**: "find an internal contradiction"

Improvisation emerges from constraint, not freedom — like jazz.

### Failure as First-Class Output
The system does not optimize for correct answers. Failure is informative:
- Every collapsed trajectory is recorded
- Every stall enriches the ledger
- The system learns *which cognitive spaces to avoid*

## Installation

```bash
pip install meta-reasoning   # from PyPI
# or, from a local clone of the repository:
pip install -e .
```

Or with dev dependencies:
```bash
pip install -e ".[dev]"
```

## Quick Start

### Without an API key (mock backend)
```bash
python -m examples.mock_example
```

### With OpenAI
```bash
export OPENAI_API_KEY=<your-key>
python -m examples.openai_example
```

### Programmatic usage
```python
from meta_reasoning import CognitiveEngine

class MyBackend:
    def generate(self, messages):
        # Call your LLM here, return {"content": "..."}
        ...

engine = CognitiveEngine(backend=MyBackend(), max_cycles=5)
result = engine.run("Your task here")

for cycle in result.cycles:
    print(f"Cycle {cycle.cycle}: {cycle.outcome}")
    print(f"  Moves: {[m.value for m in cycle.output.reasoning_trace.moves]}")
    print(f"  Entropy: {cycle.metrics.entropy:.2f}")

# Save the epistemic ledger for analysis
engine.ledger.save("session.json")
```

## Running Tests

```bash
pip install -e ".[dev]"
pytest tests/ -v
```

## Project Structure

```
meta_reasoning/
├── __init__.py        # Public API
├── types.py           # Cognitive moves, traces, mutations, metrics
├── substrate.py       # Level 1 — LLM interface
├── controller.py      # Level 2 — Cognitive Controller
├── ledger.py          # Level 3 — Epistemic Ledger
├── metrics.py         # Semantically-blind cognitive metrics
├── mutations.py       # Mutation operator generation
└── engine.py          # The governed cognitive loop
```

## Related Work & Philosophy

For a detailed comparison with Chain-of-Thought, Tree-of-Thoughts, Meta-Reasoning Prompting, Reflexion, Self-Refine, ReAct, and other approaches — including a comparative table — see the **[full Related Work page](https://tictacguy.github.io/meta-reasoning/#related-work)** on the project website.

The short version: every existing approach keeps the LLM as the cognitive subject. We don't. The model is a substrate. The reasoning is governed from outside.

## License

MIT. See [LICENSE](LICENSE) for details.
