Metadata-Version: 2.4
Name: netanel-core
Version: 0.1.0
Summary: Self-learning LLM call library. Every call learns. Every call improves.
Author: Netanel Systems
License: MIT
Project-URL: Homepage, https://www.netanel.systems
Project-URL: Repository, https://github.com/netanel-systems/netanel-core
Project-URL: Issues, https://github.com/netanel-systems/netanel-core/issues
Project-URL: Documentation, https://github.com/netanel-systems/netanel-core
Keywords: llm,learning,ai,langchain,langgraph,memory,agents,quality,self-improvement
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: langgraph>=0.3.0
Requires-Dist: langchain>=0.3.0
Requires-Dist: langchain-openai>=0.3.0
Requires-Dist: deepagents<1.0,>=0.4.1
Requires-Dist: pydantic>=2.0.0
Requires-Dist: pyyaml>=6.0
Dynamic: license-file

# netanel-core

> **The self-learning LLM call.** Every call learns. Every call improves.

[![CI](https://github.com/netanel-systems/netanel-core/actions/workflows/tests.yml/badge.svg)](https://github.com/netanel-systems/netanel-core/actions/workflows/tests.yml)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

A Python library that wraps any LLM with file-based memory and automatic quality improvement. No database required.

```python
from netanel_core import LearningLLM

llm = LearningLLM(namespace="my-app")
result = llm.call("Write a function to validate emails")

print(result.response)  # The LLM's output
print(result.score)     # Quality score (0.0-1.0)
print(result.usage)     # Token usage for cost tracking
```

---

## ✨ Features

- 🧠 **Self-Learning** - Extracts patterns from every call, builds better context over time
- 📊 **Quality-First** - Auto-evaluation + retry loop, only stores high-quality outputs
- 💾 **File-Based Memory** - No database, human-readable `.md` files, git-trackable
- 🔄 **Prompt Evolution** - Auto-rewrites prompts based on learnings
- 🎯 **Bounded Safety** - Max retries, tokens, iterations (NASA-grade)
- 🤖 **DeepAgent Support** - Complex reasoning with LangGraph agents
- 💰 **Cost Tracking** - Token usage for all calls

---

## 🚀 Quick Start

```bash
pip install netanel-core
```

```python
from netanel_core import LearningLLM

llm = LearningLLM(namespace="my-app")
result = llm.call("Explain quantum computing simply")

if result.passed:
    print(result.response)
    print(f"Quality: {result.score:.2f}, Tokens: {result.usage['total_tokens']}")
```

---

## 📖 How It Works

Every `llm.call()` executes:

1. **RETRIEVE** - Load memories from namespace
2. **BUILD** - Create context (role + memories + task)
3. **CALL** - Invoke LLM
4. **EVALUATE** - Score quality (gpt-4o-mini + main model)
5. **RETRY** - If score < threshold, retry with feedback
6. **EXTRACT** - Extract patterns from successful responses
7. **STORE** - Save to `memories/{namespace}/patterns/`
8. **EVOLVE** - Trigger prompt improvements
9. **RETURN** - Result with response + metadata
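
The evaluate-and-retry core of this pipeline (steps 3-5) can be sketched in plain Python. This is a simplified illustration, not the library's actual implementation; `call_model` and `score_response` are hypothetical stand-ins for the real LLM call and evaluator.

```python
def call_with_retries(task, call_model, score_response,
                      quality_threshold=0.8, max_retries=3):
    """Bounded evaluate-and-retry loop (illustrative sketch)."""
    feedback = None
    best = (0.0, None)  # track (score, response) of the best attempt
    for attempt in range(max_retries + 1):
        response = call_model(task, feedback=feedback)
        score = score_response(task, response)
        if score > best[0]:
            best = (score, response)
        if score >= quality_threshold:
            return response, score, True  # passed the quality gate
        # Feed the score back so the next attempt can improve.
        feedback = f"Previous attempt scored {score:.2f}; improve and retry."
    return best[1], best[0], False  # best effort, did not pass

# Stub model/evaluator: quality improves with each retry.
attempts = iter([0.5, 0.7, 0.9])
response, score, passed = call_with_retries(
    "Explain quantum computing simply",
    call_model=lambda task, feedback=None: f"answer (feedback={feedback})",
    score_response=lambda task, response: next(attempts),
)
```

The loop is bounded by `max_retries`, so a stubborn low-quality task terminates with its best attempt rather than spinning forever.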

---

## 💡 Usage Examples

### Custom Model

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4", temperature=0.7)
llm = LearningLLM(namespace="gpt4-app", model=model)
```

### DeepAgent Mode

```python
result = llm.call(
    "Research AI papers and summarize trends",
    use_agent=True  # Multi-step reasoning
)
print(f"Steps: {result.agent_steps}")
```

### Cost Tracking

```python
total = sum(llm.call(task).usage['total_tokens'] for task in tasks)
cost = (total / 1_000_000) * 0.375  # gpt-4o-mini avg
print(f"Cost: ${cost:.4f}")
```
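
The `0.375` above is a rough blended per-million-token rate. If `result.usage` also breaks out prompt vs. completion tokens (an assumption; check your model's actual usage metadata), cost can be computed more precisely. The prices below are illustrative only; substitute your provider's current rates.

```python
# Illustrative per-million-token prices (NOT authoritative; check your provider).
PRICE_PER_M = {"input": 0.15, "output": 0.60}

def call_cost(usage):
    """USD cost from a usage dict with separate prompt/completion counts."""
    return (usage["prompt_tokens"] / 1_000_000 * PRICE_PER_M["input"]
            + usage["completion_tokens"] / 1_000_000 * PRICE_PER_M["output"])

cost = call_cost({"prompt_tokens": 2_000_000, "completion_tokens": 500_000})
print(f"Cost: ${cost:.4f}")  # → Cost: $0.6000
```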

---

## 🎯 Configuration

```python
from netanel_core import Config

config = Config(
    namespace="my-app",
    quality_threshold=0.8,
    max_retries=3,
    memories_dir="./memories",
)

llm = LearningLLM(config=config)
```

Or YAML:

```yaml
namespace: my-app
quality_threshold: 0.8
max_retries: 3
```

```python
config = Config.from_yaml("config.yaml")
```
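
Presumably `from_yaml` just parses the file and forwards the keys. A minimal sketch of that idea, using a dataclass stand-in rather than the library's actual `Config` class (`yaml` here is PyYAML, already a dependency of netanel-core):

```python
import tempfile
from dataclasses import dataclass

import yaml  # PyYAML

@dataclass
class Config:  # hypothetical stand-in for netanel_core.Config
    namespace: str
    quality_threshold: float = 0.8
    max_retries: int = 3
    memories_dir: str = "./memories"

    @classmethod
    def from_yaml(cls, path):
        """Load config keys from a YAML file into the dataclass."""
        with open(path, encoding="utf-8") as f:
            return cls(**yaml.safe_load(f))

# Demo with a throwaway YAML file matching the snippet above.
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write("namespace: my-app\nquality_threshold: 0.8\nmax_retries: 3\n")

config = Config.from_yaml(f.name)
```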

---

## 📂 Memory Structure

```text
memories/
└── {namespace}/
    ├── patterns/
    │   ├── 001-function-structure.md
    │   └── 002-error-handling.md
    └── prompts/
        └── current.md
```

Memories are plain Markdown files, so you can inspect them directly or track them in version control.
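
A loader for this layout could be as simple as globbing the pattern files in order. This is a hypothetical sketch, not the library's API; it only assumes the directory structure shown above.

```python
from pathlib import Path

def load_patterns(memories_dir, namespace):
    """Return pattern texts keyed by filename stem, sorted (001-, 002-, ...)."""
    pattern_dir = Path(memories_dir) / namespace / "patterns"
    files = sorted(pattern_dir.glob("*.md"))  # lexicographic = numeric here
    return {f.stem: f.read_text(encoding="utf-8") for f in files}

# Demo against a throwaway directory matching the layout above.
import tempfile
root = Path(tempfile.mkdtemp())
pdir = root / "my-app" / "patterns"
pdir.mkdir(parents=True)
(pdir / "001-function-structure.md").write_text("Prefer small functions.")
(pdir / "002-error-handling.md").write_text("Raise specific exceptions.")

patterns = load_patterns(root, "my-app")
print(list(patterns))  # → ['001-function-structure', '002-error-handling']
```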

---

## 🧪 Development

```bash
pip install -e ".[dev]"
pytest --cov
```

---

## 📚 Documentation

- [Architecture](ARCHITECTURE.md) - System design
- [API Reference](docs/API.md) - Complete API
- [Examples](examples/) - Usage patterns

---

## 📝 License

MIT - see [LICENSE](LICENSE)

---

Built by [Netanel Systems](https://www.netanel.systems) with [LangGraph](https://github.com/langchain-ai/langgraph) + [Deep Agents](https://github.com/anthropics/deepagents)
