Metadata-Version: 2.4
Name: litecrew
Version: 0.1.0
Summary: Multi-agent orchestration in ~100 lines. No magic.
Project-URL: Homepage, https://github.com/menonpg/litecrew
Project-URL: Documentation, https://github.com/menonpg/litecrew#readme
Project-URL: Repository, https://github.com/menonpg/litecrew
Project-URL: Issues, https://github.com/menonpg/litecrew/issues
Author-email: Prahlad Menon <prahlad.menon@gmail.com>
License-Expression: MIT
License-File: LICENSE
Keywords: agents,ai,anthropic,crew,llm,multi-agent,openai,orchestration,simple
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Provides-Extra: all
Requires-Dist: anthropic>=0.18.0; extra == 'all'
Requires-Dist: openai>=1.0.0; extra == 'all'
Requires-Dist: soul-agent>=0.1.7; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.18.0; extra == 'anthropic'
Provides-Extra: dev
Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Provides-Extra: memory
Requires-Dist: soul-agent>=0.1.7; extra == 'memory'
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == 'openai'
Description-Content-Type: text/markdown

# litecrew

**Multi-agent orchestration in ~100 lines. No magic. No vendor lock-in.**

[![PyPI](https://img.shields.io/pypi/v/litecrew)](https://pypi.org/project/litecrew/)
[![Tests](https://github.com/menonpg/litecrew/actions/workflows/test.yml/badge.svg)](https://github.com/menonpg/litecrew/actions)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

```python
from litecrew import Agent, crew

researcher = Agent("researcher", model="gpt-4o-mini")
writer = Agent("writer", model="claude-3-5-sonnet-20241022")

@crew(researcher, writer)
def write_article(topic: str) -> str:
    research = researcher(f"Research {topic}, return key facts")
    return writer(f"Write article using: {research}")

article = write_article("quantum computing")
```

That's it. That's the library.

---

## 🔑 BYOK — Bring Your Own Keys

**litecrew never touches your API keys.** We don't proxy, store, or even see them.

```bash
# Set your keys as environment variables (standard practice)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```

The official `openai` and `anthropic` Python libraries read these automatically. litecrew just calls those libraries. **Your keys stay on your machine.**

- ✅ No litecrew account required
- ✅ No API proxy
- ✅ No telemetry
- ✅ No key storage
- ✅ Works offline with local models (via OpenAI-compatible APIs)
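
The local-models bullet works because the official `openai` client also honors `OPENAI_BASE_URL`. A minimal sketch, assuming an OpenAI-compatible server such as Ollama running on its default port (the endpoint path and placeholder key below are assumptions, not litecrew settings):

```shell
# Point the openai client at a local OpenAI-compatible server.
# Ollama's default endpoint shown; any compatible server works.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"  # placeholder; local servers typically ignore it
```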

---

## 🎯 What litecrew IS

A **minimal orchestration layer** for simple multi-agent workflows.

| ✅ Use litecrew when... |
|------------------------|
| You have 2-5 agents that pass data to each other |
| You're prototyping and want to move fast |
| You want to understand every line of your orchestration code |
| You're learning how multi-agent systems work |
| You need something working in 10 minutes, not 10 hours |

**Core features:**
- Define agents (model + tools + system prompt)
- Sequential handoffs (A → B → C)
- Parallel fan-out (A → [B, C, D] → collect)
- Tool calling (OpenAI function calling format)
- Token tracking and cost awareness
- Optional persistent memory via [soul-agent](https://github.com/menonpg/soul.py)

---

## 🚫 What litecrew is NOT

**Be honest about scope.** If you need these, use a full framework:

| ❌ Don't use litecrew when... | Use instead |
|------------------------------|-------------|
| Complex hierarchical agent management | CrewAI, AutoGen |
| Stateful conversation with branching | LangGraph |
| Production enterprise workflows | LangGraph, Temporal |
| Visual workflow builders | Flowise, n8n |
| 47 pre-built integrations | LangChain |
| Human-in-the-loop approval flows | CrewAI, custom |
| Automatic retry with exponential backoff | Tenacity + custom |
| Streaming responses | Direct API calls |
| Agent-to-agent negotiation | AutoGen |

**The deal:** We do 20% of what CrewAI does in 1% of the code. That's a tradeoff. If you need the other 80%, you've outgrown us — and that's fine.

---

## 📊 Comparison

| Framework | Lines of Code | Learning Curve | Flexibility | Our Take |
|-----------|---------------|----------------|-------------|----------|
| **litecrew** | ~150 | Minutes | Limited | Start here |
| CrewAI | ~15,000 | Hours | High | Graduate to this |
| LangGraph | ~50,000 | Days | Very High | For complex flows |
| AutoGen | ~30,000 | Days | High | For agent negotiation |

**Our recommendation:**
1. **Start with litecrew** — Get your prototype working
2. **Hit a limitation** — You need something we don't do
3. **Graduate to CrewAI + [crewai-soul](https://github.com/menonpg/crewai-soul)** — Keep your memory layer

---

## Installation

```bash
pip install litecrew
```

With providers:
```bash
pip install "litecrew[openai]"      # OpenAI support
pip install "litecrew[anthropic]"   # Anthropic support
pip install "litecrew[all]"         # Everything, including memory
```

---

## Usage

### Basic Agent

```python
from litecrew import Agent

agent = Agent(
    name="assistant",
    model="gpt-4o-mini",  # or "claude-3-5-sonnet-20241022"
    system="You are a helpful assistant."
)

response = agent("What is the capital of France?")
print(response)
print(agent.tokens)  # {"in": 23, "out": 15}
```
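
Since `agent.tokens` exposes raw counts, cost tracking is plain arithmetic. A sketch; the helper name and the per-million-token prices below are placeholders, not current provider rates:

```python
def estimate_cost(tokens: dict, price_in: float, price_out: float) -> float:
    """Estimate spend from a {"in": ..., "out": ...} token dict.

    Prices are per 1M tokens; look up current rates yourself.
    """
    return (tokens["in"] / 1_000_000) * price_in + (tokens["out"] / 1_000_000) * price_out

# Placeholder prices: $0.15 in / $0.60 out per 1M tokens
cost = estimate_cost({"in": 23, "out": 15}, price_in=0.15, price_out=0.60)
```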

### Sequential Handoff

```python
from litecrew import Agent, sequential

researcher = Agent("researcher", model="gpt-4o-mini")
writer = Agent("writer", model="gpt-4o-mini")
editor = Agent("editor", model="gpt-4o-mini")

pipeline = sequential(researcher, writer, editor)
result = pipeline("Write about AI safety")
```
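
Under the hood, a sequential pipeline is just function composition: each agent's output becomes the next agent's input. A minimal sketch with plain callables standing in for agents (this illustrates the pattern, not necessarily litecrew's actual implementation):

```python
from functools import reduce

def sequential_sketch(*steps):
    """Chain callables: the output of each step feeds the next."""
    def pipeline(prompt: str) -> str:
        return reduce(lambda text, step: step(text), steps, prompt)
    return pipeline

# Plain functions standing in for agents:
shout = lambda s: s.upper()
exclaim = lambda s: s + "!"
pipeline = sequential_sketch(shout, exclaim)
pipeline("write about ai safety")  # → "WRITE ABOUT AI SAFETY!"
```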

### Parallel Execution

```python
from litecrew import Agent, parallel

security = Agent("security", system="Review for security issues.")
performance = Agent("performance", system="Review for performance.")
style = Agent("style", system="Review for code style.")

review_all = parallel(security, performance, style)
results = review_all("def get_user(id): return db.query(f'SELECT * FROM users WHERE id={id}')")
# Returns: ["SQL injection risk...", "Consider caching...", "Use parameterized queries..."]
```
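
Fan-out is equally small: send the same prompt to every agent and collect replies in agent order. A sketch using a thread pool, again with plain callables standing in for agents (threads are a reasonable fit for network-bound LLM calls; litecrew's internals may differ):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sketch(*agents):
    """Call every agent with the same prompt; return results in agent order."""
    def fan_out(prompt: str) -> list:
        with ThreadPoolExecutor(max_workers=len(agents)) as pool:
            # map preserves input order regardless of completion order
            return list(pool.map(lambda agent: agent(prompt), agents))
    return fan_out

reviewers = (lambda p: f"security: {p}", lambda p: f"performance: {p}")
results = parallel_sketch(*reviewers)("check this code")
# → ["security: check this code", "performance: check this code"]
```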

### With Tools

```python
from litecrew import Agent, tool

@tool(schema={
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"]
})
def search(query: str) -> str:
    return f"Results for: {query}"

agent = Agent("assistant", tools=[search])
response = agent("Search for the latest AI news")
```
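
The schema you pass to `@tool` is ordinary JSON Schema. For reference, this is how such a schema sits inside OpenAI's documented function-calling format (the wrapper shape is OpenAI's spec, shown here as orientation rather than litecrew internals; the description string is illustrative):

```python
search_schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
}

# An entry in the chat-completions "tools" array wrapping that schema:
tool_spec = {
    "type": "function",
    "function": {
        "name": "search",
        "description": "Search for information.",  # illustrative text
        "parameters": search_schema,
    },
}
```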

### With Persistent Memory

```python
from litecrew import Agent, with_memory

agent = Agent("assistant", model="gpt-4o-mini")
agent = with_memory(agent, namespace="my-assistant")

# Agent now remembers across sessions
agent("My name is Alice and I work at Acme Corp")
# ... later, even after restart ...
agent("Where do I work?")  # "You work at Acme Corp"
```
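
soul-agent handles the real persistence; the wrapper pattern itself is small. A toy sketch, assuming agents are plain callables, that prepends stored history to each prompt and persists it to a JSON file (soul-agent's actual mechanism differs; names here are hypothetical):

```python
import json
from pathlib import Path

def with_memory_sketch(agent, namespace: str, root: Path = Path(".")):
    """Wrap a callable agent with file-backed conversation memory."""
    store = root / f"{namespace}.json"

    def remembering_agent(prompt: str) -> str:
        history = json.loads(store.read_text()) if store.exists() else []
        context = "\n".join(history)
        reply = agent(f"{context}\n{prompt}" if context else prompt)
        history.extend([prompt, reply])
        store.write_text(json.dumps(history))  # survives restarts
        return reply

    return remembering_agent
```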

---

## Testing

```bash
# Install dev dependencies
pip install "litecrew[dev]"

# Run tests
pytest tests/

# Run with coverage
pytest tests/ --cov=litecrew
```

---

## The Soul Ecosystem

litecrew is part of a family of simple, composable AI tools:

| Package | Purpose | When to Use |
|---------|---------|-------------|
| [**litecrew**](https://github.com/menonpg/litecrew) | Minimal orchestration | Starting out, prototypes |
| [soul-agent](https://github.com/menonpg/soul.py) | Persistent memory | Add memory to any agent |
| [crewai-soul](https://github.com/menonpg/crewai-soul) | CrewAI + memory | Production multi-agent |
| [langchain-soul](https://github.com/menonpg/langchain-soul) | LangChain + memory | Complex chains |
| [llamaindex-soul](https://github.com/menonpg/llamaindex-soul) | LlamaIndex + memory | RAG pipelines |

---

## Philosophy

> "Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away." — Antoine de Saint-Exupéry

Most frameworks race to add features. We race to keep them out.

**The SQLite strategy:** SQLite doesn't try to be PostgreSQL. It does one thing well and says "if you need more, use something else." That's us.

---

## FAQ

**Q: Why not just use CrewAI?**  
A: CrewAI is great when you need it. But sometimes you just want two agents to pass data without learning a framework. That's us.

**Q: How do I add feature X?**  
A: Fork it. The code is ~150 lines. Add what you need. Or graduate to CrewAI.

**Q: Will you add streaming/callbacks/hierarchies?**  
A: No. Adding features makes us what we're replacing.

**Q: Is this production-ready?**  
A: For simple workflows, yes. For complex enterprise needs, use CrewAI + crewai-soul.

**Q: Do you store my API keys?**  
A: No. We never see them. They stay in your environment variables.

---

## License

MIT — Do whatever you want.

---

## Contributing

- **Bug?** Open an issue
- **Feature request?** Consider if it keeps us simple. If not, fork it.
- **PR?** Keep it minimal

---

**Built by [The Menon Lab](https://themenonlab.com)** | [Blog](https://blog.themenonlab.com) | [Twitter](https://twitter.com/themedcave)
