Metadata-Version: 2.4
Name: substrai-lambdallm
Version: 1.0.0
Summary: Serverless-native LLM orchestration framework for AWS Lambda
Project-URL: Homepage, https://substrai.dev
Project-URL: Repository, https://github.com/substrai/lambdallm
Project-URL: Documentation, https://docs.substrai.dev/lambdallm
Project-URL: Issues, https://github.com/substrai/lambdallm/issues
Project-URL: Changelog, https://github.com/substrai/lambdallm/blob/main/CHANGELOG.md
Author-email: Gaurav Kumar Sinha <gaurav@substrai.dev>
License: MIT
License-File: LICENSE
Keywords: agents,aws,bedrock,chains,framework,genai,lambda,llm,serverless
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Requires-Python: >=3.10
Provides-Extra: all
Requires-Dist: aws-xray-sdk>=2.12.0; extra == 'all'
Requires-Dist: boto3>=1.28.0; extra == 'all'
Requires-Dist: pyyaml>=6.0; extra == 'all'
Provides-Extra: bedrock
Requires-Dist: boto3>=1.28.0; extra == 'bedrock'
Provides-Extra: dev
Requires-Dist: moto>=4.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Provides-Extra: xray
Requires-Dist: aws-xray-sdk>=2.12.0; extra == 'xray'
Provides-Extra: yaml
Requires-Dist: pyyaml>=6.0; extra == 'yaml'
Description-Content-Type: text/markdown

# LambdaLLM

**Serverless-native LLM orchestration framework for AWS Lambda.**

> Built by [SubstrAI](https://github.com/substrai) — Open-source GenAI frameworks for serverless infrastructure.

## The Problem

Existing LLM frameworks (LangChain, LlamaIndex) assume long-running servers. They break on Lambda:
- Cold starts: 500MB+ dependency trees add seconds of import time to every cold invocation
- Statelessness: no conversation memory survives between invocations
- 15-minute timeout: long agent loops are killed mid-run when they exceed the limit
- 250MB package limit: LangChain's unpacked install alone exceeds it

## The Solution

LambdaLLM is purpose-built for Lambda's constraints:

```python
from lambdallm import handler, Prompt, Model

# A reusable prompt: template slots are filled at invoke() time, and
# output_schema declares the expected shape of the structured reply.
summarize = Prompt(
    template="Summarize in {max_words} words:\n\n{document}",
    output_schema={"summary": str, "key_points": list},
)

@handler(model=Model.CLAUDE_3_HAIKU)
def lambda_handler(event, context):
    return summarize.invoke(
        document=event["body"]["text"],
        max_words=100
    )
```
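
The `output_schema` argument pins down the shape of the reply. As a framework-free illustration of that technique (the `parse_structured` helper below is hypothetical, not LambdaLLM's actual parser), schema-constrained prompting generally means asking the model for JSON and checking each field's type before returning it:

```python
import json

def parse_structured(raw: str, schema: dict) -> dict:
    """Validate a model's JSON reply against a {field: type} schema."""
    data = json.loads(raw)
    for field, expected in schema.items():
        if not isinstance(data.get(field), expected):
            raise ValueError(f"{field!r} is missing or not {expected.__name__}")
    return data

# parse_structured('{"summary": "...", "key_points": ["a", "b"]}',
#                  {"summary": str, "key_points": list})
# -> {'summary': '...', 'key_points': ['a', 'b']}
```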

## Features

- **< 5MB** package size (vs 400MB+ for LangChain)
- **Cold-start optimized** — lazy imports, connection pooling
- **DynamoDB-native state** — conversation memory that survives stateless execution (sketched below)
- **Cost-aware routing** — auto-select cheapest model that meets quality threshold
- **One-command deploy** — `lambdallm deploy` generates all AWS infrastructure
- **Timeout handling** — checkpoint/resume for long chains
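
Of these, the DynamoDB-backed memory is what restores conversation state across stateless invocations. A minimal sketch of the underlying pattern in plain boto3 (the `conversations` table name, key schema, and 20-turn cap are illustrative assumptions, not LambdaLLM's actual storage layout):

```python
import boto3

# Created at module scope so warm invocations reuse the same connection,
# which is also the connection-reuse trick from the cold-start bullet.
table = boto3.resource("dynamodb").Table("conversations")

def load_history(session_id: str) -> list:
    """Fetch prior turns for a session; returns [] on first contact."""
    item = table.get_item(Key={"session_id": session_id}).get("Item")
    return item["messages"] if item else []

def save_history(session_id: str, messages: list) -> None:
    """Persist the transcript, keeping only the most recent 20 turns."""
    table.put_item(Item={"session_id": session_id, "messages": messages[-20:]})
```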

## Installation

```bash
pip install "substrai-lambdallm[bedrock]"   # pulls in boto3 for Amazon Bedrock
```
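
The distribution is published as `substrai-lambdallm` (the import name stays `lambdallm`), and the extras map one-to-one onto the optional dependency groups declared in the package metadata:

```bash
pip install "substrai-lambdallm[xray]"   # aws-xray-sdk for tracing
pip install "substrai-lambdallm[yaml]"   # pyyaml support
pip install "substrai-lambdallm[all]"    # bedrock + xray + yaml combined
pip install "substrai-lambdallm[dev]"    # pytest, pytest-asyncio, moto, ruff
```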

## Quick Start

```bash
lambdallm init my-project
cd my-project
lambdallm dev
```
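
From there, the one-command deploy called out under Features generates the AWS infrastructure:

```bash
lambdallm deploy
```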

## Documentation

- [Getting Started](https://docs.substrai.dev/lambdallm/getting-started)
- [API Reference](https://docs.substrai.dev/lambdallm/api)
- [Examples](https://github.com/substrai/lambdallm/tree/main/examples)

## License

MIT — see [LICENSE](LICENSE)

## Author

**Gaurav Kumar Sinha** — Founder, [SubstrAI](https://github.com/substrai)
