Metadata-Version: 2.4
Name: FloTorch-core
Version: 2.9.6
Summary: A Python project for FloTorch
Author-email: Shiva Krishna <shiva.krishnaah@gmail.com>
License: MIT
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: langchain==0.3.14
Requires-Dist: boto3==1.36.2
Requires-Dist: ollama==0.4.6
Requires-Dist: PyPDF2==3.0.1
Requires-Dist: opensearch-py==2.8.0
Requires-Dist: sagemaker==2.235.2
Requires-Dist: openai==1.57.4
Requires-Dist: ragas==0.2.14
Requires-Dist: psycopg2-binary==2.9.9
Provides-Extra: dev
Requires-Dist: pytest==8.3.4; extra == "dev"
Requires-Dist: testcontainers==4.9.0; extra == "dev"
Requires-Dist: minio==7.2.15; extra == "dev"
Requires-Dist: moto==5.0.27; extra == "dev"
Dynamic: license-file

# 🚀 FloTorch-core

**FloTorch-core** is a modular and extensible Python framework for building LLM-powered RAG (Retrieval-Augmented Generation) pipelines. It offers plug-and-play components for embeddings, chunking, retrieval, gateway-based LLM calls, and RAG evaluation.

---

## ✨ Features

- 🧩 Text Chunking (Fixed-size, Hierarchical)
- 🧠 Embedding Models (Titan, Cohere, Bedrock)
- 🔍 Document Retrieval (OpenSearch + Vector Storage)
- 💻 Inferencers (Bedrock, SageMaker, Gateway)
- 🔌 Unified LLM Gateway (OpenAI, Bedrock, Ollama, etc.)
- 📏 RAG Evaluation (RAGAS Metrics)
- ☁️ AWS Integration (S3, DynamoDB, Lambda)
- 🧢 Built-in Testing Support
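
Fixed-size chunking, the first strategy listed above, can be illustrated with a short sketch in plain Python (this shows the concept only, not FloTorch's chunking API):

```python
def chunk_fixed(text: str, size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into fixed-size chunks whose neighbours share `overlap` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than the chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side.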

---

## 📆 Installation

```bash
pip install FloTorch-core
```

To install development dependencies:

```bash
pip install "FloTorch-core[dev]"
```

---


## 📂 Project Structure

```
flotorch/
├── inferencer/         # LLM inferencers (Bedrock, SageMaker, Gateway)
├── embedding/          # Embedding models
├── chunking/           # Text chunking logic
├── evaluator/          # RAG evaluation (RAGAS)
├── storage/            # Vector DB, S3, DynamoDB
├── util/               # Utilities and helpers
├── rerank/             # Document reranking
├── guardrails/         # Guardrail wrappers
└── reader/             # Readers for JSON/PDF
```

---

## 📖 Usage Example

### Reader

```python
from flotorch_core.reader.json_reader import JSONReader
from flotorch_core.storage.s3_storage import S3StorageProvider

json_reader = JSONReader(S3StorageProvider(<S3 bucket>))
json_reader.read(<path>)
```

### Embedding
```python
from flotorch_core.embedding.embedding_registry import embedding_registry

embedding_class = embedding_registry.get_model(<model id>)

# model id example: amazon.titan-text-express-v1, amazon.titan-embed-text-v2:0, cohere.embed-multilingual-v3
```
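
The registry resolves a model-id string to an embedding class. As a rough sketch of how such a registry pattern works in general (hypothetical names, not FloTorch's internal code):

```python
class EmbeddingRegistry:
    """Minimal registry mapping model-id strings to embedding classes."""

    def __init__(self):
        self._models = {}

    def register(self, model_id: str):
        """Class decorator that records `cls` under `model_id`."""
        def decorator(cls):
            self._models[model_id] = cls
            return cls
        return decorator

    def get_model(self, model_id: str):
        if model_id not in self._models:
            raise KeyError(f"No embedding registered for {model_id!r}")
        return self._models[model_id]


embedding_registry = EmbeddingRegistry()


@embedding_registry.register("amazon.titan-embed-text-v2:0")
class TitanV2Embedding:
    pass
```

Registering classes at import time is what lets callers look up a model by its string id without importing the concrete class themselves.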

### Vector storage (OpenSearch)
```python
from flotorch_core.storage.db.vector.open_search import OpenSearchClient

vector_storage_object = OpenSearchClient(
    <opensearch_host>, 
    <opensearch_port>, 
    <opensearch_username>, 
    <opensearch_password>, 
    <index_id>, 
    <embedding object>
)
```
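
For context, a client like this ultimately issues a k-NN search against OpenSearch. The request body has the following general shape (the field name `vector_field` is a placeholder for whatever field your index stores embeddings in):

```python
def knn_query(query_vector: list[float], k: int = 5) -> dict:
    """Build an OpenSearch k-NN search body returning the k nearest documents."""
    return {
        "size": k,
        "query": {"knn": {"vector_field": {"vector": query_vector, "k": k}}},
    }
```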

### Vector storage (Bedrock Knowledge Base)
```python
from flotorch_core.storage.db.vector.bedrock_knowledgebase_storage import BedrockKnowledgeBaseStorage

vector_storage_object = BedrockKnowledgeBaseStorage(
    knowledge_base_id=<knowledge_base_id>,
    region=<aws_region>
)
```

### Guardrails over vector storage
```python
from flotorch_core.storage.db.vector.guardrails_vector_storage import GuardRailsVectorStorage

# BedrockGuardrail is provided by the flotorch_core guardrails package
base_guardrails = BedrockGuardrail(<guardrail_id>, <guardrail_version>, <aws_region>)
vector_storage_object = GuardRailsVectorStorage(
    vector_storage_object,
    base_guardrails,
    <enable_prompt_guardrails (True/False)>,
    <enable_context_guardrails (True/False)>
)
```
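
`GuardRailsVectorStorage` follows a decorator pattern: it wraps an existing storage object and screens the prompt and/or retrieved context before results flow through. A generic, self-contained sketch of that pattern (illustrative names only, not the library's classes):

```python
class ListStorage:
    """Tiny in-memory stand-in for a vector storage (illustrative only)."""

    def __init__(self, docs):
        self.docs = docs

    def search(self, query: str):
        return [d for d in self.docs if query.lower() in d.lower()]


class GuardedStorage:
    """Decorator-style wrapper: screens the query before delegating to the
    wrapped storage, mirroring how a guardrails layer sits in front of retrieval."""

    def __init__(self, storage, is_allowed, check_prompt=True):
        self.storage = storage
        self.is_allowed = is_allowed  # callable: str -> bool
        self.check_prompt = check_prompt

    def search(self, query: str):
        if self.check_prompt and not self.is_allowed(query):
            raise ValueError("query blocked by guardrail")
        return self.storage.search(query)
```

The same idea underlies `GuardRailsInferencer`: the wrapper preserves the wrapped object's interface while inserting a policy check.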

### Inferencer
```python
from flotorch_core.inferencer.bedrock_inferencer import BedrockInferencer
from flotorch_core.inferencer.gateway_inferencer import GatewayInferencer
from flotorch_core.inferencer.sagemaker_inferencer import SageMakerInferencer

# Option 1: Amazon Bedrock
inferencer = BedrockInferencer(
    <model_id>,
    <region>,
    <n_shot_prompts>,
    <temperature>,
    <n_shot_prompt_guide_obj>
)

# Option 2: LLM gateway (OpenAI-compatible endpoint)
inferencer = GatewayInferencer(
    model_id=<model_id>,
    api_key=<api_key>,
    base_url=<base_url>,
    n_shot_prompts=<n_shot_prompts>,
    n_shot_prompt_guide_obj=<n_shot_prompt_guide_obj>
)

# Option 3: Amazon SageMaker
inferencer = SageMakerInferencer(
    <model_id>,
    <region>,
    <arn_role>,
    <n_shot_prompts>,
    <temperature>,
    <n_shot_prompt_guide_obj>
)
```

### Guardrails over inferencer

```python
from flotorch_core.inferencer.guardrails.guardrails_inferencer import GuardRailsInferencer

# Wraps any inferencer with the guardrail instance created above
inferencer = GuardRailsInferencer(inferencer, base_guardrails)
```

---


## 📬 Maintainer

**Shiva Krishna**  
📧 Email: shiva.krishnaah@gmail.com

**Adil Raza**  
📧 Email: adilraza.9752@gmail.com

---

## 📄 License

This project is licensed under the [MIT License](LICENSE).

---

## 🌐 Links

- GitHub: [https://github.com/FissionAI/flotorch-core](https://github.com/FissionAI/flotorch-core)

