Metadata-Version: 2.4
Name: PickYourLLM
Version: 0.3
Summary: Pick Your LLM: Intelligent, Use-Case Aware LLM Model advisor for Optimal Performance and Cost
Home-page: https://github.com/AmadeusITGroup/PickYourLLM
Author: Ilias, Eoin
Author-email: ilias.driouich@amadeus.com;eoin.thomas@amadeus.com
Requires-Python: >=3.6
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pandas>=1.1.0
Requires-Dist: numpy>=1.19.0
Dynamic: author
Dynamic: author-email
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

## PickYourLLM Framework

This framework helps you automatically select the most suitable Large Language Model (LLM) for a given business or technical use case.

It analyzes use case requirements (e.g., cost, latency, reasoning quality, context window, provider constraints), matches them against available LLMs, and ranks the best candidates based on weighted scoring.

---

## Features

- **Use Case–Driven Selection:** Takes a natural-language description of a use case and extracts structured constraints and priorities.
- **Constraint Extraction:** Uses advanced LLM models to normalize requirements into a standardized schema (provider, latency, cost, openness, tool calling, languages, etc.).
- **Model Matching:** Filters candidate LLMs based on hard constraints such as provider restrictions, deployment type, language support, context window, and cost thresholds.
- **Weighted Recommendation Engine:** Scores models using weighted dimensions such as cost, latency, reasoning, quality, throughput, tool-calling capability, and openness.
- **Transparent Ranking:** Produces ranked recommendations with clear rationales explaining why each model was selected.
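The weighted scoring idea behind the recommendation engine can be sketched as follows. This is an illustrative sketch only; the names `MODEL_PROFILES`, `score_model`, and `rank_models` are hypothetical and are not part of the PickYourLLM API.

```python
# Illustrative sketch of weighted scoring across decision dimensions.
# All names here are hypothetical, not the actual PickYourLLM API.

# Normalized per-dimension scores in [0, 1]; higher is better
# (cost and latency are assumed pre-inverted: cheap/fast -> high score).
MODEL_PROFILES = {
    "model-a": {"cost": 0.9, "latency": 0.8, "reasoning": 0.6, "tool_calling": 1.0},
    "model-b": {"cost": 0.4, "latency": 0.5, "reasoning": 0.95, "tool_calling": 1.0},
}

def score_model(profile, weights):
    """Weighted sum of dimension scores; weights encode use-case priorities."""
    return sum(weights.get(dim, 0.0) * value for dim, value in profile.items())

def rank_models(profiles, weights):
    """Return (name, score) pairs sorted best-first."""
    scored = {name: score_model(p, weights) for name, p in profiles.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# A cost-sensitive chatbot weights cost and latency heavily.
weights = {"cost": 0.4, "latency": 0.3, "reasoning": 0.2, "tool_calling": 0.1}
print(rank_models(MODEL_PROFILES, weights))
```

With these example weights, the cheaper, faster model wins even though the other model scores higher on reasoning; shifting weight toward `reasoning` flips the ranking.
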

---

## How It Works

The pipeline runs in sequential steps:

- **Use Case Selection**  
Choose from predefined scenarios (customer assistant, travel agent assistant, multilingual chatbot, internal copilot, etc.) or provide your own description.

- **Requirement Extraction (LLM Agent)**  
  The use case is parsed into structured metadata, including:
  - Provider constraints
  - Deployment preferences
  - Latency and cost requirements
  - Language support
  - Reasoning / quality expectations
  - Tool-calling or multimodal needs
  - Priority weights across decision criteria

- **Model Filtering**  
Candidate LLMs from the model catalog are filtered according to the extracted hard constraints.

- **Scoring & Ranking**  
The remaining models are scored using a weighted recommendation engine across the most relevant dimensions for the use case.

- **Export**  
Ranked recommendations are exported to CSV, along with the extracted use case metadata in JSON format for inspection.
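The filtering step above can be sketched as a set of hard-constraint checks over a model catalog. The schema and function names below are illustrative assumptions, not PickYourLLM's actual data model.

```python
# Illustrative sketch of hard-constraint filtering over a model catalog.
# The schema and names are hypothetical, not PickYourLLM's actual API.

CATALOG = [
    {"name": "model-a", "provider": "openai", "context_window": 128_000,
     "languages": {"en", "fr"}, "cost_per_1k": 0.002},
    {"name": "model-b", "provider": "mistral", "context_window": 32_000,
     "languages": {"en"}, "cost_per_1k": 0.0005},
]

# Hard constraints as they might be extracted from a use-case description.
constraints = {
    "allowed_providers": {"openai", "mistral"},
    "min_context_window": 64_000,
    "required_languages": {"en", "fr"},
    "max_cost_per_1k": 0.01,
}

def passes(model, c):
    """A model survives only if it satisfies every hard constraint."""
    return (model["provider"] in c["allowed_providers"]
            and model["context_window"] >= c["min_context_window"]
            and c["required_languages"] <= model["languages"]
            and model["cost_per_1k"] <= c["max_cost_per_1k"])

candidates = [m for m in CATALOG if passes(m, constraints)]
print([m["name"] for m in candidates])
```

Only the surviving candidates are passed to the weighted scoring stage; models that miss any hard constraint (here, the smaller context window and missing French support) are excluded outright rather than merely penalized.
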

## Usage

To use the tool, follow these steps:

Install the package from PyPI:

```bash
pip install PickYourLLM
```

Then run the command-line entry point:

```bash
PickYourLLM
```