Metadata-Version: 2.4
Name: uuv-assistant-ai
Version: 1.3.0
Summary: AI-based UUV Assistant that improves the lives of testers and developers by generating Cucumber phrases from the GUI.
Project-URL: Homepage, https://e2e-test-quest.github.io/uuv/
Project-URL: Documentation, https://e2e-test-quest.github.io/uuv/
Project-URL: Repository, https://github.com/e2e-test-quest/uuv
Project-URL: Issues, https://github.com/e2e-test-quest/uuv/issues
Project-URL: Changelog, https://github.com/e2e-test-quest/uuv/blob/main/packages/assistant-ai/CHANGELOG.md
Project-URL: Funding, https://opencollective.com/uuv
Author: Louis Fredice NJAKO MOLOM, Stanley SERVICAL
Maintainer: Louis Fredice NJAKO MOLOM, Stanley SERVICAL
License: MIT
Keywords: a11y,acceptance,accessibilite,accessibility,ai,artificial-intelligence,bdd,cucumber,cypress,e2e,end 2 end,end2end,gherkin,image-analysis,llm,playwright,tdd,test,test-generation,testing,testing-library,uuv,vlm
Classifier: Development Status :: 4 - Beta
Classifier: Framework :: FastAPI
Classifier: Framework :: Pydantic
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Utilities
Requires-Python: <3.15,>=3.10
Requires-Dist: aiohappyeyeballs==2.6.1
Requires-Dist: aiohttp==3.13.3
Requires-Dist: aiosignal==1.4.0
Requires-Dist: alembic==1.18.3
Requires-Dist: annotated-doc==0.0.4
Requires-Dist: annotated-types==0.7.0
Requires-Dist: anyio==4.12.1
Requires-Dist: async-timeout==5.0.1; python_full_version < '3.11'
Requires-Dist: asyncer==0.0.8
Requires-Dist: attrs==25.4.0
Requires-Dist: beautifulsoup4==4.14.3
Requires-Dist: bs4==0.0.2
Requires-Dist: cachetools==6.2.6
Requires-Dist: certifi==2026.1.4
Requires-Dist: charset-normalizer==3.4.4
Requires-Dist: click==8.3.1
Requires-Dist: cloudpickle==3.1.2
Requires-Dist: colorama==0.4.6; sys_platform == 'win32'
Requires-Dist: colorlog==6.10.1
Requires-Dist: diskcache==5.6.3
Requires-Dist: distro==1.9.0
Requires-Dist: dnspython==2.8.0
Requires-Dist: dspy==3.1.3
Requires-Dist: email-validator==2.3.0
Requires-Dist: exceptiongroup==1.3.1; python_full_version < '3.11'
Requires-Dist: fastapi-cli==0.0.20
Requires-Dist: fastapi-cloud-cli==0.11.0
Requires-Dist: fastapi==0.128.6
Requires-Dist: fastar==0.8.0
Requires-Dist: fastuuid==0.14.0
Requires-Dist: filelock==3.20.3
Requires-Dist: frozenlist==1.8.0
Requires-Dist: fsspec==2026.2.0
Requires-Dist: gepa==0.0.26
Requires-Dist: greenlet==3.3.1; platform_machine == 'AMD64' or platform_machine == 'WIN32' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'ppc64le' or platform_machine == 'win32' or platform_machine == 'x86_64'
Requires-Dist: h11==0.16.0
Requires-Dist: hf-xet==1.2.0; platform_machine == 'AMD64' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'
Requires-Dist: httpcore==1.0.9
Requires-Dist: httptools==0.7.1
Requires-Dist: httpx==0.28.1
Requires-Dist: huggingface-hub==1.4.1
Requires-Dist: idna==3.11
Requires-Dist: importlib-metadata==8.7.1
Requires-Dist: jinja2==3.1.6
Requires-Dist: jiter==0.13.0
Requires-Dist: json-repair==0.57.1
Requires-Dist: jsonschema-specifications==2025.9.1
Requires-Dist: jsonschema==4.26.0
Requires-Dist: litellm==1.81.9
Requires-Dist: mako==1.3.10
Requires-Dist: markdown-it-py==3.0.0
Requires-Dist: markupsafe==3.0.3
Requires-Dist: mdurl==0.1.2
Requires-Dist: multidict==6.7.1
Requires-Dist: numpy==2.2.6; python_full_version < '3.11'
Requires-Dist: numpy==2.4.2; python_full_version >= '3.11'
Requires-Dist: openai==2.17.0
Requires-Dist: optuna==4.7.0
Requires-Dist: orjson==3.11.7
Requires-Dist: packaging==25.0
Requires-Dist: pillow==12.1.1
Requires-Dist: propcache==0.4.1
Requires-Dist: pydantic-core==2.41.5
Requires-Dist: pydantic-extra-types==2.11.0
Requires-Dist: pydantic-settings==2.12.0
Requires-Dist: pydantic==2.12.5
Requires-Dist: pygments==2.19.2
Requires-Dist: python-dotenv==1.2.1
Requires-Dist: python-multipart==0.0.22
Requires-Dist: pyyaml==6.0.3
Requires-Dist: referencing==0.37.0
Requires-Dist: regex==2026.1.15
Requires-Dist: requests==2.32.5
Requires-Dist: rich-toolkit==0.19.0
Requires-Dist: rich==14.3.2
Requires-Dist: rignore==0.7.6
Requires-Dist: rpds-py==0.30.0
Requires-Dist: sentry-sdk==2.52.0
Requires-Dist: shellingham==1.5.4
Requires-Dist: sniffio==1.3.1
Requires-Dist: soupsieve==2.8.3
Requires-Dist: sqlalchemy==2.0.46
Requires-Dist: starlette==0.52.1
Requires-Dist: tenacity==9.1.4
Requires-Dist: tiktoken==0.12.0
Requires-Dist: tokenizers==0.22.2
Requires-Dist: tomli==2.4.0; python_full_version < '3.11'
Requires-Dist: tqdm==4.67.3
Requires-Dist: typer-slim==0.21.1
Requires-Dist: typer==0.21.1
Requires-Dist: typing-extensions==4.15.0
Requires-Dist: typing-inspection==0.4.2
Requires-Dist: urllib3==2.6.3
Requires-Dist: uvicorn==0.40.0
Requires-Dist: uvloop==0.22.1; platform_python_implementation != 'PyPy' and sys_platform != 'cygwin' and sys_platform != 'win32'
Requires-Dist: watchfiles==1.1.1
Requires-Dist: websockets==16.0
Requires-Dist: xxhash==3.6.0
Requires-Dist: yarl==1.22.0
Requires-Dist: zipp==3.23.0
Provides-Extra: mlflow
Requires-Dist: cryptography==46.0.6; extra == 'mlflow'
Requires-Dist: docker==7.1.0; extra == 'mlflow'
Requires-Dist: flask-cors==6.0.2; extra == 'mlflow'
Requires-Dist: flask==3.1.3; extra == 'mlflow'
Requires-Dist: graphene==3.4.3; extra == 'mlflow'
Requires-Dist: gunicorn==23.0.0; extra == 'mlflow'
Requires-Dist: huey==2.6.0; extra == 'mlflow'
Requires-Dist: matplotlib==3.10.8; extra == 'mlflow'
Requires-Dist: mlflow-skinny==3.9.0; extra == 'mlflow'
Requires-Dist: mlflow-tracing==3.9.0; extra == 'mlflow'
Requires-Dist: mlflow==3.9.0; extra == 'mlflow'
Requires-Dist: pandas==2.3.3; extra == 'mlflow'
Requires-Dist: pyarrow==21.0.0; extra == 'mlflow'
Requires-Dist: scikit-learn==1.8.0; extra == 'mlflow'
Requires-Dist: scipy==1.17.0; extra == 'mlflow'
Requires-Dist: skops==0.13.0; extra == 'mlflow'
Requires-Dist: waitress==3.0.2; extra == 'mlflow'
Description-Content-Type: text/markdown

# uuv-assistant-ai

<p align="center">
<a href="https://e2e-test-quest.github.io/uuv/">  
<picture>  
<img alt="UUV Logo" src="https://e2e-test-quest.github.io/uuv/img/uuv.png">  
</picture>  
</a>  
</p>

<h3 align="center">
Assistant AI - Image-based test generation
</h3>

<p align="center">
AI-powered assistant that helps testers and developers generate Cucumber BDD test scenarios from GUI screenshots and HTML content.
</p>

<p align="center">
<a href="https://pypi.org/project/uuv-assistant-ai/" target="_blank">
<img src="https://img.shields.io/pypi/v/uuv-assistant-ai?logo=pypi" alt="PyPI version"/>
</a>
<a href="https://www.python.org/downloads/" target="_blank">
<img src="https://img.shields.io/badge/using-python-3776AB?logo=python&logoColor=white" alt="python"/>
</a>
<a href="https://fastapi.tiangolo.com/" target="_blank">
<img src="https://img.shields.io/badge/using-fastapi-009688?logo=fastapi&logoColor=white" alt="fastapi"/>
</a>
<a href="https://python-poetry.org/" target="_blank">
<img src="https://img.shields.io/badge/using-poetry-60A5FA?logo=python-poetry&logoColor=white" alt="poetry"/>
</a>
<a href="https://opencollective.com/uuv" target="_blank">
<img src="https://img.shields.io/badge/support-us-00b69c?logo=opencollective&logoColor=white" alt="Support us on Open Collective"/>
</a>
<br/>
</p>

<div align="center">
<a href="https://pypi.org/project/uuv-assistant-ai/" target="_blank">
    <img alt="uuv-assistant-ai PyPI download count"
         src="https://img.shields.io/pypi/dm/uuv-assistant-ai?logo=pypi&label=uuv-assistant-ai">
</a>
<br/>
</div>

## What is uuv-assistant-ai?

`uuv-assistant-ai` is an AI-powered service that extends the UUV ecosystem by enabling AI-assisted test generation. It uses Vision Language Models (VLMs) and Large Language Models (LLMs) to:
- **Classify images** - Determine whether image elements are decorative or informative (and, for informative images, generate a suitable description)

This service integrates with the main [`@uuv/assistant`](https://e2e-test-quest.github.io/uuv/docs/tools/uuv-assistant) or [`@uuv/assistant-desktop`](https://e2e-test-quest.github.io/uuv/docs/tools/assistant-desktop) to provide a complete solution for E2E test generation.


## Getting started

### Prerequisites

- Python >=3.10, <3.15

### Environment Variables

Create a `.env` file with the following variables:

#### Required (for API access)

```env
LLM_API_URL=https://localhost:11434
LLM_API_KEY=your-api-key
LLM_MODEL=ministral-3:8b

VLM_API_URL=https://localhost:11434
VLM_API_KEY=your-api-key
VLM_MODEL=ministral-3:8b
```
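These variables are read from the process environment at startup (the package depends on `python-dotenv`, which loads `.env` files). As a minimal illustration of reading them yourself (this is not the service's own configuration code, and the fallback values are assumptions, not documented defaults):

```python
import os

# Names match the .env keys above; fallbacks are illustrative assumptions.
LLM_API_URL = os.environ.get("LLM_API_URL", "https://localhost:11434")
LLM_MODEL = os.environ.get("LLM_MODEL", "ministral-3:8b")
# The VLM can point at the same endpoint/model as the LLM if unset.
VLM_API_URL = os.environ.get("VLM_API_URL", LLM_API_URL)
VLM_MODEL = os.environ.get("VLM_MODEL", LLM_MODEL)
```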

### Setup with `pip`

```bash
pip install uuv-assistant-ai

# After installing from PyPI
uuv-assistant-ai
```
The API will be available at `http://localhost:8000`.

### Setup with `uv`

```bash
# Run the service directly with uvx; quote the extra so the shell doesn't expand it
uvx "uuv-assistant-ai[mlflow]"
```
The API will be available at `http://localhost:8000`.

### Programmatic Usage

```python
from uuv_assistant_ai.image_classifier import (
    UUVMultipleImageDescriberAgent,
    UUVImageClassifierAgent
)
from PIL import Image

# Describe images
describer = UUVMultipleImageDescriberAgent(
    vlm_api_url="https://api.openai.com/v1",
    vlm_api_key="your-key",
    vlm_model="gpt-4-vision-preview"
)

image = Image.open("screenshot.png")
descriptions = describer(image)

# Classify images
classifier = UUVImageClassifierAgent(
    llm_api_url="https://api.openai.com/v1",
    llm_api_key="your-key",
    llm_model="gpt-4"
)

result = classifier(
    html_content="<html>...</html>",
    css_selector=".element",
    image_description="A button labeled Submit"
)
```

### API Endpoints

#### 1. Classify Image (Unified)

Stream image analysis results including description and classification.

```bash
curl -X POST "http://localhost:8000/api/v1/image/classify-unified" \
  -F "html_content=<html>" \
  -F "css_selector=.element-selector" \
  -F "target_img_file=@screenshot.png"
```

**Response** (Server-Sent Events):

```json
{"image_description": "A button labeled 'Submit' with blue background"}
{"is_decorative": false, "confidence": 0.95, "analysis_details": "This is a functional button..."}
```
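The stream is a sequence of newline-delimited JSON objects. A client-side sketch of consuming it (the helper name is ours, not part of the package's API; it also tolerates an optional `data:` SSE prefix):

```python
import json
from typing import Iterator

def parse_classification_stream(lines: Iterator[str]) -> list[dict]:
    """Parse newline-delimited JSON events from the classify-unified stream."""
    events = []
    for line in lines:
        line = line.strip()
        # SSE payloads may be prefixed with "data: "; strip it if present
        if line.startswith("data:"):
            line = line[len("data:"):].strip()
        if line:
            events.append(json.loads(line))
    return events

# Example input mirroring the response shown above
sample = [
    '{"image_description": "A button labeled \'Submit\' with blue background"}',
    '{"is_decorative": false, "confidence": 0.95}',
]
events = parse_classification_stream(iter(sample))
```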

#### 2. Multiple Image Description

Describe multiple images in a single request.

```bash
curl -X POST "http://localhost:8000/api/v1/image/multiple-describe" \
  -F "target_img_file=@screenshot.png"
```

**Response**:

```json
{
    "descriptions": [
        { "element": "button", "description": "Submit button" },
        { "element": "input", "description": "Text input field" }
    ]
}
```

#### 3. Classify Image (Standard)

Classify an image using a pre-computed description together with the page's `html_content` and a `css_selector`.

```bash
curl -X POST "http://localhost:8000/api/v1/image/classify" \
  -F "html_content=<html>" \
  -F "css_selector=.element-selector" \
  -F "image_description=A button labeled Submit"
```

**Response**:

```json
{
    "is_decorative": false,
    "confidence": 0.95,
    "analysis_details": "This is a functional button..."
}
```
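A typical consumer of this response is accessibility tooling: decorative images get an empty `alt` attribute, informative ones keep their description. A small illustrative helper (ours, not part of the package; the confidence threshold is an assumption):

```python
def alt_text_for(result: dict, description: str, min_confidence: float = 0.8) -> str:
    """Map a classification result to an HTML alt attribute value.

    Decorative images get alt="" (ignored by screen readers);
    informative or low-confidence results keep the description.
    """
    if result.get("is_decorative") and result.get("confidence", 0.0) >= min_confidence:
        return ""
    return description

# Using the example response above
result = {"is_decorative": False, "confidence": 0.95}
alt = alt_text_for(result, "A button labeled Submit")
```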

#### When to use which endpoint

**Quick Start (Recommended)**  
Use `classify-unified` for a simple, one-call approach that returns both image description and classification.

**Advanced / Step-by-step**  
For more control, use the two-step approach:

1. `multiple-describe` - Get descriptions for all UI elements in one screenshot
2. `classify` - Classify each element using the pre-computed description

This step-by-step approach is useful when you need to:

- Reuse descriptions for multiple operations
- Analyze multiple elements separately
- Build custom workflows with intermediate processing
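The two-step flow can be sketched as plain orchestration, with injected callables standing in for the two HTTP endpoints (all names here are illustrative; real code would call `multiple-describe` and `classify` over HTTP):

```python
from typing import Callable

def classify_elements(
    describe: Callable[[bytes], list[dict]],    # stands in for multiple-describe
    classify: Callable[[str, str, str], dict],  # stands in for classify
    screenshot: bytes,
    html_content: str,
) -> list[dict]:
    """Describe every element once, then classify each with its description."""
    results = []
    for item in describe(screenshot):
        # The element name is used as a stand-in for a real CSS selector here
        verdict = classify(html_content, item["element"], item["description"])
        results.append({**item, **verdict})
    return results

# Stub callables for illustration only
fake_describe = lambda img: [{"element": "button", "description": "Submit button"}]
fake_classify = lambda html, sel, desc: {"is_decorative": False, "confidence": 0.9}

out = classify_elements(fake_describe, fake_classify, b"", "<html></html>")
```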

## Integration with UUV Ecosystem

The `uuv-assistant-ai` service integrates with:

- [`@uuv/assistant`](https://e2e-test-quest.github.io/uuv/docs/tools/uuv-assistant) - Web interface for test generation (integrates with this service)
- [`@uuv/cypress`](https://www.npmjs.com/package/@uuv/cypress) - Cypress execution engine
- [`@uuv/playwright`](https://www.npmjs.com/package/@uuv/playwright) - Playwright execution engine

## Documentation

Full documentation: [https://e2e-test-quest.github.io/uuv/](https://e2e-test-quest.github.io/uuv/)

## License

<a href="https://github.com/e2e-test-quest/uuv/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-MIT-blue" alt="MIT license"/>
</a>

This project is licensed under the terms of the [MIT license](https://github.com/e2e-test-quest/uuv/blob/main/LICENSE).

## Authors

- [`@luifr10`](https://github.com/luifr10) - Louis Fredice NJAKO MOLOM
- [`@stanlee974`](https://github.com/stanlee974) - Stanley SERVICAL

## Support UUV

If you want to help UUV grow, you can fund the project directly via [Open Collective](https://opencollective.com/uuv). Every contribution helps us dedicate more time and energy to improving this open-source tool.

<a href="https://opencollective.com/uuv/contribute" target="_blank">
  <img src="https://opencollective.com/uuv/contribute/button@2x.png?color=blue" width=300 />
</a>
