Metadata-Version: 2.4
Name: avise
Version: 0.2.0
Summary: AI Vulnerability Identification & Security Evaluation framework
Author: Joni Kemppainen, Niklas Raesalmi
Author-email: Mikko Lempinen <mikko.lempinen@oulu.fi>
License-Expression: MIT
License-File: LICENSE
Requires-Python: >=3.10
Requires-Dist: accelerate>=1.12.0
Requires-Dist: mistral-common>=1.11.0
Requires-Dist: numpy>=2.3.5
Requires-Dist: ollama>=0.3.0
Requires-Dist: openai>=1.0.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: requests>=2.32.5
Requires-Dist: scipy>=1.17.0
Requires-Dist: torch>=2.10.0
Requires-Dist: transformers>=5.2.0
Requires-Dist: triton-windows; platform_system == 'Windows'
Provides-Extra: docs
Requires-Dist: sphinx; extra == 'docs'
Requires-Dist: sphinx-github-style; extra == 'docs'
Requires-Dist: sphinx-rtd-theme; extra == 'docs'
Provides-Extra: unit-tests
Requires-Dist: pytest>=9.0.2; extra == 'unit-tests'
Description-Content-Type: text/markdown


![](/docs/assets/avise_logo.png)

# AVISE - AI Vulnerability Identification & Security Evaluation

A framework for identifying vulnerabilities in and evaluating the security of AI systems.

#### Full Documentation: https://avise.readthedocs.io

<br>
<br>

## Quickstart for evaluating Language Models

### Prerequisites

- Python 3.10+
- Docker (for running models locally with Ollama)

### 1. Install AVISE

Install with
- **pip:**
    ```bash
    pip install avise
    ```

- **uv:**

    ```bash
    uv add avise
    ```

### 2. Run a model

You can use AVISE to evaluate any model accessible via an API by configuring a Connector. This Quickstart assumes you are
running a language model in the Ollama Docker container. To evaluate models deployed in other ways, see
the [Full Documentation](https://avise.readthedocs.io) and the template connector configuration files in the `AVISE/avise/configs/connector/languagemodel/` directory of this repository.

#### Running a language model locally with Docker & Ollama

- Clone this repository to your local machine with:

```bash
git clone https://github.com/ouspg/AVISE.git
```

- Create the Ollama Docker container
    - for **GPU** accelerated inference with:
        ```bash
        docker compose -f AVISE/docker/ollama/docker-compose.yml up -d
        ```
    - or for **CPU** inference with:
        ```bash
        docker compose -f AVISE/docker/ollama/docker-compose-cpu.yml up -d
        ```

- Pull the Ollama model you want to evaluate into the container with:
    ```bash
    docker exec -it avise-ollama ollama pull <model_name>
    ```
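
Once the container is up, you may want to confirm that the Ollama API is reachable and see which models have been pulled before running an evaluation. Below is a minimal sketch using only the Python standard library, assuming the default Ollama port `11434`; the helper names are illustrative and not part of AVISE:

```python
import json
from urllib.request import urlopen


def tags_url(base: str) -> str:
    """Build the Ollama endpoint that lists locally pulled models."""
    return base.rstrip("/") + "/api/tags"


def list_pulled_models(base: str = "http://localhost:11434") -> list[str]:
    """Return the names of models currently pulled into the Ollama container."""
    with urlopen(tags_url(base), timeout=5) as resp:
        payload = json.load(resp)
    return [model["name"] for model in payload.get("models", [])]


# Call list_pulled_models() once the container is running; the returned
# names (e.g. "llama3.2:latest") are what you pass to avise via --target.
```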

### 3. Evaluate the model with a Security Evaluation Test (SET)

#### Basic usage

```bash
avise --SET <SET_name> --connectorconf <connector_name> [options]
```

For example, you can run the `prompt_injection` SET on the model you pulled into the Ollama Docker container with:

```bash
avise --SET prompt_injection --connectorconf ollama_lm --target <model_name>
```

To list the available SETs, run the command:
```bash
avise --SET-list
```


## Advanced usage

### Configuring Connectors

You can create your own connector configuration files, or if you cloned the AVISE repository, you can modify the existing connector configuration files in `AVISE/avise/configs/connector/languagemodel/`.

For example, you can edit the default Ollama Connector configuration file `AVISE/avise/configs/connector/languagemodel/ollama.json` and insert the name of a pulled Ollama model to use as the default target:

```json
{
    "target_model": {
        "connector": "ollama-lm",
        "type": "language_model",
        "name": "<NAME_OF_TARGET_MODEL>",
        "api_url": "http://localhost:11434",
        "api_key": null
    }
}
```
The `api_url` above is the default Ollama API address.

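
For a model hosted behind a remote API, a connector configuration might look like the sketch below. This assumes the same schema as the Ollama example above; the `connector` value and field contents are placeholders, so consult the bundled templates in `AVISE/avise/configs/connector/languagemodel/` for the exact format:

```json
{
    "target_model": {
        "connector": "openai-lm",
        "type": "language_model",
        "name": "<NAME_OF_TARGET_MODEL>",
        "api_url": "https://api.openai.com/v1",
        "api_key": "<YOUR_API_KEY>"
    }
}
```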
To use custom configuration files for SETs and/or Connectors, pass their paths with the `--SETconf` and `--connectorconf` arguments:

```bash
avise --SET prompt_injection --SETconf AVISE/avise/configs/SET/languagemodel/single_turn/prompt_injection_mini.json --connectorconf AVISE/avise/configs/connector/languagemodel/ollama.json
```

### Required Arguments

| Argument | Description |
|----------|-------------|
| `--SET`, `-s` | Security Evaluation Test to run (e.g., `prompt_injection`, `context_test`) |
| `--connectorconf`, `-c` | Path to a Connector configuration JSON file, or one of the predefined connector names: `ollama_lm`, `openai_lm`, `genericrest_lm` |


### Optional Arguments

| Argument | Description |
|----------|-------------|
| `--SETconf` | Path to a SET configuration JSON file. If not given, the preconfigured SET configuration file is used. |
| `--target`, `-t` | Name of the target model/system to evaluate. Overrides target name from connector configuration file. |
| `--format`, `-f` | Report format: `json`, `html`, `md` |
| `--runs`, `-r` | How many times each SET is executed |
| `--output` | Custom output file path |
| `--reports-dir` | Base directory for reports (default: `avise-reports/`) |
| `--SET-list` | List available Security Evaluation Tests |
| `--connector-list` | List available Connectors |
| `--verbose`, `-v` | Enable verbose logging |
| `--version`, `-V` | Print version  |


