Metadata-Version: 2.4
Name: evidentia-ai
Version: 0.7.0
Summary: LLM-powered risk statement generator and evidence validator for Evidentia
Project-URL: Homepage, https://github.com/allenfbyrd/evidentia
Project-URL: Repository, https://github.com/allenfbyrd/evidentia
Project-URL: Issues, https://github.com/allenfbyrd/evidentia/issues
Project-URL: Changelog, https://github.com/allenfbyrd/evidentia/blob/main/CHANGELOG.md
Author-email: Allen Byrd <allen@allenfbyrd.com>
License-Expression: Apache-2.0
Keywords: compliance,grc,instructor,litellm,llm,nist-800-30,risk-statement
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Information Technology
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Typing :: Typed
Requires-Python: >=3.12
Requires-Dist: evidentia-core<0.8.0,>=0.7.0
Requires-Dist: instructor>=1.6
Requires-Dist: litellm<2.0,>=1.83.0
Requires-Dist: tenacity>=9.0
Description-Content-Type: text/markdown

# evidentia-ai

LLM-powered features for [Evidentia](https://github.com/allenfbyrd/evidentia): risk statement generation and (in Phase 3) evidence validation.

Uses **LiteLLM** for provider-agnostic LLM calls and **Instructor** for extracting structured output directly into Pydantic models.
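As a rough sketch of that pattern, the snippet below defines a Pydantic model loosely following NIST SP 800-30 risk-statement elements and passes it to Instructor as a `response_model`. The field names, prompt, and `generate_risk_statement` helper are illustrative assumptions, not evidentia-ai's actual schema or API.

```python
from pydantic import BaseModel, Field


class RiskStatement(BaseModel):
    """Illustrative risk-statement shape; not evidentia-ai's real schema."""

    threat_source: str = Field(description="Who or what initiates the threat event")
    threat_event: str = Field(description="What the threat source could do")
    vulnerability: str = Field(description="The control gap being exploited")
    impact: str = Field(description="Adverse effect if the event occurs")


def generate_risk_statement(gap: str) -> RiskStatement:
    # Imports kept local: this call requires provider credentials and network.
    import instructor
    import litellm

    # Instructor wraps the LiteLLM completion function so the response is
    # parsed and validated into the Pydantic model above.
    client = instructor.from_litellm(litellm.completion)
    return client.chat.completions.create(
        model="anthropic/claude-sonnet-4-6",  # any LiteLLM model string works
        response_model=RiskStatement,
        messages=[
            {
                "role": "user",
                "content": f"Write a risk statement for this control gap: {gap}",
            }
        ],
    )
```

Because the response is validated against the model, malformed LLM output raises an error instead of propagating silently.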

## Provides

- **Risk Statement Generator** — Convert control gaps into NIST SP 800-30-compliant risk statements
- **Evidence Validator** *(Phase 3)* — Assess evidence sufficiency using LLM analysis
- **LLM Client** — Provider-agnostic wrapper around LiteLLM with retry, rate limiting, and structured output
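The retry behavior the LLM client gets from tenacity amounts to exponential backoff around the provider call. A minimal stdlib sketch of the idea (the `with_retries` helper is hypothetical, not part of this package's API):

```python
import time


def with_retries(call, attempts=4, base_delay=1.0):
    """Invoke `call` with exponential backoff: wait base_delay * 2**attempt
    seconds between failures, re-raising after the final attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the last error
            time.sleep(base_delay * 2 ** attempt)
```

In practice tenacity also distinguishes retryable errors (rate limits, timeouts) from permanent ones (auth failures), which this sketch omits.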

## Supported providers

Any provider supported by LiteLLM:
- Anthropic Claude (default: `claude-sonnet-4-6`)
- OpenAI GPT
- Google Gemini
- AWS Bedrock
- Azure OpenAI
- Local models via Ollama, vLLM, etc.
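With LiteLLM, switching providers is just a matter of the model string. The names below are illustrative examples (model identifiers vary by provider and change over time), not a fixed list this package ships:

```python
# Example LiteLLM model strings, one per provider family above.
# Azure requires your own deployment name in place of the placeholder.
MODELS = {
    "anthropic": "anthropic/claude-sonnet-4-6",
    "openai": "gpt-4o",
    "gemini": "gemini/gemini-1.5-pro",
    "bedrock": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    "azure": "azure/<your-deployment-name>",
    "ollama": "ollama/llama3",
}
```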

## Install

```bash
pip install evidentia-ai
```

License: Apache 2.0
