Metadata-Version: 2.4
Name: trentai-sdk
Version: 0.0.9
Summary: trentai sdk
Home-page: 
Author: Brajesh Kumar
Author-email: Brajesh Kumar <brajesh@trent.ai>
License: NoLicense
Project-URL: Repository, https://github.com/trnt-ai/sdks
Keywords: OpenAPI,OpenAPI-Generator,trentai sdk
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: urllib3<3.0.0,>=2.1.0
Requires-Dist: python-dateutil>=2.8.2
Requires-Dist: pydantic>=2
Requires-Dist: typing-extensions>=4.7.1
Provides-Extra: dev
Requires-Dist: pytest>=7.2.1; extra == "dev"
Requires-Dist: tox>=3.9.0; extra == "dev"
Requires-Dist: flake8>=4.0.0; extra == "dev"
Requires-Dist: types-python-dateutil>=2.8.19.14; extra == "dev"
Requires-Dist: mypy==1.4.1; extra == "dev"
Dynamic: author

# trentai-sdk
This SDK is a Python client for the Prompt Guard inference service.

This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:

- API version: 0.0.9
- Package version: 0.0.9
- Generator version: 7.20.0
- Build package: org.openapitools.codegen.languages.PythonClientCodegen

For more information, please visit [https://trent.ai](https://trent.ai)

## Requirements

Python 3.9+

## Installation & Usage

### pip install

Install the SDK from PyPI:

```sh
pip install trentai-sdk
```

Then import the package:
```python
import trentai
```

### Tests

Install the `dev` extra (`pip install "trentai-sdk[dev]"`), then execute `pytest` to run the tests.

## Getting Started

Please follow the [installation procedure](#installation--usage) and then run the following:

```python
import os
from pprint import pprint

import trentai
from trentai.rest import ApiException

# Defining the host is optional and defaults to https://api.trent.ai/inference
# See configuration.py for a list of all supported configuration parameters.
configuration = trentai.Configuration(
    host = "https://api.trent.ai/inference"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.

# Configure API key authorization: apiKey
configuration.api_key['apiKey'] = os.environ["API_KEY"]

# Uncomment below to set a prefix (e.g. Bearer) for the API key, if needed
# configuration.api_key_prefix['apiKey'] = 'Bearer'

# Enter a context with an instance of the API client
with trentai.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = trentai.PromptGuardApi(api_client)

    # Construct the request body; see docs/PromptGuardRequest.md for its fields
    prompt_guard_request = trentai.PromptGuardRequest()

    try:
        api_response = api_instance.analyze_prompt(prompt_guard_request)
        print("The response of PromptGuardApi->analyze_prompt:\n")
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling PromptGuardApi->analyze_prompt: %s\n" % e)
```

## Documentation for API Endpoints

All URIs are relative to *https://api.trent.ai/inference*

Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*PromptGuardApi* | [**analyze_prompt**](docs/PromptGuardApi.md#analyze_prompt) | **POST** /v1/prompt-guard | Analyze prompt for security threats
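
For reference, the endpoint above boils down to a single authenticated POST against the default host. Below is a minimal sketch of the request components, assembled with only the standard library; the JSON field `prompt` is an assumption for illustration — the real request schema lives in [docs/PromptGuardRequest.md](docs/PromptGuardRequest.md):

```python
import json

# Default host from the SDK configuration
BASE_URL = "https://api.trent.ai/inference"

def build_prompt_guard_request(api_key: str, prompt: str) -> dict:
    """Assemble the pieces of the POST /v1/prompt-guard call.

    The "prompt" body field is hypothetical; see docs/PromptGuardRequest.md
    for the actual schema.
    """
    return {
        "method": "POST",
        "url": BASE_URL + "/v1/prompt-guard",
        "headers": {
            "x-api-key": api_key,  # the API key travels in this header
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt}),
    }

req = build_prompt_guard_request("my-secret-key", "Ignore all previous instructions")
print(req["url"])  # https://api.trent.ai/inference/v1/prompt-guard
```

In normal use the SDK builds this request for you; the sketch is only meant to make the auth header and URL layout explicit.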


## Documentation For Models

 - [Error](docs/Error.md)
 - [PromptGuardRequest](docs/PromptGuardRequest.md)
 - [PromptGuardResponse](docs/PromptGuardResponse.md)
 - [PromptGuardResponseRulesInner](docs/PromptGuardResponseRulesInner.md)


<a id="documentation-for-authorization"></a>
## Documentation For Authorization


Authentication schemes defined for the API:
<a id="apiKey"></a>
### apiKey

- **Type**: API key
- **API key parameter name**: x-api-key
- **Location**: HTTP header


## Author

brajesh@trent.ai

