Metadata-Version: 2.1
Name: gllm-inference-binary
Version: 0.2.38
Summary: A library containing components related to model inference in Gen AI applications.
Author: Henry Wicaksono
Author-email: henry.wicaksono@gdplabs.id
Requires-Python: >=3.11,<3.13
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Provides-Extra: anthropic
Provides-Extra: google-genai
Provides-Extra: google-vertexai
Provides-Extra: huggingface
Provides-Extra: openai
Provides-Extra: twelvelabs
Provides-Extra: voyage
Requires-Dist: anthropic (==0.49.0) ; extra == "anthropic"
Requires-Dist: gllm-core-binary
Requires-Dist: huggingface-hub (==0.26.2) ; extra == "huggingface"
Requires-Dist: jinja2 (==3.1.4)
Requires-Dist: langchain (==0.3.0)
Requires-Dist: langchain-google-genai (==2.0.6) ; extra == "google-genai"
Requires-Dist: langchain-google-vertexai (==2.0.8) ; extra == "google-vertexai"
Requires-Dist: langchain-openai (>=0.3.1,<0.4.0) ; extra == "openai"
Requires-Dist: langchain-voyageai (==0.1.4) ; extra == "voyage"
Requires-Dist: libmagic (>=1.0,<2.0) ; sys_platform == "win32"
Requires-Dist: pandas (==2.2.2)
Requires-Dist: protobuf (==5.28.2)
Requires-Dist: python-magic (==0.4.27)
Requires-Dist: python-magic-bin (>=0.4.14,<0.5.0) ; sys_platform == "win32"
Requires-Dist: sentencepiece (==0.2.0)
Requires-Dist: transformers (==4.46.1) ; extra == "huggingface"
Requires-Dist: twelvelabs (==0.4.4) ; extra == "twelvelabs"
Description-Content-Type: text/markdown

# GLLM Inference

## Description

A library containing components related to model inference in Gen AI applications.

## Installation

1. Python v3.11 or v3.12 (this library requires Python >=3.11,<3.13):

You can install Python using [Miniconda](https://docs.anaconda.com/free/miniconda/index.html).

2. Make sure you're in the `base` conda environment:
```bash
conda activate
```

3. [Poetry](https://python-poetry.org/docs/) v1.8.1 or above:

You can install Poetry using cURL (you need Python to install Poetry):
```bash
curl -sSL https://install.python-poetry.org | python3 -
```

4. Install the library using Poetry:
```bash
# Latest
poetry add "git+ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git#subdirectory=libs/gllm-inference"

# Specific version
poetry add "git+ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git@gllm_inference-v0.0.1-beta.1#subdirectory=libs/gllm-inference"

# Specific Branch Name
poetry add "git+ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git@<BRANCH NAME>#subdirectory=libs/gllm-inference"

# With extra dependencies
poetry add "git+ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git#subdirectory=libs/gllm-inference" --extras "extra1 extra2"
```
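In scripts, the branch-specific URL can be assembled from a variable before passing it to `poetry add`. The branch name below is a placeholder for illustration, not a real branch in the repository:

```shell
# Placeholder branch name for illustration only.
BRANCH="my-feature-branch"
URL="git+ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git@${BRANCH}#subdirectory=libs/gllm-inference"
echo "${URL}"
```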

Available extras:
- `anthropic`: installs dependencies for Anthropic models
- `google-genai`: installs dependencies for Google Generative AI models
- `google-vertexai`: installs dependencies for Google Vertex AI models
- `huggingface`: installs dependencies for HuggingFace models
- `openai`: installs dependencies for OpenAI models
- `twelvelabs`: installs dependencies for TwelveLabs models
- `voyage`: installs dependencies for Voyage AI models
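After `poetry add`, the dependency entry written to your project's `pyproject.toml` looks roughly like the sketch below; the dependency name and the chosen extra are illustrative:

```toml
[tool.poetry.dependencies]
gllm-inference = { git = "ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git", subdirectory = "libs/gllm-inference", extras = ["openai"] }
```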

5. At this step, you can deactivate the Miniconda environment, as Poetry will create and manage its own virtual environment for you.
```bash
conda deactivate
```
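To confirm the installation, you can ask Poetry to show the package. The distribution name below is taken from the package metadata (`gllm-inference-binary`) and is an assumption that may differ in your setup:

```shell
# Lists the installed package and its version; prints a fallback
# message if the package (or Poetry itself) is not available.
poetry show gllm-inference-binary 2>/dev/null || echo "package not found"
```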

## Managing Dependencies
1. Go to the root folder of the `gllm-inference` module, e.g. `cd libs/gllm-inference`.
2. Run `poetry shell` to create a virtual environment.
3. Run `poetry lock` to create a lock file if you haven't done so yet.
4. Run `poetry install` to install the `gllm-inference` requirements for the first time.
5. Run `poetry update` if you update any dependency module version at `pyproject.toml`.


## Contributing
Please refer to this [Python Style Guide](https://docs.google.com/document/d/1uRggCrHnVfDPBnG641FyQBwUwLoFw0kTzNqRm92vUwM/edit?usp=sharing)
for information about the code style, documentation standards, and SCA tools you need to use when contributing to this project.

1. Activate `pre-commit` hooks using `pre-commit install`.
2. Run `poetry shell` to create a virtual environment.
3. Run `poetry lock` to create a lock file if you haven't done so yet.
4. Run `poetry install` to install the `gllm-inference` requirements for the first time.
5. Run `which python` to get the path to reference as the Visual Studio Code interpreter path (`Ctrl`+`Shift`+`P` or `Cmd`+`Shift`+`P`, then "Python: Select Interpreter").
6. Try running the unit test to see if it's working:
```bash
poetry run pytest -s tests/unit_tests/
```
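As an alternative to `which python` in step 5, Poetry can print the path of its managed virtual environment directly via the standard `poetry env info --path` command:

```shell
# Prints the path of the Poetry-managed virtual environment;
# use it as the interpreter path in Visual Studio Code. Falls
# back to a message if no environment is found.
poetry env info --path 2>/dev/null || echo "poetry environment not found"
```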

