Metadata-Version: 2.3
Name: minimal-slt
Version: 0.1.2
Summary: A minimal Sign Language Translation package
Keywords: sign-language,SLT,sign-language-translation,huggingface,pytorch
Author: Shakib Yazdani
Author-email: Shakib Yazdani <shakibyzn@gmail.com>
License: Apache-2.0
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Education
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Image Processing
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: Apache Software License
Requires-Dist: accelerate>=0.26.0
Requires-Dist: opencv-contrib-python>=4.13.0.92
Requires-Dist: opencv-python-headless>=4.13.0.92
Requires-Dist: pandas>=3.0.0
Requires-Dist: sacrebleu>=2.6.0
Requires-Dist: timm>=1.0.24
Requires-Dist: torch>=2.10.0
Requires-Dist: torchvision>=0.25.0
Requires-Dist: transformers<5.0.0
Requires-Python: >=3.12
Project-URL: Repository, https://github.com/shakibyzn/minimal-slt
Description-Content-Type: text/markdown

# Minimal-SLT

## Goal and Purpose
Minimal-SLT is a **minimal** and **clean** sign language translation (SLT) package built on Hugging Face 🤗 and PyTorch for educational purposes. It targets the Phoenix14T dataset (DGS → German), but it can easily be adapted to other datasets.

## Features
- Feature extraction (e.g., I3D and timm-supported models)
- Transformer model
- Wandb logging
- HF trainer
- BLEU, chrF, ROUGE, and BLEURT evaluation

## Installation
Minimal-SLT uses [uv](https://docs.astral.sh/uv/). Make sure uv is installed first.
```bash
uv venv .venv
source .venv/bin/activate
uv pip install minimal-slt
# for BLEURT and ROUGE evaluation
git clone https://github.com/google-research/bleurt.git
cd bleurt
uv pip install .
uv pip install rouge_score
```

## Usage

### Feature extraction
```bash
# timm-supported
uv run minimal_slt feat --config configs/base_config.yaml --feat_type timm --timm_model "timm/vit_base_patch16_clip_224.openai"
# python -m minimal_slt feat --config configs/base_config.yaml --feat_type timm --timm_model "timm/vit_base_patch16_clip_224.openai"
```
For I3D, first download the WLASL pre-trained weights from the [WLASL](https://github.com/dxli94/WLASL/tree/master?tab=readme-ov-file) repository.
```bash
# I3D
uv run minimal_slt feat --config configs/base_config.yaml
# python -m minimal_slt feat --config configs/base_config.yaml
```

### Training
```bash
uv run minimal_slt train --config configs/base_config.yaml --outpath your_minimal_slt_path
# python -m minimal_slt train --config configs/base_config.yaml --outpath your_minimal_slt_path
```
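At a high level, training fits a sequence-to-sequence Transformer that maps pre-extracted sign features to target-language tokens. The following is a schematic sketch in plain PyTorch; all dimensions and layer counts are illustrative, not the package's actual architecture.

```python
import torch
import torch.nn as nn

# Schematic seq2seq model over pre-extracted sign features (illustrative sizes).
feat_dim, d_model, vocab = 512, 256, 1000
proj = nn.Linear(feat_dim, d_model)      # project visual features into model space
tok_emb = nn.Embedding(vocab, d_model)   # target-token embeddings
transformer = nn.Transformer(d_model=d_model, batch_first=True,
                             num_encoder_layers=2, num_decoder_layers=2)
lm_head = nn.Linear(d_model, vocab)      # map decoder states to token logits

src = proj(torch.randn(1, 30, feat_dim))         # 30 frame features
tgt = tok_emb(torch.randint(0, vocab, (1, 12)))  # 12 target tokens (teacher forcing)
logits = lm_head(transformer(src, tgt))          # (1, 12, vocab)

print(logits.shape)
```

Training would then minimize cross-entropy between `logits` and the shifted target tokens; the package delegates this loop to the HF trainer.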

### Inference
```bash
uv run minimal_slt test --config configs/base_config.yaml --model_path your_minimal_slt_path --output_dir ./artifacts/results
# python -m minimal_slt test --config configs/base_config.yaml --model_path your_minimal_slt_path --output_dir ./artifacts/results
```

### Evaluation
```bash
uv run minimal_slt eval -i ./artifacts/results
# python -m minimal_slt eval -i ./artifacts/results
```

## Contact
Please open an issue if you have questions or run into problems with the code.

For general questions, email me at `shakibyzn <at> gmail.com`.