Metadata-Version: 2.4
Name: custom-llm-eval
Version: 0.1.9
Summary: A framework for evaluating Large Language Models, with built-in metrics for bias, toxicity, and relevancy, plus support for custom evaluations, conversational test cases, release tracking, and token counting
Author: Atul B
Author-email: atulbmysuru@gmail.com
License: MIT
Project-URL: Homepage, https://github.com/atulbmysuru/custom-llm-eval
Project-URL: Issues, https://github.com/atulbmysuru/custom-llm-eval/issues
Project-URL: Repository, https://github.com/atulbmysuru/custom-llm-eval
Keywords: llm,evaluation,deepeval,ai,testing,bias,toxicity,nlp,token-counting
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Testing
Requires-Python: >=3.8
Requires-Dist: deepeval>=0.21.0
Requires-Dist: requests>=2.28.0
Requires-Dist: python-dotenv>=0.19.0
Requires-Dist: tiktoken>=0.5.0
