Metadata-Version: 2.4
Name: calculator_main
Version: 1.0.1
Summary: Calculate BLEU, ROUGE, and PPL scores for large language models
Home-page: https://github.com/yourusername/casrel_datautils
Author: CH.Z
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: License
Requires-Dist: rouge
Requires-Dist: nltk
Dynamic: author
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

A toolkit for computing BLEU, ROUGE, and perplexity (PPL) scores for large language models.

## Installation
    pip install calculator_main

## Usage
    from calculator_main.main import LLMAssessment

## Examples
    # Example: perplexity calculation
    sentences = [
        ['I', 'have', 'a', 'pen'],
        ['He', 'has', 'a', 'book'],
        ['She', 'has', 'a', 'cat']
    ]
    unigram = {
        'I': 1 / 12, 'have': 1 / 12, 'a': 3 / 12, 'pen': 1 / 12,
        'He': 1 / 12, 'has': 2 / 12, 'book': 1 / 12, 'She': 1 / 12, 'cat': 1 / 12
    }
    perplexity = LLMAssessment.calculate_perplexity(sentences, unigram)
    print("Perplexity:", perplexity)
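
The perplexity in this example follows the standard definition: the exponential of the negative average log-probability over all tokens. A minimal stdlib-only sketch (independent of this package; the function name `unigram_perplexity` is illustrative, not part of the API):

```python
import math

def unigram_perplexity(sentences, unigram):
    """Perplexity = exp(-(1/N) * sum(log p(w))) over all N tokens."""
    log_prob = 0.0
    n_tokens = 0
    for sentence in sentences:
        for word in sentence:
            log_prob += math.log(unigram[word])
            n_tokens += 1
    return math.exp(-log_prob / n_tokens)

sentences = [
    ['I', 'have', 'a', 'pen'],
    ['He', 'has', 'a', 'book'],
    ['She', 'has', 'a', 'cat']
]
unigram = {
    'I': 1 / 12, 'have': 1 / 12, 'a': 3 / 12, 'pen': 1 / 12,
    'He': 1 / 12, 'has': 2 / 12, 'book': 1 / 12, 'She': 1 / 12, 'cat': 1 / 12
}
print(unigram_perplexity(sentences, unigram))  # ~8.12 for this corpus
```

The unigram table here is simply the word frequency over the 12 tokens of the corpus (e.g. 'a' appears 3 times, so p('a') = 3/12).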

    # Example: ROUGE score calculation
    generated_text = "This is some generated text."
    reference_text = "This is a reference text."
    rouge_scores = LLMAssessment.calculate_rouge(generated_text, reference_text)
    print("ROUGE scores:", rouge_scores)
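
ROUGE-1, the simplest of the ROUGE family, scores unigram overlap between the generated and reference texts. A minimal stdlib-only sketch of the idea (this is not the package's implementation, which delegates to the `rouge` dependency; `rouge_1` and its whitespace tokenization are illustrative):

```python
from collections import Counter

def rouge_1(generated, reference):
    """ROUGE-1: precision, recall, and F1 over overlapping unigrams."""
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((gen & ref).values())  # multiset intersection size
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {'p': precision, 'r': recall, 'f': f1}

scores = rouge_1("This is some generated text.", "This is a reference text.")
print(scores)  # 3 of 5 unigrams overlap -> p = r = f = 0.6
```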

    # Example: BLEU score calculation
    candidate_text = ["This", "is", "some", "generated", "text"]
    reference_texts = [
        ["This", "is", "a", "reference", "text"],
        ["This", "is", "another", "reference", "text"]
    ]
    bleu_scores = LLMAssessment.calculate_bleu(reference_texts, candidate_text)
    print("BLEU scores:", bleu_scores)
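
BLEU combines clipped (modified) n-gram precision with a brevity penalty; note the argument order above matches NLTK's convention of references first, candidate second. A stdlib-only sketch of the unigram case, BLEU-1 (illustrative, not the package's implementation, which uses `nltk`):

```python
import math
from collections import Counter

def bleu_1(references, candidate):
    """BLEU-1: modified unigram precision times a brevity penalty."""
    cand_counts = Counter(candidate)
    # Clip each candidate count by the maximum count seen in any reference.
    max_ref = Counter()
    for ref in references:
        for word, count in Counter(ref).items():
            max_ref[word] = max(max_ref[word], count)
    clipped = sum(min(count, max_ref[word]) for word, count in cand_counts.items())
    precision = clipped / len(candidate)
    # Brevity penalty against the reference length closest to the candidate's.
    ref_len = min((len(r) for r in references),
                  key=lambda length: (abs(length - len(candidate)), length))
    bp = 1.0 if len(candidate) > ref_len else math.exp(1 - ref_len / len(candidate))
    return bp * precision

candidate_text = ["This", "is", "some", "generated", "text"]
reference_texts = [
    ["This", "is", "a", "reference", "text"],
    ["This", "is", "another", "reference", "text"]
]
print(bleu_1(reference_texts, candidate_text))  # 3/5 unigrams match -> 0.6
```

Full BLEU averages such precisions over 1- to 4-grams in log space; with short texts like these, the higher-order precisions are often zero, so NLTK's smoothing functions are typically needed.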
