Metadata-Version: 2.4
Name: grammarticle
Version: 1.1.0
Summary: Grammarticle spaCy pipeline
Home-page: https://github.com/upunaprosk/grammarticle
Author: upunaprosk
Author-email: 
License: MIT
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Operating System :: OS Independent
Requires-Python: >=3.7
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: spacy<3.9.0,>=3.8.5
Requires-Dist: spacy-transformers<1.4.0,>=1.3.8
Requires-Dist: spacy-huggingface-hub==0.0.10
Dynamic: author
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license
Dynamic: license-file
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# Grammarticle

<img src="https://raw.githubusercontent.com/upunaprosk/grammarticle/master/logo.png" alt="GrammArticle Logo" width="150" align="right" />

GrammArticle is a RoBERTa-based grammar checker for English article usage. It detects three types of article errors:

1) Missing – when an article is absent but required
2) Wrong – when an incorrect article is used (e.g., "a apple" instead of "an apple", or "the" instead of "a/an")
3) Redundant – when an article is unnecessary (e.g., "the furniture")

## Installation

GrammArticle is trained on publicly available GEC datasets with synthetic augmentation and is distributed as a spaCy pipeline.

`pip install grammarticle`

or

```bash
pip install spacy-transformers
python -m spacy download en_core_web_trf
pip install https://huggingface.co/iproskurina/en_grammarticle/resolve/main/en_grammarticle-1-py3-none-any.whl
```

## Usage

```python
import grammarticle
nlp = grammarticle.load()
text = "This is sentence"  # intentionally missing the article "a"
doc = nlp(text)
for span in doc.spans.get("sc", []):
    print(f"[{span.label_}] {span.text}")
```

Output:
```
[Missing] sentence
```
___


![Example Output](https://raw.githubusercontent.com/upunaprosk/grammarticle/master/example.png)

___
