Metadata-Version: 2.4
Name: bilbystats
Version: 0.1.8
Summary: Python package of functions for performing stats for bilby
Home-page: https://github.com/sjdavenport/bilbystats/
Download-URL: https://github.com/bilbyai/bilbystats/
Author: Samuel DAVENPORT
Author-email: 12sdavenport@gmail.com
License: MIT
Keywords: LLMs,Transformers
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy
Requires-Dist: matplotlib
Requires-Dist: scipy
Requires-Dist: transformers>=4.48.0
Requires-Dist: datasets
Requires-Dist: evaluate
Requires-Dist: torch
Requires-Dist: pandas
Requires-Dist: openai
Requires-Dist: dotenv
Requires-Dist: anthropic
Requires-Dist: ollama
Requires-Dist: google-genai
Requires-Dist: tiktoken
Requires-Dist: transformers[torch]
Requires-Dist: seaborn
Requires-Dist: google-cloud-bigquery
Requires-Dist: db-dtypes
Requires-Dist: ipywidgets
Requires-Dist: ipykernel
Requires-Dist: google-cloud-bigquery-storage
Requires-Dist: nbformat
Requires-Dist: nbclient
Requires-Dist: nasdaq-data-link
Requires-Dist: wandb
Requires-Dist: sentence-transformers
Requires-Dist: gliner
Requires-Dist: sacrebleu
Requires-Dist: bert-score
Requires-Dist: pytorch-lightning>=2.0.0
Requires-Dist: groq
Requires-Dist: seqeval
Requires-Dist: nltk
Provides-Extra: dev
Requires-Dist: yfinance; extra == "dev"
Requires-Dist: jupyter; extra == "dev"
Requires-Dist: snownlp; extra == "dev"
Requires-Dist: spacy; extra == "dev"
Requires-Dist: geopandas; extra == "dev"
Requires-Dist: dspy; extra == "dev"
Requires-Dist: litellm; extra == "dev"
Requires-Dist: unbabel-comet>=2.2.0; extra == "dev"
Requires-Dist: spyder-kernels==3.0.*; extra == "dev"
Requires-Dist: beautifulsoup4; extra == "dev"
Requires-Dist: scikit-learn; extra == "dev"
Requires-Dist: google-api-python-client; extra == "dev"
Requires-Dist: trafilatura; extra == "dev"
Requires-Dist: selenium; extra == "dev"
Requires-Dist: webdriver_manager; extra == "dev"
Requires-Dist: openpyxl; extra == "dev"
Dynamic: author
Dynamic: author-email
Dynamic: description
Dynamic: description-content-type
Dynamic: download-url
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: license-file
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# BilbyStats

## A collection of statistical and machine learning functions for use in the Bilby pipeline

### Install the package

#### Pip installation
```bash
pip install bilbystats
```

#### Alternative: local installation
To pull the repo, run

```bash
git clone --depth=1 https://github.com/bilbyai/bilbystats/
```
Navigate to the root directory of the package and run

```bash
uv pip install .
```

or just 
```bash
pip install .
```
if you don't have uv installed.

### Set up API keys
API keys must be added to the environment in order to call LLMs. The simplest way to do so is to add lines of the following form to your .bashrc or .zshrc file:

```bash
export OPENROUTER_API_KEY=exampleapikey
export OPENAI_API_KEY=exampleapikey
export CLAUDE_API_KEY=exampleapikey
```

Replace `exampleapikey` with the corresponding API key in each case.
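If you'd rather catch a missing key before a call fails, you can check the environment at runtime. A minimal sketch, assuming the same key names as the examples above (adjust the list to whichever providers you actually use):

```python
import os

# Key names assumed from the export examples above; edit to match your providers.
REQUIRED_KEYS = ["OPENROUTER_API_KEY", "OPENAI_API_KEY", "CLAUDE_API_KEY"]


def missing_keys(keys=REQUIRED_KEYS):
    """Return the names of any API keys that are absent or empty in the environment."""
    return [k for k in keys if not os.environ.get(k)]
```

Calling `missing_keys()` at the top of a script gives a clear list of what still needs to be exported.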

### Importing the package

The package can then be imported from within Python via e.g.

```python
import bilbystats as bs
```

### Setting local defaults
To set local defaults (e.g. for checkpoint saving), copy the example defaults file:

```bash
cp bilbystats/defaults/local_defaults_example.env bilbystats/defaults/local_defaults.env
```

then edit the default parameters and reinstall the package.
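For reference, `local_defaults.env` is a plain `.env` file of `KEY=VALUE` lines (bilbystats depends on dotenv, which reads this format). A hypothetical stdlib-only parser, just to illustrate the format this file is expected to follow:

```python
from pathlib import Path


def load_env_defaults(path):
    """Parse simple KEY=VALUE lines from a .env-style defaults file,
    skipping blank lines and # comments."""
    defaults = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        defaults[key.strip()] = value.strip()
    return defaults
```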

### Run local LLMs using Ollama

If you'd like to use the Ollama functions which allow you to call LLMs on your local machine you'll need to install [Ollama](https://ollama.com/).
Once you've installed Ollama, you can download LLMs such as llama3.2

```bash
ollama run llama3.2
```

or deepseek-r1:7b

```bash
ollama run deepseek-r1:7b
```

See https://ollama.com/search for a full list of the available models.

To call the LLM programmatically using bilbystats, you can run

```python
bs.llm_api('test call', 'you are an llm', 'llama3.2')
```

or

```python
bs.llm_api('test call', 'you are an llm', 'deepseek-r1:7b')
```

More generally, you can pass the model name to any LLM-related function, such as `bs.translate`.
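Since a local model is only available once you've pulled it with `ollama run`, a small fallback helper can be handy when scripting. This is a hypothetical sketch, assuming only the `(prompt, system, model)` call signature shown in the examples above:

```python
def call_with_fallback(call, prompt, system, models):
    """Try each model name in turn with a bs.llm_api-style callable;
    return the first successful response, re-raising the last error if all fail."""
    last_err = None
    for model in models:
        try:
            return call(prompt, system, model)
        except Exception as err:  # e.g. a model that hasn't been pulled yet
            last_err = err
    if last_err is None:
        raise ValueError("no models given")
    raise last_err
```

For example, `call_with_fallback(bs.llm_api, 'test call', 'you are an llm', ['llama3.2', 'deepseek-r1:7b'])` tries llama3.2 first and falls back to deepseek-r1:7b.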
