Metadata-Version: 2.4
Name: chATLAS_Frontend
Version: 1.0.0
Summary: Code to run the frontend app
License-Expression: Apache-2.0
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: chatlas-chains>=0.2.0
Requires-Dist: chatlas-embed>=0.1.19
Requires-Dist: flask
Requires-Dist: authlib
Requires-Dist: markdown
Requires-Dist: sqlalchemy
Requires-Dist: markupsafe
Requires-Dist: psutil
Requires-Dist: matplotlib
Requires-Dist: seaborn
Dynamic: license-file

# chATLAS
Welcome to the repository for the frontend of chATLAS, an AI assistant for the ATLAS collaboration.

Main app (more stable): https://chatlas-flask-chatlas.app.cern.ch

Staging area (contains newer features, less stable): https://chatlas-staging-chatlas.app.cern.ch

If you want to install the app for development work, follow the instructions below:

## Requirements
You will need to set the following environment variables for the app to run
```bash
export CHATLAS_OPENAI_KEY=... # Feel free to use a personal OpenAI API key here
export CHATLAS_DB_PASSWORD=...
export CHATLAS_EMBEDDING_MODEL_PATH=...
export CHATLAS_GROQ_KEY=...
export CHATLAS_GROQ_BASE_URL=http://cs-513-ml003:3000 # this will change if not on CERN network, see below
```
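Before going further, it can help to fail fast on missing configuration. A minimal sketch (the `check_chatlas_env` helper is ours, not part of the repo; it assumes bash for the `${!var}` indirect expansion):

```bash
# check_chatlas_env: print any required chATLAS variables that are unset or
# empty, and return non-zero if something is missing. The variable list is
# the one from the Requirements section above.
check_chatlas_env() {
  local missing=0 var
  for var in CHATLAS_OPENAI_KEY CHATLAS_DB_PASSWORD \
             CHATLAS_EMBEDDING_MODEL_PATH CHATLAS_GROQ_KEY \
             CHATLAS_GROQ_BASE_URL; do
    if [ -z "${!var:-}" ]; then
      echo "missing: $var"
      missing=1
    fi
  done
  return "$missing"
}
```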

Reach out to the developers by [email](atlas-comp-ml-chatlas-developers@cern.ch) or on [mattermost](https://mattermost.web.cern.ch/ml-atlas/channels/chatlas-ai-assistant-development) to get set up.

If running locally (not on lxplus), you'll need a local copy of the embedding model; see [these instructions](#getting-the-embedding-model).

## Environment creation

Move to the directory containing this file and run:

```bash
uv sync
```

## Running the app

0. Ensure you have the [venv](#environment-creation) and the necessary [environment variables](#requirements):
```bash
export CHATLAS_OPENAI_KEY=...
export CHATLAS_DB_PASSWORD=...
export CHATLAS_EMBEDDING_MODEL_PATH=...
export CHATLAS_GROQ_KEY=...
export CHATLAS_PORT_FORWARDING=1
```

We recommend putting these into a `.env` file in the root directory of the project. You can then load the environment variables from the file with:
```bash
set -a
source .env
set +a
```
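For reference, such a `.env` file is just `KEY=value` lines (placeholder values below; keep the file out of version control since it holds secrets):

```bash
# .env (placeholder values)
CHATLAS_OPENAI_KEY=...
CHATLAS_DB_PASSWORD=...
CHATLAS_EMBEDDING_MODEL_PATH=/path/to/multi-qa-mpnet-base-dot-v1-ATLAS-TALK
CHATLAS_GROQ_KEY=...
CHATLAS_PORT_FORWARDING=1
```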

1. Set up port forwarding for the DBOD instances

**Note:** if you are already connected to the CERN network, skip this step.

```bash
# forward all the DBs
ssh -N \
  -L 6624:dbod-chatlas.cern.ch:6624 \
  -L 6606:dbod-chatlas-cds.cern.ch:6606 \
  -L 3000:cs-513-ml003:3000 \
  "$LXPLUS_USERNAME"@lxplus.cern.ch
```
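Before launching, you can confirm the tunnel is actually up by probing the forwarded ports. A minimal sketch using bash's built-in `/dev/tcp` (the `check_port` helper is ours, not part of the repo):

```bash
# check_port: return 0 if something accepts TCP connections on
# localhost:<port>, non-zero otherwise. Uses bash's /dev/tcp device, so it
# works even where nc/telnet are unavailable.
check_port() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Probe the three services forwarded by the ssh command above.
for port in 6624 6606 3000; do
  if check_port "$port"; then
    echo "port $port: open"
  else
    echo "port $port: not reachable (is the tunnel running?)"
  fi
done
```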

2. Launch

```bash
uv run --env-file .env chATLAS_Frontend/launch.py --local-mode --db-host cern-prod --port-forwarding
```

3. Viewing the output

After launching, you should see terminal output like `* Running on http://127.0.0.1:5000`.

If running locally, just open this link in your browser.

If on lxplus, you need to port-forward the address to your local machine. Note the lxplus node you are on and run:

```bash
ssh -L 5000:127.0.0.1:5000 <USERNAME>@lxplus<NUMBER>.cern.ch
```

# Getting the Embedding Model

The embedding model is stored on EOS. To run locally, you need to download it:

```shell
scp -r <USERNAME>@lxplus.cern.ch:/eos/atlas/atlascerngroupdisk/phys-mlf/Chatlas/multi-qa-mpnet-base-dot-v1-ATLAS-TALK <LOCAL PATH TO STORE MODEL>
```

Then export the environment variable:

```shell
export CHATLAS_EMBEDDING_MODEL_PATH=<LOCAL PATH TO STORE MODEL>
```
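A quick way to confirm the download worked and the variable points at it (ordinary shell checks, nothing chATLAS-specific):

```bash
# The model path should be an existing, non-empty directory.
if [ -d "$CHATLAS_EMBEDDING_MODEL_PATH" ] && \
   [ -n "$(ls -A "$CHATLAS_EMBEDDING_MODEL_PATH")" ]; then
  echo "embedding model found at $CHATLAS_EMBEDDING_MODEL_PATH"
else
  echo "embedding model not found; check CHATLAS_EMBEDDING_MODEL_PATH" >&2
fi
```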

## Updating the environment

Edit [`pyproject.toml`](pyproject.toml), then run `uv sync` to update the lockfile.
