{% extends "base.html" %} {% block title %}AI Features — MOSAIC{% endblock %} {% block content %}

AI Features

MOSAIC includes a local RAG (Retrieval-Augmented Generation) pipeline that lets you query and chat with your cached papers using a semantic vector index and an LLM of your choice.
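Conceptually, answering a question means embedding it, ranking the indexed chunks by cosine similarity, and prompting the LLM with the best matches. The sketch below shows that flow end to end; the hash-based embed() and the canned generate() are toy stand-ins for whichever models you configure, and none of these names are MOSAIC's actual API.

```python
# Toy end-to-end RAG query: embed the question, retrieve the most similar
# indexed chunks, and hand them to an LLM as context. embed() and generate()
# are deliberately simplistic stand-ins for the configured models.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic bag-of-tokens embedding; a real pipeline calls the embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def generate(prompt: str) -> str:
    """Stand-in for the configured LLM."""
    return f"[LLM reply to a {len(prompt)}-character prompt]"

INDEX = [(text, embed(text)) for text in [
    "Paper A introduces a sparse attention mechanism for long documents.",
    "Paper B benchmarks retrieval-augmented generation on QA tasks.",
]]

def ask(question: str, k: int = 2) -> str:
    q = embed(question)
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    top = sorted(INDEX, key=lambda item: -float(q @ item[1]))[:k]
    context = "\n\n".join(text for text, _ in top)
    return generate(f"Answer from this context only:\n\n{context}\n\nQ: {question}")

print(ask("What does Paper B evaluate?"))
```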

{% if not embedding_model or not llm_configured %}
⚠ Setup required — {% if not embedding_model and not llm_configured %} Neither an embedding model nor an LLM is configured. {% elif not embedding_model %} No embedding model configured. {% else %} No LLM configured. {% endif %} Go to Config → Embeddings & RAG / LLM to set them up.
{% endif %}
📊 Index

Build or rebuild the vector index over your cached papers. The index must be built at least once before Ask or Chat will work (see the sketch below this card).

Papers in cache
{{ total_papers }}
Papers indexed
{{ indexed_count }}
{% if embedding_model %}
Embedding model
{{ embedding_model }}
{% endif %}
Go to Index →
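As a rough illustration of the build step, reusing the toy embed() from the sketch above: the cache layout of one .txt file per paper, the fixed chunk size, and the JSON index file are all assumptions for this sketch, not MOSAIC's real storage format.

```python
# Hedged sketch of the Index step: chunk each cached paper, embed the chunks,
# and persist the vectors so Ask and Chat can search them.
import json
from pathlib import Path

CACHE_DIR = Path("cache/papers")      # assumed layout: one plain-text file per paper
INDEX_FILE = Path("cache/index.json")  # assumed on-disk index location

def chunk(text: str, size: int = 800) -> list[str]:
    """Fixed-size character chunks; real pipelines often split on sentence boundaries."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index() -> int:
    """Embed every chunk of every cached paper and persist the vectors."""
    records = []
    for paper in sorted(CACHE_DIR.glob("*.txt")):
        for piece in chunk(paper.read_text(encoding="utf-8")):
            records.append({
                "paper": paper.stem,
                "text": piece,
                "vector": embed(piece).tolist(),  # embed() as in the sketch above
            })
    INDEX_FILE.write_text(json.dumps(records))
    return len({r["paper"] for r in records})  # the "papers indexed" count
```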
❓ Ask

Ask a single question about your indexed papers, choosing a mode on the Ask page.

Go to Ask →
💬 Chat

Multi-turn conversation with your papers. The assistant retains context across messages within the same browser session.

Session history is kept in memory and resets on server restart (a sketch of this mechanism follows below this card).

Go to Chat →
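The session store itself can be as simple as an in-process dictionary, which is exactly why history disappears on restart. A minimal sketch, reusing generate() from the first sketch and omitting the retrieval step:

```python
# Per-session chat memory: SESSIONS is an ordinary in-process dict keyed by
# browser session id, so history survives across requests but is lost when
# the server restarts.
from collections import defaultdict

SESSIONS: dict[str, list[dict[str, str]]] = defaultdict(list)

def chat(session_id: str, message: str) -> str:
    history = SESSIONS[session_id]
    history.append({"role": "user", "content": message})
    # Replay the whole transcript so the LLM sees earlier turns as context.
    transcript = "\n".join(f"{t['role']}: {t['content']}" for t in history)
    reply = generate(transcript)  # retrieval over the index omitted for brevity
    history.append({"role": "assistant", "content": reply})
    return reply
```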
{% endblock %}