{% extends "base.html" %} {% block title %}AI Features — MOSAIC{% endblock %} {% block content %}
MOSAIC includes a local RAG (Retrieval-Augmented Generation) pipeline that lets you query and chat with your cached papers using a semantic vector index and an LLM of your choice.
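The query flow described above can be sketched as follows. This is a minimal, illustrative sketch only: MOSAIC's actual index format, embedding model, and LLM call are not shown here, and the bag-of-words `embed` stand-in is an assumption used so the example stays self-contained.

```python
# Minimal RAG query sketch: embed the query, rank cached paper chunks by
# similarity, and stuff the top matches into a prompt for the LLM.
# (Illustrative only; the real pipeline uses a proper embedding model.)
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words term-count vector.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, index: dict, k: int = 2) -> list:
    # Rank indexed chunks by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(index, key=lambda doc: cosine(q, index[doc]), reverse=True)
    return ranked[:k]


def build_prompt(query: str, chunks: list) -> str:
    # Retrieved chunks become grounding context for the chosen LLM.
    context = "\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


index = {chunk: embed(chunk) for chunk in [
    "transformers use self attention over token sequences",
    "convolutional networks excel at image classification",
]}
query = "how does attention work"
prompt = build_prompt(query, retrieve(query, index))
```

The same retrieve-then-prompt step underlies both Ask (one shot) and Chat (with conversation history prepended).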
{% if not embedding_model or not llm_configured %}
AI features are not fully configured: set an embedding model and configure an LLM to enable indexing, Ask, and Chat.
{% endif %}

Build or rebuild the vector index over your cached papers. Indexing must be run at least once before using Ask or Chat.
Go to Index →

Ask a single question about your indexed papers. Choose a mode:
Multi-turn conversation with your papers. The assistant retains context across messages within the same browser session.
Session history is kept in memory and resets on server restart.
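The per-session, in-memory history described above can be sketched roughly like this. The structure and names (`_histories`, `chat_turn`) are assumptions for illustration, not MOSAIC's actual implementation; the point is that history lives only in process memory, keyed by session, so it vanishes on restart.

```python
# Sketch of in-memory, per-session chat history (assumed structure).
# Because the store is a plain in-process dict, all history is lost
# whenever the server restarts.
from collections import defaultdict

_histories: dict = defaultdict(list)


def chat_turn(session_id: str, user_msg: str) -> list:
    # Prior turns from the same browser session are kept as context.
    history = _histories[session_id]
    history.append(("user", user_msg))
    # ...here the real app would call the LLM with `history` as context...
    history.append(("assistant", f"(reply to: {user_msg})"))
    return history
```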
Go to Chat →
{% endblock %}