Configure the AI provider and model for the built-in assistant. The assistant helps interpret scan results and plan testing workflows.
Endpoint URL for the assistant's LLM provider (e.g., http://inference-server:11434 for a remote Ollama instance)
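A minimal sketch of what such a configuration might look like, assuming a YAML format and an Ollama backend; the key names (`assistant`, `provider`, `model`, `endpoint`) are illustrative and may differ from the tool's actual schema:

```yaml
# Hypothetical example — key names are illustrative, not confirmed for this tool.
assistant:
  provider: ollama                          # which LLM backend to use
  model: llama3.1                           # model name as served by the provider
  endpoint: http://inference-server:11434   # remote Ollama API endpoint
```

With a local Ollama install, the endpoint would typically be http://localhost:11434, Ollama's default listen address.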