# Ask AI
Add a built-in AI chat that lets users ask questions about your documentation. The AI searches relevant pages, builds context, and streams a response from any OpenAI-compatible LLM.
## Quick Start
Enable the `ai` option in `docs.config.ts`:

```ts
ai: {
  enabled: true,
}
```

That's it. The AI reads your `OPENAI_API_KEY` environment variable and uses `gpt-4o-mini` by default.
**Next.js**

Add `OPENAI_API_KEY` to your `.env` file:

```bash
OPENAI_API_KEY=sk-...
```

The key is automatically read from `process.env.OPENAI_API_KEY`.
**SvelteKit**

Add `OPENAI_API_KEY` to your `.env` file:

```bash
OPENAI_API_KEY=sk-...
```

Pass it through `docs.server.ts` (SvelteKit requires server-only env access):

```ts
import { createDocsServer } from "@farming-labs/svelte/server";
import { env } from "$env/dynamic/private";
import config from "./docs.config";

const contentFiles = import.meta.glob("/docs/**/*.{md,mdx,svx}", {
  query: "?raw",
  import: "default",
  eager: true,
}) as Record<string, string>;

export const { load, GET, POST } = createDocsServer({
  ...config,
  ai: { apiKey: env.OPENAI_API_KEY, ...config.ai },
  _preloadedContent: contentFiles,
});
```

**Astro**

Add `OPENAI_API_KEY` to your `.env` file:
```bash
OPENAI_API_KEY=sk-...
```

Pass it through `docs.server.ts`:

```ts
import { createDocsServer } from "@farming-labs/astro/server";
import config from "./docs.config";

const contentFiles = import.meta.glob("/docs/**/*.{md,mdx}", {
  query: "?raw",
  import: "default",
  eager: true,
}) as Record<string, string>;

export const { load, GET, POST } = createDocsServer({
  ...config,
  ai: { apiKey: import.meta.env.OPENAI_API_KEY, ...config.ai },
  _preloadedContent: contentFiles,
});
```

**Nuxt**

Add `OPENAI_API_KEY` to your `.env` file:
```bash
OPENAI_API_KEY=sk-...
```

Nuxt automatically reads environment variables via Nitro's runtime config. `defineDocsHandler` reads `process.env.OPENAI_API_KEY` on the server.

```ts
import { defineDocsHandler } from "@farming-labs/nuxt/server";
import config from "../../docs.config";

export default defineDocsHandler(config, useStorage);
```

## Configuration Reference
All options go inside the `ai` object in `docs.config.ts`:
```ts
export default defineDocs({
  ai: {
    // ... options
  },
});
```

### enabled
Whether to enable AI chat functionality.
| Type | Default |
|---|---|
| `boolean` | `false` |

```ts
ai: {
  enabled: true,
}
```

### mode
How the AI chat UI is presented.
| Type | Default |
|---|---|
| `"search" \| "floating"` | `"search"` |

- `"search"` — AI tab integrated into the `Cmd+K` search dialog. Users switch between "Search" and "AI" tabs.
- `"floating"` — A floating chat widget with a button on screen. Opens as a panel, modal, or full-screen overlay.

```ts
ai: {
  enabled: true,
  mode: "floating",
}
```

### position
Position of the floating chat button on screen. Only used when `mode` is `"floating"`.

| Type | Default |
|---|---|
| `"bottom-right" \| "bottom-left" \| "bottom-center"` | `"bottom-right"` |

```ts
ai: {
  enabled: true,
  mode: "floating",
  position: "bottom-left",
}
```

### floatingStyle
Visual style of the floating chat when opened. Only used when `mode` is `"floating"`.

| Type | Default |
|---|---|
| `"panel" \| "modal" \| "popover" \| "full-modal"` | `"panel"` |

- `"panel"` — A tall panel that slides up from the button position. No backdrop overlay.
- `"modal"` — A centered modal dialog with a backdrop overlay, similar to the `Cmd+K` search dialog.
- `"popover"` — A compact popover near the button. Suitable for quick questions.
- `"full-modal"` — A full-screen immersive overlay. Messages scroll in the center, the input is pinned at the bottom, and suggested questions appear as horizontal pills.

```ts
ai: {
  enabled: true,
  mode: "floating",
  floatingStyle: "full-modal",
}
```

### model
The LLM model configuration. Can be a simple string (single model) or an object with multiple selectable models.
Simple — single model:
| Type | Default |
|---|---|
| `string` | `"gpt-4o-mini"` |

```ts
ai: {
  enabled: true,
  model: "gpt-4o",
}
```

Advanced — multiple models with UI dropdown:
| Type | Default |
|---|---|
| `object` | — |

```ts
ai: {
  enabled: true,
  model: {
    models: [
      { id: "gpt-4o-mini", label: "GPT-4o mini (fast)", provider: "openai" },
      { id: "gpt-4o", label: "GPT-4o (quality)", provider: "openai" },
      { id: "llama-3.3-70b-versatile", label: "Llama 3.3 70B", provider: "groq" },
    ],
    defaultModel: "gpt-4o-mini",
  },
}
```

Each model entry has:

- `id` — The model identifier sent to the LLM API (e.g. `"gpt-4o-mini"`)
- `label` — Display name shown in the UI dropdown (e.g. `"GPT-4o mini (fast)"`)
- `provider` — (optional) Key matching a named provider in the `providers` config. If omitted, the model uses the default `baseUrl` and `apiKey`.
When `model` is an object with a `models` array, a model selector dropdown appears in the AI chat interface so users can pick which model to use.
### providers
Named provider configurations. Each provider has its own `baseUrl` and `apiKey`, allowing models from different providers to coexist in a single config.
| Type | Default |
|---|---|
| `object` | — |

```ts
ai: {
  enabled: true,
  providers: {
    openai: {
      baseUrl: "https://api.openai.com/v1",
      apiKey: process.env.OPENAI_API_KEY,
    },
    groq: {
      baseUrl: "https://api.groq.com/openai/v1",
      apiKey: process.env.GROQ_API_KEY,
    },
  },
  model: {
    models: [
      { id: "gpt-4o-mini", label: "GPT-4o mini", provider: "openai" },
      { id: "llama-3.3-70b-versatile", label: "Llama 3.3 70B", provider: "groq" },
    ],
    defaultModel: "gpt-4o-mini",
  },
}
```

When a user selects a model in the dropdown, the backend automatically uses that model's provider to resolve the correct `baseUrl` and `apiKey`. All providers must be compatible with the OpenAI Chat Completions API (OpenAI, Groq, Together, Fireworks, OpenRouter, Ollama, any vLLM deployment).
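That resolution step can be pictured with a small sketch. Note that `resolveProvider` and its types are hypothetical names for illustration, not part of the library's API:

```typescript
// Illustrative sketch only: resolveProvider, ProviderConfig, and ModelEntry
// are hypothetical names, not exports of the library.
type ProviderConfig = { baseUrl: string; apiKey?: string };
type ModelEntry = { id: string; label: string; provider?: string };
type AiConfig = {
  providers?: Record<string, ProviderConfig>;
  baseUrl?: string;
  apiKey?: string;
};

function resolveProvider(
  config: AiConfig,
  models: ModelEntry[],
  selectedId: string,
): ProviderConfig {
  const entry = models.find((m) => m.id === selectedId);
  // A model that names a provider uses that provider's credentials.
  const named = entry?.provider ? config.providers?.[entry.provider] : undefined;
  if (named) return named;
  // Otherwise fall back to the top-level defaults (the real backend also
  // falls back to the OPENAI_API_KEY environment variable for the key).
  return {
    baseUrl: config.baseUrl ?? "https://api.openai.com/v1",
    apiKey: config.apiKey,
  };
}
```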
### baseUrl
Default base URL for an OpenAI-compatible API endpoint. Used when no per-model provider is configured.
| Type | Default |
|---|---|
| `string` | `"https://api.openai.com/v1"` |

```ts
ai: {
  enabled: true,
  model: "llama-3.1-70b-versatile",
  baseUrl: "https://api.groq.com/openai/v1",
}
```

### apiKey
Default API key for the LLM provider. Used when no per-model provider is configured. Falls back to `process.env.OPENAI_API_KEY` if not set.
| Type | Default |
|---|---|
| `string` | `process.env.OPENAI_API_KEY` |

```ts
ai: {
  enabled: true,
  apiKey: process.env.GROQ_API_KEY,
}
```

**Warning:** Never hardcode API keys. Always use environment variables.

### systemPrompt
Custom system prompt prepended to the AI conversation. Documentation context is automatically appended after this prompt.
| Type | Default |
|---|---|
| `string` | `"You are a helpful documentation assistant..."` |
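To make the appending behavior concrete, the final message list sent to the model looks roughly like this. The `buildMessages` helper and the exact wrapper text are assumptions for illustration, not the library's actual implementation:

```typescript
// Illustrative sketch of how the custom prompt and retrieved docs context
// are combined; buildMessages and the "Documentation context:" wrapper are
// hypothetical, not taken from the library source.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildMessages(
  systemPrompt: string,
  docsContext: string,
  question: string,
): ChatMessage[] {
  return [
    // The custom system prompt comes first; retrieved documentation is appended.
    { role: "system", content: `${systemPrompt}\n\nDocumentation context:\n${docsContext}` },
    { role: "user", content: question },
  ];
}
```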
```ts
ai: {
  enabled: true,
  systemPrompt: "You are a friendly assistant for Acme Corp. Always mention our support email for complex issues.",
}
```

### maxResults
Maximum number of search results to include as context for the AI. More results give the model more context but increase token usage.
| Type | Default |
|---|---|
| `number` | `5` |

```ts
ai: {
  enabled: true,
  maxResults: 10,
}
```

### suggestedQuestions
Pre-filled suggested questions shown in the AI chat when the conversation is empty. Clicking one fills the input and submits automatically.
| Type | Default |
|---|---|
| `string[]` | `[]` |

```ts
ai: {
  enabled: true,
  suggestedQuestions: [
    "How do I get started?",
    "What themes are available?",
    "How do I create a custom component?",
  ],
}
```

### aiLabel
Display name for the AI assistant in the chat UI. Shown as the message label and header title.
| Type | Default |
|---|---|
| `string` | `"AI"` |

```ts
ai: {
  enabled: true,
  aiLabel: "DocsBot",
}
```

### packageName
The npm package name used in code examples. The AI will use this in import snippets instead of generic placeholders.
| Type | Default |
|---|---|
| `string` | — |

```ts
ai: {
  enabled: true,
  packageName: "@farming-labs/docs",
}
```

### docsUrl
The public URL of your documentation site. The AI will use this for absolute links instead of relative paths.
| Type | Default |
|---|---|
| `string` | — |

```ts
ai: {
  enabled: true,
  docsUrl: "https://docs.farming-labs.dev",
}
```

### loader
Loading indicator variant shown while the AI generates a response.
| Type | Default |
|---|---|
| `string` | `"shimmer-dots"` |

Available variants: `"shimmer-dots"`, `"circular"`, `"dots"`, `"typing"`, `"wave"`, `"bars"`, `"pulse"`, `"pulse-dot"`, `"terminal"`, `"text-blink"`, `"text-shimmer"`, `"loading-dots"`.
```ts
ai: {
  enabled: true,
  loader: "wave",
}
```

### loadingComponent
Custom React component that completely overrides the built-in loader variant. Receives `{ name }` (the `aiLabel` value). Only works in Next.js — for other frameworks, use the `loader` option.
| Type | Default |
|---|---|
| `(props: { name: string }) => ReactNode` | — |

```tsx
ai: {
  enabled: true,
  aiLabel: "Sage",
  loadingComponent: ({ name }) => (
    <div className="flex items-center gap-2 text-sm text-zinc-400">
      <span className="animate-pulse">🤔</span>
      <span>{name} is thinking...</span>
    </div>
  ),
}
```

### triggerComponent
Custom trigger button for the floating chat. Replaces the default sparkles button. Only used when `mode` is `"floating"`. Each framework accepts its native component format — pass it as a prop on `DocsLayout` (or a slot in Astro).
| Type | Default |
|---|---|
| `Component` | Built-in sparkles button |

Pass a React component via `docs.config.tsx`:

```tsx
ai: {
  enabled: true,
  mode: "floating",
  triggerComponent: <button className="my-chat-btn">Ask AI</button>,
}
```

Import a Svelte component and pass it as a prop on `DocsLayout`:
```svelte
<script>
  import { DocsLayout } from "@farming-labs/svelte-theme";
  import AskAITrigger from "$lib/components/AskAITrigger.svelte";
  import config from "../../lib/docs.config";

  let { data, children } = $props();
</script>

<DocsLayout tree={data.tree} {config} triggerComponent={AskAITrigger}>
  {@render children()}
</DocsLayout>
```

Use the `trigger-component` slot on `DocsLayout`:
```astro
---
import DocsLayout from "@farming-labs/astro-theme/src/components/DocsLayout.astro";
import AskAITrigger from "../../components/AskAITrigger.astro";
---

<DocsLayout tree={data.tree} config={config}>
  <AskAITrigger slot="trigger-component" />
  <DocsContent data={data} config={config} />
</DocsLayout>
```

Import a Vue component and pass it as a prop on `DocsLayout`:
```vue
<script setup lang="ts">
import { DocsLayout, DocsContent } from "@farming-labs/nuxt-theme";
import AskAITrigger from "~/components/AskAITrigger.vue";
import config from "~/docs.config";

const route = useRoute();
const pathname = computed(() => route.path);
const { data } = await useFetch("/api/docs", {
  query: { pathname },
  watch: [pathname],
});
</script>

<template>
  <DocsLayout :tree="data.tree" :config="config" :trigger-component="AskAITrigger">
    <DocsContent :data="data" :config="config" />
  </DocsLayout>
</template>
```

## Full Example — Single Provider
```ts
export default defineDocs({
  ai: {
    enabled: true,
    mode: "floating",
    position: "bottom-right",
    floatingStyle: "full-modal",
    model: "gpt-4o-mini",
    aiLabel: "DocsBot",
    packageName: "@farming-labs/docs",
    docsUrl: "https://docs.farming-labs.dev",
    maxResults: 5,
    suggestedQuestions: [
      "How do I get started?",
      "What themes are available?",
      "How do I configure the sidebar?",
      "How do I set up AI chat?",
    ],
  },
});
```

## Full Example — Multiple Providers
```ts
export default defineDocs({
  ai: {
    enabled: true,
    mode: "floating",
    position: "bottom-right",
    floatingStyle: "full-modal",
    providers: {
      openai: {
        baseUrl: "https://api.openai.com/v1",
        apiKey: process.env.OPENAI_API_KEY,
      },
      groq: {
        baseUrl: "https://api.groq.com/openai/v1",
        apiKey: process.env.GROQ_API_KEY,
      },
    },
    model: {
      models: [
        { id: "gpt-4o-mini", label: "GPT-4o mini (fast)", provider: "openai" },
        { id: "gpt-4o", label: "GPT-4o (quality)", provider: "openai" },
        { id: "llama-3.3-70b-versatile", label: "Llama 3.3 70B", provider: "groq" },
      ],
      defaultModel: "gpt-4o-mini",
    },
    aiLabel: "DocsBot",
    suggestedQuestions: [
      "How do I get started?",
      "What themes are available?",
    ],
  },
});
```

Add both keys to your `.env` file:

```bash
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk_...
```

Users see a model dropdown in the AI chat interface. When they pick a model, the backend automatically routes the request to the correct provider's API with the right credentials.
## Using a Different LLM Provider
### Single provider (simple)
Use any OpenAI-compatible API by setting `baseUrl` and `model`:
```ts
ai: {
  enabled: true,
  baseUrl: "https://api.groq.com/openai/v1",
  model: "llama-3.1-70b-versatile",
}
```

The key is still read from the `OPENAI_API_KEY` environment variable, even for a non-OpenAI provider:

```bash
OPENAI_API_KEY=gsk_...
```

### Multiple providers
Use the `providers` map to configure multiple APIs, then reference them from each model entry:
```ts
ai: {
  enabled: true,
  providers: {
    openai: {
      baseUrl: "https://api.openai.com/v1",
      apiKey: process.env.OPENAI_API_KEY,
    },
    together: {
      baseUrl: "https://api.together.xyz/v1",
      apiKey: process.env.TOGETHER_API_KEY,
    },
    ollama: {
      baseUrl: "http://localhost:11434/v1",
    },
  },
  model: {
    models: [
      { id: "gpt-4o-mini", label: "GPT-4o mini", provider: "openai" },
      { id: "meta-llama/Llama-3.3-70B-Instruct-Turbo", label: "Llama 3.3 70B", provider: "together" },
      { id: "llama3.2", label: "Llama 3.2 (local)", provider: "ollama" },
    ],
    defaultModel: "gpt-4o-mini",
  },
}
```

Compatible providers: OpenAI, Groq, Together AI, Fireworks, OpenRouter, Azure OpenAI, Ollama (local), any vLLM deployment — anything that speaks the OpenAI Chat Completions API format.
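As a rough picture of what "OpenAI-compatible" means, every provider above accepts a POST to `{baseUrl}/chat/completions` with a body shaped like the one below. The `buildChatRequest` helper is an assumption for illustration, not something the library exposes:

```typescript
// Sketch of the Chat Completions request shape every OpenAI-compatible
// provider accepts; buildChatRequest is illustrative, not a library export.
type Message = { role: string; content: string };

function buildChatRequest(
  baseUrl: string,
  apiKey: string | undefined,
  model: string,
  messages: Message[],
) {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  // Local servers such as Ollama typically run without an API key.
  if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`;
  return {
    url: `${baseUrl}/chat/completions`,
    headers,
    // stream: true asks the provider to stream tokens back as they are generated.
    body: JSON.stringify({ model, messages, stream: true }),
  };
}
```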