How to Connect Your AI Character to a Knowledge Base

An AI character without knowledge is a blank slate.

It can hold a conversation. It can stay in character. But the moment someone asks about the world it lives in — the history of the kingdom, the rules of the magic system, the company's product catalog, the support policy — it has nothing to pull from.

MegaNova Studio gives you two distinct systems for connecting characters and agents to structured knowledge. Each works differently and serves a different purpose. Understanding both lets you build characters that actually know things.

Two Systems, Two Purposes

Lorebooks are designed for characters. They store world knowledge, lore, and contextual information in structured entries. When a relevant keyword appears in the conversation, the matching entries are automatically injected into the system prompt — without any action required from the character or the user. The character simply knows what it needs to know, at the moment it needs to know it.

Knowledge Files are designed for agents. They store documents — PDFs, text files, spreadsheets, markdown — that the agent can search during its reasoning loop. Instead of passive injection, the agent actively calls search_knowledge_base when it decides that a question warrants a lookup.

Different mechanisms. Different use cases. Both available in MegaNova Studio.


Lorebooks: Keyword-Triggered Knowledge for Characters

What a Lorebook Does

A lorebook is a collection of entries. Each entry has a title, content, a set of trigger keywords, a priority value, and an enabled/disabled flag.

When the character receives a message, the system scans the last ten messages in the conversation and checks whether any lorebook entry's keywords appear in that text. Matching is case-insensitive and substring-based — if the entry's keyword is dragon and the message contains The dragon emerged from the mountain, the entry fires.

All matched entries are sorted by priority, assembled under the header [WORLD INFORMATION - Use this knowledge about the world], and injected into the system prompt before the LLM call. The character receives the knowledge as part of its context. It responds as if it already knew.
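The scan-and-inject pipeline can be sketched roughly as follows. This is a minimal illustration, not MegaNova's actual implementation — the class and function names are hypothetical; only the matching rules, the priority ordering, and the header string come from the behavior described above.

```python
from dataclasses import dataclass

@dataclass
class LorebookEntry:
    title: str
    content: str
    keywords: list
    priority: int = 0      # lower number = higher priority
    enabled: bool = True

def matched_entries(entries, recent_messages):
    """Scan the last ten messages for keyword hits (case-insensitive substring)."""
    text = " ".join(recent_messages[-10:]).lower()
    hits = [
        e for e in entries
        if e.enabled and e.keywords                      # disabled or keyword-less entries never fire
        and any(kw.lower() in text for kw in e.keywords)
    ]
    return sorted(hits, key=lambda e: e.priority)        # lower priority value sorts first

def build_injection(entries, recent_messages):
    """Assemble matched entries under the world-information header."""
    hits = matched_entries(entries, recent_messages)
    if not hits:
        return ""
    lines = ["[WORLD INFORMATION - Use this knowledge about the world]"]
    lines += [f"{e.title}: {e.content}" for e in hits]
    return "\n".join(lines)
```

The resulting block is prepended to the system prompt before the LLM call, so the model sees the lore as ambient context rather than as a retrieved document.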

Creating a Lorebook

Open the Lore tab in the Character Studio. This is the central place for managing a character's lorebooks.

From the Lore tab, click Create New Lorebook. Give it a name and an optional description. The lorebook is now ready to receive entries.

Inside the lorebook, create entries. Each entry needs:

  • Title — a label for the entry (e.g., "The Northern Kingdom", "Magic System Overview")
  • Content — the actual knowledge the character should have access to when this entry is triggered
  • Keywords — one or more trigger words or phrases. When any of these appear in conversation, the entry fires.
  • Priority — controls ordering when multiple entries fire simultaneously. Lower numbers are higher priority.

An entry with an empty keyword list never fires. An entry that is disabled is skipped entirely regardless of keyword matches.

Attaching a Lorebook to a Character

Creating a lorebook does not automatically connect it to a character. You attach lorebooks explicitly.

In the Lore tab, each lorebook card has an Attach button. Click it to connect the lorebook to the current character. A badge reading ATTACHED appears on the card. The character will now use that lorebook's entries during conversations.

You can attach multiple lorebooks to one character. All attached lorebooks are scanned on every message.

Token Budget

Lorebook injection consumes tokens. The system allocates 25% of the context window to lorebook content by default. If matched entries would exceed that budget, lower-priority entries are excluded.

You can configure the budget in the Blueprint Editor under the World Lore section. The section also shows how many lorebooks are currently attached and displays an estimated token count based on your budget percentage.

For characters with large lorebooks, reduce the budget if you need more space for conversation history. For information-dense characters where accurate knowledge retrieval matters more than long history, increase it.

AI-Generated Lorebooks

If writing lore entries manually feels like too much work, use the AI generation feature.

In the Blueprint Editor's World Lore section, click Generate with AI. MegaNova reads the character's existing blueprint — identity, background, personality, origin — and generates a lorebook with six entries automatically. Each entry comes with a title, content, and keywords derived from the character's world.

You can edit, expand, or delete individual entries afterward. The AI generation gives you a starting point, not a final product.

Importing a Lorebook

If you have an existing lorebook in a compatible format, you can import it directly into the Lore tab instead of building entries from scratch. Useful for transferring lorebooks between characters or importing from external tools.


Agent Knowledge Files: Document Search for Agents

What Knowledge Files Do

Agents in MegaNova Studio support a different mechanism: uploaded documents that become searchable during the agent's reasoning loop.

When a user asks a question, the agent can decide — as part of its reasoning — that the answer might be in a document. It calls search_knowledge_base with a query string, receives the relevant passages, and incorporates that content into its response.

Unlike lorebook injection, this is an active process. The agent chooses when to search. It can ask follow-up questions, search again with a different query, or combine results from multiple searches within the same turn.
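That active loop can be sketched like this. Everything here is an assumed interface — the `llm` and `search_knowledge_base` callables stand in for MegaNova's internals — but it shows the shape of the behavior: the model chooses between searching and answering, and may search several times before committing to a response.

```python
def run_agent_turn(user_message, llm, search_knowledge_base, max_turns=10):
    """One agent turn: the model may call the search tool repeatedly
    before producing a final answer. `llm` returns either
    {"type": "search", "query": ...} or {"type": "answer", "content": ...}."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        action = llm(messages)                        # model decides: search or answer
        if action["type"] == "search":
            passages = search_knowledge_base(action["query"])
            messages.append({"role": "tool",
                             "content": "\n".join(passages)})
            continue                                  # may search again next iteration
        return action["content"]                      # final answer
    return "Turn limit reached without a final answer."
```

Each tool call consumes one iteration of the loop, which is why the turn budget discussed later in this article matters.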

Uploading Knowledge Files

Open the Knowledge tab on any agent's detail page.

Click Upload to add a file. Supported formats: PDF, TXT, MD, CSV, DOCX. Maximum file size: 10 MB per file. You can upload multiple files.

After uploading, the file's text is extracted and made available to the agent's search_knowledge_base tool. The agent does not read the entire file on every request — it searches for relevant passages based on the query it constructs during reasoning.
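MegaNova does not document its retrieval internals, so as a toy stand-in, here is a naive scorer that ranks extracted passages by query-term overlap. The real search is almost certainly more sophisticated (embeddings, chunking), but the contract is the same: a query string in, a handful of relevant passages out.

```python
import re

def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def search_passages(passages, query, top_k=3):
    """Rank passages by how many query terms they contain (a toy scorer,
    not the real search_knowledge_base implementation)."""
    terms = tokenize(query)
    scored = []
    for p in passages:
        score = len(terms & tokenize(p))
        if score:                       # drop passages with no overlap at all
            scored.append((score, p))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for _, p in scored[:top_k]]
```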

Enabling the Tool

Uploading files alone does not activate knowledge search. You also need to enable the search_knowledge_base tool in the agent's settings.

In the agent's Overview tab, scroll to the Tools section. Check Knowledge Base Search. Save the agent.

Once enabled, the agent will call this tool when it determines that a user's question warrants a document lookup. It constructs a search query, retrieves matching passages, and incorporates them into its response.
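If the platform exposes the tool to the model in an OpenAI-style function-calling format — an assumption; MegaNova only documents the tool's name — the declaration the agent sees might resemble:

```python
# Hypothetical tool declaration. Only the name search_knowledge_base
# comes from the MegaNova docs; the schema shape is an illustrative guess.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "description": "Search the agent's uploaded knowledge files for "
                       "passages relevant to a query.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "A focused search query.",
                },
            },
            "required": ["query"],
        },
    },
}
```

The quality of the agent's searches depends heavily on the description fields: they are the only hint the model gets about when a lookup is worthwhile.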

Any uploaded document becomes part of the knowledge base. Common uses:

  • Product documentation — agents can look up features, pricing, or specifications without needing those details in their system prompt
  • Support policies — customer service agents can reference return policies, SLA terms, or troubleshooting guides
  • Research documents — research agents can process PDFs and pull relevant passages during analysis
  • Internal reference materials — onboarding docs, process guides, FAQs

Documents are not read in full on every request. The agent searches actively and retrieves only the passages relevant to its current query.


Using Both Together in a Workflow

Lorebooks and knowledge files are not mutually exclusive. A workflow can use both.

Connect a Lorebook node (from the Nova Agent category in the Materials Library) to a Character node or Agent node in your workflow. The lorebook entries fire based on keywords just as they do in direct chat.

Connect a Raw Storage node to provide document-based reference material that the agent can query actively.

When the Agent node processes its turn, it has access to:

  • Character context from the connected Character node
  • World knowledge from the Lorebook node (keyword-triggered, passive)
  • Document knowledge from the knowledge files (query-based, active via search_knowledge_base)

This layered approach works well for complex use cases where some knowledge should fire automatically (world-building details, character-specific facts) and other knowledge should only be retrieved when directly relevant to a specific question (policy documents, technical specifications).
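Conceptually, assembling one agent turn in such a workflow might look like the sketch below. The function name and shapes are illustrative, not MegaNova's API; the point is that passive lore lands in the system prompt up front, while document knowledge stays behind a tool the agent can call on demand.

```python
def build_agent_context(character_prompt, lore_injection, tools):
    """Combine the knowledge layers for one agent turn.
    Keyword-triggered lore is injected into the system prompt;
    knowledge files remain reachable only through the listed tools."""
    system_parts = [character_prompt]
    if lore_injection:                      # fires automatically when keywords match
        system_parts.append(lore_injection)
    return {
        "system": "\n\n".join(system_parts),
        "tools": tools,                     # e.g. ["search_knowledge_base"]
    }
```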


When to Use Each System

Use a Lorebook when:

  • The knowledge is tied to the character's world or identity
  • You want context injected automatically without requiring the character to "ask" for it
  • The knowledge is categorical and can be organized around specific keywords
  • You are building a roleplay character, game NPC, or fictional persona that needs consistent world knowledge

Use Knowledge Files when:

  • The knowledge comes from external documents (PDFs, reports, support docs)
  • The agent needs to search, not just recall
  • The knowledge is too large or too varied to express as keyword-triggered entries
  • You are building a customer service agent, research agent, or document-aware assistant

Use both when:

  • The agent needs persistent world context (lorebook) and the ability to look up specific document content on demand (knowledge files)

A Note on Token Economy

Both systems consume context window space.

Lorebook entries are injected directly into the system prompt. The default 25% budget caps that injection. Keep individual entries focused — a 50-word entry with sharp keywords outperforms a 500-word entry with broad keywords that fires on everything.

Knowledge file content is retrieved per query, not injected wholesale. But the agent uses a turn to retrieve it. In a 10-turn reasoning loop, every tool call is a turn spent. If the agent needs to run three searches to answer one question, that uses three turns.

Design for efficiency on both sides. Tight lorebook entries that fire precisely. Knowledge files organized so that a single, well-formed query returns what the agent actually needs.


Getting Started

Open a character or agent in MegaNova Studio.

For a character: navigate to the Lore tab → create a lorebook → add entries with specific keywords → attach the lorebook.

For an agent: open the Knowledge tab → upload your reference documents → enable Knowledge Base Search in the Tools section.

Both systems are available without any additional configuration. The knowledge layer is one of the most direct ways to make a character or agent genuinely useful — not just conversational, but informed.

Stay Connected

💻 Website: MegaNova Studio

🎮 Discord: Join our Discord

👽 Reddit: r/MegaNovaAI

🐦 Twitter: @meganovaai