How to Use the Character Quality Panel to Auto-Fix Issues

The Character Quality Panel sits in the sidebar of the Blueprint Editor. It's the front end for the Quality Agent — a system that evaluates your character by running benchmark scenarios, identifies specific problems, and generates targeted edits to fix them. You can review each suggested edit before committing, apply fixes one at a time, or apply all of them at once.

This article walks through what the panel shows, how the auto-fix workflow operates, and what to expect from the generated suggestions.


The Three States of the Quality Panel

The panel has three distinct states depending on whether the agent has run.

Not Evaluated — if the character hasn't been benchmarked yet, the panel shows the Quality Agent's capabilities (immersion break detection, consistency checks, one-click fixes) and a single Run Quality Agent button. The character must be saved before running.

Agent Running — once started, the panel shows a spinner with the label "Agent Analyzing..." while it runs benchmark scenarios and generates fix suggestions. This takes a moment — the agent is making actual AI calls against your character.

Results — after the run completes, the panel shows your character's overall quality score (0–100) as a large number with a color-coded bar, and dimension bars for Consistency, Immersion, and Memory.

Score colors follow a three-level scale:

  • Green — 80 or above: the dimension is healthy
  • Yellow — 60–79: passing but worth watching
  • Red — below 60: the dimension has significant issues
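The scale above can be sketched as a small helper. This is illustrative, not the product's actual code; the function and type names are assumptions, while the 80 and 60 thresholds come from the article:

```typescript
// Map a 0-100 quality score to the panel's three-level color scale.
type ScoreColor = "green" | "yellow" | "red";

function scoreColor(score: number): ScoreColor {
  if (score >= 80) return "green";  // healthy
  if (score >= 60) return "yellow"; // passing but worth watching
  return "red";                     // significant issues
}
```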

Reading the Results

The collapsed view shows three dimension bars: Consistency, Immersion, and Memory. These are the three core dimensions the panel surfaces from the full five-dimension benchmark.

A status badge at the top right indicates whether results are current:

  • Up to date — scores reflect the character's current configuration
  • Outdated — the character has been edited since the last run; scores may no longer be accurate
  • Not evaluated — no benchmark has been run
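The badge states above can be derived from two timestamps. A minimal sketch, assuming the character tracks a last-evaluated time and a last-edited time (the field and function names here are hypothetical, not the actual schema):

```typescript
type QualityStatus = "up-to-date" | "outdated" | "not-evaluated";

// No evaluation yet -> "not-evaluated"; edited after the last run -> "outdated".
function qualityStatus(
  lastEvaluated: Date | null,
  lastEdited: Date,
): QualityStatus {
  if (lastEvaluated === null) return "not-evaluated";
  return lastEdited > lastEvaluated ? "outdated" : "up-to-date";
}
```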

Click Show details to expand the panel and see the specific issues identified within each dimension. Issues appear as a bulleted list under each dimension name, with a left red border marking each item. These are the exact problems the agent found — not generic categories, but specific descriptions like "No immersion guardrails detected. Character may break the fourth wall."

The last-evaluated timestamp and scenario count appear at the bottom of the expanded view.


The Apply Agent Fixes Button

When issues exist, the primary action button appears above the Re-run button: Apply Agent Fixes (N), where N is the total number of issues found.

Clicking it triggers the fix generation step. The panel goes into the Agent Running state again while the backend:

  1. Takes the high and medium severity issues from the benchmark results
  2. Packages each issue with its dimension, score, the test prompt that produced it, and the character's actual response
  3. Sends these to the fix generation model (manta-flash-1.0) with the character's full definition as context
  4. Receives back a set of targeted edits — each specifying which field to change, what the current value is, what the suggested value is, and why
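Steps 1 and 2 above can be sketched as a selection-and-packaging pass. The issue shape and function name here are assumptions based on the description; only the high/medium filtering behavior is stated in the article:

```typescript
type Severity = "high" | "medium" | "low";

// Hypothetical shape of one packaged issue: the dimension and score,
// plus the test prompt and the character's actual response (step 2).
interface BenchmarkIssue {
  dimension: string; // e.g. "Immersion"
  severity: Severity;
  score: number;
  testPrompt: string;
  characterResponse: string;
}

// Step 1: only high and medium severity issues are sent to the
// fix generation model; low severity issues are left out.
function selectIssuesForFixing(issues: BenchmarkIssue[]): BenchmarkIssue[] {
  return issues.filter((i) => i.severity === "high" || i.severity === "medium");
}
```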

Once fixes are generated, the Fix Preview modal opens.


The Fix Preview Modal

The modal title reads "Quality Agent Suggestions" with a count of issues found. A progress bar tracks how many fixes have been applied, skipped, and remain pending.

The modal presents fixes one at a time, in order of severity. Each fix card shows:

Issue header — the dimension (Consistency, Immersion, etc.), severity level (high / medium / low), and a specific description of what's wrong. The card is color-coded by severity: red for high, orange for medium, yellow for low.

Suggested fix — a purple card with the agent's explanation of what it's proposing to change and why.

Field being modified — a line showing which part of the character the fix will affect. The five fields a fix can target are:

  • System Prompt (systemInstruction)
  • Scenario
  • First Message (firstMessage)
  • Example Dialogue (exampleDialogue)
  • Description

Diff preview — a side-by-side view showing the current value on the left (red background, labeled "Current") and the proposed value on the right (green background, labeled "After Fix"). The diff is truncated at 800 characters for long fields. You can hide the diff if you want a cleaner view.
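The 800-character truncation described above amounts to a simple cutoff. A sketch, with the helper name and ellipsis marker being illustrative (only the limit comes from the article):

```typescript
const DIFF_PREVIEW_LIMIT = 800;

// Long field values are cut off at the limit so the diff stays readable.
function truncateForDiff(value: string, limit = DIFF_PREVIEW_LIMIT): string {
  return value.length <= limit ? value : value.slice(0, limit) + "…";
}
```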

At the bottom of the modal, three actions are available:

  • Skip — moves past this fix without making any changes. The fix is recorded as skipped in the progress count.
  • Apply All (N) — applies every remaining pending fix at once. This button only appears when more than one fix is pending.
  • Apply Fix — applies the current fix to the character and advances to the next one.

How Applying a Fix Works

When you apply a fix, the character's form data is updated immediately with the suggested value for that field. The change is live in the editor — if you switch to the Blueprint Editor tab, you'll see the updated content in the relevant section.

Applying any fix marks the quality status as Outdated, since the character has now changed since the last benchmark run. This is expected — the fix has modified the character, and the previous scores no longer reflect it.

When multiple pending fixes target the same field, Apply All applies them in sequence, so the last fix for that field wins. If two fixes both modify the system prompt, the second one's suggested value is what ends up applied. Review the diffs before using Apply All when multiple fixes touch the same field.
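The last-fix-wins behavior falls out naturally if fixes are applied in order, each one overwriting the field it targets. A sketch under that assumption (the types are hypothetical; the field names mirror the list earlier in this article):

```typescript
type FixableField =
  | "systemInstruction"
  | "scenario"
  | "firstMessage"
  | "exampleDialogue"
  | "description";

interface Fix {
  field: FixableField;
  suggestedValue: string;
}

// Apply every pending fix in sequence; a later fix that targets the
// same field overwrites an earlier one ("last fix wins").
function applyAll(
  character: Record<FixableField, string>,
  fixes: Fix[],
): Record<FixableField, string> {
  const updated = { ...character };
  for (const fix of fixes) {
    updated[fix.field] = fix.suggestedValue;
  }
  return updated;
}
```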


After Reviewing All Fixes

When all fixes have been either applied or skipped, the modal shows a completion screen: "All Issues Reviewed" with a count of how many fixes were applied.

From here, the Verify Improvements button closes the modal and switches to the Arena tab. From Arena you can re-run the benchmark to see whether the applied fixes improved the scores. Running the benchmark again after fixes is the correct validation step — it confirms that the changes resolved the issues the agent identified, rather than assuming they did.

If the scores don't improve after applying fixes, check the detailed results view in Arena. The reasoning field on each scenario tells you what the judge observed, which may point to issues beyond what the automatic fixes addressed.


Static Pre-Benchmark Checks

The Quality Agent also runs a lightweight static analysis on the character definition before any benchmark is triggered. This analysis catches obvious problems that don't require AI evaluation:

  • No immersion guardrails — if the system prompt doesn't contain "never acknowledge," "never break character," or "stay in character," the agent flags it as a medium-severity immersion risk
  • AI-like phrases in first message — if the first message contains phrases like "As an AI," "I don't have feelings," "I was trained," or "How can I help you today," it's flagged as a high-severity immersion issue
  • Character name missing from system prompt — low severity, suggests the prompt may lack a personality anchor
  • System prompt too short — less than 100 characters is flagged as medium severity for consistency
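The four checks above can be sketched as plain string inspections, which is why they run without any AI calls. The trigger phrases, severities, and the 100-character threshold come from the list above; the function shape, issue messages, and the dimension assigned to the name check are assumptions:

```typescript
interface StaticIssue {
  dimension: "immersion" | "consistency";
  severity: "high" | "medium" | "low";
  message: string;
}

const GUARDRAIL_PHRASES = ["never acknowledge", "never break character", "stay in character"];
const AI_PHRASES = ["as an ai", "i don't have feelings", "i was trained", "how can i help you today"];

function staticChecks(
  name: string,
  systemPrompt: string,
  firstMessage: string,
): StaticIssue[] {
  const issues: StaticIssue[] = [];
  const prompt = systemPrompt.toLowerCase();

  // No immersion guardrails in the system prompt -> medium severity.
  if (!GUARDRAIL_PHRASES.some((p) => prompt.includes(p))) {
    issues.push({
      dimension: "immersion",
      severity: "medium",
      message: "No immersion guardrails detected. Character may break the fourth wall.",
    });
  }
  // AI-like phrasing in the first message -> high severity.
  if (AI_PHRASES.some((p) => firstMessage.toLowerCase().includes(p))) {
    issues.push({
      dimension: "immersion",
      severity: "high",
      message: "First message contains AI-like phrasing.",
    });
  }
  // Character name absent from the system prompt -> low severity.
  if (!prompt.includes(name.toLowerCase())) {
    issues.push({
      dimension: "consistency",
      severity: "low",
      message: "Character name missing from system prompt.",
    });
  }
  // System prompt under 100 characters -> medium severity.
  if (systemPrompt.length < 100) {
    issues.push({
      dimension: "consistency",
      severity: "medium",
      message: "System prompt too short (under 100 characters).",
    });
  }
  return issues;
}
```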

These static checks feed into the same issues list as the benchmark results, and they're included when fix suggestions are generated.


When to Re-Run vs. When to Apply Fixes

Use Re-run Agent when:

  • The character is marked Outdated (you've edited it since the last run)
  • You want updated scores after applying fixes
  • You've made manual edits to the system prompt and want to verify them

Use Apply Agent Fixes when:

  • The benchmark has surfaced specific issues and you want guided, reviewed suggestions
  • You prefer to see the diff before committing each change
  • You want to apply all high/medium severity fixes in one session

The two workflows complement each other. Run the agent, apply fixes, run again to verify. Each cycle should narrow the gap between current scores and the 80+ range where all dimensions are green.

Open the Blueprint Editor and run the Quality Agent →

Stay Connected

💻 Website: Meganova Studio

🎮 Discord: Join our Discord

👽 Reddit: r/MegaNovaAI

🐦 Twitter: @meganovaai