Meet NovaBrain: The Task Planner at the Heart of Nova OS

Every multi-agent system faces the same problem: a request arrives that needs multiple agents to handle, and the system has to figure out what to do in what order. Most systems solve this with routing rules, keyword matching, or a fixed workflow template. These approaches work for simple cases and break down at scale.

Nova OS takes a different approach. Before any agent executes, NovaBrain — a dedicated planning layer — reads the request, inspects the conversation history, and produces a structured execution plan. That plan tells the system not just which agents to run, but in what order, which can run simultaneously, which steps are already done, and which to skip entirely.

This article explains what NovaBrain is, what it does, and why a planning layer makes the difference between an AI system that routes requests and one that executes complex work reliably.


The Problem Planning Solves

Consider what happens without a planning layer. A request arrives: "Review this contract and check for compliance issues — but don't regenerate the clause summary since you already did that."

A keyword-based router sees: contract, compliance, clause, summary. It fires the Clause Extractor, the Compliance Checker, and the Report Builder. Clause extraction runs again, even though the user explicitly said not to do it. The result is wrong: the user gets a response that ignores their instruction.

This is not a routing problem. It's a planning problem. To execute this request correctly, the system needs to:

  1. Understand that clause extraction was already completed
  2. Skip it and proceed with the Compliance Checker only
  3. Use the previously extracted clauses as input

No routing rule catches "I already did the previous step." No keyword trigger handles context from prior conversation turns. Planning does — because planning reads the full request intent, not just surface keywords.


What NovaBrain Produces

NovaBrain makes a structured LLM planning call against the incoming request. It returns a BrainPlan: a list of BrainTask objects that form a dependency graph.

Each BrainTask has six fields:

task_id — a unique identifier for this task within the plan.

description — what this specific task does, for example "Extract key clauses from the vendor agreement". It's human-readable and appears in the call log for observability.

required_capability — the capability type this task needs. The routing system uses this to match the task to an agent that has the capability, rather than hardcoding which specific agent runs it.

source — whether this is a new task ("user_request") or something already present in the conversation history ("history"). Tasks sourced from history are marked as completed before execution begins — the system reuses existing work rather than repeating it.

action — what to do with this task: execute, skip, or already_done. Execute means run it. Skip means the user or context indicates this step isn't needed. Already done means the work was completed in a prior turn.

depends_on — a list of task_id values that must complete before this task can start. An empty list means the task can start immediately. A populated list means the task waits for those specific tasks to finish first.

These six fields contain everything the execution layer needs to run the workflow correctly.
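
As a rough sketch, the plan structure described above might look like the following Python dataclasses. BrainTask, BrainPlan, and the field names come from this article; the exact types, the Action enum, and the default values are assumptions for illustration:

```python
from dataclasses import dataclass, field
from enum import Enum

class Action(str, Enum):
    """The three values the article gives for a task's action field."""
    EXECUTE = "execute"
    SKIP = "skip"
    ALREADY_DONE = "already_done"

@dataclass
class BrainTask:
    task_id: str                 # unique within the plan
    description: str             # human-readable, logged for observability
    required_capability: str     # matched to an agent by the routing system
    source: str                  # "user_request" or "history"
    action: Action               # execute, skip, or already_done
    depends_on: list = field(default_factory=list)  # task_ids that must finish first

@dataclass
class BrainPlan:
    tasks: list                  # the dependency graph, as a list of BrainTask
```

An empty depends_on list means the task is immediately runnable, which is what lets the execution layer start independent tasks in parallel.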


How NovaBrain Reads the Request

NovaBrain's planning call receives three inputs:

The current request — the full text of what the user is asking for now.

The conversation history — all prior turns in the current conversation session. NovaBrain uses this to identify work already completed, tasks already attempted, and context that affects how the current request should be interpreted.

The available capabilities — the set of agent capabilities registered in the current deployment. NovaBrain produces plans using the capabilities that actually exist, not hypothetical ones.

From these three inputs, NovaBrain produces a plan that reflects intent, history, and available execution resources together. This is what separates it from a static workflow template — the plan is generated fresh for each request based on full context.
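
One way to picture the planning call is as a function that folds the three inputs into a single prompt. This is a minimal sketch; build_planning_prompt is a hypothetical helper, and the actual prompt format Nova OS uses is not documented here:

```python
def build_planning_prompt(request, history, capabilities):
    """Assemble the three planning inputs (sketch, not the real prompt format).

    request: the full text of the current user request.
    history: prior turns in the conversation session, oldest first.
    capabilities: agent capabilities registered in this deployment.
    """
    history_block = "\n".join(f"- {turn}" for turn in history) or "(empty)"
    caps_block = ", ".join(sorted(capabilities))
    return (
        f"Available capabilities: {caps_block}\n"
        f"Conversation history:\n{history_block}\n"
        f"Current request: {request}\n"
        "Return a BrainPlan as JSON: a list of tasks, each with task_id, "
        "description, required_capability, source, action, depends_on."
    )
```

Listing only registered capabilities is what keeps the plan grounded: the model cannot route to an agent that does not exist in the deployment.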


The InputContextAnalyzer: Simple vs. Complex

Not every request needs NovaBrain. A simple question — "What is the liability cap in this contract?" — doesn't need a multi-step plan. It needs one agent, one execution, one response.

Before NovaBrain runs, the InputContextAnalyzer scores the request's complexity:

(tokens/100) × 0.3 + (entities/5) × 0.2 + (tools/3) × 0.3 + (depth/3) × 0.2

Requests scoring below 1.0 are classified as simple. They go directly to the cascade router, which selects a single agent. No planning overhead.

Requests scoring 1.0 or above are complex. NovaBrain runs and produces a full execution plan before any agent fires.

The threshold marks the point where planning overhead pays off. A simple request doesn't benefit from planning: the time NovaBrain would spend producing a plan costs more than it saves. A complex request benefits substantially: planning eliminates false triggers, enables parallelism, and incorporates historical context that simple routing can't access.
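
The scoring rule above translates directly into code. This sketch uses the published formula and the 1.0 threshold; the function names, and how the four counts are extracted from a request, are assumptions:

```python
def complexity_score(tokens, entities, tools, depth):
    """The published weighting: (tokens/100)*0.3 + (entities/5)*0.2
    + (tools/3)*0.3 + (depth/3)*0.2."""
    return (tokens / 100) * 0.3 + (entities / 5) * 0.2 \
        + (tools / 3) * 0.3 + (depth / 3) * 0.2

def is_complex(tokens, entities, tools, depth):
    """Scores of 1.0 or above go to NovaBrain; below 1.0,
    straight to the cascade router."""
    return complexity_score(tokens, entities, tools, depth) >= 1.0

# A short question (40 tokens, 1 entity, 1 tool, depth 1) scores ~0.33: simple.
# A multi-step request (250 tokens, 8 entities, 3 tools, depth 2) scores ~1.50: complex.
```

Note that tool count and token count carry the most weight (0.3 each), so a request that implies several capabilities crosses the threshold quickly even when it is short.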


Three Things NovaBrain Gets Right That Routing Gets Wrong

1. It doesn't re-execute completed work

When a user's request references work done in a prior turn, NovaBrain marks those tasks as already_done. The system reuses the prior output as input to the current plan without re-running the agent. Routing has no mechanism for this — it evaluates the current request in isolation and would dispatch the same agent again.

2. It doesn't trigger on keyword coincidences

Classic false positive: "I don't need the PDF right now." A keyword-based router sees "PDF" and fires the PDF agent. NovaBrain reads the full sentence, understands the negation, and sets the PDF task to action: skip. The agent doesn't fire.

This false positive problem compounds at scale. A production system handling thousands of requests per day in a single domain will accumulate edge cases — requests that contain the keywords of a capability without actually needing it. NovaBrain handles these correctly because it reads intent, not keywords.

3. It identifies what can run in parallel

NovaBrain encodes dependencies explicitly in the plan via depends_on. Tasks with no dependencies can start immediately. Tasks with dependencies wait only for what they specifically need — not for unrelated prior steps.

A routing system that fires agents sequentially doesn't know which steps are independent. It makes every agent wait for the previous one to finish, even when there's no actual dependency between them. NovaBrain's dependency map is what makes DAG-based parallel execution possible.
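
To make the dependency idea concrete, here is one way to derive parallel execution waves from depends_on lists. This is a hypothetical helper, not Nova OS code; it is a level-by-level variant of the Kahn-style topological ordering the DAGExecutor is described as using:

```python
def runnable_waves(depends_on):
    """Group task_ids into waves that can run in parallel.

    depends_on: dict mapping each task_id to the list of task_ids
    it waits for. Each returned wave depends only on earlier waves.
    """
    done = set()
    remaining = dict(depends_on)
    waves = []
    while remaining:
        # Tasks whose every dependency has already completed are runnable now.
        wave = sorted(t for t, deps in remaining.items() if set(deps) <= done)
        if not wave:
            raise ValueError("cycle detected in depends_on")
        waves.append(wave)
        done.update(wave)
        for t in wave:
            del remaining[t]
    return waves
```

For the contract example, clause extraction and an unrelated summarization task would land in the first wave together, while the compliance check waits only for the extraction it actually needs.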


The Feedback Loop: Plans That Improve Over Time

Every plan NovaBrain produces is logged. The execution outcome of each task in the plan — whether it completed successfully, how long it took, whether it required a fallback — feeds back into the trust score system.

Over time, NovaBrain's plans become more accurate because the agents they route to have better trust scores reflecting real performance history. An agent that consistently executes its assigned tasks successfully earns a higher trust score, which makes it a preferred assignment in future plans when multiple agents could fulfill the required capability.

This feedback loop closes the gap between planned performance and actual performance. The first plan NovaBrain produces for a new deployment is based on configured capabilities and neutral trust scores. The hundredth plan is based on real execution history across all agents in the mesh.
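
The trust update itself could be as simple as an exponential moving average over task outcomes. This is an illustrative sketch, not the actual Nova OS scoring rule; the alpha parameter, the 0.5 neutral starting score, and the update shape are all assumptions:

```python
def update_trust(score, success, alpha=0.1):
    """Nudge an agent's trust score toward 1.0 on success, 0.0 on failure.

    alpha controls how fast recent outcomes outweigh history
    (hypothetical parameter, assumed here for illustration).
    """
    outcome = 1.0 if success else 0.0
    return (1 - alpha) * score + alpha * outcome

# Starting from a neutral 0.5, repeated successes pull the score up,
# so the agent becomes a preferred assignment for its capability.
```

With a rule like this, a new deployment starts every agent at the neutral score, and the hundredth plan reflects accumulated execution history, as described above.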


Why the Planning Step Produces 96% Accuracy

Nova OS's multi-agent orchestration achieves 96% task accuracy — 2.7× the industry benchmark for AI task completion. The planning step is the primary reason.

The industry benchmark reflects systems that use static workflows or keyword-based routing. These approaches work on clean, predictable requests and fail on the long tail: requests that reference prior context, requests that contain negations or conditionals, requests that span domain boundaries in unexpected ways.

NovaBrain handles the long tail correctly because it plans from full context rather than matching surface patterns. The 96% number reflects what planning buys on realistic enterprise request distributions — not a curated test set, but the full variety of real-world inputs that production systems encounter.


NovaBrain and the Rest of the System

NovaBrain doesn't work in isolation. Its plans are executed by the rest of the Nova OS architecture:

The DAGExecutor runs tasks in the BrainPlan in parallel where depends_on permits, using Kahn's topological sort algorithm to sequence execution correctly.

The cascade router uses the required_capability field from each BrainTask to find the right agent — matching capability requirements against registered agent profiles through the same 3-tier routing process used for direct requests.

PlanRepair handles mid-execution failures. If an agent fails while running a task, PlanRepair re-plans the remaining tasks in the graph, finding alternative agents that can fulfill the same required_capability.
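
A sketch of what re-assignment during repair might look like, assuming a registry mapping agents to capability sets and the trust scores described earlier. The function name and both data shapes are hypothetical:

```python
def repair_assignment(failed_agent, capability, registry, trust):
    """Pick a replacement agent for a failed task.

    registry: dict mapping agent name -> set of capabilities it offers.
    trust: dict mapping agent name -> trust score (0.5 assumed neutral).
    Returns the highest-trust alternative, or None if no agent qualifies.
    """
    candidates = [
        agent for agent, caps in registry.items()
        if capability in caps and agent != failed_agent
    ]
    if not candidates:
        return None  # no alternative: the failure must surface to the caller
    return max(candidates, key=lambda agent: trust.get(agent, 0.5))
```

The key property is that repair operates on the capability requirement, not on agent identity, so any registered agent with the right capability is a valid substitute.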

The call log records the full BrainPlan alongside every execution event — which tasks ran, in what order, with what agents, and what each produced. This is the audit trail that regulated enterprise deployments require.

NovaBrain is the first thing that runs on a complex request and the layer whose output the rest of the system executes. Getting planning right is what makes the execution layer reliable.

Learn more about Nova OS →

Stay Connected

💻 Website: meganova.ai

📖 Docs: docs.meganova.ai

✍️ Blog: Read our Blog

🐦 Twitter: @meganovaai

🎮 Discord: Join our Discord