How to Generate Videos From Your AI Character's Avatar

5-minute read · Published March 2026


A static avatar tells people what your character looks like. A video shows them how your character moves — their energy, their world, the feeling they create in the first three seconds.

MegaNova Studio's Video tab turns your character's blueprint directly into motion, using the same identity data you've already written. Here's how it works.


Where to Find It

The Video tab lives inside your Character Studio, in the sidebar alongside the Editor, Arena, Lore, and Assets tabs.

It appears automatically once your character has:

  • A saved ID (the character has been created)
  • An uploaded image avatar — not a URL reference, an actual image file

If both conditions are met, the tab is there and ready. No separate setup, no feature flag to enable.
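In client code, that visibility check boils down to two booleans. A minimal sketch (the `id` and `avatar_file` field names are illustrative assumptions, not MegaNova's actual schema):

```python
def video_tab_visible(character: dict) -> bool:
    """Show the Video tab only when both conditions hold:
    the character is saved (has an ID) and has an uploaded
    image avatar file, not just a URL reference."""
    has_id = bool(character.get("id"))
    has_avatar_file = bool(character.get("avatar_file"))
    return has_id and has_avatar_file
```

A character with only an avatar URL would not qualify: `video_tab_visible({"id": "c_42", "avatar_url": "https://…"})` returns `False`.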


The Two Modes: Text-to-Video vs. Image-to-Video

MegaNova supports two video generation modes, and the tab handles both from the same interface.

Text-to-Video (T2V) generates a clip purely from a prompt. Good for establishing scenes, abstract visuals, or when your character's avatar isn't the main focus.

Image-to-Video (I2V) animates a specific image — your character's avatar, an emotion variant, or a background asset — bringing it to life with motion while preserving the visual identity you've designed. This is the mode most creators use in the Video tab, because it keeps your character recognizable.

The model you select determines which mode you're in. Picking a model with I2V in the name enables the image selector; T2V models hide it automatically.
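That mode-switching behavior can be expressed as a one-line check on the model name, as in this sketch (a hypothetical helper, not MegaNova's internal logic):

```python
def generation_mode(model_name: str) -> str:
    """Infer the mode from the model's name: anything with "I2V"
    in it is image-to-video; everything else is text-to-video."""
    return "i2v" if "i2v" in model_name.lower() else "t2v"
```

So `generation_mode("Veo 3.1 Fast I2V")` yields `"i2v"` and shows the image selector, while `generation_mode("Seedance Pro")` yields `"t2v"` and hides it.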


Step 1: Choose Your Reference Image

For I2V generations, the left panel shows a grid of selectable images pulled directly from your character's assets:

  • The main avatar (always listed first)
  • Emotion variants (if you've created expression assets)
  • Background and icon images from your character's asset library

Click any image to select it — a blue highlight and checkmark confirm the selection. That image becomes the visual anchor for the generated video: the model will animate it, generate motion around it, and maintain its visual characteristics throughout the clip.


Step 2: Build Your Prompt From Blueprint Sections

This is where MegaNova's approach differs from most video tools. Instead of writing a prompt from scratch, you select which parts of your character's blueprint to use as context.

Five sections are available as checkboxes:

| Section | What it pulls in |
| --- | --- |
| Appearance | Physical description, clothing, grooming style |
| Current Scene | Setting, atmosphere, what the character is doing right now |
| Tagline | The one-liner that defines the character |
| Personality in Action | Core traits, quirks, and voice style |
| Origin Story | World, background, where they came from |

Toggle the sections you want. The prompt field below updates automatically, combining everything into a single descriptive text:

[Character Name]: [appearance]. [current scene]. [personality]...

You can edit this prompt manually after it's generated, or hit Refill to regenerate it from your current selections. If a section appears greyed out, that part of the blueprint is empty — fill it in the Editor tab first.

The result is a prompt grounded in your character's actual identity data, not improvised from memory.
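The assembly itself is simple to sketch: join the enabled, non-empty sections in the order the tab lists them. The section keys below are illustrative, not MegaNova's internal field names:

```python
def build_prompt(name: str, sections: dict, enabled: set) -> str:
    """Combine the toggled blueprint sections into one prompt.
    Empty sections are skipped, matching the greyed-out
    checkboxes in the UI."""
    order = ["appearance", "current_scene", "tagline",
             "personality", "origin_story"]
    parts = [sections[k].strip() for k in order
             if k in enabled and sections.get(k)]
    if not parts:
        return name
    return f"{name}: " + ". ".join(parts) + "."
```

For example, with Appearance and Tagline toggled on, `build_prompt("Aria", {...}, {"appearance", "tagline"})` produces a string of the `[Character Name]: [appearance]. [tagline].` shape shown above.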


Step 3: Select a Model

Nine models are available across two tiers — lite (faster, lower cost) and pro (higher quality, more credits):

Image-to-Video

  • Wan 2.6 I2V — reliable general-purpose I2V, up to 15 seconds
  • Veo 3.1 Fast I2V — Google's fast variant, up to 8 seconds
  • Veo 3.1 I2V — full-quality Veo, up to 8 seconds
  • Seedance Lite I2V — BytePlus's lightweight option, up to 10 seconds

Text-to-Video

  • Wan 2.6 T2V
  • Veo 3.1 Fast
  • Veo 3.1
  • Seedance Lite
  • Seedance Pro

The credit cost for each model is displayed in the dropdown so you know what you're spending before you commit.


Step 4: Configure Resolution and Duration

Resolution — three fixed options:

| Format | Dimensions | Best for |
| --- | --- | --- |
| 16:9 | 1280×720 | Desktop embeds, YouTube |
| 9:16 | 720×1280 | TikTok, Instagram Reels, Shorts |
| 1:1 | 960×960 | Social media (versatile) |

Duration — controlled by a slider, with limits that vary by model:

| Model | Min | Max |
| --- | --- | --- |
| Wan 2.6 | 2s | 15s |
| Veo models | 2s | 8s |
| Seedance | 2s | 10s |

The slider clamps automatically when you switch models, so you won't accidentally request a 12-second Veo clip.
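The clamping rule follows directly from the table. A sketch, with model-family keys chosen for illustration:

```python
DURATION_LIMITS = {  # (min_s, max_s) per model family, from the table above
    "wan": (2, 15),
    "veo": (2, 8),
    "seedance": (2, 10),
}

def clamp_duration(family: str, requested_s: int) -> int:
    """Clamp a requested duration to the active model's range,
    mirroring how the slider snaps when you switch models."""
    lo, hi = DURATION_LIMITS[family]
    return max(lo, min(hi, requested_s))
```

Switching from Wan 2.6 to a Veo model with a 12-second duration selected would snap the slider to `clamp_duration("veo", 12)`, which is 8.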

Audio — an optional checkbox, off by default. When enabled, the generation includes an audio layer.


Step 5: Generate and Wait

Hit Generate. The button shows the credit cost before you click, then switches to Submitting... as the request goes out, then Generating... once the task is queued.

Video generation is asynchronous — the backend submits to MegaNova's video API, receives a task ID, and polls for status every 5 seconds. You'll see the active task appear in the right panel with its current status. Most clips complete in 1–3 minutes depending on model, duration, and server load.
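A client following the same pattern, submit, get a task ID, poll on an interval, might look like this sketch. The status strings and the `get_status` callback are assumptions, not MegaNova's documented API:

```python
import time

def wait_for_video(get_status, task_id: str,
                   interval: float = 5.0, timeout: float = 600.0) -> str:
    """Poll a video task until it reaches a terminal status.
    `get_status` is a caller-supplied function (e.g. a thin wrapper
    around a status endpoint) returning strings such as "queued",
    "generating", "succeeded", or "failed"."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(task_id)
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)  # the backend polls every 5 seconds
    raise TimeoutError(f"video task {task_id} still running after {timeout}s")
```

The 10-minute default timeout is generous given that most clips complete in 1–3 minutes, but it guards against a stuck task.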

When finished, the video appears in the Saved Videos grid on the right. It's also automatically added to your character's asset library, with the first 60 characters of your prompt as the label — so you can find it later in the Assets tab.


What the Model Actually Receives

When you hit Generate, here's the exact payload sent to MegaNova's video API:

{
  "model": "Alibaba/wan2.6-i2v",
  "prompt": "Aria: Silver hair cascading over a midnight coat...",
  "size": "720x1280",
  "n_seconds": 7,
  "include_audio": false,
  "image_url": "data:image/jpeg;base64,..."
}

The image is converted to a base64 data URL on the frontend before submission — no CORS issues, no external domain restrictions. Local avatar files, CDN URLs, and uploaded assets all work the same way.
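The conversion to a data URL is a standard base64 encode plus a MIME prefix, as in this sketch:

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a base64 data URL, the format
    carried by the `image_url` field in the payload above."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

Because the bytes are fetched and encoded client-side, it makes no difference whether they originated from a local file, a CDN URL, or an uploaded asset.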


Practical Tips for Better Results

Use Current Scene liberally. The model responds well to specific settings and actions. "Standing at the edge of a rain-soaked rooftop, coat billowing" generates something far more interesting than "standing outside."

Portrait orientation for social. Select 720×1280 before generating if you're creating content for Instagram, TikTok, or Shorts. The composition the model produces is fundamentally different from a landscape crop.

Wan 2.6 I2V for fidelity, Veo for motion quality. Wan 2.6 tends to preserve the reference image more closely. Veo produces more dynamic motion. If your character has a very specific look you want preserved, start with Wan 2.6.

Edit the auto-prompt. The blueprint sections give you a solid starting point, but the prompt field is fully editable. After the auto-fill, add action verbs: "walks slowly toward the camera," "turns and smiles," "the wind moves through her hair." Video models respond to motion language.

Test with shorter durations first. Run a 3–4 second clip to validate the look before committing to a 10-second generation. Credit cost scales with duration.


Where Your Videos Live

Every generated video is saved to three places:

  • Video tab history — right panel, scrollable grid with delete buttons
  • Character's Asset library — accessible from the Assets tab, filtered by type: video
  • Video Generation page — for managing videos across multiple characters

Deleting from the Video tab's history panel is permanent.


The Bigger Picture

The Video tab is one piece of a larger pipeline. Once you have a clip that captures your character's visual identity, you can embed it on landing pages via the character embed, use it in marketing materials, or attach it to agent-facing channels for richer context.

For characters deployed as support bots or brand personas, a short video generated from the character's avatar and scene data creates a much stronger first impression than a static image alone.


Open MegaNova Studio and go to your character's Video tab →


MegaNova Studio supports 9 video models across T2V and I2V modes. Generated videos are saved to your character's asset library automatically. Credit costs vary by model and are shown before generation.

Stay Connected

💻 Website: MegaNova Studio

🎮 Discord: Join our Discord

👽 Reddit: r/MegaNovaAI

🐦 Twitter: @meganovaai