How to Refine a Character Portrait Step by Step With img2img

Generating a character portrait often takes more than one pass. The first result from the character image generator might have the right outfit and style but the wrong expression, lighting, or composition. Rather than regenerating from scratch and getting something completely different, you can use img2img to iterate — feeding the existing image back into the generation pipeline as a reference and describing only the changes you want.

This article covers the img2img workflow in MegaNova Studio's full image generation studio: what it does, how to load reference images, which modes to use for different refinement goals, and the settings that control how much the output changes from the input.


img2img vs. Regenerating From Scratch

When you regenerate from scratch, you get a new random result within the constraints of your prompt. Each generation is independent — the seed is random (-1 by default), which means the face, lighting, and composition can shift significantly even with the same text prompt.

img2img changes this. You give the model an existing image as a reference, and it generates a new image guided by both your text prompt and the visual content of the reference. The output is closer to the input than a fresh generation would be. You're not starting over — you're steering.

This is what makes img2img the right tool for iterative refinement. If a portrait has the right face and clothing but the background is wrong, a targeted img2img pass can replace the background while leaving the character intact. If the expression isn't right, a prompt-guided pass can shift it without randomizing everything else.
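To make the contrast concrete, here is a minimal sketch of the two request shapes, assuming a hypothetical API. The field names (referenceImages, guidanceScale) are illustrative, not MegaNova's actual schema:

```typescript
// Hypothetical request shapes; MegaNova's real API may differ.
type Txt2ImgRequest = {
  prompt: string;
  seed: number; // -1 = random: each run is independent
};

type Img2ImgRequest = Txt2ImgRequest & {
  referenceImages: string[]; // IDs or files of input images
  guidanceScale: number;     // prompt influence vs. reference influence
};

// Regenerating from scratch: only the prompt constrains the output.
const fresh: Txt2ImgRequest = {
  prompt: "anime girl with dark hair, candlelit library",
  seed: -1,
};

// img2img: the same prompt, but anchored to an existing portrait.
const refined: Img2ImgRequest = {
  ...fresh,
  referenceImages: ["portrait-v1.png"],
  guidanceScale: 3.5,
};
```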


Where to Find img2img

The img2img functionality lives in the full image generation studio, accessible via the main navigation (not the character-specific Quick Create generator). Look for the Image Generation section in the sidebar.

This is a different interface from the character portrait generator used during Quick Create. The full studio has a settings panel on the left, an image output area on the right, and a References section near the top of the settings. That References section is where you load your input image for img2img.


Step 1: Load Your Portrait as a Reference Image

The References panel accepts up to 10 images total. To load a portrait you've already generated:

Option A — From your space: Click the + button in the References panel, then select From space. This opens your image collection — the same gallery that stores all your generated images. Find the portrait you want to refine, click it, and confirm. The image loads into the References panel as a thumbnail.

Option B — From this device: Click +, then select From this device. Use the file browser to select an image file from your computer.

Option C — Edit from My Creations: From any view where your generated images appear, clicking an "Edit image" action on a specific image sends it straight to the generation studio's References panel. The studio opens with that image already loaded.

You can load multiple reference images if you want to blend elements from several sources, but for portrait refinement, one reference image is usually sufficient.


Step 2: Confirm the Model Switches to img2img Mode

When you load a reference image, the studio automatically detects that img2img is needed. Two things happen:

  1. The model dropdown filters to show only models that support image editing. These models have a green Edit badge next to their name.
  2. If your current model doesn't support img2img, the studio automatically switches to the first available img2img-capable model and shows a warning: "Switched to [model name] (supports image editing)."

You can manually select any model from the filtered list; every model still visible in this context supports img2img, since models without the Edit badge are hidden while a reference image is loaded.

The credit cost still applies per generation and is shown in the model selector.
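The filtering and fallback behavior can be summarized in a short sketch. This is an assumption about how the logic works, not the studio's actual code; ImageModel, supportsImageEditing, and resolveModel are hypothetical names:

```typescript
// Hypothetical model metadata; names and fields are illustrative only.
interface ImageModel {
  name: string;
  supportsImageEditing: boolean; // shown as the green Edit badge
}

function resolveModel(models: ImageModel[], current: ImageModel): ImageModel {
  // With a reference image loaded, only edit-capable models are eligible.
  const editable = models.filter((m) => m.supportsImageEditing);
  if (current.supportsImageEditing) return current;
  if (editable.length === 0) throw new Error("no img2img-capable model available");

  // Fall back to the first img2img-capable model and warn, mirroring
  // the studio's "Switched to [model name]" message.
  const fallback = editable[0];
  console.warn(`Switched to ${fallback.name} (supports image editing)`);
  return fallback;
}
```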


Step 3: Choose Your AI Assistant Mode

The sparkle icon next to the prompt field opens the AI Assistant menu. This menu is where you specify how the reference image should influence the generation. There are five modes:

Smart Image Editing — the most intuitive option for portrait refinement. Describe the change you want in plain language: "change the background to a dark library," "make the expression more serious," "add dramatic lighting from the left." The model applies the described edit while preserving the rest of the image.

Img2Img — uses the input image to guide the overall generation direction. It is less targeted than Smart Image Editing and better suited to producing a similar composition with variation rather than one specific change.

Character Reference — uses the input image as a character consistency anchor. The generated output maintains the character's face and key visual features while applying your prompt for other changes. Use this when the priority is keeping the character recognizable across multiple generated images.

Style Reference — extracts the aesthetic properties of the input image (color palette, lighting mood, rendering style) and applies them to a new generation. Use this when you have a stylistically strong image and want new generations that feel visually consistent with it.

ControlNet — uses the input image as a structural reference, preserving pose and composition layout while allowing other properties to change. Use this when you want to reuse a specific pose or body composition from an existing image.

For most portrait refinement use cases — adjusting small details while keeping the character stable — Smart Image Editing is the right starting mode. Character Reference is the right mode when you're generating multiple images and need the face to stay consistent across them.
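As a rough mental model, the five modes are values of a single setting that decides what the reference image anchors. The encoding below is an assumption for illustration; only the mode names come from the menu:

```typescript
// Hypothetical encoding of the five assistant modes; the labels match the
// menu, but the payload shape is an assumption.
type AssistantMode =
  | "smart-image-editing" // targeted edit, rest of image preserved
  | "img2img"             // overall direction, looser variation
  | "character-reference" // face and key features anchored
  | "style-reference"     // palette, lighting mood, rendering style
  | "controlnet";         // pose and composition layout

function pickMode(goal: "small-edit" | "face-consistency" | "reuse-pose"): AssistantMode {
  switch (goal) {
    case "small-edit":       return "smart-image-editing";
    case "face-consistency": return "character-reference";
    case "reuse-pose":       return "controlnet";
  }
}
```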


Step 4: Write a Targeted Prompt

The prompt is what tells the model what to change or what to generate. Write it specifically:

Less effective: "anime girl with dark hair"
More effective: "same character, change background to candlelit stone chamber, maintain current outfit and expression"

For Smart Image Editing, describe only the change you want. Mentioning elements you want to keep ("maintain current outfit," "same face") signals to the model that those should stay stable.

For Character Reference mode, describe the full image you want. The reference image handles face consistency; the prompt handles everything else — setting, pose, mood, lighting.

The studio also has a Blueprint Field Picker (shown when characters are loaded) that can insert appearance details from a character's blueprint directly into the prompt. This is useful when your prompt needs to re-specify character details to anchor the generation.
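If you find yourself writing the same "change one thing, keep the rest" prompts repeatedly, a small template helps. The helper below is purely illustrative; buildEditPrompt is a hypothetical name, not a studio feature:

```typescript
// A hypothetical helper for assembling a targeted Smart Image Editing
// prompt: one change, plus explicit "keep" signals.
function buildEditPrompt(change: string, keep: string[]): string {
  const anchors = keep.map((k) => `maintain ${k}`).join(", ");
  return `same character, ${change}, ${anchors}`;
}

buildEditPrompt("change background to candlelit stone chamber", [
  "current outfit",
  "current expression",
]);
// => "same character, change background to candlelit stone chamber,
//     maintain current outfit, maintain current expression"
```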


Step 5: Adjust Guidance Scale and Steps

Two settings in the Additional Settings section control the refinement intensity:

Guidance Scale (default 3.5) — controls how strictly the model follows your text prompt relative to the reference image. Higher values produce output that more closely matches the prompt text; lower values produce output that stays closer to the reference image's content. For subtle refinements where you want the image to change as little as possible, keep guidance scale low (2–4). For targeted edits where the text instruction should dominate, increase it (5–8).

Steps (default 20) — controls generation quality and detail. Higher step counts produce more refined output but take longer and cost more. For initial iteration passes, 20 steps is sufficient. For a final refinement where quality matters, increase to 25–30.

Both settings are in the collapsed Additional Settings row in the left panel. Click the row to expand it.
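As a quick reference, here is how those two settings and their recommended ranges might look as a settings object. The defaults and ranges are from this article; the object shape itself is an assumption:

```typescript
// Defaults from the article; the object shape is an assumption.
const refinementSettings = {
  guidanceScale: 3.5, // 2-4 for subtle passes, 5-8 for prompt-dominant edits
  steps: 20,          // 20 while iterating, 25-30 for a final pass
};

// Example: a final-quality pass where the text instruction should dominate.
const finalPass = { ...refinementSettings, guidanceScale: 6, steps: 28 };
```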


Step 6: Set a Fixed Seed for Controlled Variation

The seed (default -1, random) determines the random variation in each generation. With a fixed seed, the same prompt and reference image produce consistent results — useful when you want to compare the effect of changing only one parameter.

To fix the seed: note the seed value from a generation you like (visible in the image metadata), then type that value into the seed field before regenerating. Changing any other parameter with the same seed shows the isolated effect of that change.

For exploratory refinement — finding what works — keep seed at -1. Once you find a direction that's working, fix the seed and iterate on the prompt.
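Conceptually, seed handling works like this sketch, where -1 asks for a fresh random seed per run; resolveSeed and the 32-bit range are assumptions for illustration:

```typescript
// Hypothetical seed resolution: -1 means "roll a new seed each run".
function resolveSeed(seed: number): number {
  return seed === -1 ? Math.floor(Math.random() * 2 ** 32) : seed;
}

// Exploration: new variation every generation.
const exploratory = resolveSeed(-1);

// Controlled comparison: reuse a seed you noted from the image metadata,
// then change exactly one parameter between runs.
const fixed = resolveSeed(1234567);
```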


Step 7: Generate and Evaluate

Click Generate. The output appears in the gallery on the right side. The image is automatically saved to your image collection with category "edited" and a label noting the model used.

Evaluate: does the output maintain what you wanted to keep? Did the targeted change apply? If the output drifted from the reference more than expected, lower the guidance scale and regenerate. If the targeted change didn't apply strongly enough, increase the guidance scale or make the prompt instruction more specific.

Keep the reference image loaded. You can continue generating against the same reference with different prompts or settings without re-uploading it. Each new generation produces a new result you can compare against the others.
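The adjust-and-regenerate advice above amounts to a simple feedback rule, sketched below. The step size and clamp bounds are arbitrary illustrative choices, not values from the studio:

```typescript
// A rule-of-thumb adjustment loop encoding the evaluation advice above.
type Outcome = "drifted-from-reference" | "edit-too-weak" | "good";

function nextGuidanceScale(current: number, outcome: Outcome): number {
  switch (outcome) {
    case "drifted-from-reference": return Math.max(current - 1, 1);  // hug the reference
    case "edit-too-weak":          return Math.min(current + 1, 10); // let the prompt lead
    case "good":                   return current;
  }
}
```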


Step 8: Use the Output as the Next Reference

Once you have a generation that's closer to what you want, you can load it as a new reference for the next pass. This is the iterative refinement loop: each generation can become the input to the next.

In the gallery, click the image you want to use next, then add it to the References panel (remove the old reference first if you only want the new one). Adjust your prompt to describe the next change, and generate again.

This layered approach lets you make changes in sequence rather than all at once — fixing the background first, then the lighting, then small expression details — without any single generation having to handle too many changes simultaneously.
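In pseudocode terms, the loop looks like the sketch below, where generate stands in for a single pass through the studio (load reference, write prompt, click Generate); the function and its signature are hypothetical:

```typescript
// The refinement loop in miniature: each pass's output becomes the next
// pass's reference.
async function refine(
  portrait: string,
  passes: string[],
  generate: (ref: string, prompt: string) => Promise<string>,
): Promise<string> {
  let reference = portrait;
  for (const prompt of passes) {
    reference = await generate(reference, prompt); // one change per pass
  }
  return reference;
}

// e.g. refine("portrait-v1.png", [
//   "change background to dark library",
//   "add dramatic lighting from the left",
//   "make the expression more serious",
// ], generate);
```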


Saving a Final Version to the Character

Once you have the refined portrait you want, use the Apply button in the character-specific generator to set it as the character's avatar; the crop tool opens so you can frame the final image. From the full image generation studio, you can instead download the image and upload it as an avatar through the Assets tab in the Character Studio.

The refined image is already in your image collection under category "edited" with the generation history preserved in the batch system.

Open the image generation studio and start refining →

Stay Connected

💻 Website: MegaNova Studio

🎮 Discord: Join our Discord

👽 Reddit: r/MegaNovaAI

🐦 Twitter: @meganovaai