How to Use the Python SDK to Chat With Your Character Programmatically
You've spent time building a character in MegaNova Studio — crafting the personality, writing the backstory, tuning the dialogue style. Now you want to bring that character into your own application, pipeline, or script. This guide shows you exactly how to do that, from a one-liner SDK call to full streaming conversations with persistent memory.
Two Ways to Connect
MegaNova gives you two integration paths depending on how much existing OpenAI infrastructure you already have:
Option A — MegaNova Python SDK (native, full-featured):
pip install meganova
Option B — OpenAI SDK (drop-in, if you're already using it):
pip install openai
Both work. Option A gives you access to MegaNova-specific fields like memory injection counts, resolution status, and tool confirmation flows. Option B is the right choice if you're migrating an existing codebase and want to swap endpoints with minimal changes.
Getting Your API Key
Every published agent has an API key in the format agent_xxx.... Find it in the Overview tab of your agent in MegaNova Studio. Copy it — you'll need it as the credential for every request.
A quick sanity check before writing any code:
import requests
API_KEY = "agent_xxx..."
BASE_URL = "https://studio-api.meganova.ai"
info = requests.get(f"{BASE_URL}/api/agents/v1/{API_KEY}/info").json()
print(info["name"]) # Your character's name
print(info["welcome_message"])
print(info["is_available"]) # False if outside business hours
If you get the agent name back, you're authenticated and ready.
The Simplest Possible Chat
With the MegaNova SDK:
from meganova.cloud import CloudAgent
agent = CloudAgent(api_key="agent_xxx...")
response = agent.chat("What's your name?")
print(response)
That's it. The character's entire blueprint — personality, backstory, tone, guardrails — is already baked in on the server side. You don't write a system prompt. You don't manage a message history array. You just send text and get text back.
The Native API: Full Control
When you need more than a simple string response, use the native REST endpoint directly. It returns richer metadata on every reply:
import requests
API_KEY = "agent_xxx..."
BASE_URL = "https://studio-api.meganova.ai"
response = requests.post(
    f"{BASE_URL}/api/agents/v1/{API_KEY}/chat",
    json={
        "message": "Tell me about yourself",
        "user_identifier": "user@example.com",
        "user_identifier_type": "email"
    }
).json()
print(response["response"]) # The character's reply
print(response["conversation_id"]) # Save this — you need it for the next message
print(response["tokens_used"]) # Token cost for this turn
print(response["memories_used"]) # How many memories were injected
Two fields here deserve attention:
- conversation_id — This is your thread ID. Pass it back on every subsequent message to maintain context. Without it, each message starts a fresh conversation.
- memories_used — The number of long-term memory entries the character retrieved about this user. If you see 0 on a first interaction and 3 on a returning user's second conversation, the memory system is working.
Multi-Turn Conversations
The pattern for maintaining a running conversation is simple: save the conversation_id from the first response and pass it forward on every request.
import requests
API_KEY = "agent_xxx..."
BASE_URL = "https://studio-api.meganova.ai"
def chat(message, conversation_id=None, user_id=None):
    payload = {"message": message}
    if conversation_id:
        payload["conversation_id"] = conversation_id
    if user_id:
        payload["user_identifier"] = user_id
        payload["user_identifier_type"] = "email"
    return requests.post(
        f"{BASE_URL}/api/agents/v1/{API_KEY}/chat",
        json=payload
    ).json()
# Start a conversation
r1 = chat("My name is Alex and I'm building a trading app", user_id="alex@example.com")
conv_id = r1["conversation_id"]
print(r1["response"])
# Continue it — character knows your name, context carries forward
r2 = chat("What do you think about my project?", conversation_id=conv_id)
print(r2["response"])
# Even a new conversation later will inject Alex's stored memories
r3 = chat("I'm back!", user_id="alex@example.com") # new conv_id, but memories persist
print(r3["memories_used"]) # Will be > 0 on return visits
The character doesn't just remember within a session — it builds a persistent profile per user_identifier. Facts, preferences, and summaries accumulate over time and are automatically retrieved on future interactions.
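In an application you will usually track one conversation_id per user rather than passing it around by hand. A minimal session wrapper, sketched with a stubbed-out transport (CharacterSession and fake_send are illustrative names, not part of the SDK; in practice the send callable would POST to the chat endpoint shown above):

```python
class CharacterSession:
    """Tracks the active conversation_id per user so every message
    after the first continues the same thread."""

    def __init__(self, send):
        self.send = send    # callable: (message, conversation_id, user_id) -> response dict
        self.threads = {}   # user_id -> conversation_id

    def say(self, user_id, message):
        response = self.send(message, self.threads.get(user_id), user_id)
        self.threads[user_id] = response["conversation_id"]
        return response["response"]

# Demo with a fake transport that echoes and assigns thread IDs
def fake_send(message, conversation_id, user_id):
    return {
        "conversation_id": conversation_id or f"conv-{user_id}",
        "response": f"echo: {message}",
    }

session = CharacterSession(fake_send)
print(session.say("alex@example.com", "Hi"))        # first turn starts a new thread
print(session.say("alex@example.com", "Still me"))  # second turn reuses the same thread
```

The wrapper only owns the thread bookkeeping; swapping fake_send for a real HTTP call leaves the rest of your code unchanged.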
Streaming Responses
For chat interfaces, CLI tools, or any UX where you want text to appear token-by-token, MegaNova supports OpenAI-compatible streaming via Server-Sent Events:
from openai import OpenAI
client = OpenAI(
    api_key="agent_xxx...",
    base_url="https://studio-api.meganova.ai/api/agents/v1/agent_xxx..."
)

print("Character: ", end="", flush=True)
stream = client.chat.completions.create(
    model="agent",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()  # newline at end
The stream yields JSON delta objects in standard OpenAI format — each chunk carries a content field until the final finish_reason: "stop" chunk signals completion. This means any code you've already written for OpenAI streaming works here without modification.
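Because each chunk follows the standard OpenAI shape, accumulating a stream into the full reply is mechanical. A minimal sketch using plain dicts to simulate the chunk payloads (accumulate_stream is an illustrative helper, not an SDK function):

```python
def accumulate_stream(chunks):
    """Collect delta contents from OpenAI-style streaming chunks
    into the full reply text, stopping at finish_reason "stop"."""
    parts = []
    for chunk in chunks:
        choice = chunk["choices"][0]
        delta = choice.get("delta", {}).get("content")
        if delta:
            parts.append(delta)
        if choice.get("finish_reason") == "stop":
            break
    return "".join(parts)

# Simulated chunks in the shape the stream yields
fake_stream = [
    {"choices": [{"delta": {"content": "Once upon "}}]},
    {"choices": [{"delta": {"content": "a time."}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
print(accumulate_stream(fake_stream))  # Once upon a time.
```

The same loop works on the SDK's chunk objects if you read chunk.choices[0].delta.content instead of indexing dicts.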
The OpenAI Drop-In: Migrating Existing Code
If you have an existing application that calls the OpenAI API, switching to a MegaNova character requires changing exactly two lines:
# Before
from openai import OpenAI
client = OpenAI(api_key="sk-...")
# After
from openai import OpenAI
client = OpenAI(
    api_key="agent_xxx...",
    base_url="https://studio-api.meganova.ai/api/agents/v1/agent_xxx..."
)

# Everything else stays identical
response = client.chat.completions.create(
    model="agent",  # Always "agent" — model is configured in Studio
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
The response format is fully spec-compatible: choices[0].message.content, usage.total_tokens, finish_reason — all there, all in the same shape as OpenAI's response objects.
One key difference worth knowing: the model field in the request is ignored. The model is configured per-agent in MegaNova Studio, not per-request. This is intentional — your character's voice stays consistent regardless of which model version is running underneath.
Handling Tool Confirmation
Some agents are configured to use tools — calling external APIs, creating tickets, sending emails. For sensitive operations, the agent can be set to require explicit user approval before executing. Your code needs to handle this flow:
import requests
API_KEY = "agent_xxx..."
BASE_URL = "https://studio-api.meganova.ai"
response = requests.post(
    f"{BASE_URL}/api/agents/v1/{API_KEY}/chat",
    json={"message": "Create a support ticket for my login issue"}
).json()

if response.get("status") == "awaiting_confirmation":
    pending = response["pending_tool_call"]
    print(f"Agent wants to: {pending['description']}")
    # e.g. "Create ticket: 'Login issue' (priority: high)"

    user_input = input("Approve? (y/n): ")
    action = "approve" if user_input == "y" else "reject"

    final = requests.post(
        f"{BASE_URL}/api/agents/v1/{API_KEY}/chat/confirm",
        json={
            "approval_id": pending["approval_id"],
            "action": action
        }
    ).json()
    print(final["response"])
else:
    print(response["response"])
The pending_tool_call object tells you the tool name, its arguments, and a human-readable description of what's about to happen. This pattern is useful for any agentic flow where you want a human in the loop before the agent takes a real-world action.
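If you handle this flow in more than one place, the approve/reject branch can be factored into a small pure helper. A sketch over the plain response dicts shown above (build_confirmation is an illustrative name; the field names and the /chat/confirm path come from this guide):

```python
def build_confirmation(response, approved):
    """Given a chat response, return the (url_suffix, payload) to POST
    to the confirm endpoint, or None if no approval is pending."""
    if response.get("status") != "awaiting_confirmation":
        return None
    pending = response["pending_tool_call"]
    payload = {
        "approval_id": pending["approval_id"],
        "action": "approve" if approved else "reject",
    }
    return ("/chat/confirm", payload)

# Example: a response that requires approval
resp = {
    "status": "awaiting_confirmation",
    "pending_tool_call": {"approval_id": "ap_123", "description": "Create ticket"},
}
suffix, payload = build_confirmation(resp, approved=True)
print(suffix, payload["action"])  # /chat/confirm approve
```

Keeping the decision logic separate from the HTTP call also makes it trivial to unit-test the approval path without hitting the API.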
Error Handling in Production
A minimal but production-ready wrapper:
import requests
from requests.exceptions import RequestException
def send_message(api_key, message, conversation_id=None):
    try:
        response = requests.post(
            f"https://studio-api.meganova.ai/api/agents/v1/{api_key}/chat",
            json={"message": message, "conversation_id": conversation_id},
            timeout=30
        )
        if response.status_code == 429:
            retry_after = response.headers.get("Retry-After", 60)
            raise Exception(f"Rate limit hit. Retry after {retry_after}s")
        if response.status_code == 404:
            raise Exception("Agent not found or not published")
        response.raise_for_status()
        return response.json()
    except RequestException as e:
        raise Exception(f"Network error: {e}")
The API returns standard HTTP status codes: 200 for success, 429 with a Retry-After header for rate limiting, 404 if the agent isn't published, and 401 for an invalid key. The default rate limit is 30 messages/minute and 1,000/day per user — configurable in Studio.
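When you do hit a 429, the polite response is to wait for the advertised Retry-After interval and try again. A sketch of such a retry layer, assuming you raise a dedicated exception from the 429 branch of a wrapper like send_message above (RateLimited and with_retries are illustrative names, not part of the SDK):

```python
import time

class RateLimited(Exception):
    """Raised when the API returns 429; carries the Retry-After value in seconds."""
    def __init__(self, retry_after):
        super().__init__(f"rate limited, retry after {retry_after}s")
        self.retry_after = float(retry_after)

def with_retries(call, max_attempts=3, sleep=time.sleep):
    """Invoke call(), retrying on RateLimited and honoring Retry-After."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except RateLimited as e:
            if attempt == max_attempts:
                raise
            sleep(e.retry_after)

# Demo with a fake call that is rate-limited once, then succeeds
attempts = []
def fake_call():
    attempts.append(1)
    if len(attempts) < 2:
        raise RateLimited(retry_after=1)
    return {"response": "ok"}

result = with_retries(fake_call, sleep=lambda s: None)  # skip real sleeping in the demo
print(result["response"])  # ok
```

Injecting the sleep function keeps the wrapper testable; in production you simply leave the default in place.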
What You're Actually Getting With Each Request
When you call the chat endpoint, MegaNova is doing more than routing a message to an LLM. Under the hood, each request:
- Loads the character blueprint — all 8 sections (identity, personality, backstory, behavior, etc.) are assembled into a system prompt automatically
- Retrieves relevant memories for the given user_identifier — past facts, preferences, and conversation summaries
- Injects knowledge base entries that match keywords in the message
- Executes any tools the agent has configured (custom API calls, ticket lookup, etc.)
- Persists the conversation turn for history, analytics, and future memory extraction
- Updates memory asynchronously after the response is sent
You get all of this for a single HTTP POST. Compare that to building it yourself with the raw OpenAI API — system prompt management, conversation history trimming, retrieval, tool calling, memory persistence — and the value of the abstraction becomes clear.
Quickstart: Full Working Script
import requests
API_KEY = "agent_xxx..." # Paste your agent's API key here
BASE_URL = "https://studio-api.meganova.ai"
def chat_session():
    info = requests.get(f"{BASE_URL}/api/agents/v1/{API_KEY}/info").json()
    print(f"Connected to: {info['name']}")
    print(f"Welcome: {info['welcome_message']}\n")

    conversation_id = None
    while True:
        user_input = input("You: ").strip()
        if not user_input or user_input.lower() in ("exit", "quit"):
            break

        payload = {"message": user_input}
        if conversation_id:
            payload["conversation_id"] = conversation_id

        response = requests.post(
            f"{BASE_URL}/api/agents/v1/{API_KEY}/chat",
            json=payload
        ).json()

        conversation_id = response["conversation_id"]
        print(f"Character: {response['response']}\n")

if __name__ == "__main__":
    chat_session()
Swap in your real API key, run the script, and you have an interactive CLI session with your character — persistent memory, tools, knowledge base, and all.
Next Steps
Once the basics work, the natural path forward is:
- Connect your backend: Pass user_identifier as your app's user ID so each user builds their own memory profile
- Add streaming to your UI: Use the OpenAI-compatible endpoint with stream=True for real-time text rendering
- Add tools: In Studio, define custom HTTP tools so the character can call your own APIs mid-conversation
- Set up agent-to-agent delegation: If you need specialized behavior (billing, technical, returns), link separate published agents and let the primary agent route automatically
The MegaNova agent API is compatible with the OpenAI Python SDK. Rate limits, model selection, and memory behavior are all configured in MegaNova Studio — your integration code stays clean.
Get your API key in MegaNova Studio →
Stay Connected
💻 Website: Meganova Studio
🎮 Discord: Join our Discord
👽 Reddit: r/MegaNovaAI
🐦 Twitter: @meganovaai