AI Workshop

The AI Workshop is StudioBrain’s hub for AI-powered content generation. It supports multiple AI providers, collaborative multi-provider generation, deep analysis, context-aware prompting, and smart content merging — all informed by your project’s templates and rules.

Access the AI Workshop from the sidebar by clicking AI Workshop.

Overview

The AI Workshop lets you:

  • Write prompts and generate content for your worldbuilding entities
  • Compare responses from multiple AI providers side by side
  • Merge and fuse the best parts of different AI outputs
  • Control exactly what context from your world is sent to the AI
  • Review and import generated content back into your entities
  • Use BrainBits credits for managed AI or bring your own API keys

BrainBits

BrainBits are StudioBrain’s credit system for AI operations. Every AI generation, analysis, or smart fix consumes BrainBits based on the complexity and token usage of the request.

Monthly Allocations

| Tier | BrainBits / Month | Notes |
| --- | --- | --- |
| Free | 100 | For trying StudioBrain AI features; BYO API keys do not consume BrainBits |
| Indie | 500 | Individual allocation |
| Team | 750 per user | Pooled across the team; can be allocated per group |
| Enterprise | Unlimited | Or negotiated high cap |

All tiers can purchase additional BrainBits blocks. StudioBrain AI services (managed orchestration, context building, validation) are included for paid tiers and do not count toward BrainBits usage.

What Consumes BrainBits

| Operation | Approximate Cost |
| --- | --- |
| Single-provider text generation | 1-3 BrainBits |
| Multi-provider collaborative generation | 2-6 BrainBits (per provider) |
| Fusion generation | 3-8 BrainBits |
| Deep analysis | 2-5 BrainBits |
| Smart fix (Premium AI tier) | 1-3 BrainBits |
| Image generation (cloud) | 2-10 BrainBits |
| Vision analysis | 1-3 BrainBits |
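
Since collaborative generation is billed per provider, the total for a request scales with how many providers you select. A minimal sketch of that arithmetic, using the per-provider range from the table above (the function name is illustrative, not part of StudioBrain):

```typescript
// Estimate the BrainBits range for a collaborative request.
// Uses the 2-6 per-provider range from the cost table; name is illustrative.
function collaborativeCostRange(providerCount: number): [number, number] {
  const perProviderMin = 2;
  const perProviderMax = 6;
  return [perProviderMin * providerCount, perProviderMax * providerCount];
}
```

For example, a three-provider collaborative run would consume roughly 6 to 18 BrainBits before any fusion pass.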

Checking Your Balance

Your current BrainBits balance is displayed in the AI Workshop header and in Settings > Billing. The balance updates in real time as you use AI features.

BYO API Keys vs Managed Keys

StudioBrain supports two modes of AI access:

  • Bring Your Own Keys (BYO) — Enter your own API keys in Settings. Requests go directly to the provider using your key and billing. No BrainBits consumed. Available on all tiers.
  • Managed AI (StudioBrain Orchestrator) — Paid tiers include access to StudioBrain’s AI orchestration service. Requests are routed through the orchestrator, which handles provider selection, context optimization, and rules enforcement. Consumes BrainBits from your allocation.

When both BYO keys and managed AI are available, StudioBrain uses your BYO keys first (since they do not consume BrainBits) and falls back to the managed service when a provider is not configured.
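A minimal sketch of that routing preference, assuming a simple per-provider key map (the type and function names here are illustrative, not StudioBrain's actual API):

```typescript
// Hypothetical sketch of BYO-first routing; names are illustrative.
type Provider = "openai" | "anthropic" | "google" | "grok" | "qwen";

interface RoutingConfig {
  byoKeys: Partial<Record<Provider, string>>; // user-supplied API keys
  managedAvailable: boolean; // paid tier with orchestrator access
}

type Route =
  | { mode: "byo"; provider: Provider; key: string } // direct call, no BrainBits
  | { mode: "managed"; provider: Provider } // orchestrator, consumes BrainBits
  | { mode: "unavailable"; provider: Provider };

function routeRequest(provider: Provider, cfg: RoutingConfig): Route {
  const key = cfg.byoKeys[provider];
  if (key) return { mode: "byo", provider, key }; // BYO keys take priority
  if (cfg.managedAvailable) return { mode: "managed", provider };
  return { mode: "unavailable", provider };
}
```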

Selecting AI Providers

StudioBrain supports multiple AI providers. Configure API keys for each in Settings > AI Providers.

Available Providers

| Provider | Models | Notes |
| --- | --- | --- |
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-4, GPT-3.5 Turbo | Cloud-based, requires API key |
| Anthropic | Claude 4 Opus, Claude 4 Sonnet, Claude 3.5 Sonnet | Cloud-based, requires API key |
| Google | Gemini 2.0 Flash, Gemini 1.5 Pro | Cloud-based, requires API key |
| Grok | Grok-2, Grok-3 | Cloud-based, requires API key |
| Qwen (Local) | Qwen text models | Runs locally, free, no API key needed |

Provider Selector

The Provider Selector panel lets you choose one or more providers for each generation request. When multiple providers are selected, the AI Workshop runs your prompt through all of them simultaneously and presents the results for comparison.

Model Selection

For each provider, you can select a specific model. The Model Selector shows available models for your configured providers along with their capabilities and token limits.

Generation Modes

The AI Workshop supports several generation modes:

| Mode | Description |
| --- | --- |
| Single | Send the prompt to one provider and receive a single response |
| Collaborative | Send the prompt to multiple providers and receive separate responses for comparison |
| Fusion | Generate responses from multiple providers, then use a “fusion” model to combine the best parts into a single output |

Collaborative Generation

In collaborative mode, your prompt is sent to all selected providers in parallel. The results appear in a multi-provider comparison view where you can:

  • Read each provider’s response side by side
  • Rate and compare the quality of each response
  • Select the best response to use
  • Merge parts from different responses together

Fusion Mode

Fusion mode takes collaborative generation a step further. After all providers respond, a designated “fusion model” reads all the responses and synthesizes a single, improved output that combines the strengths of each provider’s contribution.

You can configure which provider acts as the fusion model separately from the generation providers.
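Conceptually, the two-pass flow looks like this. The sketch below substitutes a synchronous placeholder for the real provider calls (which run in parallel); `callModel` and `fusionGenerate` are illustrative names, not StudioBrain's internals:

```typescript
// Conceptual sketch of fusion mode: collaborative pass, then a fusion pass.
interface Draft { provider: string; text: string }

// Placeholder standing in for a real (async, parallel) provider call.
function callModel(provider: string, prompt: string): string {
  return `[${provider}] ${prompt.slice(0, 40)}`;
}

function fusionGenerate(prompt: string, providers: string[], fusionProvider: string): string {
  // Pass 1: every generation provider drafts a response.
  const drafts: Draft[] = providers.map((p) => ({ provider: p, text: callModel(p, prompt) }));
  // Pass 2: the designated fusion model reads all drafts and synthesizes one output.
  const fusionPrompt =
    "Combine the strengths of these drafts into a single response:\n" +
    drafts.map((d) => `--- ${d.provider} ---\n${d.text}`).join("\n");
  return callModel(fusionProvider, fusionPrompt);
}
```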

Context Controls

One of the most powerful features of the AI Workshop is fine-grained control over what world context is sent along with your prompt. The AI does not just see your prompt — it sees relevant entity data from your project, filtered through your templates and rules.

Entity Context Selector

The Context Selector lets you pick which entities to include as context for the AI. For example, when generating a new scene at a location, you might include:

  • The location entity itself
  • Key characters who frequent the location
  • The faction that controls the area
  • Relevant items or technology

Relationship Depth

The relationship depth setting controls how many levels of connected entities are included:

  • 0 — Only the selected entities are included
  • 1 — Selected entities plus their direct relationships
  • 2 — Selected entities, their relationships, and their relationships’ relationships

Higher depth provides more context but uses more tokens.
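
The expansion behaves like a breadth-first traversal of the relationship graph, capped at the chosen depth. A small sketch of that idea, assuming a simple adjacency-map graph shape (not StudioBrain's actual data model):

```typescript
// Depth-limited context expansion over an entity relationship graph.
type Graph = Record<string, string[]>; // entity id -> related entity ids

function expandContext(graph: Graph, selected: string[], depth: number): Set<string> {
  const included = new Set(selected);
  let frontier = selected;
  for (let level = 0; level < depth; level++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const rel of graph[id] ?? []) {
        if (!included.has(rel)) {
          included.add(rel);
          next.push(rel);
        }
      }
    }
    frontier = next; // each level adds one more ring of relationships
  }
  return included;
}
```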

Context Budget

The context budget (measured in tokens) sets a maximum size for the context payload sent to the AI. This prevents accidentally sending too much data, which can:

  • Exceed the model’s context window
  • Increase API costs or BrainBits consumption
  • Dilute the AI’s focus with irrelevant information

The default budget is 4,000 tokens, but you can adjust it based on the model’s capabilities and your needs.
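
In essence, the budget acts as a greedy cutoff over the candidate context. The sketch below illustrates the idea with a rough ~4-characters-per-token heuristic; the estimator and function names are assumptions, not StudioBrain's implementation:

```typescript
// Enforce a token budget over candidate context entities (relevance-ordered).
interface ContextEntity { id: string; text: string }

// Rough heuristic: ~4 characters per token. Illustrative only.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitToBudget(entities: ContextEntity[], budgetTokens: number): ContextEntity[] {
  const kept: ContextEntity[] = [];
  let used = 0;
  for (const e of entities) { // entities assumed pre-sorted by relevance
    const cost = estimateTokens(e.text);
    if (used + cost > budgetTokens) break; // over budget: stop (or trigger smart reduction)
    kept.push(e);
    used += cost;
  }
  return kept;
}
```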

Smart Reduction

When the selected context exceeds the budget, smart reduction automatically compresses it:

| Strategy | Description |
| --- | --- |
| Summarize | Generates concise summaries of each entity |
| Extract | Pulls only the most relevant fields and relationships |
| Hybrid | Combines summarization and extraction for balanced results |
| Hierarchical | Prioritizes closer relationships and reduces distant ones more aggressively |

The Context Preview panel shows you exactly what will be sent to the AI, including token estimates and reduction metrics.
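
To illustrate the hierarchical strategy specifically: nearby entities survive intact while distant ones are compressed harder. The keep-ratios below are invented for illustration; StudioBrain's actual reduction is summarization-based, not a simple truncation:

```typescript
// Illustrative sketch of hierarchical reduction: the deeper the relationship,
// the more aggressively the entity's text is reduced. Ratios are made up.
interface RankedEntity { id: string; depth: number; text: string }

function hierarchicalReduce(entities: RankedEntity[]): RankedEntity[] {
  return entities.map((e) => {
    // Keep 100% at depth 0, 50% at depth 1, 25% at depth 2 and beyond.
    const ratio = e.depth === 0 ? 1 : e.depth === 1 ? 0.5 : 0.25;
    const keep = Math.max(1, Math.floor(e.text.length * ratio));
    return { ...e, text: e.text.slice(0, keep) };
  });
}
```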

Additional Context Options

  • Include Markdown bodies — Toggle whether to include the full narrative content of entities or just their frontmatter fields
  • System prompt — Customize the system prompt that frames the AI’s role and instructions
  • Rules context — Include project-specific rules from _Rules/ files to constrain the AI’s output (e.g., no modern technology references, faction-appropriate speech patterns)
  • Template guidance — Include the entity template schema to guide the AI’s output format and ensure it produces the right fields

How Rules Feed Into AI Generation

When rules context is enabled, StudioBrain sends the relevant rules file’s system_prompt and individual rules to the AI. This means:

  • The AI understands the tone, setting, and constraints of your world
  • Critical rules are enforced (the AI avoids producing content that violates them)
  • High-priority rules are used as strong guidance
  • Generated content is more likely to pass validation when imported

Entity Chat

The Entity Chat component block provides conversational AI editing within the entity editor. Instead of switching to the AI Workshop, you can:

  • Ask questions about the entity (“What would this character say in a crisis?”)
  • Request changes (“Make this character older and give them a military background”)
  • Generate specific fields (“Write three dialogue samples for this character”)
  • Refine existing content (“Rewrite the backstory with more emphasis on the corporate conspiracy”)

Entity Chat uses the current entity’s data as context and produces type-safe suggestions that can be applied directly to the entity’s fields.

The Entity Chat system uses discriminated union schemas to guarantee that AI-generated field suggestions match the exact types defined in the template. This means the AI cannot suggest a string where an array is expected, or produce invalid data that would break validation.
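
A stripped-down sketch of how a discriminated union enforces that guarantee. The field names and union shape here are invented for illustration; in StudioBrain the real schemas come from the entity template:

```typescript
// Type-safe field suggestions via a discriminated union (illustrative fields).
type FieldSuggestion =
  | { field: "name"; kind: "string"; value: string }
  | { field: "age"; kind: "number"; value: number }
  | { field: "aliases"; kind: "string[]"; value: string[] };

function applySuggestion(entity: Record<string, unknown>, s: FieldSuggestion): void {
  // The `kind` discriminant lets the compiler narrow `value` per branch,
  // so a string can never be written where an array is expected.
  switch (s.kind) {
    case "string":
    case "number":
      entity[s.field] = s.value;
      break;
    case "string[]":
      entity[s.field] = [...s.value]; // copy to avoid aliasing the suggestion
      break;
  }
}
```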

Deep Analysis Mode

Deep Analysis mode provides thorough, multi-pass analysis of an entity or collection of entities. Use it to:

  • Identify inconsistencies across related entities
  • Check for missing cross-references
  • Evaluate narrative quality and completeness
  • Suggest improvements based on your rules and templates

Deep Analysis runs longer than standard generation and consumes more BrainBits, but produces more comprehensive results.

Working with Results

Viewing Results

After generation completes, results are displayed with:

  • The full generated text with Markdown formatting
  • Provider identification and model used
  • Token usage statistics
  • Generation time
  • BrainBits consumed (if using managed AI)

Multi-Provider Comparison

When using collaborative mode, the Multi-Provider Results view shows all responses in a grid layout. You can:

  • Expand and collapse individual responses
  • Copy any response to clipboard
  • Select a response as the “winner”

Smart Merge

The Smart Merge Dialog lets you combine parts of different AI responses into a single output. You can select paragraphs or sections from different providers and merge them into a coherent final result.

Importing Results

Once you are satisfied with a generated result, you can import it back into your project:

  • Direct import — Apply the generated content to an existing entity’s fields
  • Smart Import — Intelligent parsing that maps generated content to the correct entity fields based on the template schema
  • New entity creation — Create a brand new entity from the generated content

The Smart Import Modal handles the mapping between AI output and entity schema, showing you a preview before applying changes. Because templates define the expected fields and Zod schemas validate them, imported content is type-checked before being written.
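
The essence of that preview step is a field-by-field comparison of AI output against the template's expected types. A minimal sketch, with invented type names standing in for the real template/Zod machinery:

```typescript
// Illustrative schema check: accept fields whose value matches the template's
// expected type, reject unknown fields and type mismatches.
type FieldType = "string" | "number" | "string[]";
type TemplateSchema = Record<string, FieldType>;

function typeOfValue(v: unknown): FieldType | null {
  if (typeof v === "string") return "string";
  if (typeof v === "number") return "number";
  if (Array.isArray(v) && v.every((x) => typeof x === "string")) return "string[]";
  return null;
}

function previewImport(schema: TemplateSchema, output: Record<string, unknown>) {
  const accepted: string[] = [];
  const rejected: string[] = [];
  for (const [field, value] of Object.entries(output)) {
    const expected = schema[field];
    if (expected && typeOfValue(value) === expected) accepted.push(field);
    else rejected.push(field); // unknown field or wrong type
  }
  return { accepted, rejected };
}
```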

Generation History

All AI generations are logged and accessible from the AI Generations page (sidebar). From here you can:

  • Browse past generation results with search and filtering
  • Filter by entity type, provider, or generation mode
  • Re-view or re-import any previous generation
  • Track generation statistics and API usage over time

Advanced Settings

In the AI Workshop, click Show Advanced to access:

  • Max tokens — Maximum length of the AI response (default: 2,048)
  • Temperature — Controls randomness (0 = deterministic, 1 = creative; default: 0.7)
  • Timeout — Maximum time to wait for a response (default: 600 seconds)
  • Fusion max tokens — Separate token limit for the fusion model pass
  • Vector search — Toggle RAG (Retrieval-Augmented Generation) for automatic context enrichment from the vector database
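
Collected as a settings object, the documented defaults look roughly like this (the interface is illustrative; the vector-search default is an assumption, as the guide does not state it):

```typescript
// Illustrative settings shape; defaults taken from the list above.
interface GenerationSettings {
  maxTokens: number;      // maximum response length
  temperature: number;    // 0 = deterministic, 1 = creative
  timeoutSeconds: number; // maximum wait for a response
  fusionMaxTokens?: number; // separate limit for the fusion pass
  vectorSearch: boolean;  // RAG context enrichment (default assumed off)
}

const defaults: GenerationSettings = {
  maxTokens: 2048,
  temperature: 0.7,
  timeoutSeconds: 600,
  vectorSearch: false, // assumption: guide does not document this default
};
```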

Tips for Effective AI Generation

  1. Be specific in your prompts — Instead of “write a character bio,” try “write a 500-word backstory for a corporate security specialist who was forced underground after a scandal.”

  2. Use entity context — Always select relevant entities as context. The AI performs much better when it understands the relationships and lore of your world.

  3. Enable rules context — Turn on rules context to ensure the AI follows your world’s constraints. This significantly improves the consistency of generated content.

  4. Start with collaborative mode — Compare responses from multiple providers to get a range of creative options, then pick the best or merge them.

  5. Adjust the context budget — For detailed generation, increase the budget to give the AI more world knowledge. For quick, focused prompts, keep it lower.

  6. Use Entity Chat for refinements — For entity-specific edits, use the Entity Chat component block in the Visual Editor for conversational back-and-forth editing directly on the entity.

  7. Review before importing — Always review generated content in the Smart Import preview before applying. Check that field types match and narrative content is consistent with existing lore.

Next Steps