
AI Workshop

The AI Workshop is City of Brains Studio’s hub for AI-powered content generation. It supports multiple AI providers, collaborative multi-provider generation, deep analysis, context-aware prompting, and smart content merging.

Access the AI Workshop from the sidebar by clicking AI Workshop.

Overview

The AI Workshop lets you:

  • Write prompts and generate content for your worldbuilding entities
  • Compare responses from multiple AI providers side by side
  • Merge and fuse the best parts of different AI outputs
  • Control exactly what context from your world is sent to the AI
  • Review and import generated content back into your entities

Selecting AI Providers

City of Brains Studio supports multiple AI providers. You can configure API keys for each in Settings.

Available Providers

  Provider        Models                         Notes
  OpenAI          GPT-4 Turbo, GPT-4, GPT-3.5    Cloud-based, requires API key
  Anthropic       Claude 3.5 Sonnet, Claude 3    Cloud-based, requires API key
  Google          Gemini 1.5 Pro                 Cloud-based, requires API key
  Grok            Grok Beta                      Cloud-based, requires API key
  Qwen (Local)    Qwen Text models               Runs locally, free, no API key needed

Provider Selector

The Provider Selector panel lets you choose one or more providers for each generation request. When multiple providers are selected, the AI Workshop runs your prompt through all of them simultaneously and presents the results for comparison.
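The parallel fan-out described above can be sketched in a few lines. This is a minimal illustration, not the Studio's actual implementation: `call_provider` is a stand-in for the real OpenAI/Anthropic/etc. SDK calls, whose names and signatures are assumptions here.

```python
import asyncio

# Hypothetical stub for a real provider API call (OpenAI, Anthropic, ...).
async def call_provider(provider: str, prompt: str) -> dict:
    await asyncio.sleep(0)  # placeholder for real network I/O
    return {"provider": provider, "text": f"[{provider}] response to: {prompt}"}

async def collaborative_generate(prompt: str, providers: list[str]) -> list[dict]:
    # The same prompt is dispatched to every selected provider concurrently.
    tasks = [call_provider(p, prompt) for p in providers]
    return await asyncio.gather(*tasks)

results = asyncio.run(
    collaborative_generate("Describe the docks district",
                           ["openai", "anthropic", "qwen"])
)
```

Because the calls run concurrently via `asyncio.gather`, total latency is roughly that of the slowest provider rather than the sum of all of them.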

Model Selection

For each provider, you can select a specific model. The Model Selector shows available models for your configured providers along with their capabilities and token limits.

Generation Modes

The AI Workshop supports several generation modes:

  Mode            Description
  Collaborative   Send the prompt to multiple providers and receive separate responses for comparison
  Single          Send the prompt to one provider and receive a single response
  Fusion          Generate responses from multiple providers, then use a “fusion” model to combine the best parts into a single output

Collaborative Generation

In collaborative mode, your prompt is sent to all selected providers in parallel. The results appear in a multi-provider comparison view where you can:

  • Read each provider’s response side by side
  • Rate and compare the quality of each response
  • Select the best response to use
  • Merge parts from different responses together

Fusion Mode

Fusion mode takes collaborative generation a step further. After all providers respond, a designated “fusion model” reads all the responses and synthesizes a single, improved output that combines the strengths of each provider’s contribution.

You can configure which provider acts as the fusion model separately from the generation providers.
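Conceptually, the fusion pass is one more generation call whose prompt embeds every candidate response. The sketch below shows one way such a prompt could be assembled; the wording and structure are illustrative assumptions, not the Studio's actual fusion prompt.

```python
def build_fusion_prompt(original_prompt: str, responses: dict[str, str]) -> str:
    # Assemble a single prompt that shows the fusion model every candidate
    # answer alongside the original request. (Illustrative wording only.)
    parts = [
        "You are a fusion editor. Combine the strengths of the candidate",
        "responses below into a single, improved answer.",
        f"\nOriginal prompt:\n{original_prompt}\n",
    ]
    for provider, text in responses.items():
        parts.append(f"--- Candidate from {provider} ---\n{text}\n")
    return "\n".join(parts)

fusion_prompt = build_fusion_prompt(
    "Name the district's three power brokers",
    {"openai": "Draft A", "anthropic": "Draft B"},
)
```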

Context Controls

One of the most powerful features of the AI Workshop is fine-grained control over what world context is sent along with your prompt. The AI does not just see your prompt — it sees relevant entity data from your project, giving it the knowledge needed to generate consistent, lore-accurate content.

Entity Context Selector

The Context Selector lets you pick which entities to include as context for the AI. For example, when generating a new scene at a location, you might include:

  • The location entity itself
  • Key characters who frequent the location
  • The faction that controls the area
  • Relevant items or technology

Relationship Depth

The relationship depth setting controls how many levels of connected entities are included. With a depth of:

  • 0 — Only the selected entities are included
  • 1 — Selected entities plus their direct relationships
  • 2 — Selected entities, their relationships, and their relationships’ relationships

Higher depth provides more context but uses more tokens.
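The depth setting behaves like a breadth-first walk over the relationship graph. Here is a minimal sketch, assuming relationships are stored as an adjacency map of entity IDs (the entity names are hypothetical):

```python
def collect_context(selected: set[str],
                    relations: dict[str, set[str]],
                    depth: int) -> set[str]:
    # Breadth-first expansion: walk `depth` hops out from the
    # initially selected entities, collecting everything reached.
    seen = set(selected)
    frontier = set(selected)
    for _ in range(depth):
        nxt = set()
        for entity in frontier:
            nxt |= relations.get(entity, set())
        frontier = nxt - seen
        seen |= frontier
    return seen

# Hypothetical relationship graph for illustration.
graph = {"docks": {"harbormaster", "smugglers_guild"},
         "smugglers_guild": {"the_broker"}}
```

With depth 0, only `docks` is included; depth 1 adds `harbormaster` and `smugglers_guild`; depth 2 also pulls in `the_broker`.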

Context Budget

The context budget (measured in tokens) sets a maximum size for the context payload sent to the AI. This prevents accidentally sending too much data, which can:

  • Exceed the model’s context window
  • Increase API costs
  • Dilute the AI’s focus with irrelevant information

The default budget is 4,000 tokens, but you can adjust it based on the model’s capabilities and your needs.
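A rough way to think about the budget check: estimate a token count per context chunk and compare the total against the limit. The 4-characters-per-token heuristic below is a common approximation for English prose, not the Studio's actual tokenizer.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # The actual tokenizer may count differently per model.
    return max(1, len(text) // 4)

def fits_budget(chunks: list[str], budget: int = 4000) -> bool:
    # True if the combined context fits within the token budget.
    return sum(estimate_tokens(c) for c in chunks) <= budget
```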

Smart Reduction

When the selected context exceeds the budget, smart reduction automatically compresses it using one of several strategies:

  Strategy        Description
  Summarize       Generates concise summaries of each entity
  Extract         Pulls only the most relevant fields and relationships
  Hybrid          Combines summarization and extraction for balanced results
  Hierarchical    Prioritizes closer relationships and reduces distant ones more aggressively

The Context Preview panel shows you exactly what will be sent to the AI, including token estimates and reduction metrics (original tokens, reduced tokens, reduction ratio).
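To make the Hierarchical idea concrete, here is a simplified sketch: entities arrive tagged with their relationship distance, closer entities are kept in full, and distant ones are dropped once the budget is spent. The real reduction strategies are more sophisticated; this only illustrates the prioritization.

```python
def reduce_context(entities: list[tuple[int, str]], budget: int) -> list[str]:
    # entities: (distance, text) pairs. Sort by distance so the closest
    # relationships are kept first; stop once the rough token budget
    # (about 4 chars per token) would be exceeded.
    kept, used = [], 0
    for _, text in sorted(entities, key=lambda e: e[0]):
        cost = max(1, len(text) // 4)
        if used + cost > budget:
            break
        kept.append(text)
        used += cost
    return kept
```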

Additional Context Options

  • Include Markdown bodies — Toggle whether to include the full narrative content of entities or just their frontmatter fields
  • System prompt — Customize the system prompt that frames the AI’s role and instructions
  • Rules context — Include project-specific writing rules, style guides, or constraints
  • Template guidance — Include the entity template schema to guide the AI’s output format

Working with Results

Viewing Results

After generation completes, results are displayed with:

  • The full generated text with Markdown formatting
  • Provider identification and model used
  • Token usage statistics
  • Generation time

Multi-Provider Comparison

When using collaborative mode, the Multi-Provider Results view shows all responses in a grid layout. You can:

  • Expand and collapse individual responses
  • Copy any response to clipboard
  • Select a response as the “winner”

Smart Merge

The Smart Merge Dialog lets you combine parts of different AI responses into a single output. You can select paragraphs or sections from different providers and merge them into a coherent final result.
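At its core, a merge of this kind is an ordered selection of paragraphs from different responses. The sketch below assumes paragraphs are separated by blank lines; it is an illustration of the idea, not the dialog's actual implementation.

```python
def smart_merge(responses: dict[str, str],
                picks: list[tuple[str, int]]) -> str:
    # picks is an ordered list of (provider, paragraph_index) selections;
    # paragraphs are assumed to be separated by blank lines.
    merged = []
    for provider, index in picks:
        paragraphs = [p for p in responses[provider].split("\n\n") if p.strip()]
        merged.append(paragraphs[index])
    return "\n\n".join(merged)
```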

Importing Results

Once you are happy with a generated result, you can import it back into your project:

  • Direct import — Apply the generated content to an existing entity’s fields
  • Smart Import — Intelligent parsing that maps generated content to the correct entity fields
  • New entity creation — Create a brand new entity from the generated content

The Smart Import Modal handles the mapping between AI output and entity schema, showing you a preview before applying changes.
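One plausible way such a mapping could work is matching Markdown section headings in the AI output against entity field names. The heading-to-field convention below (lowercase, spaces to underscores) is an assumption for illustration, not the modal's documented behavior.

```python
import re

def map_to_schema(generated: str, schema_fields: set[str]) -> dict:
    # Split the AI's Markdown output on "## Heading" sections and map each
    # heading to a matching entity field (lowercased, spaces -> underscores).
    mapping = {}
    for section in re.split(r"^##\s+", generated, flags=re.M)[1:]:
        title, _, body = section.partition("\n")
        key = title.strip().lower().replace(" ", "_")
        if key in schema_fields:
            mapping[key] = body.strip()
    return mapping
```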

Generation History

All AI generations are logged and accessible from the AI Generations page (sidebar). From here you can:

  • Browse past generation results with search and filtering
  • Filter by entity type, provider, or generation mode
  • Re-view or re-import any previous generation
  • Track generation statistics and API usage over time

Advanced Settings

In the AI Workshop, click Show Advanced to access:

  • Max tokens — Maximum length of the AI response (default: 2,048)
  • Temperature — Controls randomness (0 = deterministic, 1 = creative; default: 0.7)
  • Timeout — Maximum time to wait for a response (default: 600 seconds)
  • Fusion max tokens — Separate token limit for the fusion model pass
  • Vector search — Toggle RAG (Retrieval-Augmented Generation) for automatic context enrichment from the vector database

Tips for Effective AI Generation

  1. Be specific in your prompts — Instead of “write a character bio,” try “write a 500-word backstory for a corporate security specialist who was forced underground after a scandal.”

  2. Use entity context — Always select relevant entities as context. The AI performs much better when it understands the relationships and lore of your world.

  3. Start with collaborative mode — Compare responses from multiple providers to get a range of creative options, then pick the best or merge them.

  4. Adjust the context budget — For detailed generation, increase the budget to give the AI more world knowledge. For quick, focused prompts, keep it lower.

  5. Iterate with the chat panel — For entity-specific refinements, use the Entity Chat component block in the Visual Editor for conversational back-and-forth editing.

Next Steps