Admin Guide

This guide covers day-to-day administration of a StudioBrain deployment, including user management, tenant provisioning, health monitoring, backups, and upgrades.

User Management

Inviting Users

Users are invited by email through the backend API. The admin creates an invite, and the user receives a link to set up their account.

# Invite a user to a tenant
curl -X POST http://localhost:8201/api/auth/invite \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "newuser@example.com",
    "role": "editor",
    "tenant_id": "your-tenant-uuid"
  }'

Roles

Role     Permissions
owner    Full access. Can manage billing, delete tenant, manage all users
admin    Can invite/deactivate users, manage templates and rules, configure settings
editor   Can create, edit, and delete entities. Can use AI features
viewer   Read-only access to entities and assets. Cannot edit or generate content

Changing a User’s Role

curl -X PATCH http://localhost:8201/api/auth/users/{user_id}/role \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"role": "admin"}'

Deactivating a User

Deactivated users cannot log in, but their data is preserved:

curl -X POST http://localhost:8201/api/auth/users/{user_id}/deactivate \
  -H "Authorization: Bearer $ADMIN_TOKEN"

Tenant Provisioning

A tenant represents an organization or team in StudioBrain. Each tenant has isolated data, its own set of entity types, and independent usage tracking.

Creating a Tenant

curl -X POST http://localhost:8201/api/tenants \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Acme Games Studio",
    "slug": "acme-games",
    "plan": "indie",
    "max_users": 5,
    "brainbits_monthly": 500
  }'

Plan Assignment

Each tenant is assigned a plan that determines feature availability and resource limits:

Plan        Max Users   BrainBits/Month   Storage        Features
free        1           100               Local only     Desktop, BYO API keys
indie       1           500               25 GB cloud    Web access, Google Drive
team        Per seat    750/user pooled   100 GB cloud   Mobile, SSO, S3/Azure
enterprise  Unlimited   Unlimited         Unlimited      Dedicated instance

Feature Overrides

Individual tenants can have feature overrides applied beyond their plan defaults:

curl -X PATCH http://localhost:8201/api/tenants/{tenant_id}/features \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "features": {
      "web_access": true,
      "google_drive_sync": true,
      "custom_templates": true,
      "brainbits_monthly_override": 1000
    }
  }'

Feature Gating

StudioBrain uses a middleware-based feature gating system. Features are checked against the tenant’s plan and any per-tenant overrides.

How Feature Gates Work

  1. The feature_gate middleware reads the tenant’s plan from the JWT.
  2. It checks the config/plans.py definition for that plan’s included features.
  3. If the tenant has feature overrides in the features JSON column, those take precedence.
  4. Requests to gated endpoints return HTTP 403 if the feature is not available for the tenant’s plan.
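The resolution order above can be sketched in Python. The trimmed PLAN_FEATURES table and the tenant dict shape here are illustrative assumptions, not the actual backend internals:

```python
# Trimmed plan table for illustration (see config/plans.py for the real one).
PLAN_FEATURES = {
    "free":  {"web_access": False, "cloud_sync": False, "brainbits_monthly": 100},
    "indie": {"web_access": True,  "cloud_sync": True,  "brainbits_monthly": 500},
}

def resolve_feature(tenant: dict, feature: str):
    """Per-tenant overrides win; otherwise fall back to the plan default."""
    overrides = tenant.get("features") or {}
    if feature in overrides:
        return overrides[feature]
    return PLAN_FEATURES[tenant["plan"]].get(feature, False)

# A free-plan tenant with a web_access override gets web access,
# but still inherits the plan default for cloud_sync.
tenant = {"plan": "free", "features": {"web_access": True}}
resolve_feature(tenant, "web_access")  # True (override)
resolve_feature(tenant, "cloud_sync")  # False (plan default)
```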

Feature Gate Configuration

Feature definitions live in config/plans.py:

PLAN_FEATURES = {
    "free": {
        "web_access": False,
        "cloud_sync": False,
        "google_drive": False,
        "mobile_access": False,
        "sso": False,
        "custom_templates": True,
        "ai_generation": True,
        "brainbits_monthly": 100,
    },
    "indie": {
        "web_access": True,
        "cloud_sync": True,
        "google_drive": True,
        "mobile_access": False,
        "sso": False,
        "custom_templates": True,
        "ai_generation": True,
        "brainbits_monthly": 500,
    },
    # ... team, enterprise
}

Frontend Feature Gates

The frontend uses the useFeatureGate hook to conditionally render UI elements:

const { hasFeature, plan } = useFeatureGate();
 
if (!hasFeature('cloud_sync')) {
  return <UpgradePrompt feature="Cloud Sync" requiredPlan="indie" />;
}

The WebAccessGate component wraps pages that require web access (Indie+ plans).

Health Monitoring

Health Check Endpoints

Endpoint              Method   Purpose
/health               GET      Basic backend health check. Returns entity count and types
/api/services/health  GET      Detailed service health including database, AI, and storage status

Basic health check:

curl http://localhost:8201/health

Expected response:

{
  "status": "healthy",
  "entity_count": 236,
  "entity_types": ["character", "location", "brand", "district", "faction", "item", "job"]
}

Detailed service health:

curl http://localhost:8201/api/services/health

Expected response:

{
  "backend": "healthy",
  "database": "connected",
  "ai_service": "healthy",
  "storage": "accessible",
  "redis": "connected",
  "qdrant": "connected"
}

AI service health:

curl http://your-gpu-host:8202/health

Expected response:

{
  "status": "healthy",
  "cloud_mode": false,
  "gpu_available": true
}

Automated Monitoring

For production deployments, set up periodic health checks:

# Simple cron-based monitoring (add to crontab)
*/5 * * * * curl -sf http://localhost:8201/health > /dev/null || \
  echo "StudioBrain backend unhealthy" | mail -s "Alert" admin@example.com
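For richer checks than the cron one-liner, a small script can inspect the JSON body rather than just the HTTP status code. A sketch, assuming the /health response shown above; the base URL and the alert hook are placeholders for your environment:

```python
# Health poller sketch: checks the JSON body from /health, not just HTTP 200.
import json
import urllib.request

def is_healthy(body: dict) -> bool:
    """True when the backend reports status 'healthy'."""
    return body.get("status") == "healthy"

def check_health(base_url: str = "http://localhost:8201") -> bool:
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return is_healthy(json.load(resp))
    except (OSError, ValueError):
        return False  # unreachable or non-JSON counts as unhealthy

if __name__ == "__main__":
    if not check_health():
        print("ALERT: StudioBrain backend unhealthy")  # swap for mail/pager hook
```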

Log Management

Log Files

StudioBrain writes structured logs to three files:

Log File             Service             Contents
backend-api.log      Backend (FastAPI)   API requests, database operations, sync events, errors
ai-service.log       AI Service          Generation requests, model loading, GPU status, provider calls
frontend-studio.log  Frontend (Next.js)  Build output, client interactions, SSR errors

Log format: YYYY-MM-DD HH:MM:SS | LEVEL | source | message
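Given that format, a filter script outside the API can split log lines into their fields. A sketch; the regex simply mirrors the format string above:

```python
# Parse one line of "YYYY-MM-DD HH:MM:SS | LEVEL | source | message".
import re

LOG_LINE = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \| "
    r"(?P<level>[A-Z]+) \| (?P<source>.+?) \| (?P<message>.*)$"
)

def parse_log_line(line: str):
    """Return the four fields as a dict, or None for lines that don't match."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

parse_log_line("2026-02-24 10:15:32 | ERROR | sync | Drive sync failed")
# {'timestamp': '2026-02-24 10:15:32', 'level': 'ERROR',
#  'source': 'sync', 'message': 'Drive sync failed'}
```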

Log Management API

View and manage logs through the API:

# Get recent backend logs (last 100 lines)
curl "http://localhost:8201/api/services/logs/backend-api?lines=100"
 
# Get logs filtered by level
curl "http://localhost:8201/api/services/logs/backend-api?level=ERROR"
 
# Search logs
curl "http://localhost:8201/api/services/logs/backend-api?search=sync+failed"
 
# Clear a log file
curl -X POST "http://localhost:8201/api/services/logs/backend-api/clear"
 
# List available log files
curl "http://localhost:8201/api/services/logs/list"

Settings UI

The Settings > Service Control panel in the web interface provides real-time log viewing:

  • View Console buttons for each service (backend, AI, frontend)
  • Real-time streaming with 3-second polling
  • Search, filter by level, auto-scroll, pause/resume
  • Download and copy log contents
  • Clear log files

Docker Logs

Container-level logs are also available via Docker:

# Follow all container logs
docker compose logs -f
 
# Follow a specific container
docker logs -f studiobrain-backend
 
# Last 200 lines
docker logs --tail 200 studiobrain-backend

Backup Procedures

SQLite (Desktop/Single-User Mode)

SQLite backups are simple file copies. Stop the backend or use the SQLite .backup command:

# File copy (stop backend first for consistency)
docker compose stop backend
cp /path/to/db/city_brains.db /path/to/backups/city_brains_$(date +%Y%m%d).db
docker compose start backend
 
# Or use SQLite backup command (no downtime)
docker exec studiobrain-backend \
  sqlite3 /data/db/city_brains.db ".backup '/data/db/backup_$(date +%Y%m%d).db'"

PostgreSQL

For multi-tenant deployments with separate Auth and Content databases:

# Auth database (contains PII -- store securely)
pg_dump -h 10.0.0.100 -U studiobrain_auth -d studiobrain_auth \
  -F c -f auth_backup_$(date +%Y%m%d).dump
 
# Content database
pg_dump -h 10.0.0.101 -U studiobrain_app -d studiobrain_content \
  -F c -f content_backup_$(date +%Y%m%d).dump
 
# Restore from backup
pg_restore -h 10.0.0.100 -U studiobrain_auth -d studiobrain_auth \
  -c auth_backup_20260224.dump

For production environments, enable WAL archiving for continuous backup with point-in-time recovery (PITR):

# postgresql.conf
wal_level = replica
archive_mode = on
archive_command = 'cp %p /backups/wal/%f'

Qdrant (Vector Store)

Export collections using the Qdrant snapshot API:

# Create a snapshot of all collections
curl -X POST "http://10.0.0.102:6333/snapshots"
 
# List snapshots
curl "http://10.0.0.102:6333/snapshots"
 
# Download a snapshot
curl "http://10.0.0.102:6333/snapshots/{snapshot_name}" -o qdrant_backup.snapshot

Redis

Redis persists data via RDB snapshots by default:

# Trigger an immediate snapshot
redis-cli -h 10.0.0.103 -a your_password BGSAVE
 
# Copy the RDB file
cp /var/lib/redis/dump.rdb /path/to/backups/redis_$(date +%Y%m%d).rdb

Content Files

Entity markdown and asset files should be backed up independently:

# Rsync content directory
rsync -av /path/to/content/ /path/to/backups/content_$(date +%Y%m%d)/
 
# If using NFS from a NAS with ZFS, leverage ZFS snapshots
zfs snapshot tank/studiobrain/content@$(date +%Y%m%d)

Backup Schedule Recommendations

Component         Frequency                    Method                   Retention
Auth DB           Daily full, continuous WAL   pg_dump + WAL archiving  90 days
Content DB        Daily full, hourly WAL       pg_dump + WAL archiving  30 days
Qdrant            Daily                        Snapshot API             7 days
Redis             Hourly                       RDB snapshot             24 hours
Content files     Hourly                       rsync or ZFS snapshots   30 days
SQLite (desktop)  Before upgrades              File copy                30 days
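To enforce the retention column, a pruning script can compare the date stamp in each backup filename against the window. A sketch that assumes the _YYYYMMDD filename convention used in the backup commands above:

```python
# Decide whether a dated backup file has aged past its retention window.
# Assumes filenames carry a _YYYYMMDD stamp, as in the backup examples.
import re
from datetime import date

DATED = re.compile(r"_(\d{8})\.")

def expired(filename: str, retention_days: int, today: date) -> bool:
    m = DATED.search(filename)
    if not m:
        return False  # leave undated files alone
    stamp = m.group(1)
    backup_date = date(int(stamp[:4]), int(stamp[4:6]), int(stamp[6:8]))
    return (today - backup_date).days > retention_days

# Content DB backups keep 30 days:
expired("content_backup_20260120.dump", 30, date(2026, 3, 1))  # True (40 days old)
expired("content_backup_20260228.dump", 30, date(2026, 3, 1))  # False
```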

Upgrade Procedure

Standard Upgrade

# 1. Back up before upgrading
docker exec studiobrain-backend \
  sqlite3 /data/db/city_brains.db ".backup '/data/db/pre_upgrade_backup.db'"
 
# 2. Pull latest code
cd /path/to/studiobrain
git pull origin main
 
# 3. Rebuild and restart
cd docker
docker compose down
docker compose build
docker compose up -d
 
# 4. Verify health
curl http://localhost:8201/health

Backend-Only Upgrade

When changes only affect the backend (no frontend UI changes):

cd /path/to/studiobrain
git pull origin main
cd docker
docker compose build backend
docker compose up -d backend

AI Service Upgrade

On the GPU host:

cd /opt/studiobrain-ai/app
git pull origin main
 
# Restart to pick up code changes
docker restart studiobrain-ai
 
# If requirements changed, rebuild the image and recreate the container
docker compose build studiobrain-ai
docker compose up -d studiobrain-ai

Post-Upgrade Verification

After any upgrade, verify:

  1. Backend health endpoint returns "healthy"
  2. Frontend loads without errors
  3. Entity list pages display data correctly
  4. AI service responds (if deployed)
  5. Check logs for any startup errors: docker logs studiobrain-backend --tail 50

Template Management

Templates are the mechanism by which entity types are defined in StudioBrain. Administrators manage templates by editing files in the _Templates/Standard/ directory within the content path.

How Templates Work

  1. Templates are entity type definitions. Each template file defines a single entity type with its fields, types, and defaults.
  2. Templates are stored as entities. In the database, templates are stored with type='Template' and are tenant-scoped.
  3. Adding a template creates a new entity type. Place a new MYTYPE_TEMPLATE.md file in _Templates/Standard/ and restart the backend. The new type becomes immediately available in the API and, after a frontend rebuild, in the UI.
  4. Schema-driven UI. The frontend reads template definitions and generates Zod validation schemas at runtime. No code changes are needed to support new fields or entity types.
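The field definitions can be read straight from a template's frontmatter. This sketch hand-rolls the parsing for the simple key/default lines shown in the examples; a real implementation would use a YAML library:

```python
# Extract {field: default} from the YAML frontmatter of a template file.
# Handles only the simple defaults seen in the examples ("", [], numbers).
def parse_frontmatter(text: str) -> dict:
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        key, _, raw = line.partition(":")
        raw = raw.strip()
        if raw == '""':
            fields[key.strip()] = ""
        elif raw == "[]":
            fields[key.strip()] = []
        else:
            try:
                fields[key.strip()] = int(raw)
            except ValueError:
                fields[key.strip()] = raw
    return fields

template = '---\ntop_speed: 0\nfuel_type: ""\nassociated_characters: []\n---\n# Vehicle Template\n'
parse_frontmatter(template)
# {'top_speed': 0, 'fuel_type': '', 'associated_characters': []}
```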

Adding a New Entity Type

  1. Create the template file at _Templates/Standard/VEHICLE_TEMPLATE.md:

---
entity_id: ""
entity_name: ""
vehicle_type: ""
manufacturer: ""
top_speed: 0
fuel_type: ""
associated_characters: []
---
 
# Vehicle Template
 
Template for defining vehicles in the game world.

  2. Restart the backend to pick up the new template:

docker compose restart backend

  3. Rebuild the frontend to generate TypeScript types:

docker compose build frontend
docker compose up -d frontend

  4. The new entity type is now available at GET /api/entity/vehicle and in the UI sidebar.

Modifying an Existing Template

Edit the YAML frontmatter in the template file to add, rename, or remove fields. Existing entities retain their data. New fields will use the default values specified in the template.
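The merge behavior can be sketched as: the template defaults form the base, and the entity's stored values take precedence. The function and field names here are illustrative:

```python
# New template fields are filled from defaults; stored entity data is kept.
def apply_template_defaults(entity: dict, template_defaults: dict) -> dict:
    merged = dict(template_defaults)  # start from the template's defaults
    merged.update(entity)             # existing entity values win
    return merged

defaults = {"entity_name": "", "vehicle_type": "", "top_speed": 0}  # top_speed added later
entity = {"entity_name": "Hoverbike", "vehicle_type": "bike"}
apply_template_defaults(entity, defaults)
# {'entity_name': 'Hoverbike', 'vehicle_type': 'bike', 'top_speed': 0}
```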

Viewing Registered Templates

curl http://localhost:8201/api/templates

This returns all registered templates with their field definitions, which the frontend uses to build forms and validation schemas.

Service Control

Restarting Services

# Restart all services
docker compose restart
 
# Restart individual service
docker compose restart backend
docker compose restart frontend
 
# Full stop and start (for configuration changes)
docker compose down
docker compose up -d

Checking Service Status

# Container status
docker compose ps
 
# Resource usage
docker stats studiobrain-backend studiobrain-frontend studiobrain-caddy

Entering a Container

For debugging, you can shell into a running container:

# Backend container
docker exec -it studiobrain-backend bash
 
# Frontend container
docker exec -it studiobrain-frontend sh