# Admin Guide
This guide covers day-to-day administration of a StudioBrain deployment, including user management, tenant provisioning, health monitoring, backups, and upgrades.
## User Management

### Inviting Users
Users are invited by email through the backend API. The admin creates an invite, and the user receives a link to set up their account.
```bash
# Invite a user to a tenant
curl -X POST http://localhost:8201/api/auth/invite \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "newuser@example.com",
    "role": "editor",
    "tenant_id": "your-tenant-uuid"
  }'
```

### Roles
| Role | Permissions |
|---|---|
| owner | Full access. Can manage billing, delete the tenant, manage all users |
| admin | Can invite/deactivate users, manage templates and rules, configure settings |
| editor | Can create, edit, and delete entities. Can use AI features |
| viewer | Read-only access to entities and assets. Cannot edit or generate content |
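If the roles are treated as a strict hierarchy (each role including the permissions of those below it), a permission check reduces to an ordered comparison. The sketch below is illustrative only; StudioBrain's actual permission model may be more granular than a simple ordering:

```python
# Role hierarchy, least to most privileged.
# Illustrative sketch only -- not StudioBrain's actual permission code.
ROLE_ORDER = ["viewer", "editor", "admin", "owner"]

def has_at_least(user_role: str, required_role: str) -> bool:
    """True if user_role sits at or above required_role in the hierarchy."""
    return ROLE_ORDER.index(user_role) >= ROLE_ORDER.index(required_role)

# An editor can edit entities, but inviting users requires admin or above
print(has_at_least("editor", "editor"))  # True
print(has_at_least("editor", "admin"))   # False
```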
### Changing a User’s Role
```bash
curl -X PATCH http://localhost:8201/api/auth/users/{user_id}/role \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"role": "admin"}'
```

### Deactivating a User
Deactivated users cannot log in but their data is preserved:
```bash
curl -X POST http://localhost:8201/api/auth/users/{user_id}/deactivate \
  -H "Authorization: Bearer $ADMIN_TOKEN"
```

## Tenant Provisioning
A tenant represents an organization or team in StudioBrain. Each tenant has isolated data, its own set of entity types, and independent usage tracking.
### Creating a Tenant
```bash
curl -X POST http://localhost:8201/api/tenants \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Acme Games Studio",
    "slug": "acme-games",
    "plan": "indie",
    "max_users": 5,
    "brainbits_monthly": 500
  }'
```

### Plan Assignment
Each tenant is assigned a plan that determines feature availability and resource limits:
| Plan | Max Users | BrainBits/Month | Storage | Features |
|---|---|---|---|---|
| free | 1 | 100 | Local only | Desktop, BYO API keys |
| indie | 1 | 500 | 25 GB cloud | Web access, Google Drive |
| team | Per seat | 750/user, pooled | 100 GB cloud | Mobile, SSO, S3/Azure |
| enterprise | Unlimited | Unlimited | Unlimited | Dedicated instance |
### Feature Overrides
Individual tenants can have feature overrides applied beyond their plan defaults:
```bash
curl -X PATCH http://localhost:8201/api/tenants/{tenant_id}/features \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "features": {
      "web_access": true,
      "google_drive_sync": true,
      "custom_templates": true,
      "brainbits_monthly_override": 1000
    }
  }'
```

## Feature Gating
StudioBrain uses a middleware-based feature gating system. Features are checked against the tenant’s plan and any per-tenant overrides.
### How Feature Gates Work
- The `feature_gate` middleware reads the tenant’s plan from the JWT.
- It checks the `config/plans.py` definition for that plan’s included features.
- If the tenant has feature overrides in the `features` JSON column, those take precedence.
- Requests to gated endpoints return HTTP 403 if the feature is not available for the tenant’s plan.
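The resolution order described above (per-tenant override first, then plan default) can be sketched as a plain function. This is a simplified model for illustration; the real middleware also parses the JWT and converts a failed check into an HTTP 403:

```python
# Simplified sketch of feature-gate resolution. The plan table mirrors
# config/plans.py; `overrides` models the tenant's `features` JSON column.
PLAN_FEATURES = {
    "free":  {"web_access": False, "cloud_sync": False},
    "indie": {"web_access": True,  "cloud_sync": True},
}

def feature_enabled(plan: str, overrides: dict, feature: str) -> bool:
    if feature in overrides:          # per-tenant override takes precedence
        return bool(overrides[feature])
    return PLAN_FEATURES.get(plan, {}).get(feature, False)  # plan default

# A free tenant with a web_access override passes the gate;
# without one, the gated endpoint would return 403.
print(feature_enabled("free", {"web_access": True}, "web_access"))  # True
print(feature_enabled("free", {}, "web_access"))                    # False
```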
### Feature Gate Configuration
Feature definitions live in `config/plans.py`:
```python
PLAN_FEATURES = {
    "free": {
        "web_access": False,
        "cloud_sync": False,
        "google_drive": False,
        "mobile_access": False,
        "sso": False,
        "custom_templates": True,
        "ai_generation": True,
        "brainbits_monthly": 100,
    },
    "indie": {
        "web_access": True,
        "cloud_sync": True,
        "google_drive": True,
        "mobile_access": False,
        "sso": False,
        "custom_templates": True,
        "ai_generation": True,
        "brainbits_monthly": 500,
    },
    # ... team, enterprise
}
```

### Frontend Feature Gates
The frontend uses the `useFeatureGate` hook to conditionally render UI elements:

```tsx
const { hasFeature, plan } = useFeatureGate();

if (!hasFeature('cloud_sync')) {
  return <UpgradePrompt feature="Cloud Sync" requiredPlan="indie" />;
}
```

The `WebAccessGate` component wraps pages that require web access (Indie+ plans).
## Health Monitoring

### Health Check Endpoints
| Endpoint | Method | Purpose |
|---|---|---|
| /health | GET | Basic backend health check. Returns entity count and types |
| /api/services/health | GET | Detailed service health including database, AI, and storage status |
Basic health check:
```bash
curl http://localhost:8201/health
```

Expected response:
```json
{
  "status": "healthy",
  "entity_count": 236,
  "entity_types": ["character", "location", "brand", "district", "faction", "item", "job"]
}
```

Detailed service health:
```bash
curl http://localhost:8201/api/services/health
```

Expected response:
```json
{
  "backend": "healthy",
  "database": "connected",
  "ai_service": "healthy",
  "storage": "accessible",
  "redis": "connected",
  "qdrant": "connected"
}
```

AI service health:
```bash
curl http://your-gpu-host:8202/health
```

Expected response:
```json
{
  "status": "healthy",
  "cloud_mode": false,
  "gpu_available": true
}
```

### Automated Monitoring
For production deployments, set up periodic health checks:
```bash
# Simple cron-based monitoring (add to crontab)
*/5 * * * * curl -sf http://localhost:8201/health > /dev/null || \
  echo "StudioBrain backend unhealthy" | mail -s "Alert" admin@example.com
```

## Log Management
### Log Files
StudioBrain writes structured logs to three files:
| Log File | Service | Contents |
|---|---|---|
| backend-api.log | Backend (FastAPI) | API requests, database operations, sync events, errors |
| ai-service.log | AI Service | Generation requests, model loading, GPU status, provider calls |
| frontend-studio.log | Frontend (Next.js) | Build output, client interactions, SSR errors |
Log format: `YYYY-MM-DD HH:MM:SS | LEVEL | source | message`
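Because the format is fixed and pipe-delimited, a log line splits cleanly into its four fields. A small parser sketch for ad-hoc analysis (the field names follow the format above; `parse_log_line` is an illustrative helper, not part of StudioBrain):

```python
# Parse a StudioBrain log line of the form:
#   YYYY-MM-DD HH:MM:SS | LEVEL | source | message
def parse_log_line(line: str) -> dict:
    # maxsplit=3 keeps any " | " inside the message intact
    timestamp, level, source, message = (p.strip() for p in line.split(" | ", 3))
    return {"timestamp": timestamp, "level": level,
            "source": source, "message": message}

entry = parse_log_line("2026-02-24 14:03:11 | ERROR | backend.sync | sync failed: timeout")
print(entry["level"])    # ERROR
print(entry["message"])  # sync failed: timeout
```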
### Log Management API
View and manage logs through the API:
```bash
# Get recent backend logs (last 100 lines)
curl "http://localhost:8201/api/services/logs/backend-api?lines=100"

# Get logs filtered by level
curl "http://localhost:8201/api/services/logs/backend-api?level=ERROR"

# Search logs
curl "http://localhost:8201/api/services/logs/backend-api?search=sync+failed"

# Clear a log file
curl -X POST "http://localhost:8201/api/services/logs/backend-api/clear"

# List available log files
curl "http://localhost:8201/api/services/logs/list"
```

### Settings UI
The Settings > Service Control panel in the web interface provides real-time log viewing:
- View Console buttons for each service (backend, AI, frontend)
- Real-time streaming with 3-second polling
- Search, filter by level, auto-scroll, pause/resume
- Download and copy log contents
- Clear log files
### Docker Logs
Container-level logs are also available via Docker:
```bash
# Follow all container logs
docker compose logs -f

# Follow a specific container
docker logs -f studiobrain-backend

# Last 200 lines
docker logs --tail 200 studiobrain-backend
```

## Backup Procedures
### SQLite (Desktop/Single-User Mode)
SQLite backups are simple file copies. Stop the backend or use the SQLite `.backup` command:
```bash
# File copy (stop backend first for consistency)
docker compose stop backend
cp /path/to/db/city_brains.db /path/to/backups/city_brains_$(date +%Y%m%d).db
docker compose start backend

# Or use the SQLite backup command (no downtime)
docker exec studiobrain-backend \
  sqlite3 /data/db/city_brains.db ".backup '/data/db/backup_$(date +%Y%m%d).db'"
```

### PostgreSQL
For multi-tenant deployments with separate Auth and Content databases:
```bash
# Auth database (contains PII -- store securely)
pg_dump -h 10.0.0.100 -U studiobrain_auth -d studiobrain_auth \
  -F c -f auth_backup_$(date +%Y%m%d).dump

# Content database
pg_dump -h 10.0.0.101 -U studiobrain_app -d studiobrain_content \
  -F c -f content_backup_$(date +%Y%m%d).dump

# Restore from backup
pg_restore -h 10.0.0.100 -U studiobrain_auth -d studiobrain_auth \
  -c auth_backup_20260224.dump
```

For production environments, enable WAL archiving for continuous backup with point-in-time recovery (PITR):
```ini
# postgresql.conf
wal_level = replica
archive_mode = on
archive_command = 'cp %p /backups/wal/%f'
```

### Qdrant (Vector Store)
Export collections using the Qdrant snapshot API:
```bash
# Create a snapshot of all collections
curl -X POST "http://10.0.0.102:6333/snapshots"

# List snapshots
curl "http://10.0.0.102:6333/snapshots"

# Download a snapshot
curl "http://10.0.0.102:6333/snapshots/{snapshot_name}" -o qdrant_backup.snapshot
```

### Redis
Redis persists data via RDB snapshots by default:
```bash
# Trigger an immediate snapshot
redis-cli -h 10.0.0.103 -a your_password BGSAVE

# Copy the RDB file once the background save completes
cp /var/lib/redis/dump.rdb /path/to/backups/redis_$(date +%Y%m%d).rdb
```

### Content Files
Entity markdown and asset files should be backed up independently:
```bash
# Rsync content directory
rsync -av /path/to/content/ /path/to/backups/content_$(date +%Y%m%d)/

# If using NFS from a NAS with ZFS, leverage ZFS snapshots
zfs snapshot tank/studiobrain/content@$(date +%Y%m%d)
```

### Backup Schedule Recommendations
| Component | Frequency | Method | Retention |
|---|---|---|---|
| Auth DB | Daily full, continuous WAL | pg_dump + WAL archiving | 90 days |
| Content DB | Daily full, hourly WAL | pg_dump + WAL archiving | 30 days |
| Qdrant | Daily | Snapshot API | 7 days |
| Redis | Hourly | RDB snapshot | 24 hours |
| Content files | Hourly | rsync or ZFS snapshots | 30 days |
| SQLite (desktop) | Before upgrades | File copy | 30 days |
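The retention column can be enforced with a small pruning pass over dated backup filenames. A sketch assuming the `name_YYYYMMDD` naming used in the examples above (adapt the pattern to your own layout before deleting anything):

```python
from datetime import datetime, timedelta
import re

def expired_backups(filenames, retention_days, today=None):
    """Return filenames whose embedded YYYYMMDD date is past retention."""
    today = today or datetime.now()
    cutoff = today - timedelta(days=retention_days)
    expired = []
    for name in filenames:
        m = re.search(r"(\d{8})", name)  # first YYYYMMDD run in the name
        if m and datetime.strptime(m.group(1), "%Y%m%d") < cutoff:
            expired.append(name)
    return expired

files = ["content_backup_20260101.dump", "content_backup_20260223.dump"]
print(expired_backups(files, 30, today=datetime(2026, 2, 24)))
# ['content_backup_20260101.dump']
```

Run it in dry-run mode (print, don't delete) until you trust the pattern match.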
## Upgrade Procedure

### Standard Upgrade
```bash
# 1. Back up before upgrading
docker exec studiobrain-backend \
  sqlite3 /data/db/city_brains.db ".backup '/data/db/pre_upgrade_backup.db'"

# 2. Pull latest code
cd /path/to/studiobrain
git pull origin main

# 3. Rebuild and restart
cd docker
docker compose down
docker compose build
docker compose up -d

# 4. Verify health
curl http://localhost:8201/health
```

### Backend-Only Upgrade
When changes only affect the backend (no frontend UI changes):
```bash
cd /path/to/studiobrain
git pull origin main
cd docker
docker compose build backend
docker compose up -d backend
```

### AI Service Upgrade
On the GPU host:
```bash
cd /opt/studiobrain-ai/app
git pull origin main

# Restart to pick up code changes
docker restart studiobrain-ai

# If requirements changed, rebuild the image (a plain restart will not
# reinstall dependencies)
docker compose up -d --build studiobrain-ai
```

### Post-Upgrade Verification
After any upgrade, verify:
- Backend health endpoint returns `"healthy"`
- Frontend loads without errors
- Entity list pages display data correctly
- AI service responds (if deployed)
- Check logs for any startup errors: `docker logs studiobrain-backend --tail 50`
## Template Management
Templates are the mechanism by which entity types are defined in StudioBrain. Administrators manage templates by editing files in the `_Templates/Standard/` directory within the content path.
### How Templates Work
- **Templates are entity type definitions.** Each template file defines a single entity type with its fields, types, and defaults.
- **Templates are stored as entities.** In the database, templates are stored with `type='Template'` and are tenant-scoped.
- **Adding a template creates a new entity type.** Place a new `MYTYPE_TEMPLATE.md` file in `_Templates/Standard/` and restart the backend. The new type becomes immediately available in the API and, after a frontend rebuild, in the UI.
- **Schema-driven UI.** The frontend reads template definitions and generates Zod validation schemas at runtime. No code changes are needed to support new fields or entity types.
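The schema-driven behavior rests on inferring field types from the frontmatter defaults. A simplified sketch of that inference, hand-parsing the flat `key: value` frontmatter shown in this guide (the actual backend likely uses a YAML library and richer type rules):

```python
# Infer a field-type map from a template's YAML frontmatter.
# Simplified: handles only the flat key/value defaults used in this guide.
def infer_fields(template_text: str) -> dict:
    _, frontmatter, _ = template_text.split("---", 2)
    fields = {}
    for line in frontmatter.strip().splitlines():
        key, _, default = line.partition(":")
        default = default.strip()
        if default == "[]":
            fields[key.strip()] = "list"
        elif default.lstrip("-").isdigit():
            fields[key.strip()] = "number"
        else:
            fields[key.strip()] = "string"
    return fields

template = '---\nentity_name: ""\ntop_speed: 0\nassociated_characters: []\n---\n# Vehicle'
print(infer_fields(template))
# {'entity_name': 'string', 'top_speed': 'number', 'associated_characters': 'list'}
```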
### Adding a New Entity Type
1. Create the template file `_Templates/Standard/VEHICLE_TEMPLATE.md`:

   ```markdown
   ---
   entity_id: ""
   entity_name: ""
   vehicle_type: ""
   manufacturer: ""
   top_speed: 0
   fuel_type: ""
   associated_characters: []
   ---

   # Vehicle Template

   Template for defining vehicles in the game world.
   ```

2. Restart the backend to pick up the new template:

   ```bash
   docker compose restart backend
   ```

3. Rebuild the frontend to generate TypeScript types:

   ```bash
   docker compose build frontend
   docker compose up -d frontend
   ```

4. The new entity type is now available at `GET /api/entity/vehicle` and in the UI sidebar.
### Modifying an Existing Template
Edit the YAML frontmatter in the template file to add, rename, or remove fields. Existing entities retain their data. New fields will use the default values specified in the template.
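That behavior can be pictured as a merge in which saved values win and fields newly added to the template fall back to their defaults (an illustrative sketch, not the backend's actual implementation):

```python
def apply_template_defaults(entity: dict, template_defaults: dict) -> dict:
    """Existing values win; fields newly added to the template get defaults."""
    return {**template_defaults, **entity}

defaults = {"entity_name": "", "top_speed": 0, "fuel_type": ""}
existing = {"entity_name": "Interceptor", "top_speed": 240}  # saved before fuel_type existed
print(apply_template_defaults(existing, defaults))
# {'entity_name': 'Interceptor', 'top_speed': 240, 'fuel_type': ''}
```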
### Viewing Registered Templates
```bash
curl http://localhost:8201/api/templates
```

This returns all registered templates with their field definitions, which the frontend uses to build forms and validation schemas.
## Service Control

### Restarting Services
```bash
# Restart all services
docker compose restart

# Restart individual services
docker compose restart backend
docker compose restart frontend

# Full stop and start (for configuration changes)
docker compose down
docker compose up -d
```

### Checking Service Status
```bash
# Container status
docker compose ps

# Resource usage
docker stats studiobrain-backend studiobrain-frontend studiobrain-caddy
```

### Entering a Container
For debugging, you can shell into a running container:
```bash
# Backend container
docker exec -it studiobrain-backend bash

# Frontend container
docker exec -it studiobrain-frontend sh
```