Infrastructure Overview

This page provides a high-level summary of the StudioBrain service architecture.

Three-Service Stack

StudioBrain is built as three cooperating services:

Service     Technology                        Purpose
Frontend    Next.js 15, React 19, TypeScript  User interface, real-time collaboration
Backend     FastAPI, SQLAlchemy, Python 3.12  Entity storage, auth, file sync, billing
AI Service  FastAPI, PyTorch, multi-provider  AI generation, RAG, embeddings

Data Architecture

StudioBrain follows a markdown-first principle: all entity data is stored as markdown files with YAML frontmatter. The database is a cache that can be rebuilt from files at any time.
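To make the markdown-first principle concrete, here is a minimal sketch of an entity file and the kind of parsing a cache rebuild would rely on. The field names (`title`, `tags`) and the `parse_entity` helper are illustrative assumptions, not StudioBrain's actual schema or code.

```python
def parse_entity(text: str) -> tuple[dict, str]:
    """Split a markdown document into YAML frontmatter fields and body.

    Hand-rolled key: value parsing keeps the sketch dependency-free;
    a real implementation would use a full YAML parser.
    """
    _, frontmatter, body = text.split("---\n", 2)
    fields = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields, body.strip()

entity_file = """---
title: Song Sketch 42
tags: demo
---
Verse ideas go here.
"""

meta, body = parse_entity(entity_file)
print(meta["title"])  # Song Sketch 42
```

Because every entity round-trips through this file format, dropping the database and re-parsing the files reproduces the same records, which is what makes the database safe to treat as a cache.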

Deployment Modes

Mode         Storage                                                    Multi-tenant
Desktop      SQLite + local files                                       No (single user)
Self-Hosted  PostgreSQL + NFS/local                                     Optional
Cloud        PostgreSQL (auth) + PostgreSQL (content) + Qdrant + Redis  Yes (full RLS)
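The three modes in the table above can be thought of as fixed configuration profiles. The sketch below expresses them as data; the class name, field names, and string values are illustrative assumptions, not StudioBrain's actual configuration format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentMode:
    """One row of the deployment-mode table, as configuration data."""
    database: str
    files: str
    multi_tenant: str

MODES = {
    "desktop": DeploymentMode("SQLite", "local files", "no"),
    "self-hosted": DeploymentMode("PostgreSQL", "NFS or local", "optional"),
    "cloud": DeploymentMode("PostgreSQL (auth + content) + Qdrant + Redis",
                            "object storage", "yes (full RLS)"),
}

print(MODES["desktop"].multi_tenant)  # no
```

Keeping the modes as data rather than scattered conditionals means the same codebase can boot in any of the three profiles.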

AI Providers

The AI service supports multiple providers with automatic routing:

  • OpenAI (GPT-4o, o1, etc.)
  • Anthropic (Claude)
  • Google (Gemini)
  • Grok (xAI)
  • Local models via Ollama

Providers are selected per request, based on the user's model preference, BrainBits balance, and provider availability.
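The routing decision can be sketched as a simple fallback loop. The availability flags, per-request costs, and selection order below are illustrative assumptions, not the service's actual routing logic.

```python
# Hypothetical provider table: availability plus a per-request BrainBits cost.
PROVIDERS = {
    "openai": {"available": True, "cost": 5},
    "anthropic": {"available": True, "cost": 4},
    "google": {"available": False, "cost": 3},
    "ollama": {"available": True, "cost": 0},  # local model, no BrainBits cost
}

def select_provider(preferred: str, balance: int) -> str:
    """Use the preferred provider if it is up and affordable, else fall back."""
    order = [preferred] + [p for p in PROVIDERS if p != preferred]
    for name in order:
        info = PROVIDERS[name]
        if info["available"] and info["cost"] <= balance:
            return name
    raise RuntimeError("no provider available for this request")

print(select_provider("google", balance=10))  # google is down -> falls back
```

A local Ollama model with zero cost gives the loop a natural last resort when the BrainBits balance is exhausted.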

Cloud Storage Providers

Provider          Tier    Use Case
Local filesystem  All     Default, desktop mode
Google Drive      Indie+  Personal cloud backup and sync
S3 / Azure Blob   Team+   Team storage, enterprise
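Supporting several storage providers usually means coding against one narrow interface. The sketch below shows one way to express that in Python; the `StorageProvider` protocol and the in-memory stand-in are illustrative assumptions, not StudioBrain's actual abstraction.

```python
from typing import Protocol

class StorageProvider(Protocol):
    """Interface each backend (local FS, Google Drive, S3, Azure Blob) would satisfy."""
    def write(self, path: str, data: bytes) -> None: ...
    def read(self, path: str) -> bytes: ...

class InMemoryStorage:
    """Stand-in backend so the example runs without credentials or a real bucket."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def write(self, path: str, data: bytes) -> None:
        self._blobs[path] = data

    def read(self, path: str) -> bytes:
        return self._blobs[path]

store: StorageProvider = InMemoryStorage()
store.write("songs/demo.md", b"# Demo")
print(store.read("songs/demo.md"))  # b'# Demo'
```

Because callers only see the two-method interface, switching a workspace from local files to S3 is a configuration change rather than a code change.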

Internal infrastructure topology, network configuration, database schemas, and security hardening are documented in the internal architecture guide (requires authentication).