
Database Setup

Configure and manage PostgreSQL for Brainz Lab.

PostgreSQL Requirements

  • PostgreSQL 15 or higher
  • 10GB+ storage (depends on log volume)
  • Recommended: SSD storage

Using Docker

The included PostgreSQL container works for most deployments:
postgres:
  image: postgres:16-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data
  environment:
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

Using External PostgreSQL

Connection String

DATABASE_URL=postgres://user:password@host:5432/database
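
If the password contains reserved characters such as `@`, `:`, or `/`, they must be percent-encoded before being placed in the URL. A minimal bash helper (the `urlencode` name and the sample password are illustrative):

```shell
#!/bin/bash
# Percent-encode a string for safe use inside DATABASE_URL.
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;                # unreserved: pass through
      *) printf -v c '%%%02X' "'$c"; out+="$c" ;;  # everything else: %XX
    esac
  done
  printf '%s\n' "$out"
}

urlencode 'p@ss:word/1'   # → p%40ss%3Aword%2F1
```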

Create Databases

CREATE DATABASE brainzlab_platform;
CREATE DATABASE brainzlab_recall;
CREATE DATABASE brainzlab_reflex;
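
The services can connect as the `postgres` superuser, but a dedicated role is safer. A sketch, assuming a role named `brainzlab` (both the name and the password are placeholders):

```sql
-- Hypothetical dedicated role; replace the password before use
CREATE ROLE brainzlab WITH LOGIN PASSWORD 'change-me';
GRANT ALL PRIVILEGES ON DATABASE brainzlab_platform TO brainzlab;
GRANT ALL PRIVILEGES ON DATABASE brainzlab_recall TO brainzlab;
GRANT ALL PRIVILEGES ON DATABASE brainzlab_reflex TO brainzlab;
```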

Run Migrations

docker compose exec platform rails db:migrate
docker compose exec recall rails db:migrate
docker compose exec reflex rails db:migrate
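
To confirm the migrations ran, Rails records every applied migration in the `schema_migrations` table of each database:

```sql
-- Most recently applied migrations (run against each database)
SELECT version FROM schema_migrations ORDER BY version DESC LIMIT 5;
```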

Performance Tuning

postgresql.conf

For a server with 16GB RAM:
# Memory
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 256MB
maintenance_work_mem = 1GB

# Connections
max_connections = 200

# WAL
wal_buffers = 64MB
min_wal_size = 1GB
max_wal_size = 4GB

# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200
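
These values can also be applied from a live session with `ALTER SYSTEM`, which writes them to `postgresql.auto.conf`. Reloadable settings such as the planner parameters take effect after `pg_reload_conf()`; memory and connection settings such as `shared_buffers` and `max_connections` still require a restart:

```sql
-- Reloadable setting: applied on the next reload
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();

-- Restart-only settings are recorded but only take effect after a restart
ALTER SYSTEM SET shared_buffers = '4GB';
```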

Indexes

Key indexes are created automatically. For high-volume deployments, add:
-- Recall logs by timestamp
CREATE INDEX CONCURRENTLY idx_logs_timestamp
  ON logs (timestamp DESC);

-- Recall logs by level and timestamp
CREATE INDEX CONCURRENTLY idx_logs_level_timestamp
  ON logs (level, timestamp DESC);

-- Reflex errors by status
CREATE INDEX CONCURRENTLY idx_errors_status_updated
  ON errors (status, updated_at DESC);
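
`CREATE INDEX CONCURRENTLY` avoids blocking writes, but it cannot run inside a transaction block. After some traffic has accumulated, confirm the new indexes are actually being used:

```sql
-- idx_scan = 0 suggests an index that is not earning its keep
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname IN ('logs', 'errors')
ORDER BY idx_scan DESC;
```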

Backups

Automated Backups

#!/bin/bash
# Daily backup script (the shebang must be the first line)
DATE=$(date +%Y%m%d)
pg_dump -h localhost -U postgres brainzlab_platform > backup_platform_$DATE.sql
pg_dump -h localhost -U postgres brainzlab_recall > backup_recall_$DATE.sql
pg_dump -h localhost -U postgres brainzlab_reflex > backup_reflex_$DATE.sql
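
To schedule the script and prune old dumps, a crontab sketch (the script path and the 7-day retention are illustrative):

```
# m h dom mon dow  command
0 2 * * *  /usr/local/bin/brainzlab_backup.sh
0 4 * * *  find /backups -name 'backup_*.sql' -mtime +7 -delete
```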

Continuous Archiving

For point-in-time recovery:
# postgresql.conf
archive_mode = on
archive_command = 'cp %p /backups/wal/%f'
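
Note that a plain `cp` overwrites an existing archive file; the PostgreSQL manual suggests a guard such as `archive_command = 'test ! -f /backups/wal/%f && cp %p /backups/wal/%f'`. To verify that archiving is keeping up:

```sql
-- failed_count > 0 means archive_command is failing; check the server log
SELECT archived_count, last_archived_wal, failed_count
FROM pg_stat_archiver;
```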

Restore

# Full restore from a plain SQL dump
psql -h localhost -U postgres -d brainzlab_platform < backup.sql

# Restore from a custom-format dump (created with pg_dump -Fc)
pg_restore -h localhost -U postgres -d brainzlab_platform backup.dump
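
True point-in-time recovery replays archived WAL on top of a base backup rather than restoring a dump. A sketch of the recovery settings (the target timestamp is illustrative); on PostgreSQL 12+ these live in `postgresql.conf`, and recovery starts when a `recovery.signal` file is present in the data directory:

```
# postgresql.conf in the restored data directory
restore_command = 'cp /backups/wal/%f %p'
recovery_target_time = '2024-06-01 03:00:00'
# then: touch recovery.signal in the data directory and start the server
```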

Retention

Automatic Cleanup

Configure retention in environment:
LOG_RETENTION_DAYS=30
ERROR_RETENTION_DAYS=90
The cleanup job runs daily.

Manual Cleanup

-- Delete old logs
DELETE FROM logs WHERE timestamp < NOW() - INTERVAL '30 days';

-- Delete resolved errors
DELETE FROM errors
WHERE status = 'resolved'
  AND resolved_at < NOW() - INTERVAL '90 days';
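
On large tables, a single `DELETE` holds one long transaction and generates heavy bloat. Deleting in batches keeps each transaction short (the 10,000-row batch size is illustrative; rerun until zero rows are affected):

```sql
-- One batch; repeat until DELETE 0
DELETE FROM logs
WHERE ctid IN (
  SELECT ctid FROM logs
  WHERE timestamp < NOW() - INTERVAL '30 days'
  LIMIT 10000
);
```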

Monitoring

Connection Count

SELECT count(*) FROM pg_stat_activity;
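
A raw count hides idle and stuck sessions; grouping by state is usually more informative:

```sql
-- 'idle in transaction' sessions holding locks are a common warning sign
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;
```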

Table Sizes

SELECT
  tablename,
  pg_size_pretty(pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename))) AS total_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)) DESC;

Slow Queries

Enable slow query logging:
# postgresql.conf
log_min_duration_statement = 1000  # Log queries > 1s
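
For aggregate statistics rather than individual log lines, the `pg_stat_statements` extension tracks per-query timing. It must be preloaded (`shared_preload_libraries = 'pg_stat_statements'`, restart required) and enabled with `CREATE EXTENSION pg_stat_statements;`:

```sql
-- Slowest queries by average execution time (column names per PostgreSQL 13+)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```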

High Availability

For production, consider:
  • Streaming Replication - Primary + read replicas
  • Patroni - Automated failover
  • PgBouncer - Connection pooling
  • Managed PostgreSQL - AWS RDS, Google Cloud SQL, etc.
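
As a sketch of the PgBouncer option, a minimal `pgbouncer.ini` (all values illustrative):

```
[databases]
brainzlab_platform = host=127.0.0.1 port=5432 dbname=brainzlab_platform

[pgbouncer]
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```

With `pool_mode = transaction`, server-side prepared statements do not survive between transactions; Rails apps behind a transaction-pooling PgBouncer typically need `prepared_statements: false` in `database.yml`.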