Anomaly Detection

Flux automatically detects unusual patterns in your events and metrics, alerting you to spikes, drops, and trends before they become problems.

How It Works

The anomaly detector runs every 15 minutes, comparing recent data against historical baselines:
  1. Baseline Calculation - Uses the same hour from previous days
  2. Comparison - Compares current values to the baseline
  3. Detection - Identifies significant deviations
  4. Severity - Assigns a severity based on the magnitude of the deviation
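The steps above can be sketched in a few lines of Ruby. This is illustrative only: the helper names are hypothetical, the real detector is internal to Flux, and the thresholds are the defaults documented later on this page (spike above 3x baseline, drop below 0.3x baseline).

```ruby
# Illustrative sketch of one detection pass; helper names are
# hypothetical, and the real detector is internal to Flux.
def deviation_percent(current, baseline)
  ((current - baseline) / baseline.to_f * 100).round
end

# Thresholds match the documented defaults:
# spike above 3x baseline, drop below 0.3x baseline.
def detect(current, baseline, spike_threshold: 3.0, drop_threshold: 0.3)
  if current > baseline * spike_threshold
    [:spike, deviation_percent(current, baseline)]
  elsif current < baseline * drop_threshold
    [:drop, deviation_percent(current, baseline)]
  end
end

detect(1500, 400) # => [:spike, 275]
detect(50, 200)   # => [:drop, -75]
```

Values between the two thresholds return nil: they are within the normal band and raise no anomaly.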

Anomaly Types

Spike

Value significantly higher than expected:
Current: 1,500 requests/min
Baseline: 400 requests/min
Deviation: +275%
→ Spike detected (Critical)
Threshold: More than 3x baseline (200%+ deviation)

Drop

Value significantly lower than expected:
Current: 50 orders/hour
Baseline: 200 orders/hour
Deviation: -75%
→ Drop detected (Warning)
Threshold: Less than 30% of baseline (70%+ deviation)

Trend

Sustained directional change over time:
Hour 1: 100 → 110 (+10%)
Hour 2: 110 → 125 (+14%)
Hour 3: 125 → 145 (+16%)
Hour 4: 145 → 170 (+17%)
→ Upward trend detected
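A minimal way to flag the pattern above is to check that every hour-over-hour change moves in the same direction. This is a simplification: Flux's actual trend criteria are not documented on this page.

```ruby
# Flag a trend when every hour-over-hour change moves the same way
# (simplified sketch; Flux's real trend criteria are internal).
def trend(values)
  return nil if values.size < 2
  deltas = values.each_cons(2).map { |a, b| b - a }
  return :upward   if deltas.all?(&:positive?)
  return :downward if deltas.all?(&:negative?)
  nil
end

trend([100, 110, 125, 145, 170]) # => :upward
```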

Severity Levels

Severity   Deviation   Color
Info       0-50%       Blue
Warning    50-100%     Yellow
Critical   >100%       Red
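In code terms, the table maps the absolute deviation to a severity roughly like this (exact boundary handling is an assumption):

```ruby
# Map an absolute deviation percentage to a severity level.
# Boundary handling (50 -> warning, 100 -> warning) is an assumption.
def severity(deviation_percent)
  case deviation_percent.abs
  when 0...50  then :info
  when 50..100 then :warning
  else              :critical
  end
end

severity(275) # => :critical
severity(-75) # => :warning
```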

Viewing Anomalies

Dashboard

Navigate to Flux > Anomalies to see:
  • Active anomalies
  • Anomaly timeline
  • Affected metrics/events
  • Deviation charts

API

# List anomalies
curl -G "https://flux.brainzlab.ai/api/v1/anomalies" \
  -H "Authorization: Bearer $API_KEY" \
  -d "status=active" \
  -d "severity=critical"
Response:
{
  "anomalies": [
    {
      "id": "anom_123",
      "type": "spike",
      "severity": "critical",
      "metric": "api.requests",
      "current_value": 1500,
      "baseline_value": 400,
      "deviation_percent": 275,
      "detected_at": "2024-01-15T14:30:00Z",
      "status": "active"
    }
  ]
}

MCP

flux_anomalies({
  status: "active",
  severity: ["warning", "critical"],
  from: "24h ago"
})

Managing Anomalies

Acknowledge

Mark an anomaly as seen:
POST /api/v1/anomalies/:id/acknowledge
This:
  • Marks it as acknowledged
  • Records who acknowledged and when
  • Keeps it in history for analysis

Resolve

Mark an anomaly as resolved:
POST /api/v1/anomalies/:id/resolve
Include an optional note:
{
  "note": "Marketing campaign caused traffic spike - expected behavior"
}
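Putting the endpoint and the note together as a single request, using Ruby's standard library (a sketch; `anom_123` is a placeholder id and `resolve_request` is an illustrative helper, not part of any SDK):

```ruby
require "json"
require "net/http"
require "uri"

# Build (but don't yet send) a resolve request with an optional note.
def resolve_request(anomaly_id, note, api_key)
  uri = URI("https://flux.brainzlab.ai/api/v1/anomalies/#{anomaly_id}/resolve")
  request = Net::HTTP::Post.new(uri)
  request["Authorization"] = "Bearer #{api_key}"
  request["Content-Type"]  = "application/json"
  request.body = JSON.generate(note: note)
  request
end

request = resolve_request("anom_123", "Expected traffic spike", ENV["API_KEY"])
# Net::HTTP.start(request.uri.hostname, request.uri.port, use_ssl: true) do |http|
#   http.request(request)
# end
```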

Notifications

Configure how you’re notified about anomalies:

Slack

BrainzLab.configure do |config|
  config.flux_notifications = {
    slack: {
      webhook_url: ENV["SLACK_WEBHOOK_URL"],
      channel: "#alerts",
      severity: [:warning, :critical]
    }
  }
end

Email

config.flux_notifications = {
  email: {
    recipients: ["[email protected]"],
    severity: [:critical]
  }
}

Webhook

config.flux_notifications = {
  webhook: {
    url: "https://your-app.com/webhooks/flux",
    headers: { "X-Secret": ENV["WEBHOOK_SECRET"] }
  }
}
Webhook payload:
{
  "event": "anomaly.detected",
  "anomaly": {
    "id": "anom_123",
    "type": "spike",
    "severity": "critical",
    "metric": "api.requests",
    "deviation_percent": 275,
    "detected_at": "2024-01-15T14:30:00Z"
  },
  "project_id": "proj_abc"
}

Configuration

Enable/Disable

BrainzLab.configure do |config|
  # Global enable/disable
  config.flux_anomaly_detection = true
end

Per-Metric Settings

Configure sensitivity per metric:
PUT /api/v1/metrics/:name/anomaly_config
{
  "enabled": true,
  "spike_threshold": 3.0,
  "drop_threshold": 0.3,
  "min_data_points": 100,
  "comparison_window": "7d"
}
Setting             Description                               Default
spike_threshold     Multiplier for spike detection            3.0
drop_threshold      Fraction of baseline for drop detection   0.3
min_data_points     Minimum data points before detecting      100
comparison_window   Historical period for the baseline        7d

Exclude Metrics

Disable anomaly detection for specific metrics:
PUT /api/v1/metrics/test.metric/anomaly_config
{
  "enabled": false
}

Best Practices

Set Appropriate Thresholds

Adjust thresholds based on metric volatility

Document Resolutions

Add notes when resolving anomalies

Review Regularly

Check acknowledged anomalies periodically

Route by Severity

Critical to PagerDuty, warnings to Slack

Example: Alert Pipeline

Set up a complete anomaly pipeline:
# config/initializers/brainzlab.rb
BrainzLab.configure do |config|
  config.flux_anomaly_detection = true

  config.flux_notifications = {
    # Critical → PagerDuty
    webhook: {
      url: ENV["PAGERDUTY_EVENTS_URL"],
      filter: ->(anomaly) { anomaly.severity == :critical }
    },

    # Warning → Slack
    slack: {
      webhook_url: ENV["SLACK_WEBHOOK_URL"],
      channel: "#alerts",
      severity: [:warning]
    },

    # All → Email digest
    email: {
      recipients: ["[email protected]"],
      digest: :daily
    }
  }
end

Anomaly History

View historical anomalies for pattern analysis:
GET /api/v1/anomalies?status=all&from=30d+ago
Use history to:
  • Identify recurring issues
  • Correlate with deployments
  • Find seasonal patterns
  • Tune detection thresholds