STING Platform Installation Guide

Welcome to STING! This guide will walk you through installing STING on your system. The installation is designed to be straightforward and handles most dependencies automatically.


Quick Start: One-Line Installer

bash -c "$(curl -fsSL https://raw.githubusercontent.com/AlphaBytez/STING-CE-Public/main/bootstrap.sh)"

What this does:

  • ✅ Automatically detects your OS (macOS, WSL2, Debian/Ubuntu)
  • ✅ Installs Docker if not present
  • ✅ Clones the STING repository
  • ✅ Launches the interactive web-based setup wizard
  • ✅ Guides you through configuration (domains, email, LLM preferences)
  • ✅ Creates your admin account
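
If you prefer to review the installer before piping it to a shell, the same script can be downloaded and inspected first:

# Download, inspect, then run the bootstrap script
curl -fsSL https://raw.githubusercontent.com/AlphaBytez/STING-CE-Public/main/bootstrap.sh -o bootstrap.sh
less bootstrap.sh
bash bootstrap.sh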

After installation, access STING at https://sting.local:8443 (see Access Verification below).

For additional installation options and detailed configuration, see the Fresh Installation Guide.


System Requirements

Minimum Requirements

Minimum and recommended specifications are listed under STING Core Requirements below. For optimal performance with larger deployments, see the Hardware Acceleration Guide and Performance Admin Guide.

Check Your System

# Check OS version
uname -a                    # Linux
sw_vers                     # macOS

# Check available memory
free -h                     # Linux
sysctl hw.memsize          # macOS

# Check available disk space
df -h

# Check Docker installation
docker --version
docker compose version
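
If you want a single pass/fail summary, the checks above can be combined into a short script. A minimal sketch for Linux, with thresholds taken from the STING Core Requirements below (on macOS, use the commands listed above instead):

# Preflight check against STING's documented minimums (Linux)
cores=$(nproc)
mem_mb=$(free -m | awk '/^Mem:/ {print $2}')   # free reports slightly under nominal RAM
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

[ "$cores" -ge 4 ]     && echo "CPU: OK ($cores cores)"  || echo "CPU: need 4+ cores"
[ "$mem_mb" -ge 7500 ] && echo "RAM: OK (${mem_mb}MB)"   || echo "RAM: need 8GB+"
[ "$disk_gb" -ge 30 ]  && echo "Disk: OK (${disk_gb}GB)" || echo "Disk: need 30GB+"
docker info >/dev/null 2>&1 && echo "Docker: running"    || echo "Docker: not running"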

STING Core Requirements

Operating System:
  - macOS: 11+ (Big Sur or later)
  - Linux: Ubuntu 20.04+ or equivalent
  - Note: Apple Silicon (M1/M2/M3) recommended for macOS

CPU:
  - Minimum: 4 cores
  - Recommended: 6-8 cores for comfortable performance
  - Production: 8+ cores for concurrent users

Memory (STING Core Only):
  - Minimum: 8GB RAM
  - Recommended: 12-16GB RAM
  - Production: 16GB+ RAM for large datasets
  - Docker Allocation: 6-8GB RAM for core services

Storage (STING Core Only):
  - System: 20-30GB for STING services
  - Documents: Scale based on knowledge base size
  - Backups: Additional space for database backups

Network:
  - Initial Setup: Internet required for downloads
  - Operation: Can run offline after setup (except external AI)
  - Speed: 100 Mbps+ recommended

Docker:
  - Version: Docker Desktop 4.0 or later
  - Required: Docker Compose plugin
  - Resources: Allocate 6-8GB RAM to Docker
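
To confirm how much memory the Docker engine can actually use:

# Total memory visible to Docker, in bytes (6GB is 6442450944)
docker info --format '{{.MemTotal}}'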

AI Model Requirements (Separate)

Option 1: External AI (Recommended for most users):
  Additional RAM: 0GB (runs in cloud)
  Additional Storage: 0GB (no local models)
  Examples: OpenAI, Claude, Gemini

Option 2: Ollama Local (Privacy-focused):
  Additional RAM: +4-8GB depending on model size
  Additional Storage: +10-30GB for model files
  Models:
    - phi3:mini: ~2GB, runs on 4GB RAM
    - llama3:8b: ~5GB, runs on 8GB RAM
    - deepseek-r1: ~8GB, runs on 12GB RAM

Option 3: Legacy In-Process Models (Not recommended):
  Additional RAM: +8-16GB for model loading
  Additional Storage: +20-50GB for model files
  Note: Deprecated in favor of Ollama/External AI

Tested Configurations

Minimum Viable (Tested):
  CPU: 4 cores
  RAM: 8GB total (6GB to Docker)
  Storage: 30GB
  AI: External API or Ollama on separate host
  Use Case: Single user, small datasets (<1000 docs)

Comfortable Development (Tested):
  CPU: 4-6 cores
  RAM: 16GB total (8GB to Docker + 4GB for Ollama)
  Storage: 50GB
  AI: Ollama local with phi3:mini
  Use Case: 1-5 users, moderate datasets (<10k docs)

Production Baseline:
  CPU: 8+ cores
  RAM: 16GB+ total
  Storage: 100GB+
  AI: External API or dedicated Ollama instance
  Use Case: 5-20 users, large datasets (10k+ docs)

Small Deployment (1-5 users, <1000 documents)

Profile: Small Team / Development
Users: 1-5 concurrent users
Documents: < 1,000 documents

Hardware:
  CPU: Apple M1/M2/M3 or Intel/AMD 8+ cores
  Memory: 16GB RAM
  Storage: 100GB SSD
  Network: 100 Mbps

Docker Resources:
  Memory Limit: 12GB
  CPU Limit: 6 cores
  Optimized: Resource limits are preconfigured for this profile

Medium Deployment (5-20 users, 1000-10000 documents)

Profile: Medium Organization
Users: 5-20 concurrent users
Documents: 1,000 - 10,000 documents

Hardware:
  CPU: Apple M2 Pro/M3 or Intel/AMD 12+ cores
  Memory: 32GB RAM
  Storage: 250GB NVMe SSD
  Network: 1 Gbps

Docker Resources:
  Memory Limit: 24GB
  CPU Limit: 10 cores

Scaling Recommendations:
  - Consider dedicated ChromaDB instance
  - Separate Redis cache server
  - Load balancer for multiple app instances

Large Deployment (20+ users, 10000+ documents)

Profile: Enterprise / Production
Users: 20+ concurrent users
Documents: 10,000+ documents

Hardware:
  CPU: Apple M2 Max/M3 Max or Intel/AMD 16+ cores
  Memory: 64GB+ RAM
  Storage: 500GB+ SSD with high IOPS (NVMe recommended)
  Network: 10 Gbps
  GPU: Metal Performance Shaders (macOS) or CUDA-compatible (Linux)

Docker Resources:
  Memory Limit: 48GB
  CPU Limit: 14 cores

Architecture:
  - Redis cluster for caching
  - Separate knowledge processing workers
  - Dedicated ChromaDB with replication
  - Load-balanced application servers
  - Monitoring stack (Prometheus + Grafana)

Quick Start Installation

Step 1: Install Git

macOS:

# Install Xcode Command Line Tools (includes git)
xcode-select --install

# OR install via Homebrew
brew install git

Ubuntu/Debian:

# Update system packages
sudo apt update

# Install git
sudo apt install -y git
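
Verify the install before continuing:

# Should print a git version string
git --version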

Step 2: Clone Repository

# Clone STING repository
git clone https://github.com/AlphaBytez/STING-CE-Public.git
cd STING-CE-Public

# Verify you're in the correct directory
ls -la  # Should show manage_sting.sh, conf/, frontend/, etc.

Step 3: Run Installation

# Run the installer (handles all dependencies automatically)
sudo bash install_sting.sh

Expected Installation Output:

✓ Detected snap Docker - automatically replacing with apt version
✓ Docker Engine installed successfully
✓ System dependencies installed
✓ Configuration files generated
✓ Network connectivity confirmed
✓ Disk space sufficient (XX GB available)

Installation Progress:

[1/8] Validating environment...                ✓
[2/8] Building Docker images...                ✓
[3/8] Initializing databases...                ✓
[4/8] Configuring services...                  ✓
[5/8] Starting core services...                ✓
[6/8] Setting up authentication...             ✓
[7/8] Preparing AI models...                   ✓
[8/8] Final health checks...                   ✓

🎉 STING Platform installed successfully!

AI Model Setup

STING supports two AI model options. The setup wizard will guide you through configuration during installation.

Option 1: Ollama (Local, Privacy-Focused)

  1. Install Ollama from ollama.com
  2. Pull recommended models:
    ollama pull phi3:mini       # Fast, lightweight model
    ollama pull llama3:8b       # Balanced performance
    ollama pull deepseek-r1     # Advanced reasoning
    
  3. Configure in STING:
    • The setup wizard will detect Ollama automatically
    • Or configure later in Settings → AI Models → Ollama

For detailed model recommendations and configuration, see the Ollama Model Setup Guide.

Option 2: OpenAI and External Providers

Configure API keys for cloud-based AI during setup or in the STING interface:

Supported Providers:

  • OpenAI: GPT-4, GPT-4-Turbo, GPT-3.5-Turbo, o1, o1-mini
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • Google: Gemini Pro, Gemini Pro Vision
  • Custom: Any OpenAI-compatible API endpoint

Configuration via Settings:

  1. Navigate to Settings → AI Models → External Providers
  2. Add your API key for the provider you want to use
  3. Select your preferred model
  4. Test the connection

Environment Variables (Optional):

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."

Post-Installation Setup

Access Verification

Test Frontend Access:

# Open browser to frontend
open https://sting.local:8443  # Development

Test API Access:

# Check API health
curl -k https://sting.local:5050/api/auth/health

# Expected response: {"status": "healthy", "timestamp": "..."}

Create Your First User

  1. If a default admin account was not created during installation, create one first:
    sudo msting create admin [email-here]
    
  2. Check output for confirmation
  3. Enter email (default admin@sting.local)
  4. Complete email verification
    • Check Mailpit at http://sting.local:8025 for the verification email
  5. Log in to access the dashboard

Test the Chatbot

  1. Navigate to the Bee Chat interface
  2. Send a test message: “Hello, what is STING?”
  3. Verify Bee responds appropriately
  4. Check that your selected model is working

Resource Management

Docker Resource Allocation

STING Core includes optimized Docker resource limits that work on 8GB RAM systems. Scale up for production workloads.

Minimal Configuration (8GB RAM System):
  knowledge:
    memory: 2GB
    cpu: 1.0 core
    purpose: Document processing & indexing

  chroma:
    memory: 1GB
    cpu: 0.5 cores
    purpose: Vector search & embeddings

  app:
    memory: 1GB
    cpu: 1.0 core
    purpose: Core API & business logic

  database:
    memory: 768MB
    cpu: 0.5 cores
    purpose: PostgreSQL database

  frontend:
    memory: 512MB
    cpu: 0.5 cores
    purpose: React web interface

  vault:
    memory: 384MB
    cpu: 0.25 cores
    purpose: Secrets management

  redis:
    memory: 384MB
    cpu: 0.25 cores
    purpose: Session & data caching

  messaging:
    memory: 256MB
    cpu: 0.25 cores
    purpose: Message queue & events

Total Allocation (Minimal):
  Max Memory: ~6.3GB (fits in 8GB system)
  Max CPU: ~4.25 cores (works on 4-core system)
  System Buffer: 1.7GB reserved for OS & other processes
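
If you need different limits, they can be expressed in a Compose override file. A minimal sketch for one service, assuming STING's services are defined in a standard Compose project (the file and service names here are illustrative and may differ in your deployment):

# Illustrative override pinning the knowledge service to the minimal profile
cat > docker-compose.override.yml <<'EOF'
services:
  knowledge:
    mem_limit: 2g
    cpus: 1.0
EOF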

---

Recommended Configuration (16GB RAM System):
  knowledge:
    memory: 3GB
    cpu: 1.5 cores

  chroma:
    memory: 2GB
    cpu: 1.0 core

  app:
    memory: 1.5GB
    cpu: 1.0 core

  database:
    memory: 1GB
    cpu: 1.0 core

  frontend:
    memory: 512MB
    cpu: 0.5 cores

  vault:
    memory: 512MB
    cpu: 0.25 cores

  redis:
    memory: 512MB
    cpu: 0.5 cores

  messaging:
    memory: 256MB
    cpu: 0.25 cores

Total Allocation (Recommended):
  Max Memory: ~9.3GB (comfortable on 16GB system)
  Max CPU: ~5.5 cores
  System Buffer: 6.7GB for OS, Ollama, and other processes

Performance Monitoring

# Monitor all containers
docker stats --no-stream

# Check specific service
docker stats sting-ce-knowledge --no-stream
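
For a more readable snapshot, docker stats accepts a format template:

# Name, memory, and CPU per container in a compact table
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"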

Advanced Configuration

AI Model Configuration

Configure AI models through the web interface or environment variables:

Via Web Interface (Recommended):

  1. Navigate to Settings → AI Models
  2. Configure Ollama or External API providers
  3. Set your preferred default model
  4. Adjust temperature and max tokens as needed

Via Environment Variables:

# Ollama Configuration
export OLLAMA_BASE_URL="http://localhost:11434"
export OLLAMA_DEFAULT_MODEL="llama3:8b"

# External AI Configuration
export OPENAI_API_KEY="sk-..."
export DEFAULT_AI_PROVIDER="openai"
export DEFAULT_AI_MODEL="gpt-4"

Database Access

# Connect to database directly
psql -h localhost -p 5433 -U postgres -d sting_app

# View database credentials
cat ~/.sting-ce/env/db.env

SSL Certificate Setup

Development (Self-Signed)

# Certificates are auto-generated during installation
ls -la ~/.sting-ce/certs/

Production (Let's Encrypt)

# Update configuration for your domain
vim conf/config.yml

# Set your domain and email
application:
  ssl:
    domain: "your-domain.com"
    email: "admin@your-domain.com"

# Restart with new configuration
./manage_sting.sh restart

Troubleshooting

Common Issues

Docker Service Failures

# Check Docker status
docker ps -a

# Restart Docker Desktop (macOS)
killall Docker && open -a Docker

# Check Docker logs
docker compose logs [service]

Port Conflicts

# Check port usage
lsof -i :3010  # Frontend
lsof -i :5050  # API

# Kill conflicting processes
sudo kill -9 [PID]
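
Before killing anything, it helps to confirm which process owns the port. On Linux, ss shows all listeners in one pass:

# List listeners on the frontend and API ports (Linux)
sudo ss -ltnp | grep -E ':(3010|5050)'
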
AI Model Connection Issues

For Ollama:

# Check if Ollama is running
ollama list

# Restart Ollama service
systemctl restart ollama  # Linux
# or restart the Ollama app on macOS

# Test Ollama connection
curl http://localhost:11434/api/tags

For External APIs:

# Test OpenAI connection
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

# Verify API key in STING Settings → AI Models

For comprehensive LLM troubleshooting, see the LLM Health Check Guide.

Memory Issues

# Check Docker container memory usage
docker stats --no-stream

# If Ollama models use too much RAM (especially on APUs, where CPU and GPU share system memory):
# Switch to a smaller model (e.g., phi3:mini instead of llama3:70b)
ollama pull phi3:mini

# Update default model in STING Settings → AI Models

Health Diagnostics

# Comprehensive health check
./manage_sting.sh health

# Individual service health
curl -k https://sting.local:5050/api/auth/health
curl -k http://sting.local:8086/health

Recovery Procedures

# Restart all services
./manage_sting.sh restart

# Regenerate configuration
./manage_sting.sh regenerate-config

# Stop services
./manage_sting.sh stop

# Remove containers (preserves data)
docker compose down

# Restart installation
./manage_sting.sh start

# Remove all data (DESTRUCTIVE)
./manage_sting.sh uninstall --force

# Remove installation directory
rm -rf ~/.sting-ce

Security Hardening

Change Default Passwords

# Generate new secrets
./manage_sting.sh regenerate-secrets

# Update database password
vim ~/.sting-ce/env/db.env
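
Credential changes typically take effect only after a restart:

# Restart services so they pick up the new secrets
./manage_sting.sh restart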

Firewall Configuration

# Block unnecessary ports
sudo ufw enable
sudo ufw deny 5433  # PostgreSQL (internal only)
sudo ufw deny 8200  # Vault (admin only)
sudo ufw allow 443  # HTTPS
sudo ufw allow 80   # HTTP (redirect to HTTPS)
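
Verify the resulting rule set:

sudo ufw status verbose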

Production SSL Certificates

# Install certbot
sudo apt install certbot

# Generate Let's Encrypt certificate
sudo certbot certonly --standalone -d your-domain.com

# Update STING configuration to point to certificate
vim conf/config.yml
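
Let's Encrypt certificates expire after 90 days. The certbot package typically configures automatic renewal; confirm it with a dry run:

# Simulate renewal without touching the live certificate
sudo certbot renew --dry-run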

Next Steps

Need help? Visit our Troubleshooting Guide or reach out to the community!
