STING-CE Fresh Installation Guide
Prerequisites
- Docker Desktop installed and running.
- Python 3.8+ installed.
- Internet connectivity for downloading Docker images.
- Hugging Face account with API token (optional but recommended).
- At least 20GB free disk space (10GB for models, 10GB for Docker images).
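The disk-space requirement above can be checked before you start. A minimal sketch, assuming a POSIX shell and that models and images land under your home directory (the `df`/`awk` invocation and the `$HOME` path are assumptions about your environment):

```shell
# Preflight check for the 20GB free-space requirement from this guide.
required_kb=$((20 * 1024 * 1024))          # 20GB expressed in 1K blocks
# df -Pk gives POSIX output; column 4 of line 2 is available space in KB.
free_kb=$(df -Pk "$HOME" | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge "$required_kb" ]; then
  echo "OK: ${free_kb} KB free"
else
  echo "WARNING: only ${free_kb} KB free; this guide recommends ${required_kb} KB"
fi
```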
Installation Steps
Step 1: Clone the Repository
git clone https://github.com/your-repo/STING-CE.git
cd STING-CE/STING
Step 2: Download LLM Models (REQUIRED)
Before running the installer, you must download the LLM models:
# For testing/development (recommended - ~5GB)
./download_small_models.sh
# OR for production use (~15GB)
./download_optimized_models.sh
Models will be downloaded to: ~/Downloads/llm_models/
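Since the installer expects the models to already be in place, a quick sanity check before proceeding can save a failed run. This sketch only assumes the `~/Downloads/llm_models/` path stated above:

```shell
# Confirm the download scripts actually populated the model directory.
MODEL_DIR="$HOME/Downloads/llm_models"
if [ -d "$MODEL_DIR" ] && [ -n "$(ls -A "$MODEL_DIR" 2>/dev/null)" ]; then
  models=present
else
  models=missing
fi
echo "Model directory status: $models"
```

If this reports `missing`, re-run one of the download scripts before continuing to Step 4.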
Step 3: Set Hugging Face Token (Optional)
export HF_TOKEN="your_huggingface_token_here"
Step 4: Run the Installer
./install_sting.sh
What the Installer Does
Checks Prerequisites
- Verifies Docker is running.
- Checks network connectivity.
- Verifies LLM models are pre-downloaded.
Builds Docker Images
- Creates base images for all services.
- Configures networking and volumes.
Starts Core Services
- PostgreSQL database.
- Vault for secrets management.
- Kratos for authentication.
- Frontend and backend services.
Configures LLM Services
- On macOS: Starts native Metal-accelerated service.
- On Linux: Starts Docker-based LLM services.
- Uses pre-downloaded models (no download during install).
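The macOS/Linux branch described above can be sketched with a standard `uname` check (the `uname -s` values are standard; the service descriptions simply restate the list above, and this is an illustration, not the installer's actual code):

```shell
# Sketch of the installer's OS detection for LLM services.
os=$(uname -s)
case "$os" in
  Darwin) llm_mode="native Metal-accelerated service" ;;
  Linux)  llm_mode="Docker-based LLM services" ;;
  *)      llm_mode="unsupported" ;;
esac
echo "Detected $os: would start $llm_mode"
```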
Troubleshooting
Network Issues
If you see DNS resolution errors:
- Check your internet connection.
- Check Docker's DNS settings.
- Restart Docker Desktop.
- Try using a different DNS server.
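A useful first step is to separate host-level DNS failures from Docker-level ones. A minimal sketch, assuming either `getent` (Linux) or `nslookup` is available; `registry-1.docker.io` is just an example hostname that Docker pulls from:

```shell
# Test DNS resolution on the host itself.
if getent hosts registry-1.docker.io >/dev/null 2>&1 \
   || nslookup registry-1.docker.io >/dev/null 2>&1; then
  host_dns=ok
else
  host_dns=fail
fi
echo "Host DNS resolution: $host_dns"
# If the host resolves fine but containers cannot, the problem is inside
# Docker's DNS configuration; compare with:
#   docker run --rm alpine nslookup registry-1.docker.io
```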
Model Download Issues
If models fail to download:
- Ensure you have a valid HF_TOKEN.
- Check disk space in ~/Downloads/.
- Try downloading models manually.
- Check firewall/proxy settings.
Installation Hangs
If installation appears stuck:
- Check ~/.sting-ce/logs/manage_sting.log.
- Ensure models were pre-downloaded.
- Check Docker container logs.
- Verify no conflicting services on required ports.
Post-Installation
After successful installation:
- Access the frontend at: https://localhost:8443.
- Register a new account.
- Check service status:
./manage_sting.sh status.
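A simple smoke test for the frontend can complement `./manage_sting.sh status`. This sketch assumes `curl` is installed; `-k` skips certificate verification, which is typically needed because a local install serves a self-signed certificate (an assumption about this setup):

```shell
# Probe the frontend and report the HTTP status code (000 = no connection).
if command -v curl >/dev/null 2>&1; then
  code=$(curl -sk -o /dev/null -m 5 -w '%{http_code}' https://localhost:8443/ || true)
else
  code=unknown   # curl not installed; fall back to opening the URL in a browser
fi
echo "Frontend responded with: $code"
```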
Required Ports
Ensure these ports are available:
- 8443: Frontend
- 5050: Backend API
- 5432: PostgreSQL
- 8200: Vault
- 8080: LLM Gateway
- 8081: Chatbot Service
- 8086: Native LLM Service (macOS)
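You can verify the ports above are free before installing. A sketch using `lsof`, which is present on both macOS and most Linux distributions (on Linux, `ss -ltn` is an alternative); the port list mirrors the table above:

```shell
# Report which of the required ports are already taken.
report=$(
  for port in 8443 5050 5432 8200 8080 8081 8086; do
    if lsof -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1; then
      echo "Port $port: IN USE"
    else
      echo "Port $port: free"
    fi
  done
)
echo "$report"
```

Any port reported as `IN USE` must be freed (or its service stopped) before running the installer.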
Default Model Configuration
The system defaults to TinyLlama for broad hardware compatibility:
- Model: TinyLlama-1.1B-Chat
- Path: ~/Downloads/llm_models/TinyLlama-1.1B-Chat
- Size: ~2.2GB
You can change models after installation using:
./sting-llm load <model_name>