Before You Install Nectar

Nectar runs STING services in Docker containers and uses local AI models so your data never leaves your computer. This page helps you set up everything you need.

Step 1: Install Docker

Docker runs the STING services that power Nectar. Install it first:

System     Download
macOS      Docker Desktop for Mac
Windows    Docker Desktop for Windows
Linux      Docker Engine or Docker Desktop

After installing, make sure Docker is running (look for the Docker icon in your system tray/menu bar).

Verify Docker is Working

docker --version
docker compose version

You should see version numbers for both. If not, restart Docker Desktop (on Linux with Docker Engine, check that the Docker daemon is running).
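
To confirm Docker can actually run containers, and not just that the CLI is installed, you can also run the standard hello-world image:

docker run --rm hello-world

If you see a “Hello from Docker!” message, the daemon is working.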


Step 2: Install Ollama

Ollama is a free tool that runs AI models on your computer. Think of it as a “runtime” for AI: Nectar talks to it to get AI responses.

Download Ollama

Go to ollama.com and download for your system:

System     Download
macOS      Download for Mac
Windows    Download for Windows
Linux      curl -fsSL https://ollama.com/install.sh | sh

Verify Ollama is Working

After installing, open your terminal (or Command Prompt on Windows) and run:

ollama --version

You should see a version number. If you get an error, restart your computer and try again.
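
The version check only confirms the CLI is installed. Nectar talks to the Ollama server over HTTP (port 11434 by default), so you can also confirm the server itself is up:

curl http://localhost:11434

A healthy server replies with “Ollama is running”.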


Step 3: Choose Your AI Model

AI models are like different “brains” for the assistant: bigger models are smarter but need more memory.

Pick ONE model to start with based on your computer’s RAM:

Your RAM   Recommended Model            Command to Install
8GB        Phi-3 Mini (lightweight)     ollama pull phi3
16GB       Llama 3.3 (best quality)     ollama pull llama3.3
32GB+      Llama 3.3 + DeepSeek Coder   Both commands below
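
Not sure how much RAM you have? These commands print total memory (on Windows, the Performance tab of Task Manager shows it):

# macOS - total RAM in bytes
sysctl -n hw.memsize

# Linux - total and available RAM in human-readable units
free -h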

Install Your Model

Open terminal and run the command for your chosen model:

# For most users (16GB+ RAM) - RECOMMENDED
ollama pull llama3.3

# For computers with limited RAM (8GB)
ollama pull phi3

# Optional: For code/programming tasks (needs 16GB+ RAM)
ollama pull deepseek-coder-v2

Verify Your Model is Installed

ollama list

You should see your model(s) listed. Example output:

NAME              SIZE
llama3.3:latest   4.7 GB

Step 4: Test Ollama (Optional)

Before installing Nectar, you can test that Ollama works:

ollama run llama3.3 "Hello, what can you help me with?"

If you see an AI response, you’re ready to install Nectar!

The one-shot command above exits on its own. If you start an interactive session instead (ollama run llama3.3 with no prompt), type /bye or press Ctrl+D to exit.
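
Nectar talks to Ollama through its local HTTP API rather than the CLI. If you’d like to test that same path yourself, here is a minimal request against Ollama’s standard /api/generate endpoint (swap in whichever model you pulled):

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.3",
  "prompt": "Hello, what can you help me with?",
  "stream": false
}'

You should get back a JSON object whose response field contains the model’s answer.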


System Requirements Summary

Component   Minimum                                 Recommended
OS          macOS 11+, Windows 10+, Ubuntu 20.04+   Latest version
RAM         8GB                                     16GB+
Storage     10GB free                               20GB+ free
CPU         Dual-core                               Quad-core+
GPU         Not required                            Helps with speed

Apple Silicon Users

If you have an M1/M2/M3 Mac, you’re in luck! Ollama automatically uses your GPU for faster responses.
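
To check that a model is actually running on the GPU, recent versions of Ollama include a process list:

ollama ps

While a model is loaded, the PROCESSOR column should read “100% GPU” when Metal acceleration is active.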

Windows Users

Make sure WSL2 is enabled if you want the best performance. Ollama will guide you through this during installation.
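
On recent Windows 10/11 builds you can check your WSL setup from PowerShell; wsl --status reports your default WSL version, and wsl -l -v lists installed distributions:

wsl --status
wsl -l -v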


Troubleshooting

“ollama: command not found”

  • Mac: Make sure you dragged Ollama to Applications and ran it once
  • Windows: Restart your computer after installing
  • Linux: Run the install script again, then try the PATH check below
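
On Mac and Linux, you can check whether your shell can find the binary at all:

which ollama

If this prints nothing, the installer didn’t put ollama on your PATH; re-running it usually fixes this.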

Model download stuck

  • Check your internet connection
  • Try a smaller model first (phi3)
  • Some corporate networks block large downloads

“Not enough memory”

  • Close other applications
  • Try a smaller model (phi3 instead of llama3.3)
  • Restart your computer to free up memory

Ready?

Once you have:

  • ✅ Docker installed and running
  • ✅ Ollama installed and running
  • ✅ At least one AI model pulled
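
If you want to check all three at once, here is a small preflight sketch for bash/zsh (not part of Nectar itself):

# Docker daemon reachable?
docker info > /dev/null 2>&1 && echo "Docker: OK" || echo "Docker: not running"

# Ollama server reachable?
ollama list > /dev/null 2>&1 && echo "Ollama: OK" || echo "Ollama: not running"

# At least one model pulled? (skip the header line of ollama list)
[ "$(ollama list | tail -n +2 | wc -l)" -gt 0 ] && echo "Model: OK" || echo "Model: none pulled"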

You’re ready to Install Nectar →
