Recallium Installation Guide

Get Recallium running in under 2 minutes.

Quick Overview

Recallium is an AI memory and project intelligence server that gives your IDE agents persistent memory. It uses the Model Context Protocol (MCP) to store, search, and reason about your code, decisions, and project knowledge—creating a digital twin that grows smarter over time.

| Feature | What You Get |
| --- | --- |
| Persistent Memory | Decisions, patterns, learnings saved across sessions |
| Cross-Project Intelligence | Lessons learned once → applied everywhere |
| Document Knowledge Base | Upload PDFs, specs → AI understands instantly |
| 16 MCP Tools | Full toolkit for IDE agents |
| Web Dashboard | 18-page UI for management and insights |

Free Local Option (No API Keys!)

Want to run completely free and private? Use Ollama + built-in embeddings:

LLM: Ollama (Llama 3, Mistral, Qwen - runs locally)
Embeddings: GTE-Large (built-in, no API needed)
Cost: $0

Just install Ollama, pull a model, and select it in the Setup Wizard. Your data never leaves your machine.

IDE Support At-a-Glance

| Connection Type | IDEs | Setup |
| --- | --- | --- |
| HTTP Direct (recommended) | Cursor, VS Code, Claude Code, Claude Desktop, Windsurf, Roo Code, Visual Studio 2022 | Add the URL to your config |
| npm Client (stdio→HTTP bridge) | Zed, JetBrains, Cline, BoltAI, Augment Code, Warp, Amazon Q | Run npm install -g recallium first |

Prerequisites

1. Docker

Install Docker for your platform.

2. Ollama (for free local AI)

Install Ollama:

# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows
# Download from https://ollama.ai/download

Start Ollama:

ollama serve
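Before moving on, it can be worth confirming that the Ollama API is actually answering. A small sketch, assuming the default port 11434; it prints a status code either way, so it is safe to run on a machine where Ollama is not up yet:

```shell
# Probe Ollama's API; 200 means it is serving, 000 means nothing answered.
status=$(curl -s -m 2 -o /dev/null -w '%{http_code}' http://localhost:11434/api/tags 2>/dev/null) || true
echo "Ollama HTTP status: ${status:-000}"
```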

3. Pull Required Models

# Required for insights (or use OpenAI/Anthropic)
ollama pull qwen2.5-coder:7b
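To confirm the model actually downloaded, a sketch like the following reports the result without failing on machines where Ollama is not installed:

```shell
# Report whether the insights model is available locally.
if command -v ollama >/dev/null 2>&1 && ollama list 2>/dev/null | grep -q 'qwen2.5-coder'; then
  model_status="model ready"
else
  model_status="model missing or Ollama not installed"
fi
echo "$model_status"
```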

Note: No npm Client Needed (For Most IDEs)

Recallium now uses HTTP transport — most modern IDEs connect directly to the Docker container:

No npm client needed (HTTP or Extension): Cursor, VS Code, Claude Code, Claude Desktop, Windsurf, and 10+ other IDEs

npm client required (STDIO→HTTP bridge): Zed, JetBrains, Cline, BoltAI, and other command-only IDEs

If your IDE requires the npm client (see IDE Integration section below), install it:

npm install -g recallium

Quick Start

macOS (30 Seconds)

cd install
chmod +x start-recallium.sh
./start-recallium.sh

What the script does:

  1. Checks that Docker is installed and running
  2. Verifies that recallium.env exists
  3. Stops and removes any existing container
  4. Pulls the latest image from Docker Hub
  5. Starts the container with IPv6 dual-stack binding (Safari compatible)
  6. Opens your browser when ready

That's it! Visit http://localhost:9001 to complete setup.

Linux (30 Seconds)

cd install
chmod +x start-recallium.sh
./start-recallium.sh

Visit http://localhost:9001 to complete setup.

IPv6 Note: The scripts use IPv6 dual-stack port binding ([::]) for Safari compatibility on macOS. If you encounter port binding errors on Linux, your Docker daemon may not have IPv6 enabled. Two options:

Option A: Enable IPv6 in Docker

# Edit /etc/docker/daemon.json
{ "ipv6": true }
# Then restart Docker: sudo systemctl restart docker

Option B: Use IPv4-only binding (edit start-recallium.sh)

# Change: -p "[::]:${PORT}:9000"
# To:     -p "${PORT}:9000"
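The Option B edit can also be applied mechanically with sed. The sketch below works on a copy of the script (and fabricates a stand-in line if the script is not present) so nothing changes in place until you are happy with the result:

```shell
# Work on a copy; create a stand-in line if the script is missing.
cp start-recallium.sh start-recallium-ipv4.sh 2>/dev/null || \
  echo '-p "[::]:${PORT}:9000"' > start-recallium-ipv4.sh

# Drop the "[::]:" prefix to fall back to IPv4-only binding.
sed -i.bak 's/\[::\]://g' start-recallium-ipv4.sh
grep 'PORT' start-recallium-ipv4.sh
```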

Windows (2 Minutes)

Windows requires additional Ollama configuration for Docker connectivity.

Step 1: Configure Ollama Environment Variable (one-time setup)

  1. Open "Edit the system environment variables" (search in Start menu)
  2. Click "Environment Variables"
  3. Under "System variables" → Click "New"
  4. Variable name: OLLAMA_HOST
  5. Variable value: 0.0.0.0:11434
  6. Click OK → OK
  7. Restart Ollama (close and reopen the application)

Step 2: Add Windows Firewall Rule (one-time setup)

Open Command Prompt or PowerShell as Administrator and run:

netsh advfirewall firewall add rule name="Ollama" dir=in action=allow program="C:\Users\<YOUR_USERNAME>\AppData\Local\Programs\Ollama\ollama.exe" enable=yes profile=private

Replace <YOUR_USERNAME> with your Windows username.

Step 3: Start Recallium

cd install
start-recallium.bat

What the script does:

  1. Checks that Docker is installed and running
  2. Verifies that recallium.env exists
  3. Stops and removes any existing container
  4. Pulls the latest image from Docker Hub
  5. Auto-detects your IP address and updates OLLAMA_HOST/OLLAMA_BASE_URL in recallium.env
  6. Starts the container with the proper configuration
  7. Opens your browser when ready

Visit http://localhost:9001 to complete setup.

Alternative: Docker Compose (More Control)

Use docker-compose if you want more control or need to customize the setup.

cd install
docker compose --env-file recallium.env pull    # Download latest image
docker compose --env-file recallium.env up -d   # Start container

Windows users: If using docker-compose instead of start-recallium.bat, you must manually update recallium.env with your IP address:

OLLAMA_HOST=http://YOUR_IP:11434
OLLAMA_BASE_URL=http://YOUR_IP:11434

Find your IP with: ipconfig | findstr "IPv4"
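The same edit can be scripted. This sketch writes a sample of the two lines and rewrites them with sed; the IP value is a placeholder you would take from ipconfig (usable from Git Bash or WSL; the real file is install/recallium.env):

```shell
IP=192.168.1.50   # placeholder: substitute the IPv4 address from ipconfig

# Sample of the two Ollama lines as shipped.
cat > recallium.env.sample <<'EOF'
OLLAMA_HOST=http://localhost:11434
OLLAMA_BASE_URL=http://localhost:11434
EOF

# Point both URLs at the LAN IP so the Docker container can reach Ollama.
sed -i.bak "s|http://localhost:11434|http://${IP}:11434|g" recallium.env.sample
cat recallium.env.sample
```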

Advantages of docker-compose:

  • Easier to customize (edit docker-compose.yml)
  • Standard Docker workflow
  • Better for integration with other services

Port Configuration (Optional)

Default ports work for most users. Only change if you have conflicts.

Edit recallium.env before starting:

HOST_UI_PORT=9001        # Web UI: http://localhost:9001
HOST_API_PORT=8001       # MCP API: http://localhost:8001
HOST_POSTGRES_PORT=5433  # PostgreSQL: localhost:5433
VOLUME_NAME=recallium-v1 # Data volume name

Port mapping:

| Your Machine | Container | Service |
| --- | --- | --- |
| HOST_UI_PORT (9001) | 9000 | Web UI |
| HOST_API_PORT (8001) | 8000 | MCP API |
| HOST_POSTGRES_PORT (5433) | 5432 | PostgreSQL |

Important: If you change HOST_API_PORT, update your IDE's MCP configuration to match.
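If you do edit the ports, a quick sanity check that the env file still defines every expected key can save a failed start. A sketch, shown here against a generated sample file; point it at install/recallium.env for real use:

```shell
# Generate a sample env file for illustration.
cat > recallium.env.sample <<'EOF'
HOST_UI_PORT=9001
HOST_API_PORT=8001
HOST_POSTGRES_PORT=5433
VOLUME_NAME=recallium-v1
EOF

# Verify each required key is present.
missing=0
for key in HOST_UI_PORT HOST_API_PORT HOST_POSTGRES_PORT VOLUME_NAME; do
  grep -q "^${key}=" recallium.env.sample || { echo "missing: $key"; missing=$((missing+1)); }
done
echo "$missing key(s) missing"
```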

Access Recallium

  • Web UI: http://localhost:9001 (or your configured HOST_UI_PORT)
  • MCP API: http://localhost:8001 (or your configured HOST_API_PORT)
  • Health Check: http://localhost:8001/health

30-Second Verification

# 1. Check container is running
docker ps -f name=recallium

# 2. Verify health endpoint
curl http://localhost:8001/health
# Expected: {"status":"healthy",...}

# 3. Check MCP tools are available
curl http://localhost:8001/mcp/status
# Expected: List of 16 available tools

# 4. Open the Web UI
open http://localhost:9001   # macOS
# or visit http://localhost:9001 in your browser

If all checks pass, proceed to the Setup Wizard!
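The HTTP checks above can be rolled into one small script that tallies results instead of aborting on the first failure. A sketch; adjust the port if you changed HOST_API_PORT:

```shell
passed=0; total=2

# Health endpoint should answer with HTTP 200.
code=$(curl -s -m 2 -o /dev/null -w '%{http_code}' http://localhost:8001/health 2>/dev/null) || true
if [ "$code" = "200" ]; then passed=$((passed+1)); fi

# MCP status endpoint should answer with HTTP 200 as well.
code=$(curl -s -m 2 -o /dev/null -w '%{http_code}' http://localhost:8001/mcp/status 2>/dev/null) || true
if [ "$code" = "200" ]; then passed=$((passed+1)); fi

echo "$passed/$total checks passed"
```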

Setup Wizard (First-Time Configuration)

On first launch, visit http://localhost:9001 to complete the setup wizard:

[Screenshot: Setup Wizard welcome screen. The wizard guides you through initial configuration.]

1. Choose Your LLM Provider

[Screenshot: LLM provider selection, showing the 5 supported providers]

Recallium works with any LLM provider—use what you already have:

| Provider | Models | Notes |
| --- | --- | --- |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus/Sonnet/Haiku | Recommended for best results |
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo | Function calling, streaming |
| Google Gemini | Gemini 1.5 Pro, Gemini 1.5 Flash | Multi-modal support |
| Ollama | Llama 3, Mistral, Qwen, any local model | Free, runs locally |
| OpenRouter | 100+ models via single API | Access any model |

2. Test Your Configuration

The setup wizard lets you:

  • Test API keys before saving
  • Verify connectivity to your chosen provider
  • Switch providers anytime without losing data

[Screenshot: LLM priority settings for configuring provider priority and automatic failover]

3. Complete Setup

Once configured, the MCP tools become available to all connected IDEs.

[Screenshot: Recallium dashboard after setup, showing system stats, recent activity, and quick access to all features]

Free Local Setup (No API Keys Required)

Want to run completely free and private?

LLM: Ollama (local models like Llama 3, Mistral)
Embeddings: GTE-Large (built-in, runs locally)

Just select Ollama in the setup wizard and ensure Ollama is running locally.

Configuration

The recallium.env file contains all configuration options. Key settings:

Database (Required)

POSTGRES_PASSWORD=recallium_password  # Change in production!

LLM Provider (5 Options)

Configure via Setup Wizard at http://localhost:9001

API keys are securely vaulted inside your Docker container—never stored in plain text files.

| Provider | Models | Notes |
| --- | --- | --- |
| Ollama | Llama 3, Mistral, Qwen, any local model | Free, local - Default |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus/Sonnet/Haiku | Recommended for best results |
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo | Function calling, streaming |
| Google Gemini | Gemini 1.5 Pro, Gemini 1.5 Flash | Multi-modal support |
| OpenRouter | 100+ models via single API | Access any model |

The Setup Wizard lets you:

  • Test API keys before saving
  • Switch providers anytime without losing data
  • Configure failover providers for reliability

Ports (Host Mappings)

HOST_UI_PORT=9001        # Access UI on your machine
HOST_API_PORT=8001       # Access MCP API on your machine
HOST_POSTGRES_PORT=5433  # Access PostgreSQL on your machine

See recallium.env for all available options with detailed inline documentation.

Advanced Configuration

All configuration is managed through recallium.env (single source of truth). Common customizations:

Change LLM Model

# Edit install/recallium.env
LLM_MODEL=llama3.2:3b           # Smaller, faster
# or
LLM_MODEL=gpt-oss:20b           # Larger, more accurate

Adjust Document Chunk Size

# Edit install/recallium.env
CHUNK_SIZE_TOKENS=400           # Safe with 22% margin (recommended)
# or
CHUNK_SIZE_TOKENS=450           # Moderate margin

Change Processing Settings

# Edit install/recallium.env
BATCH_SIZE=20                   # Process more memories at once
MAX_CONCURRENT=10               # More parallel operations
QUEUE_WORKERS=3                 # More background workers

Switch LLM Providers

Use the Setup Wizard at http://localhost:9001 → Providers

  • Add new provider credentials securely (vaulted, not in plain text)
  • Test connectivity before saving
  • Switch active provider instantly
  • Configure fallback providers

No container restart required—changes take effect immediately.

Disable Features

# Edit install/recallium.env
ENABLE_PATTERN_MATCHING=false   # Skip pattern detection
ENABLE_UNIFIED_INSIGHTS=false   # Disable insights processing
REAL_TIME_PROCESSING=false      # Queue for batch processing

After editing recallium.env, restart the container:

cd install
docker compose down
docker compose --env-file recallium.env up -d
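After the restart you can confirm the container actually picked up the new values. A sketch, assuming the container is named recallium; it degrades to a message when Docker or the container is unavailable:

```shell
# Read the setting back from inside the running container.
if command -v docker >/dev/null 2>&1 && [ -n "$(docker ps -q -f name=recallium 2>/dev/null)" ]; then
  restart_check=$(docker exec recallium env | grep '^CHUNK_SIZE_TOKENS=' || echo "CHUNK_SIZE_TOKENS not set")
else
  restart_check="recallium container not running"
fi
echo "$restart_check"
```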

IDE Integration

Recallium supports two connection methods:

HTTP-Capable IDEs (Recommended - No npm Client)

These IDEs connect directly to http://localhost:8001/mcp:
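As a sketch of what the entry usually looks like for HTTP transport (the exact file name and schema vary by IDE; the mcpServers shape with a url field follows the convention Cursor-style configs use, so check your IDE's MCP documentation):

```json
{
  "mcpServers": {
    "recallium": {
      "url": "http://localhost:8001/mcp"
    }
  }
}
```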

Command-Only IDEs (Requires npm Client)

These IDEs only support command-based connections and need the npm client as a stdio→HTTP bridge:

First, install the npm client:

npm install -g @recallium/mcp-client

Note: If you changed HOST_API_PORT in recallium.env from the default 8001, update the URL in your IDE config accordingly.

Management Commands

# View logs
docker compose --env-file recallium.env logs -f

# Stop
docker compose down

# Restart
docker compose --env-file recallium.env restart

# Update to latest version
docker compose --env-file recallium.env pull
docker compose --env-file recallium.env up -d

# Reset everything (deletes all data!)
docker compose down -v
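Before the destructive down -v, it is worth snapshotting the data volume. A sketch, assuming the default volume name recallium-v1 from recallium.env; it prints a message instead of failing when Docker is unavailable:

```shell
VOLUME=recallium-v1   # match VOLUME_NAME in recallium.env
if command -v docker >/dev/null 2>&1; then
  # Tar the volume's contents into the current directory via a throwaway container.
  docker run --rm -v "${VOLUME}:/data" -v "$(pwd):/backup" alpine \
    tar czf /backup/recallium-backup.tar.gz -C /data . \
    && backup_msg="backup written to recallium-backup.tar.gz" \
    || backup_msg="backup failed (is the volume present?)"
else
  backup_msg="docker not available; skipping backup"
fi
echo "$backup_msg"
```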

Tips

Add Rules for Automatic Tool Usage

If you don't want to manually invoke Recallium tools every time, you can define rules in your IDE to automatically use Recallium for specific tasks:

  • For Cursor: Cursor Settings > Rules section
  • For Claude Code: In .clauderc or CLAUDE.md file
  • For Windsurf: In .windsurfrules file

Example Rule:

Always use recallium MCP tools when working with this codebase. Specifically:
- Use store_memory to capture implementation decisions, learnings, and code context
- Use search_memories to find past decisions and context before making changes
- Use get_rules at the start of each session to load project behavioral guidelines
- Link memories to files using related_files parameter for better code searchability

From then on, your AI assistant will automatically use Recallium's persistent memory without you having to explicitly request it.

Testing with MCP Inspector

You can test and debug your Recallium MCP connection using the official MCP Inspector:

# Test HTTP connection
npx @modelcontextprotocol/inspector http://localhost:8001/mcp

# Test npm client (stdio→HTTP bridge)
npx @modelcontextprotocol/inspector npx -y recallium-mcp

The inspector provides a web interface to:

  • List all available tools
  • Test tool execution with custom parameters
  • View request/response JSON
  • Debug connection issues

Troubleshooting

Next Steps

  1. Complete Setup Wizard: Visit http://localhost:9001 to configure your LLM provider
  2. Configure IDE: Follow the IDE Integration guides above
  3. Start Using: Your AI now has persistent memory across sessions!

Your First Commands (Try These in Your IDE!)

Once your IDE is connected, try these commands with your AI assistant:

"recallium"
→ Magic summon: loads all your project context in one call

"Store a memory: We decided to use PostgreSQL because..."
→ Saves your decision with auto-tagging

"Search my memories about authentication"
→ Finds past decisions, patterns, learnings

"What was I working on last week?"
→ Session recap with recent activity

"Get insights about my database patterns"
→ Cross-project pattern analysis

What You Can Do Now

| Capability | Example |
| --- | --- |
| Store memories | Decisions, patterns, learnings automatically preserved |
| Search across projects | Find past context instantly |
| Get insights | Pattern analysis across your work |
| Link projects | Share knowledge between related projects |
| Upload documents | PDFs, docs become searchable knowledge |
| Manage tasks | Track TODOs with linked memories |
| Structured thinking | Document complex problem-solving |