Getting Started

Installation

KULVEX installs via a single command that handles all dependencies automatically.

Prerequisites

  • Linux (Ubuntu 22.04+, Fedora 38+, Debian 12+) or macOS (Sonoma+, Apple Silicon)
  • Docker (auto-installed if missing)
  • NVIDIA GPU recommended (auto-detected) — or use cloud-only mode
  • 30 GB free disk space minimum
  • A valid license key from kulvex.ai
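Before running the installer, you can sanity-check a machine against the list above. This sketch only mirrors the documented requirements (30 GB free disk, Docker optional since the installer adds it, GPU optional); it is not part of the official installer:

```shell
# Rough preflight check mirroring the prerequisites above.

# Free space (in GB) on the filesystem holding $HOME
free_gb=$(df -Pk "$HOME" | awk 'NR==2 {printf "%d", $4 / 1024 / 1024}')
echo "Free disk space: ${free_gb} GB (need >= 30)"

# Docker is auto-installed if missing, so this is informational only
if command -v docker >/dev/null 2>&1; then
    echo "Docker: found"
else
    echo "Docker: missing (installer will add it)"
fi

# An NVIDIA GPU is optional; cloud-only mode works without one
if command -v nvidia-smi >/dev/null 2>&1; then
    echo "GPU: $(nvidia-smi --query-gpu=name --format=csv,noheader | head -1)"
else
    echo "GPU: none detected (cloud-only mode available)"
fi
```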

Install

curl -fsSL https://kulvex.ai/install | KULVEX_LICENSE_KEY=klx_lic_xxx bash

Replace klx_lic_xxx with your actual license key. The variable is set on the bash side of the pipe so that the installer process inherits it (a prefix on the curl side would not be passed through).

What the installer does

  1. Validates your license against kulvex.ai
  2. Detects hardware — GPUs, VRAM, RAM, CPU cores
  3. Installs prerequisites — Docker, Docker Compose, NVIDIA Container Toolkit
  4. Clones the repository to ~/.kulvex
  5. Auto-selects the best model for your GPU (abliterated, from HuggingFace)
  6. Downloads the model (16-30 GB depending on your VRAM)
  7. Generates .env with JWT secrets, database URLs, model config
  8. Builds and starts Docker containers
  9. Runs health checks and opens the dashboard
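Step 5's VRAM-based model selection can be pictured roughly as below. The tier names, thresholds, and download sizes are illustrative assumptions for this sketch; only the 16-30 GB overall range comes from this guide:

```shell
# Illustrative sketch of step 5: pick a model tier from detected VRAM.
# A real script would read VRAM via something like:
#   nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
vram_mb=24576   # hardcoded here for the sketch (24 GB card)

# Thresholds and tier names below are invented for illustration
if [ "$vram_mb" -ge 40000 ]; then
    tier="large (~30 GB download)"
elif [ "$vram_mb" -ge 20000 ]; then
    tier="medium (~24 GB download)"
else
    tier="small (~16 GB download)"
fi
echo "Selected model tier: $tier"
```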

Resumable downloads

If the model download is interrupted (network issues, etc.), just re-run the installer. It will automatically resume from where it left off.

KULVEX_LICENSE_KEY=klx_lic_xxx bash ~/.kulvex/install.sh
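The mechanics behind resuming are worth knowing: a resumable fetch continues from the bytes already on disk (curl's `-C -` does exactly this over HTTP range requests). A self-contained sketch of the idea, simulated with local files rather than a real download:

```shell
# Resume logic in miniature: append only the bytes not yet present.
# Local files stand in for the remote model and the partial download.
printf 'abcdefghij' > remote.bin     # stand-in for the full model file
printf 'abcde' > model.part          # interrupted partial download

have=$(wc -c < model.part)           # bytes already on disk
tail -c +"$((have + 1))" remote.bin >> model.part   # continue from offset
cmp -s remote.bin model.part && echo "resume complete"
# prints: resume complete
```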

Installation options

# Install with defaults
curl -fsSL https://kulvex.ai/install | KULVEX_LICENSE_KEY=klx_lic_xxx bash
 
# Don't open browser after install
KULVEX_LICENSE_KEY=klx_lic_xxx bash install.sh --no-browser
 
# Custom install directory
KULVEX_HOME=/opt/kulvex KULVEX_LICENSE_KEY=klx_lic_xxx bash install.sh
 
# Use a HuggingFace mirror
HF_MIRROR=https://hf-mirror.com KULVEX_LICENSE_KEY=klx_lic_xxx bash install.sh
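The `KULVEX_HOME` override presumably resolves with the usual default-fallback pattern; a minimal sketch (the resolution logic is an assumption on my part, only the `~/.kulvex` default comes from this guide):

```shell
# Use KULVEX_HOME if the caller exported it, otherwise fall back to
# the documented default install directory.
KULVEX_HOME="${KULVEX_HOME:-$HOME/.kulvex}"
echo "Install directory: $KULVEX_HOME"
```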

Windows (WSL 2)

Native Windows is not supported. Use WSL 2:

  1. Install WSL 2: wsl --install
  2. Install Docker Desktop with WSL 2 backend
  3. Open a WSL terminal and run the install command above

After installation

Open http://localhost:9200 to access the KULVEX dashboard.

  1. Create your admin account
  2. Configure API keys in Settings (Anthropic for Claude, Deepgram for voice)
  3. Start chatting with Mnemo
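A quick way to confirm the two exposed services came up: probe the public ports. The port numbers come from this guide; probing the root path over plain HTTP (rather than a dedicated health endpoint) is an assumption:

```shell
# Smoke-test the two externally exposed services after install.
check() {
    if curl -fsS -m 2 "$1" >/dev/null 2>&1; then
        echo "$2: up"
    else
        echo "$2: not reachable"
    fi
}
check http://localhost:9200 "Web dashboard"
check http://localhost:9100 "API"
```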

Services

  Service         Port      Description
  Web dashboard   9200      Next.js frontend
  API             9100      FastAPI backend + Socket.IO
  llama-server    internal  Mnemo inference (not exposed)
  MongoDB         internal  Database (not exposed)
  ChromaDB        internal  Vector store for RAG (not exposed)