# Installation
KULVEX installs via a single command that handles all dependencies automatically.
## Prerequisites
- Linux (Ubuntu 22.04+, Fedora 38+, Debian 12+) or macOS (Sonoma+, Apple Silicon)
- Docker (auto-installed if missing)
- NVIDIA GPU recommended (auto-detected) — or use cloud-only mode
- 30 GB free disk space minimum
- A valid license key from kulvex.ai
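Before running the installer, you can sanity-check the two prerequisites that are easiest to get wrong. This is an optional pre-flight sketch, not part of the official installer; the 30 GB threshold mirrors the documented minimum.

```shell
# Pre-flight check: free disk space and Docker availability.
REQUIRED_GB=30

# Available space (in GB) on the filesystem holding $HOME
free_gb=$(df -Pk "$HOME" | awk 'NR==2 {printf "%d", $4/1024/1024}')
if [ "$free_gb" -ge "$REQUIRED_GB" ]; then
  echo "disk: OK (${free_gb} GB free)"
else
  echo "disk: need ${REQUIRED_GB} GB, only ${free_gb} GB free"
fi

# Docker is optional here -- the installer can install it for you
if command -v docker >/dev/null 2>&1; then
  echo "docker: found"
else
  echo "docker: missing (installer will attempt to install it)"
fi
```

Either line reporting a problem is informational; the installer handles missing Docker itself, but it cannot conjure disk space.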
## Install
```bash
KULVEX_LICENSE_KEY=klx_lic_xxx curl -fsSL kulvex.ai/install | bash
```

Replace `klx_lic_xxx` with your actual license key.
## What the installer does
- Validates your license against kulvex.ai
- Detects hardware — GPUs, VRAM, RAM, CPU cores
- Installs prerequisites — Docker, Docker Compose, NVIDIA Container Toolkit
- Clones the repository to `~/.kulvex`
- Auto-selects the best model for your GPU (abliterated, from HuggingFace)
- Downloads the model (16-30 GB depending on your VRAM)
- Generates `.env` with JWT secrets, database URLs, and model config
- Builds and starts Docker containers
- Runs health checks and opens the dashboard
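The model-selection step above can be sketched as follows. This is an illustrative assumption about how VRAM-based selection typically works; the tier names and thresholds are hypothetical, not KULVEX's actual mapping.

```shell
# Sketch: pick a model size tier from total VRAM (thresholds are illustrative).
if command -v nvidia-smi >/dev/null 2>&1; then
  # First GPU's total memory in MB
  vram_mb=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n1)
else
  vram_mb=0   # no NVIDIA GPU detected; fall back to cloud-only mode
fi

if   [ "$vram_mb" -ge 40000 ]; then tier="largest local model"
elif [ "$vram_mb" -ge 20000 ]; then tier="mid-size local model"
elif [ "$vram_mb" -ge 10000 ]; then tier="small local model"
else tier="cloud-only"
fi
echo "selected tier: $tier"
```

This also explains the 16-30 GB download range: more VRAM means a larger model file.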
## Resumable downloads
If the model download is interrupted (network issues, etc.), just re-run the installer. It will automatically resume from where it left off.
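The resume mechanism comes down to fetching only the bytes that are missing, the same idea behind HTTP range requests (curl's `-C -`). A minimal local simulation, with illustrative file names and no network required:

```shell
# Simulate an interrupted download, then resume by appending only the
# missing bytes -- what `curl -C -` does over HTTP.
printf 'AAAABBBBCCCC' > remote.bin        # stand-in for the model file
head -c 6 remote.bin > partial.bin        # simulate an interrupted download
have=$(wc -c < partial.bin)               # bytes already on disk
tail -c +"$((have + 1))" remote.bin >> partial.bin   # fetch only the rest
cmp -s remote.bin partial.bin && echo "resume complete"
```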
```bash
KULVEX_LICENSE_KEY=klx_lic_xxx bash ~/.kulvex/install.sh
```

## Installation options
```bash
# Install with defaults
KULVEX_LICENSE_KEY=klx_lic_xxx curl -fsSL kulvex.ai/install | bash

# Don't open browser after install
KULVEX_LICENSE_KEY=klx_lic_xxx bash install.sh --no-browser

# Custom install directory
KULVEX_HOME=/opt/kulvex KULVEX_LICENSE_KEY=klx_lic_xxx bash install.sh

# Use a HuggingFace mirror
HF_MIRROR=https://hf-mirror.com KULVEX_LICENSE_KEY=klx_lic_xxx bash install.sh
```

## Windows (WSL 2)
Native Windows is not supported. Use WSL 2:
- Install WSL 2: `wsl --install`
- Install Docker Desktop with the WSL 2 backend
- Open a WSL terminal and run the install command above
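Before running the installer inside WSL, it is worth confirming that Docker Desktop's WSL 2 integration is actually active. A small check, assuming Docker Desktop is running on the Windows side:

```shell
# Is the Docker daemon reachable from this WSL distro?
if docker info >/dev/null 2>&1; then
  msg="docker reachable from WSL"
else
  msg="docker not reachable; enable WSL integration in Docker Desktop settings"
fi
echo "$msg"
```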
## After installation
Open http://localhost:9200 to access the KULVEX dashboard.
- Create your admin account
- Configure API keys in Settings (Anthropic for Claude, Deepgram for voice)
- Start chatting with Mnemo
## Services
| Service | Port | Description |
|---|---|---|
| Web dashboard | 9200 | Next.js frontend |
| API | 9100 | FastAPI backend + Socket.IO |
| llama-server | internal | Mnemo inference (not exposed) |
| MongoDB | internal | Database (not exposed) |
| ChromaDB | internal | Vector store for RAG (not exposed) |
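Only the first two services expose ports on the host; llama-server, MongoDB, and ChromaDB are reachable solely from inside the Docker network. A quick check that the exposed ports are answering (a troubleshooting sketch, not an official health-check command):

```shell
# Probe the two host-exposed ports from the table above.
for port in 9200 9100; do
  if curl -fsS -o /dev/null --max-time 2 "http://localhost:$port"; then
    echo "port $port: up"
  else
    echo "port $port: not responding (is the stack running?)"
  fi
done
```

If both ports report "not responding", check container status with `docker ps` before digging deeper.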