# KULVEX
Autonomous Intelligence Platform — self-hosted AI that runs on your hardware, controls your home, and never sends your data to the cloud.
## What is KULVEX?
KULVEX is a self-hosted AI platform that combines:
- Mnemo — Local LLM inference on your GPU via llama.cpp (abliterated models, no censorship)
- Claude API — Cloud reasoning for complex tasks (optional, user-toggled)
- 17 Domain Agents — Weather, home automation, solar, security, messaging, and more
- KCode — AI-powered coding assistant that runs 100% locally
- KULVEX Home — Smart home control (Zigbee, Z-Wave, WiFi, Matter)
- Voice Interface — STT + TTS with intent detection and natural conversation
- 9 Messaging Channels — Telegram, Signal, WhatsApp, Discord, and more
## Key Principles
- Privacy first — Your data stays on your machine. Local inference by default.
- Abliterated models — No artificial censorship. Your AI, your rules.
- Hardware-adaptive — Auto-detects your GPU and selects the best model.
- Self-evolving — Auto-updates, self-healing, circuit breakers.
## Hardware Tiers
| Tier | GPU | RAM | What You Get |
|---|---|---|---|
| Recommended | RTX 4090 24GB+ | 64GB | Full local inference, all features |
| Standard | RTX 3060 12GB+ | 32GB | Core models locally |
| Minimum | Any 8GB GPU | 16GB | Limited local + cloud fallback |
| Cloud-only | No GPU | 8GB | Claude API for everything |
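As an illustration of how tier selection could work, here is a minimal sketch that maps detected VRAM to the tiers in the table above. This is an assumption about the installer's behavior, not its actual code; the `detect_tier` helper and the MB thresholds are hypothetical, and the detection shown covers NVIDIA GPUs only.

```shell
# Hypothetical sketch of VRAM-to-tier mapping; thresholds follow the table above.
detect_tier() {
  vram_mb="$1"
  if [ -z "$vram_mb" ]; then
    echo "Cloud-only"     # no GPU detected
  elif [ "$vram_mb" -ge 24000 ]; then
    echo "Recommended"    # full local inference, all features
  elif [ "$vram_mb" -ge 12000 ]; then
    echo "Standard"       # core models locally
  elif [ "$vram_mb" -ge 8000 ]; then
    echo "Minimum"        # limited local + cloud fallback
  else
    echo "Cloud-only"     # GPU too small for local models
  fi
}

# NVIDIA-only detection; yields empty output when nvidia-smi is absent.
vram_mb=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits 2>/dev/null | head -n1)
echo "Selected tier: $(detect_tier "$vram_mb")"
```

An RTX 4090 reports roughly 24564 MB, landing in the Recommended tier; a machine with no usable GPU falls through to Cloud-only.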
## Quick Start
```shell
curl -fsSL kulvex.ai/install | KULVEX_LICENSE_KEY=klx_lic_xxx bash
```

The installer handles everything: Docker, GPU detection, model download, and service startup.
Open http://localhost:9200 when done.
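Startup can take a moment after the installer exits. A minimal sketch for polling the web UI until it responds, assuming the root path answers once the service is ready (the real service may expose a dedicated health endpoint; `wait_for_url` is a hypothetical helper):

```shell
# Poll a URL until it answers, up to N attempts one second apart.
wait_for_url() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      return 0    # got a 2xx/3xx response
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1        # never became reachable
}

wait_for_url http://localhost:9200 3 && echo "KULVEX UI is up" || echo "UI not reachable yet"
```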