# Troubleshooting

## Installation Issues
### "License validation failed"

- Check that your license key is correct (it starts with `klx_lic_`)
- Ensure you have internet access to reach kulvex.ai
- If you changed hardware, your license may need re-binding; contact support
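The first two checks can be scripted. A minimal sketch; `check_license_format` is an illustrative helper name, and only the `klx_lic_` prefix and the kulvex.ai host come from this page:

```shell
# Sketch of a pre-flight check. check_license_format is illustrative;
# only the klx_lic_ prefix is documented above.
check_license_format() {
  case "$1" in
    klx_lic_*) echo "format ok" ;;
    *)         echo "bad format: keys start with klx_lic_" ;;
  esac
}

check_license_format "$KULVEX_LICENSE_KEY"

# Reachability check for the license server (host taken from this page)
curl -sSf -o /dev/null --max-time 10 https://kulvex.ai \
  && echo "kulvex.ai reachable" \
  || echo "kulvex.ai unreachable: check firewall/proxy settings"
```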
### "Docker build failed"

```bash
# Check that Docker is running
docker info

# Check the Docker Compose version (v2+ required)
docker compose version

# View the build logs
cd ~/.kulvex && docker compose build --no-cache 2>&1 | tail -50
```

### "NVIDIA Container Toolkit not found"
```bash
# Check that nvidia-smi works
nvidia-smi

# Check Docker GPU access
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu24.04 nvidia-smi
```

If `docker run --gpus all` fails, reinstall the NVIDIA Container Toolkit:
```bash
# Ubuntu/Debian
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

### Model download interrupted
Re-run the installer; it automatically resumes interrupted downloads:

```bash
KULVEX_LICENSE_KEY=klx_lic_xxx bash ~/.kulvex/install.sh
```

### Port already in use
```bash
# Check what's using ports 9100/9200
ss -tlnp | grep -E '9100|9200'

# Stop the conflicting services or change the ports in .env
```

## Runtime Issues
### Mnemo not responding

```bash
# Check container status
docker compose ps

# Check llama-server logs
docker compose logs kulvex-llama --tail 50

# Check API logs
docker compose logs kulvex-api --tail 50

# Restart llama-server
docker compose restart kulvex-llama
```

### Slow inference
- Check GPU utilization with `nvidia-smi`
- Ensure all layers are offloaded: look for `LLAMA_GPU_LAYERS=999` in the logs
- Check the KV cache type: `q8_0` is faster than `f16`
- Flash attention should be enabled: `LLAMA_FLASH_ATTN=true`
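The settings checks above can be bundled into a small helper. A sketch, assuming the settings appear verbatim in the llama-server logs; `check_llama_log` and the temp file path are illustrative:

```shell
# Flag common slow-inference misconfigurations in a saved log file.
# check_llama_log is an illustrative helper; it assumes the env settings
# above are echoed into the logs.
check_llama_log() {
  log="$1"
  if grep -q 'LLAMA_GPU_LAYERS=999' "$log"; then
    echo "gpu layers: fully offloaded"
  else
    echo "gpu layers: NOT fully offloaded"
  fi
  if grep -q 'LLAMA_FLASH_ATTN=true' "$log"; then
    echo "flash attention: enabled"
  else
    echo "flash attention: disabled"
  fi
}

# Usage: capture the logs first, then scan them
# docker compose logs kulvex-llama > /tmp/llama.log
# check_llama_log /tmp/llama.log
```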
### Out of VRAM

```bash
# Check VRAM usage
nvidia-smi

# The model might be too large for your GPU:
# edit .env and pick a smaller model or a more aggressive quant
```

### Database connection issues
```bash
# Check MongoDB
docker compose logs kulvex-mongo --tail 20

# Test the connection
docker exec kulvex-mongo mongosh --eval "db.adminCommand('ping')"
```

### Voice not working
- Check STT status: `curl http://localhost:9100/api/voice/status`
- Check that the mnemo:voice node is reachable (if configured)
- Check that the Deepgram API key is set (Settings > API Keys)
- The Whisper CPU fallback should always work; check the API logs for errors
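A quick reachability probe for the checklist above. A sketch: the endpoint URL comes from this page, but the response format is not documented here, so only HTTP reachability is checked (`voice_probe` is an illustrative name):

```shell
# Probe the voice status endpoint (sketch; only reachability is checked,
# since the response body format is not documented on this page)
voice_probe() {
  url="${1:-http://localhost:9100/api/voice/status}"
  if curl -sSf -o /dev/null --max-time 5 "$url"; then
    echo "reachable: $url"
  else
    echo "unreachable: $url (check kulvex-api logs and that port 9100 is open)"
  fi
}

voice_probe
```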
## Logs

```bash
# All service logs
docker compose logs --tail 100

# Specific service
docker compose logs kulvex-api --tail 100 -f  # follow

# Application logs
cat ~/.kulvex/logs/kulvex.log
```

## Reset
### Restart all services

```bash
cd ~/.kulvex && docker compose restart
```

### Full rebuild

```bash
cd ~/.kulvex && docker compose down && docker compose up -d --build
```

### Factory reset (keeps models)
```bash
cd ~/.kulvex
docker compose down -v  # removes database volumes

# Regenerate .env
KULVEX_LICENSE_KEY=klx_lic_xxx bash install.sh
```

## Getting Help
- Discord: discord.gg/kulvex
- Email: [email protected]
- Documentation: docs.kulvex.ai
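When contacting support, it helps to attach the usual diagnostic output in one file. A sketch; `collect_diagnostics` is an illustrative helper, not part of the installer, and it only runs commands already shown on this page:

```shell
# Bundle the usual diagnostics into one file before asking for help.
# collect_diagnostics is an illustrative helper; paths and service
# names are the ones used on this page.
collect_diagnostics() {
  out="${1:-/tmp/kulvex-diagnostics.txt}"
  {
    echo "== docker compose ps =="
    (cd ~/.kulvex && docker compose ps) 2>&1
    echo "== last 100 log lines =="
    (cd ~/.kulvex && docker compose logs --tail 100) 2>&1
    echo "== nvidia-smi =="
    nvidia-smi 2>&1
  } > "$out"
  echo "wrote $out"
}

collect_diagnostics
```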