Troubleshooting

Installation Issues

"License validation failed"

  • Check that your license key is correct (it starts with klx_lic_)
  • Ensure you have internet access to reach kulvex.ai
  • If you changed hardware, your license may need re-binding — contact support
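The first two checks above can be run locally before contacting support. This is a minimal sketch: the klx_lic_ prefix comes from this guide, the placeholder key must be replaced with your real one, and reaching kulvex.ai over HTTPS is an assumption about how validation is routed.

```shell
# Verify the license key has the documented klx_lic_ prefix
KEY="klx_lic_xxxxxxxx"   # placeholder, substitute your actual key
case "$KEY" in
  klx_lic_*) echo "key prefix OK" ;;
  *)         echo "key should start with klx_lic_" >&2 ;;
esac

# Confirm the license server is reachable (assumes validation goes through kulvex.ai)
curl -sI --max-time 5 https://kulvex.ai >/dev/null \
  && echo "kulvex.ai reachable" \
  || echo "cannot reach kulvex.ai, check DNS/firewall" >&2
```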

"Docker build failed"

# Check Docker is running
docker info
 
# Check Docker Compose version (need v2+)
docker compose version
 
# View build logs
cd ~/.kulvex && docker compose build --no-cache 2>&1 | tail -50
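Build failures are often environmental rather than image-specific; low disk space is a common culprit. Two safe checks (the /var/lib/docker path is the default Docker data root and may differ on your distro):

```shell
# Free space where Docker stores images and layers (path may differ on your distro)
df -h /var/lib/docker 2>/dev/null || df -h /

# Docker's own view of image/volume/build-cache usage
docker system df 2>/dev/null || echo "docker not available"
```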

"NVIDIA Container Toolkit not found"

# Check if nvidia-smi works
nvidia-smi
 
# Check Docker GPU access
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu24.04 nvidia-smi

If docker run --gpus all fails, reinstall the NVIDIA Container Toolkit:

# Ubuntu/Debian
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Model download interrupted

Re-run the installer — it automatically resumes interrupted downloads:

KULVEX_LICENSE_KEY=klx_lic_xxx bash ~/.kulvex/install.sh

Port already in use

# Check what's using ports 9100/9200
ss -tlnp | grep -E '9100|9200'
 
# Stop conflicting services or change ports in .env
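If another service owns 9100 or 9200, the fix is a port override in ~/.kulvex/.env. The variable names below are illustrative guesses, not confirmed names; check your generated .env for the real ones, then apply with docker compose up -d.

```shell
# ~/.kulvex/.env (illustrative variable names; confirm against your file)
KULVEX_API_PORT=9101     # hypothetical override, default 9100
KULVEX_LLAMA_PORT=9201   # hypothetical override, default 9200
```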

Runtime Issues

Mnemo not responding

# Check container status
docker compose ps
 
# Check llama-server logs
docker compose logs kulvex-llama --tail 50
 
# Check API logs
docker compose logs kulvex-api --tail 50
 
# Restart llama-server
docker compose restart kulvex-llama

Slow inference

  • Check GPU utilization: nvidia-smi
  • Ensure all layers are offloaded: look for LLAMA_GPU_LAYERS=999 in logs
  • Check KV cache type: q8_0 is faster than f16
  • Flash attention should be enabled: LLAMA_FLASH_ATTN=true
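The offload and flash-attention settings above can be checked without restarting anything by grepping the llama-server logs. The exact log wording varies by llama.cpp build, so treat the pattern as a starting point:

```shell
# Look for GPU-offload and flash-attention evidence in the serving logs
docker compose logs kulvex-llama 2>/dev/null \
  | grep -iE 'gpu_layers|offload|flash_attn' \
  | tail -20
```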

Out of VRAM

# Check VRAM usage
nvidia-smi
 
# The model might be too large for your GPU
# Edit .env and switch to a smaller model or a more aggressive quant
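For a per-process view of who is holding VRAM, nvidia-smi can emit CSV. The command is guarded so it degrades gracefully on machines without the NVIDIA tools; field names can be confirmed with nvidia-smi --help-query-compute-apps.

```shell
# Per-process VRAM usage, or a clear message if nvidia-smi is absent
command -v nvidia-smi >/dev/null 2>&1 \
  && nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv \
  || echo "nvidia-smi not on PATH"
```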

Database connection issues

# Check MongoDB
docker compose logs kulvex-mongo --tail 20
 
# Test connection
docker exec kulvex-mongo mongosh --eval "db.adminCommand('ping')"

Voice not working

  1. Check STT status: curl http://localhost:9100/api/voice/status
  2. Check if mnemo:voice node is reachable (if configured)
  3. Check if Deepgram API key is set (Settings > API Keys)
  4. Whisper CPU fallback should always work — check API logs for errors
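Steps 1 and 4 above can be combined into a single pass. The status endpoint is the one from step 1; the grep pattern is only a heuristic for STT-related log lines:

```shell
# Probe the STT status endpoint, with a short timeout
curl -s --max-time 3 http://localhost:9100/api/voice/status \
  || echo "voice status endpoint unreachable on :9100"

# Scan recent API logs for STT-related lines
docker compose logs kulvex-api --tail 100 2>/dev/null \
  | grep -iE 'whisper|deepgram|stt' || true
```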

Logs

# All service logs
docker compose logs --tail 100
 
# Specific service
docker compose logs kulvex-api --tail 100 -f  # follow
 
# Application logs
cat ~/.kulvex/logs/kulvex.log
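When the failing service isn't obvious, filtering everything down to error-level lines is usually the fastest triage. The pattern below is a heuristic; extend it to match your log format:

```shell
# Recent error-level lines across every service
docker compose logs --tail 500 2>/dev/null \
  | grep -iE 'error|fatal|exception|traceback' \
  | tail -20

# Same filter over the application log file
grep -iE 'error|fatal|exception|traceback' ~/.kulvex/logs/kulvex.log 2>/dev/null \
  | tail -20
```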

Reset

Restart all services

cd ~/.kulvex && docker compose restart

Full rebuild

cd ~/.kulvex && docker compose down && docker compose up -d --build

Factory reset (keeps models)

cd ~/.kulvex
docker compose down -v  # removes database volumes
# Regenerate .env
KULVEX_LICENSE_KEY=klx_lic_xxx bash install.sh

Getting Help