## Prerequisites
Before you start, make sure your environment meets these requirements.

### Hardware

- At least one node with 16 GB RAM (32 GB recommended for running local models)
- A GPU is optional but strongly recommended for model inference — CPU inference is significantly slower
- Stable local network connectivity between all nodes

### Operating system

- Linux (Ubuntu 22.04 LTS or Debian 12 recommended)
- macOS is supported for client-side tooling but not for hosting nodes

### Software

- Docker Engine 24.0 or later
- Docker Compose v2
- `curl` and `git`
## Setup flow
### Install Docker
If Docker is not already installed, run the official install script. Log out and back in so the group change takes effect, then confirm Docker is running.
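A sketch of those steps, assuming Docker's convenience script from get.docker.com (the usual non-interactive route on Ubuntu/Debian):

```shell
# Download and run Docker's official convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add your user to the docker group so you can run docker without sudo
# (this is the group change that requires logging out and back in)
sudo usermod -aG docker "$USER"

# After logging back in, confirm the daemon is reachable
docker info
```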
### Configure your environment
Copy the example environment file and fill in your values. Open `.env` in your editor and set at minimum:

- `JARVIS_HOST`: the hostname or IP of your primary node (e.g., `your-jarvis-host`)
- `OLLAMA_HOST`: where your Ollama instance runs (e.g., `http://your-jarvis-host:11434`)
- `LITELLM_MASTER_KEY`: a strong secret key for your LiteLLM gateway
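The copy step, assuming the repository ships a `.env.example` template (the template filename is an assumption), plus one common way to generate a strong master key:

```shell
# Copy the template (filename assumed; use whatever the repo provides)
cp .env.example .env

# Generate a strong random value to use as LITELLM_MASTER_KEY
openssl rand -hex 32
```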
### Connect to the brain mesh
Start the core Jarvis services using Docker Compose. This starts LiteLLM, the Ollama proxy, and the agent orchestration layer; the first run may take a few minutes as Docker pulls the required images.
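With a `docker-compose.yml` in the checkout directory (assumed), bringing the stack up in the background looks like:

```shell
# Start all services defined in the compose file, detached
docker compose up -d

# Check container status; services should reach "running" (or "healthy")
docker compose ps
```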
### Verify node connectivity
Check that your node is reachable and reporting health. You should see a JSON response indicating the gateway is up. If you have multiple nodes, repeat this for each one.
If a node fails to respond, check that Docker is running on that machine and that port 4000 is not blocked by a firewall.
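A minimal reachability check, assuming the LiteLLM gateway listens on port 4000 and that `your-jarvis-host` matches the value in your `.env`; the exact health route depends on your gateway version, so adjust the path if yours differs:

```shell
# Query the gateway's readiness endpoint (path is an assumption;
# expect a JSON body describing the gateway's status)
curl -s http://your-jarvis-host:4000/health/readiness
```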
### Pull your models
Use Ollama to download the models you want to serve. You can pull models on any node in your mesh; LiteLLM will route requests to whichever node has the model available.
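For example, assuming a standard Ollama install on the node (the model name here is a placeholder; pull whatever you plan to serve):

```shell
# Download a model into the local Ollama store
ollama pull llama3.1

# Confirm which models this node can serve
ollama list
```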
## Next steps
- **Run your first agent**: Delegate a task to Paperclip or Hermes now that your mesh is running.
- **Explore the mesh**: Learn how nodes, models, and inference routing work together.