This guide walks you through setting up Jarvis from scratch — from hardware requirements to verifying that your models are ready to serve requests.

Prerequisites

Before you start, make sure your environment meets these requirements.

Hardware
  • At least one node with 16 GB RAM (32 GB recommended for running local models)
  • A GPU is optional but strongly recommended for model inference — CPU inference is significantly slower
  • Stable local network connectivity between all nodes
Operating system
  • Linux (Ubuntu 22.04 LTS or Debian 12 recommended)
  • macOS is supported for client-side tooling but not for hosting nodes
Software
  • Docker Engine 24.0 or later
  • Docker Compose v2
  • curl and git
Jarvis relies heavily on Docker for model serving and agent containers. Make sure Docker is running before you proceed — many setup steps will silently fail without it.
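You can sanity-check the software prerequisites with a short preflight script before going further. This is a sketch, not part of the official setup; note that Compose v2 is the docker compose subcommand, not the legacy docker-compose binary:

```shell
#!/bin/sh
# Preflight check: confirm the Docker CLI and Compose v2 are available.
have() { command -v "$1" >/dev/null 2>&1; }

if have docker; then
  echo "docker CLI: found"
else
  echo "docker CLI: missing"
fi

# `docker compose version` only succeeds when the Compose v2 plugin is installed.
if docker compose version >/dev/null 2>&1; then
  echo "Compose v2: found"
else
  echo "Compose v2: missing"
fi
```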

Setup flow

1. Install Docker

If Docker is not already installed, run the official install script:
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
Log out and back in so the group change takes effect, then confirm Docker is running:
docker info

2. Clone the Jarvis configuration

Pull the Jarvis configuration repository to your primary node:
git clone https://github.com/chadronbryant/jarvis-config.git ~/jarvis
cd ~/jarvis

3. Configure your environment

Copy the example environment file and fill in your values:
cp .env.example .env
Open .env in your editor and set at minimum:
  • JARVIS_HOST — the hostname or IP of your primary node (e.g., your-jarvis-host)
  • OLLAMA_HOST — where your Ollama instance runs (e.g., http://your-jarvis-host:11434)
  • LITELLM_MASTER_KEY — a strong secret key for your LiteLLM gateway
Never commit your .env file to version control. It contains credentials that grant full access to your Jarvis environment.
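A filled-in .env might look like the following. The values are placeholders only, and any variables beyond the three above depend on what your .env.example defines:

```ini
# .env -- example values only; never commit this file
JARVIS_HOST=your-jarvis-host
OLLAMA_HOST=http://your-jarvis-host:11434
LITELLM_MASTER_KEY=sk-REPLACE-WITH-A-LONG-RANDOM-SECRET
```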

4. Connect to the brain mesh

Start the core Jarvis services using Docker Compose:
docker compose up -d
This starts LiteLLM, the Ollama proxy, and the agent orchestration layer. The first run may take a few minutes as Docker pulls the required images.

5. Verify node connectivity

Check that your node is reachable and reporting health:
curl http://your-jarvis-host:4000/health
You should see a JSON response indicating the gateway is up. If you have multiple nodes, repeat this for each one.
If a node fails to respond, check that Docker is running on that machine and that port 4000 is not blocked by a firewall.
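With multiple nodes, the per-node check can be scripted. The sketch below assumes the default gateway port 4000; NODES is a hypothetical list you would replace with your own hostnames, and the verdict helper just translates curl's reported HTTP status code:

```shell
#!/bin/sh
# Check the /health endpoint on every node in the mesh.
# NODES is an example list -- substitute your own hostnames.
NODES="your-jarvis-host"

# Map an HTTP status code to a human-readable verdict.
verdict() {
  if [ "$1" = "200" ]; then echo "healthy"; else echo "unreachable"; fi
}

for n in $NODES; do
  # -w '%{http_code}' prints only the status code; body is discarded.
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://$n:4000/health" 2>/dev/null) || code=000
  echo "$n: $(verdict "$code")"
done
```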

6. Pull your models

Use Ollama to download the models you want to serve. For example:
ollama pull llama3.1
ollama pull mistral
ollama pull deepseek-r1
You can pull models on any node in your mesh. LiteLLM will route requests to whichever node has the model available.
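If you want all of your models on one node, the pulls can be wrapped in a loop. MODELS here is just the example list from above; edit it to match what you actually want to serve:

```shell
#!/bin/sh
# Pull a set of models in one pass. A failed pull is reported but does
# not abort the remaining pulls.
MODELS="llama3.1 mistral deepseek-r1"

for m in $MODELS; do
  echo "pulling $m ..."
  ollama pull "$m" || echo "failed to pull $m" >&2
done
```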

7. Confirm models are available

Query the LiteLLM gateway to verify your models are registered and ready:
curl http://your-jarvis-host:4000/models \
  -H "Authorization: Bearer your-litellm-master-key"
The response should list every model you pulled. If a model is missing, check the Ollama logs on the node where you pulled it:
docker logs ollama
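To eyeball the list quickly, you can strip the response down to model ids. The sample JSON below is illustrative only, assuming the OpenAI-compatible shape ({"data":[{"id":...}]}) that LiteLLM serves; the exact fields may differ across versions:

```shell
#!/bin/sh
# Illustrative sample of a /models response -- in practice, capture the
# real output of the curl command above into this variable.
response='{"data":[{"id":"llama3.1"},{"id":"mistral"},{"id":"deepseek-r1"}]}'

# Extract the model ids with grep/sed so jq is not required.
model_ids() {
  echo "$1" | grep -o '"id":"[^"]*"' | sed 's/"id":"\(.*\)"/\1/'
}

model_ids "$response"
```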

Next steps

Run your first agent

Delegate a task to Paperclip or Hermes now that your mesh is running.

Explore the mesh

Learn how nodes, models, and inference routing work together.