Roadwarrior setup for a private Ollama GPU host and one peer host on the same private network.
This repo is for a simple two-host layout:
- one GPU Ubuntu host running Ollama
- one peer Ubuntu host allowed to reach it over private IP only
The scripts are tuned for DigitalOcean, but usually work on similar Ubuntu or Debian-family hosts too.
- `roadwarrior-ollama-gpu-host.sh`: Installs or updates Ollama on the GPU host, configures systemd, enables private-network serving, optionally locks access with UFW, pulls the model, and can create a derived local model with a Modelfile.
- `roadwarrior-ollama-peer-host.sh`: Verifies that the peer host can reach the GPU host over the private network, checks `/api/tags`, optionally checks `/api/chat`, and prints the exact OpenClaw onboarding command for the remote Ollama endpoint.
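For reference, the private-network serving step usually amounts to a systemd drop-in that sets `OLLAMA_HOST`. A minimal sketch, assuming the stock `ollama.service` unit from the official installer (the script's actual drop-in may differ):

```bash
# Sketch: systemd drop-in that makes Ollama listen beyond localhost.
# Assumes the stock ollama.service unit; exact contents may differ
# from what the script writes.
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```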
Run this on the GPU Ubuntu host:
```bash
curl -fsSL https://raw.githubusercontent.com/vektort13/CDLP-OlamaUncensored/main/roadwarrior-ollama-gpu-host.sh | bash
```

Run this on the peer Ubuntu host:

```bash
curl -fsSL https://raw.githubusercontent.com/vektort13/CDLP-OlamaUncensored/main/roadwarrior-ollama-peer-host.sh | bash
```

The GPU-host script includes ready presets, so you do not need to enter model tuning values by hand.
| Profile | Base model | Derived model | Use when |
|---|---|---|---|
| `best-general` | `qwen3.5:27b` | `qwen3.5-local` | Best overall default for most 24GB+ cards. |
| `best-coding` | `gemma4:31b` | `gemma4-coder-local` | Best built-in preset here for coding and reasoning. |
| `low-vram` | `dolphin3:8b` | `dolphin3-local` | Conservative option for lower VRAM hosts. |
| `manual` | Any local tag | Optional | Full control over model, keep-alive, local-only mode, and Modelfile settings. |
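For a sense of what the derived models are, here is a minimal sketch of the `ollama create` flow the presets automate. The `PARAMETER` and `SYSTEM` values below are illustrative placeholders, not the exact settings the script writes:

```bash
# Sketch: derive a local model from a pulled base tag.
# PARAMETER/SYSTEM values are illustrative placeholders.
sudo mkdir -p /opt/roadwarrior-ollama/modelfiles
cat <<'EOF' | sudo tee /opt/roadwarrior-ollama/modelfiles/qwen3.5-local.Modelfile
FROM qwen3.5:27b
PARAMETER num_ctx 8192
SYSTEM "You are a local assistant served over a private network."
EOF
ollama create qwen3.5-local -f /opt/roadwarrior-ollama/modelfiles/qwen3.5-local.Modelfile
```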
All preset profiles default to the same Roadwarrior behavior:
- private-network serving only
- UFW limited to one peer host on TCP/11434 (sketched after this list)
- local-only Ollama mode enabled
- model pull enabled
- local smoke test enabled
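The UFW restriction boils down to one allow rule for the peer plus the default deny. A sketch, with `PEER_PRIVATE_IP` as an illustrative placeholder (keep an SSH rule so you do not lock yourself out):

```bash
# Allow SSH plus the single peer on TCP/11434, then enable the firewall.
PEER_PRIVATE_IP=10.116.0.3   # illustrative placeholder
sudo ufw allow OpenSSH
sudo ufw allow from "$PEER_PRIVATE_IP" to any port 11434 proto tcp
sudo ufw enable
sudo ufw status numbered
```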
Use this as a practical starting point, not a theoretical maximum-fit chart.
| VRAM | Start with | Why |
|---|---|---|
| 16GB | `dolphin3:8b` | Safest fit with room for runtime overhead and a smaller context. |
| 24GB | `qwen3.5:27b` | Best current general-purpose single-GPU default. |
| 48GB | `gemma4:31b` | Strong coding and reasoning preset with more context headroom. |
| 80GB | `gemma4:31b` | Strongest preset included here unless you intentionally want to go manual. |
If you want something between the presets, `gpt-oss:20b` is still a good alternate for a more developer-oriented model in roughly the 16GB to 24GB range.
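If you do go off-preset, pulling and smoke-testing an alternate tag by hand is a two-liner:

```bash
# Pull an alternate tag and give it a quick local prompt.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b "Reply with OK."
```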
The expected layout:

- GPU host: Ubuntu droplet with NVIDIA GPU and Ollama
- Peer host: Ubuntu box that runs OpenClaw or any other client
- Network: same VPC or other private network
- Exposure: private IP only (see the lookup sketch after this list)
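On DigitalOcean you can look up a droplet's private IPv4 from the metadata service, or just list the interfaces; a quick sketch:

```bash
# This droplet's private IPv4 via the DigitalOcean metadata service,
# plus a plain interface listing as a fallback.
curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address; echo
ip -4 -br addr
```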
If a DigitalOcean Cloud Firewall is attached to the GPU droplet, add the same inbound TCP/11434 allow rule there that the GPU-host script applies in UFW.
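If you manage that Cloud Firewall with `doctl`, the matching rule can be added roughly like this; the firewall ID and peer address are placeholders:

```bash
# Mirror the UFW rule in the DigitalOcean Cloud Firewall.
# <firewall-id> and the peer address are placeholders.
doctl compute firewall add-rules <firewall-id> \
  --inbound-rules "protocol:tcp,ports:11434,address:10.116.0.3"
```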
The peer-host script verifies:
- the GPU host is reachable over private IP
- `/api/tags` works from the peer host
- `/api/chat` works too if the GPU host already has a local model pulled

It then prints the exact `openclaw onboard --custom-base-url ...` command you should run next, using the native Ollama API URL, and intentionally does not use `/v1`.
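You can reproduce those checks by hand from the peer host. A sketch, assuming the GPU host's private IP is `10.116.0.2` and the default preset's derived model is present:

```bash
# The /api/tags check: list models over the private network.
curl -fsS http://10.116.0.2:11434/api/tags

# The optional /api/chat check; needs a pulled model on the GPU host.
curl -fsS http://10.116.0.2:11434/api/chat -d '{
  "model": "qwen3.5-local",
  "messages": [{"role": "user", "content": "Say OK."}],
  "stream": false
}'
```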
These scripts do not install OpenClaw.
The peer-host script only helps you validate the private Ollama path and then either:
- prints the exact `openclaw onboard` command to use
- runs that command directly if `openclaw` is already installed on the peer host
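Either way, the command has roughly this shape; the private IP below is an illustrative placeholder, and as noted above no `/v1` suffix is used:

```bash
# Shape of the onboarding command (IP is a placeholder; the script
# prints the exact value for your network).
openclaw onboard --custom-base-url http://10.116.0.2:11434
```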
- These scripts expect an interactive TTY.
- The GPU-host script is tuned for one peer host, not a shared public endpoint.
- The GPU-host script binds Ollama to `0.0.0.0:11434`, so the firewall rule is what keeps the service private (see the quick check after this list).
- The default derived models are intentionally tuned for direct local use and lighter filtering.
- Derived Modelfiles are stored under `/opt/roadwarrior-ollama/modelfiles`.
- Large model pulls can take time, especially on a fresh host.
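Because only the firewall keeps the endpoint private, it is worth confirming both the bind and the UFW state after setup; a quick check on the GPU host:

```bash
# Confirm Ollama is listening on 0.0.0.0:11434 and UFW is gating it.
sudo ss -tlnp | grep 11434
sudo ufw status verbose
```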