ClosedMesh turns the unused capacity of your team's laptops, workstations and on-prem boxes into a single peer-to-peer inference mesh. The chat surface runs in your browser. Inference runs on machines you control. Nothing leaves the network.
ClosedMesh is split between a thin product surface — the chat UI you're using right now — and a peer-to-peer inference runtime that handles model loading, routing, and distribution across machines. They're shipped and versioned separately.
When you open this site, the browser doesn't reach our servers for inference. It reaches back into the controller running on your own machine, which routes the request to whichever peer in the mesh is best positioned to serve it.
The chat UI loads from closedmesh.com. The page itself is static — it never sees a prompt or a token.
A tiny Next.js service installed on each teammate's machine. CORS-allowed for closedmesh.com. Speaks the OpenAI-compatible API to the runtime.
One or more ClosedMesh LLM peers. Capability-matched. Auto-routes around offline nodes. Handles dense and MoE models that don't fit on one box.
Conversations stay on machines you own. No keys to revoke, no per-token bill, no third-party retention policy to read.
An M-series Mac, an RTX 4090 box and a Vulkan laptop happily serve the same conversation. Each node advertises its capability; the router only sends work it can actually run.
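Capability matching can be pictured as a simple filter over advertised peer metadata. This is an illustrative sketch, not ClosedMesh's actual router: the `Peer` fields, node names, and `eligible_peers` helper are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    backend: str      # e.g. "metal", "cuda", "vulkan" (illustrative labels)
    vram_gb: float    # memory the node advertises for model weights
    online: bool

def eligible_peers(peers: list[Peer], min_vram_gb: float) -> list[Peer]:
    """Keep only live peers advertising enough memory to run the model."""
    return [p for p in peers if p.online and p.vram_gb >= min_vram_gb]

peers = [
    Peer("mac-m2", "metal", 24, True),
    Peer("rtx4090-box", "cuda", 24, True),
    Peer("vulkan-laptop", "vulkan", 8, False),
]
# A 16 GB model skips the offline laptop and lands on the two big nodes.
print([p.name for p in eligible_peers(peers, 16)])  # → ['mac-m2', 'rtx4090-box']
```

The same filter doubles as the failover path: a node that goes to sleep simply stops appearing in the eligible set.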
Dense models split across nodes by layer (pipeline parallelism). MoE models split by expert with zero cross-node inference traffic.
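For the dense case, a layer split can be sketched as assigning each node a contiguous block of layers sized by its capacity share. This is a minimal sketch of the idea, not the runtime's actual placement algorithm; the function name and the use of raw memory as the capacity metric are assumptions.

```python
def split_layers(n_layers: int, capacities: list[float]) -> list[range]:
    """Assign contiguous blocks of layers to nodes, sized by capacity share."""
    total = sum(capacities)
    counts = [int(n_layers * c / total) for c in capacities]
    # Integer truncation can leave a few layers unassigned; give them
    # to the largest nodes first.
    leftover = n_layers - sum(counts)
    for i in sorted(range(len(counts)), key=lambda i: -capacities[i])[:leftover]:
        counts[i] += 1
    ranges, start = [], 0
    for n in counts:
        ranges.append(range(start, start + n))
        start += n
    return ranges

# A 32-layer dense model across nodes with 24 GB, 24 GB and 8 GB of memory:
print(split_layers(32, [24, 24, 8]))  # → [range(0, 14), range(14, 28), range(28, 32)]
```

Each node then only holds its own block of weights, and activations flow node-to-node in layer order (pipeline parallelism).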
Every node exposes a standard /v1/chat/completions endpoint. Drop-in for any tool that speaks OpenAI — agents, IDE plugins, internal scripts.
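Because the endpoint is the standard OpenAI chat-completions shape, calling a node from a script needs nothing beyond an HTTP client. A minimal stdlib sketch, assuming a node reachable at some base URL; the host, port, and model name below are placeholders, not real defaults.

```python
import json
from urllib import request

def build_payload(model: str, prompt: str) -> bytes:
    """Standard OpenAI chat-completions request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()

def chat(base_url: str, model: str, prompt: str) -> str:
    """POST to a node's /v1/chat/completions and return the reply text."""
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (needs a running node; host and model name are placeholders):
# print(chat("http://mac-m2.local:8080", "llama-3.1-8b", "Hello from the mesh"))
```

Anything that already takes an OpenAI base URL, such as agents or IDE plugins, can point at a node the same way.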
Laptops sleep. Workstations reboot. The mesh keeps serving — requests are dispatched only to live, capability-matched peers.
One curl command per machine. The runtime drops into ~/.local/bin and registers a launchd / systemd / scheduled-task autostart.
The installer detects OS, CPU architecture and GPU vendor, then pulls the matching runtime build. You can also pin a backend explicitly for unusual setups.
| OS | Hardware | Backend |
|---|---|---|
| macOS | Apple Silicon | Metal |
| Linux | x86_64 · NVIDIA | CUDA |
| Linux | x86_64 · AMD | ROCm |
| Linux | x86_64 · Intel / other | Vulkan |
| Linux | x86_64 · CPU-only | CPU |
| Linux | aarch64 | Vulkan / CPU |
| Windows 10/11 | x86_64 · NVIDIA | CUDA |
| Windows 10/11 | x86_64 · AMD / Intel / other | Vulkan |
| WSL2 | x86_64 · NVIDIA passthrough | CUDA |
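The table above is a straightforward mapping, which a detection step can express as a lookup. A hedged sketch of that logic; the function name, lowercase backend strings, and GPU-vendor labels are illustrative, not the installer's real interface (WSL2 reports itself as Linux, so it takes the Linux path).

```python
def pick_backend(os_name: str, arch: str, gpu: str) -> str:
    """Map OS + CPU architecture + GPU vendor to a runtime build,
    mirroring the support table."""
    if os_name == "Darwin" and arch == "arm64":
        return "metal"                       # Apple Silicon
    if os_name == "Linux":
        if arch == "aarch64":
            return "vulkan"                  # falls back to CPU without a Vulkan driver
        return {"nvidia": "cuda", "amd": "rocm", "none": "cpu"}.get(gpu, "vulkan")
    if os_name == "Windows":
        return "cuda" if gpu == "nvidia" else "vulkan"
    raise ValueError(f"unsupported platform: {os_name}/{arch}")

print(pick_backend("Linux", "x86_64", "amd"))      # → rocm
print(pick_backend("Windows", "x86_64", "intel"))  # → vulkan
```

Pinning a backend explicitly simply bypasses this lookup for setups the heuristics get wrong.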