EVAL-MOLTBOT-001 — OPERATOR NODE

Moltbot-Rob Integration

Deploy a local AI coprocessor (Mac Mini M4) as an employee Operator Node, governed by Rob. Enhance employee workflow locally while gating critical actions through centralized intelligence.

RECOMMENDED: OPTION A

Laptop as primary workspace, Mac Mini as headless AI coprocessor. Recommended baseline: M4 Pro 48GB. Total investment ~$2,579 per employee.

Executive Summary

The Moltbot concept deploys a Mac Mini M4 as a local AI brain for each employee. The Mini handles inference, browser automation, and workflow learning locally, while Rob serves as the central governor — issuing scoped execution grants, brokering external APIs, and logging all actions. The employee works from their laptop as usual; the Mini operates as a silent coprocessor.

Recommended Config: M4 Pro, 48GB RAM, 1TB SSD
Total Investment: ~$2.6K, complete setup per employee
Device Weight: 1.6 lb, portable between office and home
Annualized Cost: ~$860, amortized over 3 years

Why This Matters

A Moltbot pays for itself within 6–12 months compared to equivalent cloud GPU costs ($2,400–$6,000/yr). Zero data leaves the premises, zero per-token API costs for local inference, and always-on availability regardless of internet quality.

Architecture Overview

The Moltbot operates as a governed node in the Rob ecosystem. It receives tasks from and reports status to the central Rob platform. All high-risk actions require Rob approval via scoped, short-lived execution tokens.

                  ┌──────────────────────────────┐
                  │    ROB (Central Governor)    │
                  │ ──────────────────────────── │
                  │  Operator Registry           │
                  │  Execution Grant Service     │
                  │  Broker APIs (CRM, Email,    │
                  │   Payments, Travel, Sheds)   │
                  │  Audit & Telemetry           │
                  └──────────────┬───────────────┘
                                 │  Tailscale Mesh VPN (WireGuard)
                                 │
              ┌──────────────────┼──────────────────┐
              │                  │                  │
    ┌─────────▼────────┐ ┌───────▼────────┐ ┌───────▼────────┐
    │    MOLTBOT #1    │ │   MOLTBOT #2   │ │   MOLTBOT #N   │
    │   (Employee A)   │ │  (Employee B)  │ │    (Future)    │
    │ ──────────────── │ │ ────────────── │ │ ────────────── │
    │ Mac Mini M4 Pro  │ │ Mac Mini M4    │ │ Mac Mini M4    │
    │ Ollama (local)   │ │ Ollama (local) │ │ Ollama (local) │
    │ n8n (workflows)  │ │ n8n (workflows)│ │ n8n (workflows)│
    │ Playwright (web) │ │ Playwright     │ │ Playwright     │
    └─────────┬────────┘ └───────┬────────┘ └───────┬────────┘
              │                  │                  │
    ┌─────────▼────────┐ ┌───────▼────────┐ ┌───────▼────────┐
    │ Employee Laptop  │ │ Employee Laptop│ │ Employee Laptop│
    │  (Primary UI)    │ │  (Primary UI)  │ │  (Primary UI)  │
    └──────────────────┘ └────────────────┘ └────────────────┘

Moltbot (Front-End / Operator)

  • Grant-aware execution: checks high-risk actions against a risk threshold and requires Rob approval (see the sketch after this list)
  • Secure Rob command channel: persistent outbound connection, cert-based identity, message queue
  • Local workflow learning: tracks apps, sites, sequences, patterns
  • Stores structured outputs locally, sends summaries to Rob
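
A minimal sketch of the grant-aware gate. The POST /grants endpoint, field names, risk threshold, and the `action` object are illustrative assumptions, not part of the actual Rob API.

```python
# Minimal sketch: run low-risk actions locally, gate high-risk ones on a
# Rob-issued grant. Endpoint, fields, and the `action` object are illustrative.
import requests

ROB_BASE = "https://rob.example.internal"
RISK_THRESHOLD = 0.5                      # actions above this need a Rob-issued grant
DEVICE_CERT = ("moltbot-01.pem", "moltbot-01.key")

def execute(action, risk_score: float):
    """Run low-risk actions locally; request a scoped grant from Rob otherwise."""
    if risk_score < RISK_THRESHOLD:
        return action.run()

    resp = requests.post(
        f"{ROB_BASE}/grants",
        json={"device": "moltbot-01", "scope": action.scope,
              "duration_s": 300, "summary": action.describe()},
        cert=DEVICE_CERT, verify="rob-ca.pem", timeout=10,
    )
    resp.raise_for_status()
    grant = resp.json()
    if not grant.get("approved"):
        raise PermissionError(f"Rob denied grant for scope {action.scope!r}")

    # The short-lived token rides along on every downstream broker call.
    return action.run(grant_token=grant["token"])
```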

Rob (Back-End / Governor)

  • Operator registry: track devices, employees, permissions, tools, revocation
  • Execution Grant Service: scoped, short-lived tokens (scope, duration, limits, domains); see the sketch after this list
  • Broker APIs: handles external tools/keys (CRM, email, payments, travel, sheds)
  • Audit & telemetry: log actions, metrics, health; standardize outputs
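
A minimal sketch of grant issuance on the Rob side, assuming PyJWT and an RSA signing key. Claim names and the 300-second default are illustrative; the scope strings mirror the permission-gating table later in this document.

```python
# Minimal sketch: issue a scoped, short-lived grant token (signed JWT).
import datetime

import jwt  # PyJWT

with open("rob-grant-signing.key") as f:
    SIGNING_KEY = f.read()

def issue_grant(device_id: str, scope: str, duration_s: int = 300,
                limits: dict | None = None) -> str:
    """Return a signed grant token the Moltbot presents to broker APIs."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": device_id,                        # which Moltbot
        "scope": scope,                          # e.g. "browser.automate"
        "limits": limits or {"max_actions": 100},
        "iat": now,
        "exp": now + datetime.timedelta(seconds=duration_s),
        "iss": "rob-grant-service",
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="RS256")
```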

Mac Mini M4 Configuration Lineup

Apple's 2024 Mac Mini redesign is 5.0 × 5.0 × 2.0 inches and weighs just 1.5–1.6 lbs — smaller than a hardcover book. Built-in power supply (no external brick), silent operation, and Apple Silicon unified memory make it ideal for an always-on AI coprocessor that moves between office and home.

Configuration | Chip | CPU / GPU | RAM | Storage | MSRP | Notes
Base M4 | M4 | 10C / 10C | 16GB | 256GB | $599 | Too tight for AI
Mid M4 | M4 | 10C / 10C | 16GB | 512GB | $799 | RAM-limited
M4 + RAM | M4 | 10C / 10C | 24GB | 512GB | $999 | Budget sweet spot
M4 Pro Base | M4 Pro | 12C / 16C | 24GB | 512GB | $1,399 | Pro chip, same RAM
M4 Pro 48GB | M4 Pro | 12C / 16C | 48GB | 1TB | $1,999 | Recommended config
M4 Pro Max Bin | M4 Pro | 14C / 20C | 48GB | 1TB | $2,199 | Top-bin CPU upgrade
M4 Pro Ultimate | M4 Pro | 14C / 20C | 64GB | 1TB | $2,399 | Maximum RAM; 70B models

Avoid: 8GB & Intel Macs

16GB is the absolute floor for any local LLM. At 16GB, the system will constantly swap to SSD, degrading performance and SSD lifespan. Intel Macs lack unified memory architecture — on Apple Silicon, GPU and CPU share the same RAM pool, meaning a 48GB M4 Pro gives models access to the full 48GB. Intel Macs also lack the Neural Engine and have vastly worse performance-per-watt.

What Models Fit by RAM Tier

RAM | Models That Fit | Speed | Use Cases
24GB | Llama 3.1 8B, Qwen2.5 14B, DeepSeek-R1-Distill 7B/14B, Mistral 7B, Phi-3 14B | 15–30 tok/s | Code completion, summarization, translation, basic reasoning
64GB | All above + Llama 3.3 70B (Q4), DeepSeek-R1-Distill 70B (Q4) | 3–8 tok/s | Near-frontier reasoning, complex analysis

Key Insight

RAM is more important than chip variant for AI workloads. A base M4 with 24GB outperforms an M4 Pro with 16GB for model inference. The M4 Pro's extra GPU cores help with inference speed, but you cannot run a model that doesn't fit in RAM. Buy RAM first, chip speed second.
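
A rough way to sanity-check fit: Q4 quantization stores weights at about 4 bits per parameter, so a 32B model needs roughly 16GB of weights and a 70B model roughly 35GB, before KV cache and runtime overhead. The sketch below encodes that arithmetic; the 1.2× overhead factor and the 25% macOS reserve are assumptions, not measured figures.

```python
# Back-of-the-envelope check of whether a quantized model fits in unified memory.
def fits(params_billion: float, bits_per_weight: int, ram_gb: int,
         overhead: float = 1.2) -> bool:
    weights_gb = params_billion * bits_per_weight / 8   # 1B params at 8 bits ~= 1 GB
    return weights_gb * overhead < ram_gb * 0.75        # leave ~25% for macOS and apps

print(fits(32, 4, 48))   # Qwen2.5 32B @ Q4 ~= 16 GB weights -> fits the recommended 48GB tier
print(fits(70, 4, 48))   # Llama 3.3 70B @ Q4 ~= 35 GB weights -> does not fit in 48GB
print(fits(70, 4, 64))   # the same 70B @ Q4 fits in the 64GB tier
```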

Physical Specs

Dimensions: 5" × 5" × 2" (127 × 127 × 50 mm)
Weight: 1.6 lb / 0.73 kg (M4 Pro)
Thunderbolt: 3× TB5, up to 120 Gb/s (Pro)
Displays: up to 3× 6K at 60Hz (M4 Pro)

Accessories & Peripherals

The employee uses the Mini at an office desk part of the week and takes it home, where they work most of the time. The accessory setup should support both locations.

Monitors

Product | Size | Resolution | Price | Best For
Samsung ViewFinity S7 (S70D) | 27" | 4K | ~$220 | Best budget 4K desk monitor
Dell S2725QC | 27" | 4K USB-C | ~$300 | Single-cable USB-C
ARZOPA 16" Portable | 16" | 2.5K | ~$100 | Budget portable for second location
ViewSonic VP16-OLED | 15.6" | 1080p OLED | ~$300 | Pro color accuracy portable

Keyboard & Mouse

Product | Price | Notes
Logitech MX Keys S + MX Master 3S for Mac | ~$170 | Multi-device pairing; the combo used in the recommended setup
Apple Magic Keyboard (Touch ID) + Magic Mouse | ~$278 | Native macOS, Touch ID for auth

Carrying & Cables

Item | Price | Purpose
Lacdo Hard Mac Mini M4 Case | ~$30 | EVA hard shell, fits Mini + all accessories
Thunderbolt 4/5 cable (0.8m) | ~$40 | Thunderbolt Bridge to laptop
Cat6a Ethernet cable (6ft) | ~$10 | Direct Ethernet
USB-C cable | ~$15 | Portable monitor connection
HDMI 2.1 cable (6ft) | ~$12 | Backup monitor connection

Connection Options

The Moltbot connects to the employee's laptop and to the central Rob server. Different connection methods serve different needs.

Mini ↔ Laptop (Local Tethering)

Method | Speed | Latency | Setup | Cost | Best For
WiFi (same LAN) | 100–600 Mbps | 2–10ms | None | $0 | Quick API calls, lightweight use
Ethernet (direct/switch) | 1 Gbps | <1ms | Low | $10–$20 | Reliable, low-latency traffic
Thunderbolt Bridge | ~10 Gbps | <0.5ms | Medium | $30–$40 | File transfer, high-bandwidth inference
10Gb Ethernet (BTO) | 10 Gbps | <0.5ms | Medium | +$100 | Sustained throughput at scale

Remote Desktop / Screen Sharing

For times when the employee needs visual access to the Mini's desktop (model management, debugging, etc.):

Solution | Latency | Quality | Cost | Notes
Parsec | Very low | Excellent | Free (personal tier) | Low-latency streaming; used in the recommendation below
RustDesk (self-hosted) | Low–Medium | Good | Free | Open-source, self-hostable relay
Apple Screen Sharing | 50–150ms | Fair | Free | Built-in, zero setup. LAN only.

Recommendation

Tailscale (mesh VPN, always on, free tier) for network connectivity between the Mini, the employee's laptop, and Rob. Parsec for low-latency screen sharing when visual access is needed. For the primary Moltbot use case (API-level communication), Tailscale alone is sufficient — the Mini's services are reachable over its Tailscale IP, no screen sharing needed.
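
A minimal sketch of that API-level path, assuming Ollama listens on its default port 11434 and has been configured to accept connections on the Tailscale interface (e.g. OLLAMA_HOST=0.0.0.0, since Ollama binds to localhost by default); the Tailscale address and model name are illustrative.

```python
# Laptop-side call to the Mini's local Ollama API over Tailscale.
import requests

MINI = "http://100.101.102.103:11434"   # the Mini's Tailscale IP (illustrative)

resp = requests.post(
    f"{MINI}/api/generate",
    json={"model": "qwen2.5:32b",
          "prompt": "Summarize these meeting notes in three bullets: ...",
          "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```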

Local AI Software Stack

Model Runners

Tool | Type | License | Best For
Ollama | CLI + local API server | Open source (MIT) | Headless, always-on model serving; used in the recommended stack
LM Studio | GUI + API | Free (closed source) | Interactive model browsing, MLX optimization
MLX (Apple) | Framework | Open source | Maximum Apple Silicon optimization, lowest RAM overhead

Browser Automation

Tool | Description | Integration
Playwright MCP | MCP server exposing Playwright browser control to AI agents | MCP (local); used in the recommended stack
Stagehand | AI-native browser automation SDK; natural language + code hybrid | Direct API
Browser Use | Python framework bridging AI agents and browser control | Ollama API compatible

Workflow Automation

Tool | Description | Runs On
n8n | Self-hostable workflow automation platform with a visual editor and AI/LLM nodes | Docker, local; used in the recommended stack
Open Interpreter | Runs code locally via natural language; controls OS, terminal, browser | Python, local
Claude Code / Aider | AI coding assistants operating as autonomous agents | CLI, local or API

Recommended Stack

n8n (workflow orchestration) + Ollama (local inference) + Playwright MCP (browser automation). This gives a complete local AI automation pipeline with zero cloud dependencies for routine tasks.
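
A rough sketch of that pipeline end to end. The recommended stack drives the browser through Playwright MCP from n8n; the sketch below uses the Playwright Python package directly just to illustrate the flow, and the URL and model name are illustrative.

```python
# Fetch a page locally with Playwright, then extract data with the local Ollama model.
import requests
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/pricing")
    text = page.inner_text("body")[:8000]   # truncate to keep the prompt small
    browser.close()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5:32b",
          "prompt": f"Extract the plan names and prices from this page:\n\n{text}",
          "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```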

Security & Governance Integration

Authentication Layers

Approach | Description | Complexity
mTLS (Mutual TLS) | Both Rob and Moltbot present X.509 certificates. Rob's CA signs Moltbot's cert at provisioning. Defense in depth (sketch below). | Medium
SSH Certificates | Rob CA signs short-lived SSH certs for command execution channels. | Medium
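
A minimal sketch of a Moltbot-side call over mutual TLS, assuming the device certificate and key issued at provisioning plus Rob's CA bundle; the file paths and status endpoint are illustrative.

```python
# Client-authenticated HTTPS request: Moltbot presents its device cert, and
# Rob's server cert is verified against Rob's CA.
import requests

resp = requests.get(
    "https://rob.example.internal/api/operators/moltbot-01/status",
    cert=("moltbot-01.pem", "moltbot-01.key"),   # client certificate + key
    verify="rob-ca.pem",                         # trust anchor for Rob's cert
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```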

Message Queue Patterns

Pattern | Protocol | Latency | Best For
MQTT | MQTTS (8883) | Low | Heartbeats, command dispatch, device telemetry. IoT-native (sketch below).
WebSocket | WSS | Low | Real-time task streaming, interactive AI sessions.
HTTP/REST | HTTPS | Medium | Simple task dispatch and result retrieval.
Redis Pub/Sub | TCP | Very low | Fast message passing (Rob already uses Celery/Redis).
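
A minimal heartbeat sketch over MQTTS, assuming paho-mqtt 2.x and the mTLS material from the previous section; the broker hostname, topic, and 30-second interval are illustrative.

```python
# Minimal heartbeat publisher: device status every 30 seconds over MQTTS.
import json
import socket
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="moltbot-01")
client.tls_set(ca_certs="rob-ca.pem",      # verify Rob's broker certificate
               certfile="moltbot-01.pem",  # device cert issued at provisioning
               keyfile="moltbot-01.key")
client.connect("rob.example.internal", 8883)
client.loop_start()

while True:
    heartbeat = {"device": "moltbot-01", "host": socket.gethostname(),
                 "status": "idle", "ts": time.time()}
    client.publish("operators/moltbot-01/heartbeat", json.dumps(heartbeat), qos=1)
    time.sleep(30)
```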

Permission Gating (CML Mirror)

Layer | Mechanism | Description
Device Registration | Provisioning Token | One-time token from Rob admin. Moltbot uses it to register and receive its device certificate.
Session Tokens | JWT with Scopes | Short-lived JWTs (1h expiry): inference.local, browser.automate, file.access, tool.execute
Permission Tiers | CML Levels | CML-0 (read-only queries), CML-1 (local actions), CML-2 (external actions requiring approval)
Rate Limiting | Token Bucket | Max 100 browser actions/hr, max 10 external API calls/hr per device (sketch below)
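
A minimal token-bucket sketch matching the browser-action limit above; thread safety, persistence across restarts, and per-scope buckets are omitted for brevity.

```python
# Token bucket: capacity and refill rate mirror the 100 browser actions/hr limit.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_hour: int):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_hour / 3600.0
        self.last = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

browser_actions = TokenBucket(capacity=100, refill_per_hour=100)
if not browser_actions.allow():
    raise RuntimeError("Browser action rate limit reached; deferring task")
```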

Setup Options

Three physical configuration options, depending on employee workflow and IT maturity. Option A (the recommendation) keeps the laptop as the primary workspace with the Mini as a headless coprocessor, as described above; Options B and C are outlined below. The employee in question works primarily from home, with time at the office — the Mini travels between both locations.

Option B

Mini as full workstation with monitor, keyboard, mouse. Employee remotes in from laptop when away from desk via Parsec or Screen Sharing.

Pros
  • Single primary device to manage
  • All compute in one place
  • Simpler software stack
Cons
  • Requires good internet for remote use
  • 50–80ms WAN latency on remote sessions
  • Can't work offline from laptop
  • Peripherals needed at each location

Option C

Mini controls laptop apps — acts as an AI agent that observes and automates the employee's applications. The most advanced option, best deferred to v2.

Pros
  • True AI assistant model
  • Learns employee workflows
  • Most "futuristic" option
Cons
  • Highly complex to build
  • Cross-device security implications
  • Needs robust error handling
  • Deferred to future roadmap

Rollout Playbook

1. Procure Hardware (Week 1)
  • Order Mac Mini M4 Pro (48GB RAM, 1TB SSD) — $1,999
  • Order Logitech MX Keys S + MX Master 3S combo — ~$170
  • Order Samsung ViewFinity S7 27" 4K monitor (office) — ~$220
  • Order ARZOPA 16" portable monitor (home) — ~$100
  • Order Lacdo carrying case + cables — ~$90

2. Base Configuration (Week 2, Days 1–2)
  • macOS setup: create managed admin account, enable FileVault encryption
  • Install Tailscale, join to Rob's tailnet with ACLs configured
  • Install Docker Desktop for Mac
  • Install Ollama, pull recommended models (Qwen2.5 32B, DeepSeek-R1-Distill 14B)
  • Install Parsec for remote desktop access
  • Verify Mini is reachable from employee's laptop over Tailscale

3. Rob Integration (Week 2, Days 3–5)
  • Register Moltbot in Rob's operator registry (device ID, employee, permitted tools)
  • Deploy MQTT client for heartbeat/status reporting
  • Configure mTLS certificates (Rob CA signs Moltbot's cert)
  • Set up Execution Grant Service endpoints on Rob side
  • Deploy n8n via Docker with Ollama integration
  • Configure Playwright MCP for browser automation

4. Test & Validate (Week 3)
  • End-to-end test: employee laptop sends inference request to Mini via Tailscale
  • Test Rob governance: Moltbot requests execution grant, Rob approves/denies
  • Test portability: Mini moves from office to home, reconnects via Tailscale
  • Test remote desktop: Parsec from laptop to Mini over WAN
  • Benchmark local inference speed and validate model quality (see the sketch after this list)
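
A rough benchmark sketch using the counters Ollama returns on a non-streaming generate call (eval_count tokens produced over eval_duration nanoseconds); the Tailscale address and model name are illustrative.

```python
# Measure generation speed of the Mini's local model from the laptop.
import requests

resp = requests.post(
    "http://100.101.102.103:11434/api/generate",
    json={"model": "qwen2.5:32b",
          "prompt": "Write a 200-word summary of unified memory on Apple Silicon.",
          "stream": False},
    timeout=300,
)
data = resp.json()
tok_per_sec = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{data['eval_count']} tokens generated at {tok_per_sec:.1f} tok/s")
```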

5. Employee Onboarding (Weeks 3–4)
  • Set up employee's desk at office: monitor, Mini, peripherals
  • Set up home station: portable monitor, carry case protocol
  • Train employee on daily workflow: how to interact with Moltbot from their laptop
  • Document standard operating procedures
  • Monitor usage for first 2 weeks, tune models and workflows

Cost Summary

Per-Employee Investment

Budget ($1,344): M4, 24GB RAM. Runs 7B–14B models. ARZOPA portable monitor, Logitech combo.
Recommended ($2,579): M4 Pro, 48GB RAM, 1TB SSD. Full breakdown below.
Premium ($3,574): M4 Pro, 64GB RAM. Runs 70B models. Dual 27" 4K USB-C monitors, dual Logitech combos.

Recommended Setup Breakdown

Item | Cost
Mac Mini M4 Pro, 48GB RAM, 1TB SSD | $1,999
Samsung ViewFinity S7 27" 4K (office desk) | $220
ARZOPA 16" portable monitor (home) | $100
Logitech MX Keys S + MX Master 3S for Mac | $170
Thunderbolt 4/5 cable (0.8m) | $35
Ethernet cable (Cat6a, 6ft) | $10
USB-C cable (portable monitor) | $15
Lacdo carrying case | $30
TOTAL | ~$2,579

Annual Cost Comparison

Comparison | Annual Cost
ChatGPT Team (per seat) | $300/yr
Claude Pro (per seat) | $240/yr
GitHub Copilot Business | $228/yr
Cloud GPU instance (comparable) | $2,400–$6,000/yr

ROI Analysis

The Moltbot pays for itself within 6–12 months compared to cloud GPU costs: against the $2,400–$6,000/yr cloud GPU comparison above, the ~$2,579 recommended setup is recovered in roughly half a year at the high end and about a year at the low end. Additional benefits: zero data leaves the premises, zero per-token API costs for local inference, always-on availability regardless of internet quality, and the hardware has a 5+ year useful life given Apple Silicon's longevity.

EVAL-MOLTBOT-001 — Internal Research — February 18, 2026
Prepared by Rob (AI Client Services) — Go AI Labs