EmpireNet - Empire Server Node Build to Service the Empire Ring Member Requests


Here’s a complete, soup-to-nuts engineering plan for your Empire Node: the physical build, power/airflow/switching, OS flash + first boot, and the inner “world” architecture, covering what each of the five Pi-5s does and how they coordinate with the Jetson Orin Nanos. I’ve also included a production-ready file tree, fully commented config/code you can drop in, and clear interaction patterns (RabbitMQ for jobs, MQTT for telemetry/IO, k3s/Docker orchestration, Cloudflare Tunnels, and Twilio webhooks).


Hardware & Enclosure Engineering​


Box & Plate​


  • Case: Rugged briefcase (Apache/Pelican style) with removable aluminum or black-plex base plate (3–4 mm).
  • Pegboard sleds: 2 removable “sleds” (upper/lower) using M3/M4 standoffs; slot-rails so you can lift the entire compute section out.
  • Inner supports: L-brackets along long edges + a mid-spine bar to stop plate flex. Add a 10 mm neoprene edge gasket to avoid vibration.
  • Vibration isolation: Silicone grommets for fans, rubber feet beneath Jetsons’ plates.

Airflow (run case open during operation per your plan)​


  • Intake: 2–3x 120 mm intakes drilled low on a sidewall. Exhaust: 2–3x 120 mm high on opposite wall to create left→right or bottom→top flow.
  • Ducting: 1 mm ABS baffles to force air across Jetson heat sinks; leave 10–15 mm plenum above Pis.
  • Fan control: 6-gang illuminated rocker bank (3 intakes, 2 exhausts, 1 for LEDs); optional MOSFET PWM board later.
  • Wire gauge: For 5 V/12 V fans at <0.2 A each, 22 AWG is fine; for the 5 V Pi rail use 16–18 AWG trunk with 22 AWG drops.

Power Architecture (simple, robust, fused)​


  • Main bus: 12 VDC (from a 120 VAC→12 VDC PSU, 300–600 W depending on how many devices).
  • 5 V rail: Dedicated buck converter 12 V→5 V 30 A (e.g., DROK/Meanwell module). This single, beefy rail feeds all Pis and 5 V accessories.
    • Inline blade fuses: one per Pi (3–5 A), one per fan bank (1 A).
    • Common ground between 12 V and 5 V rails.
  • Pi power: Prefer USB-C PD modules or direct 5 V terminals with thick leads; keep each Pi on its own fuse.
  • Jetsons: Power from the 12 V bus (each ~5–7 A load spikes). Put each Jetson behind a 10 A blade fuse.
  • Speakers: 12 V Class-D mini amp (15–30 W) + 4–8 Ω drivers for audible alerts.
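Before buying the PSU, a quick budget check is worth doing. Here’s a rough Python sketch of the math, assuming five Pi-5s at ~5 A peak on the 5 V rail, three Jetsons at ~7 A on the 12 V bus, five fans, and a 90%-efficient buck; swap in your measured numbers:

# power_budget.py - rough sanity check for the 12 V PSU and 5 V buck sizing.
# All per-device peak currents below are assumptions; measure your own hardware.

PI_COUNT, PI_PEAK_A = 5, 5.0          # Pi-5s on the 5 V rail, fused 3-5 A each
JETSON_COUNT, JETSON_PEAK_A = 3, 7.0  # Orin Nanos on the 12 V bus, 10 A fuses
FAN_COUNT, FAN_A = 5, 0.2             # 12 V fans
BUCK_RATED_A, BUCK_EFF = 30.0, 0.90   # 12 V -> 5 V converter

rail_5v_a = PI_COUNT * PI_PEAK_A                      # worst-case 5 V draw
buck_in_w = (rail_5v_a * 5.0) / BUCK_EFF              # power pulled from the 12 V bus
bus_12v_w = JETSON_COUNT * JETSON_PEAK_A * 12.0 + FAN_COUNT * FAN_A * 12.0
total_w = buck_in_w + bus_12v_w

print(f"5 V rail peak: {rail_5v_a:.0f} A (buck rated {BUCK_RATED_A:.0f} A)")
print(f"Estimated PSU load: {total_w:.0f} W -> size toward the top of the 300-600 W range")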

Switches & I/O Plate​


  • Front panel: 6-gang switch array (fans/lights), master kill switch, 2x USB-A passthroughs, 1x USB-C passthrough, 3.5 mm audio, RJ45 panel-mount for WAN.
  • Back plate: 8–16-port Gigabit switch mounted, plus PoE injector if you want cameras or Pi Zeros over PoE.

Mounting & Serviceability​


  • Everything on threaded inserts. Label every harness with heat-shrink.
  • Cable raceways: stick-on channels; Velcro for quick lifts.
  • Service loops on all USB-C/PWR leads to remove sleds without unplugging the world.



Flashing & First Boot (Zero Microsoft stack)​


Raspberry Pi 5 (Ubuntu Server 24.04 LTS, 64-bit)​


  1. Flash with Raspberry Pi Imager (set hostname, SSH enabled, user/pass, Wi-Fi if needed).
  2. First boot: sudo apt update && sudo apt full-upgrade -y && sudo rpi-eeprom-update -a.
  3. Install base: curl -fsSL https://get.docker.com | sh → add user to docker; install containerd, cgroupfs tweaks.
  4. Net: Static LAN via netplan for each role node; reserve IPs on your router.
  5. Observability agent: node_exporter, promtail (or vector), telegraf.

Jetson Orin Nano (JetPack 6.x on Ubuntu 22.04)​


  1. Flash with NVIDIA SDK Manager.
  2. Install docker, nvidia-container-toolkit, tritonserver, deepstream.
  3. Join cluster (k3s or Docker swarm) with GPU runtime enabled; label/taint the node (gpu=true).
  4. Pin power mode to MAXN; attach NVMe; set jetson_clocks on boot.



Inner World Architecture (5× Pi-5 + Jetsons)​


High-level Roles​


  • Pi-1 “COORDINATOR/CONTROL-PLANE”
    • k3s server (or Docker Swarm manager), CoreDNS, ingress (Caddy or Nginx), Consul/Serf for discovery (optional), Cloudflare Tunnel client.
    • RabbitMQ (job/command bus), Vault (secrets), MinIO client, Prometheus/Grafana.
  • Pi-2 “DATA/NAS & DB”
    • PostgreSQL primary, Timescale extension (metrics), MinIO (S3 object store), Loki (logs) + Promtail gateway.
    • Local NVMe via PCIe hat (X1001) with btrfs (snapshots) or ZFS if you prefer.
  • Pi-3 “IO & AUTOMATION”
    • Mosquitto (MQTT), Node-RED, GPIO-bridge for Pi Zero fleet, InfluxDB (if you prefer TIG), Telegraf for sensor scraping.
  • Pi-4 “COMMS & MEDIA”
    • Twilio webhook gateway (Flask/FastAPI), alerting (Apprise), TTS/Audio out to speakers, optional RTSP proxy for cams.
  • Pi-5 “EDGE SECURITY & TUNNELS”
    • Cloudflare Tunnel, WireGuard, Fail2Ban, Wazuh/Osquery agent, camera motion detectors (lightweight), backup watchdog.
  • Jetson-A/B/C “AI INFERENCE”
    • Triton Inference Server containers (TensorRT/PyTorch builds), video analytics (DeepStream), OCR/ASR, and any ML that needs CUDA.
    • All models served via HTTP/gRPC; workloads pulled from RabbitMQ job queues; telemetry back via MQTT.

Buses & Patterns​


  • Telemetry/IO: MQTT (topic hierarchy below).
  • Work/jobs/results: RabbitMQ (competing consumers, retries, DLX).
  • Deploy/orchestrate: k3s (Pi-1 control plane, others as workers; Jetsons join with nvidia runtime).
  • Discovery & edge publish: Cloudflare Tunnels on Pi-1 and Pi-5 (separate tunnels for control vs public/demo).
  • Storage: MinIO S3 for artifacts/recordings; PostgreSQL for state.



MQTT & RabbitMQ Conventions​


MQTT Topic Scheme​


  • empire/telemetry/<node>/<sensor> (read-only metrics; retained snapshot)
  • empire/cmd/<target>/<capability> (desired state commands)
  • empire/state/<device>/<capability> (device reports current state)
  • empire/alerts/<severity> (fatal|warn|info)
  • empire/edge/<zone>/<device> (GPIO, relays, greenhouse, fans, pumps, lights)

Examples:


  • empire/cmd/greenhouse/fan/set → payload: {"speed":60}
  • empire/state/greenhouse/fan → payload: {"speed":58,"rpm":1200,"temp":27.2}
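Here’s a minimal paho-mqtt sketch of those conventions (broker host and credentials are placeholders; create real users with mosquitto_passwd):

# mqtt_conventions.py - publish a command and a retained telemetry sample
# following the empire/... topic scheme. Broker host and credentials are assumptions.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.username_pw_set("empire", "change-me")   # placeholder credentials
client.connect("pi3.local", 1883, 60)

# Desired-state command: ask the greenhouse fan for 60% speed
client.publish("empire/cmd/greenhouse/fan/set", json.dumps({"speed": 60}))

# Telemetry snapshot: retained so late subscribers see the last value immediately
client.publish("empire/telemetry/pi3/cpu_temp", json.dumps({"c": 48.7}), retain=True)

client.disconnect()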

RabbitMQ Queues​


  • jobs.inference (Jetsons consume)
  • jobs.ocr, jobs.asr, jobs.cv (specialized)
  • jobs.render (if you add video overlays)
  • jobs.retry.* with dead-letter → jobs.dlx

Routing keys: empire.ai.infer, empire.cv.motion, empire.nlp.summarize, etc.
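And the publish side of the job bus, sketched with aio_pika to match the services below. It uses the default exchange with the queue name as routing key, the same pattern the Twilio gateway uses; binding these queues to a topic exchange for the empire.ai.* routing keys is an optional later step:

# publish_job.py - drop a durable inference job onto RabbitMQ (default exchange,
# routing key = queue name). Credentials/host mirror the compose files; adjust as needed.
import asyncio, json
import aio_pika

RABBIT_URL = "amqp://empire:empire@pi1.local:5672"

async def main():
    conn = await aio_pika.connect_robust(RABBIT_URL)
    async with conn:
        ch = await conn.channel()
        q = await ch.declare_queue("jobs.inference", durable=True)
        job = {"model": "default", "payload": {"text": "hello"}}
        await ch.default_exchange.publish(
            aio_pika.Message(
                body=json.dumps(job).encode(),
                delivery_mode=aio_pika.DeliveryMode.PERSISTENT,
            ),
            routing_key=q.name,
        )

asyncio.run(main())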




Repository Layout (all code + configs listed)​


root/
├─ k8s/
│ ├─ base/namespace-empire.yaml
│ ├─ core/ingress-caddy.yaml
│ ├─ core/rabbitmq.yaml
│ ├─ core/mosquitto.yaml
│ ├─ core/minio.yaml
│ ├─ core/postgres.yaml
│ ├─ core/node-red.yaml
│ ├─ obs/prometheus.yaml
│ ├─ obs/grafana.yaml
│ ├─ security/wireguard.yaml
│ └─ tunnels/cloudflare-tunnel.yaml
├─ docker/
│ ├─ compose-pi1-coordinator.yml
│ ├─ compose-pi2-data.yml
│ ├─ compose-pi3-io.yml
│ ├─ compose-pi4-comms.yml
│ ├─ compose-pi5-security.yml
│ └─ compose-jetson-ai.yml
├─ services/
│ ├─ twilio-gateway/
│ │ ├─ app.py
│ │ ├─ requirements.txt
│ │ └─ README.md
│ ├─ gpio-agent/
│ │ ├─ agent.py
│ │ ├─ config.yaml
│ │ └─ README.md
│ ├─ ai-dispatcher/
│ │ ├─ dispatcher.py
│ │ ├─ queue_schema.json
│ │ └─ README.md
│ ├─ health-watchdog/
│ │ ├─ watchdog.py
│ │ └─ systemd/health-watchdog.service
│ └─ edge-alerts/
│ ├─ alerts.py
│ └─ README.md
├─ cf/
│ ├─ tunnel-pi1.yaml
│ └─ tunnel-pi5.yaml
├─ ansible/
│ ├─ inventory.yaml
│ ├─ playbook-bootstrap.yaml
│ └─ roles/...
├─ mqtt/
│ ├─ mosquitto.conf
│ ├─ passwd
│ └─ aclfile
├─ rabbitmq/
│ ├─ definitions.json
│ └─ rabbitmq.conf
├─ postgres/
│ ├─ initdb.sql
│ └─ backup.sh
├─ node-red/
│ └─ flows.json
├─ prometheus/
│ └─ prometheus.yaml
└─ grafana/
  └─ dashboards/empire.json




Full Code/Config (commented)​


docker/compose-pi1-coordinator.yml​


version: "3.9"
services:
caddy:
image: caddy:2
restart: unless-stopped
ports: ["80:80","443:443"]
volumes:
- ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
rabbitmq:
image: rabbitmq:3.13-management
restart: unless-stopped
ports: ["5672:5672","15672:15672"]
environment:
RABBITMQ_DEFAULT_USER: empire
RABBITMQ_DEFAULT_PASS: empire
volumes:
- ../rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:ro
- ../rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
prometheus:
image: prom/prometheus
restart: unless-stopped
volumes:
- ../prometheus/prometheus.yaml:/etc/prometheus/prometheus.yml:ro
ports: ["9090:9090"]
grafana:
image: grafana/grafana
restart: unless-stopped
ports: ["3000:3000"]
volumes:
- ../grafana/dashboards:/var/lib/grafana/dashboards:ro
volumes:
caddy_data: {}
caddy_config: {}


Comments:​


- Pi-1 runs the control plane: ingress (Caddy), RabbitMQ, Prometheus, Grafana.​


- Keep cloudflared tunnel separate (cf/tunnel-pi1.yaml) to expose only needed services.​


docker/compose-pi2-data.yml​


version: "3.9"
services:
postgres:
image: postgres:16
environment:
POSTGRES_USER: empire
POSTGRES_PASSWORD: empire
POSTGRES_DB: empiredb
ports: ["5432:5432"]
volumes:
- /mnt/nvme/postgres:/var/lib/postgresql/data
- ../postgres/initdb.sql:/docker-entrypoint-initdb.d/init.sql:ro
minio:
image: quay.io/minio/minio
command: server /data --console-address ":9001"
environment:
MINIO_ROOT_USER: empire
MINIO_ROOT_PASSWORD: empire-password
ports: ["9000:9000","9001:9001"]
volumes:
- /mnt/nvme/minio:/data


Comments:​


- Put Pi-2 on NVMe with btrfs, enable snapshots; schedule pg_dump + btrfs send to a secondary drive nightly.​
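A hedged sketch of that nightly job (bucket name, dump path, and credentials are assumptions; it shells out to pg_dump and uploads with the minio Python client; run it from cron or a systemd timer):

# nightly_backup.py - pg_dump empiredb, then push the dump into MinIO.
# Bucket name, credentials, and paths are assumptions; adjust for your deployment.
import os, subprocess, datetime
from minio import Minio

stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
os.makedirs("/mnt/nvme/backups", exist_ok=True)
dump_path = f"/mnt/nvme/backups/empiredb-{stamp}.sql"

# 1) Dump the database (PGPASSWORD avoids an interactive prompt)
subprocess.run(
    ["pg_dump", "-h", "localhost", "-U", "empire", "-d", "empiredb", "-f", dump_path],
    env={**os.environ, "PGPASSWORD": "empire"},
    check=True,
)

# 2) Upload to the local MinIO; replicate off-site later with `mc mirror`
client = Minio("pi2.local:9000", access_key="empire",
               secret_key="empire-password", secure=False)
if not client.bucket_exists("backups"):
    client.make_bucket("backups")
client.fput_object("backups", os.path.basename(dump_path), dump_path)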


docker/compose-pi3-io.yml​


version: "3.9"
services:
mosquitto:
image: eclipse-mosquitto:2
ports: ["1883:1883","9001:9001"]
volumes:
- ../mqtt/mosquitto.conf:/mosquitto/config/mosquitto.conf:ro
- ../mqtt/aclfile:/mosquitto/config/aclfile:ro
- mosq_data:/mosquitto/data
nodered:
image: nodered/node-red:latest
ports: ["1880:1880"]
volumes:
- ../node-red/flows.json:/data/flows.json
volumes:
mosq_data: {}


Comments:​


- MQTT is your heartbeat. Node-RED orchestrates greenhouse GPIO via Pi Zero agents over Wi-Fi/PoE.​


docker/compose-pi4-comms.yml​


version: "3.9"
services:
twilio-gateway:
build: ../services/twilio-gateway
ports: ["8088:8088"]
environment:
TWILIO_AUTH_TOKEN: ${TWILIO_AUTH_TOKEN}
TWILIO_ACCOUNT_SID: ${TWILIO_ACCOUNT_SID}
RABBIT_URL: amqp://empire:[email protected]:5672
MQTT_URL: mqtt://pi3.local:1883
edge-alerts:
build: ../services/edge-alerts
environment:
MQTT_URL: mqtt://pi3.local:1883
AUDIO_DEVICE: default


Comments:​


- twilio-gateway receives inbound SMS/webhooks and publishes commands/jobs.​


- edge-alerts can TTS/MP3 an audible alert to the enclosure speakers.​
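A minimal sketch of what edge-alerts could look like, assuming espeak is installed in the container and the ALSA device is passed through; swap in aplay + canned MP3s if you prefer:

# alerts.py - subscribe to empire/alerts/# and speak each message on the
# enclosure speakers. Assumes `espeak` is installed and audio is exposed to
# the container; the topic filter follows the alert scheme above.
import os, subprocess
import paho.mqtt.client as mqtt

MQTT_URL = os.getenv("MQTT_URL", "mqtt://pi3.local:1883")
HOST = MQTT_URL.split("://")[1].split(":")[0]
PORT = int(MQTT_URL.split(":")[-1])

def on_connect(client, userdata, flags, rc):
    client.subscribe("empire/alerts/#")

def on_message(client, userdata, msg):
    severity = msg.topic.rsplit("/", 1)[-1]          # info | warn | fatal
    text = msg.payload.decode(errors="replace")
    subprocess.call(["espeak", f"{severity} alert. {text}"])

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(HOST, PORT, 60)
client.loop_forever()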


docker/compose-pi5-security.yml​


version: "3.9"
services:
cloudflared:
image: cloudflare/cloudflared:latest
command: tunnel --config /etc/cloudflared/config.yaml run
volumes:
- ../cf/tunnel-pi5.yaml:/etc/cloudflared/config.yaml:ro
wireguard:
image: lscr.io/linuxserver/wireguard
cap_add: [ "NET_ADMIN", "SYS_MODULE" ]
sysctls:
- net.ipv4.conf.all.src_valid_mark=1
volumes:
- ./wg:/config
ports: ["51820:51820/udp"]


Comments:​


- Pi-5 is your security edge: tunnels + remote admin path over WireGuard.​


docker/compose-jetson-ai.yml​


version: "3.9"
services:
triton:
image: nvcr.io/nvidia/tritonserver:24.05-py3
runtime: nvidia
environment:
- NVIDIA_VISIBLE_DEVICES=all
command: tritonserver --model-repository=/models --http-port=8000 --grpc-port=8001
ports: ["8000:8000","8001:8001"]
volumes:
- /opt/models:/models:ro
ai-dispatcher:
build: ../services/ai-dispatcher
environment:
RABBIT_URL: amqp://empire:[email protected]:5672
TRITON_URL: http://localhost:8000
MQTT_URL: mqtt://pi3.local:1883


Comments:​


- Jetson runs Triton; ai-dispatcher consumes jobs, calls Triton, publishes results.​




mqtt/mosquitto.conf​


listener 1883
protocol mqtt
listener 9001
protocol websockets
allow_anonymous false
# passwd is created with mosquitto_passwd; aclfile holds the per-topic ACLs
password_file /mosquitto/config/passwd
acl_file /mosquitto/config/aclfile
persistence true
persistence_location /mosquitto/data/


Comments:​


- Use mosquitto_passwd to create the empire users in the passwd file and define topic ACLs in aclfile; keep telemetry retained (persistence carries retained messages across restarts).


rabbitmq/rabbitmq.conf​


loopback_users.guest = false
management.load_definitions = /etc/rabbitmq/definitions.json


rabbitmq/definitions.json (minimal)​


{
 "users":[{"name":"empire","password_hash":"","hashing_algorithm":"rabbit_password_hashing_sha256","tags":"administrator"}],
 "vhosts":[{"name":"/"}],
 "permissions":[{"user":"empire","vhost":"/","configure":".*","write":".*","read":".*"}],
 "queues":[
  {"name":"jobs.inference","vhost":"/","durable":true,"auto_delete":false,"arguments":{}},
  {"name":"jobs.ocr","vhost":"/","durable":true,"auto_delete":false,"arguments":{}},
  {"name":"jobs.asr","vhost":"/","durable":true,"auto_delete":false,"arguments":{}},
  {"name":"jobs.dlx","vhost":"/","durable":true,"auto_delete":false,"arguments":{}}
 ],
 "policies":[
  {"vhost":"/","name":"dlx","pattern":"^jobs\\.","apply-to":"queues","definition":{"dead-letter-exchange":"","dead-letter-routing-key":"jobs.dlx"}}
 ]
}


Comment: fill in the empire user's password_hash before importing (generate it with rabbitmqctl hash_password, or set the password from the management UI afterward).




services/twilio-gateway/app.py (FastAPI, SMS in → MQ/MQTT)​


from fastapi import FastAPI, Request
import os, asyncio, aio_pika, paho.mqtt.client as mqtt


app = FastAPI()
RABBIT_URL = os.getenv("RABBIT_URL","amqp://empire:empire@localhost:5672")
MQTT_URL = os.getenv("MQTT_URL","mqtt://localhost:1883")


Lazy MQTT connect (simple):​


_mqtt = mqtt.Client()
def _connect_mqtt():
    host = MQTT_URL.split("://")[1].split(":")[0]
    port = int(MQTT_URL.split(":")[-1])
    _mqtt.connect(host, port, 60)


@app.post("/twilio/sms")
async def sms_webhook(request: Request):
form = await request.form()
from_num = form.get("From","")
body = (form.get("Body","") or "").strip()



<span><span><span># Publish command over RabbitMQ for downstream processors</span></span><span><br>conn = </span><span><span>await</span></span><span> aio_pika.connect_robust(RABBIT_URL)<br></span><span><span>async</span></span><span> </span><span><span>with</span></span><span> conn:<br> ch = </span><span><span>await</span></span><span> conn.channel()<br> q = </span><span><span>await</span></span><span> ch.declare_queue(</span><span><span>"jobs.inference"</span></span><span>, durable=</span><span><span>True</span></span><span>)<br> </span><span><span>await</span></span><span> ch.default_exchange.publish(<br> aio_pika.Message(body=</span><span><span>f'{{"from":"<span>{from_num}</span></span></span><span>","cmd":"</span><span><span>{body}</span></span><span>"}}'.encode()),<br> routing_key=q.name<br> )<br><br></span><span><span># Also fanout to MQTT so Node-RED flows can react locally</span></span><span><br></span><span><span>try</span></span><span>:<br> _connect_mqtt()<br> _mqtt.publish(</span><span><span>"empire/alerts/info"</span></span><span>, </span><span><span>f"SMS <span>{from_num}</span></span></span><span>: </span><span><span>{body}</span></span><span>")<br> _mqtt.disconnect()<br></span><span><span>except</span></span><span>:<br> </span><span><span>pass</span></span><span><br><br></span><span><span># Minimal TwiML response</span></span><span><br></span><span><span>return</span></span><span> {</span><span><span>"message"</span></span><span>:</span><span><span>"ok"</span></span><span>}<br></span></span>

Comments:​


- Add signature validation with TWILIO_AUTH_TOKEN for production.​


- Use Rabbit for durable commands and MQTT for fast local reactions.​
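Here’s a hedged sketch of that signature check using the validator from the official twilio package; PUBLIC_URL must be the exact URL Twilio calls (your tunnel hostname), which is an assumption here:

# Signature validation sketch for the /twilio/sms route, using the official
# twilio helper library. PUBLIC_URL must match the URL Twilio actually calls
# (i.e., the Cloudflare-tunnel hostname), which is an assumption in this sketch.
import os
from fastapi import HTTPException, Request
from twilio.request_validator import RequestValidator

validator = RequestValidator(os.getenv("TWILIO_AUTH_TOKEN", ""))
PUBLIC_URL = os.getenv("PUBLIC_URL", "https://example.your-tunnel-domain/twilio/sms")

async def require_valid_twilio_signature(request: Request) -> dict:
    form = await request.form()
    signature = request.headers.get("X-Twilio-Signature", "")
    if not validator.validate(PUBLIC_URL, dict(form), signature):
        raise HTTPException(status_code=403, detail="invalid Twilio signature")
    return dict(form)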


services/ai-dispatcher/dispatcher.py (jobs → Triton → results)​


import os, json, asyncio, aiohttp, aio_pika, paho.mqtt.client as mqtt


RABBIT_URL = os.getenv("RABBIT_URL","amqp://empire:empire@localhost:5672")
TRITON_URL = os.getenv("TRITON_URL","http://localhost:8000")
MQTT_URL = os.getenv("MQTT_URL","mqtt://localhost:1883")


async def handle_job(body):
    job = json.loads(body.decode())
    model = job.get("model","default")
    payload = job.get("payload",{})
    # Example HTTP infer call (pseudo):
    async with aiohttp.ClientSession() as s:
        # Replace with real Triton infer endpoint
        async with s.post(f"{TRITON_URL}/v2/models/{model}/infer", json=payload) as r:
            result = await r.json()
    # Publish result to MQTT
    client = mqtt.Client()
    host = MQTT_URL.split("://")[1].split(":")[0]
    port = int(MQTT_URL.split(":")[-1])
    client.connect(host, port, 60)
    client.publish("empire/results/ai", json.dumps({"model":model,"result":result}))
    client.disconnect()


async def main():
    conn = await aio_pika.connect_robust(RABBIT_URL)
    ch = await conn.channel()
    q = await ch.declare_queue("jobs.inference", durable=True)
    async with q.iterator() as queue_iter:
        async for msg in queue_iter:
            async with msg.process():
                try:
                    await handle_job(msg.body)
                except Exception as e:
                    # TODO: publish to DLQ or alert
                    pass


if __name__ == "__main__":
    asyncio.run(main())


Comments:​


- Replace Triton POST with your model’s actual input format.​


- Add retries + dead-letter handling.​
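For reference, Triton’s HTTP endpoint speaks the KServe v2 inference protocol, so the body you POST to /v2/models/&lt;model&gt;/infer looks roughly like this; tensor names, shapes, and datatypes below are placeholders for your model:

# Illustrative KServe v2 request shape for Triton's HTTP API.
# The tensor names, shapes, datatypes, and data are placeholders for your model.
infer_request = {
    "inputs": [
        {"name": "INPUT__0", "shape": [1, 4], "datatype": "FP32",
         "data": [0.1, 0.2, 0.3, 0.4]}
    ],
    "outputs": [{"name": "OUTPUT__0"}]
}
# POST this as JSON to {TRITON_URL}/v2/models/{model}/infer; the response carries
# an "outputs" list with the same name/shape/datatype/data fields.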


services/gpio-agent/agent.py (Pi Zero side; GPIO via MQTT)​


import RPi.GPIO as GPIO
import json, paho.mqtt.client as mqtt, time, os


BROKER = os.getenv("BROKER","pi3.local")
TOP_CMD = "empire/cmd/greenhouse/#"
PINMAP = {"fan1":17,"fan2":27,"pump1":22,"light1":23} # BCM pins


GPIO.setmode(GPIO.BCM)
for pin in PINMAP.values():
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)


def on_message(client, userdata, msg):
    try:
        payload = json.loads(msg.payload.decode())
    except:
        payload = {}
    topic = msg.topic.split("/")[-1] # e.g. fan1
    state = payload.get("state")
    if topic in PINMAP and state in ["ON","OFF"]:
        GPIO.output(PINMAP[topic], GPIO.HIGH if state=="ON" else GPIO.LOW)
        client.publish(f"empire/state/greenhouse/{topic}", json.dumps({"state":state}))


client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER,1883,60)
client.subscribe(TOP_CMD)
client.loop_forever()


Comments:​


- Run this on each Pi Zero near relays. Keep wiring short; use SSR/relay boards with flyback protection.​


services/health-watchdog/watchdog.py (self-heal + alert)​


import os, time, socket, subprocess, paho.mqtt.client as mqtt


BROKER = os.getenv("BROKER","pi3.local")
CHECKS = [
    ("rabbitmq", ["bash","-lc","nc -zv pi1.local 5672"]),
    ("postgres", ["bash","-lc","nc -zv pi2.local 5432"]),
    ("mqtt", ["bash","-lc","nc -zv pi3.local 1883"])
]


def alert(msg):
    c = mqtt.Client()
    c.connect(BROKER,1883,60)
    c.publish("empire/alerts/warn", msg)
    c.disconnect()


while True:
    for name, cmd in CHECKS:
        rc = subprocess.call(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if rc != 0:
            alert(f"CHECK_FAIL:{name}")
    time.sleep(15)


systemd unit (services/health-watchdog/systemd/health-watchdog.service)​


[Unit]
Description=Empire Health Watchdog
After=network-online.target
[Service]
ExecStart=/usr/bin/python3 /opt/health-watchdog/watchdog.py
Restart=always
[Install]
WantedBy=multi-user.target


Comments:​


- Extend with Prometheus blackbox_exporter later; this is your quick audible/visual fallback.​


cf/tunnel-pi1.yaml (Cloudflare Tunnel minimal)​


tunnel: empire-control
credentials-file: /etc/cloudflared/empire-control.json
ingress:
  # add hostname rules for the services you actually want exposed, e.g. Grafana:
  # - hostname: <control-hostname>
  #   service: http://localhost:3000
  - service: http_status:404   # catch-all rule; cloudflared requires one


Comments:​


- Use a separate tunnel on Pi-5 for “public demo” endpoints to isolate blast radius.​


postgres/initdb.sql (seed, timescale optional)​


CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE SCHEMA IF NOT EXISTS empire;
CREATE TABLE IF NOT EXISTS empire.metrics(
ts TIMESTAMPTZ NOT NULL,
source TEXT NOT NULL,
key TEXT NOT NULL,
value DOUBLE PRECISION,
PRIMARY KEY (ts, source, key)
);


Comments:​


- Write Telegraf→Postgres or Prometheus→remote-write depending on your preference.​
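If you want to write rows directly while you decide, here’s a small psycopg2 sketch that matches the schema above (host and credentials mirror compose-pi2-data.yml):

# write_metric.py - insert one sample into empire.metrics on Pi-2.
# Host and credentials mirror compose-pi2-data.yml; adjust as needed.
import datetime
import psycopg2

conn = psycopg2.connect(host="pi2.local", dbname="empiredb",
                        user="empire", password="empire")
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO empire.metrics (ts, source, key, value) "
        "VALUES (%s, %s, %s, %s) ON CONFLICT DO NOTHING",
        (datetime.datetime.now(datetime.timezone.utc), "pi3", "greenhouse.temp_c", 27.2),
    )
conn.close()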


node-red/flows.json​


[
{
"id":"empire-fan-flow",
"type":"tab",
"label":"Greenhouse",
"disabled":false
}
]


Comments:​


- Import your flows JSON from the Node-RED editor; this is a placeholder file.​




How They Interact (walkthrough)​


  1. Inbound command (SMS) → Twilio → Pi-4 /twilio/sms
    • The gateway drops a durable job on RabbitMQ (jobs.inference) and also emits an MQTT notification empire/alerts/info.
  2. Jetson dispatcher (ai-dispatcher) consumes the job, calls local Triton, publishes result to empire/results/ai (MQTT), and can also push a callback to a RabbitMQ “results” queue for audit.
  3. Node-RED reacts to MQTT (e.g., temperature high) and publishes GPIO commands to empire/cmd/greenhouse/.... The nearest Pi Zero flips relays and reports empire/state/....
  4. Health-watchdog on each Pi checks critical ports; any failure = audible alert (Pi-4 speakers) + MQTT warn + (optional) SMS via your gateway.
  5. Backups: Pi-2 runs nightly pg_dump to MinIO; btrfs snapshots replicate to a USB SSD or a second NVMe. MinIO buckets replicate to an off-site MinIO via Cloudflare Tunnel + mc mirror.
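If you want that reaction logic outside Node-RED (for testing, or as a fallback), a small Python subscriber can close the loop; the 30 °C threshold and exact topics are illustrative:

# react.py - minimal stand-in for the Node-RED reaction flow: watch greenhouse
# state reports and issue a fan command when temperature crosses a threshold.
# The 30 C threshold and topic names are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    try:
        data = json.loads(msg.payload.decode())
    except ValueError:
        return
    temp = data.get("temp")
    if temp is not None and temp > 30.0:
        client.publish("empire/cmd/greenhouse/fan/set", json.dumps({"speed": 100}))
        client.publish("empire/alerts/warn", f"greenhouse temp high: {temp}")

client = mqtt.Client()
client.on_message = on_message
client.connect("pi3.local", 1883, 60)
client.subscribe("empire/state/greenhouse/fan")
client.loop_forever()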



Assembly Order of Operations (quick checklist)​


  1. Cut/drill fan arrays and I/O plates; install fan grills, silicone grommets.
  2. Mount base plate + mid-spine + standoffs; test-fit sleds.
  3. Install PSU (12 V), 12→5 V buck, fused blade block, distribution buses.
  4. Terminate wiring: 16–18 AWG trunk for 5 V rail, 22 AWG drops; label all.
  5. Install switch bank (fans/lights), test with dummy load; polarity check.
  6. Mount ethernet switch; dry-fit Pis and Jetsons; secure with countersunk screws.
  7. Speakers + mini amp; route 3.5 mm from Pi-4 or USB DAC.
  8. Flash OS on SD/NVMe; first boot; set static IPs; install docker + agents.
  9. Bring up compose stacks per node; validate with Grafana and MQTT explorer.
  10. Run watchdog, simulate failures, verify alerts (speaker + MQTT + SMS).
  11. Enable Cloudflare tunnels and WireGuard; verify remote ops.



Ideas & Upgrades​


  • Add a tiny OLED status panel driven by MQTT (shows temps, queue depth, alerts).
  • HAT-less GPIO: Use ribbon to a DIN-rail relay bay on the case wall; keep high-current away from compute.
  • Power telemetry: INA219 current sensors reporting to MQTT (see the sketch after this list).
  • Thermal zoning: PWM fan curve driven by Jetson die temp via Node-RED function node.
  • Cold-spare SD/OS: Keep a cloned SD in a labeled slot inside the lid for field swaps.
  • Immutable config: Use Ansible playbook-bootstrap.yaml to converge nodes in one shot.
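Here’s a hedged sketch of the INA219 idea, using the Adafruit CircuitPython driver; the sensor address, publish interval, and topic name are assumptions:

# power_telemetry.py - read an INA219 over I2C and publish bus voltage/current
# to MQTT every few seconds. Uses the adafruit-circuitpython-ina219 driver;
# sensor address, interval, and topic are assumptions for this sketch.
import json, time
import board
import adafruit_ina219
import paho.mqtt.client as mqtt

i2c = board.I2C()
ina = adafruit_ina219.INA219(i2c)          # default address 0x40

client = mqtt.Client()
client.connect("pi3.local", 1883, 60)
client.loop_start()

while True:
    sample = {"bus_v": round(ina.bus_voltage, 3),      # volts
              "current_ma": round(ina.current, 1)}     # milliamps
    client.publish("empire/telemetry/power/5v_rail", json.dumps(sample), retain=True)
    time.sleep(5)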
 