Every part on the bench, by model number, with the price we paid and where we bought it. The point of this page is reproducibility — clone these two builds and you can run the same Cortex + Mercury demos at home.
| Component | Exact part | Source | Price |
|---|---|---|---|
| GPU | NVIDIA GeForce RTX 5090 (Founders Edition or AIB) · 32 GB GDDR7 · sm_120 (Blackwell) · 575 W TDP · PCIe 5.0 ×16 | Best Buy / Newegg / Micro Center | $2,000–2,800 |
| CPU | AMD Ryzen 7 9800X3D · 8c / 16t · 4.7 GHz base, 5.2 GHz boost · 96 MB L3 (3D V-Cache) · AM5 · 120 W TDP | Micro Center / Amazon | $479 |
| Motherboard | MSI X870E-P PRO · AM5 · ATX · WiFi 7 · 2× DDR5 DIMM · PCIe 5.0 ×16 + ×4 · 4× M.2 | Newegg | $259 |
| RAM | G.Skill Trident Z5 Neo 64 GB DDR5-4800 (2× 32 GB) CL30 · EXPO ready · running at JEDEC 4800 for stability | Newegg / Amazon | $219 |
| Storage (system) | Samsung 990 PRO 2 TB NVMe Gen 4 · 7,450 MB/s read · 6,900 MB/s write · 5-year warranty | Best Buy | $179 |
| Storage (data, D:\) | Crucial T705 4 TB NVMe Gen 5 · 14,500 MB/s read — TRIBE weights + `tribev2_cache` live here | Newegg | $439 |
| PSU | Corsair RM1000x SHIFT 1000 W 80+ Gold · 12V-2×6 connector, native 600 W to the RTX 5090 | Amazon | $199 |
| Cooler (CPU) | Noctua NH-D15 G2 air cooler — keeps the 9800X3D under 70 °C at full load, no AIO maintenance | Amazon | $149 |
| Case | Fractal Design Torrent · mesh-front airflow case · clears the 5090 with room for cables | Newegg | $199 |
| Case fans | Noctua NF-A14 PWM ×3 | Amazon | $95 |
| Total | | | $4,216–5,016 |
The 5090 was picked the day it was announced because 32 GB GDDR7 is the cheapest way to fit TRIBE (~6 GB) + Gemma 4 26B (~21 GB) on the same card without quantization. The 9800X3D is the fastest gaming + workstation chip in 2025–26, which matters because Soumit games on this box too. The mesh-front case + Noctua air cooling means there's no AIO pump to fail at 3 AM in the middle of a demo.
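The VRAM math behind that choice can be sketched in a few lines. The per-model sizes are the rough figures quoted above; the runtime-overhead line is an assumption, not a measured number:

```python
# VRAM budget for the 32 GB RTX 5090. Model sizes are the estimates
# quoted in the text; the overhead entry is an assumption.
VRAM_GB = 32
resident = {
    "TRIBE": 6,                          # ~6 GB (from the text)
    "Gemma 4 26B": 21,                   # ~21 GB unquantized (from the text)
    "CUDA context + fragmentation": 2,   # assumed runtime overhead
}
used = sum(resident.values())
print(f"used {used} GB of {VRAM_GB} GB, headroom {VRAM_GB - used} GB")
```

Even with a couple of gigabytes of runtime overhead, both models fit without quantization, which a 24 GB card cannot do.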
| Spec | This machine | Notes |
|---|---|---|
| Model | MacBook Pro 16" (M4 Max, 2024) | Order page: apple.com/shop/buy-mac/macbook-pro/16-inch-m4-max |
| Chip | Apple M4 Max · 16-core CPU · 40-core GPU · 16-core Neural Engine | Top M4 Max bin (vs. 14-core CPU base) |
| Memory | 48 GB unified | Upgrade option from 36 GB; 64 GB is the next step (+$400). 48 GB chosen because TRIBE + Gemma 4 26B comfortably fits. |
| Storage | 1 TB NVMe | Up from 512 GB base. ~7.4 GB/s read. |
| Display | 16.2" Liquid Retina XDR · 3456 × 2234 · 1000 nits sustained · 1600 nits peak HDR | Useful for the 3D brain visualization preview |
| Charger | 140 W USB-C (in-box) | Yes, the full 140 W is required to fast-charge under load |
| Total ordered config | $4,199 | |
The 48 GB tier is the cheapest way to comfortably run Gemma 4 26B (4-bit, ~16 GB) plus TRIBE on MPS (~6 GB) plus macOS overhead (~10 GB) plus dev tools — leaving ~12-15 GB headroom. The 64 GB tier (+$400) is overkill for our workload; the 128 GB tier is for people training models, not running them.
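As a back-of-envelope check on that headroom claim (the model, TRIBE, and macOS figures are the estimates above; the dev-tools figure is an assumption):

```python
# Unified-memory budget on the 48 GB M4 Max. Model and OS figures are
# the rough estimates from the paragraph above; dev tools is assumed.
TOTAL_GB = 48
budget = {
    "Gemma 4 26B (4-bit)": 16,
    "TRIBE on MPS": 6,
    "macOS overhead": 10,
    "dev tools": 3,   # assumed: editor + browser + terminal
}
headroom = TOTAL_GB - sum(budget.values())
print(f"headroom: ~{headroom} GB")
```

That lands inside the ~12–15 GB range quoted above; on the 36 GB tier the same budget would leave almost nothing.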
Standby tertiary node. Lives next to the ISU campus for in-person research meetings with collaborators. Joins the Tailscale mesh but isn't normally in the inference pool.
Edge node planned for AdGuard Home + Tailscale subnet routing for the lab. NOT for LLM inference — too small. Status: SD card prep + AdGuard install scripts written, hardware not flashed yet.
| Seratonin | Big Apple | |
|---|---|---|
| Architecture | x86_64 + NVIDIA Blackwell | ARM64 + Apple M4 Max (unified) |
| Memory model | 64 GB system + 32 GB dedicated VRAM | 48 GB unified (CPU + GPU share) |
| Memory bandwidth (GPU) | 1.79 TB/s | 546 GB/s |
| Gemma 4 E4B throughput | 194 tok/s | ~90 tok/s |
| TRIBE v2 inference time / scan | ~3 min (CUDA) | WIP (MPS, neuralset device-string PR pending) |
| Idle power | ~50 W | ~7 W |
| Sustained ML power | ~450 W | ~60 W |
| Acoustic (under load) | ~38 dB (Noctua air) | ~22 dB (clamshell, fans on) |
| Hardware cost | $4,000 (build) | $4,199 (Apple direct) |
| Energy cost / month (24/7 idle + occasional load) | ~$31 | ~$5 |
| Best at | Heavy-throughput training + concurrent demos | Cool, quiet, portable, low-watt narration server |
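The energy rows above fall out of simple duty-cycle arithmetic. A sketch, where the electricity rate ($0.30/kWh) and the ~5 h/day of sustained load are assumptions — plug in your own utility rate:

```python
def monthly_cost_usd(idle_w, load_w, load_h_per_day, usd_per_kwh=0.30, days=30):
    """Monthly cost for a box that idles 24/7 except load_h_per_day at full tilt."""
    idle_kwh = idle_w / 1000 * (24 - load_h_per_day) * days
    load_kwh = load_w / 1000 * load_h_per_day * days
    return (idle_kwh + load_kwh) * usd_per_kwh

# Assumed rate and load hours; power draws are from the table above.
print(f"Seratonin: ${monthly_cost_usd(50, 450, 5):.2f}/mo")
print(f"Big Apple: ${monthly_cost_usd(7, 60, 5):.2f}/mo")
```

With those assumptions the figures land near the table's ~$31 and ~$5 per month; the exact numbers are dominated by your utility rate and how many hours a day the GPU actually runs hot.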