Network Architecture
Network Ranges
| Network | Range | Purpose | Gateway |
|---|---|---|---|
| LAN | 10.10.10.0/24 | Primary network, management, general access | 10.10.10.1 (UniFi Router) |
| Storage/Internal | 10.10.20.0/24 | Inter-VM traffic, NFS/iSCSI, no external access | 10.10.20.1 (vmbr3) |
| Tailscale | 100.64.0.0/10 | VPN overlay for remote access | N/A |
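When assigning a new address, the ranges above can be sanity-checked with a small helper. This is an illustrative sketch only; `classify_net` is not part of the homelab tooling, and the match is a coarse prefix check rather than full CIDR math:

```shell
#!/bin/sh
# classify_net: map an IP to the documented network range by prefix match.
# Hypothetical helper for illustration - not part of the homelab tooling.
classify_net() {
  case "$1" in
    10.10.10.*) echo "LAN" ;;
    10.10.20.*) echo "Storage/Internal" ;;
    100.*)      echo "Tailscale" ;;
    *)          echo "unknown" ;;
  esac
}

classify_net 10.10.10.120   # LAN (PVE)
classify_net 10.10.20.201   # Storage/Internal (copyparty internal NIC)
```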
PVE (10.10.10.120) - Network Bridges
Physical NICs
| Interface | Speed | Type | MAC Address | Connected To |
|---|---|---|---|---|
| enp1s0 | 1 Gbps | Onboard NIC | e0:4f:43:e6:41:6c | Switch → UniFi eth5 |
| enp35s0f0 | 10 Gbps | Intel X550 Port 0 | b4:96:91:39:86:98 | Switch → UniFi eth5 |
| enp35s0f1 | 10 Gbps | Intel X550 Port 1 | b4:96:91:39:86:99 | Switch → UniFi eth5 |
Note: All three NICs connect through a switch to the UniFi Gateway's 10Gb SFP+ port (eth5). No direct firewall connection.
Bridge Configuration
vmbr0 - Management Bridge (1Gb)
- Physical NIC: enp1s0 (1 Gbps onboard)
- IP: 10.10.10.120/24
- Gateway: 10.10.10.1
- MTU: 9000
- Purpose: General VM/CT networking, management access
- Use for: Most VMs and containers that need basic internet access
VMs/CTs on vmbr0:
| VMID | Name | IP |
|---|---|---|
| 105 | fs-dev | 10.10.10.5 |
| 110 | homeassistant | 10.10.10.110 |
| 201 | copyparty | DHCP |
| 206 | docker-host | 10.10.10.206 |
| 200 | pihole (CT) | 10.10.10.10 |
| 205 | findshyt (CT) | 10.10.10.205 |
vmbr1 - High-Speed LXC Bridge (10Gb)
- Physical NIC: enp35s0f0 (10 Gbps Intel X550)
- IP: 10.10.10.121/24
- Gateway: 10.10.10.1
- MTU: 9000
- Purpose: High-bandwidth LXC containers and VMs
- Use for: Containers/VMs that need high throughput to the network
VMs/CTs on vmbr1:
| VMID | Name | IP |
|---|---|---|
| 111 | lmdev1 | 10.10.10.111 |
vmbr2 - High-Speed VM Bridge (10Gb)
- Physical NIC: enp35s0f1 (10 Gbps Intel X550)
- IP: 10.10.10.122/24
- Gateway: (none configured)
- MTU: 9000
- Purpose: High-bandwidth VMs, storage traffic
- Use for: VMs that need high throughput (TrueNAS, Saltbox)
VMs/CTs on vmbr2:
| VMID | Name | IP |
|---|---|---|
| 100 | truenas | 10.10.10.200 |
| 101 | saltbox | 10.10.10.100 |
| 202 | traefik (CT) | 10.10.10.250 |
vmbr3 - Internal-Only Bridge (Virtual)
- Physical NIC: None (isolated virtual network)
- IP: 10.10.20.1/24
- Gateway: N/A (no external routing)
- MTU: 9000
- Purpose: Inter-VM communication without external access
- Use for: Storage traffic (NFS/iSCSI), internal APIs, secure VM-to-VM
VMs with secondary interface on vmbr3:
| VMID | Name | Internal IP | Notes |
|---|---|---|---|
| 100 | truenas | (check TrueNAS config) | NFS/iSCSI server |
| 101 | saltbox | (check VM config) | Media storage access |
| 111 | lmdev1 | (check VM config) | AI model storage |
| 201 | copyparty | 10.10.20.201 | Confirmed via cloud-init |
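For reference, an isolated bridge like vmbr3 is typically declared on the Proxmox host in `/etc/network/interfaces` with no physical port attached. A sketch of what that stanza usually looks like (verify against the actual PVE config before relying on it):

```
auto vmbr3
iface vmbr3 inet static
        address 10.10.20.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        mtu 9000
```

`bridge-ports none` is what makes the bridge purely virtual: traffic stays between guests attached to it and the host.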
PVE2 (10.10.10.102) - Network Bridges
Physical NICs
| Interface | Speed | Type | MAC Address | Connected To |
|---|---|---|---|---|
| nic0 | Unknown | Unused | e0:4f:43:e6:1b:e3 | Not connected |
| nic1 | 10 Gbps | Primary NIC | a0:36:9f:26:b9:bc | Direct to UCG-Fiber (10Gb negotiated) |
Note: PVE2 connects directly to the UCG-Fiber. Link negotiates at 10Gb.
Bridge Configuration
vmbr0 - Single Bridge (10Gb)
- Physical NIC: nic1 (10 Gbps)
- IP: 10.10.10.102/24
- Gateway: 10.10.10.1
- Purpose: All VMs on PVE2
VMs on vmbr0:
| VMID | Name | IP |
|---|---|---|
| 300 | gitea-vm | 10.10.10.220 |
| 301 | trading-vm | 10.10.10.221 |
Which Bridge to Use?
| Scenario | Bridge | Reason |
|---|---|---|
| General VM/CT | vmbr0 | Standard networking, 1Gb is sufficient |
| High-bandwidth VM (media, AI) | vmbr1 or vmbr2 | 10Gb for large file transfers |
| Storage-heavy VM (NAS access) | vmbr2 + vmbr3 | 10Gb external + internal storage network |
| Isolated internal service | vmbr3 only | No external access, secure |
| VM needing both external + internal | vmbr0/1/2 + vmbr3 | Dual-homed configuration |
Traffic Flow
Internet
│
▼
┌─────────────────────────────────────────────────────────────┐
│ UCG-Fiber (10.10.10.1) │
│ │
│ eth5 (10Gb SFP+) switch0 (eth0-eth4, 10Gb) │
│ │ │ │
└────────┼───────────────────────────────┼────────────────────┘
│ │
▼ │
┌─────────────────────┐ │
│ 10Gb Switch │ │
└─────────────────────┘ │
│ │ │ │
│ │ │ │
▼ ▼ ▼ ▼
enp1s0 enp35s0f0 enp35s0f1 nic1
(1Gb) (10Gb) (10Gb) (10Gb)
│ │ │ │
▼ ▼ ▼ ▼
vmbr0 vmbr1 vmbr2 vmbr0
│ │ │ │
│ │ │ │
PVE PVE PVE PVE2
General lmdev1 TrueNAS, gitea-vm,
VMs Saltbox, trading-vm
Traefik
Internal Only (no external access):
┌─────────────────────────────────────┐
│ vmbr3 (10.10.20.0/24) - Virtual │
│ No physical NIC - inter-VM only │
│ │
│ TrueNAS ◄──► Saltbox │
│ ▲ ▲ │
│ │ │ │
│ └─── lmdev1 ┘ │
│ ▲ │
│ │ │
│ copyparty │
└─────────────────────────────────────┘
Determining Physical Connections
To determine which 10Gb port goes where, check:
- Physical cable tracing - Follow cables from server to switch/firewall
- Switch port status - Check UniFi controller for connected ports
- MAC addresses - Compare `ip link show` MACs with the switch ARP table
```shell
# On PVE - get MAC addresses
ip link show enp35s0f0 | grep ether
ip link show enp35s0f1 | grep ether

# On router - check ARP
ssh root@10.10.10.1 'cat /proc/net/arp'
```
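When tracing programmatically, the MAC can be pulled out of `ip link` output with a small awk filter. The sample line below is canned output for illustration; on the host you would pipe `ip link show enp35s0f0` into the same filter:

```shell
#!/bin/sh
# Demo: extract the MAC from `ip link show`-style output with awk.
# `sample` is canned output for illustration.
sample='2: enp35s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP
    link/ether b4:96:91:39:86:98 brd ff:ff:ff:ff:ff:ff'

mac=$(printf '%s\n' "$sample" | awk '/link\/ether/ {print $2}')
echo "$mac"   # b4:96:91:39:86:98
```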
Adding a New VM to a Specific Network
```shell
# Add VM to vmbr0 (standard)
qm set VMID --net0 virtio,bridge=vmbr0

# Add VM to vmbr2 (10Gb)
qm set VMID --net0 virtio,bridge=vmbr2

# Add second NIC for internal network
qm set VMID --net1 virtio,bridge=vmbr3

# For containers
pct set CTID --net0 name=eth0,bridge=vmbr0,ip=10.10.10.XXX/24,gw=10.10.10.1
```
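To confirm which bridge a NIC landed on, `qm config VMID` prints the net lines, which can be parsed with sed. The `netline` below is a canned sample (MAC and options are illustrative):

```shell
#!/bin/sh
# Demo: pull the bridge name out of a `qm config VMID` net line.
# `netline` is canned sample output; on the host: qm config VMID | grep ^net
netline='net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr2,mtu=9000'
bridge=$(printf '%s\n' "$netline" | sed -n 's/.*bridge=\([^,]*\).*/\1/p')
echo "$bridge"   # vmbr2
```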
MTU Configuration
All bridges use MTU 9000 (jumbo frames) for optimal storage performance.
If adding a new VM that will access NFS/iSCSI storage, ensure the guest OS also uses MTU 9000:
```shell
# Linux guest (runtime only, lost on reboot)
ip link set eth0 mtu 9000
```

To make it permanent with netplan, add the MTU to the interface definition (the schema requires `version: 2`):

```yaml
# /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      mtu: 9000
```

Then apply with `netplan apply`.
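To confirm jumbo frames actually work end-to-end (a mismatched MTU anywhere on the path causes fragmentation or stalls), ping with the don't-fragment flag and the largest payload that fits: MTU 9000 minus 20 bytes IPv4 header minus 8 bytes ICMP header = 8972. A sketch, using the vmbr3 gateway as an example target:

```shell
#!/bin/sh
# Largest ICMP payload that fits in a 9000-byte MTU frame:
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"   # 8972

# With -M do (don't fragment), this succeeds only if every hop honours MTU 9000.
# Run from a guest on the internal network:
# ping -c 3 -M do -s "$payload" 10.10.20.1
```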