Compare commits


22 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Hutson | 38a7a2c52e | Auto-sync: 20260123-015626 | 2026-01-23 01:56:27 -05:00 |
| Hutson | 52d8f2f133 | Add central configuration reference section<br>Reference ~/.secrets, ~/.hosts, and ~/.ssh/config for centralized credentials and host management. Includes homelab-specific variables for Syncthing, Home Assistant, n8n, and Cloudflare.<br>Co-Authored-By: Claude Opus 4.5 `<noreply@anthropic.com>` | 2026-01-20 15:13:16 -05:00 |
| Hutson | 80b6ab43d3 | Auto-sync: 20260120-145048 | 2026-01-20 14:50:49 -05:00 |
| Hutson | 6932ee1ca9 | Auto-sync: 20260116-161159 | 2026-01-16 16:12:19 -05:00 |
| Hutson | 42cfdd8552 | Auto-sync: 20260116-155016 | 2026-01-16 15:50:17 -05:00 |
| Hutson | d54447949e | Add Oura Ring integration and automations documentation<br>Document HACS and Oura Ring v2 integration setup; add OAuth credentials for Oura developer portal; document 9 Oura automations: sleep/wake detection (HR-based thermostat control), health alerts (low readiness, SpO2, fever detection), sleep comfort (temperature-based thermostat adjustment), activity reminders (sedentary alert); add Nest thermostat to integrations list; mark completed TODOs.<br>Co-Authored-By: Claude Opus 4.5 `<noreply@anthropic.com>` | 2026-01-16 15:25:21 -05:00 |
| Hutson | 4535969566 | Auto-sync: 20260116-152013 | 2026-01-16 15:20:14 -05:00 |
| Hutson | 8c1cbf3dac | Auto-sync: 20260116-150510 | 2026-01-16 15:05:12 -05:00 |
| Hutson | d38de8bfb1 | Auto-sync: 20260115-110247 | 2026-01-15 11:02:48 -05:00 |
| Hutson | db7ac68312 | Auto-sync: 20260114-183121 | 2026-01-14 18:31:23 -05:00 |
| Hutson | bd3ed4e4ef | Auto-sync: 20260114-002941 | 2026-01-14 00:29:42 -05:00 |
| Hutson | e7c8d7f86f | Auto-sync: 20260113-134342 | 2026-01-13 13:43:43 -05:00 |
| Hutson | 1dcb7ff9e5 | Auto-sync: 20260113-093539 | 2026-01-13 09:35:40 -05:00 |
| Hutson | f234fe96cb | Auto-sync: 20260113-015009 | 2026-01-13 01:50:10 -05:00 |
| Hutson | 1abd618b52 | Auto-sync: 20260113-013507 | 2026-01-13 01:35:08 -05:00 |
| Hutson | 35fba5a6ae | Auto-sync: 20260113-012006 | 2026-01-13 01:20:07 -05:00 |
| Hutson | eb698f0c38 | Auto-sync: 20260111-164757 | 2026-01-11 16:47:58 -05:00 |
| Hutson | d66ed5c55a | Auto-sync: 20260111-161755 | 2026-01-11 16:17:56 -05:00 |
| Hutson | 5ac698db0d | Auto-sync: 20260107-000953 | 2026-01-07 00:09:54 -05:00 |
| Hutson | 7eacc846e6 | Auto-sync: 20260105-213809 | 2026-01-05 21:38:10 -05:00 |
| Hutson | b832cc9e57 | Auto-sync: 20260105-212307 | 2026-01-05 21:23:08 -05:00 |
| Hutson | 54a71124ae | Auto-sync: 20260105-172251 | 2026-01-05 17:22:52 -05:00 |
16 changed files with 1824 additions and 35 deletions

**AUTOMATION-WELCOME-HOME.md** (new file, 190 lines)

@@ -0,0 +1,190 @@
# Welcome Home Automation
## Overview
Automatically turns on lights when you arrive home after sunset, creating a warm welcome.
## Status
- **Created:** 2026-01-14
- **State:** Active (enabled)
- **Entity ID:** `automation.welcome_home`
- **Last Triggered:** Never (newly created)
## How It Works
### Trigger
- Activates when **person.hutson** enters **zone.home** (100m radius)
- GPS tracking via device_tracker.honor (Honor phone)
### Conditions
The automation only runs when it's dark:
- After sunset (with a 30-minute early start) **OR**
- Before sunrise
This prevents lights from turning on during daytime arrivals.
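The OR'd dark-hours check amounts to a simple time comparison. A minimal Python sketch of the same logic (the sunrise/sunset times here are illustrative, not pulled from the actual automation config):

```python
from datetime import datetime, timedelta

def is_dark(now: datetime, sunset: datetime, sunrise: datetime,
            early_start: timedelta = timedelta(minutes=30)) -> bool:
    """True when lights should turn on: after (sunset - 30 min) OR before sunrise."""
    return now >= sunset - early_start or now <= sunrise

# Example winter day: sunrise 07:20, sunset 17:30 local time
sunrise = datetime(2026, 1, 14, 7, 20)
sunset = datetime(2026, 1, 14, 17, 30)

print(is_dark(datetime(2026, 1, 14, 17, 5), sunset, sunrise))   # True - inside the 30-min early window
print(is_dark(datetime(2026, 1, 14, 12, 0), sunset, sunrise))   # False - midday arrival, no lights
```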
### Actions
When triggered, the following lights turn on:
| Light | Brightness | Purpose |
|-------|------------|---------|
| **Living Room** | 75% | Main ambient lighting |
| **Living Room Lamp** | 60% | Softer accent light |
| **Kitchen** | 80% | Task lighting for entry |
## Climate Control Note
No climate/heating entities were found in your Home Assistant setup. To add heating control in the future:
1. Integrate your thermostat/HVAC with Home Assistant
2. Add a climate action to this automation (see customization below)
## Customization
### Adjust Trigger Distance
The home zone has a 100m radius. To change this:
```yaml
# In Home Assistant UI: Settings → Areas → Zones → Home
# Or via API:
curl -X PUT \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"latitude": 35.6542655, "longitude": -78.7417665, "radius": 150}' \
"http://10.10.10.210:8123/api/config/zone/zone.home"
```
### Add More Lights
To add additional lights (e.g., Office, Front Porch):
```bash
HA_TOKEN="your-token-here"
# Get current config
curl -s -H "Authorization: Bearer $HA_TOKEN" \
"http://10.10.10.210:8123/api/config/automation/config/welcome_home" > automation.json
# Edit automation.json to add more light actions
# Then update:
curl -X POST \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d @automation.json \
"http://10.10.10.210:8123/api/config/automation/config/welcome_home"
```
### Add Climate Control (when available)
Add this action to the automation:
```json
{
"service": "climate.set_temperature",
"target": {
"entity_id": "climate.thermostat"
},
"data": {
"temperature": 72,
"hvac_mode": "heat"
}
}
```
### Use a Scene Instead
To activate a predefined scene instead of individual lights:
```json
{
"service": "scene.turn_on",
"target": {
"entity_id": "scene.living_room_relax"
}
}
```
Available scenes include:
- `scene.living_room_relax`
- `scene.living_room_dimmed`
- `scene.all_nightlight`
## Testing
### Manual Trigger
```bash
HA_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiIwZThjZmJjMzVlNDA0NzYwOTMzMjg3MTQ5ZjkwOGU2NyIsImlhdCI6MTc2NTk5MjQ4OCwiZXhwIjoyMDgxMzUyNDg4fQ.r743tsb3E5NNlrwEEu9glkZdiI4j_3SKIT1n5PGUytY"
curl -X POST \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "automation.welcome_home"}' \
"http://10.10.10.210:8123/api/services/automation/trigger"
```
### Check Last Triggered
```bash
curl -s -H "Authorization: Bearer $HA_TOKEN" \
"http://10.10.10.210:8123/api/states/automation.welcome_home" | \
python3 -c "import json, sys; print(json.load(sys.stdin)['attributes']['last_triggered'])"
```
## Disable/Enable
### Disable
```bash
curl -X POST \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "automation.welcome_home"}' \
"http://10.10.10.210:8123/api/services/automation/turn_off"
```
### Enable
```bash
curl -X POST \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "automation.welcome_home"}' \
"http://10.10.10.210:8123/api/services/automation/turn_on"
```
## Monitoring
### View in Home Assistant UI
1. Go to http://10.10.10.210:8123
2. Settings → Automations & Scenes → Automations
3. Find "Welcome Home"
### Check Automation State
The automation is currently: **ON**
### Troubleshooting
If the automation doesn't trigger:
1. Check person.hutson GPS accuracy (should be < 50m)
2. Verify zone.home coordinates match your actual home location
3. Check automation was triggered during dark hours
4. Review Home Assistant logs for errors
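Step 2 (verifying the zone coordinates) can be sanity-checked offline with a standard haversine distance. A sketch using the `zone.home` coordinates and 100 m radius from this document (this runs outside Home Assistant; it is not part of the install):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

HOME = (35.6542655, -78.7417665)  # zone.home coordinates from this doc
RADIUS_M = 100

def in_home_zone(lat, lon):
    """True when a GPS fix falls inside the home zone radius."""
    return haversine_m(lat, lon, *HOME) <= RADIUS_M

print(in_home_zone(*HOME))  # True - exact zone center
```

Feed it the latest `device_tracker.honor` GPS fix to confirm whether a missed trigger was a zone-geometry problem or an accuracy problem.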
## Related Documentation
- [Home Assistant API](./HOMEASSISTANT.md)
- [Personal Assistant Integration](../personal-assistant/CLAUDE.md)
- [Smart Home Control](../personal-assistant/docs/services-matrix.md)
## Future Enhancements
Potential improvements:
- Add motion sensor override (don't trigger if motion already detected)
- Integrate with calendar (different scenes for work vs personal time)
- Add climate control when thermostat is integrated
- Create "leaving home" automation to turn off lights
- Add notification to phone when automation triggers
- Adjust brightness based on time of day
- Add office lights during work hours
---
*Created: 2026-01-14*
*Last Updated: 2026-01-14*


@@ -11,6 +11,7 @@ This is your **quick reference guide** for common homelab tasks. For detailed in
| Task | Documentation | Quick Command |
|------|--------------|---------------|
| **Gateway issues** | [GATEWAY.md](GATEWAY.md) | `ssh ucg-fiber 'free -m'` |
| **Tailscale/VPN issues** | [TAILSCALE.md](TAILSCALE.md) | `tailscale status` |
| **Add new public service** | [TRAEFIK.md](TRAEFIK.md) | Create Traefik config + Cloudflare DNS |
| **Check UPS status** | [UPS.md](UPS.md) | `ssh pve 'upsc cyberpower@localhost'` |
| **Check server temps** | [Temperature Check](#server-temperature-check) | `ssh pve 'grep Tctl ...'` |
@@ -85,6 +86,9 @@ nc -zw1 10.10.10.150 22000 && echo "Windows: UP" || echo "Windows: DOWN"
| Symptom | Check | Fix | Docs |
|---------|-------|-----|------|
| **Network down** | `ssh ucg-fiber 'free -m'` | Check memory, watchdog reboots auto | [GATEWAY.md](GATEWAY.md) |
| **Tailscale DNS not working** | `tailscale status` | Check PVE online, subnet routing | [TAILSCALE.md](TAILSCALE.md) |
| **Subnet unreachable** | `ping 10.10.10.10` | Check `--accept-routes` on local devices | [TAILSCALE.md](TAILSCALE.md) |
| **Relay-only connections** | `tailscale ping <ip>` | Check for VPN conflicts, restart tailscaled | [TAILSCALE.md](TAILSCALE.md) |
| Device not syncing | `curl Syncthing API` | Restart Syncthing | [SYNCTHING.md](SYNCTHING.md) |
| VM won't start | Storage/RAM available? | `ssh pve 'qm start VMID'` | [VMS.md](VMS.md) |
| Server running hot | Check KSM, CPU processes | Disable KSM | [POWER-MANAGEMENT.md](POWER-MANAGEMENT.md) |
@@ -246,9 +250,10 @@ ssh pve 'qm guest exec VMID -- bash -c "COMMAND"'
### Infrastructure
- [README.md](README.md) - Start here
- [GATEWAY.md](GATEWAY.md) - UniFi gateway, monitoring services
- [TAILSCALE.md](TAILSCALE.md) - VPN, subnet routing, DNS
- [VMS.md](VMS.md) - VM/CT inventory
- [STORAGE.md](STORAGE.md) - ZFS pools, shares
- [NETWORK.md](NETWORK.md) - Bridges, VLANs, Tailscale
- [NETWORK.md](NETWORK.md) - Bridges, VLANs, MTU
- [POWER-MANAGEMENT.md](POWER-MANAGEMENT.md) - Optimizations
- [UPS.md](UPS.md) - UPS config, NUT monitoring
@@ -310,6 +315,38 @@ git add -A && git commit -m "Update docs" && git push
## Recent Changes
### 2026-01-14
- **Guitar Room Humidity Automation** setup complete
- Homebridge installed on Mac Mini with `homebridge-plugin-govee` for BLE sensor access
- Govee H5074 temperature/humidity sensor bridged to Home Assistant
- VeSync integration added for Levoit LV600S humidifier control
- Automations created: turn ON below 45%, turn OFF above 47%
- Target: maintain 45-47% humidity for Lowden guitar storage
- **New Home Assistant integrations:**
- VeSync (vesync@htsn.io) - humidifier control
- HomeKit Controller - Homebridge bridge
- **Homebridge service:** `~/Library/LaunchAgents/com.homebridge.server.plist`
- **New HA entities:** `sensor.goveeh5074_5059_humidity`, `humidifier.lv600s`
### 2026-01-11
- **BlueMap web map** for Minecraft Hutworld server
- URL: https://map.htsn.io (password protected: hutworld / Suwanna123)
- BlueMap 5.15 plugin installed
- Port 8100 exposed in Crafty docker-compose
- Traefik routing with basicAuth middleware
- Fixed corrupted ViaVersion/ViaBackwards plugins
- Documented 1.21+ spawner give command syntax
- Fixed Docker file permission issues in Crafty container
### 2026-01-05
- Created [TAILSCALE.md](TAILSCALE.md) - comprehensive Tailscale VPN documentation
- **Fixed Tailscale subnet routing issues:**
- Switched primary subnet router from UCG-Fiber to PVE (gateway had relay-only connections)
- Disabled `--accept-routes` on UCG-Fiber and PiHole (devices on subnet must not accept subnet routes)
- Fixed PiHole ProtonVPN from full-tunnel to split-tunnel (DNS-only via fwmark routing)
- **Root cause:** Devices directly on 10.10.10.0/24 with `--accept-routes=true` were routing local traffic through Tailscale mesh instead of local interface
- **Key lesson:** Any device directly connected to an advertised subnet MUST have `--accept-routes=false`
### 2026-01-03
- Deployed **Crafty Controller 4** on docker-host2 for Minecraft server management
- URL: https://mc.htsn.io (Web GUI)
@@ -348,8 +385,32 @@ git add -A && git commit -m "Update docs" && git push
---
**Last Updated**: 2026-01-03
**Documentation Status**: ✅ Phase 1 Complete + Gateway Monitoring + MetaMCP
**Last Updated**: 2026-01-14
**Documentation Status**: ✅ Phase 1 Complete + Gateway Monitoring + MetaMCP + Tailscale + Humidity Automation
---
## Central Configuration Reference
All homelab credentials and hosts are centralized in these files (synced via Syncthing):
| File | Purpose | Usage |
|------|---------|-------|
| `~/.secrets` | API keys, tokens, credentials | `source ~/.secrets` then use `$VAR_NAME` |
| `~/.hosts` | IPs, hostnames, service URLs | `source ~/.hosts` then use `$IP_*` or `$HOST_*` |
| `~/.ssh/config` | SSH aliases for all homelab hosts | `ssh pve`, `ssh truenas`, `ssh docker-host`, etc. |
**Key variables for homelab:**
- `$SYNCTHING_API_KEY_*` - Syncthing API keys per device
- `$HA_TOKEN` - Home Assistant long-lived access token
- `$N8N_API_KEY` - n8n API key
- `$CF_API_KEY` - Cloudflare API key for Traefik DNS
- All SSH passwords: `$HUTSON_PC_PASS`, `$TRUENAS_PASS`, etc.
**When adding new credentials or hosts:**
1. Add to the central files (`~/.secrets` or `~/.hosts`)
2. Files sync via Syncthing to all machines
3. Update this CLAUDE.md if infrastructure changes
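Since `~/.secrets` and `~/.hosts` are shell-sourced files, non-shell scripts can read them directly. A sketch that assumes simple `KEY=value` / `export KEY=value` lines (the real files may contain more complex shell constructs this parser would skip):

```python
import re
import tempfile
from pathlib import Path

def load_env_file(path):
    """Parse KEY=value or export KEY=value lines; skip comments and blanks."""
    env = {}
    pattern = re.compile(r'^(?:export\s+)?([A-Za-z_][A-Za-z0-9_]*)=(.*)$')
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = pattern.match(line)
        if m:
            env[m.group(1)] = m.group(2).strip('"\'')  # drop surrounding quotes
    return env

# Demo with a throwaway file standing in for ~/.secrets
sample = Path(tempfile.mkdtemp()) / "secrets.example"
sample.write_text('# homelab creds\nexport HA_TOKEN="abc123"\nN8N_API_KEY=xyz\n')
print(load_env_file(sample))
```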
---

View File

@@ -130,8 +130,13 @@ curl -s -H "Authorization: Bearer $HA_TOKEN" \
- **Philips Hue** - Lights
- **Sonos** - Speakers
- **Nest** - Thermostat (climate.thermostat)
- **Motion Sensors** - Various locations
- **NUT (Network UPS Tools)** - UPS monitoring (added 2025-12-21)
- **VeSync** - Levoit humidifier control (added 2026-01-14)
- **HomeKit Controller** - Homebridge bridge for Govee sensors (added 2026-01-14)
- **Oura Ring v2** - Sleep/health tracking via HACS (added 2026-01-16)
- **HACS** - Home Assistant Community Store for custom integrations
### NUT / UPS Integration
@@ -168,14 +173,189 @@ entities:
name: Input Voltage
```
### VeSync / Levoit LV600S Integration
Controls the Levoit LV600S humidifier via VeSync cloud API.
**Account:** vesync@htsn.io
**Entities:**
| Entity ID | Description |
|-----------|-------------|
| `humidifier.lv600s` | Main humidifier on/off control |
| `sensor.lv600s_humidity` | Built-in humidity sensor (reads high near mist) |
| `number.lv600s_mist_level` | Mist intensity (1-9) |
| `switch.lv600s_display` | Display on/off |
| `binary_sensor.lv600s_low_water` | Low water warning |
| `binary_sensor.lv600s_water_tank_lifted` | Tank removed detection |
### Oura Ring Integration (HACS)
Monitors sleep, activity, and health metrics from Oura Ring via HACS custom integration.
**Installation:** HACS → Integrations → Oura Ring v2
**OAuth Credentials (Oura Developer Portal):**
- Client ID: `e925a2a0-7767-4390-8b80-3a385a5b3ddc`
- Client Secret: `xFSFSfUPihet1foWQRLAMUQbL9-kChqT_CjtHHpAxZs`
- Redirect URI: `https://my.home-assistant.io/redirect/oauth`
**Key Entities:**
| Entity ID | Description |
|-----------|-------------|
| `sensor.oura_ring_readiness_score` | Daily readiness (0-100) |
| `sensor.oura_ring_sleep_score` | Sleep quality (0-100) |
| `sensor.oura_ring_current_heart_rate` | Current HR (bpm) |
| `sensor.oura_ring_average_sleep_heart_rate` | Average HR during sleep |
| `sensor.oura_ring_lowest_sleep_heart_rate` | Lowest HR during sleep |
| `sensor.oura_ring_temperature_deviation` | Body temp deviation (°C) |
| `sensor.oura_ring_spo2_average` | Blood oxygen (%) |
| `sensor.oura_ring_steps` | Daily step count |
| `sensor.oura_ring_activity_score` | Activity score (0-100) |
**Troubleshooting:**
- If sensors show "unavailable", check the config entry state; `setup_retry` usually means the API returned no data
- Force sync the Oura app on your phone, then reload the integration
- The integration polls Oura's API periodically; data updates after ring syncs to cloud
### HomeKit Controller / Homebridge Integration
Connects to Homebridge running on Mac Mini to access BLE devices (Govee sensors).
**Homebridge Details:**
- Host: Mac Mini (localhost)
- Port: 51826
- PIN: 031-45-154
- Config: `~/.homebridge/config.json`
- Logs: `~/.homebridge/homebridge.log`
- LaunchAgent: `~/Library/LaunchAgents/com.homebridge.server.plist`
**Govee H5074 Entities:**
| Entity ID | Description |
|-----------|-------------|
| `sensor.goveeh5074_5059_humidity` | Room humidity (accurate reading) |
| `sensor.goveeh5074_5059_temperature` | Room temperature |
| `sensor.goveeh5074_5059_battery` | Sensor battery level |
**Homebridge Management:**
```bash
# Check status
launchctl list | grep homebridge
# View logs
tail -f ~/.homebridge/homebridge.log
# Restart Homebridge
launchctl stop com.homebridge.server
launchctl start com.homebridge.server
# Stop Homebridge
launchctl unload ~/Library/LaunchAgents/com.homebridge.server.plist
# Start Homebridge
launchctl load ~/Library/LaunchAgents/com.homebridge.server.plist
```
## Automations
TODO: Document key automations
### Guitar Room Humidity Control
Maintains 45-47% humidity for guitar storage (Lowden recommends 49% ±2%).
**Automations:**
| Automation | Trigger | Action |
|------------|---------|--------|
| `guitar_room_humidity_low_turn_on_humidifier` | Govee H5074 < 45% | Turn ON humidifier, set mist to 6 |
| `guitar_room_humidity_reached_turn_off_humidifier` | Govee H5074 > 47% | Turn OFF humidifier |
**Why two thresholds (hysteresis):**
- Prevents rapid on/off cycling
- 45% turn-on, 47% turn-off creates a 2% buffer
- Target range: 45-47% (conservatively below Lowden's 49% spec)
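The two-threshold behavior is a textbook hysteresis (bang-bang) controller. A minimal sketch of the same logic:

```python
def humidifier_hysteresis(humidity: float, currently_on: bool,
                          low: float = 45.0, high: float = 47.0) -> bool:
    """Return the desired humidifier state given current humidity and state."""
    if humidity < low:
        return True       # too dry: turn/keep ON
    if humidity > high:
        return False      # target reached: turn/keep OFF
    return currently_on   # inside the 45-47% band: hold state (no cycling)

# Walk a sequence of Govee readings through the controller
state = False
for reading in [46.0, 44.8, 45.5, 46.9, 47.2, 46.5]:
    state = humidifier_hysteresis(reading, state)
```

Note how readings inside the band never change the state, which is exactly what prevents the rapid on/off cycling a single threshold would cause.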
### Oura Ring Health & Sleep Automations
Uses Oura Ring biometrics for smart thermostat control and health alerts.
**Sleep/Wake Detection:**
| Automation | Trigger | Conditions | Action |
|------------|---------|------------|--------|
| `oura_sleep_detected_bedtime_mode` | HR < 55 bpm | Home, after 10pm | Thermostat → 66°F, front door light off, Telegram notify |
| `oura_wake_up_detected_morning_mode` | HR > 65 bpm | Home, 5-11am, thermostat < 68°F | Thermostat → 69°F, Telegram notify |
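The asymmetric thresholds (asleep below 55 bpm, awake above 65 bpm) form a hysteresis band, so brief HR fluctuations in the 55-65 range don't flap the thermostat. A hedged sketch of that state machine (time-of-day gating simplified; the real automations also check presence):

```python
def sleep_state(hr: float, asleep: bool, hour: int) -> bool:
    """Update asleep/awake state from Oura heart rate, mirroring the two automations."""
    if not asleep and hr < 55 and hour >= 22:   # bedtime mode: HR drop after 10pm
        return True
    if asleep and hr > 65 and 5 <= hour < 11:   # morning mode: HR rise between 5-11am
        return False
    return asleep                               # otherwise hold the current state

asleep = False
asleep = sleep_state(52, asleep, 23)   # falls asleep at 11pm
asleep = sleep_state(60, asleep, 3)    # mid-band reading: stays asleep
asleep = sleep_state(70, asleep, 7)    # wakes at 7am
```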
**Health Alerts:**
| Automation | Trigger | Action |
|------------|---------|--------|
| `oura_low_readiness_alert` | 8am daily, readiness < 70 | Telegram: suggest rest day |
| `oura_spo2_health_alert` | SpO2 < 94% | Urgent Telegram: health warning |
| `oura_fever_detection_alert` | Temp deviation > 1°C | Telegram: possible illness alert |
| `oura_sedentary_reminder` | 2pm weekdays, steps < 500 | Telegram: reminder to move |
**Sleep Comfort & Recovery:**
| Automation | Trigger | Conditions | Action |
|------------|---------|------------|--------|
| `oura_poor_sleep_recovery_mode` | 7am daily | Home, sleep score < 70 | Thermostat → 71°F (warmer for recovery) |
| `oura_sleep_temp_adjustment_too_hot` | Temp deviation > +0.5°C | Home, 10pm-6am, HR < 60 | Thermostat → 64°F |
| `oura_sleep_temp_adjustment_too_cold` | Temp deviation < -0.3°C | Home, 10pm-6am, HR < 60 | Thermostat → 68°F |
**Notification Setup:**
All notifications use `rest_command.notify_telegram` - ensure this is configured in `configuration.yaml`:
```yaml
rest_command:
notify_telegram:
url: "https://api.telegram.org/bot<TOKEN>/sendMessage"
method: POST
content_type: "application/json"
payload: '{"chat_id": "<CHAT_ID>", "text": "{{ message }}"}'
```
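For debugging a notification that never arrives, the payload templating above can be reproduced outside Home Assistant (the chat ID and message here are placeholders):

```python
import json

def telegram_payload(chat_id: str, message: str) -> str:
    """Build the JSON body that rest_command.notify_telegram POSTs."""
    return json.dumps({"chat_id": chat_id, "text": message})

body = telegram_payload("123456", "Readiness is 62 - consider a rest day")
print(body)
```

POST the resulting body to `https://api.telegram.org/bot<TOKEN>/sendMessage` manually (e.g. with curl) to isolate whether the problem is the bot token, the chat ID, or the automation itself.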
## SSH Access (Terminal & SSH Add-on)
The Terminal & SSH add-on provides remote shell access to Home Assistant OS.
**Connection:**
```bash
ssh root@10.10.10.210 -p 22
```
**Authentication:** SSH key from Mac Mini (`~/.ssh/id_ed25519.pub`)
**Hostname:** `core-ssh`
**Features:**
- Direct shell access to Home Assistant OS
- Access to Home Assistant CLI (`ha` command)
- File system access for debugging
## MCP Server Integration
Home Assistant has a built-in Model Context Protocol (MCP) Server integration for AI assistant connectivity.
**Status:** Enabled (configured with "Assist" service)
**Endpoint:** `http://10.10.10.210:8123/api/mcp`
**Claude Code Configuration:** Added to `~/.cursor/mcp.json`:
```json
{
"homeassistant": {
"type": "http",
"url": "http://10.10.10.210:8123/api/mcp",
"headers": {
"Authorization": "Bearer <HA_API_TOKEN>"
}
}
}
```
**Note:** The MCP server uses the Assist API to expose entities and services to AI clients.
## TODO
- [ ] Set static IP (currently DHCP at .210, should be .110)
- [ ] Add API token to this document
- [ ] Document installed integrations
- [ ] Document automations
- [x] Add API token to this document
- [x] Document installed integrations
- [x] Document automations
- [ ] Set up Traefik reverse proxy (ha.htsn.io)
- [x] Install Terminal & SSH add-on
- [x] Enable MCP Server integration


@@ -45,7 +45,7 @@
| 10.10.10.1 | router | Gateway/Firewall |
| 10.10.10.102 | pve2 | Proxmox Server 2 |
| 10.10.10.120 | pve | Proxmox Server 1 (Primary) |
| 10.10.10.123 | mac-mini | Mac Mini (Syncthing node) |
| 10.10.10.125 | mac-mini | Mac Mini (Syncthing node) |
| 10.10.10.150 | windows-pc | Windows PC (Syncthing node) |
| 10.10.10.147 | macbook | MacBook Pro (Syncthing node) |
| 10.10.10.200 | truenas | TrueNAS (Storage/Syncthing hub) |


@@ -72,6 +72,7 @@ This document tracks all IP addresses in the homelab infrastructure.
| Excalidraw | excalidraw.htsn.io | 10.10.10.206:8080 | Traefik-Primary |
| MetaMCP | metamcp.htsn.io | 10.10.10.207:12008 | Traefik-Primary |
| n8n | n8n.htsn.io | 10.10.10.207:5678 | Traefik-Primary |
| PA API | pa.htsn.io | 10.10.10.207:8401 | Traefik-Primary (Tailscale only) |
| Crafty Controller | mc.htsn.io | 10.10.10.207:8443 | Traefik-Primary |
| Plex | plex.htsn.io | 10.10.10.100:32400 | Traefik-Saltbox |
| Sonarr | sonarr.htsn.io | 10.10.10.100:8989 | Traefik-Saltbox |
@@ -132,6 +133,7 @@ This document tracks all IP addresses in the homelab infrastructure.
| Service | Port | Purpose |
|---------|------|---------|
| PA API | 8401 | Personal Assistant API (pa.htsn.io) - Tailscale only |
| MetaMCP | 12008 | MCP Aggregator/Gateway (metamcp.htsn.io) |
| n8n | 5678 | Workflow automation |
| Crafty Controller | 8443 | Minecraft server management (mc.htsn.io) |
@@ -149,6 +151,16 @@ This document tracks all IP addresses in the homelab infrastructure.
| Android Phone | 10.10.10.54 | 8384 | Xxz3jDT4akUJe6psfwZsbZwG2LhfZuDM |
| TrueNAS | 10.10.10.200 | 8384 | (check TrueNAS config) |
## Mac Mini Services (10.10.10.125)
| Service | Port | Purpose |
|---------|------|---------|
| MCP Bridge | 8400 | HTTP bridge for MCP tool execution (PA API backend) |
| Beeper Desktop | 23373 | Message aggregation (Telegram, iMessage, SMS) |
| Proton Bridge IMAP | 1143 | Personal email access |
| Proton Bridge SMTP | 1025 | Personal email sending |
| Syncthing | 8384 | File sync API |
## Notes
- **MTU 9000** (jumbo frames) enabled on storage networks


@@ -1,11 +1,32 @@
# Minecraft Server - Hutworld
# Minecraft Servers
Minecraft server running on docker-host2 via Crafty Controller 4.
Minecraft servers running on docker-host2 via Crafty Controller 4.
---
## Servers Overview
| Server | Address | Port | Version | Status |
|--------|---------|------|---------|--------|
| **Hutworld** | hutworld.htsn.io | 25565 | Paper 1.21.11 | Running |
| **Backrooms** | backrooms.htsn.io | 25566 | Paper 1.21.4 | Running |
### Web Map
| Setting | Value |
|---------|-------|
| **URL** | https://map.htsn.io |
| **Username** | hutworld |
| **Password** | Suwanna123 |
| **Plugin** | BlueMap 5.15 |
| **Port** | 8100 (exposed via Docker) |
---
## Quick Reference
### Hutworld (Main Server)
| Setting | Value |
|---------|-------|
| **Web GUI** | https://mc.htsn.io |
@@ -14,7 +35,25 @@ Minecraft server running on docker-host2 via Crafty Controller 4.
| **Host** | docker-host2 (10.10.10.207) |
| **Server Type** | Paper 1.21.11 |
| **World Name** | hutworld |
| **Memory** | 2GB min / 4GB max |
| **Memory** | 4GB min / 8GB max |
### Backrooms (Horror/Exploration)
| Setting | Value |
|---------|-------|
| **Web GUI** | https://mc.htsn.io |
| **Game Server (Java)** | backrooms.htsn.io:25566 |
| **Host** | docker-host2 (10.10.10.207) |
| **Server Type** | Paper 1.21.4 |
| **World Name** | backrooms |
| **Memory** | 512MB min / 1.5GB max |
| **Datapack** | The Backrooms v2.2.0 |
**Backrooms Features:**
- 50+ custom dimensions based on Backrooms lore
- Use `/execute in backrooms:level0 run tp @s ~ ~ ~` to travel to Level 0
- Horror-themed exploration gameplay
- No client mods required (datapack only)
---
@@ -53,6 +92,7 @@ ssh docker-host2 'cat ~/crafty/data/config/default-creds.txt'
### Pending
- [ ] Install SilkSpawners plugin (allows mining spawners with Silk Touch)
- [ ] Change Crafty admin password to something memorable
- [ ] Test external connectivity from outside network
@@ -100,11 +140,13 @@ To import the hutworld server in Crafty:
| PluginPortal | 2.2.2 | Plugin management |
| Vault | 1.7.3 | Economy/permissions API |
| ViaVersion | Latest | Multi-version support |
| ViaBackwards | Latest | Older client support |
| ViaBackwards | 5.2.1 | Older client support |
| randomtp | Latest | Random teleportation |
| BlueMap | 5.15 | 3D web map with player tracking |
| WorldEdit | 7.3.10 | World editing and terraforming |
**Removed plugins** (cleaned up 2026-01-03):
- GriefPrevention, Multiverse-Core, Multiverse-Portals, ProtocolLib, WorldEdit, WorldGuard (disabled/orphaned)
- GriefPrevention, Multiverse-Core, Multiverse-Portals, ProtocolLib, WorldGuard (disabled/orphaned)
---
@@ -122,10 +164,11 @@ services:
- TZ=America/New_York
ports:
- "8443:8443" # Web GUI (HTTPS)
- "8123:8123" # Dynmap (if used)
- "8123:8123" # Crafty HTTP
- "25565:25565" # Minecraft Java
- "25566:25566" # Additional server
- "19132:19132/udp" # Minecraft Bedrock (Geyser)
- "8100:8100" # BlueMap web server
volumes:
- ./data/backups:/crafty/backups
- ./data/logs:/crafty/logs
@@ -168,12 +211,13 @@ http:
## Port Forwarding (UniFi)
Configured via UniFi API on UCG-Fiber (10.10.10.1):
Configured via UniFi controller on UCG-Fiber (10.10.10.1):
| Rule Name | Port | Protocol | Destination |
|-----------|------|----------|-------------|
| Minecraft Java | 25565 | TCP/UDP | 10.10.10.207:25565 |
| Minecraft Bedrock | 19132 | UDP | 10.10.10.207:19132 |
| Rule Name | Port | Protocol | Destination | Status |
|-----------|------|----------|-------------|--------|
| Minecraft Java | 25565 | TCP/UDP | 10.10.10.207:25565 | Active |
| Minecraft Bedrock | 19132 | UDP | 10.10.10.207:19132 | Active |
| Minecraft Backrooms | 25566 | TCP/UDP | 10.10.10.207:25566 | Active |
---
@@ -183,8 +227,9 @@ Configured via UniFi API on UCG-Fiber (10.10.10.1):
|--------|------|-------|---------|
| mc.htsn.io | CNAME | htsn.io | Yes (for web GUI) |
| hutworld.htsn.io | A | 70.237.94.174 | No (direct for game traffic) |
| backrooms.htsn.io | A | 70.237.94.174 | No (direct for game traffic) |
**Note:** Game traffic (25565, 19132) cannot be proxied through Cloudflare - only HTTP/HTTPS works with Cloudflare proxy.
**Note:** Game traffic (25565, 25566, 19132) cannot be proxied through Cloudflare - only HTTP/HTTPS works with Cloudflare proxy.
---
@@ -205,21 +250,23 @@ The editor is hosted by LuckPerms, so no additional port forwarding is needed.
### Automated Backups to TrueNAS
Backups run automatically every 6 hours and are stored on TrueNAS.
Backups run automatically every 2 hours and are stored on TrueNAS for both servers.
| Setting | Value |
|---------|-------|
| **Destination** | TrueNAS (10.10.10.200) |
| **Path** | `/mnt/vault/users/backups/minecraft/` |
| **Frequency** | Every 6 hours (12am, 6am, 12pm, 6pm) |
| **Retention** | 14 backups (~3.5 days of history) |
| **Size** | ~2.3 GB per backup |
| **Script** | `/home/hutson/minecraft-backup.sh` on docker-host2 |
| **Frequency** | Every 2 hours (12 backups per day) |
| **Retention** | 30 backups per server (~2.5 days of history) |
| **Hutworld Size** | ~2-7 GB per backup |
| **Backrooms Size** | ~100-150 MB per backup |
| **Script** | `/home/hutson/minecraft-backup-all.sh` on docker-host2 |
| **Log** | `/home/hutson/minecraft-backup.log` on docker-host2 |
### Backup Script
### Backup Scripts
**Location:** `~/minecraft-backup.sh` on docker-host2
**Main Script:** `~/minecraft-backup-all.sh` on docker-host2 (backs up both servers)
**Legacy Script:** `~/minecraft-backup.sh` on docker-host2 (Hutworld only)
```bash
#!/bin/bash
@@ -247,10 +294,10 @@ sshpass -p 'GrilledCh33s3#' scp -o StrictHostKeyChecking=no "$LOCAL_BACKUP" "$BA
# Clean up local temp file
rm -f "$LOCAL_BACKUP"
# Keep only last 14 backups on TrueNAS
# Keep only last 30 backups on TrueNAS
sshpass -p 'GrilledCh33s3#' ssh -o StrictHostKeyChecking=no hutson@10.10.10.200 '
cd /mnt/vault/users/backups/minecraft
ls -t hutworld-*.tar.gz 2>/dev/null | tail -n +15 | xargs -r rm -f
ls -t hutworld-*.tar.gz 2>/dev/null | tail -n +31 | xargs -r rm -f
'
```
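The `ls -t | tail -n +31 | xargs rm` idiom keeps only the 30 newest archives. The same retention rule in Python, demonstrated against a throwaway directory (filenames follow the doc's `hutworld-*.tar.gz` pattern):

```python
import os
import tempfile
from pathlib import Path

def prune_backups(directory: Path, pattern: str = "hutworld-*.tar.gz", keep: int = 30):
    """Delete all but the `keep` newest files matching pattern, ordered by mtime."""
    backups = sorted(directory.glob(pattern),
                     key=lambda p: p.stat().st_mtime, reverse=True)
    for stale in backups[keep:]:
        stale.unlink()
    return len(backups[:keep])

# Demo: 35 fake backups with increasing mtimes; pruning leaves the 30 newest
tmp = Path(tempfile.mkdtemp())
for i in range(35):
    f = tmp / f"hutworld-{i:02d}.tar.gz"
    f.touch()
    os.utime(f, (i, i))  # fake monotonically increasing timestamps
prune_backups(tmp)
print(len(list(tmp.glob("hutworld-*.tar.gz"))))  # 30 remain
```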
@@ -260,7 +307,7 @@ sshpass -p 'GrilledCh33s3#' ssh -o StrictHostKeyChecking=no hutson@10.10.10.200
# View current schedule
ssh docker-host2 'crontab -l | grep minecraft'
# Output: 0 */6 * * * /home/hutson/minecraft-backup.sh >> /home/hutson/minecraft-backup.log 2>&1
# Output: 0 */2 * * * /home/hutson/minecraft-backup-all.sh >> /home/hutson/minecraft-backup.log 2>&1
```
### Manual Backup Commands
@@ -297,6 +344,69 @@ ssh docker-host2 'cd ~/crafty/data/servers && \
---
## Admin Commands
### Give Mob Spawner (1.21+ Syntax)
In Minecraft 1.21+, item NBT was replaced by the data component syntax. Use `minecraft:give` to bypass Essentials:
```
minecraft:give <player> spawner[block_entity_data={id:"minecraft:mob_spawner",SpawnData:{entity:{id:"minecraft:<mob_type>"}}}]
```
**Examples:**
```bash
# Magma cube spawner
minecraft:give suwann spawner[block_entity_data={id:"minecraft:mob_spawner",SpawnData:{entity:{id:"minecraft:magma_cube"}}}]
# Zombie spawner
minecraft:give suwann spawner[block_entity_data={id:"minecraft:mob_spawner",SpawnData:{entity:{id:"minecraft:zombie"}}}]
# Skeleton spawner
minecraft:give suwann spawner[block_entity_data={id:"minecraft:mob_spawner",SpawnData:{entity:{id:"minecraft:skeleton"}}}]
# Blaze spawner
minecraft:give suwann spawner[block_entity_data={id:"minecraft:mob_spawner",SpawnData:{entity:{id:"minecraft:blaze"}}}]
```
**Note:** Must use `minecraft:give` prefix to use vanilla command instead of Essentials `/give`.
### RCON Access
For remote console access to the server:
| Setting | Value |
|---------|-------|
| **Host** | 10.10.10.207 |
| **Port** | 25575 |
| **Password** | HutworldRCON2026 |
Example using mcrcon:
```bash
mcrcon -H 10.10.10.207 -P 25575 -p HutworldRCON2026
```
### BlueMap Commands
```bash
# Start full world render
/bluemap render
# Pause rendering
/bluemap pause
# Resume rendering
/bluemap resume
# Check render status
/bluemap status
# Reload BlueMap config
/bluemap reload
```
---
## Common Tasks
### Start/Stop Server
@@ -347,6 +457,58 @@ ssh docker-host2 'tail -f ~/crafty/data/servers/hutworld/logs/latest.log'
## Troubleshooting
### Plugin Permission Issues (IMPORTANT)
**Root Cause**: Crafty Docker container requires all files to be owned by `<user>:root` (not `<user>:<user>`) for permissions to work correctly.
**Permanent Fix**:
```bash
# Fix all permissions immediately
ssh docker-host2 'sudo chown -R hutson:root ~/crafty/data/servers/ && \
sudo find ~/crafty/data/servers/ -type d -exec chmod 2775 {} \; && \
sudo find ~/crafty/data/servers/ -type f -exec chmod 664 {} \;'
```
**Prevention**:
1. **Always upload plugins through Crafty web UI** - this ensures correct permissions
2. **Or use the import directory**: Copy to `~/crafty/data/import/` then restart container
3. **Never directly copy files** to the servers directory
**Check for permission issues**:
```bash
# Use the permission check script (recommended)
ssh docker-host2 '~/check-crafty-permissions.sh'
# Or manually check for wrong group ownership
ssh docker-host2 'find ~/crafty/data/servers -type f ! -group root -ls'
ssh docker-host2 'find ~/crafty/data/servers -type d ! -group root -ls'
```
**Permission Check Script**: Located at `~/check-crafty-permissions.sh` on docker-host2
- Automatically detects permission issues
- Offers to fix them with one command
- Ignores temporary files that are expected to have different permissions
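The manual `find ... ! -group root` checks can also be scripted portably. A sketch of the same scan (parameterized on the expected gid so it can be exercised against any directory; the real requirement above is gid 0, i.e. `root`):

```python
import os
import tempfile
from pathlib import Path

def wrong_group(root: Path, expected_gid: int = 0):
    """List paths under root whose group ownership is not expected_gid."""
    bad = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            p = Path(dirpath) / name
            if p.stat().st_gid != expected_gid:
                bad.append(p)
    return bad

# Demo: a temp tree passes when we expect its actual group, fails otherwise
tmp = Path(tempfile.mkdtemp())
(tmp / "plugins").mkdir()
(tmp / "plugins" / "Example.jar").touch()
my_gid = (tmp / "plugins").stat().st_gid
print(wrong_group(tmp, expected_gid=my_gid))  # [] - everything matches
```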
### Crafty Shows Server Offline or "Another Instance Running"
**Cause**: This happens when the server was started manually (not through Crafty) or when Crafty loses track of the server process.
**Fix**:
```bash
# 1. Kill any orphaned server processes
ssh docker-host2 'docker exec crafty pkill -f "paper.jar"'
# 2. Restart Crafty container to clear state
ssh docker-host2 'cd ~/crafty && docker compose restart'
# 3. Wait 30-60 seconds - Crafty will auto-start the server
```
**Prevention**:
- Always use Crafty web UI to start/stop servers
- Never manually start the server with java command
- If you must restart, use the container restart method above
### Server won't start
```bash
@@ -383,6 +545,38 @@ ssh docker-host2 'netstat -tlnp | grep 25565'
2. Check Geyser config: `~/crafty/data/servers/hutworld/plugins/Geyser-Spigot/config.yml`
3. Ensure UDP 19132 is forwarded and not blocked
### Corrupted plugin JARs (ZipException)
If you see `java.util.zip.ZipException: zip END header not found`:
1. **Check all plugins for corruption:**
```bash
ssh docker-host2 'cd ~/crafty/data/servers/19f604a9-f037-442d-9283-0761c73cfd60/plugins && \
for jar in *.jar; do unzip -t "$jar" > /dev/null 2>&1 && echo "OK: $jar" || echo "CORRUPT: $jar"; done'
```
2. **Re-download corrupted plugins from Hangar/Modrinth/SpigotMC**
3. **Restart server**
### Session lock errors
If server fails with `session.lock: already locked`:
```bash
# Kill stale Java processes and remove locks
ssh docker-host2 'docker exec crafty bash -c "pkill -f paper.jar; rm -f /crafty/servers/*/hutworld*/session.lock"'
```
### Permission denied errors in Docker
If world files show `AccessDeniedException`:
```bash
# Fix permissions (crafty user is UID 1000)
ssh docker-host2 'docker exec crafty bash -c "chown -R 1000:0 /crafty/servers/19f604a9-f037-442d-9283-0761c73cfd60/ && chmod -R u+rwX /crafty/servers/19f604a9-f037-442d-9283-0761c73cfd60/"'
```
### LuckPerms missing users/permissions
If LuckPerms shows a fresh database (missing users like Suwan):
@@ -417,10 +611,11 @@ tar -xzf /tmp/hutworld-*.tar.gz -C /tmp --strip-components=2 \
## Migration History
### 2026-01-04: Backup System (Updated 2026-01-13)
- Configured automated backups to TrueNAS
- **Updated frequency:** Every 2 hours (was 6 hours)
- **Updated retention:** 30 backups (~2.5 days) (was 14 backups)
- Created backup script with compression and cleanup
- Storage: `/mnt/vault/users/backups/minecraft/`
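The every-2-hours schedule would look roughly like this in cron; the script path and log filename are assumptions, not verified against the live crontab:

```shell
# crontab -e on docker-host2 (paths are illustrative)
0 */2 * * * /home/hutson/backup-minecraft.sh >> /home/hutson/backup-minecraft.log 2>&1
```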
@@ -475,4 +670,42 @@ tar -xzf /tmp/hutworld-*.tar.gz -C /tmp --strip-components=2 \
---
**Last Updated:** 2026-01-11
---
## Migration History (Hutworld)
### 2026-01-13: Server Infrastructure Upgrades ✅
- **RAM Upgraded:** Increased from 2GB/4GB to 4GB/8GB (min/max)
- **Storage Expanded:** VM disk increased from 32GB to 64GB (33% used)
- **RCON Enabled:** Remote console access configured on port 25575 - TESTED & WORKING
- **WorldEdit Installed:** Version 7.3.10 for world editing capabilities
- **Auto-Start Configured:** Server auto-starts with Crafty container
- **Docker Cleanup:** Freed 1.1GB by removing unused images and containers
- **Container Fixed:** Recreated with proper port mappings for RCON access
### 2026-01-11: BlueMap Web Map Added
- Installed BlueMap 5.15 plugin (supports MC 1.21.11)
- Exposed port 8100 in docker-compose.yml for BlueMap web server
- Configured Traefik routing: map.htsn.io → 10.10.10.207:8100
- Added basic auth password protection via Traefik middleware
- Fixed corrupted ViaVersion/ViaBackwards plugins (re-downloaded from Hangar)
- Fixed Docker file permission issues (chown to UID 1000)
- Documented 1.21+ spawner give command syntax
---
## Migration History (Backrooms)
### 2026-01-05: Backrooms Server Created
- Created new Backrooms server in Crafty Controller
- Installed Paper 1.21.4 build 232 (recommended version for datapack)
- Installed The Backrooms datapack v2.2.0 from Modrinth
- DNS record created for backrooms.htsn.io
- Memory configured for 512MB-1.5GB (VM memory constrained)
- Server running on port 25566
- **Pending:** Port forwarding for external access


@@ -16,8 +16,9 @@ Documentation for system monitoring, health checks, and alerting across the home
| **Network** | ✅ Partial | Gateway watchdog | ✅ Auto-reboot | Connectivity check every 60s |
| **Services** | ❌ No | - | ❌ No | No health checks |
| **Backups** | ❌ No | - | ❌ No | No verification |
| **Claude Code** | ✅ Yes | Prometheus + Grafana | ✅ Yes | Token usage, burn rate, cost tracking |
**Overall Status**: ⚠️ **PARTIAL** - Gateway and Claude Code monitoring active; everything else is largely manual
---
@@ -87,6 +88,133 @@ ssh ucg-fiber 'free -m && ps -eo pid,rss,comm --sort=-rss | head -12'
---
### Claude Code Token Monitoring
**Status**: ✅ **Active with alerts**
Monitors Claude Code token usage across all machines to track subscription consumption and prevent hitting weekly limits.
**Architecture**:
```
Claude Code (MacBook/Mac Mini)
▼ (OTLP HTTP push every 60s)
OTEL Collector (docker-host:4318)
▼ (Prometheus exporter on :8889)
Prometheus (docker-host:9090) ─── scrapes ───► otel-collector:8889
├──► Grafana Dashboard
└──► Alertmanager (burn rate alerts)
```
**Note**: Uses Prometheus exporter instead of Remote Write because Claude Code sends Delta temporality metrics, which Remote Write doesn't support.
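To confirm metrics are flowing end to end, the collector's exporter endpoint and the Prometheus target can be checked directly (a sketch, assuming docker-host is reachable at 10.10.10.206 as in the setup above):

```shell
# Exporter should list claude_code_* series once a session has pushed metrics
curl -s http://10.10.10.206:8889/metrics | grep -m 5 'claude_code'
# Prometheus side: the claude-code job should report up=1
curl -s -G 'http://10.10.10.206:9090/api/v1/query' \
  --data-urlencode 'query=up{job="claude-code"}'
```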
**Monitored Devices**:
All Claude Code sessions on any device automatically push metrics via OTLP.
**What's monitored**:
- Token usage (input/output/cache) over time
- Burn rate (tokens/hour)
- Cost tracking (USD)
- Usage by model (Opus, Sonnet, Haiku)
- Session count
- Per-device breakdown
**Dashboard**: https://grafana.htsn.io/d/claude-code-usage/claude-code-token-usage
**Alerts Configured**:
| Alert | Threshold | Severity |
|-------|-----------|----------|
| High Burn Rate | >100k tokens/hour for 15min | Warning |
| Weekly Limit Risk | Projected >5M tokens/week | Critical |
| No Metrics | Scrape fails for 5min | Info |
**Configuration Files**:
- Shell config: `~/.zshrc` (on each Mac - synced via Syncthing)
- OTEL Collector: `/opt/monitoring/otel-collector/config.yaml` (docker-host)
- Alert rules: `/opt/monitoring/prometheus/rules/claude-code.yml` (docker-host)
**Shell Environment Setup** (in `~/.zshrc`):
```bash
# Claude Code OpenTelemetry Metrics (push to OTEL Collector)
export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT="http://10.10.10.206:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_METRIC_EXPORT_INTERVAL=60000
```
**Note**: These can be set either in shell environment (`~/.zshrc`) or in `~/.claude/settings.json` under the `env` block. Both methods work.
**OTEL Collector Config** (`/opt/monitoring/otel-collector/config.yaml`):
```yaml
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 10s
exporters:
prometheus:
endpoint: 0.0.0.0:8889
resource_to_telemetry_conversion:
enabled: true
service:
pipelines:
metrics:
receivers: [otlp]
processors: [batch]
exporters: [prometheus]
```
**Prometheus Scrape Config** (add to `/opt/monitoring/prometheus/prometheus.yml`):
```yaml
- job_name: "claude-code"
static_configs:
- targets: ["otel-collector:8889"]
labels:
group: "claude-code"
```
**Useful PromQL Queries**:
```promql
# Total tokens by model
sum(claude_code_token_usage_tokens_total) by (model)
# Burn rate (tokens/hour)
sum(rate(claude_code_token_usage_tokens_total[1h])) * 3600
# Total cost by model
sum(claude_code_cost_usage_USD_total) by (model)
# Usage by type (input, output, cacheRead, cacheCreation)
sum(claude_code_token_usage_tokens_total) by (type)
# Projected weekly usage (rough estimate)
sum(increase(claude_code_token_usage_tokens_total[24h])) * 7
```
**Important Notes**:
- After changing `~/.zshrc`, start a new terminal/shell session before running Claude Code
- Metrics only flow while Claude Code is running
- Weekly subscription resets Monday 1am (America/New_York)
- Verify env vars are set: `env | grep OTEL`
**Added**: 2026-01-16
---
### Syncthing Monitoring
**Status**: ⚠️ **Partial** - API available, no automated monitoring

N8N.md

@@ -258,6 +258,44 @@ curl -H "X-N8N-API-KEY: YOUR_KEY" http://10.10.10.207:5678/api/v1/workflows
ssh docker-host2 'docker ps | grep n8n'
```
### Remove "This message was sent automatically by n8n" signature from Telegram messages
**Problem:** n8n Telegram node adds attribution signature to all messages by default.
**Solution:** Use the correct parameter name `appendAttribution` (camelCase, not snake_case) in `additionalFields`:
```bash
# Get workflow
curl -H "X-N8N-API-KEY: $(cat /tmp/n8n-key.txt)" \
http://10.10.10.207:5678/api/v1/workflows/WORKFLOW_ID > workflow.json
# Update all Telegram nodes (using jq)
cat workflow.json | jq '.nodes = (.nodes | map(
if .type == "n8n-nodes-base.telegram" then
.parameters.additionalFields.appendAttribution = false
else
.
end
))' | jq '{name, nodes, connections, settings, staticData}' > workflow-fixed.json
# Upload updated workflow
curl -X PUT \
-H "X-N8N-API-KEY: $(cat /tmp/n8n-key.txt)" \
-H 'Content-Type: application/json' \
-d @workflow-fixed.json \
http://10.10.10.207:5678/api/v1/workflows/WORKFLOW_ID
# Restart n8n to reload workflow
ssh docker-host2 'cd /opt/n8n && docker compose restart n8n'
```
**Important Notes:**
- Parameter must be `appendAttribution` (camelCase), not `append_attribution` or `append_n8n_attribution`
- Must restart n8n after updating workflow for changes to take effect
- This applies to all Telegram message nodes in the workflow
**Fixed:** 2026-01-23
---
## Integration Examples

PA-API.md

@@ -0,0 +1,339 @@
# Personal Assistant API
Backend API for the Personal Assistant system - provides Claude-powered voice/text interface to all PA capabilities (calendar, tasks, messages, smart home, etc.).
---
## Quick Reference
| Setting | Value |
|---------|-------|
| **Domain** | pa.htsn.io |
| **Local IP** | 10.10.10.207:8401 |
| **Server** | docker-host2 (PVE2 VMID 302) |
| **Compose** | `/opt/pa-api/docker-compose.yml` |
| **Access** | Tailscale only (not publicly exposed) |
| **GitHub** | Private repo: `pa-api` |
---
## Architecture
```
Android/Telegram
┌─────────────────┐
│ PA API │ ← Claude SDK, model routing
│ docker-host2 │
│ :8401 │
└────────┬────────┘
┌────┴────┐
│ │
▼ ▼
┌───────┐ ┌──────────┐
│ Rube │ │MCP Bridge│ ← Mac Mini (Beeper, Proton, etc.)
│ Exa │ │ :8400 │
│ etc. │ └──────────┘
└───────┘
```
**PA API handles:**
- Claude SDK integration (no CLI startup delay)
- Model routing (Haiku/Sonnet/Opus)
- Session management
- Direct API tools (Exa, Ref, Rube, Airtable)
**MCP Bridge handles:**
- Tools requiring Mac Mini (Beeper, Proton Bridge, filesystem)
- Runs on Mac Mini at 10.10.10.125:8400
---
## API Endpoints
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/chat` | POST | Main query endpoint (streaming SSE) |
| `/health` | GET | Health check |
### POST /chat
**Request:**
```json
{
"message": "What's on my calendar today?",
"session_id": "abc123"
}
```
**Response (Server-Sent Events):**
```
data: {"type": "model", "name": "sonnet"}
data: {"type": "chunk", "text": "You have "}
data: {"type": "chunk", "text": "3 meetings today..."}
data: {"type": "done", "full_text": "You have 3 meetings today..."}
```
### Model Routing
| Query Type | Model | Examples |
|------------|-------|----------|
| Simple facts | Haiku | "How old is X?", "What's 15% of 80?" |
| PA queries | Sonnet | "What's on my calendar?", "Add task" |
| Complex reasoning | Opus | "Help me plan my week" |
**Override:** Say "Use Opus" to force model selection (sticky per session).
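A hedged example of the override, using the `/chat` request shape shown above; the override phrase is simply part of the message:

```shell
# Force Opus for this session (sticky until changed)
curl -X POST http://10.10.10.207:8401/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Use Opus. Help me plan my week.", "session_id": "demo"}'
```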
---
## Deployment
### Docker Compose
Location: `/opt/pa-api/docker-compose.yml`
```yaml
version: '3.8'
services:
pa-api:
image: pa-api:latest
build: .
container_name: pa-api
restart: unless-stopped
ports:
- "8401:8401"
environment:
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
- MCP_BRIDGE_URL=http://10.10.10.125:8400
- EXA_API_KEY=${EXA_API_KEY}
# Add other API keys as needed
volumes:
- ./data:/app/data
networks:
- pa-network
networks:
pa-network:
driver: bridge
```
### Environment Variables
| Variable | Purpose |
|----------|---------|
| `ANTHROPIC_API_KEY` | Claude API access |
| `MCP_BRIDGE_URL` | Mac Mini bridge endpoint |
| `EXA_API_KEY` | Exa web search |
| `AIRTABLE_API_KEY` | Airtable access |
Store in `/opt/pa-api/.env` (not committed to git).
---
## Traefik Configuration
File: `/etc/traefik/conf.d/pa-api.yaml` (on CT 202)
```yaml
http:
routers:
pa-api:
rule: "Host(`pa.htsn.io`)"
entryPoints:
- websecure
service: pa-api
tls:
certResolver: cloudflare
services:
pa-api:
loadBalancer:
servers:
- url: "http://10.10.10.207:8401"
```
**Note:** This service is Tailscale-only. The Traefik route exists for convenience but should not be exposed publicly via Cloudflare.
---
## Common Tasks
### Start/Stop Service
```bash
# SSH to docker-host2
ssh docker-host2
# Start
cd /opt/pa-api && docker-compose up -d
# Stop
cd /opt/pa-api && docker-compose down
# View logs
docker logs -f pa-api
# Restart
docker-compose restart pa-api
```
### Update Service
```bash
ssh docker-host2
cd /opt/pa-api
git pull
docker-compose build
docker-compose up -d
```
### Health Check
```bash
# From any machine on network
curl http://10.10.10.207:8401/health
# Test chat endpoint
curl -X POST http://10.10.10.207:8401/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hello", "session_id": "test"}'
```
---
## MCP Bridge (Mac Mini)
The MCP Bridge runs on Mac Mini and exposes MCP tools as HTTP endpoints.
| Setting | Value |
|---------|-------|
| **Location** | Mac Mini (10.10.10.125) |
| **Port** | 8400 |
| **Purpose** | Execute MCP tools (Beeper, Proton, TickTick, HA, etc.) |
### Bridge Endpoints
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/tools` | GET | List available tools |
| `/execute` | POST | Execute a tool |
| `/health` | GET | Health check |
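The endpoints above can be exercised with curl; note the `/execute` payload field names (`tool`, `args`) and the tool name are assumptions for illustration, not a verified schema:

```shell
# List the tools the bridge exposes
curl -s http://10.10.10.125:8400/tools
# Execute a tool (payload shape is an assumption)
curl -X POST http://10.10.10.125:8400/execute \
  -H "Content-Type: application/json" \
  -d '{"tool": "homeassistant.get_state", "args": {"entity_id": "light.office"}}'
```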
### Start MCP Bridge
```bash
# SSH to Mac Mini
ssh macmini
# Start bridge (managed by launchd)
launchctl load ~/Library/LaunchAgents/com.hutson.mcp-bridge.plist
# Check status
curl http://localhost:8400/health
```
---
## Integration Points
### Related Services
| Service | Relationship |
|---------|--------------|
| n8n | Telegram bot uses n8n → Claude CLI (separate path) |
| MetaMCP | PA API does NOT use MetaMCP (direct MCP Bridge) |
| Home Assistant | Controlled via MCP Bridge |
| Claude-Mem | Shared memory database for context |
### Clients
| Client | Connection |
|--------|------------|
| Android App | HTTPS via Tailscale → pa.htsn.io |
| (Future) Web UI | Same endpoint |
---
## Monitoring
### Health Checks
```bash
# PA API
curl -s http://10.10.10.207:8401/health | jq
# MCP Bridge
curl -s http://10.10.10.125:8400/health | jq
```
### Logs
```bash
# PA API logs
ssh docker-host2 'docker logs -f pa-api --tail 100'
# MCP Bridge logs (Mac Mini)
ssh macmini 'tail -f ~/Library/Logs/mcp-bridge.log'
```
---
## Troubleshooting
### PA API Not Responding
1. Check container status:
```bash
ssh docker-host2 'docker ps | grep pa-api'
```
2. Check logs for errors:
```bash
ssh docker-host2 'docker logs pa-api --tail 50'
```
3. Verify network:
```bash
curl http://10.10.10.207:8401/health
```
### MCP Bridge Not Responding
1. Check if Mac Mini is reachable:
```bash
ping 10.10.10.125
```
2. Check bridge process:
```bash
ssh macmini 'pgrep -f mcp-bridge'
```
3. Restart bridge:
```bash
ssh macmini 'launchctl unload ~/Library/LaunchAgents/com.hutson.mcp-bridge.plist'
ssh macmini 'launchctl load ~/Library/LaunchAgents/com.hutson.mcp-bridge.plist'
```
### Model Routing Issues
- Check Claude API key is valid
- Verify Haiku classifier is responding
- Check session storage for stuck model overrides
---
## Related Documentation
- [IP-ASSIGNMENTS.md](IP-ASSIGNMENTS.md) - Service IP mapping
- [VMS.md](VMS.md) - docker-host2 VM details
- [TRAEFIK.md](TRAEFIK.md) - Reverse proxy configuration
- [Personal Assistant Project](~/Projects/personal-assistant/CLAUDE.md) - PA system overview
- [Services Matrix](~/Projects/personal-assistant/docs/services-matrix.md) - All MCP tools
---
**Last Updated**: 2026-01-07

QUICK-REF-WELCOME-HOME.md

@@ -0,0 +1,102 @@
# Welcome Home Automation - Quick Reference
## Quick Test (Manual Trigger)
```bash
HA_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiIwZThjZmJjMzVlNDA0NzYwOTMzMjg3MTQ5ZjkwOGU2NyIsImlhdCI6MTc2NTk5MjQ4OCwiZXhwIjoyMDgxMzUyNDg4fQ.r743tsb3E5NNlrwEEu9glkZdiI4j_3SKIT1n5PGUytY"
# Test the automation now (ignores conditions)
curl -X POST \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "automation.welcome_home"}' \
"http://10.10.10.210:8123/api/services/automation/trigger"
```
## Current Configuration
**Lights that turn on:**
- Living Room (75%)
- Living Room Lamp (60%)
- Kitchen (80%)
**When:** After sunset (30 min early) OR before sunrise
**Trigger:** Entering home zone (100m radius)
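In YAML form the automation is roughly the following sketch; entity IDs and the exact key names are illustrative (the live config is managed through the HA API):

```yaml
alias: Welcome Home
triggers:
  - trigger: zone
    entity_id: person.hutson
    zone: zone.home
    event: enter
conditions:
  - condition: or
    conditions:
      - condition: sun
        after: sunset
        after_offset: "-00:30:00"   # 30 min early
      - condition: sun
        before: sunrise
actions:
  - action: light.turn_on
    target:
      entity_id: light.living_room
    data:
      brightness_pct: 75
```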
## Quick Modifications
### Add Office Light
```bash
# Get current config
curl -s -H "Authorization: Bearer $HA_TOKEN" \
"http://10.10.10.210:8123/api/config/automation/config/welcome_home" > /tmp/welcome.json
# Edit /tmp/welcome.json and add to "actions" array:
# {
# "target": {"entity_id": "light.office"},
# "data": {"brightness_pct": 70},
# "action": "light.turn_on"
# }
# Update automation
curl -X POST \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d @/tmp/welcome.json \
"http://10.10.10.210:8123/api/config/automation/config/welcome_home"
```
### Change to Scene Instead
Replace all light actions with a single scene:
```json
{
"actions": [
{
"service": "scene.turn_on",
"target": {
"entity_id": "scene.living_room_relax"
}
}
]
}
```
## Status Check
```bash
# Check if automation is enabled
curl -s -H "Authorization: Bearer $HA_TOKEN" \
"http://10.10.10.210:8123/api/states/automation.welcome_home" | \
python3 -c "import json, sys; data=json.load(sys.stdin); print(f\"State: {data['state']}\"); print(f\"Last triggered: {data['attributes']['last_triggered']}\")"
# Check current location
curl -s -H "Authorization: Bearer $HA_TOKEN" \
"http://10.10.10.210:8123/api/states/person.hutson" | \
python3 -c "import json, sys; data=json.load(sys.stdin); print(f\"Location: {data['state']}\"); print(f\"GPS: {data['attributes']['latitude']}, {data['attributes']['longitude']}\"); print(f\"Accuracy: {data['attributes']['gps_accuracy']}m\")"
```
## Toggle On/Off
```bash
# Disable
curl -X POST -H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "automation.welcome_home"}' \
"http://10.10.10.210:8123/api/services/automation/turn_off"
# Enable
curl -X POST -H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "automation.welcome_home"}' \
"http://10.10.10.210:8123/api/services/automation/turn_on"
```
## Web UI
http://10.10.10.210:8123 → Settings → Automations & Scenes → "Welcome Home"
---
*Entity ID: automation.welcome_home*


@@ -33,6 +33,7 @@ Documentation for Hutson's home infrastructure - two Proxmox servers running VMs
| [SERVICES.md](SERVICES.md) | Complete service inventory with URLs and credentials |
| [TRAEFIK.md](TRAEFIK.md) | Reverse proxy setup, adding services, SSL certificates |
| [HOMEASSISTANT.md](HOMEASSISTANT.md) | Home Assistant API, automations, integrations |
| [PA-API.md](PA-API.md) | Personal Assistant API, MCP Bridge, Claude integration |
| [SYNCTHING.md](SYNCTHING.md) | File sync across all devices, API access, troubleshooting |
| [SALTBOX.md](#) | Media automation stack (Plex, *arr apps) (coming soon) |
@@ -83,6 +84,7 @@ Documentation for Hutson's home infrastructure - two Proxmox servers running VMs
| **Plex** | Saltbox VM | https://plex.htsn.io |
| **Home Assistant** | VM 110 | https://homeassistant.htsn.io |
| **Gitea** | VM 300 | https://git.htsn.io |
| **PA API** | docker-host2 | https://pa.htsn.io (Tailscale) |
| **Pi-hole** | CT 200 | http://10.10.10.10/admin |
| **Traefik** | CT 202 | http://10.10.10.250:8080 |


@@ -63,6 +63,20 @@ curl -sk "https://10.10.10.54:8384/rest/system/status" -H "X-API-Key: $API_KEY"
curl -sk "https://100.106.175.37:8384/rest/system/status" -H "X-API-Key: $API_KEY"
```
### TrueNAS (Docker Container)
```bash
API_KEY="LNWnrRmeyrw4dbngSmJMYN4a5Z2VnhSE"
# Access via Tailscale (port 20910, not 8384)
curl -s "http://100.100.94.71:20910/rest/system/status" -H "X-API-Key: $API_KEY"
# Or via local network
curl -s "http://10.10.10.200:20910/rest/system/status" -H "X-API-Key: $API_KEY"
```
**Note:** TrueNAS Syncthing runs in Docker with:
- Config: `/mnt/.ix-apps/app_mounts/syncthing/config`
- Data: `/mnt/vault/shares/syncthing` → mounted as `/data` in container
- Container name: `ix-syncthing-syncthing-1`
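Quick container checks on TrueNAS (assumes an SSH alias `truenas` for 10.10.10.200):

```shell
# Container is running
ssh truenas 'docker ps --filter name=ix-syncthing-syncthing-1'
# Recent logs
ssh truenas 'docker logs --tail 20 ix-syncthing-syncthing-1'
```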
## Common Commands
### Check Status

TAILSCALE.md

@@ -0,0 +1,296 @@
# Tailscale VPN Configuration
## Overview
Tailscale provides secure remote access to the homelab via a mesh VPN. This document covers the configuration, subnet routing, and critical gotchas learned from troubleshooting.
---
## Network Architecture
```
Remote Clients (MacBook, Phone)
▼ Tailscale Mesh (100.x.x.x)
┌───────┴────────┐
│ │
▼ ▼
PVE (Subnet Router) UCG-Fiber (Gateway)
100.113.177.80 100.94.246.32
│ │
│ 10.10.10.0/24 │
└──────────┬───────────┘
┌──────┴──────┐
│ │
PiHole TrueNAS
10.10.10.10 10.10.10.200
```
---
## Device Configuration
| Device | Tailscale IP | Role | Accept Routes | Advertise Routes |
|--------|--------------|------|---------------|------------------|
| **PVE** | 100.113.177.80 | Subnet Router (Primary) | **NO** | 10.10.10.0/24, 10.10.20.0/24 |
| **UCG-Fiber** | 100.94.246.32 | Gateway (backup) | **NO** | (disabled) |
| **PiHole** | 100.112.59.128 | DNS Server | **NO** | None |
| **TrueNAS** | 100.100.94.71 | NAS | Yes | None |
| **Mac-Mini** | 100.108.89.58 | Desktop | Yes | None |
| **MacBook** | 100.88.161.1 | Laptop | Yes | None |
| **Phone** | 100.106.175.37 | Mobile | Yes | None |
---
## Critical Configuration Rules
### 1. Devices on the Advertised Subnet MUST Have `--accept-routes=false`
**Problem:** If a device is directly connected to 10.10.10.0/24 AND has `--accept-routes=true`, Tailscale will route local subnet traffic through the mesh instead of the local interface.
**Symptom:** Device can't reach neighbors on the same subnet; `ip route get 10.10.10.X` shows `dev tailscale0` instead of the local interface.
**Fix:**
```bash
# On any device directly connected to 10.10.10.0/24
tailscale set --accept-routes=false
```
**Affected devices:**
- UCG-Fiber (gateway) - directly on 10.10.10.0/24
- PiHole - directly on 10.10.10.0/24
- PVE - directly on 10.10.10.0/24 (exception: as the subnet router it originates the routes, so this rule does not apply)
### 2. Only ONE Device Should Be Primary Subnet Router
**Problem:** Multiple devices advertising the same subnet can cause routing conflicts or failover issues.
**Current Setup:**
- **PVE** is the primary subnet router for both 10.10.10.0/24 and 10.10.20.0/24
- **UCG-Fiber** has subnet advertisement DISABLED (was causing relay-only connections)
**To change subnet router:**
1. Go to https://login.tailscale.com/admin/machines
2. Disable route on old device, enable on new device
3. Or set primary if both advertise
### 3. VPNs on Tailscale Devices Can Break Connectivity
**Problem:** A full-tunnel VPN (like ProtonVPN with `AllowedIPs = 0.0.0.0/0`) will route Tailscale's DERP/STUN traffic through the VPN, breaking NAT traversal.
**Symptom:** Device shows relay-only connections with asymmetric traffic (high TX, near-zero RX).
**Fix:** Use split-tunnel configuration that excludes Tailscale traffic. See [PiHole ProtonVPN Configuration](#pihole-protonvpn-split-tunnel) below.
---
## DNS Configuration
### Tailscale Admin DNS Settings
- **Nameserver:** 10.10.10.10 (PiHole via subnet route)
- **Fallback:** None configured
### How DNS Works
1. Remote client enables "Use Tailscale DNS"
2. DNS queries go to 10.10.10.10
3. Traffic routes through PVE (subnet router) to PiHole
4. PiHole resolves via Unbound (recursive) through ProtonVPN
---
## Subnet Routing
### Current Primary Routes
```
PVE advertises:
- 10.10.10.0/24 (LAN)
- 10.10.20.0/24 (Storage network)
```
### Verifying Routes
```bash
# From MacBook - check who's advertising routes
tailscale status --json | python3 -c "
import sys, json
data = json.load(sys.stdin)
for peer in data.get('Peer', {}).values():
routes = peer.get('PrimaryRoutes', [])
if routes:
print(f\"{peer.get('HostName')}: {routes}\")"
```
### Testing Subnet Connectivity
```bash
# Test from remote client
ping 10.10.10.10 # PiHole
ping 10.10.10.120 # PVE
ping 10.10.10.1 # Gateway
dig @10.10.10.10 google.com # DNS
```
---
## PiHole ProtonVPN Split-Tunnel
PiHole runs a WireGuard tunnel to ProtonVPN for encrypted upstream DNS queries. The configuration uses policy-based routing to ONLY route Unbound's DNS traffic through the VPN.
### Configuration File: `/etc/wireguard/piehole.conf`
```ini
[Interface]
PrivateKey = <key>
Address = 10.2.0.2/32
# CRITICAL: Disable automatic routing - we handle it manually
Table = off
# Policy routing: only route Unbound DNS through VPN
PostUp = ip route add default dev %i table 51820
PostUp = ip rule add fwmark 0x51820 table 51820 priority 100
PostUp = iptables -t mangle -N UNBOUND_VPN 2>/dev/null || true
PostUp = iptables -t mangle -F UNBOUND_VPN
PostUp = iptables -t mangle -A UNBOUND_VPN -d 10.0.0.0/8 -j RETURN
PostUp = iptables -t mangle -A UNBOUND_VPN -d 127.0.0.0/8 -j RETURN
PostUp = iptables -t mangle -A UNBOUND_VPN -d 100.64.0.0/10 -j RETURN
PostUp = iptables -t mangle -A UNBOUND_VPN -d 192.168.0.0/16 -j RETURN
PostUp = iptables -t mangle -A UNBOUND_VPN -d 172.16.0.0/12 -j RETURN
PostUp = iptables -t mangle -A UNBOUND_VPN -j MARK --set-mark 0x51820
PostUp = iptables -t mangle -A OUTPUT -p udp --dport 53 -m owner --uid-owner unbound -j UNBOUND_VPN
PostUp = iptables -t mangle -A OUTPUT -p tcp --dport 53 -m owner --uid-owner unbound -j UNBOUND_VPN
PostUp = iptables -t nat -A POSTROUTING -o %i -j MASQUERADE
PostDown = iptables -t mangle -D OUTPUT -p udp --dport 53 -m owner --uid-owner unbound -j UNBOUND_VPN
PostDown = iptables -t mangle -D OUTPUT -p tcp --dport 53 -m owner --uid-owner unbound -j UNBOUND_VPN
PostDown = iptables -t mangle -F UNBOUND_VPN
PostDown = iptables -t mangle -X UNBOUND_VPN
PostDown = ip rule del fwmark 0x51820 table 51820 priority 100
PostDown = ip route del default dev %i table 51820
PostDown = iptables -t nat -D POSTROUTING -o %i -j MASQUERADE
[Peer]
PublicKey = <ProtonVPN-key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = 149.102.242.1:51820
PersistentKeepalive = 25
```
**Key Points:**
- `Table = off` prevents wg-quick from adding default routes
- Only traffic from the `unbound` user to port 53 gets marked and routed through VPN
- Local, private, and Tailscale (100.64.0.0/10) traffic is excluded
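On PiHole, the policy routing installed by the `PostUp` lines above can be sanity-checked with (run as root):

```shell
# Tunnel is up with a recent handshake
wg show
# fwmark rule and routing table exist
ip rule list | grep 51820
ip route show table 51820
# Marking chain exists and its counters increase as Unbound resolves
iptables -t mangle -L UNBOUND_VPN -v -n
```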
---
## Troubleshooting
### Symptom: Can't reach subnet (10.10.10.x) from remote
**Check 1:** Is PVE online and advertising routes?
```bash
tailscale status | grep pve
# Should show "active" not "offline"
```
**Check 2:** Is PVE the primary subnet router?
```bash
tailscale status --json | python3 -c "..." # See above
```
**Check 3:** Can PVE reach the target on local network?
```bash
ssh pve 'ping -c 1 10.10.10.10'
```
### Symptom: Device shows "relay" with asymmetric traffic (high TX, low RX)
**Cause:** Usually a VPN or firewall blocking Tailscale's UDP traffic.
**Check:** Run netcheck on the affected device:
```bash
tailscale netcheck
```
Look for:
- Wrong external IP (indicates VPN routing issue)
- Missing DERP latencies
- `MappingVariesByDestIP: true` with no direct connections
### Symptom: Local devices can't reach each other
**Cause:** `--accept-routes=true` on a device that's directly on the subnet.
**Fix:**
```bash
# Check current setting
tailscale debug prefs | grep -i route
# Disable accept-routes
tailscale set --accept-routes=false
```
### Symptom: Gateway can ping Tailscale IPs but not local IPs
**Check routing:**
```bash
ip route get 10.10.10.120
# If it shows "dev tailscale0" instead of "dev br0", that's the problem
```
**Fix:** `tailscale set --accept-routes=false` on the gateway
---
## Maintenance Commands
### Restart Tailscale
```bash
# On Linux
systemctl restart tailscaled
# Check status
tailscale status
```
### Re-advertise Routes (PVE)
```bash
tailscale set --advertise-routes=10.10.10.0/24,10.10.20.0/24
```
### Check Connection Type
```bash
# Shows direct vs relay for each peer
tailscale status
# Detailed ping with path info
tailscale ping <tailscale-ip>
```
### Force Re-connection
```bash
tailscale down && tailscale up
```
---
## Known Issues
### UCG-Fiber Relay-Only Connections
The UniFi gateway sometimes fails to establish direct Tailscale connections and falls back to relay. This appears to be related to memory pressure or the gateway's NAT implementation. Current workaround: use PVE as the subnet router instead.
### Gateway Memory Pressure
The UCG-Fiber has limited RAM (~3GB) and can become unstable under load. The internet-watchdog service will auto-reboot if connectivity is lost. See [GATEWAY.md](GATEWAY.md).
---
## Change History
### 2026-01-05
- Switched subnet router from UCG-Fiber to PVE
- Fixed PiHole ProtonVPN from full-tunnel to split-tunnel (DNS-only)
- Disabled `--accept-routes` on UCG-Fiber and PiHole
- Documented critical configuration rules
---
**Last Updated:** 2026-01-05


@@ -69,6 +69,9 @@ ssh pve 'pct exec 202 -- tail -f /var/log/traefik/traefik.log'
| AI Trade | aitrade.htsn.io | (trading server) |
| Pulse | pulse.htsn.io | 10.10.10.206:7655 (monitoring) |
| Happy | happy.htsn.io | 10.10.10.206:3002 (Happy Coder relay) |
| BlueMap | map.htsn.io | 10.10.10.207:8100 (Minecraft web map, password protected) |
| Notes Redirect | notes.htsn.io | 10.10.10.207:8765 (HTTP→obsidian:// redirect) |
| Todo Redirect | todo.htsn.io | 10.10.10.207:8765 (HTTP→ticktick:// redirect) |
---


@@ -0,0 +1,114 @@
#!/bin/bash
# Crafty Permission Checker Script
# Checks for permission issues that could break plugin functionality
echo "Crafty Permission Check - $(date)"
echo "================================"
# Base directory
CRAFTY_DIR="/home/hutson/crafty/data/servers"
# Check if running on docker-host2
if [ "$(hostname)" != "docker-host2" ]; then
echo "⚠️ This script should be run on docker-host2"
echo " Use: ssh docker-host2 '~/check-crafty-permissions.sh'"
exit 1
fi
# Function to check permissions
check_permissions() {
local issues_found=0
# Check for files not owned by root group
echo -e "\n📁 Checking file ownership..."
wrong_group=$(find "$CRAFTY_DIR" -type f ! -group root 2>/dev/null)
if [ ! -z "$wrong_group" ]; then
echo "❌ Files with incorrect group (should be 'root'):"
echo "$wrong_group" | head -10
issues_found=$((issues_found + 1))
else
echo "✅ All files have correct group ownership (root)"
fi
# Check for directories not owned by root group
echo -e "\n📁 Checking directory ownership..."
wrong_dir_group=$(find "$CRAFTY_DIR" -type d ! -group root 2>/dev/null)
if [ ! -z "$wrong_dir_group" ]; then
echo "❌ Directories with incorrect group (should be 'root'):"
echo "$wrong_dir_group" | head -10
issues_found=$((issues_found + 1))
else
echo "✅ All directories have correct group ownership (root)"
fi
# Check for directories without setgid bit
echo -e "\n🔒 Checking setgid bit on directories..."
no_setgid=$(find "$CRAFTY_DIR" -type d ! -perm -g+s 2>/dev/null)
if [ ! -z "$no_setgid" ]; then
echo "⚠️ Directories without setgid bit (may cause future issues):"
echo "$no_setgid" | head -10
issues_found=$((issues_found + 1))
else
echo "✅ All directories have setgid bit set"
fi
# Check for files that crafty user can't read (excluding temp files)
echo -e "\n📖 Checking read permissions..."
unreadable=$(find "$CRAFTY_DIR" -type f ! -perm -g+r ! -name "*.tmp" 2>/dev/null)
if [ ! -z "$unreadable" ]; then
echo "❌ Files that crafty user can't read:"
echo "$unreadable" | head -10
issues_found=$((issues_found + 1))
else
echo "✅ All files are readable by crafty user"
fi
return $issues_found
}
# Function to fix permissions
fix_permissions() {
    echo -e "\n🔧 Fixing permissions..."

    # Fix ownership
    sudo chown -R hutson:root "$CRAFTY_DIR"

    # Fix directory permissions (2775 = rwxrwsr-x; the setgid bit makes new
    # entries inherit the root group)
    sudo find "$CRAFTY_DIR" -type d -exec chmod 2775 {} \;

    # Fix file permissions (664 = rw-rw-r--)
    sudo find "$CRAFTY_DIR" -type f -exec chmod 664 {} \;

    echo "✅ Permissions fixed!"
}
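fix_permissions sets directories to mode 2775; the leading 2 is the setgid bit, which makes new files and subdirectories inherit the directory's group rather than the creating user's primary group. A quick illustration (hypothetical `demo_mode_bits` helper, never called here; assumes GNU coreutils `stat`):

```shell
# Hypothetical helper -- not called by this script. Shows how mode 2775
# renders symbolically: the 's' in the group triad is the setgid bit.
demo_mode_bits() {
    local d
    d=$(mktemp -d)
    chmod 2775 "$d"
    stat -c '%A' "$d"   # GNU stat; prints drwxrwsr-x
    rm -rf "$d"
}
```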
# Main execution
echo "Checking Crafty server permissions..."
check_permissions
result=$?
if [ $result -gt 0 ]; then
    echo -e "\n⚠️ Found $result permission issue(s)!"
    echo -n "Would you like to fix them automatically? (y/n): "
    read -r response
    if [[ "$response" =~ ^[Yy]$ ]]; then
        fix_permissions
        echo -e "\n🔄 Re-checking permissions..."
        check_permissions
        if [ $? -eq 0 ]; then
            echo -e "\n✅ All permission issues resolved!"
        else
            echo -e "\n❌ Some issues remain. You may need to restart the Crafty container."
        fi
    else
        echo -e "\nTo fix manually, run:"
        echo "sudo chown -R hutson:root $CRAFTY_DIR"
        echo "sudo find $CRAFTY_DIR -type d -exec chmod 2775 {} \;"
        echo "sudo find $CRAFTY_DIR -type f -exec chmod 664 {} \;"
    fi
else
    echo -e "\n✅ No permission issues found!"
fi
echo -e "\n================================"
echo "Check complete - $(date)"

#!/bin/bash
# Minecraft Servers Backup Script (All Servers)
# Backs up both Hutworld and Backrooms servers to TrueNAS
BACKUP_DEST="hutson@10.10.10.200:/mnt/vault/users/backups/minecraft"
DATE=$(date +%Y-%m-%d_%H%M)
echo "[$(date)] Starting Minecraft servers backup..."
# Backup Hutworld server
HUTWORLD_SRC="$HOME/crafty/data/servers/19f604a9-f037-442d-9283-0761c73cfd60"
HUTWORLD_BACKUP="/tmp/hutworld-$DATE.tar.gz"
echo "[$(date)] Backing up Hutworld server..."
tar -czf "$HUTWORLD_BACKUP" \
    --exclude="*.jar" \
    --exclude="cache" \
    --exclude="libraries" \
    --exclude=".paper-remapped" \
    -C "$HOME/crafty/data/servers" \
    19f604a9-f037-442d-9283-0761c73cfd60
echo "[$(date)] Hutworld backup created: $(ls -lh "$HUTWORLD_BACKUP" | awk '{print $5}')"
# Transfer Hutworld backup to TrueNAS
sshpass -p 'GrilledCh33s3#' scp -o StrictHostKeyChecking=no "$HUTWORLD_BACKUP" "$BACKUP_DEST/"
if [ $? -eq 0 ]; then
    echo "[$(date)] Hutworld backup transferred successfully"
    rm "$HUTWORLD_BACKUP"
else
    echo "[$(date)] ERROR: Failed to transfer Hutworld backup"
fi
# Backup Backrooms server
BACKROOMS_SRC="$HOME/crafty/data/servers/64079d6c-acb0-48c4-9b21-23e0fa354522"
BACKROOMS_BACKUP="/tmp/backrooms-$DATE.tar.gz"
echo "[$(date)] Backing up Backrooms server..."
tar -czf "$BACKROOMS_BACKUP" \
    --exclude="*.jar" \
    --exclude="cache" \
    --exclude="libraries" \
    --exclude=".paper-remapped" \
    -C "$HOME/crafty/data/servers" \
    64079d6c-acb0-48c4-9b21-23e0fa354522
echo "[$(date)] Backrooms backup created: $(ls -lh "$BACKROOMS_BACKUP" | awk '{print $5}')"
# Transfer Backrooms backup to TrueNAS
sshpass -p 'GrilledCh33s3#' scp -o StrictHostKeyChecking=no "$BACKROOMS_BACKUP" "$BACKUP_DEST/"
if [ $? -eq 0 ]; then
    echo "[$(date)] Backrooms backup transferred successfully"
    rm "$BACKROOMS_BACKUP"
else
    echo "[$(date)] ERROR: Failed to transfer Backrooms backup"
fi
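The `if [ $? -eq 0 ]` test must come immediately after `scp`, because `$?` always reflects the most recent command; any command in between (even an `echo`) would overwrite it. A tiny sketch (hypothetical `demo_exit_status` helper, not called by this script):

```shell
# Hypothetical helper -- not called by this script. Shows that an intervening
# successful command clobbers the exit status of an earlier failure.
demo_exit_status() {
    false                           # sets $? to 1
    echo "intervening" > /dev/null  # succeeds, so $? becomes 0
    echo $?                         # prints 0, not 1
}
```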
# Clean up old backups (keep last 30 of each server)
echo "[$(date)] Cleaning up old backups..."
sshpass -p 'GrilledCh33s3#' ssh -o StrictHostKeyChecking=no hutson@10.10.10.200 '
    cd /mnt/vault/users/backups/minecraft || exit 1
    # Keep only the last 30 Hutworld backups
    ls -t hutworld-*.tar.gz 2>/dev/null | tail -n +31 | xargs -r rm -f
    # Keep only the last 30 Backrooms backups
    ls -t backrooms-*.tar.gz 2>/dev/null | tail -n +31 | xargs -r rm -f
    echo "Current backups:"
    echo "Hutworld: $(ls -1 hutworld-*.tar.gz 2>/dev/null | wc -l) backups"
    echo "Backrooms: $(ls -1 backrooms-*.tar.gz 2>/dev/null | wc -l) backups"
    echo "Total size: $(du -sh . | cut -f1)"
'
echo "[$(date)] All backups complete!"
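The remote cleanup keeps the newest 30 archives per server via `ls -t | tail -n +31 | xargs -r rm -f`. The same pattern can be exercised locally (hypothetical `demo_retention` helper, never called by this script; uses 3 files and keeps the newest 2 for brevity; assumes GNU `touch -d`):

```shell
# Hypothetical helper -- not called by this script. Local illustration of the
# keep-newest-N retention pattern used in the remote cleanup above (N=2 here).
demo_retention() {
    local d
    d=$(mktemp -d)
    touch -d '3 days ago' "$d/hutworld-a.tar.gz"
    touch -d '2 days ago' "$d/hutworld-b.tar.gz"
    touch -d '1 day ago'  "$d/hutworld-c.tar.gz"
    # ls -t lists newest first; tail -n +3 selects everything after the top 2
    (cd "$d" && ls -t hutworld-*.tar.gz | tail -n +3 | xargs -r rm -f)
    ls "$d"   # the two newest archives remain
    rm -rf "$d"
}
```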