
---
name: cortex-server
description: Everything needed to operate cortex.hydrascale.net — Michael's Ubuntu 24.04 VPS. Use this skill whenever the user asks to do ANYTHING on or related to: the cortex server, deploying or updating a website, pushing files live, adding a new domain, checking if a site is up, editing the Caddyfile, managing HTTPS certs, running server maintenance, checking or fixing rsync.net backups, installing packages, rebooting, or anything that needs to happen "on the server." Trigger on casual phrasing too — "push that to the site", "is the backup running?", "add another domain", "update the server", "what's running on cortex".
---

# Cortex Server — Operations Guide

## Quick Reference

| Item | Value |
|------|-------|
| Host | cortex.hydrascale.net |
| Public IP | 45.41.204.162 |
| SSH user | root |
| SSH port | 22 |
| OS | Ubuntu 24.04.2 LTS |
| Kernel | 6.8.0-101-generic |

## Connecting via SSH

Use paramiko — it's the only reliable SSH method in this environment (not the system `ssh` binary).

### Step 1 — Find the key file

Look for the cortex private key (RSA, passphrase-protected) in this order:

1. **Current session uploads** — `/sessions/*/mnt/uploads/cortex` (glob matches any session)
2. **Common local folders** — `~/Downloads/cortex` and `~/Desktop/cortex`
3. **Ask Michael** — if not found: "Could you upload your cortex SSH key? It's the one at `~/.ssh/cortex` on your Mac."

The key passphrase is: `42Awk!%@^#&`

Always copy to /tmp and lock permissions before use:

```bash
cp <found_key_path> /tmp/cortex_key && chmod 600 /tmp/cortex_key
```

Dynamic lookup in Python:

```python
import glob
import os

def find_cortex_key():
    """Find the cortex SSH key across multiple locations."""
    candidates = glob.glob('/sessions/*/mnt/uploads/cortex')
    candidates += [
        os.path.expanduser('~/Downloads/cortex'),
        os.path.expanduser('~/Desktop/cortex'),
    ]
    # Return the first candidate that exists, else None
    return next((p for p in candidates if os.path.exists(p)), None)
```

### Step 2 — Install paramiko (if needed)

```bash
pip install paramiko --break-system-packages -q
```

### Step 3 — Standard connection boilerplate

```python
import paramiko
import glob
import os
import shutil

def find_cortex_key():
    """Find the cortex SSH key across multiple locations."""
    candidates = glob.glob('/sessions/*/mnt/uploads/cortex')
    candidates += [
        os.path.expanduser('~/Downloads/cortex'),
        os.path.expanduser('~/Desktop/cortex'),
    ]
    return next((p for p in candidates if os.path.exists(p)), None)

def connect_cortex():
    # Find the key
    key_found = find_cortex_key()
    if not key_found:
        raise FileNotFoundError('Could not find cortex SSH key. Upload ~/.ssh/cortex from your Mac.')

    # Copy to /tmp and lock permissions
    shutil.copy(key_found, '/tmp/cortex_key')
    os.chmod('/tmp/cortex_key', 0o600)

    # Connect via SSH
    key = paramiko.RSAKey.from_private_key_file('/tmp/cortex_key', password='42Awk!%@^#&')
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('cortex.hydrascale.net', port=22, username='root', pkey=key, timeout=15)
    return client

def run(client, cmd, timeout=60):
    """Run a command on cortex; returns (stdout, stderr) as stripped strings."""
    stdin, stdout, stderr = client.exec_command(cmd, timeout=timeout)
    out = stdout.read().decode(errors='replace').strip()
    err = stderr.read().decode(errors='replace').strip()
    return out, err
```

## Uploading files via SFTP

```python
sftp = client.open_sftp()
sftp.put('/local/path', '/remote/path/filename.html')   # streams the file up
sftp.chmod('/remote/path/filename.html', 0o644)
sftp.close()
```

## Writing text config files to the server

```python
channel = client.get_transport().open_session()
channel.exec_command('cat > /path/to/config/file')
channel.sendall(content.encode())
channel.shutdown_write()
channel.recv_exit_status()
```

## Server Layout

```
/data/
└── sites/                        ← all websites live here
    ├── hydrascale.net/
    │   └── index.html            ← Shreveport crime map (static)
    └── <new-domain>/             ← add new sites here
```

- `/etc/caddy/Caddyfile` — web server config (edit + reload to deploy)
- `/usr/local/bin/rsync_net_backup.sh` — nightly backup script

## What's Installed

| Service | Status | Notes |
|---------|--------|-------|
| Caddy v2.11.2 | running | Auto-HTTPS via Let's Encrypt |
| fail2ban | running | SSH brute-force protection |
| ufw | enabled | Ports 22, 80, 443 open |
| node_exporter | running | Prometheus metrics on localhost:9100 |
| Docker | not yet | Planned for dynamic/containerized sites |

## Deploying Websites

### Static site — new domain

1. Create directory: `mkdir -p /data/sites/<domain>`
2. Upload files via SFTP to `/data/sites/<domain>/`
3. Add a block to the Caddyfile (see below)
4. Validate + reload: `caddy validate --config /etc/caddy/Caddyfile && systemctl reload caddy`
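The steps above can be sketched as one paramiko helper. This assumes a connected `client` from the boilerplate earlier; `deploy_static_site` and its argument shapes are illustrative, not an existing script on the server:

```python
def deploy_static_site(client, domain, local_files):
    """Sketch of the deploy steps. `local_files` is a list of
    (local_path, remote_filename) pairs to upload."""
    def run(cmd):
        # Run a command on cortex and fail loudly on a nonzero exit
        _, stdout, stderr = client.exec_command(cmd, timeout=60)
        if stdout.channel.recv_exit_status() != 0:
            raise RuntimeError(f'{cmd!r} failed: {stderr.read().decode()}')

    run(f'mkdir -p /data/sites/{domain}')     # step 1: create directory
    sftp = client.open_sftp()                 # step 2: upload via SFTP
    for local_path, name in local_files:
        remote = f'/data/sites/{domain}/{name}'
        sftp.put(local_path, remote)
        sftp.chmod(remote, 0o644)
    sftp.close()
    # Step 3 (adding the Caddyfile block) happens separately; step 4:
    run('caddy validate --config /etc/caddy/Caddyfile && systemctl reload caddy')
```

Step 3 still means editing the Caddyfile first — the final `run` only validates and reloads whatever is already there.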

### Updating an existing site

Just re-upload the files via SFTP — no Caddy reload is needed for content-only changes.

### Caddyfile structure

The global options block (with the ACME email) must stay at the top. Each site gets its own block:

```caddyfile
# Global options
{
    email mdwyer@michaelmdwyer.com
}

# Static site
example.com, www.example.com {
    root * /data/sites/example.com
    file_server
    encode gzip
    handle_errors {
        respond "{err.status_code} {err.status_text}" {err.status_code}
    }
}

# Docker reverse proxy (for future dynamic apps)
app.example.com {
    reverse_proxy localhost:8080
}
```

When editing the Caddyfile: read the current file first, append or modify the relevant block, write it back, then validate before reloading. Never reload without validating — a bad config will drop the site.
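The read-modify-write cycle can be kept safe with a small pure helper that only appends a block when the domain isn't already configured. A sketch — the function name and the minimal block it emits are illustrative, and the result still needs `caddy validate` before any reload:

```python
def append_site_block(caddyfile_text, domain):
    """Return Caddyfile text with a minimal static-site block for `domain`
    appended, or unchanged if the domain is already mentioned."""
    if domain in caddyfile_text:
        return caddyfile_text  # already configured — don't duplicate
    block = (
        '\n# Static site\n'
        f'{domain}, www.{domain} {{\n'
        f'    root * /data/sites/{domain}\n'
        '    file_server\n'
        '    encode gzip\n'
        '}\n'
    )
    return caddyfile_text.rstrip('\n') + '\n' + block
```

Typical use: read `/etc/caddy/Caddyfile` over SFTP, pass it through this helper, write it back, then run the validate + reload step.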


## Currently Live Sites

| Domain | Root | Content |
|--------|------|---------|
| hydrascale.net | /data/sites/hydrascale.net/ | Shreveport crime map |
| www.hydrascale.net | → same | Auto-redirects |

## rsync.net Backups

- Account: `de2613@de2613.rsync.net`
- Auth key: `/root/.ssh/rsync_net_key` on the server (the hydrascale.net RSA key — no passphrase)
- Schedule: daily at 03:17 UTC (systemd timer)
- What's backed up: `/etc`, `/var/snap/lxd/common/lxd`, `/data`, `/data/sites`
- Remote path: `cortex-backup/cortex/` on rsync.net

### Backup key

The dedicated rsync.net key lives permanently at `/root/.ssh/rsync_net_key` on the server (this is the hydrascale.net key — RSA, no passphrase). The backup script passes it explicitly via `-e "ssh -i /root/.ssh/rsync_net_key"`. Do not use `id_rsa` for rsync.net — that key is not authorized there.

### Backup status check

Note: the journal will show failures before 2026-03-13 (auth was broken then). To check current status, run a live connection test rather than relying solely on old journal entries:

```bash
# Check timer and last run
systemctl status rsync-net-backup.timer rsync-net-backup.service --no-pager
# Confirm auth still works (fast, non-destructive)
ssh -o BatchMode=yes -o IdentitiesOnly=yes -i /root/.ssh/rsync_net_key de2613@de2613.rsync.net ls 2>&1
```

Auth fixed 2026-03-13: hydrascale.net key installed at `/root/.ssh/rsync_net_key`, backup script updated to use it, rsync.net host key added to `known_hosts`.

### Trigger a manual backup run

```bash
systemctl start rsync-net-backup.service && journalctl -u rsync-net-backup.service -n 50 --no-pager
```

(Use `-n 50` rather than `-f` here — `journalctl -f` follows forever and will hang a scripted SSH session.)

### Connecting to rsync.net directly (from the local VM, not via cortex)

```python
import paramiko

key = paramiko.RSAKey.from_private_key_file('/tmp/hydrascale_key')  # no passphrase
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('de2613.rsync.net', port=22, username='de2613', pkey=key)
# Restricted shell — no output redirection, no pipes
# Supported commands: ls, du, df, mkdir, mv, rm, rsync, sftp, scp
```

### What's on rsync.net (legacy — from the old Red Hat server, ~2023)

- `cortex.hydrascale.net/hydramailer/` — Docker volume data (DB + Elasticsearch + Kibana) for the old Hydramailer consulting project — recoverable if needed
- `cortex.hydrascale.net/backup/` — MySQL dumps, vmail backups (hydrascale.net, trump.support, creativecampaignsolutions.com)
- `leviathan.hydrascale.net/` — PowerMTA configs from another old server
- `macbook-air/src/` — source code (ChessCom, Dwyer-Solutions, Elm Guide, oura-ring, etc.) — last synced Feb 2024

## Common Operations

### Health check — verify everything is running

```python
for svc in ['caddy', 'fail2ban', 'ssh']:
    status, _ = run(client, f'systemctl is-active {svc}')
    print(f"{svc}: {status}")
```

### System updates (non-interactive, safe for production)

Refresh the package lists first so the upgrade sees current versions:

```bash
DEBIAN_FRONTEND=noninteractive apt-get update -q && \
DEBIAN_FRONTEND=noninteractive apt-get upgrade -y \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" 2>&1 | tail -10
```

### Check if a reboot is needed

```bash
test -f /run/reboot-required && cat /run/reboot-required.pkgs || echo "No reboot needed"
```

### Reboot and wait for recovery

Issue the reboot, then poll until SSH responds again (usually ~75 seconds):

```python
import socket
import time

try:
    client.exec_command('sleep 1 && reboot', timeout=5)
except Exception:
    pass  # the connection dropping here is expected
client.close()

# Poll loop — reconnect when the SSH port answers again
deadline = time.time() + 300  # give up after 5 minutes
while time.time() < deadline:
    try:
        socket.create_connection(('cortex.hydrascale.net', 22), timeout=5).close()
        time.sleep(4)  # let sshd finish starting
        break
    except OSError:
        time.sleep(6)
else:
    raise TimeoutError('cortex did not come back within 5 minutes')
```

### View recent Caddy logs

```bash
journalctl -u caddy --no-pager -n 30
```

### Disk and memory at a glance

```bash
df -h / && free -h
```

## Symbiont Orchestrator

The /data/symbiont directory contains the Symbiont project — a self-sustaining AI agent orchestrator running on cortex.

- Git repo: `/data/symbiont/.git` (clone location)
- Systemd services:
  - `symbiont-api.service` — main API daemon
  - `symbiont-heartbeat.timer` — periodic health-check timer

Check status and logs:

```bash
systemctl status symbiont-api.service symbiont-heartbeat.timer --no-pager
journalctl -u symbiont-api.service -f --no-pager
```