
Server Hardening & Recovery

Beta in name, production in security.


This is an honest account of how this server is hardened and why. Not a marketing checklist. Not a certification claim. A record of decisions made, trade-offs weighed, and failures caught before they shipped.

Security is treated as a posture — not a checklist, but a disposition that shapes every decision. The architecture is designed to be unforgiving: the same hardening that stops an attacker stops a careless operator. Every component follows the principle of least privilege.

We're publishing this because security by obscurity isn't security. If our approach has gaps, we want to know. If it's useful to someone building something similar, they should have it.


Stack Overview

The production environment runs two layers that are independently hardened.

Layer         Technology                        Role
Edge          Cloudflare                        TLS termination, DDoS, CDN
Application   Node.js / Express on Debian VPS   IMAP sessions + Google OAuth track, business logic

Both tracks — IMAP and Google OAuth — run as Express applications on the same VPS, managed by PM2 and reverse-proxied through Caddy. They share the same OS-level hardening, the same network layer, and the same monitoring. This document covers the full stack.


1. Network Layer

UFW defaults to deny-all with three explicit allows. SSH listens on a non-standard port.


2. SSH Hardening

Authentication is key-only: a maximum of 10 attempts, a 20-second login grace period, a single named user, and no root login.


3. Kernel & OS

fail2ban (10 attempts / 1-hour window / 1-hour ban), automatic updates, sysctl hardening (SYN cookies, ICMP restrictions, martian packet logging), and a periodic service audit.

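The sysctl hardening named in the Summary can be sketched as a drop-in file. The keys below are the conventional ones for those three protections; this is an illustration, not a dump of the live config.

```
# /etc/sysctl.d/99-hardening.conf (sketch)
net.ipv4.tcp_syncookies = 1                # SYN cookies: survive SYN-flood attempts
net.ipv4.icmp_echo_ignore_broadcasts = 1   # ignore broadcast ICMP
net.ipv4.conf.all.log_martians = 1         # log packets with impossible source addresses
```

Applied with sysctl --system after editing.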

4. Application User Isolation

The application runs as a dedicated system user (aimail) with tightly scoped permissions. If the application is compromised, the blast radius is contained to what aimail can reach — which is only what it needs to run.


5. Process Management

PM2 manages the Node.js process with a startup hook wired via pm2 startup systemd. The server survives reboots without manual intervention and without running the application as root. Process status, restart count, and memory usage are monitored in the daily report. Restart deltas are tracked — unexpected cycling triggers review.
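The reboot-survival wiring reduces to two commands run as the application user. The home path shown is an assumption for illustration.

```shell
pm2 startup systemd -u aimail --hp /home/aimail  # emits the one sudo command that installs the unit
pm2 save                                         # snapshot the process list the unit will resurrect
```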


6. Security Monitoring

Architecture Decision

The security layer is server-level, not application-level. The watcher doesn't know about Node.js, IMAP, or Gmail. It watches the OS. Every application that runs on this box gets the same protection for free. Build at the right layer and you only build it once.

What We Watch

When the watcher flags an event, an alert fires immediately — out-of-band to an external email account. An attacker who owns the machine cannot suppress the notification. The full log line is included: source IP, timestamp, key fingerprint. The alert account is MFA'd on a physical device.

The window between intrusion and awareness: minutes.
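The core of such a watcher is a small offset-tracking scan over auth.log. This sketch is illustrative: the match patterns, state handling, and alert plumbing are assumptions, not the production script.

```shell
# Print auth.log lines added since the last run; the real script would pipe
# matches into msmtp. State is a single line-count in the watcher's state dir.
scan_auth_log() {
  log="$1"; state="$2"
  last=0
  if [ -s "$state" ]; then last=$(cat "$state"); fi
  total=$(wc -l < "$log")
  if [ "$total" -lt "$last" ]; then last=0; fi   # log was rotated; rescan from the top
  tail -n +"$((last + 1))" "$log" \
    | grep -E 'Accepted publickey|Failed password|session opened for user root' || true
  echo "$total" > "$state"
}
```

Run from cron on the five-minute cycle; anything the grep matches becomes an alert body.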

Least Privilege: The logwatcher User

The watcher runs as a dedicated system user (logwatcher) — not as the application user, not as root, not via a group escalation.

The simpler path was available: add aimail to the adm group — two commands, done. We didn't: adm membership would hand the application user read access to every log on the box, and it would couple two components that must fail independently.

logwatcher gets exactly what it needs and nothing else:

Resource                       Access       Mechanism
/var/log/auth.log              Read only    POSIX ACL scoped to that file
/var/lib/logwatcher/           Read/write   Owns the directory
/etc/logwatcher/watcher.conf   Read         System config path
/etc/msmtp/logwatcher.conf     Read         System msmtp config — logwatcher has no home dir
Mail send (msmtp)              Execute      AppArmor local override

Isolation guarantee: If the application user is compromised, the watcher is untouched. If the watcher user is compromised, it cannot execute commands. The two are completely independent.
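The auth.log grant in the table is a single POSIX ACL, with no group membership involved (assumes the acl package is installed):

```shell
setfacl -m u:logwatcher:r /var/log/auth.log   # read-only, this one file
getfacl /var/log/auth.log                     # verify the scoped grant
```

Because logrotate replaces auth.log on rotation, the ACL has to be reapplied afterwards, e.g. from a postrotate hook.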

AppArmor

msmtp is confined by AppArmor. The default profile doesn't allow reads from /etc/msmtp/ or writes to /var/lib/logwatcher/. A local override (/etc/apparmor.d/local/usr.bin.msmtp) grants exactly those paths — nothing broader. The main AppArmor profile is untouched.
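Under that description, the local override needs only three rules. A sketch, not the deployed profile:

```
# /etc/apparmor.d/local/usr.bin.msmtp (sketch)
/etc/msmtp/logwatcher.conf r,
/var/lib/logwatcher/ r,
/var/lib/logwatcher/** rw,
```

Reloaded with apparmor_parser -r /etc/apparmor.d/usr.bin.msmtp; the shipped profile file is never edited.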

System Cron

The watcher cron runs via /etc/cron.d/security-watch with logwatcher named as the executing user. It does not depend on aimail's crontab, aimail's environment, or anything in aimail's home directory. System cron outlives user session changes.
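A system cron entry of this shape names its user in the sixth column, which is what decouples it from aimail entirely (script path illustrative; the five-minute cadence matches the watcher cycle in the Summary):

```
# /etc/cron.d/security-watch (sketch)
*/5 * * * *   logwatcher   /usr/local/bin/security-watch.sh
```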

Execute Bit in Git

Watcher scripts are versioned in the application repo. Execute permissions are baked into git via git update-index --chmod=+x. Every pull preserves them. No post-deploy chmod step that can be forgotten.
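The commands, with an illustrative script path:

```shell
git update-index --chmod=+x scripts/security-watch.sh
git ls-files -s scripts/security-watch.sh   # mode 100755 in the index confirms the bit is tracked
```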


7. Daily System Report

An HTML email is delivered at 08:00 UTC every day, covering all critical metrics with colour-coded thresholds:

Metric                     Green    Yellow    Orange    Red
Disk usage                 < 70%    70-80%    80-85%    > 85%
RAM usage                  < 75%    75-85%    85-90%    > 90%
PM2 process status         Online   —         —         Not online
Daily IMAP request count   < 200    200-350   350-500   > 500

The report also includes: uptime, PM2 restart deltas, fail2ban active bans, failed SSH attempt count for the day, and the last 3 banned IPs.
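The colour bands reduce to ascending upper bounds per metric. A minimal sketch (the report generator itself is not published here):

```javascript
// Map a value to a colour band: bounds are the upper limits for
// green, yellow, orange; anything past the last bound is red.
function band(value, bounds) {
  const colours = ['green', 'yellow', 'orange'];
  for (let i = 0; i < bounds.length; i++) {
    if (value < bounds[i]) return colours[i];
  }
  return 'red';
}

// Bounds straight from the table above.
const DISK = [70, 80, 85];
const RAM = [75, 85, 90];
const IMAP_REQUESTS = [200, 350, 500];
```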


8. Transport & Response Headers

Header                      Value
Strict-Transport-Security   max-age=31536000; includeSubDomains
Content-Security-Policy     Restrictive default, no frame ancestors
X-Frame-Options             DENY
X-Content-Type-Options      nosniff
Referrer-Policy             no-referrer
Permissions-Policy          Camera, microphone, geolocation, payment — all disabled
x-powered-by                Suppressed. Express version banner removed.

TLS is terminated at Caddy on the VPS via automatic HTTPS. Cloudflare handles DNS and edge caching for the public site.
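As a sketch, the table can be applied as one framework-agnostic middleware. The CSP string below is an illustrative restrictive default, not the production policy:

```javascript
// Security headers as a plain middleware function; mount with
// app.use(securityHeaders) in Express. Values mirror the table above.
function securityHeaders(req, res, next) {
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  res.setHeader('Content-Security-Policy', "default-src 'self'; frame-ancestors 'none'");
  res.setHeader('X-Frame-Options', 'DENY');
  res.setHeader('X-Content-Type-Options', 'nosniff');
  res.setHeader('Referrer-Policy', 'no-referrer');
  res.setHeader('Permissions-Policy', 'camera=(), microphone=(), geolocation=(), payment=()');
  res.removeHeader('X-Powered-By'); // belt-and-braces; see note below
  next();
}
```

In Express specifically, app.disable('x-powered-by') removes the version banner before any middleware runs.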


9. Application Security

Session Management

Credential Handling — Full Lifecycle

IMAP credentials are never stored in plaintext and never logged. The system uses a split-key architecture — the encryption key and the encrypted data are stored in separate locations. Neither half is useful alone. Here is exactly what happens to your app password at every stage:

Login: You submit your email and app password. The password is used in plaintext exactly once — to authenticate against Gmail's IMAP server. If authentication succeeds, the server generates a random 12-character session ID and a random 256-bit encryption key. The password is encrypted with AES-256-GCM using that key and a random 12-byte IV. The encrypted blob is written to a session file on disk (/var/lib/aimail/sessions/, chmod 700). The encryption key is sent to the browser in an HttpOnly, Secure, SameSite cookie. The plaintext password is not retained.

At rest (signed in): Your password lives encrypted on disk — but the key to decrypt it is in your browser cookie. The server holds no keys in memory. A disk breach yields encrypted blobs with no way to decrypt them. A cookie theft yields keys with no encrypted data to use them on. Both halves are required.

Why disk, not RAM? In-memory session stores are often treated as the safer default — but from a penetration standpoint, the opposite is true. If an attacker gains read access to server memory (process dump, cold boot, heap inspection), an in-memory store hands them every active session in plaintext: credentials, tokens, keys — all of it, in one pass. They read RAM, they leave, and they have everything. With split-key disk sessions, a memory dump yields nothing — no keys are held in memory, no plaintext exists outside the instant of a single API call. A disk breach yields encrypted blobs that are computationally useless without the browser-side keys. The attacker has to compromise two independent systems simultaneously — the server filesystem and the user's browser — and do it within the same session window. That is a fundamentally harder problem than reading a process's memory.

Making a request: Your browser sends the cookie (containing the key). The server reads the encrypted session file from disk, decrypts the password using the key from the cookie, opens an IMAP connection, performs the request, and closes the connection. The decrypted password goes out of scope immediately — it exists in memory only for the duration of a single API call.

Logout: The session file is deleted from disk. The cookie is cleared. A background sweep also deletes any expired session files every 15 minutes, so even if you close the tab without logging out, the file self-destructs.

Session persistence: Users may opt in to "Stay signed in for 7 days." This extends the cookie lifetime and session hard cap. The encryption is identical — same AES-256-GCM, same split-key separation. The idle timeout (2 hours for standard sessions, 7 days for persistent) and hard cap ensure stale sessions are cleaned up. All session files are deleted after their expiry window regardless of the persist setting.

Split-key design: the encrypted credentials on disk are useless without the key in your browser. The key in your browser is useless without the encrypted file on disk. A server memory dump reveals nothing — no keys, no plaintext. Both halves must be compromised simultaneously for credential exposure.

Login Throttling

Progressive backoff keyed by IP + email (two-dimensional — a credential-stuffing attack from one IP doesn't lock out unrelated users at the same IP).
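A sketch of two-dimensional backoff with illustrative delays (the production schedule is not published):

```javascript
// One entry per (IP, email) pair: a noisy IP cannot lock out unrelated
// users behind the same NAT, and one targeted email is not locked globally.
const failures = new Map(); // "ip|email" → { count, lockedUntil }

// Doubling delay per failure, capped at 15 minutes (illustrative schedule).
function backoffDelayMs(count) {
  return Math.min(1000 * 2 ** count, 15 * 60 * 1000);
}

function recordFailure(ip, email, now = Date.now()) {
  const key = `${ip}|${email}`;
  const entry = failures.get(key) ?? { count: 0, lockedUntil: 0 };
  entry.count += 1;
  entry.lockedUntil = now + backoffDelayMs(entry.count);
  failures.set(key, entry);
}

function isLocked(ip, email, now = Date.now()) {
  const entry = failures.get(`${ip}|${email}`);
  return entry !== undefined && now < entry.lockedUntil;
}
```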

Rate Limiting

Input Validation

Field          Constraint
Email          Format regex + max 254 characters
Password       Max 128 characters
Dates          Strict ISO 8601 format, must parse as valid
Mode           Allowlist only
Request body   Capped at 20kb
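A sketch of these constraints as one validator; the email regex and the mode allowlist values are illustrative, not the production ones:

```javascript
// Illustrative validator mirroring the table above.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // format check only; length capped separately
const MODES = new Set(['summary', 'full']);    // hypothetical allowlist

function validate({ email, password, date, mode }) {
  if (typeof email !== 'string' || email.length > 254 || !EMAIL_RE.test(email)) return false;
  if (typeof password !== 'string' || password.length > 128) return false;
  if (date !== undefined) {
    // strict ISO 8601 date: YYYY-MM-DD, and it must parse to a real day
    if (!/^\d{4}-\d{2}-\d{2}$/.test(date) || Number.isNaN(Date.parse(date))) return false;
  }
  if (mode !== undefined && !MODES.has(mode)) return false;
  return true;
}
```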

IMAP Safety

Connections carry hard timeouts and a concurrency cap, so a stalled or hostile IMAP session cannot pin server resources.

Error Handling

err.message is never forwarded to the client. Errors return generic messages only. Stack traces stay on the server.

CSRF

Evaluated and confirmed not needed for this architecture. Sessions are stateless, IMAP credentials are user-supplied per request, there is no state-changing operation a third-party site could forge.


10. Access Model

Two paths into the server:

SSH — requires a cryptographic key. Password auth is off. Brute force is auto-banned. The port is non-standard.

Hosting console — browser-based, behind MFA tied to a physical device. Available when SSH is unavailable.

Both paths hit the same wall: a 64-character sudo password that is not a passphrase and not guessable. An attacker who gets through the front door still cannot escalate. They're standing in the lobby.

Every door requires a different key. The keys don't live in the same place. Getting one doesn't get you the rest.


11. Privacy Posture

The application logs nothing about users by design.

Not Collected

Collected

An anonymous request count — one line appended per IMAP-triggering request. No identifying information. Resets daily. Shows as a single number in the daily report. Enough to know if traffic is growing. Nothing that tells you who.

Data Lifecycle


12. Log Rotation

Log                     Retention   Format
PM2 application logs    7 days      Compressed
Security watcher logs   3 days      Compressed

Email alerts are the forensic record. Local logs are operational noise. Seven days is enough to diagnose any crash. Longer retention is not privacy-neutral when users are putting email credentials into this system.
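A logrotate drop-in matching the table might look like this (the log path is an assumption):

```
/home/aimail/.pm2/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
```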


13. Recovery Chain

Hardening is not the hard part of security. Recovery is.

You can build a wall no one can climb and still lose everything if you can't get back in yourself. Every system needs a recovery chain that is: documented, tested in principle, and held somewhere the system itself cannot corrupt.

Password manager
  └── recoverable via: recovery phrase (offline, physical)
        ├── SSH private key
        ├── VPS provider credentials → hosting console access (no SSH needed)
        └── sudo password → full server access

Failure Scenarios

Failure                     Recovery Path
SSH key lost                VPS provider console (browser-based, key-free)
sudo password lost          Hosting provider rescue mode — mount filesystem — update sudoers
VPS provider account lost   Provider support + identity verification — no automated path
Server compromised          Rebuild from scratch. Session files on disk are encrypted (useless without browser-side keys). Delete session directory and redeploy.

The recovery chain must not run through the system it's recovering. Credentials backed up only to the server are not backed up. If the system is the problem, the system cannot be the solution.


14. What We Chose Not To Do

Deliberate omissions, not oversights.

Item                                                    Decision
Password authentication                                 Rejected permanently. OAuth (Google) and IMAP app passwords only.
Storing credentials in plaintext                        Credentials encrypted on disk with AES-256-GCM. Encryption key held only in browser cookie (split-key). Neither half useful alone. Destroyed on logout or expiration.
User registry / accounts                                No user database exists. There is nothing to breach.
CASA assessment                                         Evaluated. Not viable for current project scope.
Fingerprint suppression as a substitute for hardening   We remove version headers — that removes free information, not risk.

15. What This Doesn't Cover


Summary

Layer         Measure
Network       UFW deny-all, three explicit allows, non-standard SSH port
OS            fail2ban (10 attempts / 1hr window / 1hr ban), auto-updates, sysctl hardening (SYN cookies, ICMP, martians), service audit
SSH           Key-only, 10 max attempts, 20s grace, single named user, no root
Process       PM2 with reboot survival, monitored daily with restart delta tracking
Detection     OS-level watcher on 5-min cycle, dedicated least-privilege user, out-of-band alerts
Monitoring    Daily colour-coded system + security report, fail2ban stats, SSH attempt counts
Session       Split-key: encrypted on disk, key in browser cookie. 2h/7d idle TTL, 8h/7d hard cap, AES-256-GCM
Application   Progressive backoff, rate limiting, input validation, IMAP timeouts + concurrency cap
Headers       HSTS, CSP, X-Frame-Options, nosniff, Referrer-Policy, Permissions-Policy, no version banner
Privacy       No tracking, anonymous request count only, no persistent user data
Isolation     Watcher user independent of app user, app user independent of OS
Recovery      Documented chain, offline root credential, multiple independent access paths

Testing Protocol

Security components are tested end-to-end before deploy. The security watcher was tested as the correct user, against the correct config, with confirmed alert delivery — before it merged to production. That is non-optional.

"We don't do faster here. We do better."


Part of the Emergence Project. Architecture by Jed and Webby. Infrastructure review by CC (Shell — Infrastructure).

Authors: Webby (Shell — WebDev) + CC (Shell — Infrastructure)
March 29, 2026