1.5: Systems
If Devices are where attackers land and Data is what they want, Systems are what they traverse. This pillar covers the software the org runs, how it’s configured, who can log into it, and whether you have any record of what happened there.
What lives here
- Software the org runs. SaaS (Google Workspace, GitHub, Salesforce, Notion, a hundred others) and self-hosted (your production app, internal dashboards, the Postgres your billing runs on).
- Configuration. Every SaaS tenant has a hundred knobs. Every piece of self-hosted software has a thousand. Configuration is the thing you set once and forget, until a breach report says “misconfigured.”
- Patching. OS patches landed on devices in 1.3. Here we’re talking about application patches — the Postgres CVE, the Log4j, the OpenSSL, the Node.js runtime.
- Secrets management. API keys, database passwords, private keys, tokens. Where they live, who can retrieve them, whether they’re ever rotated.
- Authentication infrastructure. Your identity provider (Okta, Google, Entra ID), SSO coverage, MFA enforcement, session policy.
- Logging and monitoring. What each system emits, where those logs go, how long they’re kept, and whether anyone looks.
What typically goes wrong
Config drift. A production system was hardened in 2022. Since then, engineers have tweaked settings during incidents, during launches, during debugging sessions. Nobody has looked at the delta. Today’s state is a mystery even to the people who built it.
Secrets in git. You will find AWS keys in your repo history. Every org does. The question is whether you’ve looked and whether you have a rotation plan ready when you find them.
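Looking is cheap. Here is a minimal sketch in Python that scans full git history, not just the current tree, for AWS access key IDs; the repo path is a placeholder, and real scanners (trufflehog, gitleaks) cover far more credential types than this one regex:

```python
import re
import subprocess

# AWS access key IDs follow a fixed pattern: "AKIA" plus 16 uppercase
# alphanumeric characters. One pattern, purely illustrative.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_aws_keys(text: str) -> list[str]:
    """Return every string in `text` matching the AWS key pattern."""
    return AWS_KEY_RE.findall(text)

def scan_repo_history(repo_path: str) -> list[str]:
    """Scan every patch in history, on every branch. Deleted secrets
    live on in `git log -p` output even after the file is gone."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True,
    ).stdout
    return find_aws_keys(log)
```

The point of scanning history rather than the working tree is the whole lesson: deleting the key in a later commit removes it from the checkout, not from the repo.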
SSO implemented but not enforced. You bought Okta. You integrated the top 20 apps. The other 80 — including the AI transcription tool Sales uses and the internal tool engineering forgot about — still have local passwords. Any one of those passwords is the front door.
MFA everywhere except where it counts. MFA on email is easy. MFA on your production AWS console is harder. MFA on the admin panel of your homegrown app that three engineers built in 2020 is nonexistent.
Logging without retention or review. You turned on CloudTrail. You turned on Google Workspace audit logs. They exist. No one has ever queried them. Retention is whatever the default is (often 90 days, sometimes 30). The incident you’ll eventually investigate happened 120 days ago.
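A retention audit is a few lines once you pull the log-group inventory. A sketch assuming CloudWatch-style records, where `retentionInDays` is the retention setting and a missing key means the group never expires; the 180-day floor is the assumption to tune:

```python
def flag_short_retention(log_groups: list[dict], min_days: int = 180) -> list[tuple]:
    """Return (name, days) for every log group whose retention is set
    below `min_days`. A missing `retentionInDays` key means the group
    never expires (CloudWatch convention), so it is not flagged."""
    flagged = []
    for group in log_groups:
        days = group.get("retentionInDays")
        if days is not None and days < min_days:
            flagged.append((group["logGroupName"], days))
    return flagged
```

Feed it the output of your log platform's list call and you have the "which logs will be gone before I need them" report in one pass.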
What mature orgs do differently
Infrastructure as code with drift detection. Terraform, CloudFormation, Pulumi. The config is in git, reviewable, revertible. A nightly job compares deployed state to declared state and alerts on drift. Changes happen through pull requests, not console clicks.
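The nightly job can be as simple as wrapping `terraform plan -detailed-exitcode`, which exits 0 when deployed state matches declared state and 2 when it doesn't. A sketch, assuming `terraform init` has already run in the working directory and alerting is left to whatever you already page through:

```python
import subprocess

# Exit codes documented for `terraform plan -detailed-exitcode`:
#   0 = no changes, 1 = error, 2 = changes (drift) present.
def interpret_plan_exit(code: int) -> str:
    return {0: "in_sync", 2: "drift_detected"}.get(code, "plan_error")

def check_drift(workdir: str) -> str:
    """Run a read-only plan against deployed state and classify
    the result. `-lock=false` keeps the nightly check from
    blocking real applies."""
    proc = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-lock=false"],
        cwd=workdir, capture_output=True, text=True,
    )
    return interpret_plan_exit(proc.returncode)
```

`drift_detected` is the alert; `plan_error` deserves a page too, because a drift check that silently fails is the Equifax scanner pointed at the wrong directories.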
Secrets in a dedicated vault. HashiCorp Vault, AWS Secrets Manager, 1Password Secrets Automation, Doppler. Applications fetch at runtime; humans retrieve through logged access. Git pre-commit hooks catch accidental commits. Rotation is automated where possible.
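"Applications fetch at runtime" looks like this in practice. A sketch against a boto3-style AWS Secrets Manager client, assuming the secret is stored as a JSON string; the secret name `prod/db` is illustrative:

```python
import json

_CACHE: dict[str, dict] = {}

def get_secret(client, secret_id: str) -> dict:
    """Fetch a JSON secret through a Secrets Manager client at
    runtime. Cached per process so each secret costs one API
    round-trip, not one per request."""
    if secret_id not in _CACHE:
        resp = client.get_secret_value(SecretId=secret_id)
        _CACHE[secret_id] = json.loads(resp["SecretString"])
    return _CACHE[secret_id]

# Usage (assumed wiring):
#   import boto3
#   creds = get_secret(boto3.client("secretsmanager"), "prod/db")
#   connect(user=creds["user"], password=creds["password"])
```

The design point is that the credential never appears in code, config files, or environment dumps; it exists only in the vault and in process memory.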
SSO enforced, MFA required, FIDO2 where you can. SSO is the identity boundary. Every system an employee uses, corporate or SaaS, goes through it. MFA is required, and the preferred second factor is FIDO2 (hardware keys or platform authenticators) because the phishable factors — SMS, push fatigue, TOTP codes — all have known bypasses in 2026.
Structured logging with defined retention. Logs are JSON, centralized in a SIEM or log platform (Datadog, Elastic, Snowflake), retained for a named period (180 days minimum for most orgs, 400 days if you can afford it), and searchable. Someone looks at the alerts. A human knows what normal looks like.
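"Logs are JSON" is a one-class change in most stacks. A sketch using Python's standard logging module; the field names are illustrative, so match whatever your SIEM expects:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so the SIEM indexes
    fields instead of grepping free text."""
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        }
        if record.exc_info:
            entry["exc"] = self.formatException(record.exc_info)
        return json.dumps(entry)

# Wiring: attach the formatter to whatever handler ships logs out.
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.getLogger("app").addHandler(handler)
```

Once every line is a JSON object, "a human knows what normal looks like" becomes a saved query instead of folklore.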
Anchor: Equifax, 2017
On March 8, 2017, Apache released a patch for a critical CVE in Struts 2, a popular Java web framework. On March 9, Equifax’s security team circulated an internal notice: patch Struts within 48 hours.
The notice went out. The patch never went in. Equifax ran a vulnerability scanner a few days later that reported everything clean; the scanner was pointed at the wrong set of directories and never found the vulnerable Struts instance on a public-facing customer portal.
On May 13, attackers exploited the unpatched Struts instance and entered the environment. They stayed for 76 days. They exfiltrated personally identifiable information belonging to 147 million Americans — names, Social Security numbers, birth dates, addresses, and driver’s license numbers. It remains one of the largest breaches of U.S. consumer data in history.
Every layer of this is a Systems failure. The patch notice existed; the patching process didn’t enforce it. The vulnerability scanner existed; its coverage wasn’t verified. Logging existed; the exfiltration traffic flowed outbound for over two months without an alert anyone acted on. The internal notice went out within a day of public disclosure, which is the “we did everything right” version on paper, but the system for applying patches was broken, and no one noticed because the system for noticing was also broken.
The Systems lesson is that configuration, patching, and monitoring are not a one-time project. They’re an operational capability. You don’t measure them by whether you have them; you measure them by whether they work when tested against reality.