Byte Clarity

Cybersecurity

Cybersecurity metrics that actually matter for a small business

Most cybersecurity metrics are vanity. Here are the handful that actually matter for a small business — simple to track, tied to real risk, useful in real decisions.

By Greg Douglas · 8 min read

A woman viewed from behind studies a large wall of illuminated dashboards filled with charts, bar graphs, and data visualizations — representing data-driven decision making.

Most cybersecurity metrics are noise. “Number of attacks blocked this month” is a marketing number — it’s designed to look impressive in a report, not to help you make a decision. “Alerts triaged” is activity dressed up as outcomes.

For a small business trying to run an actual security program — not just commission one — the right metrics are few, simple, and tied to decisions. You should be able to track them in a spreadsheet. You should know what you’d do if one moved the wrong way.

This post is that short list: the metrics we’ve found genuinely useful when working with small businesses, organized by the questions they answer.

What makes a metric worth tracking

Before the list, a filter. A useful metric has three qualities:

  • It’s tied to an action. If the number moves the wrong way, there’s a specific thing you’d do. If there isn’t, the metric is decoration.
  • It’s measurable without a full-time analyst. Small businesses don’t have a SOC. A number that requires a data engineer to compute won’t get computed.
  • It’s comparable across time. You want trends, not snapshots. Ten consecutive months of “MFA coverage at 72%” is a much clearer signal than one month at “72%.”

The metrics below all pass those three filters. Pick three or four. Track them quarterly. That beats twenty metrics tracked never.

This framing aligns with the measurement discipline in the NIST Cybersecurity Framework 2.0 (the new Govern function explicitly calls out measurement) and the CIS Controls v8 Measures and Metrics companion — both are free and worth having on hand if you want more depth than this post covers.

Coverage metrics: the lagging numbers that compound

Coverage metrics ask “what percentage of the things-we-said-we’d-do actually happened?” They’re lagging indicators, which sounds less exciting than real-time dashboards — but a business where the coverage numbers are high and stable is a business that’s actually doing the work.

1. MFA coverage (%)

What it measures: The percentage of active user accounts with multi-factor authentication fully enforced (not just available).

Why it matters: As we covered in our password-management deep-dive, MFA is the single highest-leverage control in a small-business security program. Microsoft has repeatedly reported that MFA blocks more than 99% of automated account-compromise attacks. Your MFA coverage number is, effectively, how much of that 99% protection your business is actually getting.

Target: 100%. Anything less is a list of accounts to address, not a success to celebrate.

Where to find it: Microsoft 365 admin center → Sign-in activity report; Google Workspace admin → Security → 2-Step Verification; your identity provider’s dashboard.
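If your identity provider can export the account list, the coverage number is one filter away. A minimal sketch, assuming a hypothetical CSV export with `status` and `mfa_state` columns — real exports vary by provider, so treat the column names as placeholders:

```python
import csv
import io

def mfa_coverage(rows):
    """Percentage of active accounts with MFA enforced (not just registered)."""
    active = [r for r in rows if r["status"] == "active"]
    if not active:
        return 0.0
    enforced = [r for r in active if r["mfa_state"] == "enforced"]
    return round(100 * len(enforced) / len(active), 1)

# Hypothetical export -- real column names vary by identity provider.
export = """user,status,mfa_state
alice,active,enforced
bob,active,registered
carol,active,enforced
dan,disabled,none
"""
rows = list(csv.DictReader(io.StringIO(export)))
print(mfa_coverage(rows))  # -> 66.7 (2 of 3 active accounts enforced)
```

Note the denominator: disabled accounts are excluded, and "registered but not enforced" counts as uncovered, matching the definition above.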

2. Patching SLA compliance (%)

What it measures: The percentage of endpoints that are patched within the target window (e.g., critical CVEs within 14 days, all others within 30).

Why it matters: Exploited unpatched software is consistently among the top initial-access vectors in the Verizon DBIR year over year. An unpatched endpoint is a known, advertised vulnerability sitting on your team’s laptops.

Target: ≥95% within your defined window. Pick the window based on your risk tolerance; what matters is consistency.

Where to find it: Microsoft Intune / Jamf / your RMM tool; Windows Update for Business reports; Apple Business Manager.
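Given a per-endpoint record of how long each patch took, the SLA number is a simple count against the window. A sketch with illustrative data; the `severity` and `days_to_patch` fields are assumptions, not any tool's real export format:

```python
def sla_compliance(endpoints, critical_days=14, other_days=30):
    """Share of endpoints patched inside their SLA window (%)."""
    within = 0
    for e in endpoints:
        # Critical CVEs get the tighter window; everything else gets the standard one.
        window = critical_days if e["severity"] == "critical" else other_days
        if e["days_to_patch"] <= window:
            within += 1
    return round(100 * within / len(endpoints), 1)

# Illustrative fleet data -- hostnames and values are placeholders.
fleet = [
    {"host": "lap-01", "severity": "critical", "days_to_patch": 9},
    {"host": "lap-02", "severity": "critical", "days_to_patch": 21},  # missed window
    {"host": "lap-03", "severity": "other", "days_to_patch": 28},
    {"host": "lap-04", "severity": "other", "days_to_patch": 12},
]
print(sla_compliance(fleet))  # -> 75.0
```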

3. Backup test restore success rate (%)

What it measures: Of the restore tests you performed this quarter, the percentage that succeeded on the first attempt with the expected data.

Why it matters: Having backups is not a security control. Successfully restoring from backups is. Most small-business backup failures surface only during a real incident — exactly the worst time to learn your tapes don’t restore. This metric converts theoretical backups into proven ones.

Target: 100%. If tests are failing, fix the backup before you need it.

Cadence: Quarterly. Pick a real file, restore it, open it, confirm it’s the right file. That’s the test.

Response metrics: how fast you move when something changes

Coverage tells you the program works. Response metrics tell you how quickly the business reacts when reality deviates from the plan — which, eventually, it always does.

4. Offboarding completion time (hours)

What it measures: The time between a person’s last day and complete deactivation of their access (SSO, email, password manager, any direct app accounts, physical access).

Why it matters: Ex-employee accounts are one of the most common — and most preventable — paths to small-business incidents. An account deactivated the day someone leaves is a closed door. An account still active two weeks later is an open invitation.

Target: Under 4 hours from last-day end-of-business. Ideally, same-hour.

How to track it: Timestamp the offboarding checklist. If your offboarding process doesn’t include a timestamp, that’s the fix.
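Once the checklist captures both timestamps, the metric falls out directly. A small sketch using Python's standard `datetime`; the timestamps are illustrative:

```python
from datetime import datetime

def offboarding_hours(last_day_eob, deactivated_at):
    """Hours between end of last day and full access deactivation."""
    delta = deactivated_at - last_day_eob
    return round(delta.total_seconds() / 3600, 1)

# Timestamps pulled from the offboarding checklist (illustrative values).
eob = datetime(2024, 3, 15, 17, 0)
done = datetime(2024, 3, 15, 19, 30)
print(offboarding_hours(eob, done))  # -> 2.5 (inside the 4-hour target)
```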

5. Phishing reporting rate (%)

What it measures: The percentage of suspicious emails that reach inboxes and get reported by employees (rather than clicked, ignored, or deleted silently).

Why it matters: The measure that matters for phishing resilience isn’t whether people fall for test emails — it’s how quickly they report real ones. A high reporting rate means your team is watching, and you get the first hour of an incident back. A low reporting rate means incidents simmer unreported until they’re no longer small.

Target: Trending up, quarter over quarter. Most small businesses start in single digits; a mature program is in the 30-50% range.

How to track it: Run quarterly phishing simulations (Microsoft Attack Simulator, Google’s phishing tests, KnowBe4, etc.) — and count reports, not clicks. Reframe the whole program around reporting.

6. Mean time to patch critical vulnerabilities (days)

What it measures: For CVEs rated Critical or actively exploited (see CISA’s Known Exploited Vulnerabilities Catalog), how long from vendor disclosure to your environment being fully patched.

Why it matters: This is the urgency metric. Your general patching SLA can be 30 days, but critical-and-actively-exploited vulnerabilities are in a different category — they’re being used in the wild today.

Target: ≤ 7 days for anything on the CISA KEV list, ideally faster.
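Computing this from a tracked list of critical CVEs is straightforward date arithmetic. A sketch with placeholder CVE IDs and dates, assuming you log the disclosure date and the date your environment was fully patched:

```python
from datetime import date

def mean_days_to_patch(vulns):
    """Mean days from vendor disclosure to fully patched, for KEV/critical CVEs."""
    days = [(v["patched"] - v["disclosed"]).days for v in vulns]
    return round(sum(days) / len(days), 1)

# Illustrative entries; CVE IDs and dates are placeholders.
kev = [
    {"cve": "CVE-XXXX-0001", "disclosed": date(2024, 5, 1), "patched": date(2024, 5, 4)},
    {"cve": "CVE-XXXX-0002", "disclosed": date(2024, 5, 10), "patched": date(2024, 5, 19)},
]
print(mean_days_to_patch(kev))  # -> 6.0
```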

One leading indicator worth the effort

Leading indicators predict future outcomes rather than record past ones. They’re harder to get right, but one is genuinely worth tracking:

7. Training participation + incident reporting trend

What it measures: A combined, qualitative signal — are people both completing security training and reporting suspicious things at a rate that’s trending up, down, or flat?

Why it matters: A business where training completion is high and reporting rate is rising has a security culture that’s getting stronger. A business where training completion is dutiful but nobody ever reports anything has a compliance program, not a security program. The two need to move together.

Target: Training completion ≥ 95%, reporting rate trending up over rolling 4-quarter average.
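One way to turn quarterly numbers into an "up, down, or flat" signal is to compare the latest rolling 4-quarter average against the previous one. A sketch, assuming you keep reporting rates as a simple oldest-first list:

```python
def rolling_trend(values, window=4):
    """Compare the latest rolling average to the previous one: 'up', 'down', or 'flat'."""
    if len(values) < window + 1:
        return "insufficient data"
    latest = sum(values[-window:]) / window
    previous = sum(values[-window - 1:-1]) / window
    if latest > previous:
        return "up"
    if latest < previous:
        return "down"
    return "flat"

# Quarterly phishing reporting rates (%), oldest first -- illustrative values.
rates = [8, 12, 15, 19, 24]
print(rolling_trend(rates))  # -> 'up'
```

The rolling average smooths out one noisy quarter; a single dip doesn't flip the signal unless it drags the whole window down.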

What not to measure

Some metrics are tempting but mostly noise:

  • “Number of attacks blocked.” This is a vendor dashboard feature, not a measurement of your program. It conflates automated firewall events with real threats and tells you nothing actionable.
  • Raw alert counts. More alerts isn’t worse. Fewer alerts isn’t better. What matters is what’s in the alerts, and whether the ones that matter are getting attention.
  • “Cost per incident.” Tempting in a board deck. Nearly impossible to compute accurately for a small business, and when you can, the answer (“a lot”) doesn’t usually change your decisions.
  • Password complexity scores. Current NIST guidance explicitly rejects composition rules as a meaningful security signal. Length matters, breach-corpus screening matters. Complexity theater does not.

Tools to track these without buying a SIEM

You don’t need a Security Information and Event Management system to run this list. Most small businesses can get the numbers from tools they already pay for:

  • Microsoft Secure Score — a built-in dashboard in Microsoft 365 that surfaces MFA coverage, policy compliance, and a prioritized list of improvements. Free with any business tier.
  • Google Workspace Security Dashboard — similar function in the Google admin console; surfaces MFA adoption, external sharing, and account activity.
  • Your endpoint management tool (Intune, Jamf, RMM) for patching SLA and device-encryption coverage.
  • A simple spreadsheet. Honestly. Coverage metrics + trendlines, updated once a quarter, is a perfectly legitimate measurement program for a small business. Nothing wrong with it.

Start simple. You can always graduate to dashboarding tools later. What you can’t do is skip measurement and later wonder why the program drifted.

Using metrics to actually make decisions

The point of all this isn’t the report — it’s the decision the report drives. A quarterly measurement rhythm looks something like:

  • Pull the numbers. Takes about an hour if you’ve set it up.
  • Look at which moved. Anything trending the wrong way gets a concrete action assigned: “MFA coverage dropped from 98% to 92% — pull the list of non-compliant accounts and bring them current this week.”
  • Look at which are stable but below target. Pick one. Move it.
  • Put it in a one-page summary somewhere the right people see it. Not a 30-slide deck.

That rhythm — four or five metrics, quarterly cycle, clear actions — outperforms elaborate programs that never quite get off the ground.
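The one-page summary itself can be generated from the same numbers. A sketch of a plain-text one-pager; the metric names, values, and targets are illustrative:

```python
def quarterly_summary(metrics):
    """Render the quarterly one-pager: metric, current value, target, and flag."""
    lines = ["Metric                       Now  Target  Status"]
    for name, now, target in metrics:
        # Anything below target gets flagged for a concrete action.
        status = "OK" if now >= target else "ACTION"
        lines.append(f"{name:<26} {now:>5.1f}  {target:>5.1f}  {status}")
    return "\n".join(lines)

summary = quarterly_summary([
    ("MFA coverage (%)", 92.0, 100.0),
    ("Patching SLA (%)", 96.5, 95.0),
    ("Backup restores (%)", 100.0, 100.0),
])
print(summary)
```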

Where this fits

Metrics are one part of a functioning cybersecurity program. If you haven’t yet built the foundation they measure, our cornerstone post — where small businesses should actually start with cybersecurity — walks through the five controls that cover most of the risk. When something does go wrong, the one-page incident response plan template is designed to keep a small business functional through a bad day. And if you want a broader decision framework around strategic IT in general, our Streamline/Secure/Grow primer is the piece that holds everything else together.

If the whole “what should we measure?” question is itself the gap — if nobody owns it, or the data lives in five different admin portals — that’s the kind of thing a managed partner handles as part of the work. A free discovery call is the fastest way to talk through your specific situation.

Measure less, measure things that matter, and act on what you find. That’s most of the game.

