
There is a spending pattern I have observed across professional services firms for the better part of a decade, and it has not changed much despite the continuous evolution of the threat landscape. Organisations will invest significant sums in security technology (Microsoft 365 licences with full E5 security capabilities, Defender tooling, Intune for endpoint management, Azure infrastructure with security services layered on top) and then spend a fraction of that amount, or in many cases nothing at all, on independently verifying whether any of it is actually working.
I want to be precise about what I mean by ‘working’. I do not mean whether the tools are installed and running. They almost certainly are. I mean whether the configuration of those tools (the policies, the rules, the Conditional Access controls, the sensitivity labels, the sharing restrictions, the audit logs) is doing what it was intended to do, and has continued to do so as the environment has evolved around it.
That distinction matters enormously. And the data suggests most organisations are not drawing it.
The 2024 Verizon Data Breach Investigations Report analysed more than 10,600 confirmed data breaches across 94 countries, a record high. In the Professional, Scientific and Technical Services sector specifically, there were 2,599 incidents recorded, with 1,314 resulting in confirmed data disclosure. The two dominant attack patterns were social engineering and system intrusion. The most common initial action in breaches overall was the use of stolen credentials, involved in roughly 38 percent of cases: more than double the proportion that used phishing, and more than triple those that exploited vulnerabilities.
Read that again: the most reliable way into an organisation is not a sophisticated zero-day exploit. It is a username and a password that should not have worked. Which means the first question to ask is not ‘do we have the right tools?’ but ‘are the tools we have actually enforcing the controls that would make stolen credentials less useful?’
Multi-factor authentication, Conditional Access policies, legacy authentication blocks: these are the controls that close the credential gap. Microsoft 365 includes all of them. But inclusion is not the same as configuration, and configuration at a point in time is not the same as configuration that has held as the environment has drifted.
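To make ‘verification’ concrete rather than abstract, here is a minimal sketch of the kind of check an assessment might run against an exported list of Conditional Access policies. The field names loosely mirror the Microsoft Graph `conditionalAccessPolicies` schema, but the sample data is invented and this is an illustration of the approach, not a production audit tool.

```python
# Sketch: flag common Conditional Access gaps in an exported policy list.
# Field names loosely follow the Microsoft Graph conditionalAccessPolicies
# resource; the sample data below is invented for illustration.

def audit_policies(policies):
    """Return human-readable findings for common Conditional Access gaps."""
    findings = []
    enabled = [p for p in policies if p.get("state") == "enabled"]

    # 1. Does any enabled policy require MFA for all users and all apps?
    def requires_mfa_for_all(p):
        cond = p.get("conditions", {})
        users = cond.get("users", {}).get("includeUsers", [])
        apps = cond.get("applications", {}).get("includeApplications", [])
        controls = p.get("grantControls", {}).get("builtInControls", [])
        return "All" in users and "All" in apps and "mfa" in controls

    if not any(requires_mfa_for_all(p) for p in enabled):
        findings.append("No enabled policy requires MFA for all users and apps")

    # 2. Does any enabled policy block legacy (basic auth) clients?
    def blocks_legacy_auth(p):
        clients = p.get("conditions", {}).get("clientAppTypes", [])
        controls = p.get("grantControls", {}).get("builtInControls", [])
        legacy = "exchangeActiveSync" in clients or "other" in clients
        return legacy and "block" in controls

    if not any(blocks_legacy_auth(p) for p in enabled):
        findings.append("No enabled policy blocks legacy authentication")

    # 3. Report-only policies that were drafted but never switched on.
    for p in policies:
        if p.get("state") == "enabledForReportingButNotEnforced":
            findings.append(
                f"Policy '{p.get('displayName')}' is report-only, not enforced"
            )

    return findings


sample = [
    {
        "displayName": "Require MFA",
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
            "clientAppTypes": ["all"],
        },
        "grantControls": {"builtInControls": ["mfa"]},
    },
]

for finding in audit_policies(sample):
    print("FINDING:", finding)
```

The point of the sketch is the third check: a well-designed policy sitting in report-only mode passes every inventory question (‘do we have an MFA policy?’) while enforcing nothing, which is exactly the kind of gap that only shows up when someone reads the configuration rather than the licence list.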
Roughly 23 percent of cloud security incidents stem from misconfiguration, according to SentinelOne’s 2024 research. The Cloud Security Alliance rates misconfiguration as the single leading cloud threat, ahead of zero-day exploitation.
IBM’s 2024 Cost of a Data Breach Report found that the global average cost of a confirmed breach reached $4.88 million, a 10 percent increase on the prior year and the largest single-year jump since the pandemic. Breaches involving stolen or compromised credentials were the costliest vector to detect and contain, taking an average of 292 days from breach to containment. That is the better part of a year during which, in most cases, nobody knew anything was wrong.
For a 500-person professional services firm, the arithmetic is not abstract. A breach of that scale is an existential event for many firms, not because of the direct technical remediation cost, but because of the cascading consequences: operational disruption, regulatory notification obligations under UK GDPR, reputational damage to client relationships built over decades, and the possibility of regulatory action from the ICO or, for law firms, the SRA.
A professional services firm running Microsoft 365 E3 across 500 users is spending in the region of £140,000 per year on licences at current UK pricing, before Copilot add-ons or any additional Microsoft security tooling. An E5 deployment, which includes the full Defender suite, Azure AD Premium P2, and the broader security and compliance capabilities, will be meaningfully higher. These are not trivial costs. They are deliberate investments in productivity and, increasingly, in security infrastructure.
The E5 licence in particular is often purchased partly based on its security capabilities: Defender for Identity, Defender for Cloud Apps, Microsoft Purview, Privileged Identity Management. These are genuinely powerful tools when properly configured. The question is whether the configuration work was done to the standard required, whether it has been maintained as the environment evolved, and whether anyone has independently verified either of those things since the initial deployment.
In my experience, the answer to the third question is almost always no. Not because organisations do not care about security (they clearly do, or they would not be paying E5 prices), but because the verification step tends to fall into a structural gap. The Microsoft partner that handled the deployment has moved on to the next project. The internal IT team manages the day-to-day environment but rarely has the capacity or the independence to step back and assess it critically. And the annual audit, if there is one, tends to focus on policy documentation rather than technical control validation.
The tools are installed. The licences are paid. But misconfiguration does not announce itself. There is no failed service, no error message, no obvious incident to trigger investigation.
The UK Government’s Cyber Security Breaches Survey 2024 found that boards at smaller and mid-sized organisations typically discuss cyber security reactively, only when specific issues arise. In smaller firms, responsibility is frequently delegated entirely to external IT contractors, with the implicit assumption that any serious issue would be flagged. The survey found that only 58 percent of medium-sized businesses had a formal cyber security strategy in place, rising to 66 percent for large businesses.
That means in a third or more of large organisations, security spending is proceeding without a documented strategic framework for what it is intended to achieve or how it will be verified.
There is a useful parallel here with penetration testing. Most professional services firms above a certain size now commission an annual penetration test or include one in their cyber insurance renewal process. This is generally a positive development. The problem is that penetration testing has in many firms become a compliance exercise rather than a genuine security improvement mechanism.
The test is conducted. The report is produced. The findings sit in a document. A subset of the critical findings is remediated before the next renewal. The rest remain open, sometimes for years. The firm can point to the test as evidence of security diligence, but the underlying posture has not meaningfully improved.
The same dynamic plays out with Microsoft 365 security reviews. Organisations will commission a review, usually at the point of a deployment or an upgrade, and then treat the output as a baseline to be maintained rather than a snapshot to be continuously challenged. But cloud environments do not stay static. Users join and leave. Permissions accumulate. Policies are adjusted for operational convenience. Integrations are added. Sharing settings drift. New features are deployed before the governance framework catches up.
In an on-premises environment, this kind of drift was slow and visible. In a cloud environment, it is fast and largely invisible, until something goes wrong.
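One way to make that invisible drift visible is mechanical: keep an approved baseline snapshot of the settings that matter and diff the live configuration against it on a schedule. The sketch below shows the idea with invented setting names; in practice the snapshots would come from tenant exports, but the diffing logic is the same.

```python
# Sketch: detect configuration drift by diffing a live snapshot of tenant
# settings against an approved baseline. Setting names are invented.

def diff_config(baseline, current, path=""):
    """Yield (setting_path, baseline_value, current_value) for each drift."""
    for key in sorted(set(baseline) | set(current)):
        here = f"{path}.{key}" if path else key
        b, c = baseline.get(key), current.get(key)
        if isinstance(b, dict) and isinstance(c, dict):
            yield from diff_config(b, c, here)  # recurse into nested settings
        elif b != c:
            yield (here, b, c)


baseline = {
    "sharing": {"externalSharing": "existingGuests", "anonymousLinks": False},
    "audit": {"unifiedAuditLogEnabled": True},
}
current = {
    "sharing": {"externalSharing": "anyone", "anonymousLinks": True},
    "audit": {"unifiedAuditLogEnabled": True},
}

for setting, was, now in diff_config(baseline, current):
    print(f"DRIFT: {setting} changed from {was!r} to {now!r}")
```

Each adjustment made ‘for operational convenience’ shows up as an explicit, reviewable line rather than a silent divergence, which is the difference between drift you can govern and drift you discover after an incident.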
I am not arguing for continuous external consultancy or for replacing internal IT capability. I am arguing for periodic independent verification: a structured assessment that asks the question an internal team cannot easily ask about its own environment: is this configured to the standard we believe it is, and does that standard match the risk we are being asked to manage?
In practice, that means reviewing the things that matter most and drift most readily: identity and Conditional Access policy coverage, MFA enforcement and legacy authentication blocking, external sharing configuration and accumulated anonymous link exposure, sensitivity label deployment and coverage rates, audit logging status, and for firms deploying AI tools, the governance framework around what those tools can reach and do.
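The ‘accumulated anonymous link exposure’ item deserves illustration, because it is the clearest example of risk that builds silently over time. Given an exported sharing report, a review can surface anonymous links that have outlived any plausible purpose. The report shape here (`path`, `scope`, `created` fields) is assumed for the sketch, not a real Microsoft export format.

```python
# Sketch: surface long-lived anonymous sharing links from an exported
# sharing report. The report shape is assumed, not a real export format.
from datetime import date


def stale_anonymous_links(links, today, max_age_days=90):
    """Return anonymous links older than max_age_days as of `today`."""
    return [
        link for link in links
        if link["scope"] == "anonymous"
        and (today - link["created"]).days > max_age_days
    ]


report = [
    {"path": "/clients/contract.docx", "scope": "anonymous",
     "created": date(2023, 1, 10)},
    {"path": "/hr/policy.pdf", "scope": "organization",
     "created": date(2023, 5, 2)},
    {"path": "/finance/q3.xlsx", "scope": "anonymous",
     "created": date(2024, 6, 1)},
]

for link in stale_anonymous_links(report, today=date(2024, 7, 1)):
    print("STALE ANONYMOUS LINK:", link["path"])
```

A link created for a one-off client exchange eighteen months ago is still a live, unauthenticated door into the document it points at; the review question is simply whether anyone has counted those doors recently.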
None of these reviews require exotic tooling or specialist knowledge that is unavailable to most organisations. They require time, independence, and a willingness to challenge configurations that feel familiar. The most common finding in every environment I assess is not a sophisticated attack that has evaded detection. It is a misconfiguration that has been present, quietly, for months or years, often predating the most recent deployment project entirely.
The breach did not happen because the tools were not good enough. It happened because nobody checked whether the tools were doing what everyone assumed they were doing.
If you are an IT Director or senior technology leader at a professional services firm, this is probably not a comfortable article to read. The implication is that the security investment your organisation has made may be delivering less than you believe, not because anything has been done wrong, but because the gap between what your controls are supposed to do and what they actually do has never been independently tested.
The good news is that the gap, when it exists, is almost always closable. The issues that emerge from a structured assessment are rarely catastrophic in themselves; they become catastrophic when they are exploited. Finding and addressing them before that point is both technically straightforward and commercially rational.
The investment required to do that verification is a fraction of the annual licence spend it is designed to protect. The risk of not doing it is, as the data makes clear, not theoretical.
If this resonates, I would be glad to have the conversation. The first step is usually just an honest look at what is actually in place.
Trusted Microsoft Cloud Security Advisor with 27 years’ experience | Empowering Businesses to Embrace Cloud Innovation with Confidence
