Recent headlines involving major professional services firms are easy to read as isolated restructuring stories. One firm exits a major line of federal audit work. Another trims advisory roles. Partner ranks are reduced. Teams are redeployed. Contracts shift. Budgets tighten.
But taken together, these stories may point to something deeper than a normal market correction. They may signal a reset in what clients are willing to pay for, tolerate, and trust.
In some cases, large firms have stepped back from major public-sector audit engagements following sustained delivery challenges, while government organizations themselves are working to streamline fragmented oversight models and improve accountability.
The specific details matter. But the larger signal may matter even more. Clients are becoming less tolerant of fragmented delivery, unclear accountability, and expensive layers that do not translate into measurable outcomes.
This is not simply a story about one firm, one contract, or one sector. It is a sign that professional services may be entering a new era—one shaped by AI, auditability, governance, and a higher expectation for real delivery.
A Model Built on Access and Scale
For decades, large-scale professional services depended on reputation, access, partner relationships, methodology, and the ability to coordinate complex work through layered teams.
That model created real value.
Large firms brought structure where clients lacked it. They provided trusted advice, specialized expertise, large delivery capacity, and the credibility needed to support major business, technology, and public-sector initiatives.
But the same model also created distance.
Distance between advice and execution.
Distance between leadership and delivery teams.
Distance between strategy and measurable outcomes.
Distance between the people selling the work and the people actually doing it.
In stable markets, that distance could be absorbed. In growth markets, it could even be hidden.
But in a more constrained economy, with tighter budgets, greater scrutiny, and faster technology cycles, distance becomes harder to justify.
Clients are not necessarily rejecting consulting.
They are rejecting unclear value.
Fewer Layers, Clearer Accountability
The market appears to be forcing a higher standard: less tolerance for top-heavy structures, and more demand for people who can connect governance, technology, delivery, and measurable outcomes.
That is an important distinction.
This is not only about cost-cutting. It is about confidence.
Organizations want to know whether the work they are funding actually improves delivery, decision-making, risk control, operational performance, or compliance posture.
They want fewer fragmented layers.
They want clearer accountability.
They want better outcomes.
And increasingly, they want the people advising them to understand the systems, data, workflows, risks, and delivery mechanics behind the recommendation.
That is where the professional services model begins to change.
Agentic AI Changes the Baseline
AI is often discussed as a broad technology wave, but the more important development may be the rise of agentic AI.
Traditional AI tools answer questions, summarize information, generate drafts, or assist with analysis.
Agentic AI goes further. It can plan, use tools, interact with systems, take steps toward a defined outcome, check results, and participate more directly in workflows.
Anthropic describes Claude Code, for example, as an agentic coding tool that can read a codebase, make changes across files, run tests, and commit code. Anthropic has also described effective agents as systems often built from simple, composable patterns rather than unnecessarily complex frameworks.
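For readers who want a concrete picture of what "agentic" means, the plan-act-check cycle can be sketched in a few lines of Python. This is a deliberately simplified toy, not Claude Code or any real framework; every function name here (run_tool, plan_next_step, goal_met) is an illustrative stand-in.

```python
# Toy sketch of an agentic loop: plan a step, act through a tool,
# check the result, and stop when the goal is met.
# All names are hypothetical stand-ins, not a real agent framework.

def run_tool(step):
    """Stand-in for a real tool call (an API request, file edit, test run)."""
    return {"step": step, "ok": True}

def plan_next_step(results):
    """Stand-in planner: decide the next step from work done so far."""
    return f"step-{len(results) + 1}"

def goal_met(results, target_steps):
    """Stand-in success check against a defined outcome."""
    return len(results) >= target_steps

def agent_loop(target_steps, max_iterations=10):
    results = []
    for _ in range(max_iterations):      # bounded loop: automation with limits
        if goal_met(results, target_steps):
            break                        # check: stop once the outcome is reached
        step = plan_next_step(results)   # plan
        outcome = run_tool(step)         # act
        if outcome["ok"]:                # verify before accepting the result
            results.append(outcome)
    return results

print(len(agent_loop(3)))  # 3
```

The point of the sketch is the shape, not the code: the loop plans, acts, and verifies on its own, which is exactly the workflow-level activity that used to require a coordinating human.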
This matters because agentic AI compresses the distance between analysis and execution.
When AI can summarize, draft, analyze, classify, reconcile, generate code, test outputs, support workflows, and assist with decision preparation, the human value layer must move higher.
This does not eliminate the need for professionals.
It eliminates the illusion that coordination alone is expertise.
If a consultant’s value is mostly meetings, status updates, document movement, handoffs, and control of information, AI will compress that value.
But if the consultant brings judgment, architecture, governance, risk framing, delivery control, client trust, and systems integration, AI becomes an amplifier.
AI exposes weak consulting.
AI amplifies disciplined operators.
Governance Is What Makes This Serious
The future of AI in professional services is not uncontrolled automation.
It is governed automation.
This is where standards such as ISO/IEC 42001:2023 become important.
ISO describes ISO/IEC 42001 as an AI management system standard designed to help organizations manage AI risks and opportunities, including areas such as transparency, ethical considerations, and continuous improvement.
That framing matters.
The serious question is not simply, “Can we use AI?”
The better questions are:
- Can we govern it?
- Can we explain it?
- Can we monitor it?
- Can we audit it?
- Can we control its risks?
- Can we connect it to measurable business outcomes?
AI adoption without governance creates noise.
AI adoption with governance creates capability.
This is where the next professional services standard begins to emerge.
Organizations will need help designing operating models where AI is not just deployed, but managed.
Where outputs are reviewed. Where risks are classified. Where human accountability remains clear. Where data boundaries are respected. Where decision-making remains traceable. Where automation supports better work rather than simply accelerating poor process.
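The governance pattern described above can also be made concrete. The sketch below shows an AI output passing through explicit gates: a risk classification, a human review for higher-risk outputs, and a logged decision trail. It is a minimal illustration under assumed names (classify_risk, governed_output), not a real governance product or the ISO/IEC 42001 control set.

```python
# Minimal sketch of "governed automation": an AI-generated output passes
# through classification, human review, and logging before it is used.
# All function names are illustrative assumptions, not a real framework.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

def classify_risk(output):
    """Stand-in risk classifier: here, long outputs are treated as 'high' risk."""
    return "high" if len(output) > 80 else "low"

def governed_output(output, human_reviewer):
    """Gate an AI output: classify, log, and require human approval when risky."""
    risk = classify_risk(output)
    log.info("AI output received; risk=%s", risk)  # traceable decision trail
    if risk == "high":
        approved = human_reviewer(output)          # human accountability stays clear
        if not approved:
            log.info("Output rejected by reviewer")
            return None
    return output

# Low-risk outputs pass through; high-risk outputs need explicit approval.
result = governed_output("Draft summary of Q3 risks.", human_reviewer=lambda o: True)
print(result is not None)
```

Even at this toy scale, the difference between deployed and managed AI is visible: the output never reaches downstream use without a classified risk, a logged decision, and a named point of human accountability.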
That is not hype.
That is management discipline.
The Rise of the Hybrid Operator
This shift creates space for a different kind of professional.
The technically fluent, governance-aware, delivery-minded operator.
Not simply a technologist.
Not simply a project manager.
Not simply an advisor.
Not simply a process owner.
But someone who can understand how systems work, how decisions are made, how risks are controlled, and how outcomes are delivered.
This profile draws from a range of cross-functional capabilities:
- business needs
- technical architecture
- governance
- project delivery
- risk management
- data
- AI enablement
- executive reporting
- auditability
That combination is becoming more valuable.
As AI takes on more workflow-level activity, the premium shifts toward people who can frame the work, structure the environment, interpret the outputs, govern the risks, and maintain accountability.
The future professional services advantage may not belong only to the largest intermediary.
It may belong to the clearest operator.
A Founder’s Checkpoint
From my own work building PMO control environments, exploring Microsoft Power Platform governance, and working across the realities of delivery, systems, and executive reporting, I see this moment as a meaningful reset.
This perspective did not come from theory alone. It comes from years of working inside complex programs, often without the work being formally named this way, where schedules, resources, risks, stakeholders, data, governance, and delivery pressure all had to be brought into clearer alignment.
Over time, that work has required a cross-functional skillset: understanding business needs, technical environments, reporting structures, governance expectations, and the human realities of execution.
The opportunity now is not to chase every new AI tool.
The opportunity is to understand how AI changes execution—and then build accountable operating models around it.
A PMO control environment is not just a dashboard.
A governance layer is not just a policy.
A delivery model is not just a meeting cadence.
An AI strategy is not just access to a chatbot.
The real work is integration.
How do schedules, resources, risks, issues, approvals, telemetry, workflows, executive reporting, and AI-enabled assistance connect into one coherent system of delivery?
How does an organization know what is happening?
How does leadership trust the information?
How are exceptions surfaced?
How are decisions supported?
How are AI-generated outputs reviewed?
How is accountability preserved?
These are not abstract questions.
They are quickly becoming operational questions.
A Higher Standard, Not a Collapse
This is not a doom story for consulting.
It is a standards story.
The firms, teams, and professionals who adapt will not be the ones who merely use AI. They will be the ones who can govern it, explain it, measure it, and connect it to real delivery.
The next professional services standard may be less about size and more about clarity.
Less about layers and more about accountability.
Less about controlling the process and more about proving the work.
For organizations, this may mean demanding more precise value from their advisors.
For professionals, it may mean moving beyond narrow role definitions and developing the ability to connect systems, governance, delivery, and judgment.
For smaller firms and independent operators, it may create a real opening.
Not because scale no longer matters.
But because clarity, trust, and execution now matter more.
AI will not remove the need for human expertise.
But it will make weak expertise harder to hide.
And for disciplined operators willing to learn, adapt, and build with accountability, that may be the beginning of a much stronger professional services era.
____________________________________________________________________________________________________________________________
Note: This reflection draws on recent public reporting regarding changes within large professional services firms, as well as emerging AI governance standards such as ISO/IEC 42001.
____________________________________________________________________________________________________________________________