A reflection on expertise, responsibility, and the role of artificial intelligence in modern work
Over the past two years, artificial intelligence has moved rapidly from curiosity to expectation in many professional environments. What began as a new set of experimental tools has quickly become embedded in everyday workflows. In some organizations, AI adoption is now actively encouraged — and in some cases even linked to professional development or promotion.
This shift is understandable. AI systems are remarkably capable. They can summarize large volumes of information, draft communications, explore technical concepts, and help accelerate research and documentation. For many professionals, these tools have become a valuable extension of their daily work.
As someone who has spent decades working in technology and systems integration, I find this transformation fascinating. I use AI tools regularly myself — often as part of research, writing, technical exploration, and even the development of complex proofs of concept. In some cases, these collaborations have resulted in working demonstrations of PMO reporting environments built using tools like Microsoft Project, Dataverse, and Power BI.
When used thoughtfully, AI can dramatically expand the speed at which ideas are explored and refined. But as these tools become more common, an important distinction is beginning to emerge.
There is a difference between thinking with AI and letting AI think for you.
At first glance, the outputs may look similar. Both approaches can produce polished text, structured ideas, and seemingly intelligent explanations. But beneath the surface, the difference is significant.
Thinking with AI is collaborative. It is iterative. It involves questioning, refining, challenging assumptions, and bringing one’s own knowledge and experience into the process. In this mode, AI becomes something like a research partner — a tool that helps explore possibilities and surface perspectives that can then be evaluated and refined through human judgment.
In practice, this collaboration can become surprisingly dynamic. There are moments where the interaction resembles a debate more than a simple prompt-response exchange. Sometimes the system challenges assumptions; other times the human operator does. The result is not a finished answer delivered instantly, but a back-and-forth process that gradually refines the outcome.
Letting AI think for you, however, is something else entirely. In that mode, AI becomes a substitute rather than a collaborator. The tool produces answers, and the user accepts them with minimal scrutiny. The appearance of insight is generated quickly, but the deeper process of reasoning, verification, and synthesis is bypassed.
The difference between these two approaches is not always obvious at first. In fact, one of the most striking features of modern AI systems is how convincingly they can produce structured, articulate output. A well-constructed prompt can generate an impressive document, a persuasive argument, or a detailed explanation within seconds.
But professional work rarely ends at the level of a well-written paragraph.
In real environments — particularly in complex technical or operational systems — ideas must withstand reality. Systems must integrate. Projects must deliver. Assumptions must hold under pressure. Dependencies must be understood and managed. And when things fail, someone must diagnose the cause and determine how to fix it.
These environments reveal a simple truth: polished output is not the same as expertise.
Throughout my career I have been fortunate to work across hundreds of projects, including large-scale system integrations and programs involving substantial operational budgets. What those environments teach you very quickly is that technical work is rarely generic. Every system, every organization, and every implementation carries its own specific constraints.
When problems emerge — and they inevitably do — responding effectively requires something deeper than theoretical knowledge. It requires experience, pattern recognition, and the ability to diagnose issues under pressure. Over time, these instincts develop through repetition and exposure to real-world complexity.
This is why experience still matters.
In fields like systems integration, infrastructure delivery, program governance, or large-scale project management, the difference between theory and practice becomes visible quickly. Real systems contain edge cases, hidden dependencies, legacy constraints, and operational realities that cannot be resolved through surface-level analysis alone.
AI can assist with these environments, but it cannot replace the lived experience required to navigate them responsibly.
That is not a criticism of the technology. In fact, it highlights one of the most interesting aspects of the current moment.
Artificial intelligence does not eliminate the need for expertise. In many ways, it amplifies it.
Professionals who already possess deep knowledge and judgment can use AI tools to explore ideas more quickly, test assumptions, and accelerate certain forms of analysis. For them, AI becomes an amplifier — increasing the speed and reach of their thinking.
For those without that foundation, however, the situation is different. AI can make it easier to produce outputs that appear sophisticated, but the underlying reasoning may remain shallow. Over time, this creates a growing gap between synthetic competence and genuine capability.
In the short term, this distinction can be difficult to detect. AI-generated content can be polished and persuasive. But in environments where decisions have consequences — where systems must function and projects must succeed — the difference eventually becomes clear. The deeper question, then, is not whether professionals should use AI. The question is how they use it.
For my own work, the answer is relatively simple. AI is a tool that supports thinking, not a replacement for it. It can assist in research, explore different perspectives, and help refine ideas. But responsibility for judgment, accuracy, and final decisions always remains human.
In practice, that means treating AI outputs as inputs, not conclusions.
It means verifying assumptions, cross-checking information, and applying real-world context to what the model produces. It means remaining engaged in the thinking process rather than delegating it entirely to a machine.
This approach may be slower than simply accepting whatever an AI system produces. But it preserves something that remains essential to professional work: accountability.
Technology has always changed the way we work. From early computing systems to cloud infrastructure to modern automation platforms, each wave of innovation has expanded what professionals can do.
Artificial intelligence is simply the latest step in that evolution.
But tools do not eliminate responsibility. They change how responsibility is exercised.
The organizations that benefit most from AI will likely be those that recognize this distinction. Encouraging employees to explore new tools is valuable. But the goal should not be the production of more output for its own sake. The goal should be better thinking, better decisions, and stronger outcomes.
Artificial intelligence can contribute meaningfully to that process. Used thoughtfully, it can accelerate learning, improve communication, and support complex analysis.
But the real value will always come from the people who use it.
The future of professional work will not belong to those who avoid AI, nor to those who rely on it blindly.
It will belong to those who understand both its power and its limits — and who continue to practice the difficult but essential discipline of thinking.
Because the real risk with artificial intelligence is not that machines will replace human thinking.
The real risk is that some people may stop practicing it.
________________________________________
For readers interested in how this collaborative approach works in practice, I’ve included a short demonstration of a PMO reporting environment built using Microsoft Project, Dataverse, and Power BI.
________________________________________