Insights
February 12, 2026

Organisational Inertia and the Quiet Inversion of Purpose

A thoughtful exploration of organisational inertia, the quiet erosion of purpose at work, and why AI is revealing how far many institutions have moved from what they were created to do.

At their simplest, organisations exist to do one thing: to coordinate human effort in service of a purpose that individuals cannot achieve alone. Whether that purpose is building a product, serving a customer, solving a problem, or creating value in the world, the organisation is meant to be a means to an end, a temporary structure that exists to make that work possible.

Over time, however, many organisations lose this orientation. The structure that was originally created to support the work begins to take on a life of its own. Processes are added to manage risk, roles are introduced to manage process, and policies emerge to manage roles. Gradually, the organisation’s attention shifts inward. What began as a vehicle for purpose becomes a system primarily concerned with its own continuity, coherence, and internal order.

In this state, the organisation no longer experiences itself as a tool, but as the thing to be protected. Decisions are increasingly made in service of internal stability rather than external value. Activity becomes a proxy for contribution. Compliance begins to matter more than impact. The organisation still speaks the language of mission and purpose, but in practice much of its energy is spent managing itself.

This drift is rarely intentional. It is the predictable outcome of scale, longevity, and accumulated structure. But once it takes hold, it reshapes behaviour throughout the system: what is rewarded, what is discouraged, and what kinds of people are allowed to thrive.

What follows is an examination of how that drift plays out in practice, why it has become so widespread, and why new technologies like AI are revealing just how far many organisations have moved from the reason they exist at all.

How the Drift Shows Up in Practice

Once organisations become inwardly focused, the shift is rarely dramatic. It shows up gradually, through small, repeatable patterns: in what is rewarded, what is discouraged, and what quietly becomes risky. Over time, these patterns shape behaviour far more powerfully than any stated values or mission statements.

A common signal is how customer-centric, proactive behaviour is received. In many organisations, individuals who prioritise outcomes, delivery, and external value creation are increasingly treated as disruptive rather than valuable. Not because they fail to perform, but because their focus collides with systems designed primarily to manage risk, compliance, and internal stability.

At the same time, it is common to see ownership of internal process, administration, policy, and governance rewarded more reliably than contribution to products, revenue, or growth. Advancement becomes less closely tied to value creation and more closely aligned with protecting and navigating the internal system. The message is rarely explicit, but it becomes clear over time: challenging the organisation in service of the outside world carries more risk than maintaining internal order.

These dynamics are not anecdotal. Research in organisational psychology has long shown that in mature organisations, political skill and internal alignment often predict career progression more strongly than objective performance. When incentives reward stability, conformity, and risk avoidance, behaviour follows.

Over time, these patterns do more than shape careers. They reshape how people relate to their work.

What begins as a structural drift eventually becomes a human one.

The Human Cost

The consequences of organisational drift are felt most clearly in how people experience their working lives.

As organisations turn inward, the connection between effort and impact weakens. Employees no longer experience a reliable relationship between the quality of their contribution and the outcomes that matter. In such environments, behaviour adapts. People optimise less for value creation and more for survival within the system.

This pattern is reflected clearly in engagement data. Long-running global research from Gallup consistently finds that only around 20–21% of employees worldwide are meaningfully engaged at work, with the majority either disengaged or actively disconnected. Gallup estimates that low engagement now costs the global economy hundreds of billions of dollars annually in lost productivity, a figure that has deteriorated rather than improved in recent years.

Crucially, this disengagement is not best understood as apathy or poor work ethic. Gallup’s longitudinal analysis shows that engagement declines most sharply in environments where employees experience low agency, weak trust in leadership, and a poor connection between their daily work and meaningful outcomes. In other words, disengagement is often a rational response to systems that no longer reward impact.

Research in organisational behaviour reinforces this interpretation. Studies examining career progression in complex organisations consistently find that when political alignment and risk avoidance are more predictive of advancement than performance, discretionary effort declines. Employees narrow their scope of responsibility, retreat into role-defined activity, and focus on completing tasks rather than solving problems.

The psychological consequences are well documented. High levels of bureaucracy and role ambiguity correlate with increased burnout, stress, and presenteeism. Psychological safety erodes not through overt hostility, but through repeated signals that questioning how work is organised carries personal risk. Over time, curiosity gives way to caution, and initiative declines.

Importantly, this dynamic disproportionately affects high-capability individuals. Evidence suggests that employees with strong intrinsic motivation and a desire for impact disengage more quickly when they encounter persistent structural friction. When effort is met with resistance rather than reinforcement, motivation erodes fastest among those who care most about the quality of their work.

The result is a quiet but significant loss of organisational capacity. Talent remains employed, but its energy is constrained. Creativity diminishes. Improvement slows. What appears from the outside as stability is often a system gradually exhausting its own human potential.

This cost rarely appears on balance sheets. It accumulates slowly, in unchallenged assumptions, deferred improvements, lost ideas, and people who stop offering more than the minimum required. By the time it becomes visible, the organisation has often already paid a far higher price than it realises.

Why AI Exposes the Problem

It’s tempting to frame AI as a question about the future, about new capabilities, new risks, or new forms of intelligence. In practice, AI’s most immediate impact on organisations has little to do with what it can do, and much more to do with what it reveals.

Even the idea of AI has become destabilising.

The reason is simple. AI challenges some of the foundational assumptions on which modern organisations have been built: that complexity is necessary, that coordination requires layers, and that large numbers of people must be involved to produce, analyse, and interpret information at scale. When those assumptions are quietly undermined, the organisational structures built on top of them begin to look fragile.

This is why AI so often fails to deliver on its promised benefits inside established organisations. Research from MIT Sloan consistently shows that AI adoption only produces meaningful performance gains when organisations redesign workflows, incentives, and decision-making authority alongside the technology. Where structures remain unchanged, AI is absorbed into existing processes rather than transforming them.

In other words, AI does not override organisational inertia. It conforms to it.

What makes this moment different is that AI does not merely automate tasks. It collapses the cost of activities that once justified entire layers of organisational effort: drafting, summarising, analysing, reporting, and synthesising information. When those activities become inexpensive and widely accessible, the question shifts from “Can this be done?” to “Why does this structure exist at all?”

This is where discomfort sets in.

AI surfaces redundancy. It highlights contradiction. It exposes how much work exists to maintain internal systems rather than deliver external value. In that sense, AI functions less like a solution and more like a mirror, reflecting back the reality of how organisations actually operate.

Research from Oxford University has shown for nearly a decade that a significant proportion of routine cognitive work is technically automatable using existing technologies. AI does not introduce this possibility; it removes the remaining ambiguity. It forces organisations to confront how much of their activity is rooted in habit, risk avoidance, or historical compromise rather than necessity.

This helps explain why AI initiatives so often stall at the edges of organisations. Governance frameworks expand. Ethics committees proliferate. Pilot programmes multiply without scaling. These responses are not irrational; they are defensive. They reflect systems attempting to protect themselves from a technology that quietly questions their internal logic.

Importantly, AI also destabilises identity and authority. For decades, seniority in organisations has been tied to control over information: who produces it, who interprets it, who turns it into narrative. AI weakens that monopoly. When analysis and synthesis become widely available, positional authority is harder to justify on informational grounds alone.

This is why the mere presence of AI can feel threatening, even before it is meaningfully deployed. It reveals how far many organisations have drifted from their original purpose (coordinating human effort to create value) and how much of their energy is now spent maintaining the structures that surround that work.

AI does not cause this problem. It illuminates it.

Organisations that treat AI purely as a tool for efficiency or cost reduction often miss this entirely. Without confronting the underlying drift (the inward focus, the bloated processes, the misaligned incentives) AI simply accelerates existing patterns. The mirror becomes sharper, but the reflection does not change.

Why Legacy Organisations Are at Risk

The vulnerability facing many legacy organisations is not a lack of access to AI, capital, or talent. It is structural. These organisations are built on assumptions that no longer hold, about scale, coordination, and the amount of human effort required to produce value.

Most large organisations operating today were shaped in an era where information was scarce, coordination was expensive, and growth demanded hierarchy. Layers of management, reporting, and internal control were not inefficiencies; they were enabling mechanisms. Over time, those structures became normalised, then institutionalised. Complexity came to be interpreted as maturity.

Research suggests this inheritance now carries a measurable cost. Studies from McKinsey Global Institute estimate that 40–60% of time in large organisations is spent on non-value-adding activities such as internal coordination, reporting, and compliance. These costs compound as organisations grow, making them slower and less adaptive over time.

AI changes the economics underlying these structures.

When the cost of analysis, synthesis, documentation, and coordination collapses, many of the activities that once justified entire organisational layers lose their rationale. Tasks that previously required teams, committees, and approval chains can now be completed with far fewer people and far less friction. This does not automatically lead to organisational change, but it does expose how much complexity exists by default rather than necessity.

Empirical research supports this. Studies from MIT Sloan show that organisations adopting AI without redesigning workflows, decision rights, and incentives see limited or no productivity gains. Where AI is layered onto existing bureaucracy, it tends to amplify inertia rather than overcome it. In effect, AI conforms to structure, unless structure is intentionally changed.

In contrast, a growing number of organisations are being built with very different assumptions.

Companies such as Shopify have publicly stated that they now treat AI as a default capability rather than a specialised function. Shopify’s leadership has explicitly argued that new headcount should be justified only after teams have demonstrated that AI cannot already perform the work. The implication is not cost-cutting for its own sake, but organisational discipline: clarity about what genuinely requires human judgment and what does not.

Similarly, Klarna has reported that AI systems now handle a substantial proportion of customer service interactions, allowing the company to reduce operational complexity while maintaining service quality. The significance here is not the technology itself, but the organisational consequence: fewer layers, faster feedback loops, and clearer accountability.

In the software and infrastructure space, companies such as Stripe and OpenAI operate with comparatively lean structures relative to their impact, relying on small, highly autonomous teams supported by extensive automation and tooling. Decision-making authority is pushed downward, coordination overhead is minimised, and value creation is tightly coupled to output rather than internal activity.

These organisations are not “winning” because AI is superhuman. They are winning because AI encourages simplicity.

This creates a widening structural gap.

Legacy organisations tend to respond to AI by increasing governance, oversight, and risk management. New committees form. Pilot programmes multiply. Ethical frameworks and approval processes expand. Each response is individually rational, but collectively they reinforce the inward focus that already limits adaptability.

AI-first organisations respond differently. They reduce friction. They collapse roles. They remove unnecessary coordination. They design around small teams, fast cycles, and direct accountability. Where legacy organisations add structure to manage uncertainty, AI-native organisations remove structure to reduce it.

Over time, this divergence compounds.

The risk to legacy organisations is therefore not sudden disruption, but gradual displacement. Market share erodes at the margins. Talent migrates toward environments where effort and impact remain connected. Innovation continues, but becomes increasingly performative: visible internally, marginal externally.

By the time this is recognised, it is often framed as a technology gap. In reality, it is a design gap.

AI rewards organisations that can re-orient around value creation, simplify relentlessly, and reconnect structure to purpose. It penalises those that mistake internal coherence for effectiveness.

For many legacy organisations, the danger is not collapse but irrelevance, arrived at slowly and defended vigorously until it is too late.

How Some Organisations Are Operating Differently

If organisational inertia is structural rather than moral, then renewal is less about motivation and more about design. The organisations that appear to be coping best with AI-driven change are not those with the most ambitious transformation programmes, but those that have quietly reoriented how work is organised, how decisions are made, and what complexity they are willing to tolerate.

Several patterns stand out.

1. They treat structure as provisional, not sacred

In many legacy organisations, structure hardens over time. Roles, processes, and reporting lines persist long after the conditions that created them have changed. By contrast, organisations that remain adaptive treat structure as something to be revised regularly, a means to an end, not an identity.

Shopify has been explicit about this posture. Its leadership has described organisational design as something that should be revisited continually in light of changing tools and constraints. Rather than assuming headcount growth as a default response to new work, teams are expected to demonstrate why additional structure is required once AI and automation have been fully exploited. The effect is not simply efficiency, but clarity: fewer roles exist to defend inherited processes.

The lesson here is not “hire fewer people”, but avoid allowing structure to outlive its purpose.

2. They collapse the distance between decision and consequence

In inward-looking organisations, decisions are often made far from their effects. Layers of approval, coordination, and risk management dilute accountability. Work becomes safe but slow.

Organisations operating differently push decision-making closer to the work itself.

Stripe is frequently cited for its emphasis on small, autonomous teams with clear ownership. Rather than relying on heavy cross-functional governance, teams are given broad decision rights alongside clear expectations about outcomes. This reduces coordination overhead and makes failure visible quickly, rather than diffused across committees.

AI amplifies the benefits of this model. When analysis and synthesis are cheap, the limiting factor becomes judgment, and judgment improves when responsibility is clearly owned.

3. They distinguish clearly between judgment and automation

A common failure mode in AI adoption is to either over-automate indiscriminately or to protect human roles wholesale. More effective organisations are precise about what genuinely requires human judgment, and ruthless about automating the rest.

Klarna has publicly described how large portions of customer service and operational work are now handled by AI systems, allowing human teams to focus on exception handling, relationship management, and higher-order problem solving. The value here is not cost reduction alone, but role clarity. Humans are no longer maintaining the appearance of necessity in work that technology can already perform.

This precision matters. It reduces internal anxiety and prevents AI from becoming a symbolic overlay rather than a structural change.

4. They measure value creation, not internal activity

In many organisations, what gets measured is what is easiest to observe: meetings attended, documents produced, processes followed. These metrics reinforce inward focus.

Organisations that remain effective tend to anchor measurement more directly to external outcomes (customer satisfaction, cycle time, quality, and learning velocity) even when these are harder to capture.

Research from MIT Sloan shows that organisations deriving real productivity gains from AI tend to pair technology adoption with changes to performance measurement and incentives. Where internal activity remains the primary signal of value, AI has little effect. Where outcomes matter, behaviour adapts quickly.

5. They resist the urge to add governance as a reflex

AI introduces uncertainty, and uncertainty triggers defensive behaviour. Legacy organisations often respond by adding layers of oversight, approval, and policy. Individually, these measures feel prudent. Collectively, they reintroduce the very inertia AI might have helped to remove.

Organisations operating differently take a more restrained approach. They establish clear ethical and operational boundaries, but avoid turning uncertainty into permanent bureaucracy. Governance is treated as a constraint to be minimised, not a structure to be expanded.

This requires confidence, and a tolerance for discomfort. But it preserves adaptability.

What These Examples Have in Common

None of these organisations are perfect. None are immune to drift. And none of these approaches are easy to transplant wholesale.

What they share is not a love of AI, but a rejection of unnecessary complexity.

They are willing to:

  • Question inherited structures
  • Remove roles that exist primarily to manage other roles
  • Accept short-term discomfort in exchange for long-term coherence
  • Reconnect work to purpose rather than process

They do not treat AI as a strategy. They treat it as a forcing function, one that makes it harder to justify organisational forms that no longer serve their original purpose.

Closing Reflection

Most organisations do not fail because they lack intelligence, effort, or technology. They fail because, over time, they forget what they are for.

The drift described here is not the result of bad leadership or poor intent. It is a structural tendency that emerges when organisations grow, persist, and begin to optimise for their own continuity. Process accumulates. Roles harden. Incentives shift. Gradually, the organisation becomes more real than the work it exists to support.

AI does not introduce this problem. It makes it visible.

By lowering the cost of analysis, coordination, and production, AI removes many of the historical justifications for organisational complexity. In doing so, it exposes how much activity exists to sustain internal systems rather than create external value. This is uncomfortable, particularly for organisations whose identity and authority are bound up in those systems.

The risk for legacy organisations is not that they will be overtaken overnight. It is that they will remain active, busy, and internally coherent while becoming increasingly irrelevant to the world beyond them. Inertia rarely announces itself as failure. It presents as stability, professionalism, and control, until it quietly erodes competitiveness, meaning, and trust.

Yet this moment also creates a genuine choice.

Some organisations will continue to protect structure over purpose, adding layers to manage uncertainty and mistaking internal order for effectiveness. Others will take the more difficult path: questioning inherited assumptions, reducing unnecessary complexity, and reconnecting human effort to the value they exist to create.

That path is not easy. It involves discomfort, loss of status for some roles, and a willingness to dismantle systems that once felt essential. But it also restores something fundamental: clarity about why the organisation exists, and how people inside it can contribute meaningfully to that purpose.

AI will continue to improve. But the decisive factor will not be the sophistication of the technology. It will be whether organisations are willing to confront what the mirror is already showing them.

In the end, this is not a story about artificial intelligence.

It is a story about organisations deciding whether they exist to manage themselves, or to serve the world they were created for.


If This Resonates
Let’s Talk.

If you’re facing decisions that don’t fit neatly into a plan, I may be able to help.
Josh Hunt
Fractional Marketing Leader
I work with leadership teams facing complex decisions and moments of change.
Start a Conversation