I've been thinking about why so many discussions about AI in the enterprise feel oddly familiar, and I keep returning to an observation that initially seems tangential: we're essentially re-running the management theory debates of the 20th century, but at software speed.
This isn't immediately obvious, but bear with me—because understanding this parallel helps explain both the genuine transformation AI represents and the specific anxieties it's triggering in organizations today.
The Scientific Management Moment
Let's start with Frederick Taylor. In the early 1900s, Taylor's "Scientific Management" promised to revolutionize productivity through systematic analysis of work processes. His famous time studies at Bethlehem Steel (determining the optimal size of a shovel, the ideal arc of a worker's swing) were the original "there's a science to this" moment. Taylor's insight was that knowledge about how to do work efficiently could be extracted from experienced workers, systematized, and then redistributed as standardized procedures.
The Taylorist promise was compelling: management could observe work, codify best practices, and then scale those practices across the organization independent of individual worker expertise. This was fundamentally about knowledge transfer—taking tacit expertise and making it explicit and replicable.
What strikes me is how precisely this maps onto what we're attempting with AI today. Large language models are, in effect, performing a Taylorist analysis at unprecedented scale: observing how knowledge work gets done (through training on vast corpora of text), identifying patterns, and then offering to execute those patterns on demand. When we prompt an AI to "write a marketing email in the style of a SaaS company" or "analyze this data like a management consultant would," we're essentially asking it to apply the systematized knowledge it extracted from observing millions of examples.
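To make that concrete, here is a minimal sketch of what "asking the model to apply systematized knowledge" looks like in practice. It uses the OpenAI Python client purely for illustration; the model name and prompt text are placeholders, not recommendations:

```python
# Minimal sketch of role-conditioned prompting: we ask the model to
# reproduce a pattern it extracted from millions of observed examples.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        # The system message is, in effect, a Taylorist job card:
        # "do this task the way the observed experts did it."
        {"role": "system",
         "content": "You are a marketing copywriter at a B2B SaaS company."},
        {"role": "user",
         "content": "Write a short launch email for our new analytics feature."},
    ],
)

print(response.choices[0].message.content)
```

The system message is doing the Taylorist work here: it names an extracted pattern ("SaaS marketing copywriter") and asks the model to execute it on demand.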
The parallel goes deeper. Taylor's critics—and there were many, including labor unions who correctly saw deskilling threats—argued that Scientific Management reduced workers to interchangeable parts, stripped away autonomy, and ignored the tacit knowledge that couldn't be captured in a stopwatch study. Sound familiar?
The Knowledge Work Inflection
But here's where the history gets more interesting, because management theory didn't end with Taylor. Peter Drucker's articulation of the "knowledge worker" in the 1950s represented a fundamental challenge to Taylorism. Drucker observed that an increasing share of workers were doing jobs where the work itself was non-routine and required judgment, creativity, and expertise that couldn't easily be reduced to standardized procedures.
The management challenge shifted from "extract and standardize knowledge" to "how do we enable knowledge workers to be productive when we can't fully specify what they're doing?" This led to decades of theory about autonomy, intrinsic motivation (Dan Pink), learning organizations (Peter Senge), and flatter hierarchies. The implicit assumption was that meaningful knowledge work was, by its nature, resistant to the kind of systematization Taylor championed.
This is the assumption AI is now stress-testing.
What's critical to understand is that AI isn't just a more efficient way to do Taylorism—it's revealing which knowledge work was always more systematizable than we wanted to admit, and which truly requires the human judgment Drucker emphasized.
When an AI can draft a competent legal memo, write serviceable code, or analyze a financial statement, it's not that the AI has achieved human-level reasoning; it's that we're discovering these tasks involve more pattern-matching and less irreducible expertise than we'd convinced ourselves they did. Tasks that seemed to require years of training and judgment turn out to have been more algorithmic than we realized.
The Consulting Model as Intermediary
To understand why this matters strategically, it's worth examining how management consulting firms—McKinsey, BCG, Bain—have historically operated, because they represent a kind of intermediate case that illuminates both the old model and where AI fits.
Consulting firms essentially industrialized Drucker-era knowledge work. They couldn't fully Taylorize strategy or transformation (every client situation is genuinely different), but they developed frameworks (the 2x2 matrix, value chain analysis, the five forces) that allowed them to apply patterned thinking across varied contexts. They built "intellectual property" that was really a set of systematized approaches to problems that seemed bespoke.
The consulting model worked because:
Most business problems fall into recognizable categories
Firms could capture learnings from thousands of engagements
Smart generalists with frameworks could add value even without deep domain expertise
Clients paid for both the analysis and the external validation
What's fascinating is that this is exactly what large language models do, but without the $500/hour analysts. An LLM trained on business literature, case studies, and strategic documents can apply McKinsey-style frameworks to a new situation. It can do the pattern-matching that junior consultants spend years learning. (This doesn't mean it replaces senior consultants—more on that in a moment—but it does suggest a significant portion of the consulting value chain was always more mechanistic than the pricing suggested.)
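The mechanistic core is easy to demonstrate: the reusable part of a framework engagement can be captured in a prompt template where only the client situation varies. The sketch below is hypothetical; the template wording and function name are mine, not any firm's actual method:

```python
# Hypothetical sketch: a consulting framework as a reusable prompt template.
# The framework structure is fixed; only the client situation varies,
# which is precisely what made junior-analyst work systematizable.

FIVE_FORCES_TEMPLATE = """\
Analyze the following business situation using Porter's Five Forces.
For each force, give a one-paragraph assessment and a 1-5 intensity score:
1. Threat of new entrants
2. Bargaining power of suppliers
3. Bargaining power of buyers
4. Threat of substitute products or services
5. Rivalry among existing competitors

Situation:
{situation}
"""

def build_framework_prompt(situation: str) -> str:
    """Instantiate the fixed framework with the variable client situation."""
    return FIVE_FORCES_TEMPLATE.format(situation=situation)

print(build_framework_prompt(
    "A regional grocery chain facing entry by a national discount retailer."
))
```

Everything fixed in the template is the systematized layer; everything passed in as the situation is where judgment still has to enter.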
The Current Tension: Augmentation vs. Automation
This brings us to the central tension in AI adoption today, which is fundamentally a question about which model of knowledge work was correct.
The "AI as augmentation" argument—that AI will enhance human knowledge workers rather than replace them—is essentially betting that Drucker was right, that most valuable work requires judgment, creativity, and contextual understanding that AI can support but not replicate. This is the comfortable narrative: AI handles routine tasks, freeing humans for higher-value strategic thinking.
The "AI as automation" argument suggests that much of what we've categorized as knowledge work is actually more Taylorist than we admitted—it's pattern-matching that can be systematized, just at a level of complexity that required human intelligence until now.
I think what's more likely—and more disruptive—is that both are true, but the dividing line is in a different place than most organizations assume. We're going to discover that:
Some roles we thought required deep expertise are largely automatable (much of first-line legal work, routine analysis, straightforward content creation). These look like knowledge work but are actually sophisticated pattern-matching.
Some roles will become more valuable precisely because they can't be automated (genuine creative work, complex judgment calls with incomplete information, work requiring trust and human relationships). These represent irreducible human contribution.
Most roles will be unbundled—with AI handling components that are systematizable, while humans focus on the parts that aren't. This is augmentation, but it may not preserve jobs so much as transform them.
The question for any organization—and this is where management theory becomes immediately practical—is: do we understand which of our activities fall into which category?
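One crude way to start answering it is a task audit: score each activity on a few proxies for systematizability and sort the results into buckets. The rubric below is a toy illustration; the dimensions, scores, and thresholds are assumptions for the sake of the sketch, not a validated instrument:

```python
# Toy task-audit rubric: a crude, illustrative way to sort activities into
# "automate", "augment", or "keep human" buckets. All dimensions and
# thresholds are assumptions, not a validated methodology.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    pattern_based: int       # 1-5: how much the work follows known patterns
    context_dependent: int   # 1-5: how much it hinges on local, tacit context
    relationship_heavy: int  # 1-5: how much it depends on human trust

    def bucket(self) -> str:
        # "Irreducible" weight: context plus relationships, the parts
        # Drucker argued resist systematization.
        irreducible = self.context_dependent + self.relationship_heavy
        if self.pattern_based >= 4 and irreducible <= 4:
            return "automate"
        if self.pattern_based >= 3:
            return "augment (unbundle)"
        return "keep human"

tasks = [
    Task("First-pass contract review", 5, 2, 1),
    Task("Quarterly board narrative", 3, 4, 3),
    Task("Key-account negotiation", 2, 4, 5),
]

for t in tasks:
    print(f"{t.name}: {t.bucket()}")
```

The point isn't the scores themselves but the discipline of asking the question activity by activity rather than role by role, which is exactly what unbundling requires.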
The Strategic Implications
This is where things get strategically interesting, because how companies answer this question reveals their theory about their own competitive advantage.
Consider two approaches:
The Taylorist AI Strategy: Systematically analyze knowledge work, identify repeatable patterns, train AI to handle these patterns, and scale efficiency. This works if your competitive advantage is in execution—doing known things well at scale. Many SaaS companies are taking this approach: use AI to make sales more efficient, customer service more responsive, product development faster. The assumption is that the work itself is largely understood; AI just lets you do more of it, faster.
The Drucker AI Strategy: Use AI to handle baseline competencies, freeing human judgment for where it matters most. This works if your competitive advantage is in genuine insight, creativity, or relationship-based trust. Consulting firms should be taking this approach—let AI do the framework application and initial analysis, while humans focus on the nuanced judgment and client relationships that justify premium pricing.
What's revealing is watching companies try to do both simultaneously, which often exposes confused thinking about what actually drives their value. To be sure, some companies genuinely do compete on both dimensions, but most don't, and AI is forcing an uncomfortable clarity on the question.
The Deeper Question: What Is Work For?
There's a more profound implication here that connects back to management theory in an unexpected way.
The evolution from Scientific Management to knowledge work wasn't just about how we organize work—it was about why we work and what gives work meaning. Drucker's knowledge worker concept was, in part, a response to the alienation of Taylorism. The idea was that if work engaged our minds, used our judgment, and required our expertise, it could be intrinsically meaningful.
AI threatens this bargain, but not in the way most people assume. The threat isn't just that AI might do our jobs—it's that AI might reveal much of what we do isn't actually the irreplaceable knowledge work we thought it was. This is psychologically destabilizing in ways that go beyond economics.
When a lawyer discovers that much of document review is pattern-matching, or a writer realizes that much of content creation follows templates, or an analyst finds that most of their work involves applying standard frameworks—the AI isn't just threatening their job, it's challenging their professional identity.
This is why the AI conversation in organizations is so emotionally charged. We're not just discussing productivity tools; we're confronting questions about the nature of expertise, the value of experience, and what makes work meaningful. These are exactly the questions that animated management theory throughout the 20th century.
Looking Forward: The New Management Challenge
What's emerging is a new management challenge, one that synthesizes the old debates rather than replacing them.
The question isn't "Taylorism or knowledge work?" but rather: "How do we build organizations where AI handles the systematizable components of work while humans focus on the genuinely irreducible aspects—and how do we ensure this division creates value and meaning rather than just extracting it?"
This requires managers to:
Honestly assess which activities are actually systematizable (even if they've been performed by highly trained workers). This is uncomfortable but necessary.
Invest in developing the capabilities that AI can't replicate—which increasingly look like Drucker's original vision: judgment, creativity, contextual understanding, relationship-building.
Redesign work itself around the AI/human interface, rather than just using AI to speed up existing processes. This is the difference between productivity improvement and actual transformation.
Grapple with the meaning question: If AI handles more of the cognitive tasks, what makes knowledge work meaningful? This isn't just an HR concern—it fundamentally affects motivation, retention, and ultimately organizational capability.
The AI Mirror
What makes this moment so significant is that AI is holding up a mirror to knowledge work itself. It's forcing us to distinguish between:
Pattern recognition and genuine insight
Application of frameworks and creative problem-solving
Expertise as accumulated pattern-matching and expertise as judgment under uncertainty
Work that engages our minds and work that gives our lives meaning
The evolution of management theory from Taylor to Drucker to today wasn't just about finding better ways to organize work—it was about progressively understanding what humans uniquely contribute. AI doesn't end that evolution; it accelerates it and makes the questions more urgent.
The companies that will thrive aren't necessarily those that adopt AI fastest, but those that most clearly understand what AI reveals about the nature of their work—and then reorganize around that understanding. That requires thinking seriously about management theory, not as historical curiosity, but as essential strategic framework.
Because the question "what should AI do versus what should humans do?" is really asking "what is work for, and what do humans uniquely contribute?" And that's been the central question of management theory for over a century.
We're just finally getting some answers—some of them uncomfortable, many of them clarifying, all of them forcing a reckoning with assumptions we didn't realize we'd made.
