In 1911, Frederick Winslow Taylor published The Principles of Scientific Management, arguing that work could be decomposed into measurable components, optimized through systematic study, and reassembled into more efficient processes. The book was controversial—critics accused Taylor of reducing workers to mere cogs in a machine—but its influence was undeniable. Within decades, Taylor's methods had transformed manufacturing and eventually extended into the knowledge work that would come to define the twentieth-century corporation.
I've been thinking about Taylor lately, not because his methods are newly relevant, but because we're witnessing their logical conclusion play out at the world's most prestigious consulting firm. McKinsey & Company—the institution that arguably did more than any other to industrialize management thinking itself—has just revealed that it now employs 25,000 AI agents alongside 40,000 humans. The agent count, which was closer to 3,000 just eighteen months ago, is expected to approach parity with human headcount by year's end.
The headlines have focused on the automation angle: consultants replaced by machines, white-collar work under siege, the robots finally coming for the elite. But I think this framing misses what's actually interesting about McKinsey's disclosure. What we're seeing isn't the automation of consulting—it's the revelation of what consulting work actually consisted of all along.
Bob Sternfels, McKinsey's global managing partner, has been remarkably candid about the transformation underway. Speaking on the Harvard Business Review IdeaCast, he described a workforce of 60,000: 40,000 humans and 20,000 AI agents. Days later at CES, he updated that figure to 25,000 agents, a number McKinsey subsequently confirmed to Business Insider.
But the more revealing statistic came from the firm's internal deployment data. Since Lilli, McKinsey's generative AI platform, rolled out firm-wide in July 2023, 72% of employees have become active users, logging over 500,000 prompts monthly. The reported efficiency gain: up to 30% time savings in "searching and synthesizing knowledge."
That phrase deserves examination. Searching through documents is, by definition, pattern work—applying known criteria to find relevant information. Synthesizing information is largely pattern work—combining inputs according to established frameworks to produce structured outputs. The work that junior consultants traditionally spent "lots of time on," as one McKinsey leader put it—creating slide decks, compiling charts, applying best practices—follows, almost by definition, predictable patterns.
The firm reportedly saved 1.5 million hours in 2025 through AI. That's 1.5 million hours of what was presumably considered professional expertise, now revealed to be pattern-matching that machines can do faster.
This is not a criticism of McKinsey's consultants. It's a structural observation about what systematized knowledge work actually consists of—and it carries implications far beyond consulting.
The Pattern Paradox
I've written before about what I call the Pattern Paradox: the observation that organizations most successful at scaling through systematization face the greatest AI transformation vulnerability. Success through standardization creates the very conditions that make automation possible.
McKinsey is perhaps the clearest example of this paradox in action.
For a century, the firm's competitive advantage has been built on converting judgment into patterns. McKinsey's famed knowledge management system—containing decades of case studies, frameworks, and best practices—represents nothing less than the industrialization of consulting wisdom. The pyramid structure, with partners directing armies of analysts, optimized for applying proven patterns to novel client situations. The 7-S Framework, the Growth-Share Matrix (borrowed from BCG but emblematic of the approach), the endless two-by-two matrices—all represent judgment that's been systematized into reproducible methodology.
This systematization was the source of McKinsey's value. A client hiring McKinsey wasn't just buying smart people; they were buying access to a proprietary corpus of institutional knowledge about what works. That knowledge had been painstakingly extracted from thousands of engagements, codified into frameworks, and embedded in training programs that turned bright young graduates into McKinsey consultants who could reliably apply the McKinsey way.
The problem, now obvious in retrospect, is that codified knowledge is exactly the kind of knowledge that AI systems can access and apply. Kate Smaje, the senior partner leading McKinsey's AI transformation, put it starkly to the Wall Street Journal: "Do I think that this is existential for our profession? Yes, I do."
She's right, but perhaps not in the way that sounds. What's existential isn't consulting—it's the particular economic model that consulting has relied upon.
The Self-Assessment Gap
Here's what makes McKinsey's disclosure particularly illuminating: it reveals the gap between how knowledge workers perceive their work and what their work actually contains.
Most knowledge workers, when asked to estimate how much of their work is routine versus requiring genuine judgment, place themselves around 40% routine. When subjected to rigorous work composition analysis—systematic decomposition of tasks, time allocation studies, output classification—that number typically lands between 60% and 80%.
The more expertise someone has in a domain, paradoxically, the larger this perception gap tends to be. Deep expertise often manifests as the ability to rapidly apply sophisticated patterns—which feels like judgment precisely because the pattern-matching has become automatic. The senior consultant who can instantly diagnose a client's supply chain problem isn't exercising judgment so much as recognizing a pattern they've seen dozens of times before, even if the recognition feels like insight.
McKinsey's deployment of Lilli demonstrates this gap at organizational scale. If 30% of consultants' time was spent on work AI can now do faster and better, that's nearly a third of elite consulting labor revealed to be pattern work in disguise.
The firm's decision to test job candidates on their ability to work with Lilli during interviews is telling. According to CaseBasix, a consulting interview preparation firm, McKinsey is now evaluating "whether candidates can use AI as a productive support tool while maintaining ownership of the final answer." The focus is on "structured reasoning, iterative improvement, and sound decision making under time pressure."
What CaseBasix is describing—in the framework I've been developing—is the judgment work that remains irreducibly human. McKinsey isn't testing whether candidates can do the pattern work. They're testing whether candidates can supervise AI doing the pattern work while exercising the judgment that determines whether the output makes sense.
That's a fundamentally different skill than what consulting recruited for a decade ago.
The Business Model Transformation
The more interesting shift isn't technological but commercial. McKinsey is reportedly migrating away from fee-for-service toward outcome-based arrangements, with approximately 25% of their work now tied to measurable client impact rather than billable hours.
This is a significant departure from consulting's traditional economic model—and the logic behind it reveals something important about where value is migrating.
Under fee-for-service, pattern work is billable. Every hour an analyst spends applying frameworks to data generates revenue. The pyramid structure optimizes for this: partners sell work, analysts execute it, and the spread between what clients pay and what analysts cost creates margin. More pattern work means more billable hours means more revenue.
Under outcome-based arrangements, the economics invert. Pattern work becomes cost, not revenue. If AI can replicate analyst hours at marginal cost, then the only work worth doing—from a margin perspective—is the judgment work that leads to demonstrable client impact.
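The inversion can be made concrete with a toy model. All of the figures below—hours, rates, fees, and the AI cost—are illustrative assumptions I've chosen for the sketch, not McKinsey's actual economics:

```python
# Toy comparison of consulting economics under two pricing models.
# Every number here is an illustrative assumption, not a real figure.

def fee_for_service_margin(pattern_hours, judgment_hours, bill_rate, cost_rate):
    """Every delivery hour is billable revenue; pattern work adds to the top line."""
    hours = pattern_hours + judgment_hours
    return hours * bill_rate - hours * cost_rate

def outcome_based_margin(pattern_hours, judgment_hours, outcome_fee,
                         cost_rate, ai_cost_rate, ai_share):
    """The fee is fixed by the outcome, so every delivery hour is pure cost.
    Shifting pattern work to AI (at near-zero marginal cost) raises margin."""
    human_pattern = pattern_hours * (1 - ai_share)
    ai_pattern = pattern_hours * ai_share
    cost = (human_pattern + judgment_hours) * cost_rate + ai_pattern * ai_cost_rate
    return outcome_fee - cost

# A hypothetical engagement that is ~70% pattern work:
ffs = fee_for_service_margin(pattern_hours=700, judgment_hours=300,
                             bill_rate=400, cost_rate=150)
obm_no_ai = outcome_based_margin(700, 300, outcome_fee=400_000,
                                 cost_rate=150, ai_cost_rate=5, ai_share=0.0)
obm_ai = outcome_based_margin(700, 300, outcome_fee=400_000,
                              cost_rate=150, ai_cost_rate=5, ai_share=0.9)

print(f"fee-for-service margin:         {ffs:>9,.0f}")
print(f"outcome-based, no AI:           {obm_no_ai:>9,.0f}")
print(f"outcome-based, 90% AI pattern:  {obm_ai:>9,.0f}")
```

Under fee-for-service, moving pattern hours to AI shrinks the top line; under the outcome-based model, the same move is pure margin expansion. That's the sense in which the economics invert.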
When Sternfels says McKinsey is "underwriting the outcome by tying our fees to the impact our work delivers," he's describing something structurally similar to insurance. The firm is betting that its judgment about which transformations will succeed is good enough to stake compensation on results. That's a very different capability than billing hours for analysis.
This shift has profound implications. Strategy work now represents less than 20% of McKinsey's global business. AI and technology consulting accounts for approximately 40% of revenue. The firm has pivoted from what Oliver Wyman CEO Nick Studer memorably called "a suit with PowerPoint" toward implementation partnerships that span data infrastructure, AI enablement, and workforce transformation.
The pattern work is being automated. The judgment work is being bet on. And the pricing model is following the value.
Three Implications
What does McKinsey's transformation reveal about AI's broader impact on knowledge work? I see three implications worth drawing out.
The Composition Question Precedes the Automation Question
Organizations typically approach AI by asking "what can we automate?" This is the wrong question—or rather, it's the second question, not the first.
The right first question is: "What does our work actually consist of?"
McKinsey's 25,000 agents didn't emerge from a technology assessment. They emerged from an understanding of work composition. The firm analyzed what consultants actually do—the hours spent on research, synthesis, deck-building, and framework application—and built agents to handle those tasks.
This distinction matters because capability measurement (can AI do task X?) and composition measurement (what percentage of role Y consists of task X?) answer different questions. The former tells you what's technically feasible. The latter tells you what transformation actually looks like.
Most organizations have never measured work composition. They assume their professionals do primarily judgment work because professionals are expensive and expertise seems unique. McKinsey's implicit admission—that elite consultants were spending 30%+ of their time on work AI can handle—should prompt serious reflection about that assumption.
Successful Systematization Predicts Transformation Vulnerability
The Pattern Paradox is not hypothetical. McKinsey's hundred years of knowledge codification created exactly the corpus that makes Lilli effective. The firm's commitment to best practices and reproducible frameworks meant that much of their expertise was already in a form amenable to AI retrieval and application.
Organizations that pride themselves on process excellence should understand this clearly: documented procedures, templates, training materials, and best practices are not just operational assets. They're also transformation liabilities. The more thoroughly you've converted judgment into process, the more exposed you are to AI disruption—because you've already done the hard work of making that knowledge systematic.
To be sure, this isn't an argument against systematization. Organizations that haven't systematized their knowledge face different problems: inconsistent quality, key-person dependencies, inability to scale. But it is an argument for clear-eyed assessment of where systematization has positioned you on the AI transformation curve.
The organizations that will be most disrupted are those that successfully industrialized their knowledge work over the past decades. That's the paradox: historical success at scaling predicts future vulnerability to automation.
The Value Migration Is from Pattern to Judgment
McKinsey's shift to outcome-based pricing reflects a deeper truth: the margin on pattern work is collapsing. When agents can search, synthesize, and structure at near-zero marginal cost, the economic value migrates to whatever remains irreducibly human.
In McKinsey's framing, that's "setting the right aspirations, human judgment, and true creativity." These are the skills that determine which patterns to apply, whether the AI output makes sense for this specific context, and how to navigate the human dynamics of organizational change.
The consultant of tomorrow, as Sternfels put it, must "learn faster than ever before, at a rate you and I have never seen" and "collaborate seamlessly with both humans and AI agents." The pattern work will be automated; the judgment work becomes more critical, not less.
This is both threatening and liberating. Threatening because many knowledge workers have built careers on sophisticated pattern-matching that they didn't recognize as pattern-matching. Liberating because the time freed from pattern work becomes available for activities that actually require human intelligence.
The Infrastructure Gap
There's a competitive angle here that deserves attention. McKinsey can deploy AI internally because they have the infrastructure to do so—the knowledge base, the technical capability, the organizational will. What they haven't done, and perhaps cannot do, is productize the diagnostic capability that enabled their transformation.
Think about what McKinsey needed to know before they could deploy 25,000 agents effectively:
What does consultant work actually consist of?
Which tasks are pattern-based versus judgment-based?
Where are the automation opportunities versus augmentation opportunities?
How do we redesign workflows to leverage AI effectively?
These are work composition questions. McKinsey answered them for themselves, but they haven't built a methodology to answer them for clients. Lilli serves McKinsey's internal knowledge; it doesn't diagnose client workforce transformation needs.
This creates an interesting gap. Every organization approaching AI transformation faces the same composition question McKinsey faced: what does our work actually consist of? But most organizations don't have McKinsey's resources to answer that question through trial and error.
The firms that can provide that diagnostic capability—systematic measurement of work composition that precedes AI deployment—are positioned at a potentially valuable chokepoint. You can't automate what you don't understand. And understanding requires measurement.
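What such measurement looks like in its simplest form can be sketched directly. The tasks, hours, and pattern/judgment labels below are hypothetical illustrations I've invented for the example, not measured data or any firm's actual methodology:

```python
# A minimal sketch of a work-composition diagnostic: given a task log with
# weekly hours and a hand-labeled "pattern" vs "judgment" classification,
# report what share of a role's time is pattern work. All entries below
# are hypothetical illustrations, not measured data.

from collections import defaultdict

task_log = [
    # (task, weekly hours, classification)
    ("search internal knowledge base",        6,  "pattern"),
    ("synthesize research into summary",      8,  "pattern"),
    ("build slide deck from template",        10, "pattern"),
    ("apply benchmarking framework",          6,  "pattern"),
    ("frame the client problem",              4,  "judgment"),
    ("sanity-check AI output in context",     3,  "judgment"),
    ("navigate client stakeholder dynamics",  3,  "judgment"),
]

def composition(log):
    """Return each classification's share of total logged hours."""
    hours = defaultdict(float)
    for _task, h, label in log:
        hours[label] += h
    total = sum(hours.values())
    return {label: h / total for label, h in hours.items()}

shares = composition(task_log)
for label, share in sorted(shares.items()):
    print(f"{label:>9}: {share:.0%}")
# This hypothetical role logs 30 of 40 hours (75%) as pattern work --
# inside the 60-80% band the analysis above describes, and far above
# the ~40% most workers self-report.
```

The hard part, of course, is not the arithmetic but the labeling: deciding honestly which tasks are pattern and which are judgment is exactly the step the self-assessment gap distorts.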
The Uncomfortable Truth
There's a reason knowledge workers resist work composition analysis. It's threatening to discover that expertise often consists of sophisticated pattern-matching rather than irreducible judgment. The emotional response—"my work is too complex for AI"—is a defense mechanism, not an analysis.
McKinsey's disclosure should make that defense mechanism harder to maintain. If McKinsey's consultants—among the most rigorously selected knowledge workers on Earth—find that significant portions of their work can be handled by AI agents, the notion that other knowledge work is somehow immune becomes difficult to sustain.
The data from McKinsey's own research reinforces this. According to their 2025 state of AI survey, only 39% of organizations attribute any EBIT impact to AI, and most of those report less than 5% impact. The gap between AI capability and organizational value capture remains enormous—not because the technology doesn't work, but because most organizations haven't done the composition analysis required to deploy it effectively.
The highest-performing organizations in McKinsey's research distinguish themselves not by using different AI tools, but by treating "AI as a catalyst to transform their organizations, redesigning workflows and accelerating innovation." That's a composition insight, not a technology insight.
What This Means
McKinsey is conducting the largest-scale experiment in knowledge work transformation currently underway. Their findings—revealed not just in statements but in hiring practices, pricing models, and organizational restructuring—offer a preview of what's coming for every knowledge-intensive organization.
The question every organization should be asking isn't "will AI affect our work?" That question has been answered. The question is: "What percentage of our work is pattern-based, and what are we going to do about that before someone else decides for us?"
McKinsey had the resources and foresight to transform proactively. They analyzed their work, built the infrastructure, deployed the agents, and restructured their business model—all before the market forced their hand. Most organizations will not have that luxury. They'll discover their pattern-work exposure when competitors start winning on cost, or when employees start leaving for organizations that offer more interesting work, or when clients start asking why they're paying premium rates for deliverables that AI can produce.
The uncomfortable truth is that the 60-80% pattern work typical of knowledge roles isn't a verdict—it's a diagnostic. It tells you where transformation will occur and what capabilities you need to develop. The organizations that understand their composition can navigate the transition deliberately. The organizations that don't will navigate it reactively, which is another way of saying they won't navigate it at all.
Frederick Taylor argued that work could be decomposed, measured, and optimized. He was right, and the knowledge economy spent a century proving it by systematizing everything from consulting to legal services to financial analysis. The paradox is that this systematization—this conversion of judgment into pattern—created the conditions for the next transformation.
AI doesn't eliminate knowledge work. It reveals what knowledge work actually consisted of. The question now is what we do with that revelation.
The organizations that measure their work composition before deploying AI will transform deliberately. The organizations that don't will discover their pattern-work exposure the hard way—through margin compression, talent flight, and competitive displacement. The window for deliberate transformation is measured in months, not years.
Sources & References
Primary Sources - McKinsey Leadership
Bob Sternfels (Global Managing Partner)
Harvard Business Review. "McKinsey Faces Its AI Future." HBR IdeaCast. December 2025. https://hbr.org/2025/12/mckinsey-faces-its-ai-future
Multiple media interviews covering McKinsey's AI agent workforce deployment at CES 2026 and subsequent confirmations. Coverage: Yahoo Finance, AOL, Fortune
Kate Smaje (Senior Partner, AI Transformation Lead)
The Wall Street Journal. "AI Is Coming for the Consultants. Inside McKinsey, 'This Is Existential.'" August 2025. Quote: "Do I think that this is existential for our profession? Yes, I do."
Referenced in: Futurism, Naked Capitalism
McKinsey Platform & Statistics
Lilli AI Platform
McKinsey & Company. "Rewiring the way McKinsey works with Lilli, our generative AI platform." Tech and AI. https://www.mckinsey.com/capabilities/tech-and-ai/how-we-help-clients/rewiring-the-way-mckinsey-works-with-lilli
McKinsey & Company. "Meet Lilli, our generative AI tool that's a researcher, a time saver, and an inspiration." About McKinsey Blog. https://www.mckinsey.com/about-us/new-at-mckinsey-blog/meet-lilli-our-generative-ai-tool
CIO Dive. "McKinsey rolls out generative AI tool 'Lilli' to 7K employees." September 2023. https://www.ciodive.com/news/McKinsey-generative-AI-Lilli-platform-internal-employees/691231/
Key Statistics Verified:
72% of McKinsey employees use Lilli regularly
500,000+ prompts generated monthly
Up to 30% time savings in searching and synthesizing knowledge
1.5 million hours saved in 2025
Firm-wide rollout: July 2023
Over 75% of 43,000 employees using Lilli monthly (updated figures)
Average usage: 17 times per week per user
Research & Survey Data
McKinsey State of AI 2025
McKinsey & Company. "The State of AI: Global Survey 2025." QuantumBlack. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
Survey conducted June 25 - July 29, 2025
1,993 participants across 105 nations
Key finding: 39% of organizations attribute any EBIT impact to AI; most report <5% impact
High performers (6% of respondents): Those with 5%+ EBIT impact from AI
Analysis:
Medium. "Deep Dive into McKinsey's The State of AI in 2025: From 'Everyone Using AI' to 'A Few Using It Well.'" David Hung Yang. https://medium.com/@david.hung.yang/deep-dive-into-mckinseys-the-state-of-ai-in-2025-from-everyone-using-ai-to-a-few-using-it-6095987cec14
Winsome Marketing. "McKinsey's State of AI Report: 88% Adoption, But Only 6% Are Actually Winning." https://winsomemarketing.com/ai-in-marketing/mckinseys-state-of-ai-report-88-adoption-but-only-6-are-actually-winning
Hiring & Interview Practices
CaseBasix Analysis
CaseBasix. "McKinsey AI Interview Explained - Final Round Expectations." https://www.casebasix.com/pages/mckinsey-ai-interview
Description of McKinsey's pilot program testing candidates on Lilli usage during interviews
Focus on "structured reasoning, iterative improvement, and sound decision making under time pressure"
Media Coverage:
Fortune. "McKinsey challenges graduates to master AI tools as it shifts hiring hunt toward liberal arts majors." January 14, 2026. https://fortune.com/2026/01/14/how-to-get-hired-at-mckinsey-ai-tools-liberal-arts-creativity/
CIO. "McKinsey starts testing job candidates with AI assistant." https://www.cio.com/article/4117688/mckinsey-begins-testing-candidates-with-an-ai-assistant.html
The Irish Times. "McKinsey challenges graduates to use AI chatbot in recruitment overhaul." January 14, 2026. https://www.irishtimes.com/business/2026/01/14/mckinsey-challenges-graduates-to-use-ai-chatbot-in-recruitment-overhaul/
Industry Perspectives
Oliver Wyman - Nick Studer (CEO)
TechRepublic. "Musk Weighs In On Consulting Industry's 'Existential Transformation.'" Quote: "Companies don't want a suit with PowerPoint" and "The age of arrogance of the management consultant is over now." https://www.techrepublic.com/article/news-mckinsey-ai-consulting-shift-elon-musk/
Additional Context
Business Model & Revenue
Multiple sources reference McKinsey's shift toward outcome-based pricing (approximately 25% of engagements)
AI and technology consulting: ~40% of revenue
Strategy work: <20% of global business
Sources: The AI Opportunities Newsletter, Business Insider coverage
Historical Reference
Taylor, Frederick Winslow. The Principles of Scientific Management. Harper & Brothers, 1911.
Supplementary Reading
The AI Opportunities. "Inside McKinsey's AI Operating System." Substack. https://www.theaiopportunities.com/p/inside-mckinseys-ai-operating-system
Medium. "Turning AI Into Infrastructure: McKinsey's Playbook." Nick Talwar. https://medium.com/@talweezy/turning-ai-into-infrastructure-mckinseys-playbook-7a183fae1262
DigitalDefynd. "5 Ways McKinsey Is Using AI [Case Studies] [2026]." https://digitaldefynd.com/IQ/ways-mckinsey-is-using-ai/
