In 1865, William Stanley Jevons observed something counterintuitive about coal: improvements in steam engine efficiency didn't reduce coal consumption—they increased it. Better engines made coal useful for more applications, expanding total demand far beyond what efficiency saved. Jevons's Paradox, as this became known, explains why technological advancement that appears to threaten jobs often creates more work than it eliminates.
I've been thinking about Jevons lately—not as economic theory, but as lived experience. For the past decade, I've owned and operated Divergence Academy, a vocational trade school serving career transitioners, particularly veterans moving into civilian IT roles. We teach Cybersecurity, Cloud Engineering, GRC (Governance, Risk, and Compliance), and Intelligent Automation. For ten years, the model worked: take people in transition, teach them technical skills, place them in jobs.
Then, about 24 months ago, my model started breaking.
Not catastrophically. Not overnight. But placement rates began slipping. Employers' requirements shifted in ways my curriculum couldn't keep pace with. The jobs weren't disappearing—they were transforming, and faster than my institutional response time could match. Students were graduating with valuable technical skills but increasingly found themselves competing in a market that now demanded something additional, something harder to name: adaptive capacity in the face of AI-augmented workflows.
When you own the production facility—when declining placement rates hit your bottom line—academic debates about workforce transformation become operational urgencies. You can't afford to wait for someone else to build the infrastructure you need. You build it yourself, or you close.
So I built it. What emerged is what I'm calling The Helm Program, powered by the Scaffold Framework. This article is about that framework, the infrastructure it creates, and why I think it matters for more than just my trade school's survival.
My Three-Business Integration Problem
Here's the context that explains why this framework exists: I own three companies, each solving a different piece of the workforce transformation puzzle:
Divergence Academy (10 years) is my vocational trade school. I produce workers—not metaphorically, literally. People come in with military experience or career transitions, I train them in technical domains (Cybersecurity, Cloud, GRC, Intelligent Automation), they leave with certifications and, ideally, jobs. I'm not running a think tank analyzing workforce policy; I'm running an operational business where placement rates and student outcomes determine survival.
Euler Center (18 months in the making) is my measurement and evaluation company. As Divergence Academy's needs evolved—particularly my need to assess not just technical skills but adaptive capacity—it became clear I needed measurement infrastructure that didn't exist in the market. So I built it. Euler Center develops frameworks for evaluating the kinds of "soft" skills that AI transformation makes critical: judgment, pattern recognition, contextual decision-making, emotional intelligence in ambiguous situations.
9brains (8+ years) is my AI consulting firm focused on GRID deployments: Campus, Content, Career, and Compliance AI systems. The Helm Program lives here. It's my forward deployment mechanism—the way I get AI operators embedded in organizations, starting with Compliance roles. I'm not selling AI tools; I'm building the human infrastructure that makes AI transformation productive rather than disruptive.
The integration isn't accidental. I can't measure adaptive capacity without understanding what jobs become when AI arrives. I can't train people for transformed roles without measurement frameworks that distinguish pattern work from judgment work. And I can't place my graduates into roles that don't have names yet without building the organizational infrastructure that creates those roles.
Most people trying to solve workforce transformation own one piece: the school, or the assessment tool, or the consulting practice. I own all three because the system requires all three, and none of them worked in isolation during the 24-month period when everything started shifting.
The Amarillo Moment: When Infrastructure Became Urgent
Three weeks ago, I was invited to speak at an industry event where data center infrastructure was the hot topic. The conversation kept circling back to one project: the Advanced Energy and Intelligence Campus being built near Amarillo, Texas—11 gigawatts of IT capacity across 5,800 acres, scheduled to begin delivering power by the end of 2026.
To put that scale in context: 11 gigawatts is enough to power roughly 8 million homes. Instead, it's powering AI data centers—the computational substrate of the transformation we're all talking about.
Here's what struck me: nobody at that event was talking about where the workforce comes from.
They discussed nuclear reactor configurations (four 1-gigawatt Westinghouse AP1000 reactors), natural gas infrastructure (sitting atop the Panhandle Hugoton Gas Field), solar arrays, and battery storage. They debated permitting timelines and grid interconnections. But when I asked about the electrical trade professionals who would build and maintain this infrastructure, I got blank stares.
Then someone mentioned BICSI.
BICSI—Building Industry Consulting Service International—is the global standard for Information and Communications Technology (ICT) infrastructure. Their certifications (RCDD for telecommunications design, DCDC for data centers, Installer programs for fiber and copper cabling) represent the training pipeline for the people who actually build the physical layer that AI runs on. Data centers don't build themselves. Someone has to design the structured cabling systems, install the fiber optics, configure the telecommunications distribution, manage the project timelines.
The Amarillo project will need thousands of BICSI-certified professionals. So will the dozens of similar projects breaking ground across Texas and the Southwest. The infrastructure is being funded. The timelines are aggressive. But the workforce development conversation is barely starting.
This is the gap I'm trying to address with The Helm Program—not just for data center electricians, but for the much larger population of knowledge workers whose jobs are being transformed by the AI infrastructure that places like Amarillo represent.
I'm beginning conversations with BICSI about how Divergence Academy can serve this emerging need. Not because it's an interesting market opportunity (though it is), but because it's the concrete, immediate version of the broader transformation challenge I've been wrestling with: jobs are changing faster than training infrastructure can respond, and somebody has to build the bridge.
My 24-Month Crucible: When Theory Met My Reality
Academic research on workforce transformation typically proceeds at the pace of grant cycles and publication timelines. Theory gets developed, pilot programs get funded, papers get peer-reviewed, and maybe five years later some insights filter into practice.
I didn't have five years. I had quarterly placement rate meetings and students whose career transitions couldn't wait for the research to catch up.
Around mid-2023, something shifted in my world. Employers who had reliably hired my Cybersecurity graduates started adding requirements that didn't fit my curriculum: "Must be comfortable with ambiguity." "Proven ability to synthesize across domains." "Experience navigating rapid change." These weren't technical requirements. They were signals that the shape of the work was changing.
Initially, I responded the way schools typically do: I added modules. A unit on "adaptability." Some content on "soft skills." I invited guest speakers to discuss "thriving in uncertainty." It didn't work. My students learned about adaptability but weren't developing adaptive capacity. The distinction matters: knowing about something isn't the same as being able to do it under pressure.
That's when I realized my problem wasn't content—it was diagnostic infrastructure. I couldn't measure what I needed to develop. And I couldn't develop it without first measuring where people actually stood.
This is where Euler Center entered the picture. I needed a framework that could:
Distinguish pattern work from judgment work in a given role
Assess someone's current ratio of pattern capacity vs. judgment capacity
Map the transformation pathway from their current role to their AI-augmented role
Measure progress as they developed new capacities
The standard tools—personality assessments, skills inventories, aptitude tests—weren't built for this. They measure relatively stable traits, not dynamic capacity for role transformation. I needed something different, so I built it.
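To make the diagnostic idea concrete, here's a minimal sketch of how task-level tagging could derive a pattern/judgment ratio from where time actually goes. This is my illustration of the logic only, not the actual AI Mirror tooling; the task names and hours are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float
    kind: str  # "pattern" (rule-following) or "judgment" (contextual decisions)

def pattern_judgment_ratio(tasks):
    """Derive a role's ratio from time actually spent, not self-report."""
    pattern = sum(t.hours_per_week for t in tasks if t.kind == "pattern")
    judgment = sum(t.hours_per_week for t in tasks if t.kind == "judgment")
    total = pattern + judgment
    return round(100 * pattern / total), round(100 * judgment / total)

# A hypothetical junior compliance analyst's week
tasks = [
    Task("policy checklist review", 14, "pattern"),
    Task("audit documentation", 10, "pattern"),
    Task("report generation", 4, "pattern"),
    Task("ambiguous-policy interpretation", 8, "judgment"),
    Task("risk assessment discussions", 4, "judgment"),
]
print(pattern_judgment_ratio(tasks))  # → (70, 30)
```

The point of the structure is the input: tasks and hours, which can be observed, rather than a self-assessed percentage, which is a guess.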
The Scaffold Framework: My Methodology, Not My Manifesto
The Scaffold Framework emerged from this operational necessity. I'm not presenting it as finished theory or settled science. I'm presenting it as my evolving methodology—one that's working in the specific context of my trade school serving career transitioners, but that I think has broader applicability.
My framework rests on a few core premises:
1. Work Decomposes Into Patterns and Judgment
Every role contains both pattern work (rule-following, template application, structured process execution) and judgment work (ambiguity navigation, contextual decision-making, synthesis across domains). The ratio varies by role, but both are always present.
What I've observed is that AI's impact is asymmetric: It's remarkably good at pattern work and remarkably limited at judgment work. This creates a predictable transformation pattern in my view—roles shift toward higher judgment ratios as AI handles more pattern work.
The transformation isn't elimination; it's evolution. Tax accountants, as I see it, don't disappear when AI can process returns; they become financial strategists who use AI for the pattern work while focusing their judgment on complex scenarios, client-specific optimization, and regulatory gray areas.
2. Most People Don't Know Their Pattern/Judgment Ratio
When I ask someone what percentage of their job is "following established patterns" vs. "making contextual judgments," I get a guess, not data. They haven't thought about their work this way. The categories don't map to how jobs are described or how people experience their workdays.
My Scaffold Framework includes diagnostic tools—what I call the AI Mirror—that analyze actual work tasks to derive someone's current ratio. Not what they think they do, but what the structure of their work reveals about where they spend cognitive energy.
This diagnostic capacity is critical in my experience. You can't navigate transformation if you don't know where you're starting from.
3. Transformation Pathways Are Role-Specific
I've learned that a sales representative and a graphic designer both might currently be 60% pattern work / 40% judgment work. But their transformation pathways are completely different. The sales rep I work with evolves toward relationship architecture and trust-building (their pattern work—pipeline management, CRM updates—gets automated). The graphic designer evolves toward creative strategy and brand storytelling (their pattern work—asset production, template application—gets automated).
Generic "soft skills" training fails in my observation because it treats all transformations as the same. The skills a sales rep needs to lean into aren't the skills a graphic designer needs to lean into. My framework maps role-specific pathways based on what judgment work becomes most valuable when pattern work gets automated in that domain.
4. Transition Requires Scaffolding, Not Just Skills
Most workforce development programs I've seen focus on skills acquisition: take this course, earn this certification, add this competency. That model assumes people just need to know what to do differently.
I've learned that knowledge isn't enough. Transitioning from a 60/40 role to an 80/20 judgment-heavy role isn't like learning a new software package. It's like switching from structured environments to ambiguous ones, from clear metrics to fuzzy signals, from execution to synthesis.
That transition requires what I call scaffolding—structured support that is gradually withdrawn as capacity builds. I don't just teach judgment skills; I create environments where people can practice judgment in progressively more complex scenarios, with decreasing levels of support, until the capacity becomes self-sustaining.
5. My Framework Self-Critiques
Here's the part that matters most for my intellectual honesty: I've designed the Scaffold Framework to evolve. Every cohort of students who move through Divergence Academy generates data about what works and what doesn't. Every placement (or failed placement) teaches me something about where my diagnostic tools were accurate and where they missed. Every employer conversation about why a hire succeeded or struggled updates my understanding of what judgment capacity actually looks like in practice.
I'm not claiming I've figured it out. I'm claiming I've built a framework that learns from its failures faster than traditional workforce development systems do. The measurement infrastructure I built (Euler Center) creates feedback loops that my training infrastructure (Divergence Academy) can act on, and my deployment infrastructure (9brains Helm Program) tests whether the training actually worked.
This is my methodological pragmatism, not theoretical certainty. My framework is right to the extent that it produces better outcomes, and it improves as I attend carefully to where it's wrong.
My Helm Program: Forward Deploying Infrastructure
The Helm Program is my deployment mechanism—the way my Scaffold Framework moves from Divergence Academy classrooms into actual organizations.
Here's how I've structured it:
Step 1: My Diagnostic Assessment
Participants (often early-career IT professionals or career transitioners like veterans) go through my AI Mirror assessment. This produces their current pattern/judgment ratio and identifies their role-specific transformation pathway. I might learn that a junior Compliance analyst is currently 70% pattern work (policy checking, audit documentation, report generation) and that their transformation path leads toward risk strategist and regulatory navigator roles.
Step 2: My Scaffolded Development
Participants enter the structured training I've developed that progressively builds judgment capacity. For the Compliance analyst, this means moving from rule-following exercises to ambiguous scenario navigation, from checklist completion to contextual policy interpretation, from audit execution to risk assessment.
My scaffolding is explicit: early exercises have heavy support and clear structure. As capacity builds, I decrease support and increase ambiguity. By the end, participants are operating in scenarios that mirror the actual judgment work their future roles require.
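The support/ambiguity schedule and the gating logic can be sketched roughly as follows. Every number here is hypothetical, chosen for illustration; this is not Divergence Academy's actual curriculum or scoring model.

```python
# Hypothetical scaffolding schedule: support decreases and ambiguity increases
# stage by stage, and a participant advances only on demonstrated capacity.

STAGES = [
    # (support, ambiguity) on a 0-1 scale
    (1.00, 0.00),  # rule-following exercises, heavy structure
    (0.75, 0.25),  # guided scenarios with hints available
    (0.50, 0.50),  # contextual policy interpretation, partial structure
    (0.25, 0.75),  # open-ended risk assessment with light coaching
    (0.00, 1.00),  # scenarios mirroring real judgment work, no safety net
]

def next_stage(stage, score, threshold=0.8):
    """Advance only when capacity is demonstrated; otherwise repeat the stage."""
    if score >= threshold and stage < len(STAGES) - 1:
        return stage + 1
    return stage

# One participant's (hypothetical) scenario scores across five attempts:
stage = 0
for score in [0.9, 0.6, 0.85, 0.9, 0.95]:
    stage = next_stage(stage, score)
support, ambiguity = STAGES[stage]
print(stage, support, ambiguity)  # → 4 0.0 1.0
```

The design choice the sketch captures: progression is gated on performance, not on time elapsed, so a weak score (the 0.6 above) means repeating a stage with support rather than being pushed into ambiguity unprepared.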
Step 3: My Forward Deployment
Here's where my Helm differs from traditional training programs: I don't just certify people and send them job hunting. I forward deploy them as AI Operators into organizations, starting with Compliance roles.
An AI Operator in my framework isn't just someone who uses AI tools. It's someone who understands the pattern/judgment distinction I teach, who can identify which work should flow to AI and which requires human judgment, who can design workflows that use AI effectively without creating new failure modes, and who can help organizations navigate the transition without panicking the workforce.
I embed these operators in Compliance departments initially (because Compliance has clear structure, measurable outcomes, and desperate need for efficiency gains) but their role is actually organizational infrastructure: they're building the capacity for thoughtful AI integration across the enterprise.
Step 4: My Feedback Loops
The deployment creates data that flows back to me. How well did my diagnostic predict transformation readiness? Where did my scaffolded training succeed or fail? What unexpected challenges emerged in actual roles? This data flows back to Euler Center (I update my measurement frameworks) and Divergence Academy (I update my training design).
The cycle repeats, getting better each iteration. That's my theory of improvement, anyway.
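A toy version of that feedback loop, purely my illustration and not the Euler Center implementation: compare the diagnostic's predicted transformation readiness against actual placement outcomes, then nudge the readiness threshold in whichever direction the errors point.

```python
def update_threshold(threshold, records, step=0.02):
    """Calibrate a readiness threshold from outcomes.

    records: list of (predicted_ready: bool, placed: bool) pairs.
    Too many false positives -> the diagnostic was too optimistic, raise the bar.
    Too many false negatives -> it was too conservative, lower the bar.
    """
    false_positives = sum(1 for pred, placed in records if pred and not placed)
    false_negatives = sum(1 for pred, placed in records if not pred and placed)
    if false_positives > false_negatives:
        threshold += step
    elif false_negatives > false_positives:
        threshold -= step
    return round(threshold, 3)

# Hypothetical cohort: two over-predictions, one under-prediction
print(update_threshold(0.70, [(True, False), (True, False), (False, True)]))  # → 0.72
```

The real loop is obviously richer than one scalar threshold, but the shape is the same: deployment outcomes flow back and change the measurement, which changes the training.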
Five Serious Critiques I Face
Any framework attempting to address workforce transformation attracts critics. My Scaffold Framework faces five serious lines of critique that I think deserve my direct engagement:
1. The Complexity Defenders
Their argument: I'm oversimplifying work into "patterns" and "judgment." Real jobs are far more complex than binary categories. This reductionism might make for clean frameworks, but it erases the nuanced reality of how people actually work.
My response: Fair. My pattern/judgment distinction is reductive. All models are. The question isn't whether it captures every nuance of work—it doesn't—but whether it's useful enough to guide action.
In practice, I've found that my framework helps people think about their work in ways they hadn't before. It gives them a vocabulary for something they were experiencing but couldn't articulate: "Oh, that's why I find certain parts of my job draining (pattern work I don't care about) and other parts energizing (judgment work that uses my capabilities)."
My framework isn't claiming to be a complete theory of work. It's claiming to be a useful diagnostic tool for a specific purpose: helping people navigate AI-driven role transformation. For that purpose, I think the reduction works.
With that in mind, I'd be interested in critiques that improve my framework rather than just noting its incompleteness. What dimensions am I missing that matter operationally? How can I make my model more useful while keeping it actionable?
2. The Identity Protectors
Their argument: I'm treating work transformation as primarily technical and cognitive, but jobs are deeply tied to identity, social status, and meaning-making. Many people don't want to transition to more judgment-heavy roles. They find satisfaction in pattern work, in craftsmanship, in mastery of established processes. My framework assumes everyone should aspire to become a strategic advisor when many people just want to be excellent at what they already do.
My response: This is the critique that keeps me up at night, because it's pointing at something real.
My framework does embed a value hierarchy: it treats judgment work as the direction of travel, the place where human value concentrates when AI handles patterns. But who says judgment work is objectively superior? Why is "strategic advisor" better than "excellent accountant who loves the pattern work of getting every tax return exactly right"?
I don't have a fully satisfying answer. What I can say is this: the hierarchy I'm observing isn't normative ("you should prefer judgment work") but descriptive ("the labor market increasingly values judgment work when pattern work can be automated"). My Scaffold Framework is responding to that market reality, not creating it.
But the critique points toward a blind spot in my thinking: I need infrastructure for people who want to double down on pattern work excellence even as that work gets automated. Maybe that's AI-augmented craftsmanship—using AI to handle routine patterns while humans focus on the edge cases and exceptions that require deep domain expertise. That's a legitimate pathway my current framework doesn't serve well.
The identity issue is harder. Work isn't just economic—it's where many people derive meaning and social standing. Transformation that preserves economic value while eroding identity and status creates its own harms. I need to engage with this more thoughtfully than I currently do.
3. The Labor Economists
Their argument: I'm pushing Jevons's Paradox as optimistic framing, but labor economics doesn't straightforwardly support the claim that AI will create more jobs than it displaces. The evidence is mixed at best. Automation often concentrates gains at the top while displacing middle-skill workers, and there's no guarantee the new jobs AI creates will be accessible to the people whose jobs AI eliminates. My framework might help some individuals transition, but it doesn't address the structural question of whether aggregate employment increases or decreases.
My response: They're right that Jevons's Paradox isn't a guarantee—it's a pattern that sometimes holds. When new technology makes something more efficient, demand can expand enough to offset the efficiency gains, but only if:
The demand is elastic (people want more of it when it gets cheaper/better)
The capability unlocks new use cases
The gains get distributed broadly enough that new demand can materialize
For AI, I think the conditions hold, but I acknowledge uncertainty. AI isn't just making existing work more efficient; it's making previously impossible things (real-time language translation, image synthesis, code generation, complex data analysis) accessible to millions who couldn't afford experts before. That's classic Jevons territory—efficiency creating new demand.
But the critique about distribution is serious. If AI gains concentrate at the top while displacing middle-skill workers, we could see Jevons's Paradox at the economy level (more total economic activity) while experiencing labor market hollowing at the worker level (fewer good jobs for displaced workers).
This is why I think the civic infrastructure layer matters so much. Market forces alone won't ensure displaced workers can access new opportunities. Intentional infrastructure—like what I'm building with The Helm Program—is required to ensure transitions happen successfully. If we don't build that infrastructure, the economists' pessimistic scenario becomes more likely.
4. The Techno-Pessimists
Their argument: I'm assuming AI capabilities will plateau somewhere below general human judgment. What if they don't? What if AI systems develop judgment capacity that matches or exceeds humans? My entire framework rests on judgment being the "safe" domain for human workers, but that's just a temporary assumption. Once AI can handle judgment work, my transformation pathways lead nowhere.
My response: Fair point, and genuinely uncertain. I don't know where AI capabilities plateau.
What I can say is this: even if AI eventually develops sophisticated judgment capacity, the transition period matters. If AI takes decades to reach human-level judgment across domains (as opposed to narrow domains where it already exceeds humans), then infrastructure that helps people shift toward judgment work during those decades serves a real purpose.
And here's the more subtle point: even if AI develops judgment capacity, human judgment might remain valuable for different reasons—not because it's superior on technical metrics, but because humans care about outcomes in ways AI doesn't, because we're accountable to other humans in ways AI isn't, because contextual understanding includes social and political dimensions that resist pure optimization.
I'm not confident about any of this. If the techno-pessimists are right and AI rapidly achieves general judgment capacity, then yes, my Scaffold Framework's core premise collapses. That's a genuine risk I'm taking.
But I think it's still correct to build infrastructure for the transition period we're in, even if I'm uncertain about the endpoint. The alternative—waiting until we know AI's final capabilities before building workforce infrastructure—seems worse. By then it's too late.
5. The Humanistic Tradition
Their argument: My framework treats education as instrumental—it's all about job outcomes, economic value, workforce transition. But the deeper purpose of education is human flourishing, intellectual development, citizenship preparation, cultivation of wisdom and character. By reducing education to workforce preparation, I'm surrendering to a narrow, economistic view of human development. Even if my framework succeeds on its own terms (placement rates, earnings), it might be failing on dimensions that matter more.
My response: I feel the force of this critique, and I think about it constantly.
Divergence Academy is a vocational trade school. My students are coming to me for a specific purpose: career transition, economic stability, access to IT roles that provide middle-class incomes. They're not coming for liberal arts education or philosophical cultivation. They need jobs, and I help them get jobs.
But the humanistic critique asks: am I only doing that? Am I reducing human potential to market value?
I hope not. My Scaffold Framework's emphasis on judgment work is, in part, an attempt to preserve human agency and meaningful work in an AI-augmented economy. The transformation I'm facilitating isn't just "learn to use AI tools"—it's "develop capacities for navigating ambiguity, making contextual decisions, synthesizing across domains, exercising judgment in situations where rules don't provide answers."
Those are capacities that matter beyond economic value. They're civic capacities, intellectual capacities, human capacities. Someone who develops strong judgment isn't just a better employee; they're potentially a more thoughtful citizen, a more effective parent, a more capable problem-solver in life contexts that have nothing to do with work.
So I'd argue my framework isn't only instrumental, even though it operates in an instrumental context. I'm trying to cultivate genuinely valuable human capabilities, and the fact that those capabilities also have labor market value doesn't negate their broader worth.
That said, the critique points at a constant risk I face: vocational education always threatens to become too narrow, too focused on immediate employability at the expense of broader development. I don't think my Scaffold Framework fully resolves that tension. It's something I have to actively resist—the temptation to optimize only for placement rates and forget that I'm working with human beings whose flourishing matters beyond their market value.
What I'm Missing: The Infrastructure Gaps
My Scaffold Framework addresses one piece of workforce transformation infrastructure. But it's far from complete. Here are three critical gaps where progress requires different kinds of infrastructure that I can't build alone:
1. Policy Integration I Can't Control
Workforce development doesn't happen in a vacuum. It happens in a policy environment that shapes everything from unemployment insurance to certification requirements to education funding to labor market regulations.
My Helm Program operates effectively within that environment, but I'm not changing it. If unemployment insurance doesn't cover retraining periods, if certification bodies don't recognize non-traditional credentials, if hiring practices filter for college degrees regardless of actual capability—these policy constraints limit how much my program alone can accomplish.
There's a missing layer I can't build: policy infrastructure that makes transition easier. Portable benefits, skills-based hiring standards, recognition of competency-based credentials, funding mechanisms for mid-career retraining. Until that infrastructure exists, programs like mine are working around systemic barriers rather than removing them.
I don't have solutions to offer here. Policy reform is outside my domain. But it's important for me to acknowledge the limitation: my individual-level interventions, however effective, can't fully compensate for system-level design problems.
2. Capital Access I Can't Provide
Career transitions often require financial cushion. Someone making $65,000/year in a stable job might need to take a step back to $50,000 while building new capabilities, then step forward to $80,000 in a transformed role. That transition period requires savings or credit or family support—resources that aren't evenly distributed.
My Helm Program can help someone navigate the capability transition, but I can't solve their cash flow problem. If they can't afford a 6-month income dip, they can't take the transition risk, regardless of how good my training is.
This is where financial infrastructure matters that I can't build: income support during transitions, low-cost credit for retraining, employer-funded transition programs, wage insurance that protects against temporary income loss. Some of this exists (unemployment insurance, student loans), but it's not well-designed for mid-career transitions in response to technological change.
Again, this is outside what I can build as a trade school operator. But it's a real constraint on how many people can successfully navigate transformation, even when my developmental infrastructure exists.
3. Employer Coordination I Can't Force
My transformation pathways don't work if employers aren't hiring for the transformed roles. Someone can develop excellent judgment capacity as a "risk strategist and regulatory navigator" through my program, but if Compliance departments are still hiring for "detail-oriented rule-followers who can process checklists," my training doesn't lead anywhere.
There's a coordination problem: I need to build toward future job structures, but employers hire for current job structures. If employers move first (redesigning roles to leverage AI for pattern work while humans focus on judgment), my training can follow. If I move first (developing judgment capacity), employers might not recognize or value it.
I try to address this through forward deployment—I'm not just training people, I'm embedding AI Operators in organizations and demonstrating what these transformed roles can do. It's a proof-of-concept approach: show employers that judgment-focused workers using AI tools are more valuable than traditional role structures, and let them copy what works.
But this is slow, organization-by-organization work. Systemic transformation would require employer coordination at scale: industry standards for AI-augmented roles, professional associations that recognize new capability requirements, job posting taxonomies that distinguish pattern work from judgment work.
We're not there yet. And without that coordination infrastructure, transformation happens in pockets rather than economy-wide.
My Jevons Thesis: Why I Think This Matters
Let me return to where I started: Jevons's Paradox and what I think it means for AI transformation.
The standard worry is straightforward: AI automates work → jobs disappear → workers get displaced → society faces unemployment crisis. If that's the path, then my workforce development infrastructure is just a band-aid on structural damage.
But Jevons's Paradox suggests a different pattern that I find more plausible: AI automates pattern work → work becomes more efficient → new applications become viable → demand for judgment work expands → total employment increases in transformed roles.
I think the Jevons pattern is more likely than the displacement pattern, but it's not automatic. The difference between those two futures is infrastructure: whether we build pathways for people to move from displaced pattern work to expanding judgment work.
That infrastructure has several layers in my view:
Diagnostic infrastructure (like my Euler Center) that helps people understand their current capabilities and transformation pathways
Development infrastructure (like my Divergence Academy) that builds judgment capacity through scaffolded training
Deployment infrastructure (like my Helm Program) that forward deploys people into transformed roles and demonstrates what works
Policy infrastructure that makes transitions financially feasible and institutionally supported
Capital infrastructure that provides resources for transition periods
Employer infrastructure that recognizes and hires for transformed capabilities
My Scaffold Framework addresses the first three layers. The latter three require different actors—policymakers, financial institutions, employer coalitions. But all five layers matter.
If we get the infrastructure right, I think AI transformation could produce broadly distributed prosperity: more people doing more meaningful work (judgment-heavy roles that use distinctively human capabilities) at higher compensation (because judgment work is more valuable). That's the Jevons optimistic scenario I'm working toward.
If we don't get the infrastructure right, I think we get the displacement scenario: pattern work automated faster than people can transition to judgment work, gains concentrating at the top while median workers struggle, social conflict over who bears the costs of transformation.
The difference between those futures isn't primarily about AI capability in my view—it's about the institutional infrastructure we build now, in the window we have before transformation accelerates beyond our ability to manage it thoughtfully.
Why I'm Writing This: My Practitioner Stance
I want to be clear about my position here. I'm not an academic researcher studying workforce transformation from the outside. I'm not a policy analyst making recommendations from a think tank. I'm not a journalist reporting on trends.
I'm a practitioner who owns a vocational trade school, who lived through 24 months of institutional struggle as my old model broke, who built new infrastructure out of operational necessity, and who is now making that infrastructure available because I think it might be useful beyond my specific context.
My Scaffold Framework isn't a research finding. It's my evolving method, built through trial and error, constantly updated based on what works in my practice. I'm presenting it not because I think it's finished, but because I think my approach—building infrastructure through operational iteration rather than waiting for theoretical consensus—is what the moment requires.
I'm also self-critiquing throughout. Every section on antagonist perspectives represents real concerns I've encountered, gaps I recognize, uncertainties I'm holding. My framework has limitations. Some of those limitations are fixable through iteration. Others might be fundamental.
But here's what I believe: the AI workforce transformation is happening whether we build good infrastructure or not. The question isn't whether roles will change—they're already changing in my students' experience. The question is whether we build pathways that help normal workers navigate that change successfully, or whether we let market forces and technological momentum produce chaotic displacement.
I'm building pathways. My Helm Program, powered by my Scaffold Framework, is my operational attempt to create the infrastructure that doesn't yet exist at scale. It's working in the contexts I can directly touch—veterans transitioning to IT careers through my Divergence Academy, organizations embedding AI Operators through my 9brains practice, measurement frameworks through my Euler Center.
Whether it scales beyond those contexts depends partly on whether my framework proves useful to other practitioners facing similar challenges. This article is, in part, an invitation: if you're running workforce development programs, if you're navigating organizational AI transformation, if you're trying to build better infrastructure for helping people transition—here's what I've learned. Take what's useful, improve what's incomplete, build your own version that fits your context.
Making This Concrete: My Data Center Connection
Let me bring this back to Amarillo and BICSI, because my abstractions about "workforce transformation" can feel distant from operational reality.
Here's the concrete problem I see: the Advanced Energy and Intelligence Campus near Amarillo will need thousands of workers. Not just during construction, but ongoing—data center operations require continuous technical maintenance, infrastructure monitoring, system optimization. These are specialized roles requiring specific certifications (BICSI credentials for structured cabling and telecommunications, electrical qualifications for high-voltage systems, HVAC expertise for cooling infrastructure, security clearances for sensitive sites).
The workforce doesn't currently exist at the scale required. BICSI training programs exist, but they're not designed for rapid scaling. Electrical trade programs exist, but they're not focused on data center infrastructure. Security clearance processes exist, but they take time.
This is a microcosm of the broader transformation challenge I face: known future demand, insufficient current supply, inadequate infrastructure for scaling fast enough.
Here's how I think my Scaffold Framework approach would work:
Step 1: My Diagnostic Assessment
I'd identify people with baseline technical capability (electricians, IT professionals, veterans with relevant military experience) and assess their current pattern/judgment ratio. A journeyman electrician might be 75% pattern work (following electrical codes, executing standardized installations) with judgment capacity focused on troubleshooting and site-specific problem-solving.
Step 2: My Role Transformation Mapping
I'd map how data center electrical work differs from traditional electrical work. The pattern work gets more complex (higher voltages, more sophisticated systems) but also more automated (monitoring systems handle routine checks). The judgment work shifts toward infrastructure optimization, predictive maintenance strategy, system integration across multiple domains.
Step 3: My Targeted Development
Rather than sending someone through a generic BICSI certification program, I'd scaffold the specific judgment capacities the role requires. How do you make contextual decisions about system redundancy? How do you navigate trade-offs between reliability, efficiency, and cost? How do you troubleshoot novel failure modes in complex systems?
Step 4: My Forward Deployment
I'd place people in roles at the Amarillo facility (or similar projects) with explicit scaffolding: paired with experienced professionals initially, given progressively more complex responsibilities as judgment capacity builds, supported with AI-augmented tools that handle pattern work while humans focus on contextual decisions.
Step 5: My Feedback Loop
I'd measure what worked. Which of my diagnostic criteria predicted success? Which scaffolding approaches accelerated development? What unexpected challenges emerged? I'd update my framework for the next cohort.
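The Step 1 diagnostic above can be made concrete with a small sketch. This is purely illustrative: the task categories, hours, and the two-way "pattern vs. judgment" labeling are my hypothetical stand-ins, not the actual Euler Center instrument, which is more involved than a time-weighted tally.

```python
# Hypothetical sketch of a pattern/judgment ratio diagnostic.
# Task names, hours, and the binary "kind" labels are illustrative
# assumptions, not the real assessment instrument.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    weekly_hours: float
    kind: str  # "pattern" (rule-following) or "judgment" (contextual)

def pattern_judgment_ratio(tasks: list[Task]) -> tuple[float, float]:
    """Return (pattern_share, judgment_share) as fractions of weekly hours."""
    total = sum(t.weekly_hours for t in tasks)
    pattern = sum(t.weekly_hours for t in tasks if t.kind == "pattern")
    return pattern / total, (total - pattern) / total

# Example: a journeyman electrician's week, roughly matching the
# 75/25 split described in Step 1.
week = [
    Task("code-compliant installations", 20, "pattern"),
    Task("standardized testing and documentation", 10, "pattern"),
    Task("troubleshooting novel faults", 6, "judgment"),
    Task("site-specific design decisions", 4, "judgment"),
]

p, j = pattern_judgment_ratio(week)
print(f"pattern {p:.0%}, judgment {j:.0%}")  # pattern 75%, judgment 25%
```

The point of even a crude ratio like this is that it gives Step 2 a baseline to map against: as the role transforms, you expect the pattern share to shrink and the judgment share to grow, and you can measure whether scaffolding actually moves that number.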
This isn't hypothetical. I'm in active conversations with BICSI about exactly this kind of pipeline development through Divergence Academy. Not because I've figured everything out, but because I have operational infrastructure (Divergence Academy for training, Euler Center for assessment, 9brains for deployment) that can iterate toward solutions faster than waiting for someone else to build it.
The Amarillo project is scheduled to deliver its first gigawatt of power by the end of 2026—about 14 months from now. That's not enough time for traditional workforce development infrastructure to scale. It is enough time for my iterative, operationally focused approach with the Scaffold Framework to make a dent.
And here's why this matters beyond Amarillo to me: if I can make this work for data center infrastructure (a relatively bounded, technical domain with clear skill requirements), I build proof-of-concept for more ambiguous transformations. The compliance analyst transitioning to risk strategist, the graphic designer evolving toward creative director, the sales rep becoming relationship architect—these transformations are harder to specify than "electrician to data center infrastructure specialist," but the infrastructure challenges are similar.
I build it for the concrete case, then adapt for broader application. That's my practitioner methodology.
Conclusion: Infrastructure Determines Distribution
The AI transformation is happening. That's not a prediction in my view; it's an observation. LLMs are already changing how people write, code, analyze data, create content. Computer vision is transforming manufacturing, logistics, medical diagnosis. Automation is eliminating routine knowledge work across every industry I serve.
Jevons's Paradox suggests to me that this transformation could create more opportunity than it destroys—not by magic, but through the mechanism of expanded demand for work that AI makes more valuable. But that optimistic scenario requires infrastructure: pathways for people to move from automated pattern work to expanding judgment work, support systems that make transitions financially viable, organizational structures that recognize and reward transformed capabilities.
The question isn't whether that infrastructure will exist. Some version will emerge eventually—market pressures and social necessity will force it. The question is whether we build it thoughtfully now, while we have time to get it right, or whether we build it reactively later, after chaotic displacement has already done damage.
My Helm Program is one attempt to build that infrastructure now. It's not the only approach needed, and I'm not claiming to solve everything. But it represents my operational response as a practitioner to a real problem: students at my trade school needed pathways that didn't exist, so I built them. Other workers face similar challenges, so I'm making my framework available.
I've built infrastructure that connects production (my Divergence Academy), measurement (my Euler Center), and deployment (my 9brains practice) because all three are necessary and none worked in isolation. I've designed it to self-critique and improve because that's what operational infrastructure requires—it has to get better through use, not just exist as fixed theory.
Whether my Scaffold Framework proves valuable at scale remains to be determined. That's for other practitioners to test in their contexts, for other organizations to adapt to their needs, for labor economists to evaluate against outcomes data, for critics to find flaws that lead to improvements.
What I'm confident about is this: the civic infrastructure layer of AI transformation—the unsexy, operational work of building pathways that help normal workers navigate change—is where the difference between shared prosperity and concentrated extraction gets determined. We can't just build better AI tools and hope workforce adaptation happens automatically. We have to build the adaptation infrastructure intentionally.
I'm building that infrastructure through The Helm Program. Whether it succeeds will determine far more than my trade school's placement rates—it will help determine whether the AI age produces flourishing for millions or dislocation for millions.
And that's worth paying attention to.
Author's note: I am Sravan Ankaraju, owner of Divergence Academy, Euler Center, and 9brains. This analysis emerges from my decade of operational experience running a vocational trade school and my last 24 months of struggle to adapt to workforce transformation driven by AI. The Scaffold Framework is my evolving method for building the infrastructure I needed but couldn't find. I'm presenting it not as finished theory but as practitioner methodology that others might find useful. I have direct financial and operational interest in these programs' success, and I'm attempting to maintain intellectual honesty about both their potential and their limitations while actively building and testing them in the field.