There is a moment, familiar to anyone who has sat through a mandatory corporate training module, when the screen goes dark between slides and the progress bar at the bottom reads 47% complete. You have absorbed nothing. You know this. The platform does not. It is tracking your click, not your comprehension—registering presence, not learning. The module ends. A certificate generates. Somewhere in a dashboard, a completion rate ticks upward, and the organization files this as evidence that something has occurred.
Something has. Just not what anyone intended.
The global Learning and Development industry is worth somewhere between $350 and $400 billion by current projections, a number so large it has become its own kind of argument. Surely, the thinking goes, an investment of this magnitude must be producing something. And yet 92% of learning programs cannot connect their costs to measurable business results. Seventy-four percent of organizations report they are still losing ground on critical skill gaps. CEOs identify talent shortages as their primary barrier to growth—the same CEOs who just signed off on another platform expansion. The industry is spending generously and accomplishing, in aggregate, remarkably little. The question worth sitting with is not how to spend more. It is why so much spending has produced so little change.
The Measurement Problem Is a Moral Problem
The answer begins with what we choose to measure, and what we choose to measure reveals what we actually value.
For decades, L&D departments have reported in the currency of activity: courses completed, hours consumed, enrollment rates, quiz scores. These metrics are easy to capture, simple to present in a quarterly review, and almost entirely disconnected from whether anyone learned anything useful. They measure what happened inside the learning management system, not what happened inside the learner. They track behavior up to the moment of completion and then look away, incurious about what follows.
This is not an administrative failure. It is a philosophical one. When you measure activity instead of outcomes, you are implicitly arguing that activity is what matters—that the purpose of training is the training itself, not any downstream change in how people work, decide, or lead. You are building a system optimized for its own continuation.
The transition the industry now gestures toward—from volume-based L&D to what researchers call “value-based enablement”—sounds like a technical shift. It is actually a reckoning. It requires admitting that the last several decades of “more” have produced less than was claimed, that the certificates were not evidence of competence, and that the real cost of ineffective programs is not just the budget line but the opportunity cost: the skill gaps that widened while organizations measured completion rates instead of performance change.
The Pharmacology of Learning
The research literature offers a model borrowed from medicine, and it is worth pausing on the metaphor. The concept of dosage—the idea that efficacy is a function of quantity, that too little produces no effect and too much produces harm, and that somewhere in between lies the point of maximum benefit—turns out to describe educational technology with uncomfortable precision.
The data from the OECD’s Programme for International Student Assessment, collected across multiple waves beginning in 2012, reveals what researchers call an inverted U-curve. Students who use technology moderately outperform both those who never use it and those who use it constantly. The curve peaks somewhere around 30 minutes of targeted, purposeful daily engagement and then declines. Double the dose—60 minutes instead of 30—and learning gains flatten. Push further into several hours of unfocused screen time and performance degrades below where it started.
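The shape is easy to render as a toy model. A minimal sketch, assuming a simple quadratic dose-response: the 30-minute peak echoes the PISA finding, but the curve shape and every constant here are invented for illustration, not fitted to the OECD data.

```python
# Toy inverted-U dose-response for learning gain. The 30-minute peak echoes
# the PISA finding; the curve shape and constants are invented for
# illustration, not fitted to the OECD data.

def learning_gain(minutes_per_day: float, peak: float = 30.0) -> float:
    """Gain rises to a maximum at `peak` minutes, returns to zero at twice
    the peak, and goes negative beyond that (performance below baseline)."""
    return minutes_per_day * (2 * peak - minutes_per_day) / peak**2

for dose in (0, 15, 30, 45, 60, 90):
    print(f"{dose:>2} min/day -> relative gain {learning_gain(dose):+.2f}")
```

The exact function does not matter. What matters is the qualitative behavior it encodes: a rising segment, a peak, and a decline that eventually crosses below zero.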
This is not intuitive. We do not generally believe that more instruction produces worse outcomes. And yet Cognitive Load Theory explains the mechanism clearly enough: human working memory has a fixed capacity. Every stimulus that is not essential to the learning task—a notification, an unnecessary interface element, an irrelevant module—consumes processing capacity that would otherwise be used to build knowledge. Beyond a certain threshold, the overhead of navigating the digital environment exceeds the marginal benefit of the content it contains. You are not learning from the platform. You are learning to survive it.
The corporate world has not absorbed this finding. It has done the opposite. The average employee can dedicate roughly 24 minutes per week to formal learning—about 1% of the work week—and the industry response has been to build larger catalogs, more comprehensive curricula, more modules. We have offered a 30-hour library to someone with 24 minutes. We have optimized for supply when the binding constraint was always attention.
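The arithmetic of that mismatch is worth making explicit. A back-of-envelope sketch, assuming a 40-hour work week (the basis of the roughly 1% figure above):

```python
# Back-of-envelope arithmetic for the attention constraint, assuming a
# 40-hour work week as the basis of the ~1% figure.
catalog_minutes = 30 * 60   # a 30-hour course library
weekly_budget = 24          # minutes available for formal learning per week
work_week = 40 * 60         # minutes in a 40-hour week

print(f"learning share of the week: {weekly_budget / work_week:.1%}")
print(f"weeks to clear the catalog: {catalog_minutes / weekly_budget:.0f}")
```

Seventy-five weeks, roughly a year and a half of perfect discipline, to finish a single catalog.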
The Shiny Object and Its Victims
There is a particular character I recognize in the accounts of how organizations adopt educational technology: the executive who has heard about a tool at a conference, who has absorbed the vendor pitch without the implementation research, who returns to the office with the conviction that this—this platform, this VR module, this AI-powered learning system—will solve what previous tools did not.
This pattern has a name in the literature: shiny object syndrome. It produces fragmented tools, frustrated teams, and wasted budgets. It is the corporate equivalent of a school district buying a tablet for every third-grader without asking what the teachers will do with them or whether the network can handle the traffic. The hardware arrives. The outcomes do not.
What makes this particularly interesting is the confidence with which it happens. Research shows that 94% of C-suite executives describe themselves as having intermediate to expert knowledge of AI—and yet executive confidence in their own AI strategies fell from 69% to 58% in a single year. These numbers do not contradict each other. They describe a specific kind of overconfidence: people who know enough to approve initiatives but not enough to evaluate them, who can navigate the vocabulary of innovation without accessing its substance. When leaders lack deep conceptual literacy but possess high institutional authority, they become the primary vectors for expensive, flashy interventions that lack viable business cases.
The victims of this pattern are not primarily the executives. They are the employees subjected to training that does not serve them, built by people who did not know what they needed, measured by metrics that could not detect the failure.
When Technology Earns Its Place
I want to be careful not to argue the wrong thing. The case against more is not a case against technology. It is a case against technology deployed without purpose, measured without honesty, and justified by activity rather than outcome.
The research on Virtual Reality training is instructive precisely because it demonstrates what genuine return on investment looks like. Boeing reduced training time for specialized manufacturing tasks by 75% using VR. Delta Air Lines increased technician proficiency checks from 3 to 150 per day—not a marginal improvement, but a structural transformation. A PwC study found that VR-trained employees were 275% more confident in applying what they had learned, and four times more focused during the training itself than their peers in traditional e-learning.
These results do not come from the technology. They come from the match between the technology and the task. VR is effective for manufacturing and aviation because those fields require muscle memory and embodied procedure—skills that a slide deck cannot transmit, that a quiz cannot test, that only repetition in a simulated high-stakes environment can produce. The tool earned its place by doing something that other tools genuinely could not.
This is the distinction that gets lost in the shiny object cycle: not whether a technology is sophisticated or impressive, but whether it is the right instrument for the specific learning gap you are trying to close. AI-driven tutoring that meets a learner inside their existing workflow—within Salesforce, within Microsoft Teams, without the friction of a separate portal and a login screen—can reduce training time by 40% while improving relevance. Not because AI is inherently superior, but because embedded learning in the flow of work eliminates the distance between instruction and application. It solves a real problem: the 24-minute week.
The Equity Problem Inside the Efficiency Argument
There is one dimension of this story that the efficiency literature tends to minimize, and I think it deserves naming directly.
The “second digital divide” is not about access to devices—that gap, while not closed, is narrowing in most developed economies. The new divide is about the quality of what those devices are used for. Research consistently shows that students and employees from socioeconomically advantaged backgrounds use technology for creative, autonomous, high-agency work: building, designing, researching, collaborating. Employees and students from disadvantaged backgrounds more often encounter technology as a delivery mechanism for surveillance and drill—automated worksheets, compliance modules, repetitive practice tasks.
The same dosage that produces gains at the right point on the curve produces harm when the content is low-quality and the learner has no power to redirect it. An executive’s child uses an iPad to make a film. A warehouse worker clicks through a mandatory safety compliance module and answers the same five questions about forklift procedures that they answered last year. Both are technically receiving “digital learning.” The curve looks entirely different for each of them.
L&D leaders who have absorbed the efficiency argument without the equity argument risk building systems that serve the people who already have the most and automate mediocrity for everyone else. AI personalization that is trained on dominant cultural and linguistic datasets can systematically disadvantage employees whose learning styles, dialects, or professional histories diverge from the encoded norm. More precise measurement of outcomes is only equitable if it measures outcomes for everyone, not just the employees whose performance is already easiest to improve.
What Honest Measurement Requires
The shift to outcome-based metrics is real, and some of it is genuinely promising. Time to competence—how quickly a new hire reaches baseline performance—is a meaningful number. Behavioral change observed at 30-, 60-, and 90-day intervals is actual evidence. Linking training programs to changes in customer satisfaction scores or sales win rates requires intellectual honesty about causation, but it is at least asking the right question.
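To make time to competence concrete, here is a minimal sketch of how the comparison might be computed. Every number is hypothetical, "program redesign" is an invented intervention, and the result counts as evidence only if the cohorts are otherwise comparable.

```python
# A minimal sketch of a time-to-competence comparison. All figures are
# hypothetical, and "competence" stands in for whatever observable
# performance baseline the role actually defines.
from statistics import median

days_to_competence = {
    "before program redesign": [112, 98, 130, 104, 121],
    "after program redesign":  [76, 88, 69, 95, 81],
}

for cohort, days in days_to_competence.items():
    print(f"{cohort}: median {median(days)} days (n={len(days)})")
```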
What these metrics share is that they require something the volume-based metrics did not: a willingness to be wrong. When you measure completion rates, the system almost always succeeds. When you measure performance change, you discover that most programs produced none. This is uncomfortable information. It implicates everyone who approved the budget, designed the curriculum, and reported favorably on the completion data. It asks organizations to sit with the possibility that the investment was not returned.
The mathematics of replacement make the stakes concrete. Replacing an employee typically costs between 150% and 200% of their annual salary. For someone earning $50,000, that is $75,000 to $100,000 in recruitment, onboarding, and productivity loss—before accounting for institutional knowledge that walked out the door with them. L&D programs that measurably improve retention are worth serious money. L&D programs that improve completion rates are worth a line in a slide deck.
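That arithmetic is simple enough to check. A back-of-envelope sketch, with the ten-departure figure invented purely for scale:

```python
# Replacement-cost arithmetic from the figures above: 150%-200% of salary.
salary = 50_000
low, high = 1.5 * salary, 2.0 * salary
print(f"replacing one ${salary:,} employee costs ${low:,.0f}-${high:,.0f}")

# Hypothetical: if a program demonstrably prevents ten departures a year,
# its retention value sits in this range, before counting the institutional
# knowledge that stayed in the building.
prevented = 10
print(f"ten avoided departures: ${prevented * low:,.0f}-${prevented * high:,.0f}")
```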
The Simpler Argument
Here is what the research, stripped of its academic vocabulary, is actually saying: we built systems designed to generate activity rather than change, measured them with metrics that could only detect activity, and then wondered why the activity accumulated without the change following.
The path forward is not revolutionary. It is disciplined. It requires deciding what behavior needs to change before designing any training. It requires measuring that behavior before, during, and after—and being willing to publish the results regardless of what they show. It requires matching tools to tasks rather than tools to enthusiasm. It requires recognizing that 30 minutes of precisely targeted, embedded, well-designed learning produces more than 30 hours of curriculum that nobody had time to finish.
Less is not a retreat. It is the discovery that abundance was always the wrong frame. Learning is not a catalog. It is not a platform. It is not a certification. It is a change in what someone can do that they could not do before.
Everything else is a completion rate.
Tags: corporate learning and development ROI, cognitive load theory workplace training, EdTech dosage curve PISA, AI-driven employee enablement, outcome-based L&D metrics