Nine Persistent Myths of Learning and Development

1. “We need to design for learning styles (visual, auditory, kinesthetic).”

The ‘given’:
People learn best when you match instruction to their preferred style (VARK, etc.), so we should profile learners and customize accordingly.

What the evidence actually says:

  • People do have preferences, but matching teaching to those preferences has repeatedly failed to show meaningful benefits in controlled studies.
  • Recent reviews still find that, where any positive effects appear, they’re tiny, inconsistent, or due to weak study design.

What actually helps:
Design for how memory and attention work, not for imagined “types”:

  • Use dual coding (words + visuals), retrieval practice, spaced repetition, worked examples.
  • Give learners multiple ways to process information rather than boxing them in (“they’re a visual learner, so we won’t use text”).

2. “Dale’s Cone / the Learning Pyramid proves people remember 90% of what they do.”

The ‘given’:
Slides with a pyramid or cone claiming we remember:
  • 10% of what we read
  • 20% of what we hear
  • …90% of what we do.

What the evidence actually says:

  • Edgar Dale never published any retention percentages; his “Cone of Experience” was about levels of abstraction, not memory.
  • The famous percentages are fabricated and have been widely debunked.

What actually helps:

  • “Doing” can help, but only when the task is well designed: clear goals, feedback, manageable cognitive load.
  • Reading or listening can be highly effective when coupled with retrieval and elaboration. Mode is less important than what the learner does with it.

3. “70-20-10 is how learning actually breaks down – it’s a research-based ratio.”

The ‘given’:
70% on-the-job, 20% social, 10% formal. The numbers are treated as empirical truth.

What the evidence actually says:

  • The original 70-20-10 idea came from self-reported estimates in a small sample of executives, not from large-scale, causal research.
  • Later critiques point out that the precision of “70-20-10” is misleading; there’s no robust evidence that these are real, generalizable ratios.

What actually helps:

  • Treat 70-20-10 as a reminder that formal training is only a slice of capability building, not as a KPI.
  • Design ecosystems: formal learning, coaching, feedback, on-the-job practice, resources in the flow of work. Forget the exact numbers.

4. “Digital natives / Gen Z learn fundamentally differently because their brains are wired by tech.”

The ‘given’:
Young employees are “digital natives,” older ones “digital immigrants,” so we must design completely different learning for each generation.

What the evidence actually says:

  • The “digital natives vs. digital immigrants” framing is a catchy 2001 concept, not a rigorously evidenced one.
  • Studies examining actual technology use and learning find big within-generation differences and much smaller, inconsistent differences between generations.
  • Systematic reviews of generational differences at work show that claims about fundamentally different values and learning preferences are often exaggerated and not strongly supported.

What actually helps:

  • Design for experience, role, and context, not birth year.
  • Segment by prior knowledge, digital fluency, and job demands. You’ll get far more leverage than you would from building “Gen Z courses” vs “Boomer courses.”

5. “Humans now have an 8-second attention span (less than a goldfish), so everything must be microlearning.”

The ‘given’:
We “know” modern learners can only pay attention for 8 seconds, so long-form learning is dead.

What the evidence actually says:

  • The goldfish comparison traces back to a misinterpreted Microsoft marketing report, not rigorous cognitive research.
  • Attention isn’t a single fixed span; we have different kinds of attention (sustained, selective, etc.), and humans can stay engaged for hours when something is meaningful and well structured.

What actually helps:

  • Assume attention is earned, not capped.
  • Use clear goals, narrative, interactivity, varied pacing, and reduce distractions.
  • Microlearning is powerful for spaced practice and performance support, not because the brain “can’t handle more.”

6. “Multitasking is the new normal – learners can happily learn while half-doing something else.”

The ‘given’:
Everyone multitasks now; it’s fine if they’re checking email or Teams while in virtual training.

What the evidence actually says:

  • What we call multitasking is usually rapid task switching, and switch costs can eat up a large chunk of productive time and reduce accuracy.
  • Frequent switching reduces comprehension and memory, especially for demanding learning tasks.

What actually helps:

  • Design and set norms for single-tasking during critical learning moments: shorter focused sprints, clear expectations (“cameras on, notifications off”), and active tasks that make zoning out obvious.
  • Use recordings and resources for genuine low-stakes “second screen” consumption, not for first exposure to complex concepts.

7. “If learners like the training and rate it highly, it must be effective.”

The ‘given’:
Great smile sheets and high NPS = great learning.

What the evidence actually says:

  • Meta-analyses show weak or no reliable correlation between student evaluations of teaching and actual learning outcomes. In some cases, easier courses with lenient grading earn better ratings but produce worse downstream performance.

What actually helps:

  • Keep reaction data, but don’t confuse enjoyment with impact.
  • Add measures of:
    • Retrieval and decision-making (can they use the knowledge?).
    • On-the-job behavior change.
    • Business and risk metrics where feasible.

8. “If they completed the module and passed the quiz, they’ve learned it.”

The ‘given’:
Completion + 80% on a multiple-choice quiz = job done, learning achieved.

What the evidence actually says:

  • Basic knowledge checks often measure short-term recall, guessing, or pattern spotting, not durable capability.
  • LTEM and similar models explicitly classify attendance/completion as weak indicators with low confidence about real learning or transfer.

What actually helps:
Shift from “Did they finish?” to:

  • Can they make the right decisions in realistic scenarios?
  • Can they perform key tasks to standard?
  • Do metrics (errors, rework, safety incidents, sales behaviors, etc.) move in the right direction after the intervention?

9. “Most performance problems are training problems.”

The ‘given’:
Something’s going wrong → “We need a training.”

What the evidence actually says:

  • Classic performance-consulting work consistently shows that many gaps are caused by the environment (unclear expectations, poor tools, bad processes, misaligned incentives, workload), not by a lack of knowledge or skill.
  • Training aimed at non-training causes wastes money at best; at worst, it frustrates people and erodes trust in L&D.

(You won’t find a single neat meta-analysis here; this is supported across decades of performance-improvement literature and modern evaluation frameworks like LTEM and LEADS that emphasize systems, not just courses.)

What actually helps:

  • Treat “training” as one lever in a performance system, not the default answer.
  • Diagnose first: Can they do it now? Do they have reasons to? Do systems, tools, and managers support the behavior?

How you can use this in practice

If you want to operationalize this, here’s a simple move:

For your next project, explicitly challenge at least two of these “givens” in your kickoff or design review. For example:

  • Replace learning-styles talk with a short explanation of evidence-based design.
  • Swap the learning pyramid slide for a quick explanation of practice and feedback.
  • When someone cites 70-20-10 as a target, reframe it as “Let’s map all the places learning can happen for this role” instead of optimizing for a ratio.