Human-Plus Outputs
Discusses how emergent capabilities in AI systems lead to outputs that surpass human expert performance.
Emergent Abilities of Large Language Models
Jason Wei, Yi Tay, Rishi Bommasani, et al. • 2022
We investigate the emergence of abilities in language models, finding that certain complex reasoning capabilities appear abruptly at specific model scales rather than improving gradually with size. This suggests fundamental phase transitions in AI capabilities that could lead to rapid performance improvements across many domains.
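To make the "appear abruptly" claim concrete, here is a minimal, self-contained Python sketch of an accuracy curve that sits at chance across several orders of magnitude of scale and then rises sharply past a threshold. It is illustrative only: the chance baseline, threshold scale, and steepness are assumed values, not numbers from the paper.

```python
import math

# Toy emergence curve (assumed values, not data from Wei et al. 2022):
# accuracy stays at the random-guess baseline for small models, then
# rises steeply once log10(parameter count) crosses a threshold.
RANDOM_BASELINE = 0.25   # e.g. chance on 4-way multiple choice
THRESHOLD_LOG10 = 10.5   # hypothetical scale where the ability "switches on"
STEEPNESS = 6.0          # larger -> more abrupt transition

def accuracy(n_params: float) -> float:
    """Logistic in log-scale: flat at chance, then a rapid rise."""
    x = math.log10(n_params)
    gate = 1.0 / (1.0 + math.exp(-STEEPNESS * (x - THRESHOLD_LOG10)))
    return RANDOM_BASELINE + (1.0 - RANDOM_BASELINE) * gate

for exponent in range(8, 13):   # 10^8 .. 10^12 parameters
    print(f"10^{exponent} params -> accuracy {accuracy(10.0 ** exponent):.2f}")
```

Running it prints chance-level accuracy (0.25) up through roughly 10^10 parameters and near-ceiling accuracy by 10^12, the qualitative "phase transition" shape the abstract describes.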
Matthias Lehmann, Philipp B. Cornelius & Fabian J. Sting (2025)
Theme | What Happens? |
---|---|
Average effect | Across the whole sample, LLM access does not change total learning gains (illustrated in the sketch below). |
Substitution – asking the bot to do the work | Students cover more topics but understand each one less. |
Complementarity – using the bot for explanations/tutoring | Topic volume unchanged; depth of understanding rises. |
Equity impact | LLMs widen the gap: students with lower prior knowledge learn less when allowed to rely on LLMs. |
Copy‑paste affordance | When copy‑paste is enabled, students request "full solutions" far more often, fueling substitution and longer‑term decline. |
Perceived vs. actual learning | Access inflates students' sense of how much they've learned beyond measured gains. |
Bottom line: LLMs are neither panacea nor poison; they magnify whatever study habits students bring to them. Design learning environments that channel AI toward explanation and reflection, not quick fixes, to unlock their real educational value.
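The null average effect in the first row deserves one extra step of reasoning: a near-zero mean is exactly what two offsetting mechanisms of similar size would produce. A minimal Python sketch, with assumed effect sizes and group shares rather than the study's actual data:

```python
import random

random.seed(0)

# Assumed, illustrative numbers (not the study's data): half the students
# use the LLM as a substitute (depth of understanding drops), half as a
# complement (depth rises). Individually large effects cancel in the mean.
N = 1_000
gains = []
for _ in range(N):
    substituter = random.random() < 0.5       # relies on "full solutions"
    effect = -4.0 if substituter else +4.0    # assumed learning-gain shift
    gains.append((substituter, effect + random.gauss(0.0, 2.0)))

sub = [g for s, g in gains if s]
comp = [g for s, g in gains if not s]
print(f"overall mean effect: {sum(g for _, g in gains) / N:+.2f}")    # ~0
print(f"substituters  (n={len(sub)}):  {sum(sub) / len(sub):+.2f}")   # negative
print(f"complementers (n={len(comp)}): {sum(comp) / len(comp):+.2f}") # positive
```

A headline "no average effect" is therefore compatible with substantial harm to one subgroup and substantial benefit to another, which is why the substitution/complementarity split carries the real signal.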
Explains how effective prompting becomes crucial for unlocking emergent reasoning abilities in AI systems.