The Cost of AI: Signs of Brain Fry & Cognitive Debt

Mindful Leader · Apr 14, 2026 · 8 minutes

By The Mindful Leader Team


Is AI unburdening knowledge work — or doing the opposite? In our Spring 2026 Research Roundup, we look at three new studies: a large workplace survey, an eight-month embedded ethnography, and a small but widely discussed neuroimaging preprint. Together, they suggest AI tools are reshaping the cognitive demands of knowledge work in ways worth paying attention to.

What happens to sustained attention when the workday becomes, in large part, the supervision of AI? Do faculties like memory and the sense of ownership dim when using AI? Are the effects temporary or chronic, and what can we do to mitigate or recover from them?

Monitoring AI, Not Using It, Drives the Cognitive Fatigue Workers Call "Brain Fry"

Published in: Harvard Business Review (reporting BCG Henderson Institute research)
Publication Date: March 5, 2026
Key Researchers and Institutions: Julie Bedard, Matthew Kropp, and Megan Hsu (Boston Consulting Group); Olivia T. Karaman and Jason Hawes (University of California, Riverside); Gabriella Rosen Kellerman, MD (Boston Consulting Group)
Study Size: Survey of 1,488 full-time US workers at large companies

The workplace productivity conversation in 2026 has begun to sound like two different conversations. One features AI vendors describing unlocked capacity and freed attention. The other features the workers themselves, who increasingly describe a state of mental static: what they now call, with a research label attached, "AI brain fry." The BCG team set out to answer which conversation the data actually supports.

The researchers surveyed workers across industries and roles, measuring mental effort, mental fatigue, information overload, error rates, and intent to quit against patterns of AI use — including the number of tools used simultaneously and, critically, the ratio of oversight to delegation.

Key findings:

  • Productivity peaks at three AI tools; at four or more, cognitive strain rises even as performance declines. The BCG data traces a measurable curve. One or two tools produce real gains; three is the peak; four or more reverses the pattern. The ceiling on AI multitasking appears to resemble the familiar ceiling on conventional multitasking.
  • Fourteen percent of AI-using workers reported brain fry. Participants described mental fog, headaches, slower decision-making, and what one called "a dozen browser tabs open in my head." The authors distinguish brain fry as acute cognitive strain, distinct from burnout's chronic, emotional dimension.
  • The mechanism is oversight, not use. High-oversight AI workflows — reviewing, correcting, and interpreting model outputs — were associated with 14% more mental effort, 12% more fatigue, and 19% more information overload than task-replacement workflows. 
  • Brain fry correlated with more errors, decision fatigue, and higher intent to quit. Workers holding many AI streams in attention simultaneously were also the workers most likely to be looking for the exit.

The problem is not necessarily AI. It's a workflow that requires holding multiple streams of unfinished, half-trustworthy output in attention at once. The study's most usable distinction is that brain fry is acute rather than chronic, pointing toward structural interventions like tool limits, attention protection, and genuine transitions between cognitive streams.

AI Expands the Sphere of Work Rather Than Shrinking It

Published in: Harvard Business Review
Publication Date: February 9, 2026
Key Researchers and Universities: Aruna Ranganathan (Associate Professor) and Xingqi Maggie Ye (PhD candidate), Haas School of Business, University of California, Berkeley
Study Size: Eight-month embedded ethnography at a ~200-person US technology company; close observation of 40 workers across engineering, product, design, research, and operations

If the BCG survey captures scale, the Berkeley study captures texture. From April to December 2025, Ranganathan and Ye spent two days per week on-site at a mid-sized tech firm, following how AI tools were actually integrated into daily work. What they found complicates the standard productivity narrative in a specific, uncomfortable direction.

The researchers began with the hypothesis that AI would free up time and mental space. Their interviews pointed elsewhere. Rather than removing work, the tools appeared to change the shape of work — and usually by adding to it.

Key findings:

  • Task expansion. When AI lowered the activation cost of a task, workers took on work that had previously belonged to other roles. The designer began writing copy. The engineer began drafting documentation. The product manager began generating mock frontends. Each expansion felt small. The cumulative load was substantial. The company did not mandate the expansion. Workers, as Ranganathan and Ye put it, "did more because AI made 'doing more' feel possible, accessible, and in many cases intrinsically rewarding."
  • More multitasking. Workers managed what the researchers describe as a continual "new rhythm" — writing code while AI generated an alternative version, running multiple agents in parallel, reviving long-deferred tasks because AI could "handle them" in the background. The feeling was momentum. The underlying state was constant switching and verification.
  • Self-reinforcing cycle of workload creep. AI accelerated certain tasks, which raised expectations for speed; higher speed made workers more reliant on AI; greater reliance widened the scope of what they attempted. As one engineer in the study put it: "You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more."

Ranganathan and Ye recommend that organizations build what they call an AI practice: intentional norms and routines that structure how AI is used, when it is appropriate to stop, and how work should and should not expand in response to new capability. Among their specific suggestions is a "decision pause" — requiring, before a major decision is finalized, one counterargument and one explicit link to organizational goals. The aim, in their words, is to widen the attention field just enough to protect against drift.

Preliminary Neuroimaging Evidence Suggests Outsourcing Writing to ChatGPT Leaves a Cognitive Trace

Published in: arXiv (preprint — not peer-reviewed at time of writing)
Publication Date: v1 June 10, 2025; v2 December 31, 2025
Key Researchers and Universities: Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes (MIT Media Lab)
Study Size: 54 participants in Sessions 1–3; 18 participants in the pivotal Session 4

The term "cognitive debt" was popularized in this paper. The MIT Media Lab team chose to release it as a preprint rather than wait the projected eight or more months for journal publication because the findings were already relevant to decisions being made about AI in education.

Participants were 54 Boston-area adults assigned to three groups: an LLM group using ChatGPT, a Search Engine group using Google, and a Brain-only group using no tools. Each wrote a 20-minute SAT-style essay in each of three sessions while a 32-channel EEG headset recorded brain activity. In a fourth session, 18 participants switched conditions: LLM users writing without tools, and Brain-only users writing with ChatGPT.

Key findings:

  • Neural connectivity scaled inversely with external support. The Brain-only group showed the strongest and most distributed networks across alpha, theta, and delta bands — frequencies associated with creative ideation, memory load, and semantic processing. Search Engine users showed intermediate engagement. LLM users showed the weakest coupling.
  • Memory of one's own writing thinned noticeably under LLM use. In Session 1, 83% of LLM-group participants (15 of 18) reported difficulty quoting their own essays, and none produced correct quotes. The impairment attenuated over subsequent sessions: by Session 3, 6 of 18 still failed to quote correctly. Self-reported ownership of the work was lowest in the LLM group and highest in the Brain-only group. Two English teachers, assessing the essays blind, described the LLM-group writing as "soulless."
  • Effects appeared to persist after tool removal. When LLM users wrote without AI in Session 4, they showed reduced alpha and beta connectivity, which the authors interpreted as persistent under-engagement. Whether this is a durable effect or a short-term adaptation, the study cannot say.
  • Kosmyna's framing: "cognitive debt." The authors define the term as "a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking" — one that defers mental effort in the short term but accumulates long-term costs in critical inquiry, creativity, and resistance to manipulation.

The findings are striking. They have also been amplified well beyond what the study can support, and that gap is worth naming for readers who may be encountering the research through secondhand coverage. The preprint status is not a footnote — it is a structural limitation. Only 18 participants completed the crossover session that drives the most-cited finding. The paper is a signal worth taking seriously. It is not yet a conclusion.


This article is part of our Research & Trends Series where we share the latest research and studies shaping our field.