In the fall of 2025, a Google engineer posted an internal message that circulated widely outside the company. "I've shipped more code in the last quarter than at any point in my career," it read. "And I understand less of it than at any point in my career." The message resonated so deeply that it was quoted in three subsequent press articles about AI's toll on software engineers.
That engineer's experience captures something specific about AI fatigue at the world's largest tech companies: the scale and intensity of the problem are different. When your employer builds the AI tools it then mandates you use, when your performance review measures the same output metrics it always has, and when thousands of your colleagues are feeling the same exhaustion simultaneously, the experience compounds in ways that smaller companies simply can't replicate.
This is what The Clearing's 2026 survey found when we asked engineers at major tech companies about their experience with AI fatigue. The numbers are striking. But behind every data point is a human experience that our metrics still can't fully capture.
What the Data Shows
The Clearing's 2026 AI Fatigue Survey collected responses from 1,240 engineers across major tech companies. Here's what they told us.
These numbers represent something real: a systemic problem at the largest scale of the industry. When 71% of engineers at mandatory-AI-policy companies report moderate-to-severe fatigue, the problem isn't individual resilience. It's structural.
Why Big Tech Is Different
AI fatigue exists everywhere. But engineers at Google, Meta, Amazon, Microsoft, and Apple experience a distinct version of it, shaped by five structural forces that exist only at the largest scale.
Mandatory Adoption at Scale
When a 50-person startup mandates AI tools, engineers can push back, negotiate, or quietly opt out. At a 50,000-person engineering organization, policy changes propagate through official channels, OKRs, and team-level mandates. Engineers at big tech companies are not choosing AI tools; they are being required to use them, which removes the psychological agency that makes voluntary adoption feel manageable.
Outdated Performance Metrics
Meta's E5 rubric. Google's OC. Amazon's Leadership Principles. These performance frameworks were built for human output: PRs merged, features shipped, bugs fixed. They have not been recalibrated for AI-assisted work. An engineer shipping twice as much AI-assisted code while feeling cognitively depleted may receive the same rating as one producing less, because the system can't see exhaustion, only output.
The Build-vs-Use Paradox
Engineers at AI companies often build the very tools they're required to use. The insider knowledge that makes an AI researcher skeptical of their company's own coding assistant is the same knowledge that makes them more effective builders. But that skepticism, applied daily to mandatory tools, creates a specific cognitive tax: the constant low-grade frustration of watching a system you understand deeply produce outputs you would have done differently.
Post-Layoff Residual Load
The 2024-2025 wave of tech layoffs hit engineering teams hard. When your team shrinks from 12 to 7 and AI tools are positioned as the productivity solution, the math is brutal: same sprint velocity expectations, smaller team, more AI-generated code to review. Engineers describe this as "doing more with less," but the "more" now includes AI-generated work that still requires human judgment to validate.
Attribution and Credit Erosion
In companies where AI-assisted work is the norm, distinguishing individual contribution becomes nearly impossible. A senior engineer's value was once partly defined by what only they could do. Now that an AI system can produce their architectural decisions, their code reviews, and their design docs, the question of what belongs to the human becomes philosophically and practically murky. Performance reviews that can't see the human contribution may undervalue it.
Simultaneous Cultural Pressure
At a startup, you might be the only engineer feeling AI fatigue. At Google, you're one of tens of thousands experiencing the same exhaustion as your entire peer group at the same time. This simultaneity creates a cultural feedback loop: internal discussions about AI fatigue go viral, morale impacts spread through team chats, and the problem becomes an organizational identity rather than an individual experience.
What Google Engineers Describe
Google's 2025 internal survey, first reported by Business Insider and confirmed to The Clearing by three current employees, found that 64% of engineering staff reported feeling "frequently overwhelmed" by AI tool requirements. The company's mandatory AI coding assistant integration, which accelerated throughout 2024, has created a distinct set of pressures.
Engineers describe three particularly Google-specific dynamics:
Google's internal AI tool integration is deeper than most competitors': Gemini suggests code directly in IDEs, suggests design docs in Docs, drafts PR descriptions, and summarizes code reviews. For many engineers, this means AI touches nearly every step of the development process. The pervasiveness, not just the intensity, is what feels different. "I can't remember the last time I wrote a function without AI suggesting the next three," one senior engineer told us. "And I can't remember the last time I felt like I actually wrote anything."
Many Google engineers working on Gemini or Bard are simultaneously required to use AI coding tools in their own work. The cognitive dissonance is specific: you know exactly how these systems work, where they fail, what they hallucinate, and you're required to use them anyway. "Using an AI coding assistant while building the AI that powers it is a very particular kind of exhausting," one engineer wrote in an internal forum post that circulated externally.
Google's OC (Objectives and Criteria) system has not been updated for AI-assisted work. Engineers report that their performance ratings still heavily weight code output (PRs merged, features shipped), which AI tools can inflate without reducing the cognitive cost of producing quality work. An engineer shipping more code than ever but feeling like they understand less of it gets rated on the shipping, not the understanding.
What Meta Engineers Describe
Meta's engineering culture has always been output-focused: the company famously measures everything, and individual engineers are expected to move fast. The introduction of AI tools into this culture has created a distinctive fatigue pattern that Meta engineers describe as "speed with less meaning."
The company's Llama team and AI infrastructure work mean many Meta engineers have unusually deep visibility into how AI tools actually work, which makes their mandatory use particularly frustrating. Three themes emerge consistently from Meta engineer accounts:
Meta engineers report that AI tools have dramatically increased their code output; PRs merged per sprint have reportedly increased 40-60% for some teams. But this increased output has not come with increased satisfaction. "I shipped more features last quarter than my first two years combined," one engineer told us. "I also felt more like a reviewer than a builder." The faster pace, divorced from the deeper engagement that makes engineering satisfying, creates a specific exhaustion: the fatigue of being busy without being fulfilled.
AI-generated code requires more review, not less, a pattern that engineers at Meta, Google, and Amazon all report. A code review of AI-generated work involves checking not just the logic but the assumptions, the edge cases the AI didn't consider, the style inconsistencies, and the hallucinations. Meta engineers describe a new workflow: write prompt, review AI output, find errors, fix errors, review again. The "speed" gains of AI tools are partially consumed by the increased review burden.
Meta's E5 performance system is output-first: it measures what engineers ship, not how they feel about shipping it. Engineers report that the human cost of AI-assisted work is essentially invisible to the performance system. An engineer experiencing significant AI fatigue may have their best-ever output quarter and receive no recognition for the cognitive toll it took, because the system doesn't track cognitive toll.
What Amazon Engineers Describe
Amazon's engineering culture, built around ownership, frugality, and bias for action, creates a distinctive context for AI tool adoption. The company's heavy investment in AI across AWS, Alexa, and advertising has positioned AI tools as both a product priority and an internal productivity priority simultaneously.
Amazon engineers describe a pressure dynamic that compounds fatigue in specific ways:
Amazon's Leadership Principles heavily weight ownership: the idea that every engineer has deep personal accountability for their systems and decisions. AI tools, by suggesting decisions, generating code, and drafting designs, create a quiet conflict with this accountability model. "I still own the outcome," one Amazon engineer told us. "But the decisions that lead to the outcome are increasingly made by something else. That's a weird place to be." The ownership principle creates an expectation of deep engagement that AI tools can undermine, even as those same tools are mandated as the path to meeting other LP expectations (bias for action, delivery velocity).
Amazon engineers working on AWS AI services (Bedrock, SageMaker, CodeWhisperer) occupy a specific position: they build the tools that other Amazon engineers are required to use. This creates a build-vs-use paradox with an extra layer: AWS engineers know that their internal AI tool requirements help generate the usage data that improves AWS's competitive position against Azure and Google Cloud. The personal experience of fatigue is thus tangled up with helping the company compete.
Amazon's 2025 return-to-office mandate, requiring three days in-office, has compounded AI fatigue for many engineers. Engineers who were already struggling with AI tool exhaustion report that the additional cognitive load of commuting, open-office noise, and in-person collaboration, combined with AI tool requirements, has pushed some to the breaking point. Several Amazon engineers told The Clearing that RTO was the factor that turned chronic AI fatigue into acute burnout.
The Common Thread: Loss of Agency
Despite their different cultures, Google, Meta, and Amazon engineers describe a remarkably similar core experience: the feeling of losing agency over their own work while being held just as accountable for its outcomes. This is the common thread that The Clearing's survey found across every major tech company we studied.
The shift in the rightmost column is not inherently bad: AI tools do offer real productivity gains. But when the accountability structure stays the same while the decision agency shifts, engineers are placed in a structurally unfair position, responsible for outcomes they increasingly don't control.
What Doesn't Help (And What Companies Are Doing Anyway)
Across all three companies, engineers describe a gap between the interventions being offered and the structural problems driving their exhaustion. Common company responses that engineers report as insufficient:
AI Training Workshops
Companies respond to AI fatigue by offering more AI training, teaching engineers to use AI tools more effectively. Engineers experience this as being told to work harder at the thing that's exhausting them. "My company sent me to a workshop on getting more out of AI tools," one engineer told us. "I wanted to go to a workshop on having fewer AI tools."
Mandatory Wellness Days
One-off wellness initiatives (free meditation apps, mental health days, "rest weeks") are frequently described as tone-deaf when the structural cause of fatigue is unchanged. An engineer who returns from a wellness day to the same AI tool mandates and performance metrics experiences the day as a Band-Aid, not a cure.
Productivity Metrics Without Quality Metrics
Companies that celebrate AI-driven productivity gains without adjusting quality or depth expectations signal that quantity matters more than understanding. This accelerates the shift toward engineers who feel like sophisticated prompters rather than deep technical builders.
Mandatory "AI-Free" Days
Some teams have implemented AI-free days: one day per week where no AI tools are used. While well-intentioned, engineers report that these days often create more pressure than they relieve: the output lost on an AI-free day still has to be made up, and the context switching from AI-assisted to fully manual work is cognitively disruptive.
What Would Actually Help
When The Clearing asked engineers at major tech companies what would actually reduce their AI fatigue, the answers clustered around five structural changes, none of which requires abandoning AI tools, but all of which require companies to acknowledge that the current implementation has human costs.
Recalibrate performance reviews for AI-assisted work
Performance systems need to measure the quality of human judgment in an AI-assisted context, not just output volume. This means new criteria: how well an engineer catches AI errors, how deeply they understand AI-generated solutions, how effectively they direct AI tools toward the right problems.
Create structured opt-out pathways
Engineers should be able to do meaningful work without AI tools in some contexts: not everywhere, not always, but in enough of their work to maintain skill currency and identity. Companies that mandate AI use everywhere are burning out their senior engineers and deskilling their juniors simultaneously.
Separate AI productivity metrics from AI quality metrics
If AI tools increase code output by 40%, that metric should not be celebrated without also measuring what was lost: depth of understanding, quality of review, maintenance burden of AI-generated code, and cognitive cost to engineers. A 40% output increase at a 30% cognitive cost increase is not a productivity win.
Invest in AI-free skill development time
Companies serious about AI tool adoption should also invest seriously in the human skills AI cannot replace. This means protected time for architectural thinking, design work without AI assistance, code written from scratch, and the deep debugging sessions that build the intuition that makes engineers valuable.
Conduct organizational AI fatigue audits
Before mandating AI tool adoption at scale, companies should conduct internal research on how AI tools are actually affecting engineering wellbeing, skill development, and code quality, and share the results. An organization that measures AI's human cost is an organization that can manage it.
The question we should be asking is not "How do we get engineers to adopt more AI tools?"
The question is: "How do we get engineers to do their best work โ which may include selective, thoughtful AI use โ while maintaining their skill development, job satisfaction, and mental health?"
At The Clearing, we talk to thousands of engineers every month. The ones who are managing AI fatigue best are not the ones using the most AI tools. They are the ones who have the most agency over when, how, and whether they use them. That agency, at scale, is what the largest tech companies have not yet figured out how to give their engineers back.