February 24, 2025

What is GenAI doing to our brains?

A new study suggests some pitfalls (and ways to avoid them)
MarTech

If you’re reading this, you’ve probably used some form of AI at work in the past 48 hours. These tools have swiftly shifted from novel to ubiquitous—and they’ve improved dramatically since the early days of widespread ChatGPT “hallucinations” and uncanny-valley imagery.

While AI-driven workflows certainly save time and free up bandwidth, a clear-eyed assessment of any new tools is essential. 

Which brings us to a recent study, “The Impact of Generative AI on Critical Thinking,” conducted by Microsoft and Carnegie Mellon University, which surveys 319 knowledge workers about the ways AI is impacting their work.

You can browse the full 25-page report here, but if you have other things to do with your day, we’ve got three thought-provoking takeaways below.

1. Overconfidence in AI can erode independent critical thinking

While GenAI streamlines many cognitive tasks, it also introduces risks of over-reliance. 

The study found that “knowledge workers’ confidence in AI doing the task negatively correlates with their enaction of critical thinking.” In other words, the more workers trust AI, the less likely they are to scrutinize its outputs critically.

However, the reverse is also true—workers with higher confidence in their own skills are more likely to engage in critical thinking when using GenAI.

This brings us back to the importance of that well-worn phrase, “human-in-the-loop.” For most use cases, current AI tools are meant to work in tandem with human ingenuity and creativity. They’re not “set it and forget it” applications that do your job while you binge Netflix in your home office. 

2. A shift from “information gathering” to “information verification”

Instead of spending time searching for information, workers must now critically assess whether AI-generated outputs are accurate and relevant. 

This shift demands a new skill set—evaluating AI responses for bias, factual accuracy, and contextual appropriateness.

A cynical take here might suggest that AI tools can’t be trusted, or that any bandwidth saved during the “gathering” process is squandered by all that “verification.” But a more nuanced view would simply approach new use cases of AI with cautious optimism, rather than blind faith. 

For instance, imagine you’ve just onboarded a new intern in your data analytics department. They present themselves with confidence and are always receptive to your questions. Still, you’d most likely have a senior member of the department check their work, at least at first, to make sure it’s accurate and relevant.

3. AI can support skill development—if designed thoughtfully

Despite concerns about AI reducing critical thinking, the study highlights opportunities for AI to enhance it. GenAI tools that encourage user reflection, offer guided critiques, and explain their reasoning can help workers develop their analytical skills. 

"GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques,” the study notes.

And while certain aspects of GenAI reasoning are too opaque or complicated to properly “explain” (hence anxieties around the “black box” nature of the tech), an increased comprehension of what’s happening under the hood is vital.

It ensures that those working with AI tools feel empowered and engaged: partners in skill-building, rather than appendages to a technology they trust, but don’t understand. 

Scott Indrisek

Scott Indrisek is the Senior Editorial Lead at Stagwell Marketing Cloud
