Preserving critical thinking amid AI adoption

AI is now at the center of almost every conversation in education technology. It is reshaping how we create content, build assessments, and support learners. The opportunities are enormous. But one quiet risk keeps growing in the background: losing our habit of critical thinking.

I see this risk not as a theory but as something I have felt myself.

The moment I almost outsourced my judgment

A few months ago, I was working on a complex proposal for a client. Pressed for time, I asked an AI tool to draft an analysis of their competitive landscape. The output looked polished and convincing. It was tempting to accept it and move on.

Then I forced myself to pause. I began questioning the sources behind the statements and found a key market shift the model had missed entirely. If I had skipped that short pause, the proposal would have gone out with a blind spot that mattered to the client.

That moment reminded me that AI is fast and useful, but the responsibility for real thinking is still mine. It also showed me how easily convenience can chip away at judgment.

AI as a thinking partner

The most powerful way to use AI is to treat it as a partner that widens the field of ideas while leaving the final call to us. AI can collect data in seconds, sketch multiple paths forward, and expose us to perspectives we might never consider on our own.

In my own work at Magic EdTech, for example, our teams have used AI to quickly analyze thousands of pages of curriculum to flag accessibility issues. The model surfaces patterns and anomalies that would take a human team weeks to find. Yet the real insight comes when we bring educators and designers together to ask why those patterns matter and how they affect real classrooms. AI sets the table, but we still cook the meal.
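
To make the shape of that workflow concrete, here is a minimal sketch, not our production pipeline: it assumes the OpenAI Python SDK, and the model name, prompt, and page texts are all placeholders. The key design choice is that the model only nominates candidate issues; an educator still makes the call.

    # Hypothetical sketch: AI flags candidate accessibility issues,
    # a human reviews and decides. Assumes the OpenAI Python SDK (>=1.0)
    # and OPENAI_API_KEY in the environment; all names are placeholders.
    from openai import OpenAI

    client = OpenAI()

    REVIEW_PROMPT = (
        "You are reviewing curriculum text for accessibility problems "
        "(reading level, missing alt-text references, color-only cues). "
        "List each candidate issue on its own line, or reply NONE."
    )

    def flag_candidate_issues(page_text: str) -> list[str]:
        """Ask the model for *candidate* issues; a human still decides."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute your own
            messages=[
                {"role": "system", "content": REVIEW_PROMPT},
                {"role": "user", "content": page_text},
            ],
        )
        text = response.choices[0].message.content or ""
        return [
            line.strip()
            for line in text.splitlines()
            if line.strip() and line.strip() != "NONE"
        ]

    # The model surfaces patterns; the team asks why they matter.
    pages = ["Page 1 text ...", "Page 2 text ..."]  # placeholder curriculum pages
    for number, page in enumerate(pages, start=1):
        for issue in flag_candidate_issues(page):
            print(f"page {number}: {issue} -> route to educator review")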

There is a subtle but critical difference between using AI to replace thinking and using it to stretch thinking. Replacement narrows our skills over time. Stretching builds new mental flexibility. The partner model forces us to ask better questions, weigh trade-offs, and make calls that only human judgment can resolve.

Habits to keep your edge

Protecting critical thinking is not about avoiding AI. It is about building habits that keep our minds active when AI is everywhere.

Here are three I find valuable:

1. Name the fragile assumption
Each time you receive AI output, ask: What is one assumption here that could be wrong? Spend a few minutes digging into that. It forces you to reenter the problem space instead of just editing machine text.

2. Run the reverse test
Before you adopt an AI-generated idea, imagine the opposite. If the model suggests that adaptive learning is the key to engagement, ask: What if it is not? Exploring the counter-argument often reveals gaps and deeper insights.

3. Slow the first draft
It is tempting to let AI draft emails, reports, or code and just sign off. Instead, start with a rough human outline. Even if it is just bullet points, you anchor the work in your own reasoning and use the model to enrich your thinking, not originate it.

These small practices keep the human at the center of the process and turn AI into a gym for the mind rather than a crutch.

Why this matters for education

For those of us in education technology, the stakes are unusually high. The tools we build help shape how students learn and how teachers teach. If we let critical thinking atrophy inside our companies, we risk passing that weakness to the very people we serve.

Students will increasingly use AI for research, writing, and even tutoring. If the adults designing their digital classrooms accept machine answers without question, we send the message that surface-level synthesis is enough. We would be teaching efficiency at the cost of depth.

By contrast, if we model careful reasoning and thoughtful use of AI, we can help the next generation see these tools for what they are: accelerators of understanding, not replacements for it. AI can help us scale accessibility, personalize instruction, and analyze learning data in ways that were impossible before. But its highest value appears only when it meets human curiosity and judgment.

Building a culture of shared judgment

This is not just an individual challenge. Teams need to build rituals that honor slow thinking in a fast AI environment. One practice is rotating the role of “critical friend” in meetings: one person’s task is to challenge the group’s AI-assisted conclusions and ask what could go wrong. This simple habit trains everyone to keep their reasoning sharp.

Next time you lean on AI for a key piece of work, pause before you accept the answer. Write down two decisions in that task that only a human can make. It might be about context, ethics, or simple gut judgment. Then share those reflections with your team. Over time this will create a culture where AI supports wisdom rather than diluting it.

The real promise of AI is not that it will think for us, but that it will free us to think at a higher level.

The danger is that we may forget to climb.

The future of education and the integrity of our own work depend on remaining climbers. Let the machines speed the climb, but never let them choose the summit.
