You’re using ChatGPT? A true story about why AI literacy starts with us

I was recently called into a Grade 10 Math class to cover for a teacher who had to step out for an emergency. Math isn’t my strongest subject, and when a student asked me to help solve an equation involving angles, I hesitated. Rather than guessing, I pulled out my phone and said, “Let’s ask ChatGPT.”

The room fell silent. Students exchanged surprised glances, and I heard one whisper, almost in disbelief, "She's using ChatGPT." It wasn't just curiosity in their voices; it was a kind of quiet shock. They didn't expect a teacher to use the same tool they'd been experimenting with themselves. Later, I learned that many students were already using a math-solving app called Gauth AI, but discreetly. Seeing an adult use AI out loud and without shame broke some kind of invisible rule.

That moment opened my eyes. Our students are already using AI. The question is: Are we helping them use it safely, ethically, and effectively, or are we leaving them to figure it out on their own?

Why AI literacy matters now

In today's classrooms, AI is no longer a future issue; it's here. Research shows that nearly 50 percent of K-12 students already use tools like ChatGPT weekly. Meanwhile, 62 percent of employers are seeking AI skills in new hires, and over 78 percent of organizations report using AI technologies in 2024, up from 55 percent the year before.

Despite these numbers, families often feel left out of the conversation. Many parents aren’t sure what generative AI even is. Many schools haven’t clearly communicated guidelines for AI use in classrooms. Students, caught in the middle, are learning about AI in silence, experimenting without guidance, absorbing misinformation, or internalizing the idea that AI is something to be hidden.

That’s why AI literacy is essential–not just for students, but for families and schools as well.

Teaching AI literacy: SEE it, model it, practice it

AI literacy isn’t about coding or programming; it’s about understanding how to use AI safely, ethically, and effectively. That’s where the SEE Framework comes in:

Safely: Understand privacy concerns and avoid unsafe tools or prompts.

Ethically: Know when AI use is appropriate and how to avoid misuse (like plagiarism or cheating).

Effectively: Use AI to enhance learning, not replace it–whether for brainstorming, exploring questions, or reinforcing concepts.

When students internalize these values, AI becomes a tool for empowerment, not a shortcut for evasion.

To reinforce SEE principles at home and school, consider the following steps:

1. Start the conversation: Ask students what they already know or do with AI. Don’t start with warnings–start with curiosity.

2. Model transparency: Demonstrate how to ask the right questions, check for accuracy, and reflect on results.

3. Set shared boundaries: Clarify when and how AI can be used for schoolwork. Emphasize AI as a support tool, not a replacement.

4. Encourage co-learning: Parents, teachers, and students can learn together by exploring tools and discussing their uses.

But what if they misuse it? Balancing trust and caution

One of the most common concerns teachers and parents express is: What if students misuse AI? It's a valid question, but the truth is, we can't guarantee perfect use.

The risks of avoiding AI far outweigh the risks of introducing it. Avoidance allows misuse to happen in silence.

Just as we teach responsible use of the internet and social media, we must teach responsible use of AI. This includes:

Verifying AI responses with trusted sources

Asking students to explain how AI supported their thinking

Designing assignments that prioritize reflection and originality

Yes, students might misuse it. But they’ll also learn from it–if we give them the chance.

The accuracy question: Can we trust AI’s answers?

Another critical issue is accuracy. AI tools, including ChatGPT, can sometimes provide wrong or misleading answers–known as ‘hallucinations.’

This makes critical thinking more important than ever. Students should be taught to question AI’s output:

Does this make sense?

Can I find this fact somewhere else?

What’s the source behind this answer?

Instead of fearing AI's flaws, we can use them as teachable moments. That's not just AI literacy; it's life literacy.

AI literacy is human literacy

Ultimately, teaching students how to use AI responsibly is not just about the technology. It’s about fostering curiosity, judgment, integrity, and communication–skills they’ll need no matter what tools the future holds.

So when a student whispers, “She’s using ChatGPT,” it should no longer be a moment of surprise–it should be a sign that we’re finally having the right conversations.

If we’re honest, collaborative, and clear about what AI can (and can’t) do, we can help students move from secrecy to self-awareness, and from passive users to responsible thinkers.
