Is AI Making Us Dumber?

In February 2023, I attended a conference for academic support teachers, and one of the workshops addressed ChatGPT. As an elementary school teacher, I hadn't had it on my radar at all. The stance that the workshop leaders took wasn't exactly "if you can't beat them, join them," but it wasn't far off. If kids are going to be exploring ChatGPT anyway, they reasoned, we teachers need to make it our job to learn about it and any benefits it might have in the classroom.

I didn’t give ChatGPT another thought until last year’s preplanning, when one of our admin gave a ChatGPT demonstration by having it write her presentation. Were there some gaffes? Yes, but it did a decent job of covering her topic.

That was my only taste of ChatGPT until one day a few months later when I was completely burnt out and needed to write a lesson plan for my 3rd graders. One of my colleagues said, "Have ChatGPT write it for you." So I thought, why not? I pulled it up and asked it to write my lesson plan. At the beginning of ChatGPT's lesson plan, it gave instructions for whichever spelling pattern I was teaching (I wish I'd saved it—I can't remember what pattern it was now), and while the structure of the lesson plan was fine, the explanation of the spelling pattern was incorrect. In ChatGPT's defense, it is difficult to give printed instructions for a lesson that is dependent on sounds and articulation. I can imagine it being just as difficult to read a speech therapist's lesson plan. Even so, the fundamental principles were just wrong. It concerned me that other teachers who are lost and looking to ChatGPT to help them might assume that it's correct and teach it verbatim. You may think it doesn't matter if kids don't learn how to spell (after all, they can just have AI write it for them—yeesh), but what if it writes an incorrect chemistry or algebra lesson?

One thing I will give ChatGPT is that it asks for feedback, and I did not hold back. I told it that I couldn’t use the lesson plan because there were errors, and it asked me what those errors were because, as AI, it has the ability to learn. I told it what was wrong, but again, if I didn’t know what I was talking about, I could fill ChatGPT with all kinds of nonsense that it would internalize and use in the next lesson plan that some unsuspecting teacher asks for. In that way, it reminds me of Wikipedia—which, by the way, my school teaches students not to use as a trusted source. Sure, you may find facts there, but it’s also been known to have bogus information, such as that Sinbad died in 2007 (he’s still alive and well 17 years later).

ChatGPT isn't the only AI out there. It seems like there's something new every day. I see commercials for Grammarly constantly. Do we really need AI to help us with our emails? (Okay, I'll admit, some people need to get help from somewhere. Apparently, it's too much to ask people to proofread a two-liner before hitting send.) Even WordPress is trying to get me to use AI to "improve" this blog. I'm sorry, but if I reach a smaller audience because I'm not using AI, at least that audience is reading my words.

And AI doesn’t just write and edit for you—it can also take a candid photo and make it look like a professional headshot. While this is a nice alternative to having to spend big bucks on a photographer, it’s also a hop, skip, and a jump away from fudging reality. If AI can make a snapshot of me at Disney World look like a professional headshot, couldn’t it also make it look like I’m best friends with J.K. Rowling? Or like I spent two weeks at a fancy resort that I’ve never actually visited? If seeing is believing… what if we can’t believe what we see anymore?

Thinking I’m a dinosaur who needs to get with the times, I asked my 16-year-old what he thinks about ChatGPT. To my surprise, I know more about it than he does. The extent of his knowledge is that it’s AI, therefore he has no interest in it. I asked him why—after all, he’s my dyslexic kiddo who has legitimate access to all the assistive technology he could ever want. What Peter said is that AI is allowing people to get dumber because they don’t have to think. There you have it from a high schooler, folks.

And as if the universe was giving me extra incentive to tackle this topic, I read this the other day: “Calculating machines could provide swift answers to complex sums, but what happened when the human mind atrophied and forgot how to calculate?” (Sisterhood of Dune by Brian Herbert and Kevin J. Anderson—yep, this lecture brought to you by a sci-fi geek).

I am all for assistive technology. After all, I'm the same person who made a vision board about it in grad school.

Just as I have students with dyscalculia (a math disorder) who are allowed to use calculators on math tests, there are assistive technologies that help people with just about any learning disability you can imagine. The more research that comes out about different learners, the more we’re able to differentiate and allow people to learn according to how they are wired. But before assistive technology can be used, the people using it need to know why they’re using these tools and how to use them properly. Putting a calculator into a child’s hands does no good if she doesn’t know which functions to use or the order of operations. Only once she understands the basic principles of math can she use the calculator to free up some of her working memory so she can think through problems and solve them correctly. In other words, we still have to teach people how to think.

Before writing this post, I did go back to ChatGPT to have it write a lesson plan on r-controlled vowels. The activities that it outlined were okay, but it lumped ar, er, ir, or, and ur into one lesson without any explicit instruction about the different sounds or how to differentiate between er, ir, and ur, which all sound the same. I'm sure I care more about this than most because I'm a specialist, but that's the point: I'm the specialist, not ChatGPT. The next time I'm feeling overwhelmed, I'll just take a breather and remember that, even on my worst days, I'm a better teacher than AI.

Here’s the thing: generative AI should only be used to supplement what we already know. It should not be the only source we turn to for anything, and when it’s used at all, it should be with extreme caution and—dare I say?—skepticism. In a time when it’s so easy to let our minds atrophy in front of screens, AI gives us another excuse to let our thinking “muscles” go slack. It’s such an issue that, when submitting a piece of writing for publication, I have to check a box saying it’s my own creation and that no part of it was written by artificial intelligence. Plagiarism, while still an issue, is no longer the main way that people claim works that aren’t their own.

I’ll leave you with this:

I love creating teaching materials or having brainwaves that make me lose myself in a piece of writing for long stretches of time. Don’t let AI steal what you love to do and turn it into a cheap imitation of your original, hard work.


For worksheets, activities, reading passages, lesson plans, and more (that I created), please check out my Teachers Pay Teachers store, Mrs C loves to read: https://www.teacherspayteachers.com/store/mrs-c-loves-to-read