A year and a half has passed since AI came into our lives, and I would like to say my life is relatively the same, but it is not. And this is despite teaching mainly process-driven lessons–having students practice creativity, patience, revision, and deep thinking–which potentially makes AI more of a helper than a replacement tool. Despite all that, I’m kind of exhausted. I can’t help but think that if AI had a sense of humor, it might enjoy dunking on my meager biological bandwidth.
A year ago, back when AI was all new-ish, I wrote about it here and here, joined a panel on it, and attended a day of workshops on it. I wasn’t alone. Everyone was AI news/opinion scrolling and talking about it with their colleagues or interested education outsiders. What was going to happen? What needed to happen? Idealistically, I thought that the classroom really did not need to change. That is, if we were focusing on process-driven learning.
But here’s the thing: you don’t get into teaching without a strong sense of fairness–whether it’s fairness in ensuring that all students have the same chances at a solid education, or fairness in making sure students are working to the best of their abilities (and not putting one over on the teacher [aka: cheating]). It’s very hard to find any teacher without a strong fairness radar. I suspect it’s part of whatever drives people to join the profession, much like long-distance runners naturally love, among other things, the calm that comes with sustained effort.
But this fairness trait, I think, is what has made the introduction of AI so exhausting.
When teachers talk about AI, often the issue of the calculator comes up. If a tool is used so much that it is a natural part of life, why learn without it?
I was one of those students who didn’t understand why I couldn’t use such a tool. And you know, I bet most math teachers thought the same when they were students: teenagers and children are beings of expedience, constantly trying to make things efficient and not seeing the long-term goals of slow and sustained thinking.
But there are good arguments for allowing early calculator use. Compared with other tools, a calculator makes sense. We don’t need to learn how to make the paper we write on or understand how a car works in order to drive it. We don’t have to learn how to build a fire or sew our own clothes. There is an argument for letting skills die once they become impractical. We don’t really live in a world where we need to know our multiplication tables. In fact, a lot of people probably pull out their phones at restaurants to calculate tips.
And here is where I’m leaving the calculator argument behind (not my forte!) and switching from calculations to writing, which is what AI is (supposedly) becoming better and better at doing. When I make that switch, it becomes a bit clearer why it’s so hard to see that the benefit of writing is not just the end product–even though we constantly use tools not for their ends but for their process. Most of these take the form of hobbies like knitting or playing music. Or we might take up some mode of exercise in order to get more physically fit, only to realize that a strong byproduct is a better state of mind, which carries all sorts of other benefits like better sleep and happier moods.
If school is anything, it is an enforced time stopper. School demands slowing down in order to see more and, therefore, internalize more. Being forced to slow down is a singular experience, and we rarely come across it at other points in our lives, unless we are sufficiently motivated.
But what do you do with humans who don’t see the value of slowing down, even when you tell them what that value is? And it’s not just biology that works against teenagers. So many variables in a student’s life can push them to choose efficiency: workload, lack of self-esteem or of trust in the system, worries about social lives (which are far more heightened for a teenager), etc. But this is a major life lesson that school hopefully imparts to its students–that in the face of life, we must do hard things. Teachers know this, and it’s what unites the building.
And whether we ELA teachers run a process-driven classroom or maybe the most idealistically awesome way of teaching ever known, we are now constantly asking ourselves these questions: If I make such and such an assignment, will students use AI to complete it? How will I know? Do students rely on Grammarly too much as a crutch? Should I allow that? Should I disallow works-cited helpers? (MLA formatting is super easy anyway.) Should I have them write a first draft in a notebook? Or will they just copy their work from an AI into their notebook? Should I demand that all work be completed in class?
Exhausting.
Furthermore, we have all this smartphone hubbub going on with Jonathan Haidt’s new book. (I suspect, from reading various criticisms, that the argument that social media causes depression is well-meaning but based on problematic research. Smartphones are most assuredly a distraction for teenagers in the classroom, but do they actually worsen mental health? Still, shoutout to Haidt’s open attitude toward his critics. It is a model for us all.)
There is a lot of fear going around, and fear and education can be a toxic mix.
Idealistically, the answer to our problems is simple. None of the technologies we popularly use are inherently bad. Everyone should learn how to use tools ethically and responsibly. Teachers are tool users and tool teachers. And I think it is our duty as educators to confront the use of the tools of our field in order to train the brain to focus, organize, create, etc. It is what we already do with our profession’s most important tools: reading and writing, both of which can be misused for terrible ends.
But where does that leave the well-meaning teacher who wants to ensure their students don’t find compelling and easy opportunities to use AI to cheat? And how should we change the way we teach because of it?
I think the answer to that lies in what students get out of education. If students find something compelling and worthwhile, they will want to learn. In fact, AI has reminded me, very strongly, that teaching students to write is about making them care about what they say. When you get that far, it’s almost a self-fulfilling prophecy. Seems easy, but it is very difficult. And that’s not the only problem. Once you get the buy-in, students still need guidance to push against the limits of their creativity. If only they’d realize the prose they write is much like their evolving identities. It takes time to patch all of that together amongst all the mistakes–that hat that just totally didn’t work out, that joke you realized wasn’t a good idea to tell, that friend who is turning out not to be a positive influence.
So it’s tempting to trick out our classrooms with gamified curriculums or to shift the topics we study toward surface-level entertainment, like writing analyses of TikToks or whatever pseudo-entertainment gets disguised as real work. We would do anything for student buy-in. (Though I should point out that smart minds have successfully analyzed seemingly depthless pop culture in very deep ways–David Foster Wallace comes to mind.)
We want to do whatever is in our power to make education more than just a rite of passage. And I think that is a real problem we have in education right now–students just doing what they can to get through it. Not all of them, mind you, but you’d be surprised how many good students can feel this way. With that mindset, that’s where the dangers of AI come into play. And that’s why we teachers are having a rough go of it right now. It’s totally not fun to hold suspicion of students as a default mindset.
I suspect the real problem lies in the system. Just as AI use is a bandaid over a vastly wrong interpretation of education, the actions meant to counteract AI use are equally insufficient to solve the underlying problem. These sorts of bandaids are everywhere, and they are the real problems in education. They most likely don’t come about through maliciousness but because education stakeholders are trying to solve systemic problems with the least amount of friction. The thing is, educational problems are not solvable without massive amounts of friction–that and more resources than we currently have to spend. Education, especially public education, is complex and needs complex fixes.
So, AI–it’s here. It’s going to help and not help. (Though right now, I’m not sure about the “help” part, judging from my own experiments with AI.) But I think what we in the education world need to focus on is not taking things away or implementing idealistic changes that only bandaid a true problem. What we need is to rethink the system, and that’s a truly difficult thing to accomplish.