"Can you really build a life when you don’t know what is real and what is fake?"
Listening to Students Talk About AI
Over the past month or so, I’ve spent a lot of time (even more than usual) listening to students talk about AI.
Several weeks ago, I listened to a panel of college students as they discussed their hopes and worries about this technology, as part of the AAC&U’s Institute on AI, Pedagogy, and Curriculum. In some ways, it was a continuation of the conversations I have all the time. After all, one of the panelists is a student of mine.
In other ways, though, listening to the panel reminded me just how diverse students’ voices are. Some students used AI regularly and were fully behind it as the future of work, while others refused to use the technology because of its biases and the danger of cognitive offloading. It was fascinating and instructive to see those students bounce ideas off of each other.
Then, a couple of days ago, an article encouraged me to go even deeper in thinking about the student perspective.
Mary Ruskell, a high school senior in North Carolina, published an article on CNN titled “What this teenager wants you to know about the damaging effects of AI.”
It pushed me to think about how I so often discuss AI Literacy.
How AI Literacy Leads to Nihilism
Ruskell begins by describing her experiences with being fooled online.
She points out that this is a nearly universal experience for her and her friends. Initially, she thought she could identify fake images relatively easily. She’d see extra fingers and immediately know that the image was AI-generated.
No problem.
But that has since changed. She’s no longer confident that she can tell the difference between real and fake images. She laments, “As generative AI improves, however, completely fake content is getting harder for me to spot.”
Ruskell’s point is nothing new. I constantly talk to my students about the importance of approaching any internet content with a healthy degree of skepticism. I even think of this as the hallmark of “AI Literacy.” Being AI literate means, among other things, recognizing the ways that AI-generated content can fool us.
But here’s my blind spot.
My approach to this skepticism is logical. Very logical. When I think of being AI literate, I think of fact-checking. I think of matching online stories against those printed by other, trusted sources. I think of “lateral reading,” the process of opening more and more tabs on my web browser as I verify information.
Ruskell’s article asks me to think about this from a more emotional angle.
It’s a good reminder. Images and videos aren’t just pieces of online content that we need to fact-check and verify. They are forms of communication. They are ways of building human connection.
My students rely on these images and videos to emotionally connect to each other.
So, what happens when we apply the “be skeptical of everything” mantra of AI Literacy to something that’s so pivotal for human connection and emotional bonding?
The answer, Ruskell suggests, could be a form of nihilism: “One of my teachers thinks that members of Generation Z (born between 1997 and 2012) are nihilists, whether they know it or not. My friends and I struggle with these questions, and I think my teacher may be right.”
What Happens When Mistrust Becomes Our Default?
This form of nihilism eats at the very core of human purpose.
I want to hover over those words a bit. In a world where Generative AI can produce convincing content that feels human-created, “Can you really build a life when you don’t know what is real and what is fake, when you can never trust what you see, what you learn, or how the world works?”
Her words strike a chord with me.
In the classroom, it’s so tempting to lay out the guidelines of AI Literacy with a sense of detachment. I often feel like I’m outlining a scientific method for them, so that they can suss out truth even when it’s becoming harder and harder to do so.
But to better understand where AI is leading us — and whether or not we like the destination — we need to think about the emotional toll.
We need to think about the worldview that emerges from a process that could make mistrust our default approach to life.
What This Means For the Classroom
Our students need us. And we need them.
Now more than ever, it’s important that we talk to our students about what’s happening around us. We need to listen.
We need to listen not only because their voices are valuable in and of themselves (which is, of course, true), but because listening to them is the only way we can be there for them.
And our students are dealing with some heavy stuff.
The quote is heartbreaking but gives me a little hope.
I'm struck by a nuance in the language: I don't believe that students are nihilistic (believing in nothing). I think they are afraid that nothing matters...
We (everybody, not just teachers) have to continue to fight for a future where truth and authenticity matter and have value.
Powerful. Generative AI could cognitively and emotionally arrest an entire generation, setting us up for a future with no leaders, creators, or problem solvers.