Stealth AI
What happens when students use AI all the time, but aren't allowed to talk about it?
Last week, I talked to Zach Kinzler. He’s great. He’s sincere. He’s optimistic.
He’s also interviewed hundreds of college students over the past year. He has asked them important questions about how they feel about AI, how they are working it into their everyday lives, and what they think schools should do about it.
I recently sat down with him (virtually, of course) and tried to learn as much as I could about what students are actually saying about AI.
This is what I learned.
P.S. The full interview is included at the end of this article.
Lesson #1: Fear Is Powerful
It seems obvious. I know.
But the prevalence of fear still surprises me.
Zach interviewed many college students who were using AI to learn every day. But about 30% of them asked Zach to turn off the camera, just in case their professor got wind of the interview.
Can we hover over that for a second?
Students who regularly use a program to learn material (whether they are using it responsibly or not) feel unsafe mentioning it to their professors. They use AI in the shadows of the night to complete assignments and learn material. Then, they shut up and never mention it.
In many ways, this comes back to one of my general rules: You cannot ban AI in the classroom. You can only issue a gag rule.
And if you do issue a gag rule, you deprive students of the space they often need to make heads or tails of this technology.
As Zach puts it, “they [students] need help.” And we’re not giving it to them.
Now, that’s just one manifestation of fear. Professors are scared too. Specifically, professors are scared of losing their position as “knowledge masters.”
I get it, I really do. When I was a graduate student and a new teacher, I felt like I had to maintain decorum and an aura of knowledge at all times. As a beginner, I felt like I needed to plan everything out from beginning to end.
It wasn’t until I was a more experienced teacher that I realized that experimenting in the classroom is necessary.
Professors need to feel safe enough to experiment and to admit that they are learning along the way. Students need to feel safe enough to talk about this technology, so that they have a space for thinking critically about how to use it.
Without that, it’s hard to see how we move forward.
Lesson #2: Humility Is Going To Be Key
If there’s one thing that I learned from interviewing Zach, it’s that conversation changes things.
All I did was put a little pressure on a pain point, and teachers across an entire university started thinking about it.
Open conversation catalyzes change, though (admittedly) not as quickly as many of us would like.
Now, don’t get me wrong. I think (and so does Zach) that we need to be intentional about pushing for actual change and moving past conversation and discussion. Open dialogue is necessary, but not sufficient, for change to actually happen.
We need to listen to actual students talking about how they actually use AI and how they actually feel about it.
No more abstraction.
The kind of open conversation Zach is talking about takes humility, empathy, and collaboration. It means admitting what we don’t know and being willing to learn by connecting with others.
Because let’s face it…
This is an all-hands-on-deck situation.
We’re trying to figure out the future of education in a world filled with AI programs, AI agents, and other emerging technologies.
We’re going to need to bring as many people into the conversation as we can.
And right now, colleges aren’t giving students a seat at the table.
That’s a problem.
The Full Interview
The following interview is part of the EdUp AI Podcast. Click here for more episodes from the current season.
Episode 1: The Educator’s AI Dilemma
Episode 2: Why Don’t Colleges Change?
Episode 3: Why We Should Pay Attention to Pi
Episode 4: What If Professors Could Clone Themselves?
Episode 5: Let’s Talk About Student Startups
Episode 6: How Can AI Help Teach Public Speaking?
Episode 7: What Are Students Saying About AI?